[rllib] DDPG opensim env issue
See original GitHub issue
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Ray installed from (source or binary): source
- Ray version: 0.5.0
- Python version: 3.6.1
- Exact command to reproduce:
Describe the problem
Executing DDPG in the Pendulum-v0 environment works without any problem.
However, when I execute DDPG in the Prosthetics environment, an error occurs during gradient calculation.
Do I have to add some additional code to the custom environment to solve this problem?
Source code / logs
import ray
from ray.tune.registry import register_env
from ray.rllib.agents import ddpg
from osim.env import ProstheticsEnv

def env_creator(env_config):
    # visualize=False
    return ProstheticsEnv(False)

register_env("prosthetics", env_creator)
ray.init(redis_address="<head-ip:port>")
agent = ddpg.DDPGAgent(env="prosthetics", config={"num_workers": 140, "gpu": True})

for i in range(1000):
    result = agent.train()
    print(result)
    if i % 10 == 0:
        agent.save()
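Since the question is whether the custom environment needs extra code, it can help to rule out environment-side problems before involving RLlib at all. The following standalone sanity check is not from the original issue; it assumes ProstheticsEnv follows the usual Gym-style reset/step/action_space interface that osim-rl advertises.

# Hypothetical sanity check (not part of the original report): step the custom
# environment locally with random actions to confirm it behaves like a Gym env
# before handing it to RLlib workers.
from osim.env import ProstheticsEnv

env = ProstheticsEnv(False)  # visualize=False
obs = env.reset()
print("observation type:", type(obs), "length:", len(obs))
print("action space:", env.action_space)

for _ in range(5):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    print("reward:", reward, "done:", done)
    if done:
        obs = env.reset()

If this loop fails or returns unexpected observation types on a particular machine, the problem is likely in the environment setup rather than in RLlib.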
Issue Analytics
- State:
- Created 5 years ago
- Comments: 6 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
This is odd, you shouldn’t be getting MultiAgentBatch types there. Could you print the types (or even better, the full contents) of each `s` on this line? https://github.com/ray-project/ray/blob/d01dc9e22d5e8625ae6ac49e2e689eebf472b5f8/python/ray/rllib/evaluation/sample_batch.py#L224

I ran it on my local desktop and it runs. I think one of my VMs has an environment problem or something like that. I'll find out and let you know if it turns out to be a Ray issue. Please close this issue.
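For reference, the kind of debug print being requested might look like the sketch below. This is hypothetical: the actual variable names and loop at sample_batch.py#L224 may differ, and the sketch only assumes that line iterates over a list of collected batches.

# Hypothetical sketch of the requested debug print, placed just before the
# referenced line. Assumes the surrounding code loops over collected batches,
# binding each one to `s`.
for s in samples:
    print(type(s))   # expect SampleBatch; a MultiAgentBatch here would explain the error
    print(s)         # or dump the full contents for closer inspection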