
[rllib] DDPG opensim env issue

See original GitHub issue

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): ubuntu 16.04
  • Ray installed from (source or binary): source
  • Ray version: 0.5.0
  • Python version: 3.6.1
  • Exact command to reproduce:

Describe the problem

Executing DDPG in the Pendulum-v0 environment works without any problem.

However, when I execute DDPG in the Prosthetics environment, an error occurs during gradient calculation. Do I have to add some additional code to the custom environment to solve this problem?
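As background for that question: RLlib drives single-agent environments through the Gym interface, so a custom env must expose `reset()` and `step()` that return fixed-size, consistently typed observations. The skeleton below is a hypothetical illustration of that contract — the class name, dimensions, and bounds are invented for this sketch and are not taken from osim or the issue:

```python
import numpy as np

class MinimalProstheticsLikeEnv:
    """Hypothetical Gym-style environment skeleton (illustrative only).

    Continuous-control algorithms such as DDPG additionally require a
    bounded, continuous (Box-like) action space; the limits below are
    placeholders for the real environment's actuator bounds.
    """

    def __init__(self):
        self.act_low, self.act_high = -1.0, 1.0   # placeholder action bounds
        self.obs_dim = 3                          # placeholder observation size

    def reset(self):
        # Observations must be fixed-size float arrays, not ragged lists.
        return np.zeros(self.obs_dim, dtype=np.float32)

    def step(self, action):
        # Clip the agent's action into the declared bounds.
        action = np.clip(action, self.act_low, self.act_high)
        obs = np.zeros(self.obs_dim, dtype=np.float32)
        reward, done, info = 0.0, True, {}
        return obs, reward, done, info
```

If an env deviates from this shape contract (variable-length observations, list-typed returns), RLlib can fail later in the pipeline, e.g. during batch construction or gradient computation.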

[screenshot: error traceback raised during gradient calculation]

Source code / logs

import ray
from ray.tune.registry import register_env
from ray.rllib.agents import ddpg
from osim.env import ProstheticsEnv

# RLlib calls this factory once per worker to build the environment.
def env_creator(env_config):
    return ProstheticsEnv(False)

register_env("prosthetics", env_creator)
# Connect to an existing cluster; Ray 0.5.0 takes the head address
# via the redis_address keyword.
ray.init(redis_address="<head-ip:port>")

agent = ddpg.DDPGAgent(env="prosthetics", config={"num_workers": 140, "gpu": True})

for i in range(1000):
    result = agent.train()
    print(result)
    if i % 10 == 0:
        agent.save()

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
ericl commented, Aug 14, 2018

This is odd, you shouldn’t be getting MultiAgentBatch types there. Could you print the types (or even better the full contents) of each s on this line?: https://github.com/ray-project/ray/blob/d01dc9e22d5e8625ae6ac49e2e689eebf472b5f8/python/ray/rllib/evaluation/sample_batch.py#L224
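A lightweight way to follow that suggestion is to log the type (and, for dict-like batches, the keys) of each sample before the concat step. The helper below is a hypothetical instrumentation sketch — the function and parameter names are invented here, not part of RLlib:

```python
def inspect_batches(batches):
    """Print the type of each sample batch before merging, to spot a
    stray MultiAgentBatch among plain SampleBatch / dict-like samples."""
    seen = []
    for s in batches:
        seen.append(type(s).__name__)
        # Dict-like batches expose their field names (obs, actions, ...).
        print(type(s).__name__, list(s.keys()) if hasattr(s, "keys") else s)
    return seen
```

Dropping a call like this just above the concatenation line referenced in the comment would show whether a MultiAgentBatch is sneaking into the list.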

0 reactions
whikwon commented, Aug 17, 2018

I’ve run it on my local desktop and it works. I think one of my VMs has an environment problem or something like that. I’ll investigate and let you know if it turns out to be a Ray issue. Please close this issue.


Top Results From Across the Web

RLlib Environments — Ray 0.7.4 documentation
RLlib uses Gym as its environment interface for single-agent training. For more information on how to implement a custom Gym environment, see the...
rllib · GitHub Topics
My attempt to reproduce a water down version of PBT (Population based training) for MARL (Multi-agent reinforcement learning) using DDPPO (Decentralized ...
Artificial Intelligence for Prosthetics — challenge solutions
Participants were provided with a human musculoskeletal model and a physics-based simulation environment (OpenSim [9, 42]) in which they.
RLlib - Error with Custom env - continuous action space - DDPG
Try with action space definition as follows: self.action_space = Box(0,50,shape=(1,), dtype=np.float32).
Reinforcement Learning Frameworks – An Overview
An RL environment is a system designed to be interacted with by one ... While MARL capabilities of frameworks like PyMARL, RLlib and...
