
Hi, can you please specify the Ray version under which the rllib example code runs? I am currently getting this error with Ray 1.11.0:

Traceback (most recent call last):
  File "/Users/grigoriiguz/projects/speech_acts/meltingpot/examples/rllib/self_play_train.py", line 116, in <module>
    main()
  File "/Users/grigoriiguz/projects/speech_acts/meltingpot/examples/rllib/self_play_train.py", line 109, in main
    trainer = get_trainer_class(agent_algorithm)(env="meltingpot", config=config)
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 746, in __init__
    super().__init__(config, logger_creator, remote_checkpoint_dir,
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/tune/trainable.py", line 124, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 822, in setup
    self.workers = self._make_workers(
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 1995, in _make_workers
    return WorkerSet(
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 101, in __init__
    remote_spaces = ray.get(self.remote_workers(
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/worker.py", line 1765, in get
    raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=26916, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x168f2cb80>)
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 636, in __init__
    self.async_env: BaseEnv = convert_to_base_env(
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/env/base_env.py", line 732, in convert_to_base_env
    return env.to_base_env(
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/env/multi_agent_env.py", line 311, in to_base_env
    env = MultiAgentEnvWrapper(
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/env/multi_agent_env.py", line 481, in __init__
    self._agent_ids = self._unwrapped_env.get_agent_ids()
  File "/Users/grigoriiguz/miniconda3/envs/rlenv/lib/python3.9/site-packages/ray/rllib/env/multi_agent_env.py", line 210, in get_agent_ids
    if not isinstance(self._agent_ids, set):
AttributeError: 'MeltingPotEnv' object has no attribute '_agent_ids'

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 10 (3 by maintainers)

Top GitHub Comments

1 reaction
Muff2n commented, Apr 15, 2022

Thank you for the heads up. It is a problem with ray==1.12.0; taking a look at rllib, it seems to have some bugs in it.

While I can set the new-style _agent_ids with:

    self._agent_ids = set(
        PLAYER_STR_FORMAT.format(index=index)
        for index in range(self._num_players)
    )
    super().__init__()
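For context, a minimal self-contained sketch of how that fix might sit in the environment's constructor. The PLAYER_STR_FORMAT value and the class shape here are assumptions for illustration; the real MeltingPotEnv wraps a dm_env substrate and subclasses ray's MultiAgentEnv, which is not imported here:

```python
# Assumed format string; meltingpot's actual constant may differ.
PLAYER_STR_FORMAT = "player_{index}"

class SketchMeltingPotEnv:
    """Stand-in for the real env, showing only the _agent_ids fix."""

    def __init__(self, num_players: int):
        self._num_players = num_players
        # The fix from the snippet above: populate the new-style
        # _agent_ids set before ray's MultiAgentEnv machinery reads it.
        self._agent_ids = set(
            PLAYER_STR_FORMAT.format(index=index)
            for index in range(self._num_players)
        )

env = SketchMeltingPotEnv(num_players=3)
print(sorted(env._agent_ids))  # → ['player_0', 'player_1', 'player_2']
```

With _agent_ids populated up front, get_agent_ids() no longer hits the AttributeError from the traceback above.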

There are other issues.

For example, in ray/rllib/utils/pre_checks/env.py, lines 333-341:

def _check_reward(reward, base_env=False, agent_ids=None):
    if base_env:
        for _, multi_agent_dict in reward.items():
            for agent_id, rew in multi_agent_dict.items():
                if not (
                    np.isreal(rew) and not isinstance(rew, bool) and np.isscalar(rew)
                ):
                    error = (
                        "Your step function must return rewards that are"
                        f" integer or float. reward: {rew}. Instead it was a "
                        f"{type(reward)}"
                    )
                    raise ValueError(error)

Here they test that the rewards returned are real, not boolean, and scalar. That check trips for me even though meltingpot is passing float rewards. The error message says that floats are acceptable (which I think they should be), and it quotes the type of the whole reward dictionary rather than the type of the individual reward being tested.
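One plausible way a "float" reward can still trip that condition (an assumption about what meltingpot hands back, not confirmed from the source): if the reward arrives as a 0-d numpy array rather than a Python or numpy scalar, np.isscalar returns False even though np.isreal is True:

```python
import numpy as np

def reward_ok(rew):
    # The exact condition from the rllib pre-check quoted above.
    return np.isreal(rew) and not isinstance(rew, bool) and np.isscalar(rew)

print(bool(reward_ok(1.5)))              # plain Python float: True
print(bool(reward_ok(np.float64(1.5))))  # numpy scalar: True
print(bool(reward_ok(np.array(1.5))))    # 0-d numpy array: False (np.isscalar fails)
```

So an env that returns rewards straight out of a numpy computation could fail the check while still being "a float" for all practical purposes.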

Therefore I think it best to stick with ray==1.11.0 for now. I can look more closely and submit a PR next week.
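Until such a PR lands, the pin suggested above is a one-liner (a config fragment, assuming a pip-managed environment):

```shell
# Pin Ray to the last version known to work with the meltingpot rllib example.
pip install "ray==1.11.0"
```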

1 reaction
Muff2n commented, Apr 11, 2022

This is added as part of PR 25 (if you are happy to have one PR address two issues). Though that only gets the code to run; it does not address the 'specify the Ray version' part.
