
Error running example "self_play_train.py"


I’m getting the following error when I try to run the given example code:

(marl) ➜  rllib git:(main) ✗ python self_play_train.py 
2021-12-16 20:46:53,589	INFO services.py:1338 -- View the Ray dashboard at http://127.0.0.1:8265
2021-12-16 20:46:54,490	INFO trainer.py:722 -- Your framework setting is 'tf', meaning you are using static-graph mode. Set framework='tf2' to enable eager execution with tf2.x. You may also want to then set `eager_tracing=True` in order to reach similar execution speed as with static-graph mode.
2021-12-16 20:46:54,491	WARNING ppo.py:143 -- `train_batch_size` (200) cannot be achieved with your other settings (num_workers=1 num_envs_per_worker=1 rollout_fragment_length=30)! Auto-adjusting `rollout_fragment_length` to 200.
2021-12-16 20:46:54,491	INFO ppo.py:166 -- In multi-agent mode, policies will be optimized sequentially by the multi-GPU optimizer. Consider setting simple_optimizer=True if this doesn't work for you.
Traceback (most recent call last):
  File "/home/kinal/Desktop/marl/meltingpot/examples/rllib/self_play_train.py", line 95, in <module>
    main()
  File "/home/kinal/Desktop/marl/meltingpot/examples/rllib/self_play_train.py", line 88, in main
    trainer = get_trainer_class(agent_algorithm)(env="meltingpot", config=config)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/agents/trainer_template.py", line 102, in __init__
    Trainer.__init__(self, config, env, logger_creator,
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 661, in __init__
    super().__init__(config, logger_creator, remote_checkpoint_dir,
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/tune/trainable.py", line 121, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/agents/trainer_template.py", line 113, in setup
    super().setup(config)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 764, in setup
    self._init(self.config, self.env_creator)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/agents/trainer_template.py", line 136, in _init
    self.workers = self._make_workers(
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 1727, in _make_workers
    return WorkerSet(
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 87, in __init__
    remote_spaces = ray.get(self.remote_workers(
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/worker.py", line 1715, in get
    raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=69363, ip=10.2.40.108)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 587, in __init__
    self._build_policy_map(
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1543, in _build_policy_map
    preprocessor = ModelCatalog.get_preprocessor_for_space(
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/catalog.py", line 703, in get_preprocessor_for_space
    prep = cls(observation_space, options)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/preprocessors.py", line 40, in __init__
    self.shape = self._init_shape(obs_space, self._options)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/preprocessors.py", line 265, in _init_shape
    preprocessor = preprocessor_class(space, self._options)
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/preprocessors.py", line 43, in __init__
    self._obs_for_type_matching = self._obs_space.sample()
  File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/gym/spaces/box.py", line 132, in sample
    sample[bounded] = self.np_random.uniform(
  File "mtrand.pyx", line 1130, in numpy.random.mtrand.RandomState.uniform
OverflowError: Range exceeds valid bounds
(RolloutWorker pid=69363) 2021-12-16 20:47:04,439	INFO rollout_worker.py:1705 -- Validating sub-env at vector index=0 ... (ok)
(RolloutWorker pid=69363) 2021-12-16 20:47:04,463	DEBUG rollout_worker.py:1534 -- Creating policy for av
(RolloutWorker pid=69363) 2021-12-16 20:47:04,470	DEBUG preprocessors.py:262 -- Creating sub-preprocessor for Box([-2147483648 -2147483648 -2147483648 -2147483648 -2147483648 -2147483648
(RolloutWorker pid=69363)  -2147483648 -2147483648 -2147483648 -2147483648 -2147483648 -2147483648
(RolloutWorker pid=69363)  -2147483648 -2147483648 -2147483648 -2147483648], [2147483647 2147483647 2147483647 2147483647 2147483647 2147483647
(RolloutWorker pid=69363)  2147483647 2147483647 2147483647 2147483647 2147483647 2147483647
(RolloutWorker pid=69363)  2147483647 2147483647 2147483647 2147483647], (16,), int32)
(RolloutWorker pid=69363) 2021-12-16 20:47:04,471	DEBUG preprocessors.py:262 -- Creating sub-preprocessor for Box([-2147483648 -2147483648 -2147483648 -2147483648 -2147483648 -2147483648
(RolloutWorker pid=69363)  -2147483648 -2147483648 -2147483648 -2147483648 -2147483648 -2147483648
(RolloutWorker pid=69363)  -2147483648 -2147483648 -2147483648 -2147483648], [2147483647 2147483647 2147483647 2147483647 2147483647 2147483647
(RolloutWorker pid=69363)  2147483647 2147483647 2147483647 2147483647 2147483647 2147483647
(RolloutWorker pid=69363)  2147483647 2147483647 2147483647 2147483647], (16,), int32)
(RolloutWorker pid=69363) 2021-12-16 20:47:04,471	DEBUG preprocessors.py:262 -- Creating sub-preprocessor for Box(-1.7976931348623157e+308, 1.7976931348623157e+308, (), float64)
(RolloutWorker pid=69363) /home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/gym/spaces/box.py:132: RuntimeWarning: overflow encountered in subtract
(RolloutWorker pid=69363)   sample[bounded] = self.np_random.uniform(
(RolloutWorker pid=69363) 2021-12-16 20:47:04,472	ERROR worker.py:431 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=69363, ip=10.2.40.108)
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 587, in __init__
(RolloutWorker pid=69363)     self._build_policy_map(
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1543, in _build_policy_map
(RolloutWorker pid=69363)     preprocessor = ModelCatalog.get_preprocessor_for_space(
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/catalog.py", line 703, in get_preprocessor_for_space
(RolloutWorker pid=69363)     prep = cls(observation_space, options)
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/preprocessors.py", line 40, in __init__
(RolloutWorker pid=69363)     self.shape = self._init_shape(obs_space, self._options)
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/preprocessors.py", line 265, in _init_shape
(RolloutWorker pid=69363)     preprocessor = preprocessor_class(space, self._options)
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/ray/rllib/models/preprocessors.py", line 43, in __init__
(RolloutWorker pid=69363)     self._obs_for_type_matching = self._obs_space.sample()
(RolloutWorker pid=69363)   File "/home/kinal/miniconda3/envs/marl/lib/python3.9/site-packages/gym/spaces/box.py", line 132, in sample
(RolloutWorker pid=69363)     sample[bounded] = self.np_random.uniform(
(RolloutWorker pid=69363)   File "mtrand.pyx", line 1130, in numpy.random.mtrand.RandomState.uniform
(RolloutWorker pid=69363) OverflowError: Range exceeds valid bounds

Environment details:

gym                       0.21.0                   pypi_0    pypi
dm-meltingpot                1.0.1     /home/kinal/Desktop/marl/meltingpot
numpy                        1.21.4
ray                       1.9.0                    pypi_0    pypi
tensorflow                2.7.0                    pypi_0    pypi

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments:7 (1 by maintainers)

Top GitHub Comments

1 reaction
jagapiou commented, Jan 11, 2022

Heya, thanks for raising these issues.

Regarding your first point on spaces.Box bounds, it looks like the fix is to pass in (-np.inf, np.inf) as the bounds for floating-point numbers. Commit 5b36a2a18da4f95f38be2428d27006cf3f852b8e should fix this.
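A minimal sketch of that bound selection (the helper name is hypothetical, not taken from the linked commit): give floating dtypes infinite bounds, so gym’s Box.sample() falls back to drawing unbounded dimensions from a normal distribution instead of computing high - low, which overflows for the full float64 range.

```python
import numpy as np

def safe_bounds(dtype):
    """Pick Box bounds that won't overflow Box.sample().

    Floating dtypes get (-inf, inf): gym then samples unbounded
    dimensions from a normal distribution rather than computing
    high - low. Integer dtypes keep their exact finite range.
    """
    if np.issubdtype(dtype, np.floating):
        return -np.inf, np.inf
    info = np.iinfo(dtype)
    return info.min, info.max
```

As the note below points out, this makes the sampled values legal rather than realistic: sampling N(0, 1) still produces observations the environment would never emit.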

Note that these specs aren’t precise. For example, READY_TO_SHOOT is actually either 0.0 or 1.0, so sampling from N(0, 1) will produce a lot of values that won’t be seen during actual behavior.

I’ll leave this open so your second issue can be dealt with.

1 reaction
kinalmehta commented, Dec 17, 2021

UPDATE:

I tried narrowing down the issue and found that when converting the dm_env environment to a gym environment, the observation dict has issues for the following keys.

Env name: “allelopathic_harvest”

Keys with issue:

  • COLOR_ID
  • MOST_TASTY_BERRY_ID
  • READY_TO_SHOOT

All of these have the space Box(-1.7976931348623157e+308, 1.7976931348623157e+308, (), float64)

So basically this issue exists only for dtype np.float64, as numpy raises an overflow error when uniformly sampling over the full min-to-max range of np.float64.
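The root cause is reproducible with numpy alone, independent of gym or the environment:

```python
import numpy as np

lo = np.finfo(np.float64).min  # -1.7976931348623157e+308
hi = np.finfo(np.float64).max  # +1.7976931348623157e+308

# hi - lo exceeds the largest representable float64, so the legacy
# RandomState sampler rejects the range:
try:
    np.random.RandomState(0).uniform(lo, hi)
    overflowed = False
except OverflowError:  # "Range exceeds valid bounds"
    overflowed = True
```

Dividing the bounds by 10 keeps hi - lo finite, which is why the workaround below suppresses the error.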

I solved it by updating line 62 in ./examples/rllib/multiagent_wrapper.py as below:

return spaces.Box(info.min/10, info.max/10, spec.shape, spec.dtype)

Not sure if this is the right thing to do and whether it affects the environment in any way.

After solving this, I ran into another error stating:

ValueError: No default configuration for obs shape [88, 88, 3], you must specify `conv_filters` manually as a model option. Default configurations are only available for inputs of shape [42, 42, K] and [84, 84, K]. You may alternatively want to use a custom model or preprocessor.

This seems to be because rllib has no built-in model configuration for the observation space of the example environment. Writing a custom model for the state space, or updating the configuration to accommodate a shape of (88, 88, 3), would solve this.
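For reference, one way to resolve that second error is to supply a conv_filters stack sized for 88×88 inputs in the model config. The filter sizes below are illustrative (my assumption, not from the thread); RLlib’s vision network applies ‘same’ padding to every layer except the last, which uses ‘valid’ padding and must therefore match the remaining feature-map size exactly.

```python
import math

# Illustrative conv stack for 88x88x3 observations, in RLlib's
# [out_channels, [kernel_h, kernel_w], stride] format:
conv_filters = [
    [16, [8, 8], 4],     # 88 -> ceil(88/4) = 22 ('same' padding)
    [32, [4, 4], 2],     # 22 -> ceil(22/2) = 11 ('same' padding)
    [256, [11, 11], 1],  # 11 -> 1 ('valid' padding, kernel covers the map)
]

config = {}  # stands in for the trainer config dict used in the example
config["model"] = {"conv_filters": conv_filters}
```

The key constraint is that after the strided ‘same’-padded layers, the final kernel equals the remaining spatial size, so the network output flattens to a vector.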
