
[rllib] Error when using SAC only (ValueError: 'Cannot apply NormalizeActionActionWrapper to env of type {}, which does not subclass gym.Env.')

See original GitHub issue

Hello,

When I am trying to create a trainer with SAC (a SACTrainer object), I get the following error:

Traceback (most recent call last):
  File "./run_simulation.py", line 326, in <module>
    "log_level": "ERROR",
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
    Trainer.__init__(self, config, env, logger_creator)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 448, in __init__
    super().__init__(config, logger_creator)
  File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 174, in __init__
    self._setup(copy.deepcopy(self.config))
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 591, in _setup
    self._init(self.config, self.env_creator)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer_template.py", line 117, in _init
    self.config["num_workers"])
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
    logdir=self.logdir)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
    RolloutWorker, env_creator, policy, 0, self._local_config)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/evaluation/worker_set.py", line 279, in _make_worker
    extra_python_environs=extra_python_environs)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 303, in __init__
    self.env = _validate_env(env_creator(env_context))
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 567, in <lambda>
    self.env_creator = lambda env_config: normalize(inner(env_config))
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 564, in normalize
    "type {}, which does not subclass gym.Env.", type(env))
ValueError: ('Cannot apply NormalizeActionActionWrapper to env of type {}, which does not subclass gym.Env.', <class 'fisherman_rllib_randzinit.FishermanEnv'>)
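For reference, the failing check at the bottom of the traceback amounts to roughly the following. This is a paraphrased sketch reconstructed from the frames above, not the exact RLlib source: when "normalize_actions" is enabled in the trainer config, RLlib wraps the env creator and requires every created env to subclass gym.Env before applying its action-normalizing wrapper.

import gym

# Paraphrased sketch of the check in ray/rllib/agents/trainer.py (Ray 0.8.5),
# reconstructed from the traceback above -- not the exact source.
def normalize(env):
    if not isinstance(env, gym.Env):
        raise ValueError(
            "Cannot apply NormalizeActionActionWrapper to env of "
            "type {}, which does not subclass gym.Env.", type(env))
    # Otherwise RLlib returns the env wrapped in its action-normalizing wrapper.
    return env

Since FishermanEnv is a custom multi-agent env that does not subclass gym.Env, the isinstance check fails and the ValueError above is raised.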

The portion of the code where I create the trainer is as follows:

trainer = SACTrainer(
    env=env_title,
    config={
        "num_workers": 1,
        #"num_gpus": 2,
        "model": nw_model,
        "multiagent": {
            "policy_graphs": policy_graphs,
            "policy_mapping_fn": policy_mapping_fn,
            "policies_to_train": ["ppo_policy{}".format(i) for i in range(n_agents)],
        },
        "callbacks": {
            "on_episode_start": tune.function(on_episode_start),
            "on_episode_step": tune.function(on_episode_step),
            "on_episode_end": tune.function(on_episode_end),
        },
        "log_level": "ERROR",
    })

Moreover, when I use another trainer (namely PPO or A3C) with the same settings (i.e. when I just change "SACTrainer" to "A3CTrainer"), I get no errors. I also didn't get any errors when I tried the same code in a Colab notebook. Any ideas on why that may be the case?
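One plausible reason for the SAC-only failure (an assumption, not something confirmed in the thread): SAC's default config enables "normalize_actions", while PPO and A3C leave it off, so only SAC's env creator runs the gym.Env check shown in the traceback. A quick way to check this against the installed Ray version:

from ray.rllib.agents.sac import DEFAULT_CONFIG as SAC_CONFIG
from ray.rllib.agents.ppo import DEFAULT_CONFIG as PPO_CONFIG
from ray.rllib.agents.a3c import DEFAULT_CONFIG as A3C_CONFIG

# Expected on Ray 0.8.5 (assumption): True for SAC, False or absent for PPO/A3C.
print("SAC:", SAC_CONFIG.get("normalize_actions"))
print("PPO:", PPO_CONFIG.get("normalize_actions"))
print("A3C:", A3C_CONFIG.get("normalize_actions"))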

Ray version and other system information (Python version, TensorFlow version, OS):

  • Ray version: 0.8.5
  • TensorFlow version: 2.2.0
  • Python version: 3.7.7
  • OS: Ubuntu 16.04

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:9 (2 by maintainers)

Top GitHub Comments

1 reaction
ericl commented, May 20, 2020

Can you try setting "normalize_actions": False?
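A minimal sketch of where this setting would go, reusing the config from the question (assumption: the rest of the config stays exactly as posted above):

trainer = SACTrainer(
    env=env_title,
    config={
        "normalize_actions": False,  # skip the gym.Env-only action-normalizing wrapper
        "num_workers": 1,
        "model": nw_model,
        # ... multiagent / callbacks / log_level entries as in the original config ...
    })

With this flag off, RLlib should no longer try to wrap FishermanEnv in the normalization wrapper, so the gym.Env subclass check is never reached.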

0 reactions
stale[bot] commented, Nov 26, 2020

Hi again! This issue will be closed because there has been no further activity in the 14 days since the last message.

Please feel free to reopen or open a new issue if you’d still like it to be addressed.

Again, you can always ask for help on our discussion forum or Ray's public Slack channel.

Thanks again for opening the issue!

Read more comments on GitHub >

Top Results From Across the Web

Source code for ray.rllib.evaluation.rollout_worker
Useful in case a RolloutWorker is run as @ray.remote (Actor) and the owner would like to make sure the worker has been properly...
Read more >
Getting Started with RLlib — Ray 2.2.0 - the Ray documentation
In this guide, we will first walk you through running your first experiments with the RLlib CLI, and then discuss our Python API...
Read more >
Env precheck inconsistent with Trainer - RLlib - Ray
When I attempt to run this with rllib 1.12.1, I get: ValueError: The observation collected from env.reset was not contained within your ...
Read more >
Algorithms — Ray 2.2.0 - the Ray documentation
In offline RL, the algorithm has no access to an environment, but can only sample from a fixed dataset of pre-collected state-action-reward tuples....
Read more >
ray.rllib.env.base_env — Ray 2.1.0 - the Ray documentation
All other RLlib supported env types can be converted to BaseEnv. RLlib handles these conversions internally in RolloutWorker, for example: gym.Env => rllib....
Read more >
