[rllib] Error when using SAC only (ValueError: 'Cannot apply NormalizeActionActionWrapper to env of type {}, which does not subclass gym.Env.')
Hello,
When I am trying to create a trainer with SAC (an SACTrainer object), I get the following error:
Traceback (most recent call last):
  File "./run_simulation.py", line 326, in <module>
    "log_level": "ERROR",
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
    Trainer.__init__(self, config, env, logger_creator)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 448, in __init__
    super().__init__(config, logger_creator)
  File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 174, in __init__
    self._setup(copy.deepcopy(self.config))
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 591, in _setup
    self._init(self.config, self.env_creator)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer_template.py", line 117, in _init
    self.config["num_workers"])
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
    logdir=self.logdir)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
    RolloutWorker, env_creator, policy, 0, self._local_config)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/evaluation/worker_set.py", line 279, in _make_worker
    extra_python_environs=extra_python_environs)
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 303, in __init__
    self.env = _validate_env(env_creator(env_context))
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 567, in <lambda>
    self.env_creator = lambda env_config: normalize(inner(env_config))
  File "/usr/local/lib/python3.7/dist-packages/ray/rllib/agents/trainer.py", line 564, in normalize
    "type {}, which does not subclass gym.Env.", type(env))
ValueError: ('Cannot apply NormalizeActionActionWrapper to env of type {}, which does not subclass gym.Env.', <class 'fisherman_rllib_randzinit.FishermanEnv'>)
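The last two frames point at the cause: the Trainer wraps the env creator as normalize(inner(env_config)), and that action-normalizing wrapper only works on environments that subclass gym.Env. SAC appears to enable this wrapping by default via the "normalize_actions" config key, while PPO and A3C do not, which would explain why only SACTrainer fails on the custom FishermanEnv. Below is an untested sketch for checking the defaults in Ray 0.8.5; the DEFAULT_CONFIG imports are standard RLlib exports, but the exact default values noted in the comments are assumptions, not verified.

# Untested sketch: inspect the default "normalize_actions" flag of each trainer.
# If SAC defaults to True and PPO defaults to False (assumed here), only SAC will
# try to wrap a non-gym.Env environment and raise the ValueError above.
from ray.rllib.agents.sac import DEFAULT_CONFIG as SAC_CONFIG
from ray.rllib.agents.ppo import DEFAULT_CONFIG as PPO_CONFIG

print("SAC:", SAC_CONFIG.get("normalize_actions"))  # expected: True
print("PPO:", PPO_CONFIG.get("normalize_actions"))  # expected: False or missing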
The portion of the code where I create the trainer is as follows:
trainer = SACTrainer(
    env=env_title,
    config={
        "num_workers": 1,
        # "num_gpus": 2,
        "model": nw_model,
        "multiagent": {
            "policy_graphs": policy_graphs,
            "policy_mapping_fn": policy_mapping_fn,
            "policies_to_train": ["ppo_policy{}".format(i) for i in range(n_agents)],
        },
        "callbacks": {
            "on_episode_start": tune.function(on_episode_start),
            "on_episode_step": tune.function(on_episode_step),
            "on_episode_end": tune.function(on_episode_end),
        },
        "log_level": "ERROR",
    })
Moreover, when I use another trainer with the same settings (namely PPO or A3C, i.e. when I just change "SACTrainer" to "A3CTrainer"), I get no errors. I also didn't get any errors when I ran the same code in a Colab notebook. Any ideas on why that may be the case?
Ray version and other system information (Python version, TensorFlow version, OS):
- Ray version: 0.8.5
- TensorFlow version: 2.2.0
- Python version: 3.7.7
- OS: Ubuntu 16.04
Top GitHub Comments
Can you try setting "normalize_actions": False?
Hi again! The issue will be closed because there has been no more activity in the 14 days since the last message.
Please feel free to reopen or open a new issue if you’d still like it to be addressed.
Again, you can always ask for help on our discussion forum or Ray's public Slack channel.
Thanks again for opening the issue!