TwoStepGame.py doesn't work for contrib/MADDPG [rllib]
Reproduction
Running https://github.com/ray-project/ray/blob/master/rllib/examples/twostep_game.py with `--run contrib/MADDPG` gives this error:
```
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
    Trainer.__init__(self, config, env, logger_creator)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 398, in __init__
    Trainable.__init__(self, config, logger_creator)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/tune/trainable.py", line 96, in __init__
    self._setup(copy.deepcopy(self.config))
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 523, in _setup
    self._init(self.config, self.env_creator)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 109, in _init
    self.config["num_workers"])
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 568, in _make_workers
    logdir=self.logdir)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/worker_set.py", line 64, in __init__
    RolloutWorker, env_creator, policy, 0, self._local_config)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/worker_set.py", line 220, in _make_worker
    _fake_sampler=config.get("_fake_sampler", False))
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 350, in __init__
    self._build_policy_map(policy_dict, policy_config)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 766, in _build_policy_map
    policy_map[name] = cls(obs_space, act_space, merged_conf)
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/contrib/maddpg/maddpg_policy.py", line 158, in __init__
    scope="actor"))
  File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/contrib/maddpg/maddpg_policy.py", line 368, in _build_actor_network
    sampler = tfp.distributions.RelaxedOneHotCategorical(
AttributeError: 'NoneType' object has no attribute 'distributions'
```
Is MADDPG not supposed to work for discrete observation spaces? I’ve also tried it on my own environment from #6884 (which uses a Tuple observation space) and it complains that the state space isn’t valid.
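The final `AttributeError` indicates that the name `tfp` was `None` when the MADDPG policy tried to build its sampler, which is the signature of a missing optional dependency: RLlib imports tensorflow-probability through an optional-import helper that returns `None` instead of raising when the package is absent. A minimal sketch of that failure mode (the `try_import_tfp` helper below is a stand-in, not RLlib's actual code):

```python
def try_import_tfp():
    """Return tensorflow_probability if importable, else None.

    Mimics an optional-import helper: the error is deferred until the
    first attribute access on the None "module", far from the import.
    """
    try:
        import tensorflow_probability as tfp
        return tfp
    except ImportError:
        return None


tfp = try_import_tfp()  # None when tensorflow-probability is not installed

if tfp is None:
    try:
        # The same attribute access MADDPG makes in _build_actor_network:
        tfp.distributions.RelaxedOneHotCategorical
    except AttributeError as exc:
        print(exc)  # 'NoneType' object has no attribute 'distributions'
```

This explains why the traceback points at `maddpg_policy.py` rather than at an import: the broken state is created silently at import time and only surfaces when the policy is constructed.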
Issue Analytics
- Created: 4 years ago
- Comments: 7 (3 by maintainers)
Top GitHub Comments
No. The MADDPG implementation here is kinda cursed.
On Mon, Jun 1, 2020 at 9:31 PM Tuya notifications@github.com wrote:
Thank you for your time,
Justin Terry
Hey, I’m the maintainer of the MADDPG example code and I’ve used the implementation a fair bit. My first guess would be to try installing tensorflow-probability==0.7.0 and see if that fixes your error.
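Following that suggestion, a script could also fail fast with an actionable message instead of hitting the late `AttributeError` deep inside policy building. A sketch of such a guard (`require_tfp` is a hypothetical helper, and the `==0.7.0` pin is taken from the comment above, not verified here):

```python
def require_tfp():
    """Import tensorflow_probability up front, or raise a clear error.

    Hypothetical helper: unlike an optional-import that returns None,
    this raises immediately with an install hint, so the failure points
    at the real cause rather than at MADDPG's sampler construction.
    """
    try:
        import tensorflow_probability as tfp
        return tfp
    except ImportError as exc:
        raise ImportError(
            "contrib/MADDPG needs tensorflow-probability for its "
            "RelaxedOneHotCategorical sampler. Try: "
            "pip install tensorflow-probability==0.7.0"
        ) from exc
```

Calling `require_tfp()` once at startup, before building the trainer, would surface the missing dependency as a single readable error instead of the long trainer-construction traceback shown in the issue.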