AtariWrapper does not use recommended defaults
The current AtariWrapper has `terminal_on_life_loss` set to `True` by default. This goes against the recommendations of Revisiting the Arcade Learning Environment (Machado et al., 2018, https://arxiv.org/pdf/1709.06009.pdf). I believe this should be set to `False` by default. The paper also recommends using sticky actions instead of no-op resets, but I think that change is outside the scope of this wrapper.
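For concreteness, here is a minimal sketch of the change being requested. It assumes the keyword is spelled `terminal_on_life_loss` (its name in the stable-baselines3 source) and that `gym.make()` forwards keyword overrides to the underlying AtariEnv:

```python
import gym
from stable_baselines3.common.atari_wrappers import AtariWrapper

# Use the NoFrameskip variant so the wrapper's own frame_skip=4 is
# applied exactly once rather than compounding with the ALE's frameskip.
env = gym.make("BreakoutNoFrameskip-v4")

# Override the contested default: losing a life no longer emits a
# terminal signal, following Machado et al. (2018).
env = AtariWrapper(env, terminal_on_life_loss=False)

# Sticky actions live in the ALE itself rather than in this wrapper.
# The paper's recommended setting is repeat_action_probability=0.25.
sticky_env = gym.make("BreakoutNoFrameskip-v4", repeat_action_probability=0.25)
sticky_env = AtariWrapper(sticky_env, terminal_on_life_loss=False)
```

Disabling the wrapper's no-op resets alongside sticky actions would complete the Machado et al. protocol, but whether `noop_max=0` is accepted depends on the wrapper version, so that detail is left out here.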
Issue Analytics
- Created: 2 years ago
- Comments: 6 (3 by maintainers)
Top Results From Across the Web
- gym/atari_preprocessing.py at master · openai/gym (GitHub): "Turned off by default. Not recommended by Machado et al. ... Grayscale observation: If the observation is colour or greyscale, by default, greyscale ..."
- Stable Baselines3 Documentation (master PDF): "If you are looking for docker images with stable-baselines already installed in it, we recommend using images from RL ..."
- Reinforcement Learning: Deep Q-Learning with Atari games: "Termination signal when a life is lost: turned off by default. Not recommended by Machado et al. (2018). Resize to a square image ..."
- Supersuit Wrappers (PettingZoo Documentation): "Similarly, using SuperSuit with PettingZoo environments looks like ... The OpenAI baselines MaxAndSkip Atari wrapper is equivalent to doing memory=2 and ..."
- Pre-training with non-expert human demonstration for deep reinforcement learning: "Deep reinforcement learning (deep RL) has achieved superior performance in complex sequential tasks by using deep neural networks as ..."
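As the first and third results above note, gym's own preprocessing wrapper already ships with the Machado et al. default. A minimal sketch for comparison, assuming `gym.wrappers.AtariPreprocessing` with its documented keywords:

```python
import gym
from gym.wrappers import AtariPreprocessing

# AtariPreprocessing expects a frameskip-1 base environment.
env = gym.make("BreakoutNoFrameskip-v4")

# terminal_on_life_loss already defaults to False in gym's wrapper; it
# is written out here only to highlight the contrast with
# stable-baselines3's AtariWrapper, which defaults it to True.
env = AtariPreprocessing(env, terminal_on_life_loss=False)
```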
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
My concern, and I'm sure @JesseFarebro (the maintainer of the ALE) would agree, is that the default settings in the Gym Atari environments were never really chosen deliberately to begin with, and that people doing future work with them should use what have been the recommended practices for years. This actually caused an issue for us when working with Atari games for an ICML paper, which is why Ryan created the issue.
I just realized that I should have put this issue in the actual Stable Baselines3 repo, but I guess it's relevant here as well. I definitely understand the trade-off between adopting newer recommendations and preserving fair comparisons to previous work.