Pre-trained DQN (+variants) models on the deprecated Atari wrapper
Hi,
I’ve been using some of the excellent pre-trained models for DQN and its variants. However, looking at more recent algorithms (PPO, A2C, TRPO, etc.), it seems the codebase now uses a different Atari wrapper, `wrap_deepmind`, instead of the deprecated `wrap_dqn`. I inspected the frames: the newer wrapper keeps the game score in the image, while the older one cropped it out of the pixels.
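For reference, this is roughly how I’m applying the newer wrapper; a minimal sketch assuming the current `baselines.common.atari_wrappers` API, with `PongNoFrameskip-v4` just as an example env id:

```python
import numpy as np
from baselines.common.atari_wrappers import make_atari, wrap_deepmind

# The NoFrameskip variant is required; the wrappers handle frame skipping themselves.
env = make_atari("PongNoFrameskip-v4")
# DeepMind-style preprocessing: episode-life resets, reward clipping,
# 84x84 grayscale frames, and an optional 4-frame stack.
env = wrap_deepmind(env, frame_stack=True, scale=False)

obs = env.reset()           # a LazyFrames object when frame_stack=True
print(np.array(obs).shape)  # (84, 84, 4)
```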
The `deepq/experiments` directory still seems to be using the deprecated wrapper. Just to be clear:
- Were the pre-trained models trained using the deprecated wrapper?
- Do you have any idea when the pre-trained models might be updated to the current wrapper (along with the rest of the DQN training scripts)? The pre-trained models save us the resources of retraining from scratch, and standardizing the environment preprocessing as much as possible seems worthwhile given how much variability there is in deep RL.
Thanks.
Edit: as a follow-up point, I noticed here that you’re telling us to copy over LazyFrames when using the updated wrapper. However, LazyFrames appears to be deprecated. Is there anything I need to do with LazyFrames to get Atari working with the newer wrapper?
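In case it helps others, here is the pattern I’ve seen for handling it; a minimal sketch assuming the `LazyFrames` class from `baselines.common.atari_wrappers` (the `buffer`, `store`, and `to_batch` names below are just illustrative). `LazyFrames` holds references to the stacked frames and only concatenates them on demand, so the idea is to store the object as-is in the replay buffer and convert with `np.array` only when building a training batch:

```python
import numpy as np

# obs / next_obs are LazyFrames objects returned by the frame-stacked env above.
buffer = []

def store(obs, action, reward, next_obs, done):
    # Store LazyFrames directly; frames shared between consecutive
    # observations are not duplicated in memory.
    buffer.append((obs, action, reward, next_obs, done))

def to_batch(transitions):
    # Materialize to ndarrays only at training time.
    obs = np.array([np.array(t[0]) for t in transitions])       # (N, 84, 84, 4)
    next_obs = np.array([np.array(t[3]) for t in transitions])  # (N, 84, 84, 4)
    return obs, next_obs
```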
Top GitHub Comments
@LiYingTW - Any luck with the new wrapper and Atari? A new pull request for the Rainbow paper was submitted today, and it uses the old wrapper.
Hey all, any updates? Any best practices for dealing with the deprecated wrapper?