Cannot reproduce the benchmark results of DQN on Breakout
I used the command below to train DQN on the Breakout environment:
python3 -m baselines.run --alg=deepq --env=BreakoutNoFrameskip-v4 --num_timesteps=10000000
At the end of training I only get a mean reward of 14-15 over the last 100 episodes. How can I reproduce the benchmark results?
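(For context only, not from the original issue: the "100 episode reward mean" that Baselines prints is the running average of the most recent 100 episode returns, which a minimal sketch like the following could track.)

from collections import deque
import numpy as np

# Minimal illustrative sketch of a running 100-episode reward mean --
# not the Baselines logging code itself.
recent_returns = deque(maxlen=100)   # keeps only the last 100 episode returns

def record_episode(episode_return):
    """Add one finished episode's return and report the current 100-episode mean."""
    recent_returns.append(episode_return)
    return float(np.mean(recent_returns))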
Issue Analytics
- Created: 5 years ago
- Comments: 9
Top Results From Across the Web

Cannot reproduce Breakout benchmark using Double DQN
I haven't been able to reproduce the results of the Breakout benchmark with Double DQN when using hyperparameter values similar to the ones...

Need some help with my Double DQN implementation which ...
I'm trying to replicate the Mnih et al. 2015/Double DQN results on Atari Breakout but the per-episode rewards (where one episode is a...

DQN — Stable Baselines3 1.7.0a5 documentation
The complete learning curves are available in the associated PR #110. How to replicate the results? Clone the rl-zoo repo: git clone https://github ...

Atari score vs reward in rllib DQN implementation
While the average score of 2 is not much at all relative to the benchmarks for Breakout, 5M steps may not be large...

DQN in Pytorch Stream 2 of N - YouTube
In part two of my DQN series we will focus on optimizations. I will put the model and training onto my GPU, we...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@bywbilly Did you use the exploration schedule I had earlier? I probably should have made it clear: the exploration schedule is shown in the lower-left plot in my figure above. Here it is in my actual code:
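(The commenter's actual schedule code is not reproduced in this excerpt. As an illustration only, a piecewise exploration schedule in Baselines can be built with baselines.common.schedules.PiecewiseSchedule; the breakpoints below are placeholder values, not the ones from the commenter's figure.)

from baselines.common.schedules import PiecewiseSchedule

# Placeholder breakpoints for illustration only -- not the commenter's values.
# PiecewiseSchedule linearly interpolates epsilon between (timestep, value) pairs.
exploration = PiecewiseSchedule(
    endpoints=[
        (0, 1.0),           # act fully at random at the start
        (int(1e6), 0.1),    # anneal epsilon to 0.1 over the first 1M steps
        (int(2e7), 0.01),   # then decay slowly toward 0.01
    ],
    outside_value=0.01,     # epsilon used after the last breakpoint
)

eps_at_5m = exploration.value(int(5e6))  # epsilon to use at timestep 5,000,000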
@bywbilly
To get DQN to work you need to adjust several hyperparameters away from the Baselines defaults (see the example command after this comment).
I got Breakout to work several times for different random seeds, all within the past week from master. Here is one example of a training curve I have with a code base I'm testing with (the top-left panel is probably what you want: the past-100-episode reward):
Off the top of my head:
edit: this is PDD-DQN, just to be clear. I ran for 2.5e7 steps.
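(The commenter's specific hyperparameter list is not quoted above. As a rough sketch only: if I remember correctly, baselines.run forwards any unrecognized --key=value flags to deepq.learn as keyword arguments, so non-default settings can be passed on the command line. The values below are placeholders, not the commenter's settings.)

python3 -m baselines.run --alg=deepq --env=BreakoutNoFrameskip-v4 --num_timesteps=2.5e7 --dueling=True --prioritized_replay=True --exploration_final_eps=0.01 --lr=1e-4

Whether these particular values reproduce the benchmark is exactly what the unquoted hyperparameter list would answer; the point here is only the override mechanism.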