[question] make_atari_env() is not used for Atari environments

Question

Why does the zoo call the standard make_vec_env() for all environments, including Atari, when SB3 has a dedicated function for them, make_atari_env()?
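
For context, in SB3 make_atari_env() is essentially make_vec_env() with the Atari preprocessing wrapper applied to each sub-environment; a minimal sketch of the equivalence, assuming a recent SB3 version:

from stable_baselines3.common.env_util import make_atari_env, make_vec_env
from stable_baselines3.common.atari_wrappers import AtariWrapper

# Both calls build a vectorized Breakout env; make_atari_env is (roughly)
# make_vec_env with AtariWrapper applied to every sub-environment.
env_a = make_atari_env("BreakoutNoFrameskip-v4", n_envs=4, seed=0)
env_b = make_vec_env("BreakoutNoFrameskip-v4", n_envs=4, seed=0,
                     wrapper_class=AtariWrapper)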

Train of thought

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 12 (1 by maintainers)

Top GitHub Comments

2 reactions
rienath commented, Jul 23, 2021

@Miffyli

Sorry for the late reply. I wanted to get as stuck as possible before contacting you. Cheers for the link; both your papers are really interesting. The 2019 one was actually my main reference for a scholarship I have received, so it is quite a coincidence that we met here! And thank you for co-writing it! 😂

Here is a link to my code: https://github.com/rienath/stable-baselines3/tree/gym-to-retro

What I have done

Indeed, Gym Retro makes it possible to get audio and supports Atari games. However, unlike conventional Gym, it does not have the different variants like v4, v0, ram, NoFrameskip, and so on. That is why I have added make_retro_env(), which parses the usual Gym-style environment names and wraps the environment appropriately to make it work. I promise I will make a proper RetroWrapper and make it neat when I am finished!
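
A rough, hypothetical sketch of the kind of name parsing this implies (the actual implementation lives in the linked branch; the helper below is illustrative only):

import re

# Hypothetical helper: split a Gym-style Atari id into its parts so that
# the matching Retro game and wrappers can be chosen. Illustrative only.
def parse_gym_atari_id(env_id):
    m = re.fullmatch(r"([A-Za-z]+?)(Deterministic|NoFrameskip)?-(v\d+)", env_id)
    if m is None:
        raise ValueError(f"Unrecognized env id: {env_id}")
    game, variant, version = m.groups()
    return game, variant, version

parse_gym_atari_id('BreakoutDeterministic-v4')  # ('Breakout', 'Deterministic', 'v4')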

I have not used two of the wrappers that exist for standard Gym Atari:

  • NoopResetEnv, as it consistently gave worse results on DQN and PPO with Breakout.
  • EpisodicLifeEnv, because I have not found a way to get information on lives from Retro (see the sketch after this list for what the standard wrapper relies on).
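
For reference, this is the standard Gym Atari usage of the skipped life-based wrapper; internally it reads the life counter through env.unwrapped.ale.lives(), an ALE-specific call that Retro environments do not provide:

import gym
from stable_baselines3.common.atari_wrappers import EpisodicLifeEnv

# EpisodicLifeEnv ends the episode whenever a life is lost; it queries
# env.unwrapped.ale.lives(), which only exists for ALE-based Atari envs.
env = gym.make("BreakoutNoFrameskip-v4")
env = EpisodicLifeEnv(env)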

I have managed to achieve a pretty good reward/step ratio with the video-only Retro, but not as good as with the default Gym, probably because Retro's action space is significantly larger than Gym's: it explicitly exposes all the buttons and some of their combinations as actions, instead of the small set of ‘NOOP’, ‘FIRE’, etc. that normal Gym has.
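
One common way to close that gap, modeled on the Discretizer example that ships with Gym Retro, is to map a short list of button combinations onto a small Discrete action space. The combos below are illustrative, and button names depend on the console:

import gym
import numpy as np

# Hypothetical Discretizer, modeled on Gym Retro's bundled example: it maps
# a handful of button combos onto a small Discrete action space.
class Discretizer(gym.ActionWrapper):
    def __init__(self, env, combos):
        super().__init__(env)
        buttons = env.unwrapped.buttons  # console-specific button names
        self._actions = []
        for combo in combos:
            arr = np.array([False] * env.action_space.n)
            for button in combo:
                arr[buttons.index(button)] = True
            self._actions.append(arr)
        self.action_space = gym.spaces.Discrete(len(self._actions))

    def action(self, act):
        return self._actions[act].copy()

# e.g. env = Discretizer(retro.make('Breakout-Atari2600'),
#                        combos=[[], ['BUTTON'], ['LEFT'], ['RIGHT']])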

Questions

  • Afaik, standard v0 and v4 environments have a frameskip of 2-4: a random number in that range is chosen at every step. The wrapper that tries to achieve this for Gym Atari is MaxAndSkipEnv, but it only accepts a single value (see the sketch after this list). Also, doesn’t standard Gym already take care of frameskip and repeat_action_probability? If it does, why are we using MaxAndSkipEnv? And if it does not, don’t we turn BreakoutNoFrameskip-v0 into BreakoutDeterministic-v4 by using a frameskip of 4 and no repeat_action_probability variable?

  • Getting the audio out of Gym Retro is trivial: em.get_audio(). I will apply an FFT before adding it to the observations, though (especially after reading your new paper). However, I am not exactly sure where I should be modifying the observation space as you suggested, nor where to actually add the audio to the observations (see the audio sketch after this list).
  • Why do we make another action after firing in FireResetEnv?
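
For reference, here is how MaxAndSkipEnv is typically applied; it repeats the chosen action a fixed number of times and max-pools the last two frames, so it indeed only supports a single skip value:

import gym
from stable_baselines3.common.atari_wrappers import MaxAndSkipEnv

# Fixed frameskip: repeat the action `skip` times and return the pixel-wise
# max of the last two frames (to hide sprite flicker).
env = gym.make("BreakoutNoFrameskip-v4")
env = MaxAndSkipEnv(env, skip=4)

And a minimal sketch of the audio idea, assuming env is a Retro environment and that em.get_audio() returns the samples for the last emulated frame (the stereo shape is an assumption):

import numpy as np

audio = env.em.get_audio()            # raw samples for the last frame
mono = audio.mean(axis=1)             # collapse stereo to mono (assumed 2-channel)
spectrum = np.abs(np.fft.rfft(mono))  # magnitude spectrum via a real FFT
# `spectrum` could be normalized and concatenated onto the observation,
# which also means widening the env's observation_space to match.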

How to run

If you want to run my code, just insert this into Colab:

!pip install -q gym
!pip install -q gym-retro
# Fetch the gym-to-retro branch of the fork
!git clone -b gym-to-retro https://github.com/rienath/stable-baselines3.git
# Download the Atari ROMs and import them for both retro and atari_py
!wget http://www.atarimania.com/roms/Roms.rar && unrar x Roms.rar && unzip ROMS.zip
!python -m retro.import ROMS/
!python -m atari_py.import_roms ROMS/
# Shadow any installed stable_baselines3 with the forked package
!mv stable-baselines3/stable_baselines3 stable_baselines3
# Virtual display and rendering dependencies for Colab
!apt install xvfb -y
!pip install pyvirtualdisplay
!pip install pyglet

from pyvirtualdisplay import Display

# Start a virtual display so environments can render on the headless Colab VM
display = Display(visible=0, size=(1400, 900))
display.start()

import stable_baselines3
from stable_baselines3.ppo import PPO
from stable_baselines3.common.env_util import make_retro_env  # added in the fork

env = make_retro_env('BreakoutDeterministic-v4', seed=0)
model = PPO("CnnPolicy", env, verbose=0, tensorboard_log="./PPO/")

%load_ext tensorboard
%tensorboard --logdir ./PPO/

and in a separate cell:

model.learn(total_timesteps=1000000)
env.close()
0 reactions
Miffyli commented, Aug 6, 2021

> NoopResetEnv wrapper, which can easily be added, but gives worse results when I use it, and I do not understand why, theoretically, we would need to have no actions for n steps at the beginning of an episode anyway.

This is to add stochasticity to Atari environments. By default they are deterministic, so the agent could learn to do very simple “repeat this sequence of actions” type of things. By starting the environment in slightly different initial states, the agent has to learn to be a touch more dynamic 😃
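
Concretely, the wrapper just executes a random number of no-op actions on every reset; typical usage looks like:

import gym
from stable_baselines3.common.atari_wrappers import NoopResetEnv

# On each reset, between 1 and noop_max NOOPs are executed before the agent
# acts, so episodes start from slightly different emulator states.
env = gym.make("BreakoutNoFrameskip-v4")
env = NoopResetEnv(env, noop_max=30)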

> Indeed, thanks for the link. I have looked deeper into this and made the environment that you described. The game I made has a 1D map.

Nice! This is exactly the type of “simple test” that you should create to sanity-check new algorithms/implementations 😃. Good thing it works!

> This is exactly what I am trying to find out! We have talked quite a lot here and I would have been stuck a long time ago without you! Thank you again, and could I contact you more privately?

No problem, happy to help 😃. I am most active on Discord: my tag is [my-username-in-github]#0001. Feel free to add me there!

> This is my code for the 1D environment in case anyone ever needs it.

Bit of a nitpick, but I hope you have a personal repository where you store all of this code 😃. It would also help others (and demonstrate your productivity to future employers) if you put all of this work into a single, public GitHub repo.
