
Issue with MlpLstm policy

See original GitHub issue

Hello there,

I’ve been using different algorithms (A2C, PPO2, ACKTR, etc.) in my custom environment with an MlpPolicy, and everything went fine. Now I’d like to try other policies, MlpLstmPolicy for instance.

Once I substitute the policy, training via .learn() starts correctly (no errors are displayed), but even after waiting a long time (up to 8 hours or more) it does not yield any results.

  • I’ve checked that verbose=1
  • stable-baselines==2.10.1
  • Some time references: using PPO2 with an MlpPolicy, it took about 42 minutes for 10M timesteps.
  • I’ve also run the environment checker and everything seems to be in order.
  • OS: Fedora 32
  • n_states = 18, n_actions = 1
  • Definition of the action and observation spaces:

    self.action_space = spaces.Box(low=-1, high=1, shape=(1,), dtype=np.float32)
    self.observation_space = spaces.Box(low=-self.u_lim, high=self.u_lim, shape=(n_states,), dtype=np.float32)

where u_lim is a numerical constraint.
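
For context, a minimal sketch of an environment with the spaces described above might look as follows (the u_lim value and the reset/step bodies here are placeholders, not taken from the original code):

    # Minimal sketch of the environment described above; the u_lim value and the
    # reset/step logic are placeholders, not the original implementation.
    import gym
    import numpy as np
    from gym import spaces

    class CustomEnv(gym.Env):
        def __init__(self, n_states=18, u_lim=10.0):
            super(CustomEnv, self).__init__()
            self.u_lim = u_lim
            # one continuous action in [-1, 1]
            self.action_space = spaces.Box(low=-1, high=1, shape=(1,), dtype=np.float32)
            # n_states continuous observations bounded by +/- u_lim
            self.observation_space = spaces.Box(low=-self.u_lim, high=self.u_lim,
                                                shape=(n_states,), dtype=np.float32)

        def reset(self):
            # old gym API (as used by stable-baselines 2.x): reset returns only the observation
            return np.zeros(self.observation_space.shape, dtype=np.float32)

        def step(self, action):
            obs = np.zeros(self.observation_space.shape, dtype=np.float32)
            reward, done, info = 0.0, False, {}
            return obs, reward, done, info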

Can you help me in any way? Do you need any other information?

Thanks

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8

Top GitHub Comments

2 reactions
lorenzoschena commented, Dec 19, 2020

Yeah, in my tests I usually set nminibatches=1 to avoid this issue. However, I followed your suggestion of removing TensorBoard logging and reducing the number of steps, and now the policy works correctly!

Thank you! 👍
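
For reference, the nminibatches=1 workaround mentioned above would look roughly like this with a single environment (a minimal sketch; CustomEnv stands in for the user's environment and the other hyperparameters are illustrative, not taken from the issue):

    # Sketch of the nminibatches=1 workaround for a single (non-parallel) environment.
    from stable_baselines import PPO2
    from stable_baselines.common.policies import MlpLstmPolicy
    from stable_baselines.common.vec_env import DummyVecEnv

    env = DummyVecEnv([lambda: CustomEnv()])  # 1 env -> 1 % 1 == 0 satisfies the recurrent-policy check
    model = PPO2(MlpLstmPolicy, env, verbose=1, n_steps=512, nminibatches=1)
    model.learn(total_timesteps=100000)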

1 reaction
Miffyli commented, Dec 19, 2020

Hmm, when I ran the RL_advection.py code as-is it worked as expected (as you said it does). When I changed MlpPolicy to MlpLstmPolicy on line 62, I got the following error:

Traceback (most recent call last):
  File ".\RL_advection.py", line 115, in <module>
    main()
  File ".\RL_advection.py", line 62, in main
    model = PPO2(MlpLstmPolicy, env, verbose=1, n_cpu_tf_sess=None,n_steps = int(32000/num_cpu), nminibatches = 100, noptepochs = 5)
  File "c:\users\anssi\desktop\stable-baselines\stable_baselines\ppo2\ppo2.py", line 97, in __init__
    self.setup_model()
  File "c:\users\anssi\desktop\stable-baselines\stable_baselines\ppo2\ppo2.py", line 125, in setup_model
    assert self.n_envs % self.nminibatches == 0, "For recurrent policies, "\
AssertionError: For recurrent policies, the number of environments run in parallel should be a multiple of nminibatches.

I fixed this with nminibatches=5 and now the code works.

I think I spotted the problem: you have very large n_steps and nminibatches. The computational requirement of an LSTM increases a lot with larger n_steps (as it backpropagates through time over all n_steps), so with the current settings it indeed takes very long. Try lower numbers, e.g. n_steps=512.

Note: I ran the code on CPU and removed TensorBoard logging to make it work.
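
Putting the two suggestions together, the corrected setup might look roughly like this (a sketch only: CustomEnv, num_cpu and the total timesteps are illustrative values, not the original script):

    # Sketch of the suggested fix: n_envs is a multiple of nminibatches
    # (5 % 5 == 0, so the assertion above passes) and n_steps is kept small
    # to limit backpropagation through time.
    from stable_baselines import PPO2
    from stable_baselines.common.policies import MlpLstmPolicy
    from stable_baselines.common.vec_env import DummyVecEnv

    num_cpu = 5
    env = DummyVecEnv([lambda: CustomEnv() for _ in range(num_cpu)])  # CustomEnv: see the sketch above

    model = PPO2(MlpLstmPolicy, env, verbose=1,
                 n_steps=512, nminibatches=5, noptepochs=5)
    model.learn(total_timesteps=1000000)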

