
MlpPolicy is the only working policy for my custom environment


I made a custom environment that trains successfully with both DQN and A2C using their respective MlpPolicy, but CnnPolicy fails.

from stable_baselines import DQN
from stable_baselines.bench import Monitor
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines.deepq.policies import MlpPolicy, CnnPolicy

env = FXTradingEnvironment(spot_rates, client_amounts, client_actions)
env = Monitor(env, filename=f'{log_dir}/FXTrading.log', allow_early_resets=True)
# The algorithms require a vectorized environment to run
env = DummyVecEnv([lambda: env])

model_dqn_mlp = DQN(MlpPolicy, env, verbose=0, tensorboard_log="./tmp/dqn_FX_tensorboard/")
model_dqn_cnn = DQN(CnnPolicy, env, verbose=0, tensorboard_log="./tmp/dqn_FX_tensorboard/")

And I get the following error:

model_dqn_cnn = DQN(CnnPolicy, env, verbose=0, tensorboard_log="./tmp/a2c_FX_tensorboard/")
Traceback (most recent call last):

  File "<ipython-input-69-cc2f20ccec35>", line 1, in <module>
    model_dqn_cnn = DQN(CnnPolicy, env, verbose=0, tensorboard_log="./tmp/a2c_FX_tensorboard/")

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\deepq\dqn.py", line 105, in __init__
    self.setup_model()

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\deepq\dqn.py", line 143, in setup_model
    double_q=self.double_q

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\deepq\build_graph.py", line 367, in build_train
    act_f, obs_phs = build_act(q_func, ob_space, ac_space, stochastic_ph, update_eps_ph, sess)

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\deepq\build_graph.py", line 141, in build_act
    policy = q_func(sess, ob_space, ac_space, 1, 1, None)

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\deepq\policies.py", line 176, in __init__
    layer_norm=False, **_kwargs)

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\deepq\policies.py", line 106, in __init__
    extracted_features = cnn_extractor(self.processed_obs, **kwargs)

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\common\policies.py", line 24, in nature_cnn
    layer_1 = activ(conv(scaled_images, 'c1', n_filters=32, filter_size=8, stride=4, init_scale=np.sqrt(2), **kwargs))

  File "C:\ProgramData\Anaconda3\lib\site-packages\stable_baselines\a2c\utils.py", line 133, in conv
    n_input = input_tensor.get_shape()[channel_ax].value

  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 870, in __getitem__
    return self._dims[key]

IndexError: list index out of range

Am I doing something wrong? Did I miss something from the docs?
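
For context: the traceback ends inside nature_cnn, the default CNN feature extractor, whose first convolution reads the channel axis of a (batch, height, width, channels) tensor. With a flat observation vector the batched tensor only has two dimensions, so the channel-axis lookup raises IndexError. In other words, CnnPolicy only fits environments whose observation space is image-shaped; MlpPolicy is the right choice for flat vectors. A minimal sketch of the difference (the shapes and dtypes here are illustrative, not taken from the issue):

import numpy as np
from gym import spaces

# CnnPolicy (nature_cnn) expects a 3-D Box observation, e.g. an 84x84 RGB
# image; batching turns it into the 4-D tensor the convolutions need:
image_obs_space = spaces.Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)

# A flat vector like this one works with MlpPolicy only -- after batching it
# is rank 2, and nature_cnn's channel-axis lookup fails with IndexError:
flat_obs_space = spaces.Box(low=-np.inf, high=np.inf, shape=(10,), dtype=np.float32)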

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 7

Top GitHub Comments

2 reactions
MentalGear commented, Feb 7, 2020

Actually, I found the cause of the "You environment must inherit from gym.Env class cf " error: I had called check_env after wrapping the env (in Monitor). check_env must be called right after the env is initialized, before any other wrappers are applied.

Maybe the error message could be clarified a bit so future users can identify the issue more quickly, e.g. by returning: "Your environment must inherit directly from gym.Env class. Note: wrappers must be added after checking."
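
For anyone hitting the same message, here is a sketch of the call order described above, reusing the environment from the question (check_env lives in stable_baselines.common.env_checker as of stable-baselines 2.9; the wrapper arguments are copied from the original snippet):

from stable_baselines.common.env_checker import check_env

env = FXTradingEnvironment(spot_rates, client_amounts, client_actions)
check_env(env)  # validate the bare env first, before any wrappers
env = Monitor(env, filename=f'{log_dir}/FXTrading.log', allow_early_resets=True)
env = DummyVecEnv([lambda: env])  # wrapping and vectorizing come after the check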

0 reactions
araffin commented, Feb 5, 2020

> Would you mind telling us how and if you have fixed this bug?

Your environment must derive from the gym.Env class; this is not a bug. Here, the environment is implemented in C++, so this is a corner case.
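
For reference, the check behind that message only requires the object passed in to be a gym.Env subclass with declared spaces and reset/step methods. A minimal, hypothetical skeleton (old-style gym API, as used by stable-baselines 2.x):

import gym
import numpy as np
from gym import spaces

class MinimalEnv(gym.Env):
    # Smallest contract check_env relies on: declared spaces plus reset/step.
    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

    def reset(self):
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(4, dtype=np.float32)
        return obs, 0.0, True, {}  # obs, reward, done, info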


Top Results From Across the Web

Custom Policy Network - Stable Baselines3 - Read the Docs
Stable Baselines3 provides policy networks for images (CnnPolicies), other type of input features (MlpPolicies) and multiple different inputs ( ...

Custom Environments in OpenAI's Gym | Towards Data Science
Beginner's guide on how to set up, verify, and use a custom environment in reinforcement learning training with Python.

Reinforcement Learning in Python with Stable Baselines 3
Once you find some algorithm that seems to be maybe working and learning something, ... Later I will cover how you can use...

I am trying to implement PPO from stable baselines3 for my ...
env.close() is dependent on the environment, so it will do different things for each one. It basically is used to stop rendering the...

Understanding custom policies in stable-baselines3 - Reddit
One of the FE is ignored. But this is unlikely because: · Both of the feature extractor is used, so in your case...
