
robo-gym check env issue

See original GitHub issue

Important Note: We do not offer technical support or consulting, and we don't answer personal questions by email. Please post your question on the RL Discord, Reddit, or Stack Overflow in that case.

🤖 Custom Gym Environment

Please check your environment first using:

```python
import gym

from stable_baselines3.common.env_checker import check_env

env = gym.make('EndEffectorPositioningURSim-v0', ip=target_machine_ip, gui=True)
# It will check your custom environment and output additional warnings if needed
check_env(env)
```

### Describe the bug

I am having an issue with `check_env` on this custom environment:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/isaac/robogym_ws/test.ipynb Cell 3' in <cell line: 1>()
----> 1 check_env(env)

File ~/.local/lib/python3.8/site-packages/stable_baselines3/common/env_checker.py:291, in check_env(env, warn, skip_render_check)
    289 # The check only works with numpy arrays
    290 if _is_numpy_array_space(observation_space) and _is_numpy_array_space(action_space):
--> 291     _check_nan(env)

File ~/.local/lib/python3.8/site-packages/stable_baselines3/common/env_checker.py:93, in _check_nan(env)
     91 for _ in range(10):
     92     action = np.array([env.action_space.sample()])
---> 93     _, _, _, _ = vec_env.step(action)

File ~/.local/lib/python3.8/site-packages/stable_baselines3/common/vec_env/base_vec_env.py:162, in VecEnv.step(self, actions)
    155 """
    156 Step the environments with the given action
    157
    158 :param actions: the action
    159 :return: observation, reward, done, information
    160 """
    161 self.step_async(actions)
--> 162 return self.step_wait()

File ~/.local/lib/python3.8/site-packages/stable_baselines3/common/vec_env/vec_check_nan.py:35, in VecCheckNan.step_wait(self)
     34 def step_wait(self) -> VecEnvStepReturn:
---> 35     observations, rewards, news, infos = self.venv.step_wait()
     37     self._check_val(async_step=False, observations=observations, rewards=rewards, news=news)
     39     self._observations = observations

File ~/.local/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py:51, in DummyVecEnv.step_wait(self)
     49         obs = self.envs[env_idx].reset()
     50     self._save_obs(env_idx, obs)
---> 51 return (self._obs_from_buf(), np.copy(self.buf_rews), np.copy(self.buf_dones), deepcopy(self.buf_infos))

File /usr/lib/python3.8/copy.py:146, in deepcopy(x, memo, _nil)
    144 copier = _deepcopy_dispatch.get(cls)
    145 if copier is not None:
--> 146     y = copier(x, memo)
    147 else:
    148     if issubclass(cls, type):

File /usr/lib/python3.8/copy.py:205, in _deepcopy_list(x, memo, deepcopy)
    203 append = y.append
    204 for a in x:
--> 205     append(deepcopy(a, memo))
    206 return y

File /usr/lib/python3.8/copy.py:146, in deepcopy(x, memo, _nil)
    144 copier = _deepcopy_dispatch.get(cls)
    145 if copier is not None:
--> 146     y = copier(x, memo)
    147 else:
    148     if issubclass(cls, type):

File /usr/lib/python3.8/copy.py:230, in _deepcopy_dict(x, memo, deepcopy)
    228 memo[id(x)] = y
    229 for key, value in x.items():
--> 230     y[deepcopy(key, memo)] = deepcopy(value, memo)
    231 return y

File /usr/lib/python3.8/copy.py:161, in deepcopy(x, memo, _nil)
    159 reductor = getattr(x, "__reduce_ex__", None)
    160 if reductor is not None:
--> 161     rv = reductor(4)
    162 else:
    163     reductor = getattr(x, "__reduce__", None)

TypeError: cannot pickle 'google.protobuf.pyext._message.ScalarMapContainer' object
```

The observation space:

```
Box([ -inf  -inf  -inf  -1.1  -1.1  -1.1  -1.1  -1.1  -1.1  -inf  -inf  -inf
      -inf  -inf  -inf  -inf  -inf  -inf  -inf  -inf  -inf  -1.01 -1.01 -1.01
      -1.01 -1.01 -1.01],
    [  inf   inf   inf   1.1   1.1   1.1   1.1   1.1   1.1   inf   inf   inf
       inf   inf   inf   inf   inf   inf   inf   inf   inf   1.01  1.01  1.01
       1.01  1.01  1.01], (27,), float32)
```

The action space:

```
Box([-1. -1. -1. -1. -1.], [1. 1. 1. 1. 1.], (5,), float32)
```
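The TypeError comes from `DummyVecEnv` deep-copying the info dict returned by `step()`: per the traceback, the env apparently puts a protobuf `ScalarMapContainer` in it, which cannot be pickled. One possible workaround (a sketch, not part of robo-gym or SB3; `sanitize_info` is a hypothetical helper name) is to replace non-copyable info values before they reach the vec env:

```python
import copy


def sanitize_info(info):
    """Return a copy of an env's info dict with values that cannot be
    deep-copied (e.g. protobuf containers) replaced by their string form."""
    clean = {}
    for key, value in info.items():
        try:
            clean[key] = copy.deepcopy(value)
        except TypeError:
            # e.g. "cannot pickle 'google.protobuf...' object"
            clean[key] = str(value)
    return clean


# A generator plays the role of the non-copyable protobuf value here
info = {"target": [0.1, 0.2], "raw_state": (i for i in range(3))}
clean = sanitize_info(info)  # "raw_state" becomes a string, the rest is kept
```

Applied from a `gym.Wrapper` whose `step()` returns `sanitize_info(info)` as the last element, the `deepcopy(self.buf_infos)` call inside `DummyVecEnv` would then succeed.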

### Code example

The `env` passed to `check_env` was created as in the robo-gym example above:

```python
import gym
import robo_gym
from robo_gym.wrappers.exception_handling import ExceptionHandling

import stable_baselines3 as sb3
from stable_baselines3 import SAC, PPO
from stable_baselines3.common.env_checker import check_env

env = gym.make('EndEffectorPositioningURSim-v0', ip=target_machine_ip, gui=True)
check_env(env)
```

Please try to provide a minimal example to reproduce the bug.

I was running the example here: https://github.com/jr-robotics/robo-gym/blob/master/docs/environments.md#end-effector-positioning

For a custom environment, you need to define at least the observation space, the action space, and the reset() and step() methods (see the working example below). Error messages and stack traces are also helpful.

Please use the markdown code blocks for both code and stack traces.

```python
import gym
import numpy as np

from stable_baselines3 import A2C
from stable_baselines3.common.env_checker import check_env


class CustomEnv(gym.Env):

    def __init__(self):
        super(CustomEnv, self).__init__()
        self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(14,))
        self.action_space = gym.spaces.Box(low=-1, high=1, shape=(6,))

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        obs = self.observation_space.sample()
        reward = 1.0
        done = False
        info = {}
        return obs, reward, done, info


env = CustomEnv()
check_env(env)

model = A2C("MlpPolicy", env, verbose=1).learn(1000)
```
Traceback (most recent call last): File ...

### System Info

Describe the characteristics of your environment:

  • Describe how the library was installed (pip, docker, source, …)
  • GPU models and configuration
  • Python version
  • PyTorch version
  • Gym version
  • Versions of any other relevant libraries

You can use `sb3.get_system_info()` to print relevant package info:

```python
import stable_baselines3 as sb3

sb3.get_system_info()
```

```
OS: Linux-5.13.0-39-generic-x86_64-with-glibc2.29 #44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022
Python: 3.8.10
Stable-Baselines3: 1.5.0
PyTorch: 1.11.0+cu113
GPU Enabled: True
Numpy: 1.20.0
Gym: 0.21.0

({'OS': 'Linux-5.13.0-39-generic-x86_64-with-glibc2.29 #44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022',
  'Python': '3.8.10',
  'Stable-Baselines3': '1.5.0',
  'PyTorch': '1.11.0+cu113',
  'GPU Enabled': 'True',
  'Numpy': '1.20.0',
  'Gym': '0.21.0'},
 'OS: Linux-5.13.0-39-generic-x86_64-with-glibc2.29 #44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022\nPython: 3.8.10\nStable-Baselines3: 1.5.0\nPyTorch: 1.11.0+cu113\nGPU Enabled: True\nNumpy: 1.20.0\nGym: 0.21.0\n')
```

### Additional context

Add any other context about the problem here.

### Checklist

  • [x] I have read the documentation (required)
  • [x] I have checked that there is no similar issue in the repo (required)
  • [x] I have checked my env using the env checker (required)
  • [x] I have provided a minimal working example to reproduce the bug (required)

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

Miffyli commented, Apr 16, 2022 (1 reaction)

You should use MlpPolicy instead of MultiInputPolicy for Box spaces.

araffin commented, Apr 16, 2022 (1 reaction)

Hello, the env checker is made for gym.Env environments, not already-vectorized ones (if you are using Isaac Gym). You should use a VecEnvWrapper to use it with SB3; see https://github.com/DLR-RM/stable-baselines3/issues/772#issuecomment-1048657002
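That pattern can be illustrated without SB3 (all names below, `BatchedSimAdapter`, `FakeBatchedSim`, `reset_all`, `step_all`, are hypothetical stand-ins, not SB3 or Isaac Gym API): an already-vectorized simulator steps all of its environments in one call, so an adapter should expose that batch through a VecEnv-style interface rather than letting `DummyVecEnv` treat it as a single env and deep-copy its internals:

```python
# FakeBatchedSim stands in for an already-vectorized simulator: one call
# steps every environment and returns batched results.
class FakeBatchedSim:
    def __init__(self, num_envs):
        self.num_envs = num_envs

    def reset_all(self):
        return [[0.0, 0.0, 0.0] for _ in range(self.num_envs)]

    def step_all(self, actions):
        obs = [[0.0, 0.0, 0.0] for _ in range(self.num_envs)]
        rewards = [1.0] * self.num_envs
        dones = [False] * self.num_envs
        infos = [{} for _ in range(self.num_envs)]
        return obs, rewards, dones, infos


# The adapter exposes a VecEnv-like reset/step API over the batched
# simulator instead of wrapping it as one gym.Env.
class BatchedSimAdapter:
    def __init__(self, sim):
        self.sim = sim
        self.num_envs = sim.num_envs

    def reset(self):
        return self.sim.reset_all()

    def step(self, actions):
        assert len(actions) == self.num_envs
        return self.sim.step_all(actions)


venv = BatchedSimAdapter(FakeBatchedSim(num_envs=4))
obs = venv.reset()
obs, rewards, dones, infos = venv.step([[0.0]] * 4)
```

A real SB3 `VecEnvWrapper` additionally implements `step_async`/`step_wait` and the space attributes; the linked issue comment shows the full interface.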
