
Multiple Environments with Unity MLagents


Hi,

I am trying to combine the algorithms of this repo with https://github.com/Unity-Technologies/ml-agents . I created an environment and used their Python API, which includes a gym wrapper. When I then try to create a SubprocVecEnv with the environments, I get the following error:

2020-06-05 18:23:46 INFO [environment.py:342] Connected new brain:
Brain?team=0
2020-06-05 18:23:48 INFO [environment.py:111] Connected to Unity environment with package version 1.0.0-preview and communication version 1.0.0
2020-06-05 18:23:49 INFO [environment.py:342] Connected new brain:
Brain?team=0
Traceback (most recent call last):
  File "C:/Users/.../Desktop/RLUnity/python/test.py", line 46, in <module>
    env = SubprocVecEnv([env1 , env2])
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\site-packages\stable_baselines\common\vec_env\subproc_vec_env.py", line 93, in __init__
    process.start()
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\site-packages\stable_baselines\common\vec_env\base_vec_env.py", line 331, in __getstate__
    return cloudpickle.dumps(self.var)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\site-packages\cloudpickle\cloudpickle.py", line 1148, in dumps
    cp.dump(obj)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\site-packages\cloudpickle\cloudpickle.py", line 491, in dump
    return Pickler.dump(self, obj)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 437, in dump
    self.save(obj)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 549, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 662, in save_reduce
    save(state)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 885, in _batch_setitems
    save(v)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 549, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 662, in save_reduce
    save(state)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 885, in _batch_setitems
    save(v)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 549, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 662, in save_reduce
    save(state)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 885, in _batch_setitems
    save(v)
  File "C:\Users\...\.conda\envs\tensorflow_src_1_14\lib\pickle.py", line 524, in save
    rv = reduce(self.proto)
TypeError: can't pickle _thread.lock objects
2020-06-05 18:23:49 INFO [environment.py:498] Environment shut down with return code 0 (CTRL_C_EVENT).
2020-06-05 18:23:49 INFO [environment.py:498] Environment shut down with return code 0 (CTRL_C_EVENT).

Process finished with exit code 1
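
The root of this traceback is its final `TypeError`: on Windows, `multiprocessing` starts workers with the `spawn` method, which pickles everything sent to a child process. An already-connected `UnityEnvironment` holds a live connection and, with it, a `_thread.lock`, which pickle cannot serialize. A minimal, self-contained demonstration of the same failure (the `HoldsLock` class here is just a stand-in for an already-constructed environment, not anything from ml-agents):

```python
import pickle
import threading

class HoldsLock:
    """Stand-in for an object with a live connection, like a running UnityEnvironment."""
    def __init__(self):
        self._lock = threading.Lock()  # unpicklable, like an open socket

try:
    pickle.dumps(HoldsLock())
except TypeError as exc:
    print(exc)  # e.g. "can't pickle _thread.lock objects"
```

The exact wording of the message varies by Python version, but the failure mode is the same one the traceback shows.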

This is the code I am trying to run:

import tensorflow as tf
from mlagents_envs.environment import UnityEnvironment

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv, SubprocVecEnv
from stable_baselines import PPO2
from gym_unity.envs import UnityToGymWrapper
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfig, EngineConfigurationChannel

try:
    from mpi4py import MPI
    print("MPI found!")
except ImportError:
    print("No MPI")
    MPI = None
MPI = None

engine_configuration_channel = EngineConfigurationChannel()
engine_configuration_channel.set_configuration_parameters(time_scale = 10.0)

unityEnv1 = UnityEnvironment(worker_id=0, file_name=".../RLUnity/ml-agents/exe2/RLproject", no_graphics=False, seed=1, side_channels=[engine_configuration_channel])
env1 = UnityToGymWrapper(unityEnv1, 0, False)
unityEnv2 = UnityEnvironment(worker_id=1, file_name=".../RLUnity/ml-agents/exe2/RLproject", no_graphics=False, seed=1, side_channels=[engine_configuration_channel])
env2 = UnityToGymWrapper(unityEnv2, 0, False)

env = SubprocVecEnv([env1 , env2])
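
For reference, the likely fix (untested here, since it needs the Unity build): `SubprocVecEnv` expects a list of zero-argument callables that construct each environment inside its own worker process, not already-constructed environment instances, so nothing holding a live connection ever has to be pickled. The sketch below mirrors that pattern with plain `multiprocessing`; `FakeEnv`, `make_env`, and `worker` are illustrative stand-ins, not stable-baselines or ml-agents API:

```python
import multiprocessing as mp
import threading
from functools import partial

class FakeEnv:
    """Stand-in for UnityToGymWrapper: once built, it holds an unpicklable lock."""
    def __init__(self, worker_id):
        self.worker_id = worker_id
        self._lock = threading.Lock()

    def reset(self):
        return f"obs from worker {self.worker_id}"

def make_env(worker_id):
    # A partial over a module-level class is picklable: it carries only the
    # class reference and plain arguments, not a live connection.
    return partial(FakeEnv, worker_id)

def worker(env_fn, queue):
    env = env_fn()  # the environment is constructed inside the child process
    queue.put(env.reset())

if __name__ == "__main__":
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(make_env(wid), queue)) for wid in range(2)]
    for p in procs:
        p.start()
    results = sorted(queue.get() for _ in procs)
    for p in procs:
        p.join()
    print(results)
```

Applied to the code above, this would mean something like `SubprocVecEnv([make_env(0), make_env(1)])`, where `make_env(worker_id)` (a hypothetical helper) returns a callable that creates the `UnityEnvironment` with a distinct `worker_id` and its own `EngineConfigurationChannel`, then wraps it in `UnityToGymWrapper`, all inside the worker process. On Windows the script also needs the `if __name__ == "__main__":` guard shown above, because `spawn` re-imports the main module in each child.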

System Info

  • Installation method: pip
  • GPU: RTX 2070
  • Python version: 3.7.7
  • Tensorflow version: 1.14
  • Other relevant libraries: mlagents 0.16.0, mlagents-envs 0.16.0 (https://github.com/Unity-Technologies/ml-agents)


Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 9

Top GitHub Comments

1 reaction
pengzhi1998 commented, Dec 23, 2021

@Miffyli Got it. Thank you for your reply!

0 reactions
Miffyli commented, Dec 23, 2021

@pengzhi1998 If I understood you right, the question is about using gym-unity environments with multiprocessing, without using baselines/stable-baselines? We do not have answers for this, as these issues are for questions related to stable-baselines. You would have better luck asking in the gym-unity repo or elsewhere.


Top Results From Across the Web

  • ML-Agents: Multiple environments (num-envs) - Unity Forum
  • num-envs vs multiple training areas within one application
  • Unity's ML-Agents and Effective ways of training (3/3)
  • Multi-agents environments and adversarial self-play in Unity
  • 5 Tips for Setting up an ML-Agents Environment in Unity
