
Have you tried using multiple CPUs with the example here in A2C?

See original GitHub issue

I am trying to use multiple CPUs for the example provided in this link.

I tried to change the environment to use multiple CPUs:

```python
env = DummyVecEnv([env_maker for i in range(16)])
```

But I have a problem with the done and info values in stable-baselines: it seems they have turned into arrays.
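
For example, with 16 environments each call to step returns batched results, one entry per environment (the shapes below are what I observe, using the env created above):

```python
import numpy as np

# One action per environment (assuming a simple, e.g. discrete, action space).
actions = np.array([env.action_space.sample() for _ in range(16)])
observation, reward, done, info = env.step(actions)

# observation -> 16 stacked observations, one per env
# reward      -> array of shape (16,), one reward per env
# done        -> boolean array of shape (16,), so a plain `if done:` is ambiguous
# info        -> sequence of 16 info dicts
```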

There is an error in this code. Any suggestions, or has anyone done this? It seems the LSTMs in stable-baselines are like this.

```python
#env = env_maker()
#observation = env.reset()

while True:
    #observation = observation[np.newaxis, ...]

    # action = env.action_space.sample()
    action, _states = model.predict(observation)
    observation, reward, done, info = env.step(action)

    # env.render()
    if done:
        print("info:", info)
        break
```

------------------------------

Error:

```python
ValueError                                Traceback (most recent call last)
<ipython-input-27-2d78acbb8800> in <module>
     10 
     11     # env.render()
---> 12     if done:
     13         print("info:", info)
     14         break

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
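
Following the error's own suggestion (a.any() / a.all()), my guess is the loop would need something like this for the vectorized case (not a confirmed fix, just a sketch):

```python
while True:
    action, _states = model.predict(observation)
    observation, reward, done, info = env.step(action)

    # `done` is now a boolean array with one flag per environment,
    # so check whether any (or all) of the 16 envs have finished.
    if done.any():
        print("info:", info)
        break
```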

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

1 reaction
AminHP commented, Oct 2, 2020

Thanks man 😃

Yeah, somehow, but I didn’t override DummyVecEnv itself this time. I inherited a new class from it (DummyVecEnv2) and overrode its reset method.

0 reactions
toksis commented, Oct 2, 2020

You are a guru! It works now. What you did was, after learning, override the DummyVecEnv by removing the reset. Am I correct?
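
A minimal sketch of what such a DummyVecEnv2 could look like (my assumption based on the description above, not the actual code from this thread; it relies on DummyVecEnv's internal _obs_from_buf helper):

```python
from stable_baselines.common.vec_env import DummyVecEnv

class DummyVecEnv2(DummyVecEnv):
    def reset(self):
        # Skip the per-environment env.reset() calls that the base class
        # performs and just return the currently buffered observations.
        return self._obs_from_buf()
```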


