
[Rllib] Expected all tensors to be on same device. Found cuda:0 and cpu! While running ddpg with tune

See original GitHub issue

What is the problem?

Hi,

This is my first issue, so if I am missing any information please let me know and I will add it. I am trying to test the DDPG algorithm on a simple benchmark environment (Pendulum-v0, as in the reproduction below). If I run with config['num_gpus'] set to 0 there is no problem, but if I set it to 1 I get the following error.

Ray version and other system information (Python version, TensorFlow version, OS): ray==1.0.0, torch==1.7.0, Python 3.8.5, Ubuntu 20.04.1 LTS

Reproduction (REQUIRED)

from ray import tune
from ray.rllib.agents import ddpg

# Start from the default DDPG config and train on Pendulum-v0 with the torch framework.
config = ddpg.DEFAULT_CONFIG.copy()
config['num_workers'] = 4       # four rollout workers
config['num_gpus'] = 1          # works when set to 0, fails when set to 1
config['framework'] = 'torch'
config['env'] = "Pendulum-v0"

tune.run(
    ddpg.DDPGTrainer,
    config=config)
(pid=1189791) 2020-11-04 14:31:25,534	INFO trainer.py:616 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
(pid=1189794) /home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/utils/torch_ops.py:65: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /opt/conda/conda-bld/pytorch_1603729062494/work/torch/csrc/utils/tensor_numpy.cpp:141.)
(pid=1189794)   tensor = torch.from_numpy(np.asarray(item))
(pid=1191536, 1191538, 1191537) [the same UserWarning from torch_ops.py:65 is repeated by the remaining rollout workers]
2020-11-04 14:31:30,465	ERROR trial_runner.py:567 -- Trial DDPG_Pendulum-v0_535f9_00000: Error processing event.
Traceback (most recent call last):
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/tune/trial_runner.py", line 515, in _process_trial
    result = self.trial_executor.fetch_result(trial)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/tune/ray_trial_executor.py", line 488, in fetch_result
    result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/worker.py", line 1428, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::DDPG.train() (pid=1189791, ip=10.0.0.12)
  File "python/ray/_raylet.pyx", line 484, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 438, in ray._raylet.execute_task.function_executor
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/agents/trainer.py", line 519, in train
    raise e
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/agents/trainer.py", line 505, in train
    result = Trainable.train(self)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/tune/trainable.py", line 336, in train
    result = self.step()
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/agents/trainer_template.py", line 134, in step
    res = next(self.train_exec_impl)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 756, in __next__
    return next(self.built_iterator)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 783, in apply_foreach
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 843, in apply_filter
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 843, in apply_filter
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 783, in apply_foreach
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 843, in apply_filter
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 1075, in build_union
    item = next(it)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 756, in __next__
    return next(self.built_iterator)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 783, in apply_foreach
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 783, in apply_foreach
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 783, in apply_foreach
    for item in it:
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/util/iter.py", line 791, in apply_foreach
    result = fn(item)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/execution/train_ops.py", line 69, in __call__
    info = self.workers.local_worker().learn_on_batch(batch)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/evaluation/rollout_worker.py", line 768, in learn_on_batch
    info_out[pid] = policy.learn_on_batch(batch)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/policy/torch_policy.py", line 347, in learn_on_batch
    self._loss(self, self.model, self.dist_class, train_batch))
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/agents/ddpg/ddpg_torch_policy.py", line 54, in ddpg_actor_critic_loss
    policy_t = model.get_policy_output(model_out_t)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/agents/ddpg/ddpg_torch_model.py", line 178, in get_policy_output
    return self.policy_model(model_out)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/tristan/anaconda3/envs/TSCSProject/lib/python3.8/site-packages/ray/rllib/agents/ddpg/ddpg_torch_model.py", line 93, in forward
    squashed = self.action_range * sigmoid_out + self.low_action
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
  • I have verified my script runs in a clean environment and reproduces the issue.
  • I have verified the issue also occurs with the latest wheels.
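For what it's worth, the failing line in the traceback (ddpg_torch_model.py, line 93: squashed = self.action_range * sigmoid_out + self.low_action) multiplies the policy network's output, which lives on cuda:0, by self.action_range and self.low_action. An error like this typically means those two constants were created as plain CPU tensors: ordinary tensor attributes are not moved by model.to(device) or model.cuda(), unlike parameters and registered buffers. The toy module below is only a sketch of that failure mode and the usual buffer-based fix; SquashingHead is a made-up name for illustration, not RLlib's actual DDPGTorchModel code.

import torch
import torch.nn as nn

class SquashingHead(nn.Module):
    """Toy stand-in for a policy head that squashes a sigmoid output into [low, high]."""
    def __init__(self, low: float, high: float):
        super().__init__()
        # Plain tensor attributes would stay on the CPU even after .cuda():
        #   self.action_range = torch.tensor(high - low)
        #   self.low_action = torch.tensor(low)
        # Registering them as buffers lets them follow the module across devices.
        self.register_buffer("action_range", torch.tensor(high - low))
        self.register_buffer("low_action", torch.tensor(low))

    def forward(self, sigmoid_out):
        # Same shape of computation as the failing line in the traceback.
        return self.action_range * sigmoid_out + self.low_action

head = SquashingHead(low=-2.0, high=2.0)
if torch.cuda.is_available():
    head = head.cuda()
    squashed = head(torch.sigmoid(torch.randn(4, 1, device="cuda")))
    print(squashed.device)  # cuda:0, no device mismatch

If the constants in the installed RLlib version really are plain CPU tensors, the same idea (registering them as buffers, or moving them onto the input's device inside forward) would be the in-library fix; which Ray release changes this, if any, is not confirmed in this thread.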

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 9 (2 by maintainers)

Top GitHub Comments

1 reaction
alexvaca0 commented on Jan 27, 2022

Hi, I’m running into a similar issue when trying to use PPO with ray 2.0.0.dev0 and Python 3.8 on a Windows 10 machine…

0 reactions
NicoleRichards1998 commented on Jun 24, 2022

Did anyone find a solution to this problem? I’m running into the same error with the DDPG trainer. I need to use tf2 for my LSTM network to work, so I definitely cannot switch away from tf2. The code I’m using works perfectly with the PPO trainer as well as the A2C trainer.
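No confirmed fix is posted in this thread, so the only workaround it actually contains is the one from the original report: keep the learner on the CPU. Below is a minimal sketch of that stop-gap, assuming the same reproduction script as above; whether upgrading Ray resolves the device placement is not something this thread confirms.

from ray import tune
from ray.rllib.agents import ddpg

config = ddpg.DEFAULT_CONFIG.copy()
config['num_workers'] = 4
config['num_gpus'] = 0          # CPU-only learner sidesteps the cuda:0 vs cpu mismatch
config['framework'] = 'torch'
config['env'] = "Pendulum-v0"

tune.run(ddpg.DDPGTrainer, config=config)

Note that num_gpus only controls the learner/driver process; GPU use on the rollout workers is governed by num_gpus_per_worker, so sampling is unaffected by this flag.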

Read more comments on GitHub >

Top Results From Across the Web

  • RuntimeError: Expected all tensors to be on the same ... - Ray
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument...
  • RuntimeError: Expected all tensors to be on the same device ...
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training...
  • RLlib: Abstractions for Distributed Reinforcement Learning
    We argue for distributing RL components in a composable way by adapting algorithms for top-down hierarchical control, thereby encapsulating parallelism and...
  • master PDF - Stable Baselines3 Documentation
    code/stable-baselines), so all the logs created in the container in this ... Reinforcement Learning differs from other machine learning ...
  • Expected all tensors to be on the same ... - PyTorch Forums
    Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! error ... Which device are...
