onnx_export RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
See original GitHub issue.
If you do not know the root cause of the problem and wish someone to help you, please post according to this template:
Instructions To Reproduce the Issue:
Check https://stackoverflow.com/help/minimal-reproducible-example for how to ask good questions. Simplify the steps to reproduce the issue using suggestions from the above link, and provide them below:
- full code you wrote or full changes you made (git diff):
  I'm trying to convert the Model Zoo veriwild_bot_R50-ibn.pth checkpoint to an ONNX model.
- what exact command you run:
python onnx_export.py --config-file ../../configs/VERIWild/bagtricks_R50-ibn.yml --name baseline_R50 --output ../../output/onnx_model/ --opts MODEL.WEIGHTS ../../veriwild_bot_R50-ibn.pth
- full logs you observed:
[04/23 17:20:25 onnx_export]: Beginning ONNX file converting
/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py:258: UserWarning: `add_node_names' can be set to True only when 'operator_export_type' is `ONNX`. Since 'operator_export_type' is not set to 'ONNX', `add_node_names` argument will be ignored.
"`{}` argument will be ignored.".format(arg_name, arg_name))
/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py:258: UserWarning: `do_constant_folding' can be set to True only when 'operator_export_type' is `ONNX`. Since 'operator_export_type' is not set to 'ONNX', `do_constant_folding` argument will be ignored.
"`{}` argument will be ignored.".format(arg_name, arg_name))
Traceback (most recent call last):
File "onnx_export.py", line 153, in <module>
onnx_model = export_onnx_model(model, inputs)
File "onnx_export.py", line 119, in export_onnx_model
operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 230, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 91, in export
use_external_data_format=use_external_data_format)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 639, in _export
dynamic_axes=dynamic_axes)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 411, in _model_to_graph
use_new_jit_passes)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 379, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 342, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py", line 1148, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py", line 130, in forward
self._force_outplace,
File "/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py", line 116, in wrapper
outs.append(self.inner(*trace_inputs))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "../../fastreid/modeling/meta_arch/baseline.py", line 100, in forward
images = self.preprocess_image(batched_inputs)
File "../../fastreid/modeling/meta_arch/baseline.py", line 130, in preprocess_image
images.sub_(self.pixel_mean).div_(self.pixel_std)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Issue Analytics
- State: closed
- Created: 2 years ago
- Comments: 6
I fixed the issue: just upgrade onnx from 1.4.1 to 1.9.0.
Everything is working now.
This issue was closed because it has been inactive for 14 days since being marked as stale.