Failed to export ONNX model
Code version (Git Hash) and PyTorch version
st-gcn master@e7024ac and PyTorch '1.1.0a0+828a6a3'
Dataset used
Demo
Expected behavior
The ONNX model is exported successfully.
Actual behavior
```
root@p4station:/workspace# python main.py demo --openpose openpose/build/ --device 1
/workspace/processor/io.py:39: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  default_arg = yaml.load(f)
Starting OpenPose demo...
Auto-detecting all available GPUs...
Detected 1 GPU(s), using 1 of them starting at GPU 0.
Starting thread(s)...
OpenPose demo successfully finished. Total time: 47.039486 seconds.
Pose estimation complete.
Network forwad...
Prediction result: skateboarding
Done.
/workspace/net/utils/tgcn.py:58: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert A.size(0) == self.kernel_size
Traceback (most recent call last):
  File "main.py", line 31, in <module>
    p.start()
  File "/workspace/processor/demo.py", line 85, in start
    torch.onnx.export(self.model, dummy_input, "st-gcn_kinetics-skeleton.onnx")
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/__init__.py", line 24, in export
    return utils.export(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 108, in export
    _retain_param_name=_retain_param_name)
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 315, in _export
    _retain_param_name)
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 245, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type)
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 164, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/__init__.py", line 49, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 550, in _run_symbolic_function
    n.kindOf("value")))
RuntimeError: Unsupported prim::Constant kind: `s`. Send a bug report.
```
Steps to reproduce the behavior
```python
dummy_input = torch.randn(1, 3, 300, 18, 2, device='cuda')
torch.onnx.export(self.model, dummy_input, "st-gcn_kinetics-skeleton.onnx")
```
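For reference, here is a self-contained sketch of the export call in context. The `export_stgcn` helper and the tensor names are mine, not part of the repo, and `model` is assumed to be the already-loaded ST-GCN network (e.g. `self.model` in `processor/demo.py`):

```python
import torch

def export_stgcn(model, out_path="st-gcn_kinetics-skeleton.onnx"):
    """Sketch only: trace an ST-GCN model on a dummy skeleton clip and export it."""
    model.eval()
    # (batch, channels, frames, joints, persons) -- the input shape used by the demo
    dummy_input = torch.randn(1, 3, 300, 18, 2, device="cuda")
    torch.onnx.export(
        model,
        dummy_input,
        out_path,
        input_names=["data"],     # arbitrary labels for the exported graph,
        output_names=["scores"],  # not names required by the repo
        verbose=True,
    )
```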
Other comments
@lxy5513 I replaced the einsum with other supported ops:

```python
# x = torch.einsum('nkctv,kvw->nctw', (x, A))
x = x.permute(0, 2, 3, 1, 4).contiguous()  # (n, k, c, t, v) -> (n, c, t, k, v)
n, c, t, k, v = x.size()
k, v, w = A.size()
x = x.view(n * c * t, k * v)
A = A.view(k * v, w)
x = torch.mm(x, A)
x = x.view(n, c, t, w)
```
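As far as I can tell, the `Unsupported prim::Constant kind: s` error comes from the einsum equation string being traced into the graph as a string constant, which the ONNX exporter in this PyTorch build cannot handle; rewriting the einsum removes that constant. A standalone snippet (mine, not from the repo) to sanity-check that the rewrite matches the original einsum:

```python
import torch

# Shapes follow ST-GCN's spatial graph convolution: x is (n, k, c, t, v), A is (k, v, w).
# The concrete sizes below are arbitrary test values.
n, k, c, t, v, w = 2, 3, 16, 50, 18, 18
x = torch.randn(n, k, c, t, v)
A = torch.randn(k, v, w)

# Original formulation
ref = torch.einsum('nkctv,kvw->nctw', (x, A))

# Rewrite using only permute / view / mm
y = x.permute(0, 2, 3, 1, 4).contiguous().view(n * c * t, k * v)
out = torch.mm(y, A.view(k * v, w)).view(n, c, t, w)

print(torch.allclose(ref, out, atol=1e-5))  # expected: True
```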
@nfeng0105 @lxy5513 Hi, have you managed to export the ONNX model successfully? Could you please share your export script? This problem has bothered me for several days!