Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Error when exporting Swin model to ONNX

See original GitHub issue

Error when exporting to ONNX.

Are Swin models supported in mmseg?


python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmseg/segmentation_tensorrt_static-512x1024.py \
    mmsegmentation/configs/swin/upernet_swin_base_patch4_window12_512x512_160k_ade20k_pretrain_384x384_22K.py \
    mmsegmentation/checkpoints/upernet_swin_base_patch4_window12_512x512_160k_ade20k_pretrain_384x384_22K_20210531_125459-429057bf.pth \
    mmsegmentation/demo/demo.png \
    --work-dir mmdeploy/work-dir \
    --show \
    --device cuda:0 \
    --dump-info
… …
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (repeated 4 times)
2022-10-14 09:51:01,738 - mmdeploy - INFO - Execute onnx optimize passes.
2022-10-14 09:51:46,852 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
2022-10-14 09:52:12,573 - mmdeploy - INFO - Start pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt in subprocess
2022-10-14 09:52:13,049 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /code/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
Process Process-3:
Traceback (most recent call last):
  File "/root/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/root/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in call
    ret = func(*args, **kwargs)
  File "/code/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 88, in onnx2tensorrt
    device_id=device_id)
  File "/code/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 165, in from_onnx
    raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node -1 (importPad): UNSUPPORTED_NODE: Assertion failed: inputs.at(1).is_weights()

2022-10-14 09:52:28,598 - mmdeploy - ERROR - mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt with Call id: 1 failed. exit.
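The final `importPad` assertion is the root cause: TensorRT's ONNX parser in older releases requires the `pads` input of a `Pad` node to be a constant initializer ("weights"), but the Swin export computes it at runtime (via the sliced shape tensors warned about above), so parsing fails; the fix suggested below in the thread is upgrading TensorRT to 8.2. As a hedged illustration of the constraint being violated, here is a minimal sketch that flags Pad nodes with non-constant `pads`, using plain dicts as a stand-in for the real ONNX graph structures (node and initializer names here are invented for the example):

```python
# Illustrative sketch (plain dicts, not the real onnx package):
# TensorRT's older ONNX parsers require the second input of a Pad node
# ("pads") to be a constant initializer. If it is produced by another
# node at runtime, parsing fails with:
#   importPad: UNSUPPORTED_NODE: Assertion failed: inputs.at(1).is_weights()

def find_dynamic_pads(nodes, initializers):
    """Return names of Pad nodes whose 'pads' input is not a constant."""
    bad = []
    for node in nodes:
        if node["op_type"] == "Pad" and len(node["inputs"]) > 1:
            if node["inputs"][1] not in initializers:
                bad.append(node["name"])
    return bad

# Toy graph: one Pad fed by a constant, one fed by a Slice output.
nodes = [
    {"name": "pad_ok",  "op_type": "Pad", "inputs": ["x", "pads_const"]},
    {"name": "pad_bad", "op_type": "Pad", "inputs": ["y", "slice_out"]},
]
initializers = {"pads_const"}
print(find_dynamic_pads(nodes, initializers))  # ['pad_bad']
```

With the real model, the same inspection could be done against `model.graph.node` and `model.graph.initializer` after `onnx.load(...)`.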

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 5

Top GitHub Comments

1 reaction
RunningLeon commented, Oct 17, 2022

@abetancordelrosario Hi, please update your TensorRT to 8.2. It works fine on my side.
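Since the suggested fix is a version bump, a deployment script can fail early when the installed TensorRT is too old. A minimal sketch of such a guard, assuming the version string has the usual `major.minor.patch.build` shape reported by `tensorrt.__version__` (the `tensorrt` import itself is omitted so the check is shown standalone):

```python
# Sketch: gate a deployment on TensorRT >= 8.2, comparing only the
# major.minor components of a dotted version string.
def meets_minimum(version: str, minimum=(8, 2)) -> bool:
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

print(meets_minimum("8.0.1"))    # False
print(meets_minimum("8.2.1.8"))  # True
```

In practice the argument would be `tensorrt.__version__`, with a `RuntimeError` raised before the ONNX-to-engine conversion starts if the check fails.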

0 reactions
RunningLeon commented, Oct 20, 2022

> Where can I download TensorRT 8.2 for CUDA 10.2 and ARM?

Please refer to this doc.


Top Results From Across the Web

[help]How to export swin model to ONNX? Problem: Node ...
I exported my trained model into ONNX by the following code: torch.onnx.export(model, input_tensor, onnx_name, verbose=True, ...
Export to ONNX - Transformers - Hugging Face
In this guide, we'll show you how to export Transformers models to ONNX (Open Neural Network eXchange). Once exported, a model can be...
Changelog — MMDetection 2.26.0 documentation
Add an example of combining swin and one-stage models (#6621) ... Fix dynamic_axes parameter error in ONNX dynamic shape export (#6104).
Dynamic Input. IShuffleLayer applied to shape tensor must ...
Description I want to convert swin transformer model with dynamic shape to tensorrt. ... [W] ONNX shape inference exited with an error:
Running an ONNX model ValueError: not enough values to ...
I am new to python. I would like to convert my .pth file to .onnx file but encounter an error when running the...
