Error when exporting a Swin model to ONNX.
Are Swin models supported for mmseg deployment?
python mmdeploy/tools/deploy.py mmdeploy/configs/mmseg/segmentation_tensorrt_static-512x1024.py mmsegmentation/configs/swin/upernet_swin_base_patch4_window12_512x512_160k_ade20k_pretrain_384x384_22K.py mmsegmentation/checkpoints/upernet_swin_base_patch4_window12_512x512_160k_ade20k_pretrain_384x384_22K_20210531_125459-429057bf.pth mmsegmentation/demo/demo.png --work-dir mmdeploy/work-dir --show --device cuda:0 --dump-info
…
…
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
2022-10-14 09:51:01,738 - mmdeploy - INFO - Execute onnx optimize passes.
2022-10-14 09:51:46,852 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
2022-10-14 09:52:12,573 - mmdeploy - INFO - Start pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt in subprocess
2022-10-14 09:52:13,049 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /code/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
Process Process-3:
Traceback (most recent call last):
File "/root/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/root/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/code/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 88, in onnx2tensorrt
device_id=device_id)
File "/code/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 165, in from_onnx
raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node -1 (importPad): UNSUPPORTED_NODE: Assertion failed: inputs.at(1).is_weights()
2022-10-14 09:52:28,598 - mmdeploy - ERROR - mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt
with Call id: 1 failed. exit.
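
The failing node is an ONNX Pad whose pad amounts are computed at runtime instead of being stored as constant weights; the ONNX parser in older TensorRT releases asserts inputs.at(1).is_weights() and rejects such nodes, while 8.2+ accepts dynamic pad inputs. A minimal diagnostic sketch, assuming the exported model was written to mmdeploy/work-dir/end2end.onnx (the default file name under the --work-dir given above; adjust the path if yours differs), that lists Pad nodes with a non-constant pads input:

import onnx

# Path assumed from the --work-dir above; adjust if your export landed elsewhere.
model = onnx.load('mmdeploy/work-dir/end2end.onnx')

# The parser treats a Pad input as "weights" only if it is a graph
# initializer or the output of a Constant node.
initializers = {init.name for init in model.graph.initializer}
constants = {out for node in model.graph.node
             if node.op_type == 'Constant' for out in node.output}

for node in model.graph.node:
    # With opset >= 11, the pad amounts come in as the second input.
    if node.op_type == 'Pad' and len(node.input) > 1:
        pads = node.input[1]
        if pads not in initializers and pads not in constants:
            print(f'Pad node {node.name!r} takes dynamic pads from {pads!r}')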
Issue Analytics
- State:
- Created a year ago
- Comments:5
@abetancordelrosario Hi, please update your TensorRT to 8.2; it works fine on my side.
Please refer to this doc.
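
As a quick sanity check after upgrading, the TensorRT Python bindings report their version directly; a small sketch:

import tensorrt
# Should print 8.2.x or newer once the upgrade is in place.
print(tensorrt.__version__)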