Failed to create ArcFace TensorRT model with TensorRT 20.10
Hi, I was trying to convert the ArcFace model to a TensorRT plan using build_insight_trt.py
and TensorRT 20.10, but it failed with the following error:
[TensorRT] ERROR: (Unnamed Layer* 480) [Shuffle]: at most one dimension may be inferred
ERROR: Failed to parse the ONNX file: /models/onnx/arcface_r100_v1/arcface_r100_v1.onnx.tmp
In node -1 (scaleHelper): UNSUPPORTED_NODE: Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
I need this because Triton supports custom Python backends only as of release 20.10 (not 20.09), and an engine built with TensorRT 20.09 cannot be loaded by Triton 20.10, since Triton 20.10 ships with TensorRT 20.10.
Can this be fixed?
Also, thanks for the amazing work!
It looks like the TensorRT ONNX parser is linked against the latest release of onnx-tensorrt. A fix for this issue is already in the master branch of onnx-tensorrt, but it hasn't been released yet, so the fix should land in the TensorRT release that follows the next onnx-tensorrt release. Alternatively, you can try building onnx-tensorrt from source and use its built-in onnx2trt utility to build the engine, as sketched below.
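For reference, a minimal sketch of that route, assuming a Linux container with TensorRT and CUDA already installed; the checkout point, TensorRT install path, and output file name are illustrative, not taken from this thread:

# Build onnx-tensorrt from source (master contains the fix)
git clone --recurse-submodules https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/usr   # point this at your TensorRT install
make -j$(nproc) && make install

# Convert the ONNX model with the bundled onnx2trt utility
onnx2trt /models/onnx/arcface_r100_v1/arcface_r100_v1.onnx \
    -o /models/plans/arcface_r100_v1.plan -b 1

The -o flag names the serialized engine and -b sets the maximum batch size; both are standard onnx2trt options.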
I have tested conversion with batch size > 1. Yes, it seems to be broken for TRT 7.2 for now. A workaround with trtexec seems to be working:
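The exact command was not captured in this snippet; a plausible sketch, assuming the standard ArcFace r100 input tensor named data with shape 3x112x112 and an ONNX model exported with a dynamic batch dimension (the tensor name, shape ranges, and paths are assumptions, not from the thread):

# Build a dynamic-batch engine with trtexec (TensorRT 7.x)
trtexec --onnx=/models/onnx/arcface_r100_v1/arcface_r100_v1.onnx \
        --saveEngine=/models/plans/arcface_r100_v1.plan \
        --explicitBatch \
        --minShapes=data:1x3x112x112 \
        --optShapes=data:8x3x112x112 \
        --maxShapes=data:16x3x112x112 \
        --workspace=2048

This builds an engine that accepts batch sizes from 1 to 16, with the optimization profile tuned for batch 8; adjust the opt/max shapes to the batch sizes you actually serve.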