some error with onnx transfer to trt

See original GitHub issue

Error during export:

[05/19/2022-11:11:38] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2261 In function importPad:
[8] Assertion failed: mode == "constant" && value == 0.f && "This version of TensorRT only supports constant 0 padding!"
[05/19/2022-11:11:38] [E] Failed to parse onnx file
[05/19/2022-11:11:38] [E] Parsing model failed
[05/19/2022-11:11:38] [E] Engine creation failed
[05/19/2022-11:11:38] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=colorize_infer_model_CPU_sim.onnx --saveEngine=colorize_infer_model_CPU_sim.trt --best
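The assertion in `importPad` means the parser found a Pad node whose `mode` is not `"constant"` or whose fill value is nonzero, which this TensorRT version cannot import. A minimal sketch of that check, using plain dicts to stand in for parsed ONNX nodes (a real script would walk `model.graph.node` with the `onnx` package and read each node's attributes):

```python
# Sketch: flag Pad nodes that this TensorRT version's ONNX importer rejects.
# Nodes are plain dicts here for illustration only; the real importer works
# on the protobuf graph.
def unsupported_pads(nodes):
    bad = []
    for node in nodes:
        if node.get("op_type") != "Pad":
            continue
        mode = node.get("mode", "constant")   # ONNX Pad defaults to "constant"
        value = node.get("value", 0.0)        # constant fill value, default 0
        if mode != "constant" or value != 0.0:
            bad.append(node.get("name", "<unnamed>"))
    return bad

nodes = [
    {"op_type": "Pad", "name": "pad_ok", "mode": "constant", "value": 0.0},
    {"op_type": "Pad", "name": "pad_reflect", "mode": "reflect"},
    {"op_type": "Relu", "name": "act"},
]
print(unsupported_pads(nodes))  # → ['pad_reflect']
```

If such a node is found, the fixes are either to change the padding in the source model or to upgrade TensorRT, as the resolution below shows.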

How do I convert INT64 to INT32? Or how else can I fix this type error?
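On the INT64 question: that line is only a warning, since the parser casts INT64 weights down to INT32 itself; the hard failure is the Pad assertion. Still, the downcast logic is easy to sketch. A stdlib-only illustration of a safe int64-to-int32 cast that clamps out-of-range values (similar in spirit to what the parser does; a real script would instead rewrite `model.graph.initializer` tensors with the `onnx` package, which is not shown here):

```python
# Hypothetical helper: downcast int64 weight values to int32, clamping
# anything outside the representable range instead of overflowing.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def downcast_int64(values):
    """Return the values as int32-safe ints, clamped to [INT32_MIN, INT32_MAX]."""
    return [max(INT32_MIN, min(INT32_MAX, int(v))) for v in values]

print(downcast_int64([7, 2**40, -2**40]))  # → [7, 2147483647, -2147483648]
```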

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
tongchangD commented, May 23, 2022

Thank you for your reply. Sorry, this turned out to be a TensorRT issue: someone mentioned that upgrading TensorRT can solve this problem, so I upgraded to tensorrt==8.2.0 and that fixed it.

0 reactions
jcwchen commented, May 20, 2022

> @jcwchen Could you please move this issue to https://github.com/onnx/onnx-tensorrt if possible?

Sorry, I don't have permission for onnx-tensorrt, so I cannot transfer this issue there.


Top Results From Across the Web

  • Failed converting ONNX model to TensorRT model
  • TensorRT/ONNX - eLinux.org (tip: add --verbose to the trtexec command when parsing fails)
  • TensorRT Execution Provider - NVIDIA - ONNX Runtime
  • Convert PyTorch Model to ONNX Model - Documentation
  • What is the opset number? — sklearn-onnx 1.11.2
