Cannot load onnx model
See original GitHub issue

I load my ONNX model without a config.pbtxt file, but I got this error: Mismatch between allocated memory size
trtserver: engine.cpp:1094: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
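An assertion like this comes from TensorRT engine deserialization, which can happen if the server guesses the wrong backend for the model file when no config.pbtxt is present. If that is the cause, explicitly pinning the platform in a minimal config.pbtxt may help. The sketch below is an assumption-laden example, not the reporter's actual config: the model name, tensor names, types, and dims are hypothetical placeholders that must be replaced with what the model really exports.

```
# Minimal sketch of a config.pbtxt for an ONNX model served by TRTIS/Triton.
# Assumes the model file is named model.onnx inside the version directory.
# "input" / "output" and all dims are hypothetical; use your model's real I/O.
name: "my_onnx_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```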
Issue Analytics
- Created 4 years ago
- Comments: 5 (3 by maintainers)
Top Results From Across the Web

can not load onnx model · Issue #2793 · triton-inference ...
i convert a "detr_resnet50" model to an onnx with dynamic batch as below shown, and it seems to be OK. the converting...
Read more >

Can't load ONNX model - NVIDIA Developer Forums
Description: I am getting an error in loading my custom face mask detection model ONNX using the SSD Detector code when I run...
Read more >

Error when trying to load .onnx files - Apache TVM Discuss
Hello, I have just installed TVM and was going through the tutorials. I ran tvmc compile --target "llvm" --output resnet50-v2-7-tvm.tar ...
Read more >

Unable to import ONNX model - Python - OpenCV Forum
I am trying to use an adult content detection ONNX model. This model was originally converted from a tensorflow model by the author....
Read more >

[Solved]-Load onnx model in opencv dnn-C++ - appsloveworld
Running Keras DNN model (UNet) using OpenCV readNetFromTensorFlow: Error: Unknown layer type Shape in op decoder_stage0_upsampling/Shape · How to load base onnx ...
Read more >
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@ThiagoMateo if you specify the `format` field in the input, the input should only have 3 dimensions specified (c, h, w). You can try removing the `format` field.

@ThiagoMateo From ONNX Runtime's commit history, it appears that non-spatial BatchNormalization has been supported since ONNX Runtime v1.0.0. TRTIS just advanced its ONNX Runtime version to 1.0.0 recently, which will ship in 19.11. So you can wait until 19.11 is released, in the next few days, and see if TRTIS can load the model successfully.

In the meantime, you may try to deploy your model on GPU / with different Execution Accelerators, as the error only indicates that the CPU provider doesn't support this op (while the others may support it).
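To make the first comment concrete, here is a hedged sketch of the two input blocks being contrasted. The tensor name and sizes are made up for illustration; the point is only how `format` interacts with the listed dims.

```
# With a format specified and max_batch_size > 0, dims must list exactly
# the 3 non-batch dimensions (c, h, w); the batch dim is implied.
input [
  {
    name: "input"            # hypothetical tensor name
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 224, 224 ]
  }
]

# Alternative suggested in the comment: drop the format field entirely
# and let the declared dims stand on their own.
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
```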
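And a sketch of the second suggestion, deploying on GPU instead of the CPU provider. The `execution_accelerators` block is written as it appears in later Triton model-config schemas and is an assumption for the 19.xx TRTIS era; the `instance_group` part is the more conservative change.

```
# Run model instances on GPU so ONNX Runtime uses its CUDA provider
# instead of the CPU provider that rejects the op.
instance_group [
  {
    count: 1
    kind: KIND_GPU
  }
]

# Optionally route through a different execution accelerator (TensorRT);
# field layout follows the newer Triton config schema (assumption).
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [ { name : "tensorrt" } ]
  }
}
```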