
Cannot load onnx model

See original GitHub issue

I load my ONNX model without a config.pbtxt file, but I got this error: Mismatch between allocated memory size

trtserver: engine.cpp:1094: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
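Note that the assertion in the trace comes from `nvinfer1::rt::Engine::deserialize`, i.e. TensorRT's plan deserializer, which suggests the server tried to load the ONNX file as a serialized TensorRT engine rather than through the ONNX Runtime backend. One way to rule that out is to supply a config.pbtxt that names the backend explicitly. A minimal sketch (the model name, tensor names, and shapes below are hypothetical placeholders, not taken from the issue):

```protobuf
# models/my_model/config.pbtxt -- hypothetical layout
name: "my_model"
platform: "onnxruntime_onnx"   # ensure the file is not treated as a TensorRT plan
max_batch_size: 8
input [
  {
    name: "input"              # must match the ONNX graph's input name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"             # must match the ONNX graph's output name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```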

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

5 reactions
GuanLuo commented, Nov 27, 2019

@ThiagoMateo if you specify the format field in the input, the input should have only 3 dimensions (c, h, w). You can try removing the format field.
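Concretely, the advice is that when `format` is set (e.g. `FORMAT_NCHW`), `dims` must list only the three non-batch dimensions; dropping the field sidesteps that check. A sketch of the two variants (tensor name and shape are hypothetical):

```protobuf
# Variant that can trigger the complaint: format plus a batch dimension in dims
input [
  {
    name: "input"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 1, 3, 224, 224 ]   # 4 dims together with format -> rejected
  }
]

# Suggested fix: omit the format field
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]      # c, h, w only
  }
]
```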

1 reaction
GuanLuo commented, Nov 26, 2019

@ThiagoMateo From ONNX Runtime’s commit history, it appears that non-spatial BatchNormalization has been supported since ONNX Runtime v1.0.0. TRTIS just advanced its ONNX Runtime version to 1.0.0 recently, and this will ship in release 19.11. So you can wait until 19.11 is released in the next few days and see whether TRTIS can load the model successfully.

In the meantime, you may try deploying your model on GPU or with a different Execution Accelerator, since the error only indicates that the CPU provider doesn’t support this op (the other providers may support it).
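On the TRTIS side, both "deploy on GPU" and "different Execution Accelerators" are expressed in config.pbtxt. A sketch, assuming a server release that supports these fields (the execution-accelerator setting for the ONNX Runtime backend appeared around this era, so check the release notes for your version):

```protobuf
# Run model instances on GPU rather than CPU
instance_group [
  {
    kind: KIND_GPU
    count: 1
  }
]

# Optionally ask ONNX Runtime to offload supported ops to TensorRT on GPU
optimization {
  execution_accelerators {
    gpu_execution_accelerator: [ { name: "tensorrt" } ]
  }
}
```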

Read more comments on GitHub >

Top Results From Across the Web

  • can not load onnx model · Issue #2793 · triton-inference ...
    i convert a "detr_resnet50" model to an onnx with dynamic batch as below shown , and it seems to be OK. the converting...
  • Can't load ONNX model - NVIDIA Developer Forums
    Description I am getting an error in loading my custom face mask detection model ONNX using the SSD Detector code when I run...
  • Error when trying to load .onnx files - Apache TVM Discuss
    Hello I have just installed TVM and was going through the tutorials. I ran tvmc compile --target "llvm" --output resnet50-v2-7-tvm.tar ...
  • Unable to import ONNX model - Python - OpenCV Forum
    I am trying to use an adult content detection ONNX model. This model was originally converted from a tensorflow model by the author....
  • [Solved]-Load onnx model in opencv dnn-C++ - appsloveworld
    Running Keras DNN model (UNet) using OpenCV readNetFromTensorFlow: Error: Unknown layer type Shape in op decoder_stage0_upsampling/Shape · How to load base onnx ...
