TensorRT inference demo

Hello everyone, here is a TensorRT inference demo for NanoDet: https://github.com/linghu8812/tensorrt_inference/tree/master/project/nanodet.

First of all, when I export the ONNX model I add a softmax and a concat layer, so the end of the ONNX graph looks like this: [screenshot of the exported ONNX graph in the original issue]. Exporting this way increases the model's inference time slightly, but it reduces the postprocessing time; taken together, the total processing time is lower, so I chose this way to export the ONNX model.
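
As a rough illustration (not the repo's actual export script), a PyTorch wrapper along these lines could produce such a graph; the model variable, its two head outputs, and the 320x320 input size are assumptions made for the sketch:

import torch
import torch.nn.functional as F

class ExportWrapper(torch.nn.Module):
    # Hypothetical wrapper: append a Softmax over the class scores and Concat the
    # class and box outputs, so the exported ONNX graph ends with Softmax + Concat.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        cls_scores, bbox_preds = self.model(x)               # assumed head outputs
        cls_scores = F.softmax(cls_scores, dim=-1)           # exported as an ONNX Softmax node
        return torch.cat([cls_scores, bbox_preds], dim=-1)   # exported as an ONNX Concat node

wrapper = ExportWrapper(model).eval()
dummy_input = torch.randn(1, 3, 320, 320)                    # assumed NanoDet-m input size
torch.onnx.export(wrapper, dummy_input, output_path, opset_version=11,
                  input_names=["input"], output_names=["output"])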

In addition, the onnxsim module is imported during export, so the exported model has already been simplified:

import onnx
from onnxsim import simplify

onnx_model = onnx.load(output_path)        # load the exported ONNX model
model_simp, check = simplify(onnx_model)   # fold constants and remove redundant nodes
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, output_path)         # overwrite the export with the simplified graph
print('finished exporting onnx')
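
To double-check the simplified file, one could load it with ONNX Runtime and run a single dummy pass; this is only a sanity-check sketch, and the 320x320 input size is an assumption:

import numpy as np
import onnxruntime as ort

# Hypothetical sanity check: run the simplified model once and print the output shapes;
# with the Softmax + Concat tail described above there should be a single merged output.
session = ort.InferenceSession(output_path)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 320, 320).astype(np.float32)   # assumed input size
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])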

Finally, the TensorRT inference result is shown below: [screenshot of the detection results in the original issue]
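
For readers who want to reproduce the TensorRT side, here is a minimal engine-building sketch with the TensorRT Python API; it is not the code from the linked repo, the file names are placeholders, and some API names (max_workspace_size, build_engine) have been renamed or removed in newer TensorRT releases:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse the exported ONNX model into a TensorRT network definition.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("nanodet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

# Build and serialize the engine (FP32 here; FP16/INT8 would need extra config flags).
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30   # 1 GB workspace
engine = builder.build_engine(network, config)
with open("nanodet.trt", "wb") as f:
    f.write(engine.serialize())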

For more information, please refer to https://github.com/linghu8812/tensorrt_inference.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 10
  • Comments: 5

Top GitHub Comments

4 reactions
yueyihua commented, Apr 17, 2021

How do I export the NanoDet ONNX model with softmax and concat? I used nanodet_m.ckpt and export-onnx.py from https://github.com/linghu8812/tensorrt_inference, but the ONNX model still looks like this: [screenshot of the exported graph without the extra layers]
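
One way to check whether the export actually appended the extra layers is to inspect the tail of the graph with the onnx package; a minimal sketch, assuming the exported file is named nanodet.onnx:

import onnx

# Print the op types of the last few nodes; an export with the extra layers described
# above should end with Softmax and Concat, otherwise they were not added.
model = onnx.load("nanodet.onnx")
print([node.op_type for node in model.graph.node[-5:]])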

0 reactions
ysyyork commented, Dec 7, 2021

Never mind, I figured out I have to run python setup.py install again with that forked repo.

Top Results From Across the Web

jkjung-avt/tensorrt_demos: TensorRT MODNet ... - GitHub
tensorrt_demos. Examples demonstrating how to optimize Caffe/TensorFlow/DarkNet/PyTorch models with TensorRT. Highlights: Run an optimized "MODNet" video ...

NVIDIA Deep Learning TensorRT Documentation
Serves as a demo of how to use a pre-trained Faster-RCNN model in NVIDIA TAO to do inference with TensorRT. Algorithm Selection API ...

Vision TensorRT inference samples - IBM Developer
Samples that illustrate how to use IBM Maximo Visual Inspection with edge devices.

Changelog - GitHub
Moved `RefitMap` API from ONNX parser to core TensorRT. Various bugfixes for plugins, samples and ONNX parser. Port demoBERT to ...

How To Run Inference Using TensorRT C++ API | LearnOpenCV
Learn how to use the TensorRT C++ API to perform faster inference on your deep learning model.
