TensorRT inference demo
Hello everyone, here is a TensorRT inference demo for NanoDet: https://github.com/linghu8812/tensorrt_inference/tree/master/project/nanodet.
First of all, when I export the ONNX model, I append softmax and concat layers, so the end of the ONNX graph looks like the screenshot in the original post. This increases the model's inference time slightly, but it reduces the postprocessing time; on balance, the total processing time is reduced, so I chose to export the ONNX model this way.
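As a minimal sketch of this idea (not the actual export script from the repo): a wrapper module applies softmax and concat around a stand-in head, so both operations are traced into the exported ONNX graph. The wrapper name, tensor shapes, and the 80-class / 32-value split are all hypothetical.

import torch
import torch.nn as nn

class TinyHead(nn.Module):
    # Stand-in for the detection head: raw class logits plus box regressions.
    def forward(self, x):
        pooled = x.mean(dim=(2, 3))                # global pool, just for the sketch
        return pooled[:, :80], pooled[:, 80:112]   # hypothetical 80 classes, 32 box values

class ExportWrapper(nn.Module):
    # Bakes the postprocessing into the exported graph itself.
    def __init__(self, head):
        super().__init__()
        self.head = head

    def forward(self, x):
        cls_logits, box_preds = self.head(x)
        scores = torch.softmax(cls_logits, dim=-1)     # becomes a Softmax node in ONNX
        return torch.cat([scores, box_preds], dim=-1)  # becomes a Concat node in ONNX

model = ExportWrapper(TinyHead()).eval()
dummy = torch.randn(1, 112, 320, 320)
torch.onnx.export(model, dummy, "nanodet_demo.onnx", opset_version=11)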
In addition, the onnxsim module is used during export, so the exported model is already simplified:
import onnx
from onnxsim import simplify

onnx_model = onnx.load(output_path)       # load the exported ONNX model
model_simp, check = simplify(onnx_model)  # fold constants and remove redundant nodes
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, output_path)        # overwrite with the simplified model
print('finished exporting onnx')
Finally, the original post showed a screenshot of the TensorRT inference result. For more information, please refer to https://github.com/linghu8812/tensorrt_inference.
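The linked repo implements the TensorRT side in C++; as a rough illustration of the engine-build step, here is a sketch using the TensorRT Python API instead (assuming the TensorRT 7/8 interface; the ONNX file name continues the sketch above):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30   # 1 GiB workspace (TensorRT 7/8 attribute)
    return builder.build_engine(network, config)

engine = build_engine("nanodet_demo.onnx")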
Comments
How do I export the NanoDet ONNX model with softmax and concat? I used nanodet_m.ckpt and export-onnx.py from https://github.com/linghu8812/tensorrt_inference, but the exported ONNX model still looks like the original, without the added layers.
Never mind, I figured out that I have to run
python setup.py install
again with that forked repo.
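As a quick check that the re-exported model really contains the extra layers, you can list the final node types with the onnx package (a sketch; the file name is hypothetical):

import onnx

model = onnx.load("nanodet.onnx")
print([node.op_type for node in model.graph.node][-5:])  # expect Softmax and Concat near the end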