Load ONNX model
I managed to export some models from the model zoo into ONNX format. However, I have difficulties getting it to work with torchreid.
In `torchtools.py`, instead of `torch.load()`, I added `checkpoint = onnx.load(fpath)`. This resulted in the following error:
File "yolov5_deepsort\reid models\deep-person-reid\torchreid\utils\torchtools.py", line 280, in load_pretrained_weights
if 'state_dict' in checkpoint:
TypeError: argument of type 'ModelProto' is not iterable
Any advice?
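The error happens because `onnx.load()` returns an `onnx.ModelProto` (a protobuf message), not a checkpoint dict, so the `'state_dict' in checkpoint` membership test in `load_pretrained_weights` fails. An exported ONNX model is normally run through onnxruntime rather than loaded back as PyTorch weights; a minimal sketch of that route (the file name and input size are assumptions, adjust them to your own export):

```python
# Minimal sketch: run an exported ReID model with onnxruntime instead of
# feeding it through torchreid's PyTorch checkpoint loader.
# "osnet_x0_25.onnx" is a placeholder for your own exported file.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("osnet_x0_25.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# A dummy batch of person crops; (N, 3, 256, 128) matches torchreid's
# default (height=256, width=128) input size.
crops = np.random.rand(1, 3, 256, 128).astype(np.float32)

# run() returns a list of numpy arrays; the first one holds the embeddings.
features = session.run(None, {input_name: crops})[0]
print(features.shape)
```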
Good news @KaiyangZhou, @Rm1n90, @HeChengHui!

I have a working multibackend (ONNX, OpenVINO and TFLite) class for the ReID models that I managed to export (`mobilenet`, `resnet50` and `osnet` models) with my export script. My export pipeline is as follows: PT --> ONNX --> OpenVINO --> TFLite. `osnet` models fail in the OpenVINO export; `mobilenet` and `resnet50` models go all the way through. Feel free to experiment with it; it is in working condition, as shown by my CI pipeline. Don't forget to drop a PR if you have any improvements! 😄
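For readers who want to try the same idea, here is a hypothetical sketch of such a backend-dispatching wrapper. This is not the actual class from the export script; the class name, the suffix-based dispatch, and the stubbed OpenVINO/TFLite branches are all assumptions, and only the ONNX branch is shown end to end:

```python
# Hypothetical multibackend ReID wrapper in the spirit described above.
from pathlib import Path

import numpy as np


class ReIDBackend:
    def __init__(self, weights: str):
        # Pick the backend from the file suffix of the exported model.
        suffix = Path(weights).suffix
        self.backend = {".onnx": "onnx", ".xml": "openvino", ".tflite": "tflite"}[suffix]
        if self.backend == "onnx":
            import onnxruntime as ort
            self.session = ort.InferenceSession(weights, providers=["CPUExecutionProvider"])
            self.input_name = self.session.get_inputs()[0].name
        elif self.backend == "openvino":
            # openvino.runtime.Core().compile_model(...) would go here.
            raise NotImplementedError
        else:
            # tf.lite.Interpreter(...) would be set up here in the same style.
            raise NotImplementedError

    def __call__(self, crops: np.ndarray) -> np.ndarray:
        # crops: float32 batch shaped (N, 3, H, W); returns the embeddings.
        if self.backend == "onnx":
            return self.session.run(None, {self.input_name: crops})[0]
        raise NotImplementedError
```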
> Did you time the model in ONNX, OpenVINO and TFLite to see how long the tracking will take compared to the PyTorch version?

Inference time for the different frameworks is highly dependent on which hardware you run it on. The chosen export frameworks should be deployment-platform specific.
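To get numbers that mean anything, the timing has to be done on the target hardware itself. A minimal sketch of such a measurement (the `extractor` callable, batch shape, and run count are assumptions; any of the backends above could be passed in):

```python
# Rough wall-clock latency measurement for one backend on the current machine.
import time

import numpy as np


def time_backend(extractor, runs: int = 100) -> float:
    crops = np.random.rand(8, 3, 256, 128).astype(np.float32)
    extractor(crops)  # warm-up run, excluded from the measurement
    start = time.perf_counter()
    for _ in range(runs):
        extractor(crops)
    return (time.perf_counter() - start) / runs  # mean seconds per batch
```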