
I managed to export some models from the model zoo to ONNX format. However, I'm having difficulty getting them to work with torchreid. In torchtools.py, I replaced torch.load() with checkpoint = onnx.load(fpath). This resulted in the following error:

File "yolov5_deepsort\reid models\deep-person-reid\torchreid\utils\torchtools.py", line 280, in load_pretrained_weights
    if 'state_dict' in checkpoint:
TypeError: argument of type 'ModelProto' is not iterable

Any advice?
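For context, onnx.load() returns an onnx.ModelProto protobuf object, not the dict-like checkpoint that torchreid's load_pretrained_weights() expects, which is why the membership test 'state_dict' in checkpoint raises TypeError. An exported ONNX graph cannot be loaded back through the PyTorch weight loader; it has to be run with an ONNX runtime instead. A minimal sketch, assuming an exported file model.onnx (the filename and the 1x3x256x128 input shape are illustrative):

    import numpy as np
    import onnx
    import onnxruntime as ort

    # onnx.load() gives a ModelProto (a protobuf message), not a dict,
    # so dict-style checks like `'state_dict' in checkpoint` fail on it.
    model = onnx.load("model.onnx")
    onnx.checker.check_model(model)  # sanity-check the exported graph

    # To actually run the model, create an onnxruntime session instead
    # of going through torchreid's load_pretrained_weights().
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 256, 128).astype(np.float32)
    features = session.run(None, {input_name: dummy})[0]
    print(features.shape)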

Issue Analytics

  • State: open
  • Created: a year ago
  • Reactions: 1
  • Comments: 9 (5 by maintainers)

Top GitHub Comments

6 reactions
mikel-brostrom commented, Aug 6, 2022

Good news @KaiyangZhou, @Rm1n90, @HeChengHui!

I have a working multibackend (ONNX, OpenVINO and TFLite) class for the ReID models that I managed to export (mobilenet, resnet50 and osnet models) with my export script. My export pipeline is as follows: PT --> ONNX --> OpenVINO --> TFLite. The osnet models fail in the OpenVINO export; mobilenet and resnet50 models go all the way through. Feel free to experiment with it; it is in working condition, as shown by my CI pipeline. Don't forget to drop a PR if you have any improvements! 😄
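A minimal sketch of the first stage of such a pipeline (PT --> ONNX), assuming torchreid is installed; the model name, output filename, input resolution, and opset below are illustrative, not taken from the actual export script:

    import torch
    import torchreid

    # Build a ReID backbone and pull its pretrained weights.
    model = torchreid.models.build_model(
        name="osnet_x1_0", num_classes=1000, pretrained=True
    )
    model.eval()

    # Export to ONNX with a dummy input at a typical ReID resolution.
    dummy = torch.randn(1, 3, 256, 128)
    torch.onnx.export(
        model,
        dummy,
        "osnet_x1_0.onnx",  # hypothetical output path
        input_names=["images"],
        output_names=["features"],
        opset_version=12,
        dynamic_axes={"images": {0: "batch"}, "features": {0: "batch"}},
    )

From here, the OpenVINO and TFLite stages would convert the ONNX file further, which is where the osnet models reportedly fail.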

1 reaction
mikel-brostrom commented, Aug 11, 2022

Did you time the model in ONNX, OpenVINO and TFLite to see how long the tracking takes compared to the PyTorch version?

Inference time for the different frameworks is highly dependent on which hardware you run them on. The chosen export framework should be specific to the deployment platform; see the timing sketch after the list below.

  • ONNX is an all-around format for all types of CPU
  • OpenVINO should be the way to go for Deep Learning inference on Intel CPUs, Intel integrated GPUs, and Intel Vision Processing Units (VPUs)
  • TFLite is for mobile and IoT devices
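
As a rough way to compare backends on your own hardware, time repeated forward passes. A minimal sketch for the ONNX backend via onnxruntime (the model path and input shape are assumptions; the same loop applies to the other backends with their respective APIs):

    import time
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("osnet_x1_0.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 256, 128).astype(np.float32)

    # Warm-up runs so one-time initialization doesn't skew the numbers.
    for _ in range(10):
        session.run(None, {input_name: dummy})

    # Timed runs.
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        session.run(None, {input_name: dummy})
    elapsed = time.perf_counter() - start
    print(f"mean latency: {1000 * elapsed / n:.2f} ms")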

