
UNet export to ONNX issue

See original GitHub issue

Describe the bug

I'm trying to export the UNet model to ONNX using this script.

The .onnx file is created successfully, but validating the export fails:

import onnx

# Load the exported model and run the ONNX validity check
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)

Error:

protobuf_string = model if isinstance(model, bytes) else model.SerializeToString()
ValueError: Message onnx.ModelProto exceeds maximum protobuf size of 2GB: 3439066593

Reproduction

No response

Logs

No response

System Info

  • Python 3.8.13
  • PyTorch 1.13
  • ONNX 1.12.0
  • transformers 4.22.1
  • diffusers 0.3.0

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

6 reactions
anton-l commented, Sep 21, 2022

Hi @tanayvarshney! The exported UNet is indeed >2GB due to the protobuf size limits, but the directory of the exported model contains external weight tensors. So while this fails the onnx check, onnxruntime is able to load the model just fine 😃
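
For illustration, a minimal loading sketch (the unet/model.onnx path below is an assumption about the export directory layout; adjust it to wherever the script wrote the model):

import onnxruntime as ort

# onnxruntime resolves the external weight tensors relative to the .onnx
# file, so loading succeeds even though onnx.checker rejects the >2GB
# in-memory protobuf. "unet/model.onnx" is a placeholder path.
session = ort.InferenceSession("unet/model.onnx", providers=["CPUExecutionProvider"])
print([inp.name for inp in session.get_inputs()])

Note that onnx.checker.check_model also accepts a file path instead of a loaded ModelProto, which is the documented way to check models larger than 2GB.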

P.S. I’m preparing an update for the onnx exporting script that will save the UNet in fp16, which should fit into 2GB.

1 reaction
anton-l commented, Nov 2, 2022

https://github.com/huggingface/diffusers/pull/932 enabled an FP16 conversion, so it should be possible to manually convert and use the checkpoints with size-sensitive onnx tools now. Feel free to ping me here if you have any issues with that workflow! Hardware-independent FP16 checkpoints are still WIP, as they require still-unreleased onnxruntime updates.
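
For reference, here is a hedged sketch of such a manual FP32-to-FP16 conversion using the onnxconverter-common package (one general approach, not necessarily what the diffusers script or PR does; paths are placeholders):

import onnx
from onnxconverter_common import float16

# onnx.load pulls the external weight tensors in alongside the graph
model = onnx.load("unet/model.onnx")

# Cast weights and activations to float16; the converted model should
# fit under the 2GB protobuf limit and pass onnx.checker.check_model
model_fp16 = float16.convert_float_to_float16(model)
onnx.save(model_fp16, "unet/model_fp16.onnx")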


Top Results From Across the Web

Exception when converting Unet from pytorch to onnx
The problem is due to ONNX not having an implementation of the PyTorch 2D Instance Normalization layer. The solution was to copy the...

Best Practices for Neural Network Exports to ONNX
Our experience shows that it is easier to export PyTorch models. If possible, choose a PyTorch source and convert it using the built-in torch.onnx...

(optional) Exporting a Model from PyTorch to ONNX and ...
In this tutorial, we describe how to convert a model defined in PyTorch into the ONNX format and then run it with ONNX...

Convert PyTorch Model to ONNX Model - Documentation
The PyTorch 'compiler' will correctly capture any control flow, and correctly export the model to ONNX format. This sounds like a proper solution...

MATLAB exportONNXNetwork - MathWorks
Export the network net as an ONNX format file called squeezenet.onnx. Save the file to the current folder. If the Deep Learning...
