
Issue when exporting SwinForImageClassification to ONNX format

See original GitHub issue

System Info

1. Libraries version

transformers == 4.23.1 / torch == 1.12.1 / onnx == 1.12.0

2. Context

I trained an image classifier using SwinForImageClassification with a custom number of labels. I want to put it in production using the ONNX format, so I need to export my model to this format.

Error

I used the python -m transformers.onnx [...] command as recommended in your documentation. Setting aside some warnings during the ONNX creation (listed below in section 4), the model is created successfully. However, the test that compares values (validate_model_outputs) at the end fails:

$ python -m transformers.onnx --model='test_onnx/swin_classif/sources' test_onnx/swin_classif/ --feature='image-classification' --preprocessor=feature_extractor
[...]
Validating ONNX model...
	-[✓] ONNX model output names match reference model ({'logits'})
	- Validating ONNX Model output "logits":
		-[✓] (3, 160) matches (3, 160)
		-[x] values not close enough (atol: 0.0001)
[...]

I tried using both torch.onnx.export and transformers.onnx.export; the same behaviour happened. BUT when I tried with the ViT model (ViTForImageClassification), everything worked fine!
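
For illustration, the direct torch.onnx.export attempt looked roughly like this; the dummy input shape, opset version, and output path are illustrative rather than the exact values from my run:

# Rough sketch of a direct torch.onnx.export call for the fine-tuned model.
# The dummy input shape, opset version and output path are illustrative only.
import torch
from transformers import SwinForImageClassification

model = SwinForImageClassification.from_pretrained("test_onnx/swin_classif/sources")
model.config.return_dict = False  # export a plain tuple of outputs instead of a ModelOutput
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # matches the 224x224 feature extractor output

torch.onnx.export(
    model,
    dummy_input,
    "test_onnx/swin_classif/model_manual.onnx",
    input_names=["pixel_values"],
    output_names=["logits"],
    dynamic_axes={"pixel_values": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=13,
)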

3. How to reproduce the issue

In the dedicated section.

I don’t know if I’m missing something obvious here. Thank you for your work 🤗 and I hope we can solve this! 🚀

4. Warning during ONNX creation

Here are the different warnings I get when creating the ONNX model (each occurs multiple times for different operations); a small illustration of the __floordiv__ one follows the list:

- TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
- UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
- WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
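
To make the __floordiv__ warning concrete, here is the behaviour change PyTorch warns about (example values only):

# Illustration of the __floordiv__ deprecation mentioned above (example values only):
# the old tensor `//` behaviour truncated towards zero, while true floor division rounds down.
import torch

a = torch.tensor([7, -7])
b = torch.tensor([2, 2])

print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3, -3])  -> old __floordiv__ behaviour
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3, -4])  -> actual floor division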

Who can help?

@NielsRogge, @sgugger

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, …)
  • My own task or dataset (give details below)

Reproduction

Here is the code to reproduce the issue.

Step 1: save the pretrained model and feature extractor

from transformers import SwinForImageClassification, AutoFeatureExtractor
pretrained_path = "microsoft/swin-tiny-patch4-window7-224"

swin_classif = SwinForImageClassification.from_pretrained(pretrained_path, ignore_mismatched_sizes=True, num_labels=160)
feat_extract = AutoFeatureExtractor.from_pretrained(pretrained_path)

swin_classif.save_pretrained('test_onnx/swin_classif/sources')
feat_extract.save_pretrained('test_onnx/swin_classif/sources')

Step 2: build the ONNX model

python -m transformers.onnx --model='test_onnx/swin_classif/sources' test_onnx/swin_classif/ --feature='image-classification' --preprocessor=feature_extractor

Expected behavior

I would expect the ONNX export of SwinForImageClassification to work: given the same input, the SwinForImageClassification model should produce the same output as its exported ONNX version.
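
For illustration, here is a minimal check in the spirit of validate_model_outputs; the input/output names ("pixel_values"/"logits") and the model.onnx file name are assumed to follow the transformers.onnx CLI defaults, and the random batch is just an example:

# Minimal check in the spirit of validate_model_outputs. Input/output names and the
# model.onnx file name are assumed to follow the transformers.onnx CLI defaults.
import numpy as np
import torch
import onnxruntime as ort
from transformers import SwinForImageClassification

model = SwinForImageClassification.from_pretrained("test_onnx/swin_classif/sources")
model.eval()

pixel_values = torch.randn(3, 3, 224, 224)  # batch of 3, as in the validation log above

with torch.no_grad():
    ref_logits = model(pixel_values).logits.numpy()

session = ort.InferenceSession("test_onnx/swin_classif/model.onnx", providers=["CPUExecutionProvider"])
onnx_logits = session.run(["logits"], {"pixel_values": pixel_values.numpy()})[0]

print("max abs diff:", np.abs(ref_logits - onnx_logits).max())
print("close within atol=1e-4:", np.allclose(ref_logits, onnx_logits, atol=1e-4))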

Issue Analytics

  • State: open
  • Created a year ago
  • Comments: 11 (5 by maintainers)

Top GitHub Comments

1 reaction
fxmarty commented, Nov 28, 2022

@BenoitLeguay I can’t reproduce the issue with python -m transformers.onnx --model microsoft/swin-base-patch4-window12-384-in22k swin-bas-onnx/ --feature image-classification. Is it always failing for you?

With:

transformers==4.24.0
torch==1.12.1+cu113
onnx==1.12.0
onnxruntime==1.12.0

Could you try with these versions? Or give more details on your setup / a dockerfile to reproduce?

I notice that the ONNX export gives a lot of warnings, so it could be that the exported models cannot handle certain dynamic cases. It is part of https://github.com/huggingface/optimum/issues/503 to have a better user experience with exported ONNX models and the cases they support / don’t support. You can expect more work being put on the ONNX export in the Optimum lib (doc here), notably a stronger test suite for the exported models.
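
As a quick sanity check of which dynamic cases the exported graph can handle, one can inspect the input shapes it actually declares (the file path below is illustrative):

# Print the declared input shapes of the exported graph; symbolic names (e.g. "batch")
# indicate dynamic axes, fixed integers indicate static ones. Path is illustrative.
import onnx

onnx_model = onnx.load("test_onnx/swin_classif/model.onnx")
for inp in onnx_model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)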

0 reactions
fxmarty commented, Nov 28, 2022

Ok great! Don’t hesitate to share if you can reproduce the issue with your old laptop + transformers==4.24.0. Otherwise it could be that it was fixed in this version (maybe https://github.com/huggingface/transformers/pull/19475).

