Issue when exporting SwinForImageClassification to ONNX format
System Info
1. Libraries version
transformers == 4.23.1 / torch == 1.12.1 / onnx == 1.12.0
2. Context
I trained an image classifier using SwinForImageClassification with a custom number of labels. I want to put it in production using the ONNX format, so I need to export my model to this format.
Error
I used the python -m transformers.onnx [...] CLI as recommended in your documentation.
Setting aside some warnings during the ONNX creation (listed in section 4 below), the export itself succeeds. However, the test that compares output values at the end (validate_model_outputs) fails:
$ python -m transformers.onnx --model='test_onnx/swin_classif/sources' test_onnx/swin_classif/ --feature='image-classification' --preprocessor=feature_extractor
[...]
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'logits'})
- Validating ONNX Model output "logits":
-[✓] (3, 160) matches (3, 160)
-[x] values not close enough (atol: 0.0001)
[...]
I tried both torch.onnx.export and transformers.onnx.export; the same behaviour occurred with each. However, when I tried with the ViT model (ViTForImageClassification), everything worked fine!
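For context, the failing check essentially compares the PyTorch logits against the ONNX Runtime logits with numpy's allclose at atol=1e-4. A minimal sketch of that comparison (the function name outputs_close is hypothetical; the shapes match the (3, 160) logits reported above):

```python
import numpy as np

def outputs_close(ref_logits, onnx_logits, atol=1e-4):
    # Mimics the shape check and value check that validate_model_outputs performs.
    if ref_logits.shape != onnx_logits.shape:
        return False
    return np.allclose(ref_logits, onnx_logits, atol=atol)

ref = np.zeros((3, 160), dtype=np.float32)
print(outputs_close(ref, ref + 5e-5))  # True: within the 1e-4 tolerance
print(outputs_close(ref, ref + 5e-3))  # False: this is the reported failure mode
```

With the default atol of 1e-4, even a small numerical drift between the traced graph and eager execution is enough to trip the check.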
3. How to reproduce the issue
In the dedicated section.
I don’t know if I’m missing something obvious here. Thank you for your work 🤗 and I hope we can solve this! 🚀
4. Warning during ONNX creation
Here are the different warnings I get when creating the ONNX model (each happens multiple times, for different operations):
- TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
- UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
- WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
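The __floordiv__ warning above refers to the difference between truncating and floor division, which only diverge for negative operands; a quick pure-Python illustration (independent of torch):

```python
# int(a / b) truncates toward zero -- the deprecated behaviour the warning mentions.
# a // b floors toward negative infinity, like torch.div(a, b, rounding_mode='floor').
a, b = -7, 2
print(int(a / b))  # -3 (trunc)
print(a // b)      # -4 (floor)
```

For the positive sizes involved in patch/window arithmetic the two agree, so this particular warning is unlikely to be the cause of the value mismatch.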
Who can help?
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, …)
- My own task or dataset (give details below)
Reproduction
Here is the code to reproduce the issue.
step 1: Save pretrained
from transformers import SwinForImageClassification, AutoFeatureExtractor
pretrained_path = "microsoft/swin-tiny-patch4-window7-224"
swin_classif = SwinForImageClassification.from_pretrained(pretrained_path, ignore_mismatched_sizes=True, num_labels=160)
feat_extract = AutoFeatureExtractor.from_pretrained(pretrained_path)
swin_classif.save_pretrained('test_onnx/swin_classif/sources')
feat_extract.save_pretrained('test_onnx/swin_classif/sources')
step 2: Build ONNX
python -m transformers.onnx --model='test_onnx/swin_classif/sources' test_onnx/swin_classif/ --feature='image-classification' --preprocessor=feature_extractor
Expected behavior
I would expect the ONNX creation for SwinForImageClassification
to work: given the same input, SwinForImageClassification model produces the same output as its exported version in ONNX format.
Issue Analytics
- State:
- Created: a year ago
- Comments: 11 (5 by maintainers)
@BenoitLeguay I can’t reproduce the issue with
python -m transformers.onnx --model microsoft/swin-base-patch4-window12-384-in22k swin-bas-onnx/ --feature image-classification
Is it always failing for you? With:
Could you try with these versions? Or give more details on your setup / a dockerfile to reproduce?
I notice that the ONNX export gives a lot of warnings, so it could be that the exported models cannot handle certain dynamic cases. It is part of https://github.com/huggingface/optimum/issues/503 to provide a better user experience with exported ONNX models and the cases they do and don’t support. You can expect more work to be put into the ONNX export in the Optimum lib (doc here), notably a stronger test suite for the exported models.
Ok, great! Don’t hesitate to share if you can reproduce the issue with your old laptop and
transformers==4.24.0
. Otherwise, it may have been fixed in that version (maybe by https://github.com/huggingface/transformers/pull/19475).