[help] How to export a Swin model to ONNX? Problem: Node (Concat_246) Op (Concat) [ShapeInferenceError]
I exported my trained model to ONNX with the following code:
torch.onnx.export(model, input_tensor, onnx_name, verbose=True, opset_version=12, input_names=['images'],
output_names=['output'], use_external_data_format=False)
But when running the ONNX model, I got the following error:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (Concat_246) Op (Concat) [ShapeInferenceError] All inputs to Concat must have same rank
It is caused by attn.view(B_ // nW, nW, self.num_heads, N, N)
in https://github.com/microsoft/Swin-Transformer/blob/793f971e735b1e27d5e2c683b7a2b53090d3806d/models/swin_transformer.py#L133
The Concat op is probably part of the graph that torch.view() is traced into. Does anyone know how to solve this problem?
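A plausible mechanism, sketched with NumPy (the sizes and the rank-0/rank-1 split below are illustrative assumptions, not taken from the exporter): when tracing view(), the exporter assembles the target shape at runtime by concatenating shape fragments into a single 1-D tensor with a Concat node, and Concat requires all of its inputs to have the same rank. Mixing a rank-0 scalar (such as a traced B_ // nW) with rank-1 fragments trips exactly that check:

```python
import numpy as np

# Illustrative stand-in for the shape tensor that a Reshape/view op consumes.
scalar_dim = np.array(2)        # 0-d array: rank 0
vector_dims = np.array([4, 3])  # 1-D array: rank 1

# Concatenating inputs of different rank fails, mirroring the
# "All inputs to Concat must have same rank" ShapeInferenceError.
try:
    np.concatenate([scalar_dim, vector_dims])
except ValueError as e:
    print("rank mismatch:", e)

# Promoting every fragment to rank 1 makes the concatenation legal.
shape = np.concatenate([np.atleast_1d(scalar_dim), vector_dims])
print(shape)  # [2 4 3]
```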
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 11
- Comments: 9 (1 by maintainers)
Top GitHub Comments
Changing "attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)" to "attn = attn.view(-1, self.num_heads, N, N) + mask.unsqueeze(1)" solves the problem.
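The suggested change can be checked numerically. A minimal NumPy sketch (the sizes below are made up; note the two forms only broadcast identically when the export batch size is 1, since mask.unsqueeze(1) carries nW in its leading dimension):

```python
import numpy as np

# Hypothetical window-attention sizes: B_ = B * nW flattened windows,
# num_heads attention heads, N tokens per window.
B, nW, num_heads, N = 1, 4, 3, 49
B_ = B * nW

rng = np.random.default_rng(0)
attn = rng.standard_normal((B_, num_heads, N, N))
mask = rng.standard_normal((nW, N, N))

# Original: reshape with B_ // nW (traced into dynamic Shape/Concat ops),
# add the broadcast mask, then flatten back as the Swin code does later.
orig = (attn.reshape(B_ // nW, nW, num_heads, N, N)
        + mask[None, :, None, :, :]).reshape(-1, num_heads, N, N)

# Suggested fix: a static -1 reshape with no shape arithmetic on B_.
fixed = attn.reshape(-1, num_heads, N, N) + mask[:, None, :, :]

assert np.allclose(orig, fixed)
```

The key point is that `-1` lets the reshape infer the leading dimension, so no `B_ // nW` division has to be traced into the exported graph.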
The link https://gist.github.com/devymex/51687edd41eef4ccc56d76a0c66bf92c is not available. @devymex, can you share the code for exporting Video Swin Transformer to ONNX? Thanks.