Exported Mask R-CNN ONNX model cannot run: Op (Slice) [ShapeInferenceError] Input axes has invalid data
🐛 Bug
I exported my Mask R-CNN model with a ResNet-101 backbone using the most recently built torch and torchvision, but the result cannot be run by onnxruntime 1.3.0.
import cv2
import numpy as np
import torch

# img: BGR test image loaded elsewhere; model: Mask R-CNN with a ResNet-101 backbone
img = cv2.resize(img, (1333, 800))
img = np.expand_dims(img, 0)
img = np.transpose(img, (0, 3, 1, 2)).astype(np.float32) / 255
dummy_input1 = torch.from_numpy(img)

model.eval()
input_names = ["input"]
torch.onnx.export(model, dummy_input1,
                  "traced_maskrcnn.onnx",
                  output_names=["boxes", "labels", "scores", "masks"],
                  # do_constant_folding=True,
                  verbose=True,
                  opset_version=11,
                  input_names=input_names)
Here is the export log:
C:\Users\Administrator\Anaconda3\lib\site-packages\torch\tensor.py:460: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
'incorrect results).', category=RuntimeWarning)
D:\Garage\maskvision\transforms.py:315: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device)
D:\Garage\maskvision\transforms.py:316: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
std = torch.as_tensor(self.image_std, dtype=dtype, device=device)
C:\Users\Administrator\Anaconda3\lib\site-packages\torchvision\models\detection\rpn.py:164: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
torch.tensor(image_size[1] // g[1], dtype=torch.int64, device=device)] for g in grid_sizes]
C:\Users\Administrator\Anaconda3\lib\site-packages\torchvision\ops\boxes.py:125: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))
C:\Users\Administrator\Anaconda3\lib\site-packages\torchvision\ops\boxes.py:127: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))
C:\Users\Administrator\Anaconda3\lib\site-packages\torchvision\ops\poolers.py:216: UserWarning: This overload of nonzero is deprecated:
nonzero(Tensor input, *, Tensor out)
Consider using one of the following signatures instead:
nonzero(Tensor input, *, bool as_tuple) (Triggered internally at ..\torch\csrc\utils\python_arg_parser.cpp:761.)
idx_in_level = torch.nonzero(levels == level).squeeze(1)
C:\Users\Administrator\Anaconda3\lib\site-packages\torchvision\models\detection\roi_heads.py:368: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)
C:\Users\Administrator\Anaconda3\lib\site-packages\torch\onnx\symbolic_opset9.py:2225: UserWarning: Exporting aten::index operator of advanced indexing in opset 11 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
"If indices include negative values, the exported graph will produce incorrect results.")
The ONNX model is generated after the above warnings, but onnxruntime throws an error:
Node (Slice_1492) Op (Slice) [ShapeInferenceError] Input axes has invalid data
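To reproduce, here is a minimal sketch of the onnxruntime 1.3.0 call that hits this error, assuming the same preprocessing as the export script above (the test image path is only illustrative):

import cv2
import numpy as np
import onnxruntime as ort

# Same preprocessing as the export script: resize to 1333x800, NCHW, scale to [0, 1].
img = cv2.imread("test.jpg")  # illustrative image path, not from the original report
img = cv2.resize(img, (1333, 800))
img = np.expand_dims(img, 0)
img = np.transpose(img, (0, 3, 1, 2)).astype(np.float32) / 255

# Building the session runs ONNX Runtime's graph checks / shape inference,
# which is typically where the Slice_1492 error surfaces.
sess = ort.InferenceSession("traced_maskrcnn.onnx")
boxes, labels, scores, masks = sess.run(
    ["boxes", "labels", "scores", "masks"], {"input": img})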
@neginraoof It’s working now, thanks
@neginraoof Tried pulling and building the latest C++ version of ONNX Runtime; the Op (Slice) error remains.
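For anyone debugging the same failure, a sketch using only the onnx Python package (the node name Slice_1492 comes from the error message above) that dumps the starts/ends/axes/steps inputs of the offending Slice so they can be checked for duplicate or out-of-range values:

import onnx
from onnx import numpy_helper

model = onnx.load("traced_maskrcnn.onnx")
graph = model.graph

# Map initializer names to numpy arrays so the Slice inputs can be looked up.
initializers = {init.name: numpy_helper.to_array(init) for init in graph.initializer}

for node in graph.node:
    # Slice_1492 is the node named in the onnxruntime error message.
    if node.op_type == "Slice" and node.name == "Slice_1492":
        # Slice (opset 11) inputs: data, starts, ends, axes (optional), steps (optional).
        print("inputs:", list(node.input))
        for name in node.input[1:]:
            if name in initializers:
                print(name, "=", initializers[name])

If the axes tensor turns out to contain repeated or out-of-range values, the problem is in the exported graph rather than in the runtime. Note that some Slice inputs may come from Constant nodes instead of initializers, in which case they will not show up in this dump.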