ONNX shape inference does not infer shapes
Bug Report
Describe the bug
onnx.shape_inference.infer_shapes does not correctly infer the shape of each layer.
System information
- OS Platform and Distribution: Windows 10
- ONNX version: 1.7.0
- Python version: 3.7.4
Reproduction instructions
model = onnx.load("models/conv_dummy.onnx")
onnx.checker.check_model(model)
inferred_model = onnx.shape_inference.infer_shapes(model)
print(inferred_model.graph.value_info)
Output:
[name: "9"
type {
tensor_type {
elem_type: 1
}
}
, name: "10"
type {
tensor_type {
elem_type: 1
}
}
, name: "11"
type {
tensor_type {
elem_type: 1
}
}
, name: "12"
type {
tensor_type {
elem_type: 1
}
}
, name: "13"
type {
tensor_type {
elem_type: 1
}
}
, name: "14"
type {
tensor_type {
elem_type: 1
}
}
]
Model file: models.zip
Expected behavior
Expected each entry in model.graph.value_info
to have a tensor shape field that tells me the shape of that layer.
Notes
The model was exported from PyTorch using torch.onnx.export.
Issue Analytics
- Created: 3 years ago
- Comments: 24 (13 by maintainers)
Good workaround. Adding the line

add_value_info_for_constants(model)

before shape inference makes inference run correctly.

Actually, the utilities here come from protobuf, since the model is a ModelProto. Perhaps you can raise this concern there. Thank you for the suggestion.