ONNX Optimizer raises error "Input 0.weight is undefined!"
Bug Report
Describe the bug
When running onnx.optimizer.optimize(model, passes) with passes = ['fuse_bn_into_conv'], the optimizer fails with the error:
Input 0.weight is undefined!
System information
- OS Platform and Distribution: Windows 10
- ONNX version: 1.7.0
- Python version: 3.7.4
Reproduction instructions
import onnx

model = onnx.load("models/conv_dummy.onnx")
onnx.checker.check_model(model)
model = onnx.optimizer.optimize(model, passes=['fuse_bn_into_conv'])
Model file: models.zip
Expected behavior
Expected optimizer to run correctly and return an optimized model.
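For reference, the fuse_bn_into_conv pass folds the BatchNormalization parameters into the preceding Conv's weights and bias. A minimal numpy sketch of that arithmetic (the function name fold_bn_into_conv is hypothetical; this is not the optimizer's actual C++ implementation):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm (gamma, beta, mean, var) into a Conv's
    weights W (shape [out_ch, in_ch, kh, kw]) and bias b ([out_ch]),
    so Conv+BN collapses into a single Conv."""
    scale = gamma / np.sqrt(var + eps)           # per-output-channel
    W_fused = W * scale[:, None, None, None]     # scale each filter
    b_fused = (b - mean) * scale + beta          # shift the bias
    return W_fused, b_fused
```

For any input, conv-then-batchnorm with the original parameters gives the same result as a single conv with the fused parameters.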
Notes
The model was exported from PyTorch with torch.onnx.export.
Issue Analytics
- Created: 3 years ago
- Comments: 5 (2 by maintainers)
After using the workaround discussed in #2903, I was able to resolve this by manually adding all initializers to the model's inputs.
Hopefully it should be merged into master soon, but I suppose it would take a while to merge into the release version (ONNX 1.7.0 has just been released recently). It's a short modification in onnx/shape_inference/implementation.cc, so you need to add the changes to your original implementation.cc file and compile from source. I am still working on it (writing some tests). If there is any bug in this PR, I will inform you here. Sorry for the inconvenience.

The decision for the optimizer is still debatable (keep it in onnx or move it to another repo). Since there are users who depend on it, I would say it will still exist. No worries about it.