
ONNX Optimizer raises error "Input 0.weight is undefined!"

See original GitHub issue

Bug Report

Describe the bug

When running onnx.optimizer.optimize(model, passes) with passes = ['fuse_bn_into_conv'], the optimizer fails with the error:

Input 0.weight is undefined!

System information

  • OS Platform and Distribution: Windows 10
  • ONNX version: 1.7.0
  • Python version: 3.7.4

Reproduction instructions

  • Code to reproduce the behavior:
import onnx

model = onnx.load("models/conv_dummy.onnx")
onnx.checker.check_model(model)  # passes: the model itself is valid
model = onnx.optimizer.optimize(model, passes=['fuse_bn_into_conv'])  # raises "Input 0.weight is undefined!"

Model file: models.zip

Expected behavior

Expected optimizer to run correctly and return an optimized model.

Notes

The model was exported from PyTorch with torch.onnx.export.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

3 reactions
dtch1997 commented, Jul 22, 2020

After applying the workaround discussed in #2903, I was able to resolve this by manually adding every initializer to the model's inputs:

# Promote each initializer to a graph input by copying its ValueInfoProto.
for init in model.graph.initializer:
  for value_info in model.graph.value_info:
    if init.name == value_info.name:
      model.graph.input.append(value_info)
2 reactions
jcwchen commented, Jan 16, 2021

Hopefully it will be merged into master soon, but I suppose it will take a while to reach a release version (ONNX 1.7.0 was only released recently).

It’s a short modification in onnx/shape_inference/implementation.cc, so you need to apply the changes to your implementation.cc file and compile ONNX from source. I am still working on it (writing some tests). If there is any bug in this PR, I will let you know here. Sorry for the inconvenience.

The decision about the optimizer (keeping it in onnx or moving it to another repo) is still under discussion. Since some users depend on it, I would say it will continue to exist; no need to worry about that.

Read more comments on GitHub >

Top Results From Across the Web

caffe2 inference a onnx model , happend IndexError: Input ...
optimize(model_str, passes) IndexError: Input 475 is undefined! Who can tell the solution? Another: if it is a pytorch model, when converted to onnx ......
TensorRT 8.4.1 Release Notes - NVIDIA Documentation Center
When parsing networks with the ONNX operator Expand on scalar input, TensorRT would error out. This issue has been fixed in this release. The...
ONNX Dialect
The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has...
2. PopART Python API - Graphcore Documents
Create a runtime class for executing an ONNX graph on a set of IPU hardware for ... Raises. popart.OutOfMemoryException – If an out...
Autograd mechanics — PyTorch 1.13 documentation
This will make it error out in the backward if used on tensors that require ... functions will actually raise an error if...
