
Optimizer crashes on ONNX model converted from TensorFlow graph

See original GitHub issue

Hi, I am trying to invoke the ONNX optimizer following the instructions in this post:

https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#optimizing-an-onnx-model

The ONNX model is converted from a TensorFlow .pb file:

    from onnx import optimizer
    from onnx_tf.frontend import tensorflow_graph_to_onnx_model

    onnx_model = tensorflow_graph_to_onnx_model(graph_def, outputs, opset=7,
                                                ignore_unimplemented=True)

    all_passes = optimizer.get_available_passes()
    print("Available optimization passes:")
    for p in all_passes:
        print(p)
    print()

    # Apply the optimization on the original model
    passes = ['fuse_consecutive_transposes']
    optimized_onnx_model = optimizer.optimize(onnx_model, passes)

I am getting a crash in the optimizer:

 File "/home/xxx/PycharmProjects/virtualenv/local/lib/python3.5/site-packages/onnx/optimizer.py", line 52, in optimize
    optimized_model_str = C.optimize(model_str, passes)
IndexError: _Map_base::at

onnx version: 1.3.0

onnx-tf version: 1.1.2

Thanks!
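For readers hitting the same crash: _Map_base::at is a failed std::unordered_map lookup inside the optimizer's C++ core, which usually means the optimizer hit a tensor name it could not resolve. A quick diagnostic sketch, assuming the converted model is first saved to disk (the filename here is hypothetical), is to compare initializer names against the declared graph inputs:

    import onnx

    model = onnx.load("model.onnx")  # hypothetical path
    input_names = {i.name for i in model.graph.input}
    missing = [init.name for init in model.graph.initializer
               if init.name not in input_names]
    print("Initializers not declared as graph inputs:", missing)

A non-empty list here is consistent with the resolution further down in this thread.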

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 5
  • Comments: 15 (2 by maintainers)

Top GitHub Comments

44 reactions
nihui commented, Oct 24, 2019

Hi all, I finally resolved this issue.

You have to add keep_initializers_as_inputs=True when exporting to ONNX:

    torch.onnx._export(model, x, "model.onnx", export_params=True, keep_initializers_as_inputs=True)

This is needed since this commit:

https://github.com/pytorch/pytorch/commit/7583519b870e33ee3182f330c1bb8663559697b6
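For context, here is a minimal end-to-end sketch of the fix, assuming a PyTorch version where the public torch.onnx.export accepts keep_initializers_as_inputs (1.3 or later) and an onnx version that still ships the built-in optimizer (1.8 or earlier); the toy model is purely illustrative:

    import torch
    import torch.nn as nn
    import onnx
    from onnx import optimizer

    # Toy model purely for illustration
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
    x = torch.randn(1, 3, 32, 32)

    # Keep initializers listed as graph inputs so the optimizer can resolve them
    torch.onnx.export(model, x, "model.onnx", export_params=True,
                      keep_initializers_as_inputs=True)

    onnx_model = onnx.load("model.onnx")
    # The pass is a no-op on this toy model, but the call no longer crashes
    optimized = optimizer.optimize(onnx_model, ['fuse_consecutive_transposes'])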

7 reactions
Zrufy commented, Sep 25, 2019

I have the same problem. The error is invalid unordered_map<K, T> key.
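If the model was already exported without that flag, one possible workaround (a sketch based on the diagnosis above, not an official onnx API; onnx.helper.make_tensor_value_info itself is a real helper, the filenames are hypothetical) is to declare the missing initializers as graph inputs before optimizing:

    import onnx
    from onnx import helper

    model = onnx.load("model.onnx")  # hypothetical path
    input_names = {i.name for i in model.graph.input}
    for init in model.graph.initializer:
        if init.name not in input_names:
            # Declare the initializer as a graph input with matching type and shape
            vi = helper.make_tensor_value_info(init.name, init.data_type,
                                               list(init.dims))
            model.graph.input.append(vi)
    onnx.save(model, "model_fixed.onnx")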

Read more comments on GitHub.

Top Results From Across the Web

  • Can't convert onnx model to tflite using TF 2.4.1 - Stack Overflow
    But when it comes to the conversion of that saved model to TFLite an error happens. The code: import onnx import tensorflow as...

  • Intel at the Edge (The Model Optimizer) - Kevin Urban
    Btw, out of curiosity, I tried running this command without the config commands set: the model optimizer crashes. Exercise: Convert a Caffe ...

  • Accelerating Inference in TensorFlow with TensorRT User Guide
    Conversion with TF-TRT is only one extra step to optimize a model for inference on NVIDIA devices. The TF-TRT workflow is simple. The...

  • Release Notes for Intel® Distribution of OpenVINO™ toolkit ...
    This API works properly only for ONNX* models, the set of supported frameworks ... which simplifies conversion of non-frozen model graphs.

  • Archives - Qualcomm Developer Network
    Re-converting these models is strongly recommended as the old layer will be ... TensorFlow Converter: Added support for Identity nodes that act as...
