
OpenVINO: Converting inference model

See original GitHub issue

Hi guys,

By chance I fixed the following issue. Let me report it to you so you can save a lot of time!

These days I tried to run your model on an Intel GPU-based SoC, so I needed to convert the model via OpenVINO to an Intel-supported inference format.

While converting I got the following error:

[ERROR] Graph contains a cycle. Can not proceed

For more information, this issue seems to be similar: https://software.intel.com/en-us/forums/computer-vision/topic/781822

By chance I found this solution: set the following argument to use the Faster RCNN custom operations config:

python3 mo_tf.py --input_model <MODEL_PATH> --tensorflow_use_custom_operations_config <OPENVINO_DIR>/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json

Then OpenVINO converts your model instantly!

Greetings, Timo
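For context on the error above: Model Optimizer has to order the imported graph's operations, presumably via something like a topological sort, and the post-processing loop in Faster RCNN models forms a cycle that such a sort cannot consume; the faster_rcnn_support.json config replaces that subgraph with a custom operation so the remaining graph is acyclic. An illustrative sketch of cycle detection via topological sort (this is not Model Optimizer's actual code):

```python
from collections import defaultdict, deque

def has_cycle(edges):
    """Kahn's algorithm: a graph has a cycle iff a topological
    sort cannot visit every node."""
    indegree = defaultdict(int)
    adj = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        adj[src].append(dst)
        indegree[dst] += 1
        nodes.update((src, dst))
    # Start from nodes with no incoming edges.
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for nxt in adj[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return visited != len(nodes)

# An acyclic feed-forward graph sorts cleanly...
print(has_cycle([("input", "conv"), ("conv", "output")]))    # False
# ...but a loop, like a detection-output subgraph, does not.
print(has_cycle([("proposal", "roi"), ("roi", "proposal")]))  # True
```

Cutting the looping subgraph out, as the custom operations config does, is what makes the rest of the model convertible.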

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 1
  • Comments: 11 (3 by maintainers)

Top GitHub Comments

1 reaction
TimoK93 commented, Mar 18, 2019

I stopped working on it. Using OpenVINO with custom layers is very difficult… It is optimized for the Google object detection API. The Google models are really easy to implement!

0 reactions
Maxfashko commented, Mar 27, 2019

@TimoK93 you have old information; that was a long time ago. Recent attempts to make the Retina model work are described in this post: https://software.intel.com/en-us/forums/computer-vision/topic/806219


Top Results From Across the Web

Converting a TensorFlow Model - OpenVINO™ Documentation
This page provides general instructions on how to convert a model from a TensorFlow format to the OpenVINO IR format using Model Optimizer....

Convert a TensorFlow Model to OpenVINO
Use Model Optimizer to convert a TensorFlow model to OpenVINO IR with FP16 precision. The models are saved to the current directory. Add...

Model Optimizer Usage - OpenVINO™ Documentation
Model Optimizer provides two parameters to override original input shapes for model conversion: --input and --input_shape. For more information about these ...

Setting Input Shapes - OpenVINO™ Documentation
Model Optimizer supports conversion of models with dynamic input shapes that contain undefined dimensions. However, if the shape of data is not going...

Converting a Model to Intermediate Representation (IR)
In the first case, the Model Optimizer generates the IR with required pre-processing layers and Inference Engine samples may be used to infer...
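On the FP16 conversion mentioned in the results above: half precision stores each weight in 2 bytes instead of 4, at the cost of rounding error. A quick standard-library illustration of that trade-off (nothing here touches OpenVINO itself):

```python
import struct

value = 0.1
# Round-trip through IEEE 754 half precision (struct format 'e')
# and single precision (format 'f').
half = struct.unpack('e', struct.pack('e', value))[0]
single = struct.unpack('f', struct.pack('f', value))[0]

# Half precision uses 2 bytes per value, single precision 4.
print(struct.calcsize('e'), struct.calcsize('f'))  # 2 4
# The FP16 round-trip loses more precision than the FP32 one.
print(abs(half - value) > abs(single - value))     # True
```

For inference workloads the halved weight size usually matters more than the lost precision, which is why FP16 IR is a common choice on GPU targets.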
