OpenVINO: Converting inference model
Hi guys,
by chance I fixed the following issue. Let me report it to you so you can save a lot of time!
Recently I tried to run your model on an Intel GPU based SoC, so I needed to convert the model via OpenVINO to an Intel-supported inference format.
While converting I got an error: `[ERROR] Graph contains a cycle. Can not proceed`. For more information, this issue seems to be similar: https://software.intel.com/en-us/forums/computer-vision/topic/781822
By chance i found this solution:
Set the following argument to use the Faster R-CNN custom operations config:
python3 mo_tf.py --input_model <MODEL_PATH> --tensorflow_use_custom_operations_config <OPENVINO_DIR>/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json
Then OpenVINO converts your model instantly!
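If you want to script the conversion, the command above can be sketched as a small helper that assembles the Model Optimizer argument list. This is a minimal sketch; the model path, OpenVINO install directory, and the `build_mo_command` helper name are placeholders of my own, not part of the original report:

```python
import subprocess  # only needed if you uncomment the actual run below


def build_mo_command(model_path, openvino_dir):
    """Build the mo_tf.py invocation that applies the Faster R-CNN
    custom-operations config, as described in the issue above."""
    config = (
        openvino_dir
        + "/deployment_tools/model_optimizer/extensions/front/tf/"
        + "faster_rcnn_support.json"
    )
    return [
        "python3", "mo_tf.py",
        "--input_model", model_path,
        "--tensorflow_use_custom_operations_config", config,
    ]


if __name__ == "__main__":
    # Placeholder paths; adjust to your frozen graph and OpenVINO install.
    cmd = build_mo_command("frozen_inference_graph.pb", "/opt/intel/openvino")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually convert
```

Building the argv as a list (rather than one shell string) avoids quoting problems when the model path contains spaces.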
Greetings, Timo
Issue Analytics
- State:
- Created 5 years ago
- Reactions: 1
- Comments: 11 (3 by maintainers)
Top GitHub Comments
I stopped working on it. Using OpenVINO with custom layers is very difficult… It is optimized for the Google Object Detection API. The Google models are really easy to implement!
@TimoK93 your information is outdated; that was a long time ago. Recent attempts to make the retina model work are described in this post: https://software.intel.com/en-us/forums/computer-vision/topic/806219