
OpenVINO Model Optimizer not found when exporting object detection model

See original GitHub issue

I am running into problems with the export script in the object detection example here: https://github.com/openvinotoolkit/training_extensions/tree/develop/models/object_detection/model_templates/custom-object-detection

Although I am using an OpenVINO dev container and have already run install_prerequisites.sh, the exporter cannot convert the .pth file into .bin and .xml files.

The warning I get when running install_prerequisites.sh is:

[WARNING] All Model Optimizer dependencies are installed globally.
[WARNING] If you want to keep Model Optimizer in separate sandbox
[WARNING] run install_prerequisites.sh "{caffe|tf|tf2|mxnet|kaldi|onnx}" venv

The warning I get when running export.py is:

WARNING: ONNX Optimizer has been moved to https://github.com/onnx/optimizer.
All further enhancements and fixes to optimizers will be done in this new repo.
The optimizer code in onnx/onnx repo will be removed in 1.9 release.

Downloading /root/.torch/models/mobilenetv2_w1-0887-13a021bc.pth.zip from https://github.com/osmr/imgclsmob/releases/download/v0.0.213/mobilenetv2_w1-0887-13a021bc.pth.zip...
ONNX model has been saved to "/openvino_training_extensions/my_model/export/model.onnx"
OpenVINO Model Optimizer not found, please source openvino/bin/setupvars.sh before running this script.

The last line claims that the Model Optimizer cannot be found, even though setupvars.sh is sourced in my build.
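That message is printed by export.py's own environment check rather than by Model Optimizer itself. The sketch below is a hypothetical illustration of that kind of check (not the actual export.py source; mo_available is an illustrative helper): the exporter can only proceed if the mo.py entry point is discoverable, which sourcing setupvars.sh normally arranges by extending the environment.

```shell
#!/bin/sh
# Hypothetical sketch of the discoverability check behind the
# "Model Optimizer not found" message: succeed only if the given
# entry point (e.g. mo.py) is reachable on PATH.
mo_available() {
    command -v "$1" >/dev/null 2>&1
}

if mo_available mo.py; then
    echo "Model Optimizer found"
else
    echo "Model Optimizer not found, please source openvino/bin/setupvars.sh"
fi
```

If this check fails inside a Docker RUN step, it usually means the environment set up by setupvars.sh did not survive into the shell that runs export.py.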

To reproduce my problem, you can use this dockerfile, which assumes that I have another build step above named trained, containing all my training files:

FROM openvino/ubuntu18_dev:2021.3 
USER root
ENV DEBIAN_FRONTEND=noninteractive
COPY --from=train /openvino_training_extensions /openvino_training_extensions
ENV WORK_DIR='/openvino_training_extensions/my_model'
WORKDIR  ${WORK_DIR}
ENV OBJ_DET_DIR=/openvino_training_extensions/models/object_detection
ENV MODEL_TEMPLATE='./model_templates/custom-object-detection/mobilenet_v2-2s_ssd-256x256/template.yaml'
ENV TRAIN_ANN_FILE="${OBJ_DET_DIR}/../../data/airport/annotation_example_train.json"
ENV TRAIN_IMG_ROOT="${OBJ_DET_DIR}/../../data/airport/train"
ENV VAL_ANN_FILE="${OBJ_DET_DIR}/../../data/airport/annotation_example_val.json"
ENV VAL_IMG_ROOT="${OBJ_DET_DIR}/../../data/airport/val"
ENV CLASSES="vehicle,person,non-vehicle"
WORKDIR /openvino_training_extensions/models/object_detection
RUN apt-get update && apt-get install -y sudo 
ARG OTE_DIR='/openvino_training_extensions'
RUN ./init_venv.sh
RUN . venv/bin/activate && pip3 install -e /openvino_training_extensions/ote
RUN cd /openvino_training_extensions/models/object_detection &&\
   source /opt/intel/openvino/bin/setupvars.sh && \
   . venv/bin/activate && \
   /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
   cd /openvino_training_extensions/my_model && \
   python3 export.py \
   --load-weights ${WORK_DIR}/outputs/latest.pth \
   --save-model-to ${WORK_DIR}/export

Environment:

  • OS: Linux Ubuntu 18.04
  • Framework version: PyTorch (as used in the custom-object-detector repo)
  • Python version: 3.6.9 (as used in the container openvino/ubuntu18_dev:2021.3 on dockerhub)
  • OpenVINO version: 2021.3
  • CUDA/cuDNN version: -
  • GPU model and memory: -

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
pigubaoza commented, Apr 8, 2021

The solution that ultimately worked for me was to run /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh venv on the third-to-last line. Note the venv argument at the end. I still used the virtual environment and the chained commands, as I wanted to follow the official steps as closely as possible.
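Concretely, assuming the rest of the Dockerfile from the question stays the same, only the install_prerequisites.sh line of the final RUN step changes:

```dockerfile
RUN cd /openvino_training_extensions/models/object_detection && \
    source /opt/intel/openvino/bin/setupvars.sh && \
    . venv/bin/activate && \
    /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh venv && \
    cd /openvino_training_extensions/my_model && \
    python3 export.py \
        --load-weights ${WORK_DIR}/outputs/latest.pth \
        --save-model-to ${WORK_DIR}/export
```

Passing venv keeps the Model Optimizer dependencies in the sandbox that the install_prerequisites.sh warning above refers to, instead of installing them globally.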

0 reactions
pigubaoza commented, Apr 14, 2021

For those using the Docker Hub container tagged openvino/ubuntu18_dev:2021.3 (I have not tested the others): /opt/intel/openvino and /opt/intel/openvino_2021 are symlinks to /opt/intel/openvino_2021.3.394/.
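One way to verify this from inside the container is `readlink -f` on the version-neutral names. The sketch below recreates the same layout in a throwaway directory so the commands can be tried anywhere; inside the real container you would simply run `readlink -f /opt/intel/openvino`.

```shell
#!/bin/sh
# Recreate the described /opt/intel layout in a temp directory and
# confirm where the version-neutral symlinks resolve.
tmp=$(mktemp -d)
mkdir -p "$tmp/openvino_2021.3.394"
ln -s "$tmp/openvino_2021.3.394" "$tmp/openvino"
ln -s "$tmp/openvino_2021.3.394" "$tmp/openvino_2021"
readlink -f "$tmp/openvino"       # resolves to .../openvino_2021.3.394
readlink -f "$tmp/openvino_2021"  # same target
rm -rf "$tmp"
```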

@ReshitkoM’s solution might work if you installed OpenVINO 2021 directly.


Top Results From Across the Web

Model Optimizer Frequently Asked Questions
A: Model Optimizer tried to infer a specified layer via the Caffe framework. However, it cannot construct a net using the Caffe...

Question about converting Tensorflow Object Detection 2.4 to ...
Hi, I've tried to convert TF2 object detection API model ... ~/openvino/model-optimizer/extensions/front/tf/ssd_support_api_v2.0.json ...

Error in converting custom ssd model using Tensorflow2 ...
Solved: Hi, I am trying to convert a custom SSD MobileNet V2 FPNLite 320x320 from TensorFlow2 model zoo to Openvino Intermediate Representation (IR)...

Exporting TensorFlow 2 model to OpenVino - Stack Overflow
OpenVino Model Optimizer does not support Tensorflow 2.0 yet. But, you can use Tensorflow 1.14 freeze_graph.py to freeze a TF 2.0 model.

Convert TFLite Model Maker Object detection model to ...
I don't think this is possible after exporting the model to Tensorflow Lite but it should work if the model is exported as...
