[Support] Wrong input size when converting EfficientDet-lite3
I converted EfficientDet-lite3 from https://github.com/google/automl/tree/master/efficientdet with the following commands:
$ python3 model_inspect.py --runmode=saved_model --model_name=efficientdet-lite3 --ckpt_path=efficientdet-lite3 --saved_model_dir=saved_model/modeldir
$ mo --reverse_input_channels --input_model ../efficientdet/efficientdet-lite3/saved_model/efficientdet-lite3_frozen.pb --transformations_config openvino_env/lib/python3.9/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input image_arrays --tensorboard_logdir . --input_shape [1,512,512,3]
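For context, the --input_shape [1,512,512,3] passed to mo is the TensorFlow NHWC convention ([batch, height, width, channels]); Model Optimizer converts TensorFlow models to NCHW by default. A minimal NumPy sketch of the two layouts (the shapes here assume this model's 512x512 RGB input; it is an illustration, not part of the conversion):

```python
import numpy as np

# TensorFlow frozen graphs take NHWC input: [batch, height, width, channels].
nhwc = np.zeros((1, 512, 512, 3), dtype=np.float32)

# OpenVINO's Model Optimizer converts TF models to NCHW by default:
# [batch, channels, height, width].
nchw = np.transpose(nhwc, (0, 3, 1, 2))

print(nchw.shape)  # (1, 3, 512, 512)
```

If the runtime then interprets the dimensions in the wrong order, a 512x512 image can be reported against a "3x512" expectation, as in the warning below.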
I then used the blobconverter webapp to convert the OpenVINO model, trying both the default settings and setting -iml NHWC -il NHWC.
Running on an OAK-D PRO always produces the same error:
[mxid] [20.1] [168.777] [NeuralNetwork(1)] [warning] Input image (512x512) does not match NN (3x512)
It is unclear to me why this behaves differently from the EfficientDet model in the OMZ, on which I based this conversion. What am I doing wrong?
Issue Analytics
- Created: a year ago
- Comments: 8 (3 by maintainers)
I was able to get this working by using the default settings for blobconverter in OpenVINO mode. I guess the NN node in DepthAI expects CHW layout?
blobconverter works with either NCHW or NHWC; it uses OpenVINO's compile_tool in the background, so anything that compiler can compile, blobconverter can compile as well. And you did manage to compile the blob, as you said, but the error was:
[mxid] [20.1] [168.777] [NeuralNetwork(1)] [warning] Input image (512x512) does not match NN (3x512)
This means that while your model expected NHWC, it likely received NCHW images, or vice versa. As Erik said, setting colorCam.setInterleaved(bool) should work, to my knowledge. Have you tried that?
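The interleaved-vs-planar distinction behind setInterleaved can be sketched with NumPy (a 512x512 BGR frame is assumed, matching this model's input; the sketch is illustrative and does not use the DepthAI API itself):

```python
import numpy as np

# Interleaved (HWC) layout, as a camera typically delivers frames:
# pixel values stored as B,G,R, B,G,R, ...
interleaved = np.zeros((512, 512, 3), dtype=np.uint8)

# Planar (CHW) layout, which compiled NN blobs commonly expect:
# all of channel 0 first, then channel 1, then channel 2.
planar = interleaved.transpose(2, 0, 1)

print(planar.shape)  # (3, 512, 512)
```

In a DepthAI pipeline, colorCam.setInterleaved(False) makes the ColorCamera node emit planar (CHW) frames, which is the layout the NeuralNetwork node was complaining about here.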