Custom YOLOv4 IR inference with C++ sample gives me Segmentation fault (core dumped)
I've trained my own custom YOLOv4 (Darknet) on 1 class, converted it to ONNX successfully, and then to IR. Now I want to run inference on the IR model with the C++ samples. I used the multi_channel_object_detection_demo_yolov3 and object_detection_demo samples, and both give me the following error: Segmentation fault (core dumped)
The run code is:
./multi_channel_object_detection_demo_yolov3 -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg
OR
./object_detection_demo -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg -at yolo -labels /home/aya/Deployment_project/Models/Plate/Plate.txt
I use this Docker image: https://hub.docker.com/r/openvino/ubuntu20_dev. My CPU is: Intel® Xeon® CPU E5-2695 v4 @ 2.10GHz × 6.
Is there anything I'm doing wrong?
Issue Analytics
- Created: 2 years ago
- Comments: 13 (5 by maintainers)
Top GitHub Comments
@eaidova Thank you, I forgot to do that scaling step; this worked perfectly ^^ I will post a detailed solution here and then close the issue.
How did you convert the model to IR? Our OMZ demos assume that the model expects a BGR image in the [0, 255] range. This means some preprocessing options should be included in the MO command line (if I remember correctly, the standard for YOLO models is an RGB image in the [0, 1] range): `--scale 255 --reverse_input_channels`
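A minimal sketch of what such a Model Optimizer invocation could look like, assuming the ONNX file name from the IR path above and a default output directory (both are placeholders, not confirmed by the thread). `--scale 255` bakes the division by 255 into the IR, and `--reverse_input_channels` swaps RGB to BGR so the demos' preprocessing assumptions hold:

```shell
# Hypothetical MO command line; adjust paths to your own model.
mo \
  --input_model Plate_yolov4_1_3_416_416_static.onnx \
  --scale 255 \
  --reverse_input_channels \
  --output_dir /home/aya/Deployment_project/Plate_IR/
```

After re-converting with these flags, the same demo commands above should be run against the newly generated .xml/.bin pair.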