
Building OpenCV 4.5.5 from source with Inference Engine enabled not working?


I have successfully built OpenCV from source without any errors, but when I use the Inference Engine to read a network, it shows the following error:

OpenCV(4.5.5-dev) D:\opencv\modules\dnn\src\dnn.cpp:3964: error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function ‘cv::dnn::dnn4_v20211220::Net::readFromModelOptimizer’

But when I print cv::getBuildInformation(), it shows the following output:

Inference Engine:  YES (2021040200 / 2021.4.2)
    * libs:     C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/inference_engine/lib/intel64/Release/inference_engine.lib / C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/inference_engine/lib/intel64/Debug/inference_engined.lib
                C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/inference_engine/bin/intel64/Release/inference_engine.dll / C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/inference_engine/bin/intel64/Debug/inference_engined.dll
    * includes: C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/inference_engine/include
nGraph:            YES (0.0.0+e2a469a)
    * libs:     C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/ngraph/lib/ngraph.lib / C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/ngraph/lib/ngraphd.lib
                C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/ngraph/lib/ngraph.dll / C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/ngraph/lib/ngraphd.dll
    * includes: C:/Program Files (x86)/Intel/openvino_2021.4.752/deployment_tools/ngraph/include

So the output shows Inference Engine support was built in, so why is it not working when reading the net?
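One thing worth checking in a situation like this (a hypothesis, not confirmed by the thread): the build information printed at build/configure time may not match the OpenCV binary the application actually loads at runtime, for example when an older opencv DLL appears earlier on PATH. A minimal sketch to verify which build the process is really using is to print cv::getBuildInformation() from the application itself and look for the "Inference Engine: YES" line:

#include <iostream>
#include <opencv2/core.hpp>

int main() {
    // Prints the build configuration of the OpenCV binary loaded by THIS process.
    // If the "Inference Engine" line here says NO (or is missing) while the
    // configure-time output said YES, the process is picking up a different
    // opencv DLL, e.g. from another entry on PATH.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}

If the runtime output differs from the configure-time output, fixing the DLL search order (as done with `set PATH=..\build\bin\Release;%PATH%` in the maintainer's transcript below) is the first thing to try.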

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 8 (1 by maintainers)

Top GitHub Comments

1 reaction
mshabunin commented, Mar 10, 2022

I’ve not been able to reproduce your issue with OpenCV@4.5.5 or 4.x branch and OpenVINO 2021.4.2 (model, release).

So I built OpenCV (the dnn module only):

$ cd build
$ ..\openvino_752\bin\setupvars.bat
	Python 3.8.12
	[setupvars.bat] OpenVINO environment initialized
$ cmake -G"Visual Studio 16 2019" -A x64 -DWITH_INF_ENGINE=ON  -DBUILD_LIST=dnn ..\opencv
	...
	--     Inference Engine:            YES (2021040200 / 2021.4.2)
	--         * libs:                  C:/work/openvino_752/...
	--         * includes:              C:/work/openvino_752/...
	--     nGraph:                      YES (0.0.0+e2a469a)
	--         * libs:                  C:/work/openvino_752/...
	--         * includes:              C:/work/openvino_752/...
	...
$ cmake --build . --config Release

Then built an application:

$ cd ..\build-sample
$ set OpenCV_DIR=..\build
$ cmake -G"Visual Studio 16 2019" -A x64 ..\example
$ cmake --build . --config Release

Then ran it:

$ set PATH=..\build\bin\Release;%PATH%
$ Release\opencv_example.exe
	>>>>>>>>>>>>>> start
	[E:] [BSL] found 0 ioexpander device
	>>>>>>>>>>>>>> read net done successfully

The application is simple:

#include <iostream>
#include <opencv2/dnn.hpp>
using namespace std;

int main() {
    cout << ">>>>>>>>>>>>>> start" << endl;
    cv::dnn::Net net = cv::dnn::Net::readFromModelOptimizer("person-detection-0203.xml", "person-detection-0203.bin");
    cout << ">>>>>>>>>>>>>> read net done successfully" << endl;
}
0 reactions
alalek commented, Mar 15, 2022

Avoid using the CMake GUI: changing the build configuration on the fly is tricky and is not validated.

Consider using the CMake command line only:

  • specify all required flags on the first run (so the input is as reproducible as possible)
  • clear the CMake cache if dependencies are changed or upgraded
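Following that advice, a clean command-line reconfigure might look like the sketch below. This is a hypothetical Windows cmd sequence, not from the thread; the OpenVINO install path is taken from the original question, and starting from an empty build directory is the simplest way to guarantee no stale CMake cache survives:

:: start from an empty build directory (equivalent to clearing the CMake cache)
rmdir /s /q build
mkdir build
cd build
:: initialize the OpenVINO environment BEFORE the first cmake run
call "C:\Program Files (x86)\Intel\openvino_2021.4.752\bin\setupvars.bat"
:: pass all required flags on this first invocation
cmake -G "Visual Studio 16 2019" -A x64 -DWITH_INF_ENGINE=ON -DBUILD_LIST=dnn ..\opencv
cmake --build . --config Release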
