Installing and configuring OpenVINO on Windows, and loading an ONNX model in C++ for inference

1. OpenVINO installation and configuration steps (https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.html#Update-Path)
Note: run the whole installation from a cmd or Anaconda Prompt opened as Administrator, since the commands execute on the C: drive.
Error 1: the installer cannot find Python or the GPU
This warning appears during the OpenVINO installation; it can be ignored and the installation continued (Anaconda Python 3.6 is already installed on this machine). Python is found later, when the "Configure the Model Optimizer" commands are run in cmd; if not, the Optional Steps section at the URL above shows how to add the Python directory to the PATH environment variable. The Optional Steps also cover the GPU: my card is an NVIDIA GeForce GTX 1050 Ti, so the Intel graphics driver is missing and has to be installed separately.
Error 2: running

install_prerequisites_onnx.bat

produces this error output:

python: can't open file 'C:\Program': [Errno 2] No such file or directory
WARNING: Package(s) not found: openvino
[ WARNING ] Could not find the Inference Engine Python API. Installing OpenVINO (TM) toolkit using pip
Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
Collecting openvino==2021.3
  Downloading http://mirrors.aliyun.com/pypi/packages/a6/2b/ce362ba73c65c6a01c1c74c2fc7d8deb08db6e6f6a704879d9c65f6b50ff/openvino-2021.3.0-2774-cp36-cp36m-win_amd64.whl (20.6 MB)
     |████████████████████████████████| 20.6 MB 133 kB/s
Requirement already satisfied: numpy>=1.16.3 in d:\anaconda3\envs\yad2k\lib\site-packages (from openvino==2021.3) (1.19.5)
Installing collected packages: openvino
Successfully installed openvino-2021.3.0
python: can't open file 'C:\Program': [Errno 2] No such file or directory
[ WARNING ] The installed OpenVINO (TM) toolkit version 2021.3 does not work as expected. Uninstalling...
Found existing installation: openvino 2021.3.0
Uninstalling openvino-2021.3.0:
  Successfully uninstalled openvino-2021.3.0
[ WARNING ] Consider building the Inference Engine Python API from sources
*****************************************************************************************
Optional: To speed up model conversion process, install protobuf-*.egg located in the
"model-optimizer\install_prerequisites" folder or building protobuf library from sources.
For more information please refer to Model Optimizer FAQ, question #80.

Fix: edit install_prerequisites.bat. Change line 83 to

set python_command='python "%~dp0..\mo\utils\extract_release_version.py"'

and lines 93, 132 and 150 to

python "%~dp0..\mo\utils\find_ie_version.py"

Put simply: wrap the script path that follows python (%~dp0..\mo\utils\find_ie_version.py) in double quotes, so that a path containing spaces is passed to Python as a single argument.

2. Running the demo after installation (it threw a pile of errors; since the only goal here is to call a trained model from C++ through OpenVINO, this step can be skipped)

demo_squeezenet_download_convert_run

Error 1: the following file cannot be found: C:\Users\Documents\Intel\OpenVINO\openvino_models\models\public\squeezenet1.1/squeezenet1.1.caffemodel
Fix:

pip install urllib3==1.25.8

Pitfall 1 from https://blog.csdn.net/dou3516/article/details/107087386: a squeezenet1.1.prototxt added by hand under C:\Users\Documents\Intel\OpenVINO\openvino_models\models\public\squeezenet1.1 gets overwritten when demo_squeezenet_download_convert_run executes, which causes the following error:

[ ERROR ] Model file C:\Users\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml cannot be opened!
Error

3. Loading a classification model in C++ -- there is no need to convert the ONNX model to IR format; it can be read as ONNX directly
(based mainly on C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\samples\cpp\hello_classification\main.cpp)
Add all the environment variables that "C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat" sets temporarily to the system environment variables by hand, then add the header-file directories to the project.

Notes:
1. The input layout must match the trained model: input_info->setLayout(Layout::NCHW); -- this model was trained with PyTorch, so the layout is NCHW.
2. The number of channels the model expects must match how the image is read in C++ (1 for grayscale, 3 for RGB).
3. catch is very handy for finding out where an error occurred:

	catch (const std::exception & ex) {
		std::cerr << ex.what() << std::endl;
		return EXIT_FAILURE;
	}
#include "pch.h"
#include <iostream>
#include <vector>
#include <memory>
#include <string>
#include <iterator>
#include <samples/common.hpp>

#include <inference_engine.hpp>
#include <samples/ocv_common.hpp>
#include <samples/classification_results.h>
#include <opencv2/opencv.hpp>

using namespace InferenceEngine;

int main()
{
	try {
		std::string input_model = "...\\view_classification.onnx";
		std::string input_image_path = "...\\Out_0083.bmp";
		std::string device_name = "CPU";
		Core ie;
		CNNNetwork network = ie.ReadNetwork(input_model);

		InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
		std::string input_name = network.getInputsInfo().begin()->first;
		input_info->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
		input_info->setLayout(Layout::NCHW);
		input_info->setPrecision(Precision::U8);

		DataPtr output_info = network.getOutputsInfo().begin()->second;
		std::string output_name = network.getOutputsInfo().begin()->first;
		output_info->setPrecision(Precision::FP32);

		std::cout << input_name << std::endl;
		std::cout << output_name << std::endl;

		ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);
		InferRequest infer_request = executable_network.CreateInferRequest();

		cv::Mat image = cv::imread(input_image_path, 0);  // 0 = grayscale, matching the model's 1-channel input

		Blob::Ptr imgBlob = wrapMat2Blob(image);
		infer_request.SetBlob(input_name, imgBlob);
		infer_request.Infer();
		Blob::Ptr output = infer_request.GetBlob(output_name);
		ClassificationResult classificationResult(output, { input_image_path });
		// print the number representing the class
		std::cout << classificationResult.getResults().at(0) << std::endl;
		std::cout << "Hello World!\n";
	}
	catch (const std::exception & ex) {
		std::cerr << ex.what() << std::endl;
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}

Copyright notice: this is an original article by weixin_42388228, licensed under CC 4.0 BY-SA; please include a link to the original source and this notice when reposting.