Installing Intel's OpenVINO and Running the Samples (Intel CPU / GPU / MYRIAD)

[Class Notes] | OP | posted 2020-9-15 15:14:41

However capable the hardware a semiconductor vendor designs, it still needs software support, and reinventing the wheel is rarely a good idea. To squeeze the full performance out of their processors, vendors ship their own software frameworks and tools, such as Intel's OpenVINO and Nvidia's TensorRT.

This post focuses on OpenVINO, Intel's deployment toolkit for AI workloads.

With open-source frameworks such as TensorFlow, PyTorch, MXNet, and Caffe2 already available, why recommend OpenVINO as the deployment tool?

OpenVINO is a pipeline toolset that is compatible with models trained in all of these open-source frameworks and covers everything needed to put a model into production. Once you master it, you can quickly deploy a pre-trained model on an Intel CPU.


For AI workloads, OpenVINO provides the Deep Learning Deployment Toolkit (DLDT), which deploys models trained in the various open-source frameworks. Beyond that, the toolkit also bundles the OpenCV image-processing library and the Intel Media SDK for video processing.


Inference usually requires pre-processing and post-processing. Pre-processing includes channel reordering, mean subtraction, normalization, resizing, and so on; post-processing takes the raw inference output and, for example, draws the detected boxes back onto the original image. All of this can be done with the APIs in the OpenVINO toolkit.
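To make this concrete, here is a minimal pre-processing sketch in Python, assuming OpenCV and NumPy are installed. The function name is hypothetical; the input shape and mean values are taken from the squeezenet1.1 conversion log later in this post:

import cv2
import numpy as np

def preprocess(image_path, n=1, c=3, h=227, w=227, mean=(104.0, 117.0, 123.0)):
    # Read a BGR image and resize it to the network's input resolution
    img = cv2.imread(image_path)
    img = cv2.resize(img, (w, h)).astype(np.float32)
    # Subtract the per-channel mean (mean removal as done by the Model Optimizer args below)
    img -= np.array(mean, dtype=np.float32)
    # HWC -> CHW, then add the batch dimension to get NCHW
    return img.transpose((2, 0, 1)).reshape((n, c, h, w))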

The DLDT has two parts: the Model Optimizer and the Inference Engine.


The Model Optimizer is a Python script that converts a model trained in an open-source framework into an intermediate representation (IR) that the Inference Engine can read. The IR is simply two files: an .xml file describing the network topology and a .bin file holding the weights. The Model Optimizer also compresses and accelerates the model, for example by removing operations that are useless at inference time (Dropout), fusing layers (Conv + BN + ReLU), and optimizing memory.

With the Model Optimizer you import a trained model from a mainstream framework (such as Caffe or TensorFlow); it automatically prunes, quantizes, and compresses the layers so the model executes efficiently, for example on an FPGA.


Learn how FPGAs are used to accelerate deep neural networks: https://www.altera.com/solutions ... gence/overview.html




The Inference Engine is a set of APIs with C/C++ and Python bindings; developers implement the inference pipeline themselves, and the workflow is actually very simple.

It is a very capable tool dedicated to inference, and Intel keeps developing and optimizing support for new network architectures; having someone actively maintain and develop it matters a lot.



Installation guide: https://docs.openvinotoolkit.org ... openvino_linux.html


pip config set global.index-url https://pypi.douban.com/simple/   # optional: point pip at the Douban mirror to speed up downloads in China


Run install_prerequisites.sh venv {caffe|tf|mxnet|kaldi|onnx} (the script lives under deployment_tools/model_optimizer/install_prerequisites) to install the Model Optimizer dependencies for your framework.


OpenVINO™ Toolkit Intel's Pre-Trained Models:
https://docs.openvinotoolkit.org ... ls_intel_index.html



Tail of the installation log (the installer adds the current user to the video group, which is required for OpenCL/GPU access):

Adding jiang to the video group...

Installation completed successfully.

Next steps:
Add OpenCL users to the video group: 'sudo usermod -a -G video USERNAME'
   e.g. if the user running OpenCL host applications is foo, run: sudo usermod -a -G video foo
   Current user has been already added to the video group

If you use 8th Generation Intel® Core™ processor, you will need to add:
   i915.alpha_support=1
   to the 4.14 kernel command line, in order to enable OpenCL functionality for this platform.


Note: the MYRIAD compute stick (Neural Compute Stick) supports only FP16, so generate the IR with --precisions FP16.
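A quick way to check which devices the Inference Engine can see is the available_devices query in the Python API (a small sketch, assuming the 2020.4 Python bindings are on the path, i.e. setupvars.sh has been sourced):

from openvino.inference_engine import IECore

ie = IECore()
# Lists the device plugins the Inference Engine can use on this machine,
# e.g. ['CPU', 'GPU', 'MYRIAD'] when a Neural Compute Stick is plugged in.
print(ie.available_devices)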



Reply #2 | OP | posted 2020-9-15 15:21:01
###################################################

Convert a model with Model Optimizer

Run python3 /home/jiang/intel/openvino_2020.4.287/deployment_tools/open_model_zoo/tools/downloader/converter.py --mo /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo.py --name squeezenet1.1 -d /home/jiang/openvino_models/models -o /home/jiang/openvino_models/ir --precisions FP16

========== Converting squeezenet1.1 to IR (FP16)
Conversion command: /usr/bin/python3 -- /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo.py --framework=caffe --data_type=FP16 --output_dir=/home/jiang/openvino_models/ir/public/squeezenet1.1/FP16 --model_name=squeezenet1.1 '--input_shape=[1,3,227,227]' --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --input_proto=/home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:         /home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel
        - Path for generated IR:         /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16
        - IR output name:         squeezenet1.1
        - Log level:         ERROR
        - Batch:         Not specified, inherited from the model
        - Input layers:         data
        - Output layers:         prob
        - Input shapes:         [1,3,227,227]
        - Mean values:         data[104.0,117.0,123.0]
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:         FP16
        - Enable fusing:         True
        - Enable grouped convolutions fusing:         True
        - Move mean values to preprocess section:         False
        - Reverse input channels:         False
Caffe specific parameters:
        - Path to Python Caffe* parser generated from caffe.proto:         /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo/front/caffe/proto
        - Enable resnet optimization:         True
        - Path to the Input prototxt:         /home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt
        - Path to CustomLayersMapping.xml:         Default
        - Path to a mean file:         Not specified
        - Offsets for a mean file:         Not specified
Model Optimizer version:
[ WARNING ]
Detected not satisfied dependencies:
        protobuf: installed: 3.0.0, required: >= 3.6.1

Please install required versions of components or use install_prerequisites script
/home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
Note that install_prerequisites scripts may install additional components.

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml
[ SUCCESS ] BIN file: /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.bin
[ SUCCESS ] Total execution time: 3.27 seconds.
[ SUCCESS ] Memory consumed: 85 MB.


###################################################

Build Inference Engine samples

-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for strtoll
-- Looking for strtoll - found
-- Found InferenceEngine: /home/jiang/intel/openvino_2020.4.287/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.1")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/jiang/inference_engine_samples_build
Scanning dependencies of target format_reader
Scanning dependencies of target gflags_nothreads_static
[  9%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags.cc.o
[ 18%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_completions.cc.o
[ 27%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_reporting.cc.o
[ 36%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/MnistUbyte.cpp.o
[ 45%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/bmp.cpp.o
[ 54%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/format_reader.cpp.o
[ 63%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/opencv_wraper.cpp.o
[ 72%] Linking CXX shared library ../../intel64/Release/lib/libformat_reader.so
[ 72%] Built target format_reader
[ 81%] Linking CXX static library ../../intel64/Release/lib/libgflags_nothreads.a
[ 81%] Built target gflags_nothreads_static
Scanning dependencies of target classification_sample_async
[ 90%] Building CXX object classification_sample_async/CMakeFiles/classification_sample_async.dir/main.cpp.o
[100%] Linking CXX executable ../intel64/Release/classification_sample_async
[100%] Built target classification_sample_async


###################################################

Run Inference Engine classification sample

Run ./classification_sample_async -d CPU -i /home/jiang/intel/openvino/deployment_tools/demo/car.png -m /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml

[ INFO ] InferenceEngine:
        API version ............ 2.1
        Build .................. 2020.4.0-359-21e092122f4-releases/2020/4
        Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/jiang/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Creating Inference Engine
        CPU
        MKLDNNPlugin version ......... 2.1
        Build ........... 2020.4.0-359-21e092122f4-releases/2020/4

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Start inference (10 asynchronous executions)
[ INFO ] Completed 1 async request execution
[ INFO ] Completed 2 async request execution
[ INFO ] Completed 3 async request execution
[ INFO ] Completed 4 async request execution
[ INFO ] Completed 5 async request execution
[ INFO ] Completed 6 async request execution
[ INFO ] Completed 7 async request execution
[ INFO ] Completed 8 async request execution
[ INFO ] Completed 9 async request execution
[ INFO ] Completed 10 async request execution
[ INFO ] Processing output blobs

Top 10 results:

Image /home/jiang/intel/openvino/deployment_tools/demo/car.png

classid probability label
------- ----------- -----
817     0.6853030   sports car, sport car
479     0.1835197   car wheel
511     0.0917197   convertible
436     0.0200694   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751     0.0069604   racer, race car, racing car
656     0.0044177   minivan
717     0.0024739   pickup, pickup truck
581     0.0017788   grille, radiator grille
468     0.0013083   cab, hack, taxi, taxicab
661     0.0007443   Model T

[ INFO ] Execution successful

[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

###################################################

Demo completed successfully.


Run Inference Engine security_barrier_camera demo

Run ./security_barrier_camera_demo -d CPU -d_va CPU -d_lpr CPU -i /home/jiang/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/jiang/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_lpr /home/jiang/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -m_va /home/jiang/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml

[ INFO ] InferenceEngine: 0x7f73d5aef030
[ INFO ] Files were added: 1
[ INFO ]     /home/jiang/intel/openvino/deployment_tools/demo/car_1.bmp
[ INFO ] Loading device CPU
        CPU
        MKLDNNPlugin version ......... 2.1
        Build ........... 2020.4.0-359-21e092122f4-releases/2020/4

[ INFO ] Loading detection model to the CPU plugin
[ INFO ] Loading Vehicle Attribs model to the CPU plugin
[ INFO ] Loading Licence Plate Recognition (LPR) model to the CPU plugin
[ INFO ] Number of InferRequests: 1 (detection), 3 (classification), 3 (recognition)
[ INFO ] 4 streams for CPU
[ INFO ] Display resolution: 1920x1080
[ INFO ] Number of allocated frames: 3
[ INFO ] Resizable input with support of ROI crop and auto resize is disabled
0.2FPS for (3 / 1) frames
Detection InferRequests usage: 0.0%

[ INFO ] Execution successful

###################################################

Demo completed successfully.
Reply #3 | OP | posted 2020-9-15 16:38:53
Note: do not make OpenVINO's environment variables permanent; they will interfere with other conda environments in your terminal. For example, OpenVINO's bundled OpenCV will shadow the opencv-python package installed in conda.

Sourcing them per-terminal is enough: source /home/jiang/intel/openvino/bin/setupvars.sh
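A quick diagnostic for which OpenCV build the current shell picks up (a small sketch; the paths in the comments are what you would typically observe):

import cv2
# After sourcing setupvars.sh this path points into the OpenVINO install;
# in a plain conda environment it points into conda's site-packages.
print(cv2.__file__)
print(cv2.__version__)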
Reply #4 | OP | posted 2020-9-16 09:26:26
Operating Systems

    Ubuntu 18.04.x long-term support (LTS), 64-bit
    CentOS 7.4, 64-bit (for target only)
    Yocto Project v3.0, 64-bit (for target only and requires modifications)
Reply #5 | OP | posted 2020-9-16 09:40:32
Running the sample:
1. Open a new terminal and run: source /home/jiang/intel/openvino/bin/setupvars.sh
   Without these temporary environment variables you will get:
   error while loading shared libraries: libinference_engine_transformations.so: cannot open shared object file
2. Change to the directory with the compiled C++ executables: cd ~/inference_engine_samples_build/intel64/Release
3. Run on your chosen device (CPU/GPU/MYRIAD):
   ./classification_sample_async -i ~/intel/openvino/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d CPU
Reply #6 | OP | posted 2020-9-16 11:34:05
Model Optimizer log for converting the mssd512_voc detector (from a HyperLPR license-plate project) to FP32 IR:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:         /home/jiang/py3_openvino_works/hyperlpr_py3_openvino/model_ori/mssd512_voc.caffemodel
        - Path for generated IR:         /home/jiang/py3_openvino_works/hyperlpr_py3_openvino/model_openvino/detect/FP32
        - IR output name:         mssd512_voc
        - Log level:         ERROR
        - Batch:         Not specified, inherited from the model
        - Input layers:         data
        - Output layers:         detection_out
        - Input shapes:         [1,3,512,512]
        - Mean values:         Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:         FP32
        - Enable fusing:         True
        - Enable grouped convolutions fusing:         True
        - Move mean values to preprocess section:         False
        - Reverse input channels:         False
Caffe specific parameters:
        - Path to Python Caffe* parser generated from caffe.proto:         /home/jiang/intel/openvino/deployment_tools/model_optimizer/mo/front/caffe/proto
        - Enable resnet optimization:         True
        - Path to the Input prototxt:         /home/jiang/py3_openvino_works/hyperlpr_py3_openvino/model_ori/mssd512_voc.prototxt
        - Path to CustomLayersMapping.xml:         Default
        - Path to a mean file:         Not specified
        - Offsets for a mean file:         Not specified
Model Optimizer version:
[ WARNING ]
Detected not satisfied dependencies:
        protobuf: installed: 3.0.0, required: >= 3.6.1

Please install required versions of components or use install_prerequisites script
/home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
Note that install_prerequisites scripts may install additional components.

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/jiang/py3_openvino_works/hyperlpr_py3_openvino/model_openvino/detect/FP32/mssd512_voc.xml
[ SUCCESS ] BIN file: /home/jiang/py3_openvino_works/hyperlpr_py3_openvino/model_openvino/detect/FP32/mssd512_voc.bin
[ SUCCESS ] Total execution time: 5.52 seconds.
[ SUCCESS ] Memory consumed: 83 MB.
Reply #7 | OP | posted 2020-9-17 09:21:59
The Inference Engine is a set of APIs with C/C++ and Python bindings; developers implement the inference pipeline themselves. The workflow is actually very simple, and the core steps are as follows (a Python sketch follows the list):

1. Load the processor plugin
2. Read the network structure and weights
3. Configure the input and output parameters
4. Load the model
5. Create an inference request
6. Prepare the input data
7. Run inference
8. Process the results
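A minimal Python sketch of these eight steps against the 2020.4 Inference Engine API; the model and image file names are placeholders taken from the squeezenet1.1 demo above:

from openvino.inference_engine import IECore
import cv2
import numpy as np

# 1. Load the processor plugin(s); IECore discovers them automatically
ie = IECore()

# 2. Read the network structure (.xml) and weights (.bin)
net = ie.read_network(model="squeezenet1.1.xml", weights="squeezenet1.1.bin")

# 3. Configure input and output parameters
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
n, c, h, w = net.input_info[input_blob].input_data.shape

# 4. Load the model onto the device ("CPU", "GPU" or "MYRIAD")
exec_net = ie.load_network(network=net, device_name="CPU")

# 5./6. Prepare the input data (resize, HWC -> CHW, add batch dim)
img = cv2.resize(cv2.imread("car.png"), (w, h)).transpose((2, 0, 1))

# 7. Run inference; infer() creates a request implicitly and runs it synchronously
res = exec_net.infer(inputs={input_blob: img.reshape((n, c, h, w))})

# 8. Process the results: print the top-5 class ids
probs = res[out_blob].flatten()
print(np.argsort(probs)[-5:][::-1])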