- ###################################################
- Convert a model with Model Optimizer
- Run python3 /home/jiang/intel/openvino_2020.4.287/deployment_tools/open_model_zoo/tools/downloader/converter.py --mo /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo.py --name squeezenet1.1 -d /home/jiang/openvino_models/models -o /home/jiang/openvino_models/ir --precisions FP16
- ========== Converting squeezenet1.1 to IR (FP16)
- Conversion command: /usr/bin/python3 -- /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo.py --framework=caffe --data_type=FP16 --output_dir=/home/jiang/openvino_models/ir/public/squeezenet1.1/FP16 --model_name=squeezenet1.1 '--input_shape=[1,3,227,227]' --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --input_proto=/home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt
- Model Optimizer arguments:
- Common parameters:
- - Path to the Input Model: /home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel
- - Path for generated IR: /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16
- - IR output name: squeezenet1.1
- - Log level: ERROR
- - Batch: Not specified, inherited from the model
- - Input layers: data
- - Output layers: prob
- - Input shapes: [1,3,227,227]
- - Mean values: data[104.0,117.0,123.0]
- - Scale values: Not specified
- - Scale factor: Not specified
- - Precision of IR: FP16
- - Enable fusing: True
- - Enable grouped convolutions fusing: True
- - Move mean values to preprocess section: False
- - Reverse input channels: False
- Caffe specific parameters:
- - Path to Python Caffe* parser generated from caffe.proto: /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo/front/caffe/proto
- - Enable resnet optimization: True
- - Path to the Input prototxt: /home/jiang/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt
- - Path to CustomLayersMapping.xml: Default
- - Path to a mean file: Not specified
- - Offsets for a mean file: Not specified
- Model Optimizer version:
- [ WARNING ]
- Detected not satisfied dependencies:
- protobuf: installed: 3.0.0, required: >= 3.6.1
- Please install required versions of components or use install_prerequisites script
- /home/jiang/intel/openvino_2020.4.287/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
- Note that install_prerequisites scripts may install additional components.
- [ SUCCESS ] Generated IR version 10 model.
- [ SUCCESS ] XML file: /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml
- [ SUCCESS ] BIN file: /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.bin
- [ SUCCESS ] Total execution time: 3.27 seconds.
- [ SUCCESS ] Memory consumed: 85 MB.
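Note that the conversion command above passes `--mean_values=data[104.0,117.0,123.0]`, so the per-channel mean subtraction is baked into the generated IR and the sample does not repeat it at runtime. As a rough, pure-Python sketch of what that preprocessing step does (the channel means come from the log above; the tiny "image" and its pixel values are invented for illustration):

```python
# Sketch of per-channel mean subtraction, as configured by
# --mean_values=data[104.0,117.0,123.0] in the conversion command above.
# Caffe models typically expect BGR channel order; the toy blob below
# is illustrative only.

MEANS = (104.0, 117.0, 123.0)  # per-channel means

def subtract_means(blob, means=MEANS):
    """blob is a [C][H][W] nested list; returns a new blob with
    the channel mean subtracted from every pixel."""
    return [
        [[pixel - mean for pixel in row] for row in channel]
        for channel, mean in zip(blob, means)
    ]

# A tiny 3-channel, 1x2 "image":
toy = [[[104.0, 204.0]], [[117.0, 17.0]], [[123.0, 124.0]]]
out = subtract_means(toy)
print(out)  # [[[0.0, 100.0]], [[0.0, -100.0]], [[0.0, 1.0]]]
```
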
- ###################################################
- Build Inference Engine samples
- -- The C compiler identification is GNU 7.5.0
- -- The CXX compiler identification is GNU 7.5.0
- -- Check for working C compiler: /usr/bin/cc
- -- Check for working C compiler: /usr/bin/cc -- works
- -- Detecting C compiler ABI info
- -- Detecting C compiler ABI info - done
- -- Detecting C compile features
- -- Detecting C compile features - done
- -- Check for working CXX compiler: /usr/bin/c++
- -- Check for working CXX compiler: /usr/bin/c++ -- works
- -- Detecting CXX compiler ABI info
- -- Detecting CXX compiler ABI info - done
- -- Detecting CXX compile features
- -- Detecting CXX compile features - done
- -- Looking for C++ include unistd.h
- -- Looking for C++ include unistd.h - found
- -- Looking for C++ include stdint.h
- -- Looking for C++ include stdint.h - found
- -- Looking for C++ include sys/types.h
- -- Looking for C++ include sys/types.h - found
- -- Looking for C++ include fnmatch.h
- -- Looking for C++ include fnmatch.h - found
- -- Looking for strtoll
- -- Looking for strtoll - found
- -- Found InferenceEngine: /home/jiang/intel/openvino_2020.4.287/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.1")
- -- Configuring done
- -- Generating done
- -- Build files have been written to: /home/jiang/inference_engine_samples_build
- Scanning dependencies of target format_reader
- Scanning dependencies of target gflags_nothreads_static
- [ 9%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags.cc.o
- [ 18%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_completions.cc.o
- [ 27%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_reporting.cc.o
- [ 36%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/MnistUbyte.cpp.o
- [ 45%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/bmp.cpp.o
- [ 54%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/format_reader.cpp.o
- [ 63%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/opencv_wraper.cpp.o
- [ 72%] Linking CXX shared library ../../intel64/Release/lib/libformat_reader.so
- [ 72%] Built target format_reader
- [ 81%] Linking CXX static library ../../intel64/Release/lib/libgflags_nothreads.a
- [ 81%] Built target gflags_nothreads_static
- Scanning dependencies of target classification_sample_async
- [ 90%] Building CXX object classification_sample_async/CMakeFiles/classification_sample_async.dir/main.cpp.o
- [100%] Linking CXX executable ../intel64/Release/classification_sample_async
- [100%] Built target classification_sample_async
- ###################################################
- Run Inference Engine classification sample
- Run ./classification_sample_async -d CPU -i /home/jiang/intel/openvino/deployment_tools/demo/car.png -m /home/jiang/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml
- [ INFO ] InferenceEngine:
- API version ............ 2.1
- Build .................. 2020.4.0-359-21e092122f4-releases/2020/4
- Description ....... API
- [ INFO ] Parsing input parameters
- [ INFO ] Parsing input parameters
- [ INFO ] Files were added: 1
- [ INFO ] /home/jiang/intel/openvino/deployment_tools/demo/car.png
- [ INFO ] Creating Inference Engine
- CPU
- MKLDNNPlugin version ......... 2.1
- Build ........... 2020.4.0-359-21e092122f4-releases/2020/4
- [ INFO ] Loading network files
- [ INFO ] Preparing input blobs
- [ WARNING ] Image is resized from (787, 259) to (227, 227)
- [ INFO ] Batch size is 1
- [ INFO ] Loading model to the device
- [ INFO ] Create infer request
- [ INFO ] Start inference (10 asynchronous executions)
- [ INFO ] Completed 1 async request execution
- [ INFO ] Completed 2 async request execution
- [ INFO ] Completed 3 async request execution
- [ INFO ] Completed 4 async request execution
- [ INFO ] Completed 5 async request execution
- [ INFO ] Completed 6 async request execution
- [ INFO ] Completed 7 async request execution
- [ INFO ] Completed 8 async request execution
- [ INFO ] Completed 9 async request execution
- [ INFO ] Completed 10 async request execution
- [ INFO ] Processing output blobs
- Top 10 results:
- Image /home/jiang/intel/openvino/deployment_tools/demo/car.png
- classid probability label
- ------- ----------- -----
- 817 0.6853030 sports car, sport car
- 479 0.1835197 car wheel
- 511 0.0917197 convertible
- 436 0.0200694 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
- 751 0.0069604 racer, race car, racing car
- 656 0.0044177 minivan
- 717 0.0024739 pickup, pickup truck
- 581 0.0017788 grille, radiator grille
- 468 0.0013083 cab, hack, taxi, taxicab
- 661 0.0007443 Model T
- [ INFO ] Execution successful
- [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
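The "Top 10 results" table above is just the model's output probability vector sorted in descending order. A minimal stdlib sketch of that post-processing step (the four-entry probability vector is made up for illustration; the real sample sorts SqueezeNet's 1000-class output):

```python
# Sketch of the top-N post-processing the classification sample performs:
# pair each class id with its probability, sort descending, keep the best N.
# The probabilities below are invented for illustration.

def top_n(probs, n):
    """Return (class_id, probability) pairs for the n highest scores."""
    return sorted(enumerate(probs), key=lambda p: p[1], reverse=True)[:n]

probs = [0.01, 0.6853, 0.1835, 0.0917]  # toy 4-class output
for classid, prob in top_n(probs, 3):
    print(f"{classid:7d} {prob:.7f}")
# prints:
#       1 0.6853000
#       2 0.1835000
#       3 0.0917000
```
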
- ###################################################
- Demo completed successfully.
- Run Inference Engine security_barrier_camera demo
- Run ./security_barrier_camera_demo -d CPU -d_va CPU -d_lpr CPU -i /home/jiang/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/jiang/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_lpr /home/jiang/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -m_va /home/jiang/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml
- [ INFO ] InferenceEngine: 0x7f73d5aef030
- [ INFO ] Files were added: 1
- [ INFO ] /home/jiang/intel/openvino/deployment_tools/demo/car_1.bmp
- [ INFO ] Loading device CPU
- CPU
- MKLDNNPlugin version ......... 2.1
- Build ........... 2020.4.0-359-21e092122f4-releases/2020/4
- [ INFO ] Loading detection model to the CPU plugin
- [ INFO ] Loading Vehicle Attribs model to the CPU plugin
- [ INFO ] Loading Licence Plate Recognition (LPR) model to the CPU plugin
- [ INFO ] Number of InferRequests: 1 (detection), 3 (classification), 3 (recognition)
- [ INFO ] 4 streams for CPU
- [ INFO ] Display resolution: 1920x1080
- [ INFO ] Number of allocated frames: 3
- [ INFO ] Resizable input with support of ROI crop and auto resize is disabled
- 0.2FPS for (3 / 1) frames
- Detection InferRequests usage: 0.0%
- [ INFO ] Execution successful
- ###################################################
- Demo completed successfully.
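The security barrier demo chains three models, as the log shows: the detection model finds vehicles and plates, each vehicle crop is fed to the attributes model, and each plate crop to the LPR model. This is not the demo's actual C++ code; it is a simplified stdlib sketch of that data flow, with every model output stubbed with fixed placeholder values:

```python
# Simplified sketch of the three-stage pipeline described in the demo log.
# All "model" functions below are stubs returning fixed placeholder values,
# purely to show how detections are routed to the two downstream models.

def detect(frame):
    # stand-in for vehicle-license-plate-detection-barrier-0106
    return [{"kind": "vehicle", "box": (50, 40, 300, 200)},
            {"kind": "plate", "box": (120, 180, 200, 210)}]

def vehicle_attributes(crop_box):
    # stand-in for vehicle-attributes-recognition-barrier-0039
    return {"type": "car", "color": "white"}

def recognize_plate(crop_box):
    # stand-in for license-plate-recognition-barrier-0001
    return "ABC1234"

def process(frame):
    results = []
    for det in detect(frame):
        if det["kind"] == "vehicle":
            results.append(("vehicle", vehicle_attributes(det["box"])))
        else:
            results.append(("plate", recognize_plate(det["box"])))
    return results

print(process("car_1.bmp"))
```

In the real demo these stages run on separate `InferRequest` pools (1 detection, 3 classification, 3 recognition, per the log above) so the three models can overlap their execution.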