OpenVINO Validation Steps:
Please install OpenVINO on the Lookout Canyon Ubuntu image and execute both the C++ and Python benchmarking applications on that image as follows. Please see the attached work logs for more details.
• For the Python benchmarking application, install the OpenVINO runtime and development packages from https://pypi.org/project/openvino/2022.1.0.dev20220131/ and https://pypi.org/project/openvino-dev/2022.1.0.dev20220131/.
Below are the steps taken to run the benchmark app:
1. Create a virtual environment to avoid dependency conflicts. You can skip this step only if you want to install all dependencies globally.
-python3 -m venv openvino_env
-source openvino_env/bin/activate
2. Upgrade pip to the latest version
-python3 -m pip install --upgrade pip
3. Install the OpenVINO runtime package
-pip3 install openvino==2022.1.0.dev20220131
-run “python -c "from openvino.runtime import Core"” to verify that the runtime package is properly installed; if it is, the command completes without any error messages (a slightly fuller check is sketched after these steps).
4. Install the OpenVINO development package
-pip3 install openvino-dev[caffe,kaldi,mxnet,onnx,pytorch,tensorflow2]==2022.1.0.dev20220131
-run “mo -h” to verify that the development package is properly installed; if it is, you will see the help message for Model Optimizer.
5. Use omz_downloader to download the model files from an online source.
-omz_downloader --name alexnet
6. Use omz_converter to convert models that are not in the Inference Engine IR format into that format (the second sketch after these steps shows a quick way to confirm the converted IR loads).
-omz_converter --name alexnet --precisions FP32
7. Run the Python benchmark app (the last sketch after these steps shows roughly what it measures).
-benchmark_app -m ./public/alexnet/FP32/alexnet.xml
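As a slightly fuller check for step 3, the sketch below (assuming the 2022.1 Python API) prints the runtime version and the devices OpenVINO can see on the machine:

from openvino.runtime import Core, get_version

print(get_version())           # runtime build string for the installed package
core = Core()
print(core.available_devices)  # e.g. ['CPU'] on a machine with no GPU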
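To confirm that the conversion in step 6 produced a loadable IR, a minimal sketch along these lines (again assuming the 2022.1 Python API) reads the model back and prints its input shapes; for the Open Model Zoo alexnet the expected input shape is [1,3,227,227]:

from openvino.runtime import Core

core = Core()
# path produced by omz_converter in step 6
model = core.read_model("./public/alexnet/FP32/alexnet.xml")
for inp in model.inputs:
    print(inp.any_name, inp.shape)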
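For a rough idea of what benchmark_app measures in step 7, here is a hand-rolled timing loop in the same spirit. This is a simplified sketch, not the benchmark_app implementation: it runs a single synchronous request on CPU with a random input of AlexNet's shape, whereas benchmark_app defaults to multiple asynchronous requests and will usually report higher throughput.

import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("./public/alexnet/FP32/alexnet.xml")
compiled = core.compile_model(model, "CPU")
request = compiled.create_infer_request()

data = np.random.rand(1, 3, 227, 227).astype(np.float32)
request.infer({0: data})  # warm-up run, excluded from timing

n = 100
start = time.perf_counter()
for _ in range(n):
    request.infer({0: data})
elapsed = time.perf_counter() - start
print(f"avg latency: {elapsed / n * 1000:.2f} ms, throughput: {n / elapsed:.1f} FPS")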
• For the C++ benchmarking application, follow the steps provided:
The C++ benchmarking application can be installed from apt as described at https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_apt.html. You still need the pip installation above to download and convert the models.
cd /opt/intel/openvino_<VERSION>/samples/cpp # this is for 2022.1, use openvino_<VERSION>/inference_engine/samples/cpp for 2021.4
./build_samples.sh
cd ~/inference_engine_samples_build
./benchmark_app -m /path/to/converted/model # should be /tmp/public/alexnet/FP32/alexnet.xml here