Machine Learning Becomes Easier and Faster with OpenVINO

Not long ago, machine learning was mainly the subject of complex scientific articles; today, it plays a key role in solving routine tasks across many industries. The most significant progress has been made in computer vision using convolutional neural networks, a class of deep neural networks applied to image analysis that was long considered too resource-intensive and complex for real-world applications. Public machine-learning competition platforms such as Kaggle now offer challenges that would have been hard to imagine only a few years ago, including deepfake detection (recognizing intentional face and voice manipulation in videos), remote diagnostics using medical imaging, and weather prediction based on satellite images of clouds.

Auriga’s engineers follow the latest trends in computer vision to apply innovative ideas and platforms in their solutions. This article focuses on a fairly new but powerful product, the Intel Distribution of OpenVINO Toolkit (hereinafter OpenVINO for short), and discusses how it can help solve machine-learning and computer-vision tasks.

OpenVINO is a free toolkit (framework) for C++ and Python that provides ready-to-use pre-trained convolutional neural network models, an optimizer for third-party models, and an API for inference (running a trained network on new, previously unseen data) on several Intel hardware platforms. Creating and training models is not supported and must be performed outside the framework.

OpenVINO includes several modules and scripts, including:

  • Model Optimizer – a command-line script for converting and optimizing models from popular frameworks into OpenVINO’s internal Intermediate Representation (IR) format.
  • Inference Engine – an API for high-performance inference using a prepared model.
  • Model Zoo and Model Downloader – a large collection of pre-trained models that enable building out-of-the-box solutions.
  • Pre-built OpenCV and OpenVX – ready-made builds of these popular libraries containing everything needed for image and video pre-processing, as well as additional machine-learning and computer-vision algorithms.
  • Post-training Optimization Toolkit – a command-line tool for optimizing trained models, including post-training weight quantization.
  • DL Workbench – a graphical analyzer and optimizer for models.
  • Demo applications – a set of examples for a quick start.
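To give a flavor of the pre-processing the bundled OpenCV build is typically used for: most converted IR models expect a planar NCHW float blob rather than the interleaved HWC image a camera delivers. Below is a minimal numpy-only sketch; the mean/scale values and image size are illustrative assumptions, not fixed by OpenVINO.

```python
import numpy as np

def to_nchw_blob(image_hwc, mean=127.5, scale=127.5):
    """Convert an HxWxC uint8 image into the 1xCxHxW float32 blob
    that most converted IR models expect as input."""
    blob = image_hwc.astype(np.float32)
    blob = (blob - mean) / scale       # normalize (values are illustrative)
    blob = blob.transpose(2, 0, 1)     # HWC -> CHW
    return blob[np.newaxis, ...]       # add batch dimension -> NCHW

# Example: a fake 4x4 RGB frame
frame = np.zeros((4, 4, 3), dtype=np.uint8)
blob = to_nchw_blob(frame)
print(blob.shape)  # (1, 3, 4, 4)
```

In practice the resize, color conversion, and normalization parameters come from the documentation of the specific model being deployed.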

The sequence of steps for solving a typical image classification, detection, or segmentation problem using OpenVINO can proceed as follows:

  1. Select a pre-trained model from the Model Zoo collection, or create and train a new model using a popular framework such as TensorFlow, Caffe, or PyTorch (PyTorch models must first be exported to ONNX format);
  2. Optimize the model using OpenVINO if necessary;
  3. Use the built-in benchmark to select one or more execution platforms and the optimal inference mode for the model;
  4. Create the final product or solution.

Fig.1. Steps to create a working neural network in OpenVINO

Below are some examples of how OpenVINO has helped us at Auriga effectively cope with certain computer vision tasks.

Case Study 1. The task was to detect and classify image sequences on a compact Intel NUC platform. The main processor (CPU) was occupied with rather complicated preliminary processing and had insufficient resources for other tasks. Using OpenVINO, inference was moved from the CPU to the integrated graphics processing unit (GPU), which made it possible to achieve the desired performance.
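Classification pipelines like the one in Case 1 end with a small post-processing step on the raw scores the network returns. A numpy-only sketch of softmax plus top-k selection (the 4-class logits here are synthetic, for illustration):

```python
import numpy as np

def top_k(logits, k=3):
    """Softmax the raw network output and return the k best (index, prob) pairs."""
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = e / e.sum()
    best = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in best]

# Fake logits for a 4-class model
print(top_k(np.array([0.5, 2.0, -1.0, 0.1]), k=2))
```

Each returned index would then be mapped to a human-readable label via the label file shipped with the model.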

Case Study 2. The task was to significantly increase the speed of our existing anomaly-detection solution while maintaining accuracy. Code analysis showed that optimizing the algorithms and the model itself could not provide the required speedup. The model was transferred to OpenVINO, optimized with the built-in tools, and inference was then run with data dispatched in parallel to several computing devices (in this case, the CPU and GPU simultaneously).

Case Study 3. The task was to deliver, within a very short time frame, a prototype of an affordable compact device for counting vehicles in a parking lot. A ready-to-use pre-trained model was selected from the Model Zoo database, and a test rig was then assembled from a Raspberry Pi board with a USB-connected Intel Movidius dongle.


Fig. 2. Result of applying a ready-to-use Model Zoo model for vehicle detection
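The SSD-based detection models in the Model Zoo typically return a [1, 1, N, 7] tensor in which each row holds [image_id, label, confidence, x_min, y_min, x_max, y_max], so counting vehicles as in Case 3 reduces to thresholding the confidence column. A sketch with synthetic data (the boxes and scores are made up):

```python
import numpy as np

def count_detections(detections, conf_threshold=0.5):
    """Count rows of an SSD-style [1, 1, N, 7] output whose
    confidence (column 2) exceeds the threshold."""
    rows = detections.reshape(-1, 7)
    return int((rows[:, 2] > conf_threshold).sum())

# Synthetic output: three candidate boxes, two above the 0.5 threshold
fake = np.array([[[
    [0, 1, 0.92, 0.10, 0.10, 0.30, 0.30],
    [0, 1, 0.71, 0.40, 0.40, 0.60, 0.60],
    [0, 1, 0.12, 0.70, 0.70, 0.90, 0.90],
]]], dtype=np.float32)
print(count_detections(fake))  # 2
```

Raising the threshold trades missed vehicles for fewer false positives; the right value is usually picked empirically on footage from the actual parking lot.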

The cases show that OpenVINO helps solve a variety of computer-vision problems and sometimes spares developers from diving into the details of building and training neural networks, which allows them to accelerate the final product’s creation.

As much as we appreciate the advantages of OpenVINO, our team faced several challenges using it:

  • OpenVINO is optimized for the Intel ecosystem; using other vendors’ platforms and devices may prove challenging. Neural networks are commonly associated with powerful GPUs, an area where Intel still noticeably lags behind, so we look forward to the release of its discrete graphics cards.
  • The Inference Engine restricts the types of layers a model may contain and may not support some of them on certain compute devices; in such cases, a fallback platform can be specified. Fortunately, the number of these restrictions decreases with each new OpenVINO release.
  • Some of the announced computing platforms (such as Intel FPGA) are still in the early stages of their launch and so are not yet available to the public.

OpenVINO is a great tool both for beginners in machine learning and for experienced professionals working on complex product solutions. The framework helps reduce development time and boost efficiency. In addition, it is well structured and documented, which makes working with it easy. Intel is actively developing both OpenVINO and the associated computing platforms, so we are confident that it will remain relevant and helpful in the future.