Intel has announced the latest version of its Intel Distribution of OpenVINO Toolkit. The new release adds a broader selection of supported deep learning models, more device portability options, and higher inference performance with fewer code changes.
Adam Burns, Vice President of OpenVINO Developer Tools in Intel's Network and Edge Group, stated:
The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network.
With OpenVINO 2022.1, developers can adopt, maintain, optimize, and deploy code efficiently across an expanded range of deep learning models. Highlights of the latest version include an updated, cleaner API; fewer code changes when transitioning from other frameworks; and OpenVINO training extensions plus the Neural Network Compression Framework (NNCF), which provide optional model-training templates that deliver additional performance gains with preserved accuracy for action recognition, image classification, speech recognition, question answering, and translation.
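The streamlined API mentioned above can be sketched roughly as follows. This is an illustrative example of the `openvino.runtime` Python API introduced in 2022.1, not code from the announcement; the function name, model path, and input are placeholders.

```python
def run_inference(model_path, input_tensor, device="CPU"):
    """Load a model and run one synchronous inference with the 2022.1 API.

    model_path and input_tensor are placeholders for your own model
    (e.g. an IR "model.xml") and preprocessed input data.
    """
    # Imported inside the function so the sketch reads without OpenVINO installed.
    from openvino.runtime import Core

    core = Core()
    model = core.read_model(model_path)           # parse the model file
    compiled = core.compile_model(model, device)  # compile for the target device
    # A CompiledModel is callable; results are keyed by output port.
    results = compiled([input_tensor])
    return results[compiled.output(0)]
```

Compared with earlier releases, the 2022.1 API collapses model loading, compilation, and inference into a few calls, which is where the "fewer code changes" claim comes from.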
Broader model support covers natural language processing models and use cases such as text-to-speech and voice recognition, optimization and support for advanced computer vision, and direct support for PaddlePaddle models.
Portability and performance updates include smarter device usage without code modifications, "expert" optimization built directly into the toolkit, and support for high-performance inferencing on CPUs and integrated GPUs with 12th Gen Intel Core processors.
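The "smarter device usage without modifying code" refers to the AUTO device plugin, which discovers available hardware and picks a target itself. A minimal sketch, assuming OpenVINO 2022.1 is installed; the function name and model path are illustrative:

```python
def compile_for_best_device(model_path):
    """Compile a model letting OpenVINO's AUTO plugin choose the hardware.

    With "AUTO" as the device name, the same code runs unchanged whether
    a CPU, integrated GPU, or other accelerator is present.
    """
    # Imported inside the function so the sketch reads without OpenVINO installed.
    from openvino.runtime import Core

    core = Core()
    print("Devices discovered:", core.available_devices)  # e.g. ['CPU', 'GPU']
    model = core.read_model(model_path)
    # No device-specific branching needed; AUTO handles the selection.
    return core.compile_model(model, "AUTO")
```

Swapping a hard-coded device string like "CPU" for "AUTO" is the only change needed to get this behavior in existing code.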