PyTorch is an open-source machine learning framework used to build and train deep learning models. These models can then be applied to a variety of use cases, most commonly in computer vision and natural language processing.
Microsoft announced full support for PyTorch on Azure last year; some of the tech giant's developers take an active part in the training framework's community, and PyTorch is offered through many of the Redmond giant's AI platform services. Now, a few weeks after the release of PyTorch 1.2, Microsoft has highlighted some ways in which it can be utilized on Azure, while also reaffirming its continued support for the Torch-based library.
Although primarily written in Python, PyTorch also has a C++ frontend. The new version of the framework has been integrated with several of Microsoft's cloud platform services, including Azure Machine Learning, Azure Notebooks, and the Data Science Virtual Machine. You can make use of some of the latest features in the following ways for each of these services:
- Azure Machine Learning service – Azure Machine Learning streamlines the building, training, and deployment of machine learning models. Azure Machine Learning's Python SDK has a dedicated PyTorch estimator that makes it easy to run PyTorch training scripts on any compute target you choose, whether it's your local machine, a single virtual machine (VM) in Azure, or a GPU cluster in Azure (a minimal sketch of this workflow follows the list). Learn how to train PyTorch deep learning models at scale with Azure Machine Learning.
- Azure Notebooks – Azure Notebooks provides a free, cloud-hosted Jupyter notebook server with PyTorch 1.2 pre-installed. To learn more, check out the PyTorch tutorials and examples.
- Data Science Virtual Machine – Data Science Virtual Machines are pre-configured with popular data science and deep learning tools, including PyTorch 1.2. You can choose from a variety of machine types to host your Data Science Virtual Machine, including those with GPUs. To learn more, refer to the Data Science Virtual Machine documentation.
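As a rough illustration of the Azure Machine Learning workflow mentioned above, here is a minimal sketch of submitting a training script through the SDK's dedicated PyTorch estimator. The workspace configuration, the `gpu-cluster` compute target name, and the `train.py` script are all hypothetical placeholders:

```python
# Minimal sketch: submit a PyTorch training script with the Azure ML Python SDK.
# Assumes a config.json for the workspace and an existing compute target.
from azureml.core import Workspace, Experiment
from azureml.train.dnn import PyTorch

# Load the workspace from a local config.json (hypothetical setup).
ws = Workspace.from_config()

# The dedicated PyTorch estimator handles environment setup for you.
estimator = PyTorch(
    source_directory='./training-scripts',          # hypothetical folder holding train.py
    entry_script='train.py',                        # hypothetical training script
    compute_target=ws.compute_targets['gpu-cluster'],  # hypothetical GPU cluster
    use_gpu=True,
    framework_version='1.2',                        # request PyTorch 1.2
)

# Submit the run; Azure ML provisions the environment and streams logs back.
run = Experiment(ws, 'pytorch-demo').submit(estimator)
run.wait_for_completion(show_output=True)
```

Swapping `compute_target` between a local machine, a single VM, or a cluster is the point of the estimator: the training script itself stays unchanged.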
Moving on, Microsoft has also detailed the progress it has made in making the process of taking PyTorch models from training to production more efficient. Open Neural Network Exchange (ONNX) is the recommended format for exporting models. Notably, ONNX models can be inferenced using ONNX Runtime, which is written in C++ and is supported on Windows, Linux, and Mac. As the inference engine is quite small, it is well suited to deploying production-scale machine learning workloads across a range of target devices. Moreover, NVIDIA and Intel have also integrated their accelerators with ONNX Runtime, making it more efficient on CPUs, GPUs, and VPUs.
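To give a sense of how lightweight the inference side is, here is a minimal sketch of running an exported model with ONNX Runtime's Python API; the model file name and the image-shaped input are assumptions for the example:

```python
# Minimal sketch: score an exported ONNX model with ONNX Runtime.
import numpy as np
import onnxruntime as ort

# Load the exported model (file name is hypothetical).
session = ort.InferenceSession('model.onnx')

# ONNX Runtime exposes the graph's input names; feed NumPy arrays keyed by name.
input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)  # assumed image-shaped input

outputs = session.run(None, {input_name: dummy})  # None = return all outputs
print(outputs[0].shape)
```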
To that end, Microsoft has contributed several enhanced ONNX export features for PyTorch 1.2 (a sketch showing some of them in use follows the list), which include:
- Support for a wider range of PyTorch models, including object detection and segmentation models such as Mask R-CNN, Faster R-CNN, and SSD
- Support for models that work on variable length inputs
- Export of models that can run on various versions of ONNX inference engines
- Optimization of models with constant folding
- End-to-end tutorial showing export of a PyTorch model to ONNX and running inference in ONNX Runtime
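To illustrate how those export options fit together, here is a minimal sketch using torch.onnx.export on a stock torchvision model; the output file name, input shape, and opset choice are assumptions for the example:

```python
# Minimal sketch: export a PyTorch model to ONNX with the options above.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input used to trace the graph

torch.onnx.export(
    model,
    dummy_input,
    'resnet18.onnx',                 # hypothetical output file
    input_names=['input'],
    output_names=['output'],
    # Variable-length inputs: mark the batch dimension as dynamic.
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
    opset_version=10,                # target a specific ONNX operator-set version
    do_constant_folding=True,        # pre-compute constant subgraphs at export time
)
```

The resulting resnet18.onnx file is exactly what the ONNX Runtime snippet earlier in this article would load for inference.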
You can learn more about deploying PyTorch 1.2 models to the cloud, to Windows apps, and to Linux IoT ARM devices by clicking the relevant tutorial links. You can also try out the latest version of the deep learning framework on Azure through a free trial.