|PyTorch Adds TorchScript API|
|Written by Kay Ewbank|
|Friday, 16 August 2019|
PyTorch 1.2 has been released with a new TorchScript API offering fuller coverage of Python. The new release also has expanded ONNX export support and a standard nn.Transformer module.
PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. It aims to offer a replacement for NumPy that makes use of the power of GPUs, while also serving as a deep learning research platform with maximum flexibility and speed.
The developers say that in this release, the open source ML framework takes a major step forward for production use with the addition of an improved TorchScript environment. The TorchScript compiler converts PyTorch models to a statically typed graph representation, providing a way to optimize and execute models in constrained environments where Python is not available. You can incrementally convert your model to TorchScript, mixing compiled code seamlessly with Python. TorchScript programs can be saved from a Python process and loaded in a process where there is no Python dependency.
In this release, TorchScript has significantly expanded support for the subset of Python used in PyTorch models and delivers a new, easier-to-use API for compiling your models to TorchScript.
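A minimal sketch of the workflow described above, using the `torch.jit.script` API to compile a module (the module and its shapes here are illustrative):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        # Data-dependent control flow like this is captured by the
        # TorchScript compiler, not just traced once
        if x.sum() > 0:
            return self.linear(x)
        return -self.linear(x)

# Compile the model to a statically typed TorchScript graph
scripted = torch.jit.script(MyModule())

# Save it from Python and load it in a process with no Python
# dependency (e.g. a C++ application using libtorch)
scripted.save("my_module.pt")
loaded = torch.jit.load("my_module.pt")
print(loaded(torch.ones(1, 4)).shape)  # torch.Size([1, 2])
```

Because scripting works module by module, a model can be converted incrementally, with compiled submodules mixed freely with ordinary Python ones.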
Support for ONNX export has also been expanded. ONNX, the Open Neural Network eXchange format, is an open format for representing deep learning models, designed to make it easier for AI developers to move models between tools. This release of PyTorch adds full support for exporting ONNX Opset versions 7 to 10, and there's an enhancement to the constant folding pass to support Opset 10, the latest available version of ONNX. ScriptModule export has also been improved, including support for multiple outputs, tensor factories, and tuples as inputs and outputs.
A standard nn.Transformer module has been included in this release. This is designed for use with neural networks that transform a sequence of elements (words in a sentence, perhaps) into a different sequence. Such networks are often used for translation, and usually combine an encoder and a decoder with an attention mechanism that looks at the input sequence and decides which parts are the most important. The nn.Transformer module relies entirely on an attention mechanism to draw global dependencies between input and output. It's based on the ideas put forward in a paper entitled "Attention Is All You Need".
The final main improvement to this release is an updated set of Domain API libraries. These provide access to common datasets, models, and transforms that can be used to quickly create a baseline, and this release sees three updated DAPI libraries for text, audio, and vision.
The new version is available on GitHub.