PyTorch Adds TorchScript API
Written by Kay Ewbank   
Friday, 16 August 2019

PyTorch 1.2 has been released with a new TorchScript API offering fuller coverage of Python. The new release also has expanded ONNX export support and a standard nn.Transformer module.

PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. It aims to offer a replacement for NumPy that makes use of the power of GPUs, while serving as a deep learning research platform that provides maximum flexibility and speed.


The developers say that in this release, the open source ML framework takes a major step forward for production use with the addition of an improved TorchScript environment. The TorchScript compiler converts PyTorch models to a statically typed graph representation, providing a way to optimize and execute models in constrained environments where Python is not available. You can incrementally convert your model to TorchScript, mixing compiled code seamlessly with Python. TorchScript programs can be saved from a Python process and loaded in a process where there is no Python dependency.

In this release, TorchScript has significantly expanded support for the subset of Python used in PyTorch models and delivers a new, easier-to-use API for compiling your models to TorchScript.
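
A minimal sketch of how the new API is used (the module and file names here are illustrative, not taken from the release notes): torch.jit.script now accepts an ordinary nn.Module and compiles it, data-dependent control flow included, into a TorchScript program that can be saved and reloaded without a Python dependency:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # Data-dependent control flow is captured by the compiler,
        # not just a single traced execution path
        if x.sum() > 0:
            return self.linear(x)
        return -self.linear(x)

# Compile the module to a statically typed TorchScript graph
scripted = torch.jit.script(MyModule())

# Save from a Python process; the file can later be loaded in a
# process where Python is not available
scripted.save("my_module.pt")
loaded = torch.jit.load("my_module.pt")
print(loaded(torch.randn(2, 4)))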

Support for ONNX export has also been expanded. ONNX, the Open Neural Network eXchange, is an open format for representing deep learning models, designed to make it easier for AI developers to move models between tools. This release of PyTorch adds full support for exporting ONNX Opset versions 7 to 10, and there's an enhancement to the constant folding pass to support Opset 10, the latest available version of ONNX. ScriptModule has also been improved, including support for multiple outputs, tensor factories, and tuples as inputs and outputs.
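
As a rough illustration of targeting a specific opset, torch.onnx.export takes an opset_version argument; the model and file names below are illustrative:

import torch
import torchvision

# Illustrative example: export a pretrained model to ONNX Opset 10
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,                     # model to export
    dummy_input,               # example input used to build the graph
    "resnet18.onnx",           # output file (name is illustrative)
    opset_version=10,          # Opset versions 7 to 10 are now supported
    do_constant_folding=True,  # constant folding pass, extended to Opset 10
)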

A standard nn.Transformer module has been included in this release. This is designed for use with neural networks that transform a sequence of elements (words in a sentence, perhaps) into a different sequence. Such networks are often used for translations, and usually use an encoder and a decoder, along with an attention mechanism that looks at the input sequence and decides which parts are the most important. The nn.Transformer module relies entirely on an attention mechanism to draw global dependencies between input and output. It's based on the ideas put forward in a paper entitled “Attention Is All You Need”.
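
To give a flavour of the module (the hyperparameters here are illustrative, matching the module's defaults rather than anything prescribed by the release), nn.Transformer takes encoder and decoder inputs of shape (sequence length, batch size, embedding size):

import torch
import torch.nn as nn

# Encoder-decoder transformer; these hyperparameters are the defaults
transformer = nn.Transformer(d_model=512, nhead=8,
                             num_encoder_layers=6,
                             num_decoder_layers=6)

# Source and target sequences: (sequence length, batch, d_model)
src = torch.rand(10, 32, 512)
tgt = torch.rand(20, 32, 512)
out = transformer(src, tgt)
print(out.shape)  # torch.Size([20, 32, 512])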

The final main improvement in this release is an updated set of Domain API libraries. These provide access to common datasets, models, and transforms that can be used to quickly create a baseline, and this release sees updates to the three domain libraries for text, audio, and vision: torchtext, torchaudio, and torchvision.
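
As a rough sketch of the baseline workflow the domain libraries support, here is the vision library in use (the dataset and model choices are illustrative):

import torchvision
from torchvision import transforms

# A typical baseline: a standard dataset plus common transforms
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
train_set = torchvision.datasets.MNIST(root="./data", train=True,
                                       download=True, transform=transform)

# Pretrained models are available as one-liners
model = torchvision.models.resnet18(pretrained=True)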

The new version is available on GitHub.

 


 

More Information

PyTorch Website

PyTorch On GitHub

Related Articles

PyTorch Scholarship Challenge

Pyro Now On Watson Machine Learning

More Efficient Style Transfer Algorithm

ONNX For AI Model Interoperability

Microsoft Cognitive Toolkit Version 2.0

NVIDIA Updates Free Deep Learning Software

TensorFlow - Google's Open Source AI And Computation Engine

AI Goes Open Source To The Tune Of $1 Billion

 

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or LinkedIn.



Last Updated ( Friday, 16 August 2019 )