|More Pythonic PyTorch 2 Released|
|Written by Alex Denham|
|Thursday, 30 March 2023|
PyTorch 2.0 has been released with fundamental changes to how it works at the compiler level, faster performance, and support for dynamic shapes and distributed training.
PyTorch is an optimized tensor library for deep learning on GPUs and CPUs that until now has mainly been developed by Meta AI. It aims to be a replacement for NumPy that makes use of the power of GPUs, while providing a deep learning research platform with maximum flexibility and speed.
The new release includes a stable version of Accelerated Transformers (formerly called Better Transformers), and torch.compile, a feature that improves PyTorch performance and begins the move of parts of PyTorch from C++ back into Python. torch.compile is an optional and additive feature that makes use of several new technologies: TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.
TorchDynamo captures PyTorch programs safely using Python Frame Evaluation Hooks; AOTAutograd overloads PyTorch’s autograd engine as a tracing autodiff for generating ahead-of-time backward traces.
TorchDynamo, AOTAutograd, PrimTorch and TorchInductor are written in Python and support dynamic shapes (the ability to send in Tensors of different sizes without inducing a recompilation).
Alongside the new release the team has shipped a series of beta updates to the PyTorch domain libraries, including TorchAudio, TorchVision, and TorchText. An update to TorchX has also been released as it moves to community-supported mode.
PyTorch is available now on the PyTorch Foundation website and on GitHub.