PyTorch 1.10 Focuses on Improved Training and Performance
Written by Kay Ewbank
Thursday, 02 December 2021
PyTorch, Facebook's open-source deep-learning framework, has been updated with an integration of the CUDA Graphs API. The new version also has better performance thanks to JIT compiler updates, as well as beta support for the Android Neural Networks API (NNAPI). New releases of the TorchVision and TorchAudio libraries have been shipped alongside.
The support for Android NNAPI means Android apps can run their models on hardware accelerators such as GPUs and neural processing units. It is now included as a beta feature, with improvements including the ability to run tests on mobile hosts and support for flexible tensor shapes; a conversion sketch appears at the end of this article.

The CUDA Graphs API integration is another beta inclusion, designed to improve runtime performance where GPU workloads are bottlenecked by CPU launch overhead. A sequence of GPU operations is captured once into a graph, and that graph is then replayed at runtime with a single launch instead of many individual ones. This trades away some flexibility, since the captured work is fixed, in exchange for a worthwhile gain in performance; a capture-and-replay sketch appears below.

A number of features in the new release have moved from beta to stable, including the module for remote communication (RemoteModule), the memory-saving training optimizer (ZeroRedundancyOptimizer), and the hooks for customizing DistributedDataParallel communication.

The FX module is also now marked as stable. It consists of a symbolic tracer, an intermediate representation, and a Python code generator, and it gives developers the tools to create their own custom transformations. You can take a module and convert it to a graph representation that can then be modified in code, and the resulting graph can be converted back to PyTorch-compatible Python source code.
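To make that FX workflow concrete, here is a minimal sketch; the toy module and the relu-to-gelu rewrite are invented purely for illustration:

```python
import torch
import torch.fx

class ToyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# Symbolically trace the module into FX's graph-based intermediate representation.
gm = torch.fx.symbolic_trace(ToyModule())
print(gm.graph)  # inspect the IR

# A toy transformation: swap every call to torch.relu for gelu.
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.nn.functional.gelu

# Regenerate PyTorch-compatible Python source from the modified graph.
gm.recompile()
print(gm.code)             # the generated source
print(gm(torch.randn(4)))  # the transformed module still runs
```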
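The capture-and-replay pattern behind the CUDA Graphs integration looks roughly like this; a minimal sketch, assuming a simple linear model and fixed-shape static buffers (both invented for the example):

```python
import torch

model = torch.nn.Linear(128, 10).cuda()
static_input = torch.randn(64, 128, device="cuda")  # capture needs fixed buffers

# Warm up on a side stream before capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass into a graph...
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

# ...then replay it: copy fresh data into the static input buffer and
# relaunch the whole captured sequence with a single CPU-side call.
static_input.copy_(torch.randn(64, 128, device="cuda"))
g.replay()
print(static_output.sum())  # results land in the captured output buffer
```

Because the graph is replayed verbatim, input shapes and control flow must stay fixed between replays, which is the flexibility trade-off mentioned above.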
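For the NNAPI path, the conversion workflow is roughly as follows. This sketch is modelled on the 1.10-era NNAPI tutorial; the MobileNetV2 model, the file name, and the private convert_model_to_nnapi entry point are assumptions that may differ between releases:

```python
import torch
import torchvision
# Beta-stage converter; it lives in a private module and may move (assumption).
from torch.backends._nnapi.prepare import convert_model_to_nnapi

model = torchvision.models.mobilenet_v2(pretrained=True).eval()

# NNAPI expects channels-last input; the nnapi_nhwc flag marks it as such.
example = torch.zeros(1, 3, 224, 224).contiguous(memory_format=torch.channels_last)
example.nnapi_nhwc = True

with torch.no_grad():
    traced = torch.jit.trace(model, example)
nnapi_model = convert_model_to_nnapi(traced, example)

# Save for the mobile lite interpreter, ready to bundle into an Android app.
nnapi_model._save_for_lite_interpreter("mobilenetv2_nnapi.ptl")
```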