TensorFlow 2 Offers Faster Model Training
Written by Kay Ewbank   
Wednesday, 02 October 2019

There's a new version of Google's TensorFlow with faster model training and a move to Keras as the central high-level API used to build and train models.

TensorFlow is Google's open source tool that can be used for a wide range of parallel computations, including implementing neural networks and other AI learning methods. It is designed to make working with neural networks easier, and is seen as more general and easier to use than many of the alternatives. Keras support was added in version 1.4, but it has now been made the central API. Keras is a high-level neural-network API, written in Python and developed with a focus on enabling fast experimentation.
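
For anyone unfamiliar with the Keras style, here is a minimal sketch of what a tf.keras model looks like in TensorFlow 2; the layer sizes and the dummy data are purely illustrative:

import numpy as np
import tensorflow as tf

# A small fully-connected model built with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compile with an optimizer, loss and metric, then train on dummy data.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.random((128, 16)).astype("float32")      # illustrative inputs
y = np.random.randint(0, 2, size=(128, 1)).astype("float32")  # illustrative labels

model.fit(x, y, epochs=3, batch_size=32)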


The improved performance in training models comes from tighter integration with TensorRT, Nvidia's deep learning inference optimizer, which is commonly used in applications based on models such as ResNet-50 and BERT. Nvidia says that with TensorRT and TensorFlow 2.0, developers can achieve up to a 7x speedup on inference.
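
As a rough illustration, TensorFlow 2 exposes the TF-TRT integration through a converter class. The sketch below assumes TensorRT is installed alongside an NVIDIA GPU and that a SavedModel already exists at the hypothetical path "resnet50_saved_model":

# Sketch only: requires the TensorRT libraries and an NVIDIA GPU,
# plus an existing SavedModel at the (hypothetical) path below.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="resnet50_saved_model")   # hypothetical path
converter.convert()                                  # optimize the graph with TensorRT
converter.save("resnet50_trt_saved_model")           # write the optimized SavedModel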

The aim of the tighter integration of Keras, along with other improvements including eager execution by default and Pythonic function execution, is to make TensorFlow feel familiar to Python developers.

Eager execution in TensorFlow means that operations are evaluated immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later.
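
A quick example of what that means in practice; the tensors and values here are just for illustration:

import tensorflow as tf

# With eager execution (the default in TensorFlow 2), operations run
# immediately and return concrete values rather than graph nodes.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

c = tf.matmul(a, b)
print(c)          # a tf.Tensor holding the actual result
print(c.numpy())  # convert directly to a NumPy array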

Pythonic function execution refers to the move away from building a graph and executing it via a TensorFlow session. That approach is now discouraged, replaced by writing regular Python functions. These functions can then be turned into graphs that can be executed remotely, serialized, and optimized for performance.
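
In TensorFlow 2 this is done with the tf.function decorator; a minimal sketch, with illustrative shapes:

import tensorflow as tf

# Decorating an ordinary Python function with tf.function traces it
# into a graph that TensorFlow can optimize, serialize, and run
# without the old Session and placeholder machinery.
@tf.function
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([4, 3])
w = tf.random.normal([3, 2])
b = tf.zeros([2])

print(dense_layer(x, w, b))  # called like a normal Python function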

Other improvements mean developers will be able to use the Distribution Strategy API to distribute training with minimal code changes. This API supports distributed training with Keras model.fit, as well as with custom training loops.
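
A minimal sketch of the Keras model.fit case, using tf.distribute.MirroredStrategy and dummy data purely for illustration:

import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across the available GPUs.
# The model and optimizer are created inside the strategy's scope,
# and model.fit handles the distributed training loop.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

x = np.random.random((256, 10)).astype("float32")  # illustrative data
y = np.random.random((256, 1)).astype("float32")

model.fit(x, y, epochs=2, batch_size=32)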

This release has also standardized on the SavedModel file format, which means you can run models on a variety of runtimes: in the cloud, in the browser or Node.js, and on mobile and embedded systems. You can run your models with TensorFlow, deploy them with TensorFlow Serving, use them on mobile and embedded systems with TensorFlow Lite, and train and run them in the browser or Node.js with TensorFlow.js.
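
Exporting a Keras model in the SavedModel format takes a single call; the model and the directory name here are illustrative:

import tensorflow as tf

# A Keras model exported as a SavedModel can be served with TensorFlow
# Serving, or converted for TensorFlow Lite or TensorFlow.js.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

model.save("my_model", save_format="tf")           # writes a SavedModel directory
restored = tf.keras.models.load_model("my_model")  # reload it later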

TensorFlow 2.0 is available for download on GitHub. 


More Information

TensorFlow On GitHub

Related Articles

TensorFlow 1.5 Includes Mobile Version

TensorFlow For R

TensorFlow Incorporates Keras

TensorFlow Lite For Mobiles

TensorFlow Reaches Version 1

TensorFlow - Google's Open Source AI And Computation Engine

 

 
