Deep Learning

TensorFlow 2.0: Dynamic, Readable, and Highly Extended

August 13, 2019
9 min read

TensorFlow 2.0 Introduction

Considering learning a new Python framework for deep learning? If you already know some TensorFlow and are looking for something with a little more dynamism, you no longer have to switch all the way to PyTorch, thanks to some substantial changes coming as part of TensorFlow 2.0.

In fact, many of the changes in 2.0 specifically address the alleged shortcomings of TensorFlow. With eager execution by default, you no longer have to pre-define a static graph, initialize sessions, or worry about tensors falling outside of the proper scope when you get over-zealous in your object-oriented principles. TensorFlow still has about 3 times the user base of PyTorch (judging from the repositories on GitHub referencing each framework), and that means more extensions, more tutorials, and more developers collaboratively exploring the space of all possible code errors on Stack Overflow. You'll also find that, despite the major changes starting with TensorFlow 2.0, the project developers have taken many steps to maintain backwards compatibility.
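
To make "eager execution by default" concrete, here is a minimal sketch of the TF 2.0 experience: operations run immediately and return real values, with no graph construction or session management.

```python
import tensorflow as tf  # TF 2.0: eager execution is enabled by default

# Operations run immediately and return concrete values; there is no
# graph to build and no session to manage.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

print(y)          # a tf.Tensor with real values, inspectable right away
print(y.numpy())  # hand the result off to NumPy, no session.run() needed
```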

What is TensorFlow?

At its core, TensorFlow is a software package for the mathematical manipulation, storage, and transfer of tensors, a generalization of the concept of matrices. The primary utility and development driver for TensorFlow is machine learning, especially deep neural networks with millions of parameters.
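
As a small illustration of what "a generalization of matrices" means in practice, the same tensor type covers every rank, from scalars up to batched multi-dimensional arrays (the shapes below are just examples):

```python
import tensorflow as tf

# Tensors generalize matrices to arbitrary rank: one data type covers
# scalars, vectors, matrices, and higher-dimensional arrays.
scalar = tf.constant(3.0)                      # rank 0
vector = tf.constant([1.0, 2.0, 3.0])          # rank 1
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])             # rank 2
images = tf.zeros([32, 28, 28, 1])             # rank 4: a batch of images

for t in (scalar, vector, matrix, images):
    print(t.shape, t.dtype)
```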

TensorFlow is typically used as an extension of Python, but all those low-level components that allow TF to run on hardware accelerators like GPUs and Google's Tensor Processing Units are written in compiled languages like C++ and CUDA. TensorFlow was released under the Apache 2.0 license in 2015, and since then it has found widespread use in research, industry, and educational communities. It's used by about 3x as many deep learning practitioners as the next most popular framework and is highly extended by projects like TF-encrypted for privacy-centric federated learning or Magenta for deep-learning augmented artistry with images and music.

The large user community and open-source ethos of many AI and ML practitioners mean that there is plenty of material to learn from and plenty of projects to contribute to. TensorFlow computation is graph-based, with data in the form of tensors flowing between computational nodes along connecting edges in a directed manner. Graphs are a fitting cognitive framework for setting up parallel computations; they are also well suited to distributed computing and compiler-level optimization, and they provide a model structure that exists outside of the programming language it was written in (and is thus portable). But the graph-and-session approach employed in TensorFlow pre-2.0 required developers to first define a static graph, then establish a session to run the graph. This approach leads to more boilerplate code, precludes the use of normal Python flow control, and, because graphs must be fully defined before being run in a session, makes models less flexible during training.
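
For contrast, here is a sketch of that pre-2.0 workflow, written against the 1.x API (still reachable as tf.compat.v1 under 2.0): first define the graph, then open a session to push data through it.

```python
import tensorflow as tf  # TensorFlow 1.x (tf.compat.v1 under 2.0)

# Phase 1: define a static graph. Nothing is computed here; each call
# only adds a node to the default graph.
a = tf.placeholder(tf.float32, shape=[2, 2])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
product = tf.matmul(a, b)

# Phase 2: open a session and feed concrete data through the graph.
with tf.Session() as sess:
    result = sess.run(product, feed_dict={a: [[1.0, 2.0], [3.0, 4.0]]})
    print(result)
```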

[Figure: animated TensorFlow graph, CC BY Tensorflow.org]

Major Changes in TensorFlow 2.0

Streamlining the TensorFlow experience was a major development objective for TensorFlow 2.0. The team accomplished this by reducing redundancy, fully integrating keras, and shifting away from static graphs to eager execution. The latter change makes the framework more dynamic and arguably improves the intuitiveness and readability of the code. TF 2.0 has fully embraced keras as its high-level Application Programming Interface (API). This means that instead of using keras directly with a TensorFlow backend doing all the heavy lifting in the background, all the capabilities of keras are available from within TensorFlow under tf.keras. Although keras integration into TensorFlow had already begun prior to 2.0, the team has put a lot of effort into consolidating redundancies and grouping everything under the keras umbrella. In particular, you'll probably find yourself working with the keras layers API. François Chollet, the creator of keras, put out an 18-tweet crash course, complete with code, that highlights the central role played by the layers API in TF 2.0. Previously, you could find the same functionality, with slight differences, in several different corners of TensorFlow.
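
As a minimal sketch of what working under the keras umbrella looks like, here is a small classifier assembled entirely from tf.keras layers (the MNIST-style input shape is purely illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

# A small classifier built entirely from the keras layers API, now the
# single consolidated home for model building in TF 2.0.
model = tf.keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),     # e.g. MNIST-sized inputs
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```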

One of the major updates in 2.0 is that many of these confusing redundancies have been consolidated, with many functions now built into the keras API. It's now much easier to work with concepts from object-oriented programming and elegant Python flow control. Overall, TF 2.0 promises improved flexibility for research-level experimentation while retaining the real-world utility and deployability that made TensorFlow the most widely used deep learning framework. While TF 2.0 adopts eager execution to make experimentation and prototyping a breeze, developers can still access all the benefits of graphs by wrapping their code in the tf.function annotation. What's more, the changes in 2.0 will likely lead to a smoother learning curve and more readable code for most users.
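
A quick sketch of the tf.function annotation mentioned above: the decorated function reads as ordinary eager-style Python, but TensorFlow traces it into a graph the first time it is called.

```python
import tensorflow as tf

@tf.function  # traces this Python function into a TensorFlow graph
def dense_step(x, w, b):
    # Plain Python by all appearances; AutoGraph rewrites supported
    # control flow into graph ops during tracing.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([64, 100])
w = tf.random.normal([100, 10])
b = tf.zeros([10])

print(dense_step(x, w, b).shape)  # executes as a compiled graph: (64, 10)
```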


Noteworthy Projects

One exciting development coming as part of TF 2.0 and a major trend in deep learning in general is edge computing: lightweight deep learning models designed to be deployed on low-power distributed devices like mobile phones, embedded microcontrollers, and Internet of Things (IoT) devices.

TensorFlow Lite is tailor-made for edge computing, and the TF Dev Summit demonstrated edge ML on a new lightweight TPU-enabled Coral Dev Board. This is in addition to the pre-existing support for devices like the Raspberry Pi. Other interesting projects leveraging the improved capabilities of TF 2.0 include an open-source chatbot library called DeepPavlov, a bug-bite image classifier, and an air quality prediction app that estimates the level of pollution from cell phone images.
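
As a sketch of the TensorFlow Lite workflow, the 2.0 converter takes a trained tf.keras model and emits a flat-buffer file for on-device inference (the tiny model below is a stand-in for a real trained one):

```python
import tensorflow as tf

# Stand-in for a real trained model; any tf.keras model works here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(4,), activation="softmax"),
])

# Convert to the TensorFlow Lite format for deployment to phones,
# microcontrollers, or boards like the Coral Dev Board.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```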

Caveats

TensorFlow 2.0 remains pre-release as of this writing, and so comes with the litany of bugs and shifting functionality that accompanies any project at an early stage of development. As an example, I began a new project involving a flexible cGAN model shortly after the TF 2.0 alpha was released, and upgrading to the current TF 2.0 beta meant major code changes due to bugs in the way the keras layers API handles tensor concatenation. Any large project built on TF 2.0 is likely to undergo similar revisions, at least on a timescale of weeks to months, as 2.0 is readied for full release.
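
For a sense of the pattern in question, here is a hypothetical sketch of conditional-input merging in a cGAN generator, built with the keras layers API; all names and shapes here are illustrative, not the project's actual code.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical cGAN generator inputs: a noise vector and a class label.
noise = layers.Input(shape=(100,))
label = layers.Input(shape=(1,), dtype="int32")

# Embed the label and flatten it to match the noise vector's rank.
label_vec = layers.Flatten()(layers.Embedding(10, 50)(label))

# The concatenation step whose behavior shifted between 2.0 pre-releases.
merged = layers.Concatenate()([noise, label_vec])
hidden = layers.Dense(256, activation="relu")(merged)

generator_stub = tf.keras.Model(inputs=[noise, label], outputs=hidden)
```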

How to Get Started With TensorFlow 2.0

A huge component of the TensorFlow value proposition is the substantial community built around the open-source project. This means tons of great material in the form of tutorials, blog posts, and full-blown projects to choose from. I would suggest a three-part approach to getting up to speed with 2.0.

  1. Get a high-level understanding of the major changes in TensorFlow 2.0 and how they impact your projects by reading blog posts (like this one) and watching some of the presentations from the TensorFlow team at TF Dev Summit 2019. This will fuel your imagination and help you focus on the specific concepts that will make a big difference in what you can build.
  2. Work through the wide swath of tutorials and code examples from resources like Google's SeedBank, which are all accompanied by code you can modify and run in Google Colaboratory notebooks. This is a good way to get used to the more dynamic and flexible eager execution principles in TF 2.0, and I think you'll agree that the result is much more Pythonic than the old graph-and-session development flow.
  3. Your impressions from 1 and 2 should give you enough of a cognitive outline of 2.0 to start solving real problems. If you're into reinforcement learning, you may find that TF-Agents becomes a useful addition to your next project, while natural language processing practitioners may find that the new ease of use for recurrent neural networks quickly becomes indispensable (a sketch follows this list).
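
On that last point, a minimal sketch of the RNN simplification: TF 2.0 collapses the separate CuDNNLSTM/LSTM implementations of 1.x into a single tf.keras.layers.LSTM that selects a cuDNN-accelerated kernel automatically when one is available.

```python
import tensorflow as tf
from tensorflow.keras import layers

# One LSTM layer for everything: on a GPU with compatible arguments it
# transparently uses the fast cuDNN kernel; on CPU it falls back to the
# generic implementation. No separate CuDNNLSTM variant is needed.
model = tf.keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # e.g. sentiment classification
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```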

Ultimately, if you've been on the fence about building TensorFlow expertise and using it in your projects, 2.0 marks a convenient point to make the leap, with many improvements to usability and utility. On the other hand, if you're a TensorFlow veteran, you should find all the features you know and love still available, with myriad improvements in other areas. Many of the new features will improve your efficiency in rapid iteration early in a project, without sacrificing convenient deployment later.
