From baby boomers to Generation Z (popularly known as post-millennials), we are all living through an impressionable moment in history, where technologies like machine learning, deep learning and reinforcement learning are witnessing an unparalleled revolution.
The ability to solve challenging real-world problems has been revitalized by TensorFlow, Google's ML framework on steroids. It allows developers, enthusiasts, beginners, business managers and researchers around the world to benefit from intelligent applications.
Since its launch, TensorFlow has matured as a platform into an entire end-to-end ecosystem, and it has undoubtedly become the lingua franca of today's machine learning developers and deep learning researchers.
By now you must have heard the big news from the Google TensorFlow team: the old TensorFlow 1.x, with its unactionable error messages, cryptic documentation and duplicated functionality that made the learning curve steep for everyone from beginners to experienced machine learning engineers, is about to be replaced.
And, as proclaimed by technical pundits around Silicon Valley, the premiere of TensorFlow 2.0 is indeed the start of a new era.
Statistics PitStop
(Image: TensorFlow adoption statistics to date.)
Just as studying history is important for understanding our present and gaining perspective on the future, looking back at how TensorFlow evolved into the quarterback of artificial intelligence is equally worthwhile.
Revisiting The History
On 9 November 2015, artificial intelligence became more mainstream when Google made its biggest move yet: it announced that it was open-sourcing its powerful AI system, TensorFlow.
This marked an important moment in the tech world, because a big software giant was open-sourcing a system for the public, both to back up its stand on democratizing AI and to earn some goodwill from the world's software developers.
This was not the first such library to be open-sourced; there were others, such as Caffe from Berkeley AI Research (BAIR), Torch created by Ronan Collobert, Koray Kavukcuoglu and Clement Farabet, and Theano, developed by the Montreal Institute for Learning Algorithms (MILA) at the University of Montreal.
But all these deep learning frameworks lacked one thing or another. Torch, for example, required LuaJIT to run models, and at that time Lua was not yet a mainstream language.
Caffe's architecture was considered excellent when it was born, but by modern standards it was average; its main pain points were its layer-wise design in C++ and the protobuf interface for model definition. Meanwhile, Theano's lack of a low-level interface and the inefficiency of the Python interpreter made it look less attractive to industrial users.
After TensorFlow launched, however, Jeff Dean, the Google Brain team lead and co-founder, delicately articulated its position: "you can think of TensorFlow as amalgamating the best of Theano, Torch, and Caffe into one."
Open Source: The Unsung hero that made the Nonconformity
Nothing and nobody is perfect, and the same was true of TensorFlow at launch. Straight after its release, many researchers started benchmarking algorithms on it, and they found that TensorFlow fell short by quite a margin. One example I can recollect is when Soumith Chintala from Facebook benchmarked TensorFlow on networks like Overfeat, AlexNet and GoogLeNet.
Benchmarking TensorFlow
This, together with consistent efforts from other independent researchers, allowed the TensorFlow team to identify their bottlenecks, which in turn helped them push their performance work further. It also made them realize they needed to do more work on building good APIs to make things easier for the platform's users.
Since they were about to hit a major milestone with the launch of TF 2.0, they wanted to make everyone a part of it. They initiated a "TensorFlow RFC" process, which allowed anyone and everyone to contribute to the features of TF 2.0 simply by voicing their concerns and proposing changes.
This is why I'm a big fan of open-source development, where people from different backgrounds and work cultures come together under one roof and support one another toward common goals.
Fast Forward 2019: Welcome TensorFlow 2.0 Alpha
Rajat Monga, Engineering Director of TensorFlow, announcing the availability of TF 2.0
On 6 March 2019, the TensorFlow 2.0 alpha was released to the diverse mix of machine learning users around the world, to learn from and to build exciting AI applications with.
In these past 3.5 years, TensorFlow has grown from a software library for deep learning into an entire ecosystem for all kinds of ML, from complex neural networks to traditional methods to AI on mobile, in the web browser and on embedded devices. But there is still a lot of work to be done to give developers of every experience level the flexibility to try the craziest ideas and the ability to go beyond an exaflop.
To find out what arsenal TensorFlow 2.0 now packs to build a comprehensive platform supporting the ML workflow from training through deployment, descend your eyes to the next section.
Developers' Constructive Criticism: A Light in the Dark
The vibrant TensorFlow community has given its heart, along with a great jolt, to initiate the first changes in TensorFlow 1.x through constructive feedback such as:
- TensorFlow 1.x should have simpler, more intuitive APIs for a better developer experience.
- Many areas of TensorFlow 1.x are redundant and complex, which inadvertently hurts developer productivity.
- Better documentation and examples for TensorFlow 1.x are the need of the hour.
After hearing all the feedback from the community and making changes accordingly, the TensorFlow team announced TensorFlow 2.0 with a promise to prioritize simplicity and ease of use, featuring updates like easy model building with Keras and eager execution, plus robust model deployment and production on any platform.
Heavy promises, huh? Let's unfold them one by one in the next section.
Unpacking TensorFlow 2.0
Code Alert: this section contains some coding, so open a terminal to follow along.
Any deep learning framework is expected to provide some essentials, such as:
- Rapid prototyping
- Easier debugging
TensorFlow users had been chirping these requirements into the ears of the TensorFlow developers for a very long time, and as a result, they made significant changes to the library, such as:
- Giving usability a high priority
The TensorFlow team adopted Keras as the high-level API for building and training models in TensorFlow. They also extended Keras, so all of TensorFlow's advanced features can now be used directly through tf.keras.
Now, let's code.
To get started, import tf.keras into your workflow:
from __future__ import absolute_import, division, print_function

!pip3 install -q tensorflow==2.0.0-alpha0

import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(tf.keras.__version__)

Output

2.0.0-alpha0
2.2.4-tf
tf.keras can run any Keras-compatible code, but keep in mind that the tf.keras version shipped with the latest TensorFlow may not be the same as the latest standalone Keras version.
In Keras, the simplest type of model can be built with the Sequential API, but if you have to build a complex model, you can use the functional API.
After building the model, the next obvious step is to configure it for training, and this is done by calling the compile method:
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # Adds a densely-connected layer with 64 units to the model:
    layers.Dense(64, activation='relu', input_shape=(100,)),
    # Add a softmax layer with 10 output units:
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
In this way you can create a simple model and train it to get the required output. This is only meant to give you an idea of how TF 2.0 now works around Keras.
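For a model too complex for Sequential, the functional API mentioned above builds the model as a graph of layer calls. Here is a minimal sketch of the same two-layer classifier in functional style (layer sizes are illustrative, not from the original article):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Declare the input tensor, then wire layers by calling them on tensors
inputs = tf.keras.Input(shape=(100,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10, activation='softmax')(x)

# A Model ties the input and output tensors together
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The payoff of this style is that models with multiple inputs, multiple outputs, or shared layers fall out naturally, since any tensor can feed any layer.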
- The next big change in TF 2.0 is eager execution by default. As a TensorFlow engineer put it, traditional 1.x used a declarative style that was often a little out of tune with the surrounding Python, whereas TF 2.0 behaves just like the surrounding code.
How? Let's find out; what I'm going to show you is a developer's delight.
Without the TF 2.0 upgrade, let's add two integers:
# Code starts here
import tensorflow as tf

x = tf.constant(5, name="x")
y = tf.constant(15, name="y")
z = tf.add(x, y, name="z")

print("Value of z before running Session:", z)
Output
Value of z before running Session: Tensor("c_5:0", shape=(), dtype=int32)
sess = tf.Session()

# To get the value of z
result = sess.run(z)
print("Value of z after running Session:", result)

# To close the session
sess.close()
Result
Value of z after running Session: 20
But with TF 2.0 you only need one command to add two numbers:
tf.add(5,15)
Output
<tf.Tensor: id=22, shape=(), dtype=int32, numpy=20>
Wondering? I was too, when I used it for the first time.
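Under eager execution, tensors behave like ordinary Python values, so they mix freely with plain Python and NumPy. A small sketch (the values here are my own, chosen just for illustration):

```python
import tensorflow as tf

# Operations run immediately and return concrete values
a = tf.constant([[1, 2], [3, 4]])
b = tf.add(a, 1)              # elementwise addition, no Session needed

# Eager tensors convert straight to NumPy arrays and Python numbers
print(b.numpy())              # [[2 3] [4 5]]
total = int(tf.reduce_sum(b))
print(total)                  # 14
```

This is what "behaves just like the surrounding code" means in practice: you can print, branch on, and loop over tensor values as you compute them, instead of building a graph first and running it later.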
- Clarity was another much-discussed point in the RFC group, so TF 2.0 now offers:
- Removed duplicate functionality
- Consistent, intuitive syntax across APIs
- Compatibility throughout the TensorFlow ecosystem
- Flexibility, the most important concern lifted by the TensorFlow team, through:
- Providing a full lower-level API
- Providing access to internal ops in tf.raw_ops
- Providing an inheritable interface to some of the crucial concepts in TensorFlow such as variables, checkpoints, and layers.
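As one taste of that inheritable interface, you can subclass tf.keras.layers.Layer to define your own layer with its own variables. A minimal sketch (the class name and sizes are mine, not from the article):

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    """A bare-bones dense layer built on the subclassing interface."""

    def __init__(self, units):
        super(Linear, self).__init__()
        self.units = units

    def build(self, input_shape):
        # Variables are created lazily, once the input shape is known
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='zeros',
                                 trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

layer = Linear(4)
y = layer(tf.ones((2, 3)))  # first call builds the weights, then applies them
print(y.shape)              # (2, 4)
```

Because the layer tracks its own variables, it plugs into checkpoints, Sequential models and training loops just like a built-in layer.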
Transitioning from Legacy to New API
After reading about so many changes in TensorFlow, you must be wondering what will happen to the entire end-to-end pipeline of your AI system built on TensorFlow: will it break because of these changes, or will your entire ML workflow become redundant because of the old style of coding?
Well, don’t worry TensorFlow 2.0 alpha has got it covered.
TF 2.0 involves many API changes, such as symbol renames, argument reorders and new default values for parameters, so performing all the modifications manually would be error-prone. To make your transition to TF 2.0 as elegant as possible, the TF team has created the tf_upgrade_v2 utility to help you move from legacy code to the new API.
When put into action, this utility accelerates your upgrade by converting your current TensorFlow 1.x Python scripts into TensorFlow 2.0 preview scripts.
But certain API symbols cannot be upgraded by a simple string replacement, so to ensure your code still works, the TF 2.0 upgrade script includes a compat.v1 module.
This module will rewrite TF 1.x symbols, so
X = tf.placeholder("float")
becomes
X = tf.compat.v1.placeholder("float")
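To illustrate, invoking the upgrade utility on a script or a whole project might look like this (the file and directory names are placeholders of my own):

```shell
# Upgrade a single TF 1.x script; a report of every change is printed
tf_upgrade_v2 --infile old_model.py --outfile old_model_v2.py

# Or upgrade an entire source tree, writing the report to a file
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt
```

The generated report is worth reading: it flags every rename and argument change it made, plus any call sites it could not convert automatically.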
Beginner's Dilemma
This section is specifically focused on beginners and fresh entrants into the machine learning world.
As exploration is in our human DNA, I suggest you explore every deep learning library available, starting with TensorFlow, Keras and PyTorch, but settle on one and become a specialist in a single framework rather than staying average in every framework out there.
And to start with, there is no better option than Keras, which lets you train a scalable, multi-platform neural network within ten lines of code.
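That ten-line claim is easy to check for yourself. Here is a sketch that trains a tiny classifier end to end; the synthetic random data and hyperparameters are my own stand-ins, not from the article:

```python
import numpy as np
import tensorflow as tf

# Synthetic data stands in for a real dataset: 256 samples, 20 features
data = np.random.random((256, 20)).astype('float32')
labels = np.random.randint(0, 2, size=(256, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(data, labels, epochs=2, batch_size=32, verbose=0)
```

Swap the random arrays for a real dataset and the same dozen lines become a genuine training script, which is exactly why Keras is such a friendly starting point.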
As the saying goes, to measure the depth of an ocean you need to dive deep; if you have reached the end of this article, you have already taken the most necessary and important step.
In addition to all the updates I mentioned, there are still many great improvements in TF that I haven't discussed, so to learn more, check the official TensorFlow website and be amazed by TensorFlow's new competence.