Introduction and setup of TensorFlow
TensorFlow (TF) takes its name from the word tensor, which is a generalization of vectors and matrices to an arbitrary number of dimensions. TF, thus, is a Python framework designed to excel at the tensor operations that underlie the modeling of neural networks. It is one of the most popular libraries for machine learning.
As data scientists, we have a preference for TF because it is free and open source with a strong user base, and because it builds on state-of-the-art research on the graph-based execution of tensor operations.
Setup
Let us now begin with instructions to set up TF, or to verify that you already have a proper setup:
- To begin the installation of TF, run the following commands in your Colaboratory notebook (the first line is a Colab magic that selects the 2.x branch of TF; the second installs the library):
%tensorflow_version 2.x
!pip install tensorflow
This will install about 20 libraries that are required to run TF, including NumPy, for example.
- If the installation ran properly, you will be able to run the following command, which will print the version of TF that is installed in your Colaboratory:
import tensorflow as tf
print(tf.__version__)
This will produce the following output:
2.1.0
- This is the current version of TF at the time of writing this book. However, TF versions change frequently, and it is likely that a new version will be available when you are reading this book. If that is the case, you can install a specific version of TF as follows (see the short version check right after this list):
!pip install tensorflow==2.1.0
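As a quick sanity check after either install path, the following short sketch (the assertion is just an assumption tied to the 2.x series used in this book) confirms that the expected major version is the one being imported:
import tensorflow as tf
# Fail early if the imported TF is not from the 2.x series assumed in this book
assert tf.__version__.startswith("2."), f"Unexpected TF version: {tf.__version__}"
print("TensorFlow", tf.__version__, "is ready to use.")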
TensorFlow with GPU support
Colaboratory ships with a GPU-enabled build of TensorFlow, so no extra installation is needed there; you only have to select a GPU runtime. However, if you have access to your own system with a GPU and want to set up TensorFlow with GPU support, the installation is very simple. Just type the following command on your personal system:
$ pip install tensorflow-gpu
Notice, however, that this assumes you have already set up the drivers that give your system access to the GPU. Fear not: there is plenty of documentation about this process on the internet, for example, https://www.tensorflow.org/install/gpu. If you run into problems and need to move forward, I highly recommend that you come back and do the work on Colaboratory, as it is the easiest way to learn.
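Whether you are on Colaboratory or on your own machine, a quick way to confirm that TF can actually see the GPU is to list the physical devices it has registered; the following is a minimal sketch using the tf.config API available from TF 2.1 onward:
import tensorflow as tf
# List the GPUs TF has registered; an empty list means TF will fall back
# to the CPU (for example, no GPU runtime selected, or missing drivers)
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)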
Let us now address how TensorFlow works and how its graph paradigm makes it very robust.
Principles behind TensorFlow
This book is for absolute beginners in deep learning. As such, here is what we want you to know about how TF works: TF builds a graph of the computation, from its input tensors up to the highest-level operations.
For example, let's say that we have tensors x and w that are known input vectors, and a known constant b, and say that you want to perform the following operation:
w · x + b
If we create this operation by declaring and assigning tensors, the graph will look like the one in Figure 2.1:
In this figure, there is a tensor multiplication operation, mul, whose result is a scalar that then needs to be added, add, to another scalar, b. Note that this might be an intermediate result and, in real computation graphs, this outcome would feed into operations higher up in the execution tree. For more detailed information on how TF uses graphs, please refer to this paper (Abadi, M., et al., 2016).
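To make this concrete, here is a minimal sketch (the tensor values are made up purely for illustration) that builds the same mul-and-add computation; wrapping it in tf.function asks TF to trace it into a graph like the one in Figure 2.1:
import tensorflow as tf
# Made-up input vectors and constant, just for illustration
x = tf.constant([1.0, 2.0, 3.0])
w = tf.constant([0.5, -1.0, 2.0])
b = tf.constant(4.0)
@tf.function  # traces the Python function into a TF graph
def affine(x, w, b):
    # mul: element-wise product reduced to a scalar; add: plus the scalar b
    return tf.reduce_sum(tf.multiply(w, x)) + b
print(affine(x, w, b))  # tf.Tensor(8.5, shape=(), dtype=float32)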
In a nutshell, TF finds the best way to execute tensor operations, delegating specific parts to GPUs when they are available, or otherwise parallelizing operations across the available CPU cores. It is open source, with a growing community of users around the world, and most deep learning professionals know about it.
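If you are curious about where TF places each operation, it can log the device assignments; this minimal sketch (with made-up matrices) uses tf.debugging.set_log_device_placement to print whether each op ran on a GPU or on the CPU:
import tensorflow as tf
# Log the device (GPU or CPU) assigned to each operation
tf.debugging.set_log_device_placement(True)
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
print(tf.matmul(a, b))  # the log line shows where MatMul was placed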
Now let us discuss how to set up Keras and how it abstracts TensorFlow functionalities.