An introduction to TensorFlow and tensors, along with the implementation of tensors in TensorFlow.
TensorFlow is an open-source software library for dataflow programming across a variety of tasks. It is a symbolic math library that is also used for machine learning applications such as neural networks. Google open-sourced TensorFlow in November 2015, and it has since become the most popular machine learning repository on GitHub (https://github.com/tensorflow/tensorflow).
Why TensorFlow? Its popularity has many causes, but it stems primarily from the concept of the computational graph, automatic differentiation, and the adaptability of the Python TensorFlow API. These make solving real problems with TensorFlow accessible to most programmers.
Google's TensorFlow engine has a unique way of solving problems, one that lets you solve machine learning problems very efficiently. We'll go over the basic steps to understand how TensorFlow works.
What is Tensor in Tensorflow?
TensorFlow, as the name suggests, is a platform for defining and running computations involving tensors. A tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of basic data types. Every element in a tensor has the same data type, and that data type is always known. The shape (that is, the number of dimensions and the size of each dimension) may be only partially known. Most operations produce tensors of fully known shape if the shapes of their inputs are fully known, but in some cases the shape of a tensor can only be determined at graph execution time.
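To make the idea concrete, here is a NumPy sketch (NumPy's n-dimensional arrays are analogous to TensorFlow's internal tensor representation):

```python
import numpy as np

# A scalar, a vector, a matrix, and a rank-3 tensor: the same idea
# (an n-dimensional array) at increasing numbers of dimensions.
scalar = np.array(7.0)               # rank 0, shape ()
vector = np.array([1.0, 2.0, 3.0])   # rank 1, shape (3,)
matrix = np.zeros([2, 3])            # rank 2, shape (2, 3)
tensor3 = np.ones([2, 3, 4])         # rank 3, shape (2, 3, 4)

# Every element of a given tensor shares a single known dtype.
print(matrix.dtype)    # float64
print(tensor3.shape)   # (2, 3, 4)
```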
Here we will present the general flow of TensorFlow algorithms.
All our machine learning algorithms depend on data. In practice, we will either generate data or use an external data source. Sometimes it is better to rely on generated data because we then know the expected result. TensorFlow also ships with well-known datasets such as MNIST and CIFAR-10.
Data usually does not arrive in the shape or type that our TensorFlow algorithms expect, so we have to transform it before we can use it. Most algorithms also expect normalized data; TensorFlow has built-in functions that can normalize data for you.
data = tf.nn.batch_norm_with_global_normalization(...)
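As a rough sketch of what normalization means, here is feature standardization in NumPy. This is only an illustration of the idea (rescaling each feature to zero mean and unit variance), not the actual implementation of tf.nn.batch_norm_with_global_normalization:

```python
import numpy as np

def standardize(data):
    """Rescale each column to zero mean and unit variance."""
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    return (data - mean) / std

# Two features on very different scales.
raw = np.array([[1.0, 200.0],
                [2.0, 400.0],
                [3.0, 600.0]])
normalized = standardize(raw)
print(normalized.mean(axis=0))  # close to [0, 0]
print(normalized.std(axis=0))   # close to [1, 1]
```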
Our algorithms usually have a set of parameters that we hold constant throughout the procedure, for example the number of iterations, the learning rate, or other fixed parameters of our choosing. It is considered good practice to initialize them together so that the reader or user can find them easily.
learning_rate = 0.001
iterations = 1000
TensorFlow needs to be told what it can and cannot change. During optimization, TensorFlow will modify the variables to minimize the loss function, while data is fed in through placeholders. We need to declare both the variables and the placeholders with a size and type so that TensorFlow knows what to expect.
a_var = tf.constant(42)
x_input = tf.placeholder(tf.float32, [None, input_size])
y_input = tf.placeholder(tf.float32, [None, num_classes])
Once we have the data and our variables and placeholders are initialized, we can define the model. This is done by building a computational graph: we tell TensorFlow what operations must be performed on the variables and placeholders to arrive at our model predictions.
y_pred = tf.add(tf.matmul(x_input, weight_matrix), b_matrix)
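Numerically, this linear model is just a matrix multiplication plus a bias. A NumPy sketch of the same math (the weights and bias below are made-up values, purely for illustration):

```python
import numpy as np

x_input = np.array([[1.0, 2.0]])          # one sample, input_size = 2
weight_matrix = np.array([[0.5], [1.5]])  # hypothetical learned weights
b_matrix = np.array([0.25])               # hypothetical learned bias

# Same computation as tf.add(tf.matmul(x_input, weight_matrix), b_matrix)
y_pred = x_input @ weight_matrix + b_matrix
print(y_pred)  # [[3.75]]  (1*0.5 + 2*1.5 + 0.25)
```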
After defining the model, we must evaluate the output by declaring a loss function. The loss function is very important: it tells us how far our predictions are from the actual values.
loss = tf.reduce_mean(tf.square(y_actual - y_pred))
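The mean squared error computed by the line above can be written out directly; a NumPy sketch with small made-up vectors:

```python
import numpy as np

y_actual = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

# Same computation as tf.reduce_mean(tf.square(y_actual - y_pred)):
# square each residual, then average.
loss = np.mean(np.square(y_actual - y_pred))
print(loss)  # (0.25 + 0.0 + 1.0) / 3 ≈ 0.4167
```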
Now that everything is in place, we initialize our computational graph, feed the data in through the placeholders, and let TensorFlow adjust the variables to better predict our training data. Here is one way to initialize a computational graph.
with tf.Session(graph=graph) as session:
    ...
    session.run(...)
    ...
Note that we can also start our graph with session = tf.Session(graph=graph) and then call session.run(...).
After we have built and trained the model, we must evaluate it by seeing how well it performs on new data according to the specified criteria.
It is also important to know how to make predictions on new, unseen data. We can do this with any of our models once they are trained.
In TensorFlow, we set up data, variables, placeholders, and a model before telling the program to train and adjust the variables to improve its predictions. TensorFlow accomplishes this through a computational graph: we tell it to minimize a loss function, and it does so by modifying the variables in the model. TensorFlow knows how to change the variables because it keeps track of the computations in the model and automatically calculates the gradient for every variable. This makes it easy to make changes and to try different data sources.
In general, the algorithms in TensorFlow are designed to be cyclical. We set this loop as a computational graph and (1) enter data through placeholders, (2) compute the output of the computational graph, (3) compare the output to the desired output using a loss function, (4) modify the model variables according to automatic backpropagation, and finally (5) repeat the process until the stop criteria are met.
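The five-step loop above can be sketched in plain NumPy with a manually derived gradient (TensorFlow computes gradients automatically; this toy version fits y = w*x by gradient descent on a mean squared error loss):

```python
import numpy as np

# (1) data: samples of y = 2x, so the ideal weight is 2.0
x = np.array([1.0, 2.0, 3.0, 4.0])
y_true = 2.0 * x

w = 0.0               # model variable to be learned
learning_rate = 0.05

for step in range(200):
    y_pred = w * x                             # (2) compute the model output
    loss = np.mean((y_true - y_pred) ** 2)     # (3) compare to desired output
    grad = np.mean(2 * (y_pred - y_true) * x)  # d(loss)/dw, derived by hand
    w -= learning_rate * grad                  # (4) modify the model variable
    # (5) repeat until the stop criterion (here: a fixed step count)

print(round(w, 4))  # ≈ 2.0, the true slope
```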
Now let's begin the TensorFlow tutorial and implement these ideas.
First, we need to import the required libraries.
import tensorflow as tf
from tensorflow.python.framework import ops

ops.reset_default_graph()
Then start a graph session
sess = tf.Session()
Now the main part begins: creating tensors.
TensorFlow has built-in functions for creating tensors for use in variables. For example, we can create a zero-filled tensor of a predefined shape using the tf.zeros() function as follows.
my_tensor = tf.zeros([1, 20])
We can evaluate tensors by calling the run() method on our session.
TensorFlow algorithms need to know which objects are variables and which are constants, so we create a variable with the TensorFlow tf.Variable() function. Note that you cannot run sess.run(my_var) yet; doing so results in an error. Because TensorFlow works with computational graphs, we must create a variable initialization operation before we can evaluate variables. In this script we initialize one variable at a time by running its initializer, my_var.initializer.
my_var = tf.Variable(tf.zeros([1, 20]))
sess.run(my_var.initializer)
sess.run(my_var)
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
Now let's create variables of a predefined shape and initialize them with all zeros or all ones.
row_dim = 2
col_dim = 3
zero_var = tf.Variable(tf.zeros([row_dim, col_dim]))
ones_var = tf.Variable(tf.ones([row_dim, col_dim]))
To evaluate their values, we again run the initializer operations on our variables and then print them.
sess.run(zero_var.initializer)
sess.run(ones_var.initializer)
print(sess.run(zero_var))
print(sess.run(ones_var))
[[0. 0. 0.]
 [0. 0. 0.]]
[[1. 1. 1.]
 [1. 1. 1.]]
And the list goes on. The rest is for you to explore; follow my Jupyter notebook for more information on tensors.
Visualizing variable creation in TensorBoard
To visualize variable creation in TensorBoard, we reset the computational graph and create a global initialization operation.
Now run the following command in cmd.
tensorboard --logdir=/tmp
It will print the URL we can visit in our browser to open TensorBoard and view the loss plots.
Code to generate all types of tensors and evaluate them.