This article is a short introduction to the TensorFlow library using the Python programming language.
Introduction
TensorFlow is an open-source software library. It was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence research organization for machine learning and deep neural network research, but the system is general enough to be applicable to a wide variety of other domains as well.
Let's first try to understand what the word TensorFlow really means!
TensorFlow is essentially a software library for numerical computation using data flow graphs, where:
 nodes on the graph represent mathematical operations.
 edges in the graph represent the multidimensional data arrays (called tensors) passed between them. (Note that a tensor is the central unit of data in TensorFlow.)
Consider the diagram below:
Here, add is the node representing the addition operation; a and b are the input tensors, and c is the resulting tensor.
This flexible architecture allows you to deploy compute to one or more CPUs or GPUs on a desktop, server, or mobile device using a single API!
TensorFlow API
TensorFlow provides several APIs (Application Programming Interfaces). They can be divided into 2 main categories:
 Low-level API:
 complete programming control
 recommended for machine learning researchers
 provides a fine level of control over the models
 TensorFlow Core is the low-level TensorFlow API.
 High-level API:
 built on top of TensorFlow Core
 easier to learn and use than TensorFlow Core
 makes repetitive tasks easier and more consistent across different users
 tf.contrib.learn is an example of a high-level API.
In this article, we will first discuss the basics of TensorFlow Core, and then take a look at the higher-level API, tf.contrib.learn.
TensorFlow Core
1. Installing TensorFlow
A simple TensorFlow installation guide is available here:
Installing TensorFlow .
Once installed, you can verify the installation by running the following statement in the Python interpreter:
import tensorflow as tf
2. Computational graph
Any TensorFlow Core program can be divided into two separate sections:
 Building the computational graph. A computational graph is nothing more than a series of TensorFlow operations arranged into a graph of nodes.
 Running the computational graph. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime.
Now let's write our very first TensorFlow program to understand the concepts above:

Output:
Sum of node1 and node2 is: 8
Let’s try to understand the code above:
 Step 1: Create a computational graph
When we build a computational graph, we define its nodes. TensorFlow provides different types of nodes for different tasks. Each node takes zero or more tensors as input and produces a tensor as output. In the program above, the nodes node1 and node2 are of type tf.constant. A constant node takes no inputs and outputs the value it stores internally. Note that we can also specify the data type of the output tensor using the dtype argument.
node1 = tf.constant(3, dtype=tf.int32)
node2 = tf.constant(5, dtype=tf.int32)
 node3 is of type tf.add. It takes two tensors as input and returns their sum as the output tensor.
node3 = tf.add(node1, node2)
 Step 2: Launch the computational graph
To launch the computational graph, we need to create a session. To create a session, we simply do:
sess = tf.Session()
Now we can call the run method of the session object to perform computations on any node:
print("Sum of node1 and node2 is:", sess.run(node3))
Here, node3 is evaluated, which in turn triggers the evaluation of node1 and node2. Finally, we close the session using:
sess.close()
Note: another (and better) way to work with sessions is to use a with block, like this:
with tf.Session() as sess:
    print("Sum of node1 and node2 is:", sess.run(node3))
The advantage of this approach is that you don't need to close the session explicitly; it closes automatically as soon as control leaves the with block.
3. Variables
TensorFlow also has Variable nodes, which can hold mutable data. They are mainly used to store and update the parameters of a training model.
Variables are in-memory buffers containing tensors. They must be explicitly initialized and can be saved to disk during and after training. The saved values can later be restored to train or analyze the model.
An important difference between a constant and a variable is this: a constant's value is stored in the graph and is replicated wherever the graph is loaded, whereas a variable is stored separately and may live on a parameter server.
Below is an example of using a variable:

Output:
Tensor value before addition:
[[0. 0.]
 [0. 0.]]
Tensor value after addition:
[[1. 1.]
 [1. 1.]]
In the above program:
 We define a node of type Variable and assign some initial value to it.
node = tf.Variable(tf.zeros([2, 2]))
 To initialize a variable node in the scope of the current session, we do:
sess.run(tf.global_variables_initializer())
 To assign a new value to a variable node, we can use the assign method like this:
node = node.assign(node + tf.ones([2, 2]))
4. Placeholders
The graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later.
When evaluating a graph that involves placeholder nodes, the feed_dict parameter is passed to the session's run method to specify tensors that provide concrete values for these placeholders.
Consider the example below:

Output:
[[3 6 9]
 [2 4 6]
 [1 2 3]]
Let’s try to understand the above program:
 We define placeholder nodes a and b as follows:
a = tf.placeholder(tf.int32, shape=(3, 1))
b = tf.placeholder(tf.int32, shape=(1, 3))
The first argument is the data type of the tensor, and one of the optional arguments is the shape of the tensor.
 We define another node, c, which performs matrix multiplication (matmul). We pass the two placeholder nodes as its arguments.
c = tf.matmul(a, b)
 Finally, when running the session, we pass the values of the placeholders via the feed_dict argument of sess.run:
print(sess.run(c, feed_dict={a: [[3], [2], [1]], b: [[1, 2, 3]]}))
Consider the diagrams below to make the concept clear:
 Originally:
 After sess.run:
5. Example: Linear Regression Model
Below is an implementation of a Linear Regression model using the TensorFlow Core API.
# Import dependencies
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Model parameters
learning_rate = 0.01
training_epochs = 2000
display_step = 200

# Training data
train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
                      2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
                      1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]

# Test data
test_X = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
test_y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

# Set placeholders for feature vectors and targets
X = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Set model weight and bias
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")

# Build the linear model
linear_model = W * X + b

# Mean squared error cost function
cost = tf.reduce_sum(tf.square(linear_model - y)) / (2 * n_samples)

# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Run the graph
with tf.Session() as sess:
    # Load initialized variables into the current session
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        # Perform a gradient descent step
        sess.run(optimizer, feed_dict={X: train_X, y: train_y})

        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, y: train_y})
            print("Epoch: {0:6} Cost: {1:10.4} W: {2:6.4} b: {3:6.4}".
                  format(epoch + 1, c, sess.run(W), sess.run(b)))

    # Print final parameter values
    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, y: train_y})
    print("Final training cost:", training_cost, "W:", sess.run(W),
          "b:", sess.run(b), '\n')

    # Graphic display
    plt.plot(train_X, train_y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    # Testing the model
    testing_cost = sess.run(tf.reduce_sum(tf.square(linear_model - y)) /
                            (2 * test_X.shape[0]),
                            feed_dict={X: test_X, y: test_y})
    print("Final testing cost:", testing_cost)
    print("Absolute mean square loss difference:",
          abs(training_cost - testing_cost))

    # Display the fitted line on the test data
    plt.plot(test_X, test_y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
Output:
Epoch: 200 Cost: 0.1715 W: 0.426 b: 0.4371
Epoch: 400 Cost: 0.1351 W: 0.3884 b: 0.1706
Epoch: 600 Cost: 0.1127 W: 0.3589 b: 0.03