Implementing the learning process of an artificial neural network in Python

The brain is made up of roughly a hundred billion cells called neurons. Neurons are connected to one another by synapses: junctions through which one neuron can send an impulse to another. When a neuron sends an excitation signal to another neuron, that signal is added to all the other inputs arriving at that neuron. If the combined signal exceeds a certain threshold, the target neuron fires and sends a signal forward of its own — this, in rough terms, is how the thought process works internally.
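As a very rough sketch of this idea in code (the inputs, synapse strengths, and threshold below are made-up numbers chosen purely for illustration):

inputs = [1.0, 0.0, 1.0]          # signals arriving from other neurons
weights = [0.4, 0.9, 0.3]         # strength of each synapse
threshold = 0.5

# The neuron sums its weighted inputs and "fires" only if the total
# exceeds the threshold.
total = sum(i * w for i, w in zip(inputs, weights))
fired = total > threshold
print(total, fired)               # 0.7 True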

In computer science, we model this process by building “networks” on a computer using matrices. These networks can be understood as an abstraction of neurons that ignores the biological complexities. For simplicity, we'll simulate a small neural network with two layers, capable of solving a linear classification problem.
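To make the “networks as matrices” idea concrete, here is a minimal sketch: three inputs feeding a single output neuron are just a 3 x 1 weight matrix, and evaluating the network on several examples at once is a single matrix product. The weight values below are placeholders for illustration, not trained values:

import numpy as np

examples = np.array([[0, 1, 1],
                     [1, 0, 0],
                     [1, 0, 1]])               # each row is one example
weights = np.array([[0.2], [-0.4], [0.9]])     # 3 x 1 weight matrix

activations = np.dot(examples, weights)        # shape (3, 1): one value per example
print(activations)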

Let's say we have a problem where we want to predict an output from a set of inputs, using known input/output pairs as training examples, like this:

 Input 1   Input 2   Input 3   Output
    0         1         1        1
    1         0         0        0
    1         0         1        1

 Test case:   1         0         1        ?

Note that the output is directly related to the third column: the value of input 3 matches the output in every training example. Thus, for the test case, the output value should be 1.
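The pattern is easy to verify programmatically, using the same training data as in the code further below:

import numpy as np

inputs = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 1]])
outputs = np.array([1, 0, 1])

# The output column is identical to the third input column.
print(np.array_equal(inputs[:, 2], outputs))   # True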

The learning process consists of the following stages:

  1. Forward propagation:
     Take the inputs and multiply them by the weights (initially, just use random numbers as the weights):
     y = Σ WᵢIᵢ = W₁I₁ + W₂I₂ + W₃I₃
     Pass the result through the sigmoid function to calculate the neuron's output. The sigmoid function normalizes the result to a value between 0 and 1:
     output = 1 / (1 + e^(-y))
  2. Backpropagation:
     Calculate the error, i.e. the difference between the expected and the actual output. Depending on the error, adjust the weights by multiplying the error by the input and by the gradient of the sigmoid curve:
     weight += error * input * output * (1 - output), where output * (1 - output) is the derivative of the sigmoid curve.

Note: repeat the whole process for several thousand iterations. A single training step is worked through numerically in the sketch below.
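Here is one forward/backward step for a single training example. The starting weights are made-up numbers used only for illustration; the full program below draws its initial weights at random.

import numpy as np

weights = np.array([0.5, -0.3, 0.8])   # made-up starting weights
inputs = np.array([1, 0, 1])           # one training example
target = 1                             # its expected output

# Forward propagation
y = np.dot(inputs, weights)            # 0.5 + 0.0 + 0.8 = 1.3
output = 1 / (1 + np.exp(-y))          # sigmoid(1.3) ~ 0.786

# Backpropagation
error = target - output                              # ~ 0.214
adjustment = inputs * error * output * (1 - output)  # ~ [0.036, 0.0, 0.036]
weights += adjustment                  # weights on the active inputs move up
print(weights)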

Let's code the whole process in Python. We will use the NumPy library to make all the matrix calculations easier. You need to have NumPy installed on your system to run the code.
The command to install NumPy on Debian/Ubuntu (for Python 3):

 sudo apt-get install python3-numpy

On other platforms it can be installed with pip install numpy.

Implementation:

import numpy as np


class NeuralNet(object):

    def __init__(self):
        # Seed the random number generator so results are reproducible
        np.random.seed(1)

        # Assign random weights to a 3 x 1 matrix, with values in [-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    # Sigmoid function: squashes any value into the range (0, 1)
    def __sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    # Derivative of the sigmoid function.
    # This is the gradient of the sigmoid curve.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    # Train the neural network, adjusting the weights on every iteration.
    def train(self, inputs, outputs, training_iterations):
        for iteration in range(training_iterations):

            # Forward propagation: pass the training set through the network.
            output = self.learn(inputs)

            # Calculate the error (difference between expected and actual output).
            error = outputs - output

            # Adjust the weights: error * input * gradient of the sigmoid curve.
            factor = np.dot(inputs.T, error * self.__sigmoid_derivative(output))
            self.synaptic_weights += factor

    # The neural network "thinks": forward-propagate the inputs.
    def learn(self, inputs):
        return self.__sigmoid(np.dot(inputs, self.synaptic_weights))


if __name__ == "__main__":

    # Initialize a single-neuron neural network.
    neural_network = NeuralNet()

    # Training set: three examples, each with 3 inputs and 1 output.
    inputs = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 1]])
    outputs = np.array([[1, 0, 1]]).T

    # Train the neural network for 10,000 iterations.
    neural_network.train(inputs, outputs, 10000)

    # Test the neural network with a new case.
    print(neural_network.learn(np.array([1, 0, 1])))

Expected result: after 10 iterations, our neural network predicts a value of 0.65980921. This doesn't look very good, since the answer should be 1. If we increase the number of iterations to 100, we get 0.87680541. Our network is getting smarter! With 10,000 iterations we get 0.9897704, which is quite close to 1 and a genuinely satisfying result.
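To reproduce that progression yourself, one option is to retrain from scratch with different iteration counts, using the NeuralNet class and the inputs/outputs arrays from the listing above (for example, appended to the __main__ block). Since __init__ re-seeds the random number generator, every run starts from the same initial weights:

for n in (10, 100, 10000):
    net = NeuralNet()
    net.train(inputs, outputs, n)
    print(n, net.learn(np.array([1, 0, 1])))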


This article is courtesy of Vivek Pal.
