Neural networks are a core component of deep learning, with practical applications in many different fields. Today these networks are used for image classification, speech recognition, object detection, and more.

Let’s look at what a neural network is and how one works in Python.

A neural network has several components:

- Input layer, x
- An arbitrary number of hidden layers
- Output layer, ŷ
- A set of weights and biases between each layer, defined by W and b
- A choice of activation function for each hidden layer, σ
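As a quick sketch, these components can be written down with NumPy. The shapes below (3 inputs, 4 hidden units, 1 output) are arbitrary choices for illustration, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([[0.0, 1.0, 1.0]])    # input layer x: 1 sample, 3 features
W1 = rng.standard_normal((3, 4))   # weights W: input -> hidden (4 hidden units)
b1 = np.zeros((1, 4))              # biases b for the hidden layer
W2 = rng.standard_normal((4, 1))   # weights: hidden -> output
b2 = np.zeros((1, 1))              # bias for the output layer

sigma = np.tanh                    # activation function σ

hidden = sigma(x @ W1 + b1)        # hidden layer activations
y_hat = sigma(hidden @ W2 + b2)    # output layer ŷ

print(y_hat.shape)                 # (1, 1)
```

Each `@` is a matrix multiplication; the activation σ is applied element-wise after adding the bias.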

To use a feedforward neural network, we execute the following steps:

- Take the input as a matrix (a two-dimensional array of numbers).
- Multiply the input by a set of weights.
- Apply the activation function.
- Return the output.
- Calculate the error: the difference between the desired output from the data and the predicted output.
- Adjust the weights slightly according to the error.
- For training, repeat this process 1,000 times or more; the more data the network is trained on, the more accurate the results will be.
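The steps above can be sketched in a few lines of NumPy. The data and names here are placeholders chosen for illustration, not taken from the article:

```python
import numpy as np

np.random.seed(1)
inputs = np.array([[0, 0, 1], [1, 1, 1]])   # input matrix: 2 samples, 3 features
targets = np.array([[0], [1]])              # desired outputs
weights = 2 * np.random.random((3, 1)) - 1  # random starting weights in (-1, 1)

for _ in range(1000):                       # repeat many times for training
    output = np.tanh(inputs @ weights)      # multiply by weights, apply activation
    error = targets - output                # desired output minus predicted output
    # adjust the weights slightly, in proportion to the error
    # and the slope of tanh (1 - output**2)
    weights += inputs.T @ (error * (1 - output ** 2))

print(np.tanh(inputs @ weights))            # close to the targets [[0], [1]]
```

Each pass through the loop is one forward pass, one error calculation, and one weight update, exactly as in the bullet list above.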

This diagram represents a two-layer neural network (the input layer is usually excluded when counting the number of layers in a neural network).

| Input 1 | Input 2 | Output |
|---------|---------|--------|
| 2       | 9       | 92     |
| 1       | 5       | 86     |
| 3       | 6       | 89     |
| 4       | 8       | ?      |

In this graph, circles represent neurons and lines represent synapses. A synapse multiplies an input by a weight. We can think of a weight as the "strength" of the connection between two neurons. The weights determine the output of the neural network.
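For example, a single neuron's output is just its inputs multiplied by the synapse weights, summed, and passed through the activation function. The numbers below are made up for illustration:

```python
import numpy as np

inputs = np.array([2.0, 9.0])           # one example with two inputs
weights = np.array([0.4, 0.1])          # "strength" of each synapse
weighted_sum = np.dot(inputs, weights)  # 2*0.4 + 9*0.1 = 1.7 (up to rounding)
output = np.tanh(weighted_sum)          # neuron's output after activation

print(weighted_sum, output)
```

Changing either weight changes the weighted sum, and therefore the neuron's output, which is why training works by adjusting the weights.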

The code for this network is given below.

```python
from numpy import array, random, dot, tanh

class my_network():

    def __init__(self):
        random.seed(1)
        # 3x1 weight matrix with values in (-1, 1)
        self.weight_matrix = 2 * random.random((3, 1)) - 1

    def my_tanh(self, x):
        return tanh(x)

    def my_tanh_derivative(self, x):
        # slope term used to scale the weight adjustment; note that the x
        # passed in below is the network's output, which is already tanh(z),
        # so the exact derivative of tanh would be 1 - x ** 2
        return 1.0 - tanh(x) ** 2

    # forward propagation
    def my_forward_propagation(self, inputs):
        return self.my_tanh(dot(inputs, self.weight_matrix))

    # training the neural network
    def train(self, train_inputs, train_outputs, num_train_iterations):
        for iteration in range(num_train_iterations):
            output = self.my_forward_propagation(train_inputs)
            # calculate the error in the output
            error = train_outputs - output
            adjustment = dot(train_inputs.T,
                             error * self.my_tanh_derivative(output))
            # adjust the weight matrix
            self.weight_matrix += adjustment

# Driver code
if __name__ == "__main__":
    my_neural = my_network()
    print('Random weights when training has started')
    print(my_neural.weight_matrix)

    train_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    train_outputs = array([[0, 1, 1, 0]]).T

    my_neural.train(train_inputs, train_outputs, 10000)
    print('Displaying new weights after training')
    print(my_neural.weight_matrix)

    # test the neural network with a new situation
    print("Testing network on new examples ->")
    print(my_neural.my_forward_propagation(array([1, 0, 0])))
```

When we use a feedforward neural network like this one, we execute the steps listed above. Running the program produces the following output:

```
Random weights when training has started
[[-0.16595599]
 [ 0.44064899]
 [-0.99977125]]
Displaying new weights after training
[[5.39428067]
 [0.19482422]
 [0.34317086]]
Testing network on new examples ->
[0.99995873]
```