
Python | Tensorflow nn.relu() and nn.leaky_relu()

An activation function is a function applied to the output of a neural network layer, which is then passed as input to the next layer. Activation functions are an integral part of neural networks because they provide non-linearity; without it, a neural network reduces to a simple logistic regression model. The most widely used activation function is the Rectified Linear Unit (ReLU), defined as f(x) = max(0, x) (a short numerical sketch follows the list below). Recently, ReLU has become a popular choice for the following reasons:

  • Computationally faster: ReLU is a very simple function and is therefore cheap to compute.
  • Fewer vanishing gradients: In machine learning, a parameter update is proportional to the partial derivative of the error function with respect to that parameter. If the gradient becomes very small, the updates are no longer effective and the network may stop learning altogether. ReLU does not saturate in the positive direction, whereas other activation functions such as the sigmoid and the hyperbolic tangent saturate in both directions. Hence ReLU suffers less from vanishing gradients, which leads to better learning.
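
As a quick numerical illustration of the definition f(x) = max(0, x), here is a minimal sketch in plain NumPy (used only for illustration; it is not required for the TensorFlow examples below):

# Minimal NumPy sketch of ReLU: keep positive values, zero out the rest
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

x = np.array([1.0, -0.5, 3.4, -2.1, 0.0, -6.5])
print(relu(x))  # approx. [1.  0.  3.4  0.  0.  0.]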

The tf.nn.relu() function provides TensorFlow support for the ReLU function.

Syntax: tf.nn.relu(features, name=None)

Parameters:
features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
name (optional): The name for the operation.

Return type: A tensor with the same type as that of features.

Code example:

# Importing the Tensorflow library
import tensorflow as tf

# A constant vector of size 6
a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32)

# Applying the ReLU function and
# storing the result in 'b'
b = tf.nn.relu(a, name='ReLU')

# Initiating a Tensorflow session
with tf.Session() as sess:
    print('Input type:', a)
    print('Input:', sess.run(a))
    print('Return type:', b)
    print('Output:', sess.run(b))

Output:

Input type: Tensor("Const_10:0", shape=(6,), dtype=float32)
Input: [1. -0.5 3.4000001 -2.0999999 0. -6.5]
Return type: Tensor("ReLU_9:0", shape=(6,), dtype=float32)
Output: [1. 0. 3.4000001 0. 0. 0.]

Leaky ReLU:
The ReLU function suffers from what is called the "dying ReLU" problem. Since the slope of the ReLU function is zero on the negative side, a neuron stuck on that side is unlikely to recover. This forces the neuron to output zero for every input, rendering it useless. A solution to this problem is to use Leaky ReLU, which has a small slope on the negative side.
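
In other words, Leaky ReLU computes f(x) = x for x > 0 and f(x) = alpha * x for x <= 0, where alpha is a small positive constant. A minimal NumPy sketch of this behaviour (illustration only; the TensorFlow API is shown below):

# Minimal NumPy sketch of Leaky ReLU with negative-side slope alpha
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

x = np.array([1.0, -0.5, 3.4, -2.1, 0.0, -6.5])
print(leaky_relu(x))  # approx. [1. -0.005  3.4 -0.021  0. -0.065]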

The tf.nn.leaky_relu() function provides TensorFlow support for Leaky ReLU.

Syntax: tf.nn.leaky_relu(features, alpha, name=None)

Parameters:
features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
alpha: The slope of the function for x < 0. Default value is 0.2.
name (optional): The name for the operation.

Return type: A tensor with the same type as that of features.

Code example:

# Importing the Tensorflow library
import tensorflow as tf

# A constant vector of size 6
a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32)

# Applying the Leaky ReLU function with
# a slope of 0.01 and storing the result in 'b'
b = tf.nn.leaky_relu(a, alpha=0.01, name='Leaky_ReLU')

# Initiating a Tensorflow session
with tf.Session() as sess:
    print('Input type:', a)
    print('Input:', sess.run(a))
    print('Return type:', b)
    print('Output:', sess.run(b))

Output:

Input type: Tensor("Const_2:0", shape=(6,), dtype=float32)
Input: [1. -0.5 3.4000001 -2.0999999 0. -6.5]
Return type: Tensor("Leaky_ReLU_1/Maximum:0", shape=(6,), dtype=float32)
Output: [1. -0.005 3.4000001 -0.021 0. -0.065]
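
Note: the examples above use the TensorFlow 1.x session API. On TensorFlow 2.x, where eager execution is enabled by default, the same functions can be called directly without a session; a minimal sketch, assuming a TensorFlow 2.x installation:

# TensorFlow 2.x style: eager execution, no session required
import tensorflow as tf

a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32)
print(tf.nn.relu(a).numpy())                    # approx. [1. 0. 3.4 0. 0. 0.]
print(tf.nn.leaky_relu(a, alpha=0.01).numpy())  # approx. [1. -0.005 3.4 -0.021 0. -0.065]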