The tensorflow.nn module provides support for many basic neural network operations.
Activation function — a function applied to the output of a neural network layer, whose result is then passed as input to the next layer. Activation functions are an integral part of neural networks because they provide non-linearity; without it, a neural network reduces to a simple logistic regression model. One of the many activation functions is the softplus function, which is defined as

softplus(x) = ln(1 + e^x)
Traditional activation functions such as the sigmoid and the hyperbolic tangent have both lower and upper bounds, whereas the softplus function outputs in the range (0, ∞). The derivative of the softplus function turns out to be

d/dx softplus(x) = 1 / (1 + e^(-x)),

which is the sigmoid function. The softplus function is very similar to the rectified linear unit (ReLU) function, the main difference being the differentiability of softplus at x = 0. The research paper “Improving deep neural networks using softplus units” by Zheng et al. (2015) suggests that softplus provides more stabilization and better performance for deep neural networks than ReLU. However, ReLU is generally preferred because it and its derivative are cheaper to compute: evaluating the activation function and its derivative is a common operation in neural networks, and ReLU provides faster forward and backward propagation than softplus.
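The derivative relation above can be checked numerically. The snippet below is an illustrative sketch (not from the original article): it differentiates tf.nn.softplus with a GradientTape and compares the gradient against tf.math.sigmoid.

```python
import tensorflow as tf

# Sample points at which to compare d/dx softplus(x) with sigmoid(x)
x = tf.constant([-2.0, -0.5, 0.0, 1.0, 3.0])

with tf.GradientTape() as tape:
    tape.watch(x)               # x is a constant, so watch it explicitly
    y = tf.nn.softplus(x)

grad = tape.gradient(y, x)      # derivative of softplus at each point
diff = tf.reduce_max(tf.abs(grad - tf.math.sigmoid(x)))
print(float(diff))              # maximum discrepancy, effectively zero
```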
tf.nn.softplus() [alias tf.math.softplus()] provides softplus support in TensorFlow.
Syntax: tf.nn.softplus(features, name=None) or tf.math.softplus(features, name=None)
features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
name (optional): The name for the operation.
Return type: A tensor with the same type as features.
Code # 1:
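The original listing did not survive extraction; below is a minimal sketch that reproduces the quoted values. It is written for TensorFlow 2's eager mode, so the printed tensor repr differs slightly from the graph-mode strings ("Const:0", "softplus:0") shown in the output.

```python
import tensorflow as tf

# Input tensor matching the values quoted in the output below
a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32)
print('Input type:', a)
print('Input:', a.numpy())

# Apply the softplus activation: ln(1 + e^x), elementwise
b = tf.nn.softplus(a, name='softplus')
print('Return type:', b)
print('Output:', b.numpy())
```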
Input type: Tensor("Const:0", shape=(6,), dtype=float32)
Input: [ 1. -0.5  3.4000001 -2.0999999  0. -6.5]
Return type: Tensor("softplus:0", shape=(6,), dtype=float32)
Output: [1.31326163e+00 4.74076986e-01 3.43282866e+00 1.15519524e-01 6.93147182e-01 1.50233845e-03]
Code # 2: Visualization
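The visualization code was also lost in extraction; the sketch below is an assumed reconstruction that evaluates softplus over an evenly spaced range (matching the quoted input) and plots the curve with matplotlib.

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# 15 evenly spaced points in [-5, 5], matching the input quoted below
a = np.linspace(-5, 5, 15)
b = tf.nn.softplus(a)

print('Input:', a)
print('Output:', b.numpy())

# Plot the softplus curve
plt.plot(a, b.numpy(), color='red', marker='o')
plt.title('tensorflow.nn.softplus')
plt.xlabel('x')
plt.ylabel('softplus(x)')
plt.show()
```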
Input: [-5. -4.28571429 -3.57142857 -2.85714286 -2.14285714 -1.42857143 -0.71428571 0. 0.71428571 1.42857143 2.14285714 2.85714286 3.57142857 4.28571429 5.]
Output: [0.00671535 0.01366993 0.02772767 0.05584391 0.11093221 0.21482992 0.39846846 0.69314718 1.11275418 1.64340135 2.25378936 2.91298677 3.59915624 4.29938421 5.00671535]