Python ReLu function – All you need to know!


Hello, readers! In this article, we will be focusing on the Python ReLu function in detail. So, let us get started!! 🙂

Also read: Building Neural Networks in Python


What is the ReLu function? — Crisp Overview

Python plays an important role in building deep learning models such as convolutional neural networks, as well as other machine learning models. Building these models has become much easier thanks to the built-in modules and functions that Python and its libraries offer.

One of the most widely used activation functions in these models is the ReLu function, also known as the Rectified Linear Activation Function, which helps improve the computational efficiency of deep learning models.

The ReLu function is cheap to compute and passes positive activations through unchanged, which keeps the model simple and improves its computational efficiency.

The ReLu activation function states that: if the input is negative, return 0; else, return the input itself.

ReLu function (plot)

Now that we understand the ReLu function, let us implement it in Python.


Basic Implementation of the ReLu function in Python

First, we will create a custom ReLu function as shown below.

Example:

Here, we have created a user-defined function that makes use of the max() function to compare the passed value with 0.0: negative values are clipped to 0.0, while non-negative values are returned unchanged.

As val is positive, the function returns the value itself, 1.0. The variable val1 is negative, so the function returns 0.0.

def ReLu(val):
    # Return the input unchanged if it is non-negative, otherwise return 0.0
    return max(0.0, val)

val = 1.0
print(ReLu(val))

val1 = -1.0
print(ReLu(val1))

Output:

1.0
0.0
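
Note that this scalar version handles only one number at a time. In practice, ReLu is usually applied element-wise to whole arrays; a minimal vectorized sketch, assuming NumPy is available (the function name ReLu_vectorized and the sample values are ours for illustration), could look like this:

import numpy as np

def ReLu_vectorized(arr):
    # np.maximum compares each element against 0.0 and clips negatives to 0.0
    return np.maximum(0.0, arr)

values = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(ReLu_vectorized(values))  # negative entries become 0.0, the rest pass through unchanged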

Gradient value of the ReLu function

When we calculate the derivative of the ReLu function, the gradient for values less than zero, i.e. negative values, is 0. This means the weights and biases of the model are not updated for those inputs, which can lead to problems while training the model.
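
To make this concrete, here is a small illustrative sketch (the function name ReLu_gradient is ours, not a library API) of what the derivative of ReLu looks like: 1 for positive inputs and 0 for negative inputs, so negative inputs contribute nothing to the weight update.

def ReLu_gradient(val):
    # Derivative of ReLu: 1 for positive inputs, 0 for negative inputs
    # (the derivative at exactly 0 is undefined; returning 0 there is a common convention)
    return 1.0 if val > 0 else 0.0

print(ReLu_gradient(2.5))   # 1.0
print(ReLu_gradient(-2.5))  # 0.0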

To overcome this limitation of the ReLu function, we will now discuss the Leaky ReLu function.


Leaky ReLu function

As discussed above, to overcome the zero-gradient issue for negative inputs to the ReLu function, the Leaky ReLu function multiplies negative inputs by a small constant instead of clipping them to 0.

f(num) = 0.001 * num,  num < 0
       = num,          num >= 0

As expressed above, negative inputs are multiplied by a small constant (0.001 in our case).

Now, when we look at the gradient of the Leaky ReLu function, the gradient for negative numbers is non-zero, which means the weights of the model are updated properly.

Example:

def leaky_ReLu(a):
    # Pass positive inputs through unchanged
    if a > 0:
        return a
    # Scale negative inputs by a small constant instead of returning 0
    else:
        return 0.001 * a

a = -1.0
print(leaky_ReLu(a))

Output:

-0.001
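
Following the same idea, a quick sketch of the Leaky ReLu gradient (using the same 0.001 constant as above; the function name is ours for illustration) shows that negative inputs now receive a small, non-zero gradient instead of 0:

def leaky_ReLu_gradient(a):
    # Gradient is 1 for positive inputs and the small constant 0.001 for negative inputs
    return 1.0 if a > 0 else 0.001

print(leaky_ReLu_gradient(2.5))   # 1.0
print(leaky_ReLu_gradient(-2.5))  # 0.001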

Conclusion

With this, we have come to the end of this topic. Feel free to comment below in case you come across any questions.

For more such posts related to Python programming, stay tuned with us.

Till then, Happy learning!! 🙂