If you’re learning about neural networks, chances are you have come across the term **activation function**. In a neural network, an activation function decides whether a particular neuron will be activated or not. It takes the weighted sum of a node’s inputs, performs some mathematical computation on it (depending on the activation function), and outputs a value that determines whether the neuron fires.

There are many activation functions, such as ReLU, tanh, softmax, and sigmoid.

In this tutorial, we will be learning about the **sigmoid activation function**. So let’s begin!

## What is the sigmoid function – The math behind it

Sigmoid is a **non-linear** activation function. It is mostly used in models where we need to predict a probability. Since probabilities lie between 0 and 1, the **range of sigmoid** is the open interval **(0, 1)**: the output gets arbitrarily close to 0 and 1 but never actually reaches them.

Let’s have a look at the equation of the sigmoid function:

sigmoid(x) = 1 / (1 + e^(-x))        *…(1)*

Sigmoid is usually denoted using the Greek symbol sigma (σ). So, we can also write:

σ(x) = 1 / (1 + e^(-x))

In the above equation, ‘**e**‘ is Euler’s number. Its value is approximately **2.718**. Similarly, replacing x with -x:

σ(-x) = 1 / (1 + e^x)

In fact, we can derive a relation between the above two equations:

σ(x) = 1 − σ(-x)        *…(2)*

We can also prove this relation as shown below:

**LHS:**

σ(x) = 1 / (1 + e^(-x))

Multiplying the numerator and the denominator by e^x, it can also be written as:

σ(x) = e^x / (e^x + 1)

**RHS:**

1 − σ(-x) = 1 − 1 / (1 + e^x) = (1 + e^x − 1) / (1 + e^x) = e^x / (e^x + 1)

Therefore, LHS = RHS.

Hence, we have proved the relation.
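As a quick numerical sanity check (a minimal sketch using only the standard library), we can verify the relation sigmoid(x) = 1 − sigmoid(-x) for a few values of x:

```python
from math import exp, isclose

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x))
    return 1 / (1 + exp(-x))

# sigmoid(x) + sigmoid(-x) should equal 1 for any x
for x in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    assert isclose(sigmoid(x) + sigmoid(-x), 1.0)
```

If the relation did not hold, one of the assertions would fail; running the snippet silently confirms it.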

Another property of the sigmoid activation function is that it is **differentiable**. Let us see how we can differentiate it.

Differentiating *equation (1)* we get:

σ′(x) = d/dx [ (1 + e^(-x))^(-1) ] = e^(-x) / (1 + e^(-x))^2        *…(3)*

So, from equations (1), (2) and (3), we can write:

σ′(x) = [1 / (1 + e^(-x))] · [e^(-x) / (1 + e^(-x))] = σ(x) · σ(-x)

Or,

σ′(x) = σ(x) · (1 − σ(x))
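The derivative identity σ′(x) = σ(x)(1 − σ(x)) is easy to check numerically by comparing the analytic form against a central finite difference (a small illustrative sketch, not part of the tutorial’s plotting code):

```python
from math import exp, isclose

def sigmoid(x):
    return 1 / (1 + exp(-x))

def sigmoid_derivative(x):
    # analytic form: sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1 - s)

# compare against a central finite difference approximation
h = 1e-6
for x in [-2.0, 0.0, 1.5]:
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    assert isclose(sigmoid_derivative(x), numeric, rel_tol=1e-5)
```

This simple derivative formula is one reason sigmoid was historically popular: during backpropagation, the gradient can be computed from the activation value alone.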

Phew! That’s a lot of maths! Now, let us have a look at the graph of the sigmoid function.

## Sigmoid graph using Python Matplotlib

```python
# importing the required libraries
from math import exp
from matplotlib import pyplot as plt

# defining the sigmoid function
def sigmoid(x):
    return 1 / (1 + exp(-x))

# input values from -5 to 5
inputs = list(range(-5, 6))

# computing the sigmoid of each input
outputs = [sigmoid(x) for x in inputs]

# plotting the graph
plt.plot(inputs, outputs)
plt.title("Sigmoid activation function")
plt.grid()

# adding labels to the axes
plt.xlabel("x")
plt.ylabel("sigmoid(x)")

# highlighting the point (0, 0.5)
plt.scatter([0], [0.5], color="red", zorder=5)
plt.show()
```

**Output:**

*(The plot shows an S-shaped curve rising from near 0 to near 1, passing through the red-marked point (0, 0.5).)*

The above plot leads us to a few properties of the sigmoid function. They are:

- **S-shaped:** The graph of `sigmoid` is S-shaped, just like the graph of the `tanh` activation function.
- **Domain:** The domain of `sigmoid` is (-∞, +∞).
- **Continuous:** The `sigmoid` function is continuous everywhere.
- The `sigmoid` function is **monotonically increasing**.
- **sigmoid(0) = 0.5**
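These properties can be checked in a few lines of Python (a quick illustrative sketch, separate from the plotting code above):

```python
from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

# sigmoid(0) = 0.5 exactly, since e^0 = 1
assert sigmoid(0) == 0.5

# monotonically increasing: outputs are sorted when inputs are sorted
xs = [x / 10 for x in range(-50, 51)]
ys = [sigmoid(x) for x in xs]
assert ys == sorted(ys)

# outputs always stay strictly between 0 and 1
assert all(0 < y < 1 for y in ys)
```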

## Relation between sigmoid and tanh

We have previously discussed the tanh activation function in our tutorial.

The equation of tanh is:

tanh(x) = (e^x − e^(-x)) / (e^x + e^(-x))

And,

σ(x) = 1 / (1 + e^(-x))

These two functions are related as follows:

tanh(x) = 2·σ(2x) − 1

In other words, tanh is just a scaled and shifted sigmoid: it maps to the range (-1, 1) instead of (0, 1).
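The relation tanh(x) = 2·sigmoid(2x) − 1 can be verified numerically with a short sketch, using the standard library’s `math.tanh`:

```python
from math import exp, tanh, isclose

def sigmoid(x):
    return 1 / (1 + exp(-x))

# tanh(x) = 2 * sigmoid(2x) - 1 for any x
for x in [-2.0, -0.3, 0.0, 1.0, 3.5]:
    assert isclose(tanh(x), 2 * sigmoid(2 * x) - 1)
```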

## Summary

Let’s have a quick recap: The sigmoid activation function is non-linear, monotonic, S-shaped, differentiable, and continuous. That’s all! We have learned about the sigmoid activation function and also its properties.

Hope you found this tutorial helpful. Do check out more such tutorials related to Python here.