How to Compute the Divergence of a Vector Field Using Python?


Divergence is a widely used term in fields such as physics, mathematics, and biology. In everyday language, the word divergence represents a separation or movement away from a point; it can also refer to a point at which two or more lines move away from each other.

In vector calculus, divergence measures how strongly a vector field spreads out from (or converges toward) a point. In other words, it quantifies the net outward flow of the field in a small region around that point.

Divergence is an operator that takes a vector field as input and returns a scalar field as output. A vector field assigns both a quantity and a direction to every point, whereas a scalar field has only a quantity. One daily example of divergence is the flow of water from a tap.

Computing the divergence comes down to taking the partial derivative of each component of the field with respect to its own coordinate and then summing those derivatives.
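In two dimensions, for a field F = Fx i + Fy j, this means:

div F = ∂Fx/∂x + ∂Fy/∂y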

There is a module called ‘sympy’ which can be used to deal with calculus.

Learn how to work with calculus using sympy.
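For instance, here is a minimal sketch of a symbolic divergence computation with sympy.vector (the field is the same one we will treat numerically later in this article):

import sympy
from sympy.vector import CoordSys3D, divergence

R = CoordSys3D('R')  # Cartesian coordinate system with unit vectors R.i, R.j, R.k

# The field (2y^2 + x - 4)i + cos(x)j
F = (2 * R.y**2 + R.x - 4) * R.i + sympy.cos(R.x) * R.j

print(divergence(F))  # prints 1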

We can compute the divergence of a vector field using Python. The ‘scipy’ library offers a wide range of numerical routines, although it has no function dedicated to divergence; the Numpy library is all we need to compute divergence manually.

You can learn more about the scipy module in its documentation.

Let us go through the Numpy library and learn how to compute divergence with Numpy.

The Numpy Library

Numpy stands for Numerical Python. It mainly works with arrays, and the popular pandas library is built on top of it. It was developed to bring the computational power of languages like C and Fortran to Python.

It has many built-in methods for linear algebra, Fourier transforms, matrix decompositions, and much more.


Divergence of a Vector Field With np.gradient

The gradient method of the Numpy library calculates the numerical derivative of an array using second-order accurate central differences in the interior (and first- or second-order one-sided differences at the boundaries). The syntax of the method is as follows.

numpy.gradient(f, *varargs, axis=None, edge_order=1)

Let us see the arguments of the method.

f: This argument takes an array or an array-like object as input.

varargs: Variable positional arguments that determine the spacing between sample points for each dimension of the input array; you can pass scalar spacings or coordinate arrays.

axis: We can specify the axis (or axes) along which the gradient is computed.

edge_order: Gradient is calculated using N-th order accurate differences at the boundaries. The default is 1.
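Before applying it to a vector field, here is a quick sketch of np.gradient on a one-dimensional array (the sine curve is just an illustration):

import numpy as np

x = np.linspace(0, 2 * np.pi, 100)  # sample points
y = np.sin(x)

dy = np.gradient(y, x)  # numerical derivative of sin(x); close to cos(x)
print(np.max(np.abs(dy - np.cos(x))))  # small discretization error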

Let us see the code to compute divergence. We are going to calculate the divergence of a typical vector field, F = (2y^2 + x - 4)i + cos(x)j. We can find the divergence by applying partial derivatives to the components: div F = ∂/∂x(2y^2 + x - 4) + ∂/∂y(cos(x)) = 1 + 0 = 1, so the divergence of this field is the constant 1.

import numpy as np

# Components of the vector field F = (2y^2 + x - 4)i + cos(x)j
def F_x(x, y):
    return 2 * y**2 + x - 4

def F_y(x, y):
    return np.cos(x)

# Sample the field on a 5x5 grid over [-1, 1] x [-1, 1]
x = np.linspace(-1, 1, 5)
y = np.linspace(-1, 1, 5)
X, Y = np.meshgrid(x, y)  # default 'xy' indexing: x varies along axis 1

F_X = F_x(X, Y)
F_Y = F_y(X, Y)

# Partial derivatives: dFx/dx along axis 1, dFy/dy along axis 0
dFX = np.gradient(F_X, x, axis=1)
dFY = np.gradient(F_Y, y, axis=0)

div = dFX + dFY  # divergence = dFx/dx + dFy/dy
print(div)

In the first line, we are importing the numpy library as np.

We are creating two functions, F_x and F_y, where F is the vector field we decided on. F_x stores the first component of the field, (2y^2 + x - 4), and F_y stores the other component, cos(x).

We are creating a mesh grid of 5 values between -1 and 1 using the linspace method. This is necessary because np.gradient works on discrete samples; we cannot compute the divergence of the continuous field as it is, so we evaluate the field on a grid of points first.

We are computing the partial derivatives of the respective components using the np.gradient function: dFX holds ∂Fx/∂x (taken along axis 1, where x varies) and dFY holds ∂Fy/∂y (taken along axis 0, where y varies).

These components are added together, and the result is stored in a variable called div, which is the divergence of the field.

The printed result is a 5×5 array of ones, matching the analytical divergence of 1.

There is a simpler way to do the same thing using the same method. Instead of defining a separate function for each component of the field, we can pass the field as a list of components and compute the gradients directly.

import numpy as np

def compute_div(F, x, y):
    # F[0] is the x component, F[1] is the y component of the field
    par_x = np.gradient(F[0], x, axis=0)  # dFx/dx (x varies along axis 0)
    par_y = np.gradient(F[1], y, axis=1)  # dFy/dy (y varies along axis 1)
    div = par_x + par_y
    return div

x = np.linspace(-10, 10, 5)
y = np.linspace(-10, 10, 5)
X, Y = np.meshgrid(x, y, indexing='ij')  # 'ij' indexing: x along axis 0, y along axis 1

# The field (2y^2 + x - 4)i + cos(x)j as a list of components
F = [2 * Y**2 + X - 4,
     np.cos(X)]

divg = compute_div(F, x, y)
print("Divergence of the vector field:\n", divg)

We are creating a function called compute_div that takes the vector field (F) and the axes (x and y) as arguments.

We are directly calculating the partial derivatives of the field by specifying the index of the component. For example, the component 2 * y**2 + x - 4 is taken as F[0] and the other part, cos(x), is taken as F[1], and np.gradient is used to calculate each derivative.

These values are stored in the variables par_x and par_y. Their sum is stored in a variable called div, which the function returns.

We are also creating a mesh grid (with indexing='ij', so that x varies along axis 0 and y along axis 1) on which the field is evaluated. F stores the two components of the vector field we use. The function compute_div is called with the respective values, and the result is stored in divg.

The divergence is printed in the next line.

Again, the printed result is a 5×5 array of ones, as expected for a field whose divergence is the constant 1.

Conclusion

We have discussed the divergence of a vector field and the approach to finding the divergence of a vector field using the Numpy library.

Divergence is used to represent a separation or movement from a point. Divergence can also be a point at which two or more lines move away from each other. An example of divergence is water running from a tap or a person going further away from you on the road.

A vector field assigns a direction and a magnitude to each point, whereas a scalar field has only a magnitude. Divergence takes a vector field and returns a scalar field.

We have used the Numpy library’s gradient method to calculate the divergence of a 2D vector field. You can use the same method with your own vector field to compute the divergence. In case you are using a 3D vector field (which has a k component too), you just need to compute the partial derivative of the third component as well and sum all three together, as sketched below.
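For instance, here is a minimal 3D sketch along those lines (the field F = xi + yj + zk is our own choice for illustration; its divergence is 1 + 1 + 1 = 3):

import numpy as np

x = y = z = np.linspace(-1, 1, 5)
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')  # x, y, z vary along axes 0, 1, 2

# div F = dFx/dx + dFy/dy + dFz/dz
div = (np.gradient(X, x, axis=0)
       + np.gradient(Y, y, axis=1)
       + np.gradient(Z, z, axis=2))
print(div)  # a 5x5x5 array of threes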

References

You can find more about the gradient method in the NumPy documentation for numpy.gradient.

Stack Overflow.