Backtracking Line Search Algorithm for Unconstrained Optimization

Implementing Backtracking Line Search Algorithm For Unconstrained Optimization Problems

Optimization is a fundamental task in computing and applied mathematics. Unconstrained optimization problems are those in which we need to maximize or minimize a function without any constraints on the decision variables. Such problems arise in many fields, including economics, engineering, and machine learning.

The backtracking line search algorithm is an iterative technique used in optimization to find a suitable step size along a given search direction. It shrinks a candidate step size repeatedly until the Armijo-Goldstein condition is satisfied, so the algorithm can make steady progress toward the optimal solution without taking overly large steps.

In this article, we will look at how we can implement the backtracking line search algorithm for unconstrained optimization problems. Let’s get started!

The Backtracking Line Search Algorithm – An Overview

Before we jump into the intricacies of the algorithm, let’s understand what an unconstrained optimization problem is. An unconstrained optimization problem asks for the value of a variable (say, x) that minimizes or maximizes an objective function expressed in terms of that variable, usually denoted f(x).

In these problems, our goal is to find the value of the variable at which f(x) attains its minimum (or maximum) over its entire domain, with no restrictions on the values the variable can take.
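As a quick illustration (the function and sample points below are purely for demonstration, not part of the algorithm), consider f(x) = (x − 3)², which can be evaluated at any real x and is smallest at x = 3:

# A minimal, purely illustrative example of an unconstrained problem:
# f(x) = (x - 3)**2 can be evaluated at any real x, and its minimum
# lies at x = 3.
def f_demo(x):
    return (x - 3) ** 2

for x in [0.0, 1.0, 3.0, 5.0]:
    print(x, f_demo(x))  # values 9, 4, 0, 4 -> smallest at x = 3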

The backtracking line search algorithm determines a step size that makes good progress toward the optimum along a given search direction.

Starting from an initial guess, it shrinks the step size iteratively until a specified test, called the Armijo-Goldstein condition, is satisfied; combined with a descent direction, this drives the iterates toward a local minimum.

Recommended: A Beginner’s Guide to Non-linear Equations and Constraints.

The Armijo-Goldstein Condition

The Armijo-Goldstein condition, also known as the sufficient decrease condition, is used in many optimization methods to decide whether a candidate step size along a given direction should be accepted. The condition compares the actual reduction in the objective function with the reduction predicted by a linear approximation.

The Armijo-Goldstein condition is:

f(x + α⋅p) ≤ f(x) + c⋅α⋅∇f(x)ᵀp

In the above formula:

  • f(x): The objective function to be minimized or maximized.
  • α: The step size being tested.
  • p: The search direction along which the algorithm moves.
  • ∇f(x): The gradient (derivative) of the objective function at x.
  • c: A small constant between 0 and 1, also called the Armijo-Goldstein parameter.

The left-hand side of the inequality is the objective function value at the new point obtained by taking a step of size α in the direction p. The right-hand side is the value predicted by the linear approximation, obtained by adding a small fraction (c⋅α) of the directional derivative ∇f(x)ᵀp to the current objective value f(x). The step size is accepted only if the actual value does not exceed this prediction.
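To make the inequality concrete, here is a small, self-contained check. The objective f(x) = x², the point x = 1, the direction p = −1, and the values of α and c are chosen purely for illustration:

# Evaluating both sides of the Armijo-Goldstein condition for the
# illustrative objective f(x) = x**2 at x = 1, with direction p = -1.
# The choices of alpha and c below are arbitrary examples.
x, p, alpha, c = 1.0, -1.0, 0.5, 0.1

f = lambda v: v ** 2
grad = 2 * x                        # gradient of x**2 at x = 1 is 2

lhs = f(x + alpha * p)              # f(0.5) = 0.25
rhs = f(x) + c * alpha * grad * p   # 1 + 0.1 * 0.5 * 2 * (-1) = 0.9

print(lhs <= rhs)                   # True: this alpha gives a sufficient decrease

Since the left-hand side (0.25) stays below the right-hand side (0.9), a step size of 0.5 would be accepted in this toy setting.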

Implementing the Backtracking Line Search Algorithm in Python

The diagram below shows the steps of the backtracking line search algorithm: start with an initial step size, test the Armijo-Goldstein condition, and multiply the step size by a contraction factor until the condition is satisfied. We will implement these steps in the same way in Python.

Steps of the Backtracking Line Search Algorithm

Let’s take a look at the code. We will be using the numpy library in this implementation. We define a function that takes the objective function 'f', its gradient 'grad_f', the current point 'x', the search direction 'p', an initial step size 'alpha' with a default value of 0.5, a contraction factor 'beta' used to shrink the step size with a default value of 0.8, and 'max_iter', the maximum number of iterations the algorithm will run. In this program we will allow 100 iterations.

import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha=0.5, beta=0.8, max_iter=100):
    """
    Backtracking line search algorithm for unconstrained optimization.

    Parameters:
        f (function): Objective function to minimize.
        grad_f (function): Gradient of the objective function.
        x (ndarray): Current point (numpy array).
        p (ndarray): Search direction (numpy array).
        alpha (float): Initial step size (default: 0.5).
        beta (float): Contraction factor for step size (default: 0.8).
        max_iter (int): Maximum number of iterations (default: 100).

    Returns:
        float: Optimal step size.
    """
    # Initialize step size from alpha
    t = alpha

    # Armijo-Goldstein condition parameter (sufficient decrease constant)
    c = 0.1

    # Iteratively adjust step size
    for _ in range(max_iter):
        # Evaluate objective function at new point
        f_new = f(x + t * p)

        # Evaluate Armijo-Goldstein condition
        if f_new <= f(x) + c * t * np.dot(grad_f(x), p):
            return t  # Optimal step size found

        # Reduce step size
        t *= beta

    # If no optimal step size found, return the final step size
    return t

# Example usage:
# Define objective function and its gradient
def f(x):
    return x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([2*x[0], 2*x[1]])

# Initial point and search direction
x = np.array([1.0, 1.0])
p = np.array([-1.0, -1.0])

# Perform backtracking line search
optimal_step_size = backtracking_line_search(f, grad_f, x, p)

print("Optimal Step Size:", optimal_step_size)

In the program above, we initialized the step size from alpha and set the Armijo-Goldstein constant c to 0.1. We also defined a simple objective function, f(x) = x₀² + x₁², along with an initial point and a search direction.

The optimal step size is the output as given below:

Optimal Step Size: 0.5
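To see how this step size is typically used in practice, the sketch below plugs backtracking_line_search into a simple gradient-descent loop, reusing the f, grad_f, and numpy import defined above. The iteration limit and stopping tolerance are arbitrary choices for this sketch, not part of the algorithm itself:

# A minimal gradient-descent loop built on the line search above.
# It reuses f, grad_f, and numpy from the earlier code; the iteration
# limit and tolerance are arbitrary choices for this sketch.
x = np.array([1.0, 1.0])

for _ in range(50):
    grad = grad_f(x)
    if np.linalg.norm(grad) < 1e-8:   # stop once the gradient is (almost) zero
        break
    p = -grad                         # steepest-descent direction
    t = backtracking_line_search(f, grad_f, x, p)
    x = x + t * p                     # take the accepted step

print("Minimizer:", x)                # approaches [0, 0] for f(x) = x0**2 + x1**2

For this quadratic, the very first accepted step already lands on the minimizer [0, 0], so the loop exits after a couple of iterations.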

Suggested: Optimization in Python – A Complete Guide.

Summary

The backtracking line search algorithm is a simple but effective tool for unconstrained optimization. By shrinking the step size until the Armijo-Goldstein condition holds, it ensures steady progress toward the optimum without overshooting. Python and NumPy make the implementation straightforward, and the same routine can be dropped into gradient-based methods across many fields.