Hello readers, in this article, let’s try to understand one of the most efficient algorithms used in artificial intelligence: the beam search algorithm.

## What is the Beam Search Algorithm?

The Beam Search Algorithm is a modified version of the best-first search algorithm. It selects nodes according to a score, such as a conditional probability in language models. Each iteration of beam search can carry multiple candidate paths, which are ordered and pruned according to their accumulated path cost. Hence, it is used to find the shortest or the cheapest path.

## How does Beam Search work?

Beam search is a heuristic search strategy that always expands the W best nodes at each level. It advances level by level, starting each step from the top W nodes of the current level. Beam search builds its search tree the way breadth-first search does: at every level of the tree, it generates all successors of the current level's states and sorts them in ascending order of heuristic value.

It only considers W states at each level, though; the other nodes are discarded. The best nodes are chosen based on their heuristic cost, and W here is called the beam width. If B is the branching factor, only W * B nodes are examined at any level, of which W are kept. The smaller the beam width, the more states are pruned. If W = 1, the search reduces to hill climbing, in which only the single best successor is picked at each step. If the beam width is unbounded, no states are pruned and beam search is identical to breadth-first search.
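As a minimal sketch of this level-by-level pruning (the toy tree, the heuristic, and the function names here are made up purely for illustration), one expansion step can be written as:

```
import heapq

# One level of beam search (hypothetical example): keep the W best
# successors of the current frontier, scored by a heuristic h.
def expand_level(frontier, successors, h, W):
    # Generate all W * B successors of the current level...
    candidates = [child for node in frontier for child in successors(node)]
    # ...then keep only the W best by heuristic value (lower is better).
    return heapq.nsmallest(W, candidates, key=h)

# Toy tree: node n has children 3n + 1, 3n + 2, 3n + 3 (branching factor B = 3).
successors = lambda n: [3 * n + 1, 3 * n + 2, 3 * n + 3]
h = lambda n: n % 7  # an arbitrary heuristic, for illustration only

level = [0]
for _ in range(3):
    level = expand_level(level, successors, h, W=2)
print(level)
```

Setting W = 1 here turns each step into a greedy hill-climbing move, while a very large W keeps every successor, exactly as described above.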

## Algorithm

Input: START and GOAL states

Local Variables: OPEN, NODE, SUCCs, W_OPEN, FOUND

Output: Yes or No

**Method:**

- NODE = Root_node; FOUND = false
- If NODE is the goal node, then FOUND = true; otherwise, find the SUCCs of NODE, if any, together with their estimated costs, and store them in the OPEN list
- While (FOUND = false and it is still possible to proceed) do
  - Sort the OPEN list
  - Select the top W elements from the OPEN list, put them in the W_OPEN list, and empty the OPEN list
  - For each NODE in the W_OPEN list:
    - If NODE = goal state, then FOUND = true; otherwise, find the SUCCs of NODE, if any, together with their estimated costs, and store them in the OPEN list
- End while loop
- If FOUND = true, then return Yes; otherwise, return No
- Stop.
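The pseudocode above can be sketched in Python roughly as follows; the toy graph (`children`) and the heuristic (`distance_to_12`) are hypothetical stand-ins for illustration:

```
# A sketch of the pseudocode above: search for GOAL, keeping only the
# W best nodes (by estimated cost) at each level.
def beam_search_goal(start, goal, successors, heuristic, W):
    open_list = [start]                            # OPEN
    while open_list:                               # able to proceed further
        open_list.sort(key=heuristic)              # sort OPEN list
        w_open, open_list = open_list[:W], []      # top W -> W_OPEN, empty OPEN
        for node in w_open:
            if node == goal:
                return True                        # FOUND = true -> Yes
            open_list.extend(successors(node))     # store SUCCs in OPEN
    return False                                   # No

# Toy graph: node n has children 2n and 2n + 1, up to a cutoff.
children = lambda n: [2 * n, 2 * n + 1] if n < 32 else []
distance_to_12 = lambda n: abs(12 - n)             # estimated cost

print(beam_search_goal(1, 12, children, distance_to_12, W=2))
print(beam_search_goal(1, 12, children, distance_to_12, W=1))
```

With W = 2 the goal node 12 is found; with W = 1 the branch leading to it is pruned away and the search returns No.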

## Complexity

- (W is the beam width, B is the branching factor, and D is the maximum depth of any path)
- Time Complexity: depends on the heuristic function; at each of the D levels the algorithm generates and scores at most W * B successors, so it is O(W * B * D)
- Space Complexity: since the algorithm only keeps W nodes at each level, plus their W * B candidate successors, it is O(W * B)
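As a quick sanity check on these bounds (the numbers W = 2, B = 3, and depth 4 are arbitrary, chosen only for illustration):

```
# With beam width W and branching factor B, each level generates at most
# W * B candidate successors, of which only W are kept.
W, B, depth = 2, 3, 4
generated_per_level = W * B                 # candidates scored per level
total_generated = generated_per_level * depth
print(generated_per_level)
print(total_generated)                      # candidates scored over all levels
```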

## Working

The search tree generated using the beam search algorithm, assuming W = 2 and B = 3, is given below. Here, the colored nodes are the ones selected, based on their heuristic values, for further expansion.

## Implementing the Beam Search Algorithm in Python

Here is a simple implementation of the beam search algorithm in Python. We use the NumPy module to handle the array data structure used in the computation. The main function, beam_search(), iterates over the rows of the distance matrix to find the shortest path. The parameters passed to this function are:

- distances – a matrix of the weight values between vertices
- beta – the width of the beam

```
from numpy import array

# Main function.
# beta here is the width of the beam, and distances can be considered as weights.
def beam_search(distances, beta):
    # Initialising the record of paths: each entry is [path, total distance].
    paths_so_far = [[list(), 0]]
    # Traverse the neighbouring vertices row by row.
    for idx, tier in enumerate(distances):
        if idx > 0:
            print(f'Paths kept after tier {idx-1}:')
            print(*paths_so_far, sep='\n')
        paths_at_tier = list()
        for i in range(len(paths_so_far)):
            path, distance = paths_so_far[i]
            # Extending each kept path with every vertex in this tier.
            for j in range(len(tier)):
                path_extended = [path + [j], distance + tier[j]]
                paths_at_tier.append(path_extended)
        paths_ordered = sorted(paths_at_tier, key=lambda element: element[1])
        # The best beta paths are saved; the rest are printed and discarded.
        paths_so_far = paths_ordered[:beta]
        print(f'\nPaths reduced to after tier {idx}: ')
        print(*paths_ordered[beta:], sep='\n')
    return paths_so_far

# Distance matrix.
dists = [[1, 4, 6, 8],
         [5, 2, 3, 4]]
dists = array(dists)

# Calculating the best paths.
best_paths = beam_search(dists, 2)
print('\nThe best paths:')
for beta_path in best_paths:
    print(beta_path)
```

## Output

From the output, we can see how paths are selected in the beam search algorithm. After the first iteration (tier 0), two paths are cut off and two are kept for further expansion. These two kept paths are extended in the next iteration (tier 1); six paths are cut off, and the two best-suited paths are kept and declared the best paths.

## Comparison

- Perhaps the beam search algorithm’s most prominent characteristic, aside from its robustness, is its ability to preserve the scalability and efficiency of systems with restricted resources when dealing with huge and dense graphs.
- Beam search has the benefit of requiring less memory than best-first search. This is because it does not need to store all of the successor nodes in a queue; instead, it keeps only the best beta (beam width) of them.
- However, it still suffers from some of best-first search’s shortcomings. The first is that it is incomplete: it may fail to find a solution even when one exists. The second is that it is not optimal: the answer it returns might not be the best one.
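To make the suboptimality concrete, here is a small sketch with hypothetical edge weights where a beam of width 1 misses the cheapest path:

```
# Layered graph with hypothetical weights: weights[t][i][j] is the cost
# of the edge from node i in layer t to node j in layer t + 1.
weights = [
    [[1, 4]],          # layer 0 (single start node) -> layer 1
    [[9, 9], [1, 1]],  # layer 1 -> layer 2
]

def beam_paths(weights, beta):
    paths = [([0], 0)]  # (path of node indices, total cost)
    for layer in weights:
        candidates = [(path + [j], cost + layer[path[-1]][j])
                      for path, cost in paths
                      for j in range(len(layer[path[-1]]))]
        paths = sorted(candidates, key=lambda c: c[1])[:beta]
    return paths

# A beam of width 1 greedily takes the cheap first edge (cost 1) and is
# then stuck with cost-9 second edges, for a total of 10; widening the
# beam to 2 recovers the true cheapest path, 4 + 1 = 5.
print(beam_paths(weights, beta=1)[0][1])
print(beam_paths(weights, beta=2)[0][1])
```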

**Application:** The beam search algorithm is widely used in situations where there may be more than one appropriate solution, such as machine translation.
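As a rough sketch of that machine-translation use case (the vocabulary and its probabilities below are a made-up toy model, not a real decoder, which would use learned conditional probabilities at each step):

```
from math import log

# A toy "language model": each word's log-probability, independent of
# context, for illustration only.
vocab_logprobs = {'the': log(0.5), 'cat': log(0.3), 'sat': log(0.2)}

def beam_decode(steps, beta):
    beams = [([], 0.0)]  # (sequence of words, total log-probability)
    for _ in range(steps):
        candidates = [(seq + [w], score + lp)
                      for seq, score in beams
                      for w, lp in vocab_logprobs.items()]
        # Keep the beta partial sentences with the highest probability.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beta]
    return beams

best_words, best_score = beam_decode(steps=2, beta=2)[0]
print(best_words)
```

At every decoding step, the beta most probable partial sentences are kept and extended, exactly as the article describes for nodes and heuristic costs.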

## Conclusion

In this article, we learned and implemented the beam search algorithm which is commonly used for artificial intelligence projects.