Understanding Markov Chain Monte Carlo System

Markov chains model processes in which the probability of the next state depends only on the current state, not on the history of how that state was reached (the Markov property). They are used extensively in physics and materials science.

Monte Carlo simulations predict outcomes by running many repeated random trials, which gives us a probability distribution over the possible results. This technique is used very frequently in the financial world.

Combining the above two concepts gives us Markov Chain Monte Carlo (MCMC) systems. In this article, we will look at each concept individually and then together, and implement them in Python.

Recommended: Monte-Carlo Simulation to find the probability of Coin toss in python

What are Markov Chains?

As stated above, Markov chains predict the next state using only the current state. Let's understand this with Python code.

import numpy as np
from random import choices

# Define states and transition matrix:
# rows = current state, columns = next state (order: Sunny, Rainy, Cloudy);
# each row sums to 1
states = ["Sunny", "Rainy", "Cloudy"]
transition_matrix = np.array([
    [0.7, 0.2, 0.1],  # from Sunny
    [0.3, 0.4, 0.3],  # from Rainy
    [0.2, 0.5, 0.3]   # from Cloudy
])

# Start in Sunny state
current_state = "Sunny"

# Simulate 5 transitions and print the sequence
for i in range(5):
    next_state = choices(states, weights=transition_matrix[states.index(current_state)], k=1)[0]
    current_state = next_state
    print(f"Day {i+1}: {current_state}")

Let us understand the given situation in the code above.

The list states defines the three possible weather conditions, and transition_matrix is a 3×3 matrix of transition probabilities. The first row says that a sunny day is followed by another sunny day with probability 70%, by a rainy day with probability 20%, and by a cloudy day with probability 10%. Similarly, the second and third rows give the probabilities of each kind of weather after a rainy day and a cloudy day, respectively. Starting from current_state = "Sunny", the loop repeatedly samples the next day's weather from the row corresponding to the current state.

The above code simulates a sequence of weather conditions, each day depending only on the day before. Here's the output:

Markov Chain
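A useful property worth seeing here (a standard result for Markov chains, not shown in the code above): raising the transition matrix to a high power reveals the chain's long-run (stationary) distribution — the share of days that are sunny, rainy, or cloudy over a long simulation, regardless of the starting state. A minimal sketch reusing the same weather matrix:

```python
import numpy as np

transition_matrix = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.5, 0.3],
])

# Raise the matrix to a high power: every row converges to the
# stationary distribution (Sunny, Rainy, Cloudy)
long_run = np.linalg.matrix_power(transition_matrix, 50)
print(long_run[0].round(3))  # approximately [0.466 0.328 0.207]
```

So in the long run, roughly 47% of days are sunny no matter what today's weather is. This "forget the starting point, settle into a stationary distribution" behavior is exactly what MCMC exploits later in the article.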

Let’s move ahead and understand what Monte Carlo simulation is.

What is Monte Carlo simulation?

Monte Carlo simulations predict future possibilities through repeated random trials. Let us understand this with Python code that estimates the probability of getting heads or tails in a coin toss.

import random

# Number of simulations
num_simulations = 10000

# Initialize counts for heads and tails
heads = 0
tails = 0

# Simulate coin toss
for _ in range(num_simulations):
    # Simulate flip using random.choice
    coin_flip = random.choice(['heads', 'tails'])
    
    # Update counts based on result
    if coin_flip == 'heads':
        heads += 1
    else:
        tails += 1

# Calculate probabilities
heads_prob = heads / num_simulations
tails_prob = tails / num_simulations

# Print results
print(f"Number of heads: {heads}")
print(f"Number of tails: {tails}")
print(f"Probability of heads: {heads_prob:.4f}")
print(f"Probability of tails: {tails_prob:.4f}")

In the above code, we set the total number of simulations to 10,000. Each flip increments either the heads or the tails count, and at the end we divide each count by the total to get the estimated probabilities. Keep in mind that as we increase the number of simulations, the estimates converge toward the true probabilities (the law of large numbers). Let's look at the output.

Monte Carlo Simulation Output
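The convergence described above can be seen directly by re-running the experiment at increasing sample sizes. A small sketch, with a fixed seed for reproducibility (the seed value is arbitrary):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Re-run the coin-toss experiment at increasing sample sizes to watch
# the estimate tighten around the true probability of 0.5
for n in (100, 1000, 10000, 100000):
    heads = sum(random.choice((0, 1)) for _ in range(n))
    print(f"n = {n:>6}: P(heads) = {heads / n:.4f}")
```

The estimates at small n wander noticeably, while the larger runs cluster close to 0.5 — the typical error shrinks on the order of 1/√n.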

The estimated probabilities of heads and tails come out close to 50-50, as expected. Now let's look at Markov Chain Monte Carlo systems.

Markov Chain Monte Carlo systems

Markov Chain Monte Carlo systems combine both of these concepts: we run a Markov chain whose long-run (stationary) distribution is the distribution we want to sample from, and then use the chain's states as Monte Carlo samples. We can use this to estimate the value of π. A random walk wanders over the square [-1, 1] × [-1, 1], rejecting any proposed step that would leave the square — which makes the chain's stationary distribution uniform over the square — and we count the fraction of states that land inside the unit circle. Since the circle covers π/4 of the square's area, that fraction converges to π/4. Let's look at its Python implementation:

import random

random.seed(42)  # fixed seed so runs are reproducible

iterations = 100000
step = 0.25           # proposal step size for the random walk
x, y = 0.0, 0.0       # start the chain at the center of the square

inside_circle = 0

for _ in range(iterations):
    # Propose a small random step (random-walk Metropolis)
    new_x = x + random.uniform(-step, step)
    new_y = y + random.uniform(-step, step)

    # The target is uniform on the square [-1, 1] x [-1, 1], so the
    # Metropolis rule reduces to: accept iff the proposal stays inside
    if -1 <= new_x <= 1 and -1 <= new_y <= 1:
        x, y = new_x, new_y

    # Count chain states that fall inside the unit circle
    if x**2 + y**2 <= 1:
        inside_circle += 1

pi_estimate = 4 * inside_circle / iterations
print(f"Estimated pi: {pi_estimate:.4f}")

Value of π Using MCMC
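For comparison, the plain Monte Carlo estimate of π from the previous section's idea draws independent points instead of walking a chain. A quick sketch (seed value arbitrary):

```python
import random

random.seed(7)  # fixed seed for reproducibility

# Plain Monte Carlo: independent uniform points, no Markov chain
n = 100000
hits = sum(
    1 for _ in range(n)
    if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1
)
pi_direct = 4 * hits / n
print(f"Direct Monte Carlo estimate: {pi_direct:.4f}")
```

Both estimators converge to π, but the independent-sampling version converges faster here because chain samples are correlated; MCMC earns its keep on problems where independent sampling from the target is not possible.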

This run gives us an estimate of π that approaches the true value as the number of iterations grows. This concludes the Markov Chain Monte Carlo process.
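The same Metropolis idea generalizes well beyond the π example: it can sample from almost any target distribution known only up to a constant factor. As an illustration not in the original article, here is a minimal random-walk Metropolis sampler targeting a standard normal distribution (function and variable names are my own):

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

def target(x):
    # Unnormalized density of a standard normal
    return math.exp(-x * x / 2)

samples = []
x = 0.0
for _ in range(50000):
    proposal = x + random.uniform(-1, 1)  # symmetric random-walk proposal
    # Metropolis rule: always accept uphill moves, accept downhill
    # moves with probability target(proposal) / target(x)
    if random.random() < target(proposal) / target(x):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"sample mean = {mean:.2f}, sample variance = {var:.2f}")
```

The sample mean comes out near 0 and the variance near 1, matching the standard normal — even though we never computed the normalizing constant 1/√(2π).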

Conclusion

The Markov Chain Monte Carlo system is a powerful tool for analyzing and understanding systems with uncertainty, and it is applied across many fields and disciplines. It does have drawbacks, though, such as high computational cost and sensitivity to modeling assumptions and implementation choices (for example, the proposal step size). Hope you enjoyed reading!

Recommended: Monte Carlo in Python