Random Search in Machine Learning: Hyperparameter Tuning Technique

Random search is a hyperparameter optimization technique and a popular alternative to grid search. In this article, we are going to study the concept of random search in machine learning.

Random search is a machine learning hyperparameter tuning technique that randomly samples hyperparameter combinations from a defined space, trains models, and selects the best performer. It’s more efficient than grid search and suitable for large parameter spaces.

Let’s explore the topic in detail.

Introduction to Random Search in Machine Learning

Random search is a hyperparameter tuning technique applied at the start of the optimization process to search for the hyperparameter values that yield the best-performing machine learning model. Hyperparameters are configuration settings that are not learned from the training data; they control the algorithm’s behaviour and performance, such as the model’s complexity, the learning rate, or the number of hidden layers in a neural network.
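
To make the distinction concrete: hyperparameters are fixed before training begins, while model parameters (such as a network’s weights) are learned during training. Here is a small illustrative sketch using scikit-learn’s MLPClassifier; the specific values are arbitrary examples, not recommendations.

from sklearn.neural_network import MLPClassifier

# Hyperparameters are chosen before training, not learned from data
model = MLPClassifier(
    hidden_layer_sizes=(32, 16),  # number and size of hidden layers
    learning_rate_init=0.001,     # learning rate
    alpha=0.0001                  # L2 regularization strength (model complexity)
)
# The network's weights, by contrast, are parameters learned by model.fit(X, y)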

Random Search Technique Overview

  1. Define the Hyperparameter Space: First, specify the hyperparameter search space. This includes the parameters to tune and their possible values or ranges, based on the specific model used.
  2. Randomly Sample Hyperparameters: Generate random combinations of hyperparameters by sampling from the defined search space. Each combination represents a unique configuration for the model.
  3. Train and Evaluate Models: For each sampled set of hyperparameters, train the model using the training data and evaluate its performance on a validation set. The evaluation metric, such as accuracy or loss, measures how well the model performs with the given hyperparameters.
  4. Select the Best Model: After training and evaluating multiple models with different hyperparameter combinations, choose the model that achieves the best performance based on the evaluation metric. This model and its corresponding hyperparameters are considered the optimal configuration. (A minimal sketch of these four steps follows below.)
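
To make these steps concrete, here is a minimal hand-rolled sketch of the loop. The model, the two sampled ranges, and the sample count of 20 are illustrative choices, not fixed parts of the technique.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Step 1: define the hyperparameter space (the ranges below are illustrative)
rng = np.random.default_rng(42)

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

best_score, best_params = -1.0, None
for _ in range(20):
    # Step 2: randomly sample one configuration from the space
    params = {
        'n_estimators': int(rng.integers(10, 200)),
        'max_depth': int(rng.integers(1, 20)),
    }
    # Step 3: train the model and evaluate it on a held-out validation set
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    # Step 4: keep the best performer seen so far
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)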

Example of Implementing Random Search

In this example, we use the Iris dataset as input. The dataset is split 80%/20% into training and test sets, which initializes the variables X_train, X_test, y_train, and y_test used in the rest of the code.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X = iris.data  # Features
y = iris.target  # Labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Now, we perform a random search to tune hyperparameters for a Random Forest Classifier using scikit-learn’s ‘RandomizedSearchCV’. We define a hyperparameter distribution, specify the number of sampled configurations (n_iter), and set the number of cross-validation folds (cv). Finally, we evaluate the best model found by the random search on the test set. Let’s see the code and implementation.

from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from scipy.stats import randint

param_dist = {
    'n_estimators': randint(10, 1000),
    'max_depth': randint(1, 20),
    'min_samples_split': randint(2, 20),
    'min_samples_leaf': randint(1, 20),
    # 'auto' was removed in recent scikit-learn versions; use 'sqrt', 'log2', or None
    'max_features': ['sqrt', 'log2', None]
}

rf = RandomForestClassifier()

random_search = RandomizedSearchCV(
    rf,
    param_distributions=param_dist,
    n_iter=100,
    cv=5,
    verbose=2,
    random_state=42,
    n_jobs=-1
)

random_search.fit(X_train, y_train)

print("Best parameters found: ", random_search.best_params_)

best_model = random_search.best_estimator_
accuracy = best_model.score(X_test, y_test)
print("Accuracy of best model: ", accuracy)

In the output, you can see that the best model reaches 100% accuracy on the test set. In the same way, you can apply random search to your own dataset to find the set of hyperparameters that gives the best results.

Advantages of Random Search in Python

Let’s look at some advantages of random search when using Python.

1. Efficiency

Random search does not exhaustively evaluate every possible combination; it samples only a fixed number of configurations from the search space, so it is usually more efficient than grid search, especially when the space is large.
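
As a back-of-the-envelope comparison, suppose we discretized the distributions from the example above into the (hypothetical) grid sizes shown in the comments; an exhaustive grid search would then need far more model fits than the 100 configurations sampled by the random search.

import math

# Hypothetical discretization of the search space from the example above
grid = {
    'n_estimators': 10,       # e.g. 10 values spread over [10, 1000)
    'max_depth': 19,          # every integer in [1, 20)
    'min_samples_split': 18,  # every integer in [2, 20)
    'min_samples_leaf': 19,   # every integer in [1, 20)
    'max_features': 3
}
print(math.prod(grid.values()))  # 194940 configurations for exhaustive grid search
print(100)                       # vs. n_iter=100 for the random search above

With 5-fold cross-validation, each configuration costs five model fits in both cases, so the ratio between the two approaches is unchanged.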

2. Exploration of Hyperparameter Space

Random search can explore a much larger parameter space than grid search. Because values are drawn at random, it can land on promising hyperparameter settings that a fixed grid would never consider.
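
For example, random search can draw from continuous distributions, where every sample may be a value no finite grid contains. A small sketch; the C and gamma names here are just illustrative SVM-style hyperparameters, not tied to the earlier example.

from scipy.stats import loguniform, uniform

# Continuous distributions: each draw can be a value a fixed grid would miss
dist_C = loguniform(1e-3, 1e3)   # log-uniform over several orders of magnitude
dist_gamma = uniform(0.0, 1.0)   # uniform over [0, 1]
print(dist_C.rvs(3, random_state=0))
print(dist_gamma.rvs(3, random_state=0))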

3. Ease of Implementation

Random search is straightforward to implement compared with more sophisticated optimization techniques. It removes the need to set up a nested grid of hyperparameters, which can be complicated and time-consuming when the parameter space is large or continuous.

4. Parallelization

Random search naturally lends itself to parallel execution because each sampled configuration is evaluated independently, with no dependency on the others. This makes efficient use of computational resources and allows the system to run at full capacity when evaluations are performed on a cluster or across multiple processors.
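
Here is a minimal sketch of this idea, assuming joblib (the same backend scikit-learn’s n_jobs option uses) is available; the configuration ranges and sample count are illustrative.

import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

# Sample all configurations up front; none depends on another
configs = [
    {'n_estimators': int(rng.integers(10, 200)),
     'max_depth': int(rng.integers(1, 20))}
    for _ in range(20)
]

def evaluate(params):
    model = RandomForestClassifier(**params, random_state=0)
    return cross_val_score(model, X, y, cv=5).mean()

# Evaluate every configuration in parallel across all available cores
scores = Parallel(n_jobs=-1)(delayed(evaluate)(p) for p in configs)
best = configs[int(np.argmax(scores))]
print(best, max(scores))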

Summary

This article covered the fundamentals of random search in machine learning, a valuable method for optimizing model hyperparameters. We looked at the steps involved in the random search process, provided a practical example using the Iris dataset and scikit-learn, and highlighted the benefits of random search in Python: its efficiency, its ability to explore a wide range of hyperparameters, its simplicity, and its potential for parallelization.

As you work on machine learning projects, consider how random search can help improve your models’ performance. How might you use random search to identify the optimal hyperparameters and get the most out of your chosen algorithms?
