Understanding the Precision-Recall Curve in Python

When assessing the performance of a machine learning model on a classification task, accuracy might not be enough. On an imbalanced dataset in particular, accuracy can be misleading: a model that always predicts the majority class can score highly while learning nothing useful. In these cases, we need other metrics, such as precision and recall, to evaluate how well our model is really functioning.

Recall and precision give deeper insight into the workings of our model. These metrics take into account false positives as well as false negatives, which is essential when evaluating a model. The precision-recall curve visualizes these metrics at various threshold levels so we can observe the tradeoff between the two.

Precision measures the accuracy of positive predictions, while recall measures the model’s ability to detect all positive instances. Especially on imbalanced datasets, these two metrics give a clearer picture of model performance than accuracy alone. In this article, we will see what exactly these metrics are, implement them in Python with sklearn, and visualize the precision-recall curve using seaborn and matplotlib to evaluate our model better. Let’s get started!

Deep Dive into Precision and Recall

In this section, we will understand what precision and recall are in depth. But before we dive right in, let’s first get a brief idea of what a threshold is, since we are going to use this term extensively in this section. The threshold refers to the probability cutoff used to make decisions about class labels.

When a model assigns a class to a new instance, it first assigns a probability score to each of the two classes, 0 and 1. Based on these scores, the model decides which of the two classes the new instance belongs to. This is where the threshold comes into play.

For example, suppose we are classifying an email as spam or not spam, with the threshold set at 0.5. If a new email gets a probability score greater than or equal to 0.5, it is classified as “spam” (1); otherwise it is classified as “not spam” (0). Now let’s see what precision and recall are.
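
To make this concrete, here is a minimal sketch of thresholding in action, using made-up probability scores from a hypothetical spam classifier:

import numpy as np

# Made-up probability scores assigned to five emails by a spam classifier
y_probs = np.array([0.92, 0.35, 0.50, 0.08, 0.76])

# Scores >= 0.5 are labeled "spam" (1), everything below is "not spam" (0)
threshold = 0.5
y_pred = (y_probs >= threshold).astype(int)

print(y_pred)  # [1 0 1 0 1]

Raising the threshold makes the classifier stricter about labeling an email as spam; lowering it makes the classifier more lenient. This is exactly the knob that precision and recall react to.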

Precision: Precision measures the proportion of true positives among all the instances that the model has classified as positive. The formula for precision is:

Precision = True Positives (TP) / (True Positives (TP) + False Positives (FP))

The advantages of precision are:

  • In scenarios with class imbalance, focusing on precision helps keep false positives low while still correctly identifying true positives.
  • Precision also allows flexible threshold adjustment based on specific constraints, which is helpful in real-life scenarios such as medical diagnosis.
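
To see the formula in action, here is a quick worked example with made-up counts: suppose a model flags 50 emails as spam and 40 of them really are spam, so TP = 40 and FP = 10.

# Made-up counts for illustration: 40 true positives, 10 false positives
TP, FP = 40, 10

# Precision = TP / (TP + FP)
precision = TP / (TP + FP)
print(precision)  # 0.8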

Recall: Recall, also known as “sensitivity”, assesses the ability of a model to capture all positive instances. The formula for recall is:

Recall = True Positives (TP) / (True Positives (TP) + False Negatives (FN))

The advantages of recall are:

  • It captures the model’s ability to identify all positive instances.
  • Like precision, it remains informative when there is a class imbalance.
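
Continuing the same made-up scenario: suppose 60 emails are actually spam and the model catches 40 of them, so TP = 40 and FN = 20.

# Made-up counts for illustration: 40 true positives, 20 false negatives
TP, FN = 40, 20

# Recall = TP / (TP + FN)
recall = TP / (TP + FN)
print(recall)  # 0.666...

Note the tension between the two metrics: flagging fewer emails tends to raise precision but lower recall, which is exactly the tradeoff the precision-recall curve visualizes.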

Suggested: Email Spam Classification in Python.

Implementing Precision and Recall in Python: Step-by-Step

The precision-recall curve visualizes the trade-off between precision and recall at various threshold values. Like the metrics themselves, the curve is especially informative for imbalanced datasets, where negative instances outnumber positive ones. Through this curve, we can evaluate the model’s performance across the full range of thresholds.

Let’s implement this curve in Python and also calculate the precision and recall scores along with visualizing the confusion matrix for the same.

Since this article is for demonstration purposes, I will use a synthetic dataset generated with make_classification from sklearn.datasets rather than a real-world dataset; you can substitute your own dataset and model. The code remains the same for the precision and recall scores, the precision-recall curve, and the confusion matrix.

# Importing required modules
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, precision_recall_curve)
import seaborn as sns
import matplotlib.pyplot as plt

# Generating synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, weights=[0.9, 0.1], random_state=42)

# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Training a logistic regression model
model = LogisticRegression()
model.fit(X_train, y_train)

# Predicting probabilities of positive class
y_probs = model.predict_proba(X_test)[:, 1]

# Predicting classes based on the probability threshold of 0.5
y_pred = (y_probs >= 0.5).astype(int)

# Calculating precision score
precision = precision_score(y_test, y_pred)

# Calculating recall score
recall = recall_score(y_test, y_pred)

print("Precision:", precision)
print("Recall:", recall)

# Computing confusion matrix
conf_matrix = confusion_matrix(y_test, y_pred)

# Plotting the confusion matrix
plt.figure(figsize=(8, 6))
sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted Label')
plt.ylabel('True Label')
plt.title('Confusion Matrix')
plt.show()

# Computing precision-recall pairs
precision, recall, _ = precision_recall_curve(y_test, y_probs)

# Plotting the precision-recall curve
plt.plot(recall, precision, marker='.', color='red')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.show()

Let’s look at the output one by one. The precision and recall scores will be printed as follows:

Precision: 0.46153846153846156
Recall: 0.3

The confusion matrix would be:

Confusion Matrix
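
The precision and recall printed earlier can be read straight off this matrix. Here is a minimal sketch, relying on sklearn’s standard layout where rows are true labels and columns are predicted labels:

# Unpacking the 2x2 confusion matrix (rows: true labels, columns: predicted labels)
tn, fp, fn, tp = conf_matrix.ravel()

print("Precision:", tp / (tp + fp))
print("Recall:", tp / (tp + fn))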

And finally, the precision-recall curve would look like:

The Precision-Recall Curve
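
To see the tradeoff along this curve for yourself, you can recompute both scores at a different threshold. This sketch reuses y_probs and y_test from the code above; the exact numbers will depend on your data:

# Lowering the threshold from 0.5 to 0.3 flags more instances as positive,
# which typically raises recall at the cost of precision
y_pred_low = (y_probs >= 0.3).astype(int)

print("Precision:", precision_score(y_test, y_pred_low))
print("Recall:", recall_score(y_test, y_pred_low))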

You might also like: What Are the Different Types of Classification Algorithms?

Summary

Precision quantifies accuracy for positive predictions, while recall reflects the ability to find all relevant cases. Examining both provides a robust view of real-world effectiveness. Thanks to the in-built functions in libraries such as sklearn, together with visualization libraries such as seaborn and matplotlib, generating these scores and curves takes only a few lines of Python code.

With an understanding of precision-recall analysis, developers can make informed decisions when tuning models. Detecting pain points like low recall guides refinement towards reliable, perceptive systems. Going forward, we can continue leveraging these best practices to wring the most value out of our machine learning pipelines. What challenging dataset will you tackle next with these insights?