A Quick Guide to PyTorch Loss Functions

Neural networks can do pretty much everything: from classifying whether an image shows a cat or a dog, to detecting objects, to generating entirely new images.

As impressive as they are, building and training a neural network model is not a piece of cake. To get good results from these models, we need to understand how they work and what parameters and objectives they need to perform correctly.

Loss functions are metrics used to evaluate model performance during training. PyTorch provides many built-in loss functions such as MSELoss, CrossEntropyLoss, and KLDivLoss for tasks like regression and classification. You can also create custom loss functions for more complex models by subclassing nn.Module and defining the forward pass.

PyTorch is an open-source machine learning library designed to make building deep learning models and neural networks easy. It is known for its dynamic computation graphs and fast execution. In this post, we will discuss the various loss functions PyTorch has to offer.

What are Loss Functions and Why are They Important?

When we are training a neural network, we need to check how close the model's predicted output is to the expected output. A loss function (also called an objective function or cost function) is a metric used to monitor the model's performance and how well it adapts to the training data.

As we train the model over several epochs, the loss should gradually decrease; if it does, the model's performance is considered good. The main objective is therefore to minimize this loss, primarily through gradient-based optimization, supported by techniques such as hyperparameter tuning and cross-validation.
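To make this concrete, here is a minimal, illustrative training loop (the small linear model, random data, and hyperparameters are made up purely for this sketch) showing how the loss is computed, backpropagated, and minimized over epochs:

import torch
import torch.nn as nn

# Toy setup for illustration: a small linear model and random regression data
model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 10)  # 64 samples with 10 features each
y = torch.randn(64, 1)   # random target values

for epoch in range(5):
    optimizer.zero_grad()           # clear gradients from the previous step
    predictions = model(x)          # forward pass
    loss = loss_fn(predictions, y)  # how far are the predictions from the targets?
    loss.backward()                 # compute gradients of the loss
    optimizer.step()                # update weights to reduce the loss
    print(f"epoch {epoch}: loss = {loss.item():.4f}")

If training is going well, the printed loss should trend downward from one epoch to the next.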

Loss functions are widely used metrics in deep learning, and there are many types for different use cases such as classification and regression.

If you are familiar with GANs (Generative Adversarial Networks), you might have seen the generator loss and the discriminator loss, which train the generator to produce realistic images and the discriminator to distinguish real images from fake ones.

Let us see the types of loss functions the PyTorch library has to offer.

Core Loss Functions in PyTorch

There are many loss functions that PyTorch offers for different use cases. Here is a list:

  • L1Loss
  • MSELoss
  • CrossEntropyLoss
  • CTCLoss
  • NLLLoss
  • PoissonNLLLoss
  • GaussianNLLLoss
  • KLDivLoss
  • BCELoss
  • BCEWithLogitsLoss
  • MarginRankingLoss
  • HingeEmbeddingLoss
  • MultiLabelMarginLoss
  • HuberLoss
  • SmoothL1Loss
  • SoftMarginLoss
  • MultiLabelSoftMarginLoss
  • CosineEmbeddingLoss
  • MultiMarginLoss
  • TripletMarginLoss
  • TripletMarginWithDistanceLoss

We are going to understand a few important losses from this list in the next section.

All the loss functions in the PyTorch library live in the torch.nn module, so they are accessed with the torch.nn prefix.

L1Loss – Mean Absolute Error (MAE) Loss

The L1Loss is also called the Mean Absolute Error (MAE) loss. It computes the average absolute difference between the true and predicted outputs and is mainly used for regression tasks.

The MAE loss formula is given below.

MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|

where y_i is the true value, \hat{y}_i is the predicted value, and n is the number of samples.

The syntax of the class is as follows.

torch.nn.L1Loss()
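As a quick, illustrative sketch (the input and target tensors below are random), it can be used like any other PyTorch module:

import torch
import torch.nn as nn

mae_loss = nn.L1Loss()
input = torch.randn(3, 5, requires_grad=True)  # predicted values
target = torch.randn(3, 5)                     # true values
loss = mae_loss(input, target)                 # mean of |input - target|
loss.backward()                                # gradients flow back through the loss
print(loss)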

Mean Squared Error (MSE) Loss

The Mean Squared Error (MSE) loss is also called the L2 loss. It is the average of the squared differences between the predicted and true values and is also used for regression problems.

MSE formula:

MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

The L2 loss follows the same syntax.

torch.nn.MSELoss()
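The usage mirrors L1Loss; here is a small sketch with random tensors for illustration:

import torch
import torch.nn as nn

mse_loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)  # predicted values
target = torch.randn(3, 5)                     # true values
loss = mse_loss(input, target)                 # mean of (input - target)^2
print(loss)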

Cross Entropy Loss for Classification

The cross-entropy loss is used for classification problems. It measures the difference between two probability distributions: the true label distribution and the distribution predicted by the model.

The formula for cross-entropy loss is given below.

CE = -\sum_{c=1}^{C} y_c \log(\hat{y}_c)

where C is the number of classes, y_c is 1 for the true class and 0 otherwise, and \hat{y}_c is the predicted probability of class c.

We can implement the cross entropy loss with the syntax:

torch.nn.CrossEntropyLoss()
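Here is a small illustrative example (with random logits and arbitrary class labels). CrossEntropyLoss expects raw, unnormalized scores (logits) and integer class indices as targets, because it applies log-softmax internally:

import torch
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()
logits = torch.randn(3, 5, requires_grad=True)  # raw scores for 3 samples over 5 classes
target = torch.tensor([1, 0, 4])                # true class index for each sample
loss = ce_loss(logits, target)
print(loss)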

Negative Log Likelihood (NLL) Loss

The negative log-likelihood loss is preferred for multi-class classification problems. It operates on log-probabilities, so it is usually combined with a log-softmax layer. The loss is defined as the negative log of the probability the model assigns to the correct class.

NLL = -\frac{1}{n} \sum_{i=1}^{n} \log(\hat{p}_{i, y_i})

where \hat{p}_{i, y_i} is the probability the model assigns to the true class y_i of sample i.
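The class syntax is torch.nn.NLLLoss(). Because it consumes log-probabilities, it is typically paired with a LogSoftmax layer, as in this illustrative sketch with random scores:

import torch
import torch.nn as nn

log_softmax = nn.LogSoftmax(dim=1)
nll_loss = nn.NLLLoss()
logits = torch.randn(3, 5, requires_grad=True)  # raw scores for 3 samples over 5 classes
target = torch.tensor([1, 0, 4])                # true class index for each sample
loss = nll_loss(log_softmax(logits), target)    # NLLLoss expects log-probabilities
print(loss)

Note that CrossEntropyLoss is equivalent to LogSoftmax followed by NLLLoss.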

Kullback-Leibler (KL) Divergence Loss

Kullback-Leibler divergence measures how one probability distribution differs from another. The KLDivLoss class computes this divergence between the predicted and target distributions and can be used for multi-class classification and other distribution-matching tasks.

KL Divergence formula:

D_{KL}(P \parallel Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}

where P is the target (true) distribution and Q is the predicted distribution.
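The class syntax is torch.nn.KLDivLoss(). By default it expects the input to be log-probabilities and the target to be probabilities; here is a minimal sketch with random distributions for illustration:

import torch
import torch.nn as nn

kl_loss = nn.KLDivLoss(reduction='batchmean')  # 'batchmean' matches the mathematical definition
input = torch.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)  # predicted log-probabilities
target = torch.softmax(torch.randn(3, 5), dim=1)                         # target probabilities
loss = kl_loss(input, target)
print(loss)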

Binary Cross Entropy (BCE) Loss

BCE stands for binary cross-entropy loss. It computes the cross entropy between the true and predicted labels and is used for classification problems with binary targets (0 or 1).

This loss can also be used to measure the reconstruction error of an autoencoder.

BCE = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]
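The class syntax is torch.nn.BCELoss(). It expects predictions that already lie in (0, 1), so a sigmoid is usually applied first; here is an illustrative sketch with random values:

import torch
import torch.nn as nn

bce_loss = nn.BCELoss()
prediction = torch.sigmoid(torch.randn(3, requires_grad=True))  # probabilities in (0, 1)
target = torch.empty(3).random_(2)                              # random 0/1 labels
loss = bce_loss(prediction, target)
print(loss)

If your model outputs raw scores instead of probabilities, BCEWithLogitsLoss combines the sigmoid and the BCE computation in a single, numerically more stable step.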

Building Your Own Loss Function using PyTorch

The loss functions above cover common scenarios, but more specialized neural network models sometimes need a more intricate objective that the built-in losses cannot express. Let us see how to create a custom loss function.

import torch
import torch.nn as nn

class MeanSquaredErrorLoss(nn.Module):
    def __init__(self):
        super(MeanSquaredErrorLoss, self).__init__()

    def forward(self, input, target):
        # Average of the squared differences between predictions and targets
        loss = torch.mean((input - target)**2)
        return loss

We first import the torch library and the nn module. We then define a loss function class called MeanSquaredErrorLoss that inherits from nn.Module. Its forward method takes the input and target tensors as parameters, computes the MSE according to the formula, and returns it.

If we were to use this custom loss function, we could follow this code.

custom_loss = MeanSquaredErrorLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss = custom_loss(input, target)
print(loss)

An instance of the custom loss function is stored in the variable custom_loss. The input and target tensors are generated randomly, and the loss is then calculated and printed. The result is a scalar tensor that can be backpropagated just like any built-in PyTorch loss.

Summary

Loss functions are important building blocks for training neural networks. With PyTorch’s extensive catalog of losses and the ability to create custom ones, you have the flexibility to tailor an objective function to your model’s needs. As you develop more complex architectures like GANs, think about how specialized losses could improve training. What new loss designs could further advance deep learning?
