We’ve already worked on PCA in a previous article. In this article, let’s work on Principal Component Analysis for image data. PCA is a famous unsupervised dimensionality reduction technique that comes to our rescue whenever the curse of dimensionality haunts us.
Working with image data is a little different from working with the usual tabular datasets. A typical colored image is made up of tiny pixels (short for 'picture elements'); many pixels come together in an array to form a digital image.
A typical digital color image is made by stacking Red, Green, and Blue arrays of pixel intensities ranging from 0 to 255.
A grayscale image does not contain color, only shades of gray. The pixel intensity in a grayscale image varies from black (0, no intensity) to white (255, full intensity), giving what we usually call a black-and-white image.
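To make this concrete, here is a minimal NumPy sketch of how three channel arrays combine into one color image (the pixel values below are made up purely for illustration):

```python
import numpy as np

# Three 2x2 channel arrays with made-up intensities in [0, 255]
red = np.array([[255, 0], [0, 0]], dtype=np.uint8)
green = np.array([[0, 255], [0, 0]], dtype=np.uint8)
blue = np.array([[0, 0], [255, 0]], dtype=np.uint8)

# Stacking the channels along a third axis yields a (height, width, 3) color image
color_img = np.dstack((red, green, blue))
print(color_img.shape)  # (2, 2, 3)

# A grayscale image is just a single (height, width) array of intensities
gray = np.array([[0, 128], [200, 255]], dtype=np.uint8)
print(gray.shape)  # (2, 2)
```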
Applying PCA to Digits dataset
The Digits dataset is a grayscale image dataset of handwritten digits, containing 1797 images of 8×8 pixels each.
```python
# Importing the dataset
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits

digits = load_digits()
data = digits.data
data.shape
```
The sklearn.datasets module makes it quick to load the digits data by importing the load_digits function from it. The shape of the digits data is (1797, 64): each 8×8 image is flattened into a vector of length 64.
Let’s view what our data looks like.
```python
# Taking a sample image to view
# Remember, the image is in the form of a NumPy array.
image_sample = data[0, :].reshape(8, 8)
plt.imshow(image_sample)
```
1. Reduce Image Dimensions
Now, using PCA, let's reduce the image dimensions from 64 to just 2 so that we can visualize the dataset with a scatter plot.
sklearn provides us with a very simple implementation of PCA.
```python
# Import required modules
from sklearn.decomposition import PCA

pca = PCA(2)  # we need 2 principal components
converted_data = pca.fit_transform(digits.data)
converted_data.shape
```
The data gets reduced from (1797, 64) to (1797, 2).
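It is worth checking how much of the original variance those two components actually retain. A self-contained sketch (re-fitting PCA on the digits data) using the fitted model's explained_variance_ratio_ attribute:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()
pca = PCA(2)
converted_data = pca.fit_transform(digits.data)

# Fraction of the total variance captured by each principal component
print(converted_data.shape)  # (1797, 2)
print(pca.explained_variance_ratio_.sum())
```

Two components capture only a modest fraction of the variance, which is fine for visualization but not for faithful reconstruction.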
2. Visualize the Resulting Dataset
We use the PCA() class to implement the principal component analysis algorithm.
It accepts an integer as an input argument, specifying the number of principal components we want in the converted dataset.
We can also pass a float value less than 1 instead of an integer, e.g. PCA(0.90); this tells the algorithm to find the principal components that explain 90% of the variance in the data.
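A quick self-contained sketch of the float form on the digits data; n_components_ shows how many components the algorithm ended up keeping:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()

# Ask for enough components to explain 90% of the variance
pca = PCA(0.90)
reduced = pca.fit_transform(digits.data)

# The number of components is chosen automatically
print(pca.n_components_, reduced.shape)
print(pca.explained_variance_ratio_.sum())  # at least 0.90
```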
Let’s visualize the result.
```python
plt.style.use('seaborn-whitegrid')  # on newer matplotlib: 'seaborn-v0_8-whitegrid'
plt.figure(figsize=(10, 6))
c_map = plt.cm.get_cmap('jet', 10)
plt.scatter(converted_data[:, 0], converted_data[:, 1], s=15,
            cmap=c_map, c=digits.target)
plt.colorbar()
plt.xlabel('PC-1')
plt.ylabel('PC-2')
plt.show()
```
Principal Component Analysis for Image Data Compression
Another cool application of PCA is image compression. Let's have a look at how we can achieve this with Python.
```python
# Importing required libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
```
1. Loading the Image
We'll use OpenCV (the Open Source Computer Vision Library), an open-source computer vision and machine learning library.
```python
# Loading the image
img = cv2.imread('my_doggo_sample.jpg')  # you can use any image you want
plt.imshow(img)
```
2. Splitting the Image in R,G,B Arrays
As we know a digital colored image is a combination of R, G, and B arrays stacked over each other. Here we have to split each channel from the image and extract principal components from each of them.
```python
# Splitting the image into B, G, R arrays
blue, green, red = cv2.split(img)  # splits the original image into Blue, Green, and Red arrays
```
An important point to note here: OpenCV splits the image into Blue, Green, and Red channels instead of Red, Green, and Blue. Be very careful of the sequence here.
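A NumPy-only sketch of the same idea, using a made-up 1×2 image (cv2.split(img) is equivalent to slicing the last axis of the array):

```python
import numpy as np

# OpenCV stores color images in BGR order: here, a pure-blue and a pure-red pixel
img_bgr = np.array([[[255, 0, 0],    # blue pixel: B=255, G=0, R=0
                     [0, 0, 255]]],  # red pixel:  B=0,   G=0, R=255
                   dtype=np.uint8)

# Equivalent to: blue, green, red = cv2.split(img_bgr)
blue, green, red = img_bgr[:, :, 0], img_bgr[:, :, 1], img_bgr[:, :, 2]

print(blue[0, 0], red[0, 0])  # 255 0  -> channel 0 is blue, not red
print(blue[0, 1], red[0, 1])  # 0 255
```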
3. Apply Principal Components to Individual Arrays
Now, applying PCA to each array.
```python
# Initialize PCA with the first 20 principal components
pca = PCA(20)

# Apply to the red channel, then inverse-transform the transformed array
red_transformed = pca.fit_transform(red)
red_inverted = pca.inverse_transform(red_transformed)

# Apply to the green channel, then inverse-transform the transformed array
green_transformed = pca.fit_transform(green)
green_inverted = pca.inverse_transform(green_transformed)

# Apply to the blue channel, then inverse-transform the transformed array
blue_transformed = pca.fit_transform(blue)
blue_inverted = pca.inverse_transform(blue_transformed)
```
Here we applied PCA, keeping only the first 20 principal components, to each of the R, G, and B arrays.
4. Compressing the Image
Inverse Transformation is necessary to recreate the original dimensions of the base image.
In the process of reconstructing the original dimensions from the reduced dimensions, some information is lost as we keep only selected principal components, 20 in this case.
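We can quantify that loss with a small self-contained sketch, using the digits data as a stand-in for a single channel array (the component counts below are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Digits data used as a stand-in for one image channel
X = load_digits().data

def reconstruction_error(n_components):
    pca = PCA(n_components)
    reduced = pca.fit_transform(X)
    restored = pca.inverse_transform(reduced)  # back to the original dimensions
    return np.mean((X - restored) ** 2)       # mean squared reconstruction error

# Keeping more components loses less information
errors = [reconstruction_error(k) for k in (5, 20, 40)]
print(errors)
```

The error shrinks monotonically as more components are kept, which is exactly the trade-off the compressed images below illustrate.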
```python
# Stack the three inverted channels back together (R, G, B order for plt.imshow),
# clipping to the valid [0, 255] range before casting to 8-bit
img_compressed = np.dstack(
    (red_inverted, green_inverted, blue_inverted)
).clip(0, 255).astype(np.uint8)
```
We stack the inverted arrays using the dstack function. It is important to specify the datatype of our arrays here, as most images are 8-bit: each pixel is represented by a single 8-bit byte.
```python
# Viewing the compressed image
plt.imshow(img_compressed)
```
The output above is what we get when considering just 20 Principal components.
If we increase the number of principal components, the output image will get clearer.
Using the first 50 principal components:
Now, using 100 principal components:
With the first 100 principal components, our output gets much clearer.
Now let’s apply PCA using the first 200 Principal components.
Voila! With 200 principal components, we were able to create an image nearly as sharp as the original.
The number of components to keep is a design choice. Start with a small value and gradually increase it until the desired output is achieved. Feel free to experiment with the code.
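One way to make the choice less arbitrary is to inspect the cumulative explained variance of a full PCA fit. A sketch on the digits data (used here as a stand-in array; the 95% threshold is an arbitrary example):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data

# Fit a full PCA once and accumulate the per-component variance ratios
pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)

# Smallest component count reaching 95% of the total variance
k = int(np.searchsorted(cumulative, 0.95)) + 1
print(k)
```

This gives a principled starting point instead of trial and error; you can then nudge the count up or down based on visual quality.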
In this article, we explored the application of PCA as a dimensionality reduction technique and applied it to image data. We also saw how PCA finds its use in image compression.