Introduction to Feature Matching in Images using Python

Feature matching is the process of detecting and measuring similarities between features in two or more images. This process can be used to compare images to identify changes or differences between them. Feature matching can also be used to find corresponding points in different images, which can be used for tasks such as panorama stitching and object tracking.

A number of different algorithms can be used to detect the features that will be matched. Some of the most popular include the Harris corner detector, the SUSAN detector, and the FAST algorithm. Each of these algorithms has its own strengths and weaknesses, so it is important to choose the one best suited to the task at hand.
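As a quick illustration of one of these detectors, here is a minimal sketch using OpenCV's FAST detector. The file name reuses the sample image from the example later in this article, and the threshold value is an arbitrary choice for illustration.

import cv2

# Load an image in grayscale (the path is a placeholder).
img = cv2.imread('sample.jpg', cv2.IMREAD_GRAYSCALE)

# FAST only detects corners; it does not compute descriptors,
# so for matching it is usually paired with a separate descriptor.
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(img, None)
print(f"FAST found {len(keypoints)} keypoints")

# Draw the detected keypoints for a quick visual check.
out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite('fast_keypoints.jpg', out)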

The ORB algorithm that we’ll use in this article works by detecting keypoints in an image and then matching them to corresponding keypoints in other images. It does this by computing a feature descriptor for each detected keypoint. The keypoint stores the geometric information about the feature, such as its location, size, and orientation, while the descriptor is a compact binary vector that encodes the appearance of the image patch around it.
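To make this concrete, here is a small sketch (the file name is a placeholder) showing where that information lives in OpenCV: the geometric attributes sit on each keypoint object, while the descriptors form a matrix of 32-byte binary vectors.

import cv2

img = cv2.imread('sample.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Each keypoint carries the geometric information about the feature.
kp = keypoints[0]
print("location:", kp.pt)        # (x, y) coordinates
print("size:", kp.size)          # diameter of the patch
print("orientation:", kp.angle)  # angle in degrees

# The descriptors describe the appearance of the patch around each keypoint.
print("descriptor shape:", descriptors.shape)  # (number of keypoints, 32)
print("descriptor dtype:", descriptors.dtype)  # uint8 -> 32 bytes = 256 bits each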

In this article, we’ll use OpenCV’s ORB algorithm to match features between two images and display the results.

Implementing A Feature Matching Algorithm in Python OpenCV

OpenCV is a library of computer vision algorithms that can be used to perform a wide variety of tasks, including feature matching. OpenCV is available for both Python and C++, making it a popular choice for cross-platform development.

Also Read: Identifying Keypoints in Images using Python OpenCV

Now you know that feature matching means comparing the features of two images, which may differ in orientation, perspective, lighting, or even in size and color. Let’s now look at its implementation.

import cv2
from google.colab.patches import cv2_imshow

img1 = cv2.imread('sample.jpg')
img2 = cv2.imread('sample2.jpg')

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

matches = sorted(matches, key=lambda x: x.distance)

match_img = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)

cv2_imshow(match_img)
cv2.waitKey()

I know the code may look a bit unclear right now. No need to worry, we will go through the whole code line by line.

  1. Line 1 and 2 – Import the necessary libraries into the program.
  2. Line 4 and 5 – Load the two images into the program using the imread function.
  3. Line 7 – Create the ORB feature detector object, configured to detect up to 500 features.
  4. Line 8 and 9 – The detectAndCompute function detects the keypoints and computes the descriptors for both images.
  5. Line 11 and 12 – The BFMatcher object performs a brute-force match on the descriptors using Hamming distance with cross-checking, and the match function returns the best match for each descriptor (an alternative matching strategy is sketched after this list).
  6. Line 14 – Next, we sort the matches in ascending order of distance so that the better results come to the front.
  7. Line 16 and 18 – Using the drawMatches function, we plot the first 50 matches and then display the output image using the cv2_imshow function.
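The crossCheck option above is one way to filter out weak matches. A common alternative, not used in the code above, is Lowe's ratio test with knnMatch. Here is a minimal sketch that reuses the img1, img2, kp1, des1, kp2, and des2 variables from the example; the 0.75 ratio is the commonly used value.

# Brute-force matcher without cross-checking, because the ratio test
# needs the two best candidates for every descriptor.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_matches = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if its best candidate is clearly
# better than the second-best one.
good = []
for pair in knn_matches:
    if len(pair) < 2:
        continue
    m, n = pair
    if m.distance < 0.75 * n.distance:
        good.append(m)

good = sorted(good, key=lambda x: x.distance)
match_img = cv2.drawMatches(img1, kp1, img2, kp2, good[:50], None)
cv2_imshow(match_img)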

Also Read: ORB Feature Detection in Python

Have a look at some outputs when the code is run for a few images.

Feature Matching Sample Output 1
Feature Matching Sample Output 2

Conclusion

In this tutorial, we explored the concept of feature matching and walked through a basic way to implement it with OpenCV’s ORB algorithm.

Try it out on various images and be amazed by the results! Thank you for reading!

Happy coding! 😁