Registering Images Using OpenCV | Python

Now we may want to "align" a particular image at the same angle as the reference image. In the images above, the first image can be assumed to be the “perfect” cover photo, while the second and third images are not very well suited for book cover photos. The image registration algorithm helps us align the second and third images in the same plane as the first.

How does image registration work?
Alignment can be thought of as a simple coordinate transformation. The algorithm works as follows:

  • Convert both images to grayscale.
  • Match the features to be aligned with the reference image and save the coordinates of the corresponding key points. Key points are simply the selected points used to compute the transformation (usually salient points such as corners), and descriptors are histograms of image gradients that characterize the appearance of a key point. In this article, we use the ORB (Oriented FAST and Rotated BRIEF) implementation in the OpenCV library, which provides both the key points and their associated descriptors.
  • Match key points between the two images. In this post, we use BFMatcher, which is a brute-force method. BFMatcher.match() retrieves the single best match for each descriptor, while BFMatcher.knnMatch() retrieves the top K matches, where K is specified by the user.
  • Select best matches and remove noisy matches.
  • Compute the homography transform.
  • Apply this transform to the original unaligned image to get the output image.

Applications of image registration
Some of the useful image registration applications include:

  • Stitching different scenes (which may or may not have the same camera alignment) together to form a continuous panoramic image.
  • Align camera images to documents with standard alignment to create realistic scanned documents.
  • Align medical images for better observation and analysis.

Below is the code to register an image. We align the second image with reference to the third image.

import cv2
import numpy as np

# Open the image files.
img1_color = cv2.imread("align.jpg")  # Image to align.
img2_color = cv2.imread("ref.jpg")    # Reference image.

# Convert to grayscale.
img1 = cv2.cvtColor(img1_color, cv2.COLOR_BGR2GRAY)
img2 = cv2.cvtColor(img2_color, cv2.COLOR_BGR2GRAY)
height, width = img2.shape

# Create an ORB detector with 5000 features.
orb_detector = cv2.ORB_create(5000)

# Find key points and descriptors.
# The first argument is the image, the second is a mask
# (not needed in this case).
kp1, d1 = orb_detector.detectAndCompute(img1, None)
kp2, d2 = orb_detector.detectAndCompute(img2, None)

# Match features between the two images.
# We create a brute-force matcher with
# Hamming distance as the measurement mode.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match the two sets of descriptors.
matches = matcher.match(d1, d2)

# Sort matches by Hamming distance.
matches = sorted(matches, key=lambda x: x.distance)

# Keep the best 90% of matches.
matches = matches[:int(len(matches) * 0.9)]
no_of_matches = len(matches)

# Define empty matrices of shape no_of_matches * 2.
p1 = np.zeros((no_of_matches, 2))
p2 = np.zeros((no_of_matches, 2))

for i in range(len(matches)):
    p1[i, :] = kp1[matches[i].queryIdx].pt
    p2[i, :] = kp2[matches[i].trainIdx].pt

# Find the homography matrix.
homography, mask = cv2.findHomography(p1, p2, cv2.RANSAC)

# Use this matrix to transform the
# color image relative to the reference image.
transformed_img = cv2.warpPerspective(img1_color,
                                      homography, (width, height))

# Save the output.
cv2.imwrite('output.jpg', transformed_img)

Output:
