Saving images in Python at a very high quality


How can I save Python plots at very high quality?

That is, so that when I keep zooming in on the object saved in a PDF file, there is no blurring.

Also, what would be the best format to save it in?

PNG, EPS, or some other? I can't use PDF, because a hidden number ends up in the file that messes with the Latexmk compilation.

Answer rating: 173

If you are using Matplotlib and are trying to get good figures in a LaTeX document, save as an EPS. Specifically, try something like this after running the commands to plot the image:

plt.savefig("destination_path.eps", format="eps", dpi=1000)

I have found that EPS files work best and the dpi parameter is what really makes them look good in a document.

To specify the orientation of the figure before saving, simply call the following before the plt.savefig call, but after creating the plot (assuming you have plotted using a 3D axes named ax):

ax.view_init(elev=elevation_angle, azim=azimuthal_angle)

where elevation_angle is a number (in degrees) specifying the polar angle (measured down from the vertical z axis) and azimuthal_angle specifies the azimuthal angle (around the z axis).

I find that it is easiest to determine these values by first plotting the image, then rotating it and watching the current values of the angles appear towards the bottom of the window, just below the actual plot. Keep in mind that the x, y, z positions appear by default, but they are replaced with the two angles when you start to click-drag-rotate the image.
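
For reference, here is a minimal end-to-end sketch (the surface data and angle values are made up for illustration; it assumes a 3D plot via mpl_toolkits):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the "3d" projection

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# Placeholder surface data, just so there is something to save
x, y = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
ax.plot_surface(x, y, np.exp(-(x**2 + y**2)))

# Fix the viewing angles before saving (example values)
ax.view_init(elev=30, azim=45)

plt.savefig("destination_path.eps", format="eps", dpi=1000)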


Answer #1

The .pyc files contain bytecode, which is what the Python interpreter compiles the source to. This code is then executed by Python's virtual machine.

Python"s documentation explains the definition like this:

Python is an interpreted language, as opposed to a compiled one, though the distinction can be blurry because of the presence of the bytecode compiler. This means that source files can be run directly without explicitly creating an executable which is then run.
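
If you want to see that bytecode for yourself, the standard library's dis module will disassemble a function; a quick illustration:

import dis

def add(a, b):
    return a + b

# Prints the bytecode instructions that Python's VM executes for add()
dis.dis(add)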

Answer #2

Well, I decided to work it out myself to solve the above problem. What I wanted was to implement a simple OCR using the KNearest or SVM features in OpenCV. Below is what I did and how. (It is just for learning how to use KNearest for simple OCR purposes.)

1) My first question was about the data file that comes with the OpenCV samples. I wanted to know what is inside that file.

It contains a letter, along with 16 features of that letter.

And this SOF helped me to find it. These 16 features are explained in the paper Letter Recognition Using Holland-Style Adaptive Classifiers. (Although I didn't understand some of the features at the end.)

2) Since I knew that, without understanding all those features, it would be difficult to use that method, I tried some other papers, but they were all a little difficult for a beginner.

So I just decided to take all the pixel values as my features. (I was not worried about accuracy or performance; I just wanted it to work, at least with minimal accuracy.)

I took the below image for my training data:

[image: training data]

(I know the amount of training data is small. But, since all the letters are of the same font and size, I decided to try it.)

To prepare the data for training, I made a small piece of code in OpenCV. It does the following things:

  1. It loads the image.
  2. It selects the digits (by contour finding and applying constraints on the area and height of the letters, to avoid false detections).
  3. It draws the bounding rectangle around one letter and waits for a key press. This time we press the digit key ourselves, corresponding to the letter in the box.
  4. Once the corresponding digit key is pressed, it resizes the box to 10x10 and saves the 100 pixel values in an array (here, samples) and the manually entered digit in another array (here, responses).
  5. Then it saves both arrays in separate txt files.

At the end of the manual classification, all the digits in the training data (train.png) are labeled by ourselves, and the image will look like the one below:

[image: labeled training data]

Below is the code I used for the above purpose (of course, not so clean):

import sys

import numpy as np
import cv2

im = cv2.imread("pitrain.png")
im3 = im.copy()

gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)

#################      Now finding Contours         ###################

contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

samples =  np.empty((0,100))
responses = []
keys = [i for i in range(48,58)]

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)

        if  h>28:
            # draw the box and show it, so we can see which digit key to press
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,0,255),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            cv2.imshow("norm",im)
            key = cv2.waitKey(0)

            if key == 27:  # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1,100))
                samples = np.append(samples,sample,0)

responses = np.array(responses,np.float32)
responses = responses.reshape((responses.size,1))
print "training complete"

# save both arrays in separate txt files (placeholder names)
np.savetxt("samples.txt",samples)
np.savetxt("responses.txt",responses)


Now we enter into the training and testing part.

For the testing part I used the below image, which has the same type of letters I used for training.

[image: test data]

For training we do as follows:

  1. Load the txt files we already saved earlier
  2. Create an instance of the classifier we are using (here, it is KNearest)
  3. Then we use the KNearest.train function to train the data

For testing purposes, we do as follows:

  1. We load the image used for testing
  2. Process the image as before and extract each digit using the contour methods
  3. Draw a bounding box around each digit, then resize it to 10x10 and store its pixel values in an array as done earlier.
  4. Then we use the KNearest.find_nearest() function to find the item nearest to the one we gave. (If lucky, it recognises the correct digit.)

I included the last two steps (training and testing) in the single piece of code below:

import cv2
import numpy as np

#######   training part    ############### 
samples = np.loadtxt("samples.txt",np.float32)      # the txt files saved during training
responses = np.loadtxt("responses.txt",np.float32)
responses = responses.reshape((responses.size,1))

model = cv2.KNearest()
model.train(samples,responses)

############################# testing part  #########################

im = cv2.imread("pi.png")
out = np.zeros(im.shape,np.uint8)
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)

contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)
        if  h>28:
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            roismall = roismall.reshape((1,100))
            roismall = np.float32(roismall)
            retval, results, neigh_resp, dists = model.find_nearest(roismall, k = 1)
            string = str(int((results[0][0])))
            # write the recognised digit on the output image
            cv2.putText(out,string,(x,y+h),0,1,(0,255,0))

cv2.imshow("im",im)
cv2.imshow("out",out)
cv2.waitKey(0)

And it worked. Below is the result I got:

[image: recognition result]

Here it worked with 100% accuracy. I assume this is because all the digits are of the same kind and the same size.

But anyway, this is a good start for beginners (I hope so).

Answer #3

If you"re just wanting (semi) contiguous regions, there"s already an easy implementation in Python: SciPy"s ndimage.morphology module. This is a fairly common image morphology operation.

Basically, you have 5 steps:

def find_paws(data, smooth_radius=5, threshold=0.0001):
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    thresh = data > threshold
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    coded_paws, num_paws = sp.ndimage.label(filled)
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices

  1. Blur the input data a bit to make sure the paws have a continuous footprint. (It would be more efficient to just use a larger kernel (the structure kwarg to the various scipy.ndimage.morphology functions), but this isn't quite working properly for some reason...)

  2. Threshold the array so that you have a boolean array of places where the pressure is over some threshold value (i.e. thresh = data > value)

  3. Fill any internal holes, so that you have cleaner regions (filled = sp.ndimage.morphology.binary_fill_holes(thresh))

  4. Find the separate contiguous regions (coded_paws, num_paws = sp.ndimage.label(filled)). This returns an array with the regions coded by number (each region is a contiguous area of a unique integer, 1 up to the number of paws, with zeros everywhere else).

  5. Isolate the contiguous regions using data_slices = sp.ndimage.find_objects(coded_paws). This returns a list of tuples of slice objects, so you could get the region of the data for each paw with [data[x] for x in data_slices]. Instead, we'll draw a rectangle based on these slices, which takes slightly more work.

The two animations below show your "Overlapping Paws" and "Grouped Paws" example data. This method seems to be working perfectly. (And for whatever it's worth, this runs much more smoothly than the GIF images below on my machine, so the paw detection algorithm is fairly fast...)

[animations: Overlapping Paws, Grouped Paws]

Here"s a full example (now with much more detailed explanations). The vast majority of this is reading the input and making an animation. The actual paw detection is only 5 lines of code.

import numpy as np
import scipy as sp
import scipy.ndimage

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def animate(input_filename):
    """Detects paws and animates the position and raw data of each frame
    in the input file"""
    # With matplotlib, it's much, much faster to just update the properties
    # of a display object than it is to create a new one, so we'll just update
    # the data and position of the same objects throughout this animation...

    infile = paw_file(input_filename)

    # Since we"re making an animation with matplotlib, we need 
    # ion() instead of show()...
    fig = plt.figure()
    ax = fig.add_subplot(111)

    # Make an image based on the first frame that we'll update later
    # (The first frame is never actually displayed)
    im = ax.imshow(infile.next()[1])

    # Make 4 rectangles that we can later move to the position of each paw
    rects = [Rectangle((0,0), 1,1, fc="none", ec="red") for i in range(4)]
    [ax.add_patch(rect) for rect in rects]

    title = ax.set_title("Time 0.0 ms")

    # Process and display each frame
    for time, frame in infile:
        paw_slices = find_paws(frame)

        # Hide any rectangles that might be visible
        [rect.set_visible(False) for rect in rects]

        # Set the position and size of a rectangle for each paw and display it
        for slice, rect in zip(paw_slices, rects):
            dy, dx = slice
            rect.set_xy((dx.start, dy.start))
            rect.set_width(dx.stop - dx.start + 1)
            rect.set_height(dy.stop - dy.start + 1)

        # Update the image data and title of the plot
        title.set_text("Time %0.2f ms" % time)
        im.set_data(frame)
        im.set_clim([frame.min(), frame.max()])
        fig.canvas.draw()

def find_paws(data, smooth_radius=5, threshold=0.0001):
    """Detects and isolates contiguous regions in the input array"""
    # Blur the input data a bit so the paws have a continuous footprint 
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    # Threshold the blurred data (this needs to be a bit > 0 due to the blur)
    thresh = data > threshold
    # Fill any interior holes in the paws to get cleaner regions...
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    # Label each contiguous paw
    coded_paws, num_paws = sp.ndimage.label(filled)
    # Isolate the extent of each paw
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices

def paw_file(filename):
    """Returns an iterator that yields the time and data in each frame.
    The infile is an ascii file of timesteps formatted similar to this:

    Frame 0 (0.00 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0

    Frame 1 (0.53 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0
    """
    with open(filename) as infile:
        while True:
            try:
                time, data = read_frame(infile)
                yield time, data
            except StopIteration:
                break

def read_frame(infile):
    """Reads a frame from the infile."""
    frame_header = infile.next().strip().split()
    time = float(frame_header[-2][1:])
    data = []
    while True:
        line = infile.next().strip().split()
        if line == []:
            break
        data.append(line)
    return time, np.array(data, dtype=np.float)

if __name__ == "__main__":
    animate("Overlapping paws.bin")
    animate("Grouped up paws.bin")
    animate("Normal measurement.bin")

Update: As far as identifying which paw is in contact with the sensor at what times, the simplest solution is to just do the same analysis, but use all of the data at once. (i.e. stack the input into a 3D array, and work with it, instead of the individual time frames.) Because SciPy's ndimage functions are meant to work with n-dimensional arrays, we don't have to modify the original paw-finding function at all.

# This uses functions (and imports) in the previous code example!!
def paw_regions(infile):
    # Read in and stack all data together into a 3D array
    data, time = [], []
    for t, frame in paw_file(infile):
        time.append(t)
        data.append(frame)
    data = np.dstack(data)
    time = np.asarray(time)

    # Find and label the paw impacts
    data_slices = find_paws(data, smooth_radius=4)

    # Sort by time of initial paw impact... This way we can determine which
    # paws are which relative to the first paw with a simple modulo 4.
    # (Assuming a 4-legged dog, where all 4 paws contacted the sensor)
    data_slices.sort(key=lambda dat_slice: dat_slice[2].start)

    # Plot up a simple analysis
    fig = plt.figure()
    ax1 = fig.add_subplot(2,1,1)
    annotate_paw_prints(time, data, data_slices, ax=ax1)
    ax2 = fig.add_subplot(2,1,2)
    plot_paw_impacts(time, data_slices, ax=ax2)

def plot_paw_impacts(time, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()

    # Group impacts by paw...
    for i, dat_slice in enumerate(data_slices):
        dx, dy, dt = dat_slice
        paw = i%4 + 1
        # Draw a bar over the time interval where each paw is in contact
        ax.barh(bottom=paw, width=time[dt].ptp(), height=0.2, 
                left=time[dt].min(), align="center", color="red")
    ax.set_yticks(range(1, 5))
    ax.set_yticklabels(["Paw 1", "Paw 2", "Paw 3", "Paw 4"])
    ax.set_xlabel("Time (ms) Since Beginning of Experiment")
    ax.set_title("Periods of Paw Contact")

def annotate_paw_prints(time, data, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()

    # Display all paw impacts (sum over time)
    ax.imshow(data.sum(axis=2))

    # Annotate each impact with which paw it is
    # (Relative to the first paw to hit the sensor)
    x, y = [], []
    for i, region in enumerate(data_slices):
        dx, dy, dz = region
        # Get x,y center of slice...
        x0 = 0.5 * (dx.start + dx.stop)
        y0 = 0.5 * (dy.start + dy.stop)
        x.append(x0); y.append(y0)

        # Annotate the paw impacts         
        ax.annotate("Paw %i" % (i%4 +1), (x0, y0),  
            color="red", ha="center", va="bottom")

    # Plot line connecting paw impacts
    ax.plot(x,y, "-wo")
    ax.set_title("Order of Steps")

[images: annotated paw prints ("Order of Steps") and "Periods of Paw Contact" plots]

Answer #4

General idea

Option 1: Load both images as arrays (scipy.misc.imread) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference.

Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate distance between feature vectors rather than images.
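
As a minimal sketch of Option 2 (the grayscale histogram feature and the Manhattan distance between feature vectors are example choices here, not the only ones):

import numpy as np

def histogram_distance(img1, img2, bins=256):
    # Normalized grayscale histograms act as the feature vectors
    h1, _ = np.histogram(img1.ravel(), bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(img2.ravel(), bins=bins, range=(0, 255), density=True)
    # Manhattan distance between the two feature vectors
    return np.abs(h1 - h2).sum()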

However, there are some decisions to make first.


You should answer these questions first:

  • Are images of the same shape and dimension?

    If not, you may need to resize or crop them. The PIL library will help to do it in Python.

    If they are taken with the same settings and the same device, they are probably the same.

  • Are images well-aligned?

    If not, you may want to run cross-correlation first to find the best alignment. SciPy has functions to do it.

    If the camera and the scene are still, the images are likely to be well-aligned.

  • Is exposure of the images always the same? (Is lightness/contrast the same?)

    If not, you may want to normalize images.

    But be careful: in some situations this may do more harm than good. For example, a single bright pixel on a dark background will make the normalized image very different.

  • Is color information important?

    If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in gray-scale image. You need more attention when writing such code.

  • Are there distinct edges in the image? Are they likely to move?

    If yes, you can apply edge detection algorithm first (e.g. calculate gradient with Sobel or Prewitt transform, apply some threshold), then compare edges on the first image to edges on the second.

  • Is there noise in the image?

    All sensors pollute the image with some amount of noise. Low-cost sensors have more noise. You may wish to apply some noise reduction before you compare images. Blur is the simplest (but not the best) approach here.

  • What kind of changes do you want to notice?

    This may affect the choice of norm to use for the difference between images.

    Consider using Manhattan norm (the sum of the absolute values) or zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell only how many pixels differ.


I assume your images are well-aligned, of the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images.

You will need these imports:

import sys

from scipy.misc import imread
from scipy.linalg import norm
from scipy import sum, average

The main function reads two images, converts them to grayscale, compares them, and prints the results:

def main():
    file1, file2 = sys.argv[1:1+2]
    # read images as 2D arrays (convert to grayscale for simplicity)
    img1 = to_grayscale(imread(file1).astype(float))
    img2 = to_grayscale(imread(file2).astype(float))
    # compare
    n_m, n_0 = compare_images(img1, img2)
    print "Manhattan norm:", n_m, "/ per pixel:", n_m/img1.size
    print "Zero norm:", n_0, "/ per pixel:", n_0*1.0/img1.size

How to compare. img1 and img2 are 2D SciPy arrays here:

def compare_images(img1, img2):
    # normalize to compensate for exposure difference, this may be unnecessary
    # consider disabling it
    img1 = normalize(img1)
    img2 = normalize(img2)
    # calculate the difference and its norms
    diff = img1 - img2  # elementwise for scipy arrays
    m_norm = sum(abs(diff))  # Manhattan norm
    z_norm = norm(diff.ravel(), 0)  # Zero norm
    return (m_norm, z_norm)

If the file is a color image, imread returns a 3D array, average RGB channels (the last array axis) to obtain intensity. No need to do it for grayscale images (e.g. .pgm):

def to_grayscale(arr):
    "If arr is a color image (3D array), convert it to grayscale (2D array)."
    if len(arr.shape) == 3:
        return average(arr, -1)  # average over the last axis (color channels)
    else:
        return arr

Normalization is trivial; you may choose to normalize to [0,1] instead of [0,255]. arr is a SciPy array here, so all operations are element-wise:

def normalize(arr):
    rng = arr.max()-arr.min()
    amin = arr.min()
    return (arr-amin)*255/rng
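
For example, a [0,1] variant would look like this (same idea, different range; it assumes a float array with a nonzero value range):

def normalize01(arr):
    rng = arr.max() - arr.min()
    amin = arr.min()
    return (arr - amin) / rng  # values now in [0, 1]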

Run the main function:

if __name__ == "__main__":
    main()

Now you can put this all in a script (say, compare.py) and run it against two images. If we compare an image to itself, there is no difference:

$ python compare.py one.jpg one.jpg
Manhattan norm: 0.0 / per pixel: 0.0
Zero norm: 0 / per pixel: 0.0

If we blur the image and compare to the original, there is some difference:

$ python compare.py one.jpg one-blurred.jpg 
Manhattan norm: 92605183.67 / per pixel: 13.4210411116
Zero norm: 6900000 / per pixel: 1.0


Update: relevant techniques

As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant:

  • background subtraction and segmentation (to detect foreground objects)
  • sparse optical flow (to detect motion)
  • comparing histograms or some other statistics instead of images

I strongly recommend taking a look at the “Learning OpenCV” book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches the background subtraction method, the latter gives some info on optical flow methods. All methods are implemented in the OpenCV library. If you use Python, I suggest using OpenCV ≥ 2.3 and its cv2 Python module.

The simplest version of background subtraction:

  • learn the average value μ and standard deviation σ for every pixel of the background
  • compare current pixel values to the range (μ-2σ, μ+2σ) or (μ-σ, μ+σ); see the sketch below
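
A minimal NumPy sketch of that simple version (the names and k=2 are illustrative; background_frames is assumed to be a 3D array of grayscale frames stacked along the first axis):

import numpy as np

def foreground_mask(background_frames, frame, k=2):
    # Per-pixel mean and standard deviation of the background
    mu = background_frames.mean(axis=0)
    sigma = background_frames.std(axis=0)
    # A pixel is foreground if it falls outside (mu - k*sigma, mu + k*sigma)
    return np.abs(frame - mu) > k * sigma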

More advanced versions may take into account the time series for every pixel and handle non-static scenes (like moving trees or grass).

The idea of optical flow is to take two or more frames and assign a velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use the Lucas-Kanade method (it is also implemented in OpenCV). Obviously, if there is a lot of flow (a high average over the max values of the velocity field), then something is moving in the frame, and subsequent images are more different.
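
As a rough illustration of the sparse case with the cv2 module (the corner-detection parameters here are arbitrary example values, not tuned):

import cv2
import numpy as np

def sparse_flow(prev_gray, next_gray):
    # Pick good corners to track in the first frame
    p0 = cv2.goodFeaturesToTrack(prev_gray, 100, 0.3, 7)
    # Track them into the second frame with Lucas-Kanade
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    good = status.ravel() == 1
    # Velocity vectors of the successfully tracked points
    return (p1 - p0)[good]

A large mean magnitude of the returned vectors then signals motion between the two frames.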

Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in Courbon et al., 2010:

Similarity of consecutive frames. The distance between two consecutive frames is measured. If it is too high, it means that the second frame is corrupted and thus the image is eliminated. The Kullback–Leibler distance, or mutual entropy, on the histograms of the two frames:

$$ d(p,q) = \sum_i p(i) \log \frac{p(i)}{q(i)} $$

where p and q are the histograms of the frames, is used. The threshold is fixed at 0.2.
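
In code, that distance is straightforward to compute from two histograms (a sketch; the small eps guard against log(0) and division by zero is my addition, not part of the paper):

import numpy as np

def kl_distance(p, q, eps=1e-10):
    # d(p,q) = sum_i p(i) * log(p(i) / q(i))
    p = p.astype(float) / p.sum()
    q = q.astype(float) / q.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))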

Answer #5

Nikie"s answer solved my problem, but his answer was in Mathematica. So I thought I should give its OpenCV adaptation here. But after implementing I could see that OpenCV code is much bigger than nikie"s mathematica code. And also, I couldn"t find interpolation method done by nikie in OpenCV ( although it can be done using scipy, i will tell it when time comes.)

1. Image Preprocessing (closing operation)

import cv2
import numpy as np

img = cv2.imread("dave.jpg")
img = cv2.GaussianBlur(img,(5,5),0)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
mask = np.zeros((gray.shape),np.uint8)
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(11,11))

close = cv2.morphologyEx(gray,cv2.MORPH_CLOSE,kernel1)
div = np.float32(gray)/(close)
res = np.uint8(cv2.normalize(div,div,0,255,cv2.NORM_MINMAX))
res2 = cv2.cvtColor(res,cv2.COLOR_GRAY2BGR)

Result:

[image: result of closing]

2. Finding Sudoku Square and Creating Mask Image

thresh = cv2.adaptiveThreshold(res,255,0,1,19,2)
contour,hier = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

max_area = 0
best_cnt = None
for cnt in contour:
    area = cv2.contourArea(cnt)
    if area > 1000:
        if area > max_area:
            max_area = area
            best_cnt = cnt


cv2.drawContours(mask,[best_cnt],0,255,-1)    # fill the largest contour into the mask
cv2.drawContours(mask,[best_cnt],0,0,2)       # erase its border
res = cv2.bitwise_and(res,mask)

Result:

[image: masked sudoku square]

3. Finding Vertical Lines

kernelx = cv2.getStructuringElement(cv2.MORPH_RECT,(2,10))

dx = cv2.Sobel(res,cv2.CV_16S,1,0)
dx = cv2.convertScaleAbs(dx)
ret,close = cv2.threshold(dx,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
close = cv2.morphologyEx(close,cv2.MORPH_DILATE,kernelx,iterations = 1)

contour, hier = cv2.findContours(close,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    x,y,w,h = cv2.boundingRect(cnt)
    if h/w > 5:
        cv2.drawContours(close,[cnt],0,255,-1)    # keep tall contours (vertical lines)
    else:
        cv2.drawContours(close,[cnt],0,0,-1)      # erase everything else

close = cv2.morphologyEx(close,cv2.MORPH_CLOSE,None,iterations = 2)
closex = close.copy()

Result:

[image: vertical lines]

4. Finding Horizontal Lines

kernely = cv2.getStructuringElement(cv2.MORPH_RECT,(10,2))
dy = cv2.Sobel(res,cv2.CV_16S,0,2)
dy = cv2.convertScaleAbs(dy)
ret,close = cv2.threshold(dy,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
close = cv2.morphologyEx(close,cv2.MORPH_DILATE,kernely)

contour, hier = cv2.findContours(close,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    x,y,w,h = cv2.boundingRect(cnt)
    if w/h > 5:
        cv2.drawContours(close,[cnt],0,255,-1)    # keep wide contours (horizontal lines)
    else:
        cv2.drawContours(close,[cnt],0,0,-1)      # erase everything else

close = cv2.morphologyEx(close,cv2.MORPH_DILATE,None,iterations = 2)
closey = close.copy()

Result:

[image: horizontal lines]

Of course, this one is not so good.

5. Finding Grid Points

res = cv2.bitwise_and(closex,closey)

Result:

[image: grid points]

6. Correcting the defects

Here, nikie does some kind of interpolation, about which I don't have much knowledge. And I couldn't find any corresponding function for this in OpenCV (maybe it is there, I don't know).

Check out this SOF which explains how to do it using SciPy, which I don't want to use: Image transformation in OpenCV

So, here I took the 4 corners of each sub-square and applied a perspective warp to each.

For that, first we find the centroids.

contour, hier = cv2.findContours(res,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for cnt in contour:
    mom = cv2.moments(cnt)
    (x,y) = int(mom["m10"]/mom["m00"]), int(mom["m01"]/mom["m00"])
    cv2.circle(img,(x,y),4,(0,255,0),-1)
    centroids.append((x,y))

But the resulting centroids won't be sorted. Check out the image below to see their order:

[image: unsorted centroid order]

So we sort them from left to right, top to bottom.

centroids = np.array(centroids,dtype = np.float32)
c = centroids.reshape((100,2))
c2 = c[np.argsort(c[:,1])]

b = np.vstack([c2[i*10:(i+1)*10][np.argsort(c2[i*10:(i+1)*10,0])] for i in xrange(10)])
bm = b.reshape((10,10,2))

Now see their order below:

[image: sorted centroid order]

Finally we apply the transformation and create a new image of size 450x450.

output = np.zeros((450,450,3),np.uint8)
for i,j in enumerate(b):
    ri = i/10
    ci = i%10
    if ci != 9 and ri!=9:
        src = bm[ri:ri+2, ci:ci+2 , :].reshape((4,2))
        dst = np.array( [ [ci*50,ri*50],[(ci+1)*50-1,ri*50],[ci*50,(ri+1)*50-1],[(ci+1)*50-1,(ri+1)*50-1] ], np.float32)
        retval = cv2.getPerspectiveTransform(src,dst)
        warp = cv2.warpPerspective(res2,retval,(450,450))
        output[ri*50:(ri+1)*50-1 , ci*50:(ci+1)*50-1] = warp[ri*50:(ri+1)*50-1 , ci*50:(ci+1)*50-1].copy()

Result:

[image: final 450x450 output]

The result is almost the same as nikie's, but the code length is large. Maybe better methods are available out there, but until then, this works OK.

Regards ARK.
