Blending several images

Blending several images together may produce some interesting effects that aren't possible with a single image. Tools like Photoshop offer several blending modes such as "normal", "multiply", "hard light" and others, which allow us to quickly experiment with different settings. Wikipedia also has mathematical descriptions of some of the blending modes. The OpenCV documentation mentions the beautiful formula g(x) = (1 - alpha) * f0(x) + alpha * f1(x) that can be used to blend two images together. With blending we can use transparency to selectively decide which images show through and with what intensity. It is preferable that the images have the same dimensions prior to the blending, otherwise the viewer may perceive some of the images as patches.
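The formula above is easy to try directly with NumPy. Below is a minimal sketch, where f0 and f1 stand in for two images of the same shape with values in [0, 1]; the names are placeholders, not part of any library:

import numpy as np

def blend(f0, f1, alpha):
    # g(x) = (1 - alpha) * f0(x) + alpha * f1(x)
    return (1 - alpha) * f0 + alpha * f1

# Two tiny "images": a gradient and a pure white image
f0 = np.array([[0.0, 0.2], [0.4, 0.6]])
f1 = np.array([[1.0, 1.0], [1.0, 1.0]])
print(blend(f0, f1, 0.5))  # halfway between the two

With alpha = 0 only f0 is visible, with alpha = 1 only f1, and everything in between mixes them proportionally.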

The question is whether we could take a couple of images and quickly produce a blended image from them without wasting time on manual image editing, layer manipulation and adjustments. This doesn't mean that an automated result will always be better. But it gives us a quick overview of what is possible.

As we may suspect, there is a limit to how much we can blend before we start losing the shapes and contours of the visible objects. Too much of any effect produces bad results, and blending seems to be no exception to this rule.

We might think of adding a separate alpha channel for all JPEG images that we have and then seeking a way to combine the results. A single line that would do this looks like the following:

jpeg_with_alpha = np.array([ np.append(img[i,j], 0.5) for i in range(h) for j in range(w) ]).reshape(h,w,4)

Here img is an image as a Numpy array of values, where h is the height of the image and w is its width. Since the shape of the original Numpy image is (h, w, 3), we need to reshape the new one to have a third dimension of 4. We can't simply use the list append method on a Numpy array, so we resort to the Numpy-specific np.append() instead. A transparency level of 0.5 is assumed.
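The per-pixel loop above is slow because it calls np.append() once for every pixel. As a sketch of a faster alternative, the same (h, w, 4) array can be built in one vectorized step by stacking a constant alpha plane onto the image; the variable names here are illustrative:

import numpy as np

# Hypothetical example image: an h x w x 3 array of RGB values in [0, 1]
h, w = 2, 3
img = np.zeros((h, w, 3))

# Attach a constant alpha plane of 0.5 as a fourth channel in one operation
alpha_plane = np.full((h, w, 1), 0.5)
jpeg_with_alpha = np.dstack((img, alpha_plane))
print(jpeg_with_alpha.shape)  # (2, 3, 4)

This produces the same result as the loop, but in a single operation regardless of image size.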

However, there is something unsatisfying in this approach. We have to do this for every single image, independent of its size, which tends to be slow. A simpler approach is to plot each image on top of the previous one, indicating its alpha value. The following code does this:

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Images and their transparency levels
images_alphas = [
    ('model.jpg', 0.15),
    ('horse1.jpg', 0.2),
    ('roses.jpg', 0.3),
    ('leaves.jpg', 0.35),
    ('horse.jpg', 0.4),
    ('path.jpg', 0.8)
]
images_alphas.reverse()

image_nps = [np.array(Image.open('resized/' + image)) for image, _ in images_alphas]
for i, im_np in enumerate(image_nps):
    plt.imshow(im_np, alpha=images_alphas[i][1])

We define the order in which we want to apply the images and specify the transparency level for each one. Using a dictionary here might seem more appropriate, but dictionaries don't guarantee insertion order before Python 3.7 unless we use OrderedDict; a list of tuples preserves the order and remains backwards compatible. The JPEG image 'model.jpg' will be the topmost layer, while 'path.jpg' will serve as the bottommost one. Since each subsequent iteration places a layer on top of the previous ones, we need to reverse the sequence. This lets us keep the intuitive order (the topmost layer appears first in the sequence) while satisfying the machine's requirements.

Then we take each image from the "resized" directory and convert it to a Numpy array. We plot each Numpy array and set the alpha parameter of the imshow function, which leads to the following image:

six separate images blended into one
An image obtained by blending six separate images with individual transparency levels. In this case the images contain leaves, roses in a basket, horses (2 images), a wooden path in a forest and a model. Some of the elements are barely visible.

You can try to see whether you can distinguish the objects described. If you feel that this image is too crowded, you could also use the following approach, which will further reduce the number of heavy libraries you need to depend on.

from PIL import Image
from itertools import combinations
from os import listdir

imx, imy = 1280, 853
thumb_size = 245, 163

im = Image.new("RGBA", (imx, imy), (0, 0, 0))
images = [Image.open('resized/' + file_name) for file_name in listdir('resized')]

for i, (im1, im2) in enumerate(combinations(images, 2)):
    comp = Image.blend(im1, im2, 0.5)
    thumb = im.copy()
    thumb.paste(comp)
    thumb.thumbnail(thumb_size, Image.LANCZOS)
    # JPEG cannot store an alpha channel, so convert to RGB before saving
    thumb.convert('RGB').save('two_blended/blended' + str(i + 1) + '.jpg', 'JPEG')
    thumb.close()

This will give you all combinations of two blended images, which you can then organize to be able to quickly see and pick the best ones:
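One way to organize the resulting thumbnails is to paste them into a single contact sheet, a fixed number of columns wide. The sketch below generates placeholder thumbnails in memory for illustration; in practice you would load them from the 'two_blended' directory instead:

from PIL import Image

thumb_w, thumb_h, cols = 245, 163, 4

# Placeholder thumbnails standing in for the blended pairs on disk
thumbs = [Image.new('RGB', (thumb_w, thumb_h), (40 * i, 80, 120)) for i in range(6)]

rows = (len(thumbs) + cols - 1) // cols
sheet = Image.new('RGB', (cols * thumb_w, rows * thumb_h), (255, 255, 255))
for i, t in enumerate(thumbs):
    # Place each thumbnail on a grid, left to right, top to bottom
    sheet.paste(t, ((i % cols) * thumb_w, (i // cols) * thumb_h))
print(sheet.size)  # (980, 326)

A single glance at the sheet is then enough to compare all pairings and pick the most promising ones.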

Blending of two images only, in various combinations.

Blending multiple images in this way allows us to combine their best features, partially compensating for their worst ones. We can selectively apply patterns to enhance relatively homogeneous or less detailed regions in an image. Now you know how you can easily blend your own images, experimenting with different "layer" arrangements and transparencies.