Image Fusion using Wavelets

Matt Boardman, Faculty of Computer Science
Reducing unwanted noise in digital photography is not an easy task. One way to reduce noise is to take many identical photographs, then "fuse" them into a single image by taking the mean value of each pixel. However, this requires that your images be perfectly aligned, and even then some of the image's dynamics may be lost.
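The averaging approach described above can be sketched in a few lines. This is a minimal illustration in Python with NumPy (an assumption; the original work does not specify an implementation): since the noise in each exposure is independent, averaging N images reduces the noise standard deviation by a factor of sqrt(N).

```python
import numpy as np

def mean_fuse(images):
    """Fuse a list of aligned, equally-sized images by taking
    the mean value of each pixel across the stack."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0)
```

Note that this only works if every pixel lines up across the stack; any misalignment blurs edges instead of reducing noise.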
Here, we will use a technique called wavelet image fusion instead. For an excellent introduction to wavelets, please see the article "An Introduction to Wavelets" by Amara Graps. For a much more detailed description of how wavelet image fusion works, see the linked examples. (More to come, but for now I'll just show the results.)
First, we will simulate the process by creating artificially-noisy versions of an existing image.
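One simple way to create such artificially-noisy versions (a sketch; the article does not state its exact noise model) is to add zero-mean Gaussian noise to the original image:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_noisy_copies(img, n=8, sigma=25.0):
    """Create n artificially-noisy versions of an 8-bit greyscale
    image by adding zero-mean Gaussian noise with standard
    deviation sigma, clipping back to the valid [0, 255] range."""
    img = img.astype(np.float64)
    return [np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)
            for _ in range(n)]
```

Because the clean original is known, the fused result can then be compared against it directly to measure how much noise the fusion removes.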
For the next trick, we will fuse several images of a dark scene, each taken with a low exposure setting and no flash.
Finally, we will fuse a series of images taken with a low exposure setting and no flash, but this time using a tripod to align the pixels in each image. Our goal is to brighten the photograph, while preserving image quality.
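A common wavelet fusion scheme from the literature works like this: decompose each aligned image with a 2-D discrete wavelet transform, average the approximation coefficients, keep the largest-magnitude detail coefficient at each position, and invert the transform. The sketch below uses PyWavelets; the library choice, the `db2` wavelet, and the max-abs fusion rule are all assumptions, not necessarily the article's exact method.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(images, wavelet="db2", level=2):
    """Fuse aligned greyscale images in the wavelet domain.

    Approximation coefficients are averaged across the input
    images; for each detail coefficient, the value with the
    largest magnitude is kept (max-abs rule), preserving the
    strongest edges from any of the exposures.
    """
    decomps = [pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
               for img in images]
    # Average the coarse approximation band.
    fused = [np.mean([d[0] for d in decomps], axis=0)]
    # Max-abs selection for each detail band at each level.
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):  # horizontal, vertical, diagonal details
            stack = np.stack([d[lvl][b] for d in decomps])
            idx = np.abs(stack).argmax(axis=0)
            bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```

With tripod-aligned images, the coefficients line up position-for-position, which is exactly what this per-coefficient selection rule relies on.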
Both the mean and the wavelet reconstructions enhance the brightness of the image while keeping noise levels low. However, close examination shows that the wavelet reconstruction preserves much more detail than the mean image in the centre: the shiny areas are shinier, the black areas are consistently black, and the reflection in the guitar body is more visible.