Published on 2nd of May, 2021.
In part 1 we saw how to do non-blind image deconvolution. In part 2 we looked at a couple of good image priors and saw how they can be used for simple blind deconvolution. This worked well for deconvolution of Gaussian point spread functions, but it gave bad artifacts for motion blur kernels. Typical distortions seen in pictures taken by conventional cameras have both a motion-blur and a Gaussian component, so having good deconvolution for motion blur is absolutely essential.
We will explore two ways to improve the deconvolution method. The first is a simple modification of our current method, and the second is a more expensive iterative method for deconvolution that works better for sparse kernels.
Recall that deconvolution comes down to solving the equation

$$y = k * x,$$

where $y$ is the observed (blurred) image, $k$ is the point-spread function, and $x$ is the unobserved (sharp) image. If we take a discrete Fourier transform (DFT) then this equation becomes

$$Y = K \odot X,$$

where capital letters denote the Fourier-transformed variables, and $\odot$ is pointwise multiplication. To solve the deconvolution problem we can then do pointwise division by $K$ and then do the inverse Fourier transform. Because $K$ may have zero or near-zero entries, we can run into numerical instability. A quick fix is to instead multiply by $\bar K / (|K|^2 + \varepsilon)$, giving the solution

$$\hat X = \frac{\bar K}{|K|^2 + \varepsilon}\, Y.$$
This is fast to compute, and gives decent results. This simple method of deconvolution is known as the Wiener filter. In the situation where there is some noise $n$ such that $y = k * x + n$, this corresponds (for a certain value of $\varepsilon$) to minimizing the expected square error $\mathbb{E}\,\|\hat x - x\|^2$. Instead of minimizing the error, we can accept that $\|k * \hat x - y\|^2 \leq \sigma^2$ for some noise level $\sigma$, and then find the smoothest $\hat x$ with that error, to avoid ringing artifacts. Smoothness can be modeled by the Laplacian $\nabla^2 \hat x$. This leads to the problem

$$\min_{\hat x}\ \|\nabla^2 \hat x\|^2 \quad \text{such that} \quad \|k * \hat x - y\|^2 \leq \sigma^2.$$

If $L$ is the Fourier transform of the Laplacian kernel, then the solution to this problem has the form

$$\hat X = \frac{\bar K}{|K|^2 + \lambda |L|^2}\, Y,$$

where the parameter $\lambda$ is determined by the noise level. In the end this is a simple modification of the Wiener filter that should give fewer ringing artifacts. Let’s see what this does in practice.
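To make this concrete, here is a minimal NumPy sketch of both the plain and the Laplacian-regularized Wiener filter described above (the function name and default constants are illustrative, not the exact code used for the figures below):

```python
import numpy as np

def wiener_deconvolve(y, k, eps=1e-2, use_laplacian=False, lam=1e-2):
    """Deconvolve image `y` with kernel `k` using a (regularized) Wiener filter."""
    # Pad the kernel to the image shape; this treats index (0, 0) of `k` as the
    # kernel origin, so a centered kernel should be shifted accordingly first.
    K = np.fft.fft2(k, s=y.shape)
    Y = np.fft.fft2(y)

    if use_laplacian:
        # Fourier transform of the discrete Laplacian kernel.
        laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
        L = np.fft.fft2(laplacian, s=y.shape)
        denom = np.abs(K) ** 2 + lam * np.abs(L) ** 2
    else:
        denom = np.abs(K) ** 2 + eps

    X = np.conj(K) * Y / denom
    return np.real(np.fft.ifft2(X))
```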
In the picture above we tried to deblur a motion blur consisting of a diagonal strip of 10 pixels. The deblurring is done with a kernel of 9.6 pixels (the last pixel on either end is dimmed). We do this both with and without the Laplacian, with the amount of regularization chosen so that the two methods have a similar amount of ringing artifacts. The two methods look very similar, and if anything the method without the Laplacian may look a little sharper. The reason the methods behave so similarly is probably that the Fourier transform of the Laplacian (shown below) has a fairly spread-out distribution, and is therefore not too different from the uniform (constant) regularization used in the plain Wiener filter.
There are many iterative deconvolution methods, and one often-used method in particular is Richardson-Lucy deconvolution. The iteration step is given by

$$x_{t+1} = x_t \odot \left( \frac{y}{k * x_t} * \hat k \right).$$

Here $\hat k$ is the flipped point-spread function; its Fourier transform is the complex conjugate of the Fourier transform of $k$. As first iterate we typically pick $x_0 = y$. Note that if $k$ is normalized to sum to one, then $\mathbf{1} * \hat k = \mathbf{1}$, with $\mathbf{1}$ a constant 1 signal. Therefore if we plug in $x_t = x$ (the true sharp image, for which $y = k * x$) we obtain

$$x_{t+1} = x \odot \left( \frac{k * x}{k * x} * \hat k \right) = x \odot (\mathbf{1} * \hat k) = x.$$
This both shows that $x$ is a fixed point of the Richardson-Lucy algorithm, and at the same time it shows that the algorithm doesn’t necessarily converge, since it could, for example, alternate between two different images. In practice on natural images, if initialized with $x_0 = y$, it does seem to converge. Below we try this algorithm for different numbers of iterations, considering the same image and point spread function as before.
We see very similar ringing artifacts as with the Wiener filter. The number of iterations plays a role similar to that of the regularization constant: the more iterations, the sharper the image, but also the more pronounced the ringing artifacts.
As with the Wiener filter, we need to add a small positive constant when dividing, to avoid division-by-zero errors. Unlike the Wiener filter, however, Richardson-Lucy deconvolution is very insensitive to the amount of regularization used.
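Below is a sketch of the FFT-based Richardson-Lucy iteration described above, with a small constant added to the denominator as just discussed (the function name and defaults are illustrative):

```python
import numpy as np

def richardson_lucy(y, k, n_iter=100, eps=1e-12):
    """FFT-based Richardson-Lucy deconvolution of image `y` with kernel `k`."""
    K = np.fft.fft2(k, s=y.shape)
    K_conj = np.conj(K)  # Fourier transform of the flipped kernel

    def conv(img, F):
        # Circular convolution via the FFT (implicitly assumes a periodic image).
        return np.real(np.fft.ifft2(np.fft.fft2(img) * F))

    x = y.copy()  # initialize with the blurred image
    for _ in range(n_iter):
        ratio = y / (conv(x, K) + eps)
        x = x * conv(ratio, K_conj)
    return x
```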
Richardson-Lucy deconvolution is much slower than the Wiener filter, requiring perhaps 100 iterations to reach a good result. Each iteration takes roughly as long as applying the Wiener filter. Fortunately the algorithm is easy to implement on a GPU, and on the (426, 640) image above each iteration takes only about 1 ms on my computer with a simple GPU implementation.
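The exact GPU library is not essential; assuming something like CuPy (my choice here, purely for illustration), whose `cupy.fft` module mirrors `numpy.fft`, the sketch above carries over almost verbatim:

```python
import cupy as cp

def richardson_lucy_gpu(y, k, n_iter=100, eps=1e-12):
    y = cp.asarray(y)  # move the image to the GPU
    K = cp.fft.fft2(cp.asarray(k), s=y.shape)
    K_conj = cp.conj(K)
    x = y.copy()
    for _ in range(n_iter):
        ratio = y / (cp.real(cp.fft.ifft2(cp.fft.fft2(x) * K)) + eps)
        x = x * cp.real(cp.fft.ifft2(cp.fft.fft2(ratio) * K_conj))
    return cp.asnumpy(x)  # copy the result back to the CPU
```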
One issue that I have so far swept under the rug is the problem of boundary effects. If we convolve an image of shape $(n, m)$ with a kernel of shape $(r, s)$, then the full result is an image of shape $(n + r - 1, m + s - 1)$, and not $(n, m)$. There is typically a ‘fuzzy border’ around the image, which we crop away when displaying, but not when deconvolving. In real life we don’t have the luxury of including this fuzzy border around the image, and this can lead to heavy artifacts when deconvolving an image. Below is the St. Vitus church image blurred with Gaussian blur, and subsequently deblurred using a Wiener filter with and without using the border around the image.
The ringing at the boundary is known as Gibbs oscillation. The reason it occurs is that the deconvolution method implicitly assumes the image is periodic. This is because the convolution theorem (stating that convolution becomes multiplication after a (discrete) Fourier transform) requires the assumption that the signal is periodic. If we were to periodically stack a natural image we would find a sudden sharp transition at the boundary, and this contributes high-frequency components to the Fourier transform, giving the sharp oscillations at the boundary.
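To make the size bookkeeping concrete, here is a small sketch (with an arbitrary box kernel) showing the difference between the full convolution, which keeps the fuzzy border, and the cropped version we would observe in practice:

```python
import numpy as np
from scipy.signal import fftconvolve

image = np.random.rand(426, 640)
kernel = np.ones((15, 15)) / 15**2  # arbitrary 15x15 box blur

full = fftconvolve(image, kernel, mode="full")  # keeps the fuzzy border
same = fftconvolve(image, kernel, mode="same")  # cropped to the input size

print(full.shape)  # (440, 654) = (426 + 15 - 1, 640 + 15 - 1)
print(same.shape)  # (426, 640)
```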
The more we regularize the deconvolution, the smaller the boundary effects. This is because regularization essentially acts as a low-pass filter, getting rid of high-frequency effects. However, this also blurs the image considerably. For Richardson-Lucy deconvolution we essentially have the same problem.
The straightforward way to deal with this problem is to extend the image to mimic the ‘fuzzy’ border introduced by convolution. Or better yet, we should pad the image in such a way that it is as regular as possible when stacked periodically. This is the strategy employed by Liu and Jia: they extend the image to be periodic using three different ‘tiles’ stacked in the pattern shown below. The image is then cropped to the dotted line, and this gives a periodic image. The tiles are optimized such that the image is continuous along each boundary, and such that the total Laplacian is minimized.
There are many similar methods in the literature. Unfortunately, all of these methods are complicated, and very few include a reference implementation; if there is one, it is almost always in Matlab. This seems to be a general problem when reading literature about (de)convolution and image processing: for some reason it is not standard practice in this community to include code with papers, and descriptions of algorithms are often vague and require significant work to translate into working code. I did find a Python implementation of Liu-Jia’s algorithm at this GitHub repository.
Below we see the Laplacian of the image extended using Liu-Jia’s method, using zero padding, and using reflection. Both in the reflected image and in the one using Liu-Jia’s method there are no large Laplacian values around the border, because of the soft transition at the boundary.
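To illustrate the difference, the following sketch compares the maximum Laplacian magnitude for zero padding versus reflective padding on a smooth synthetic image (standing in for a natural photo):

```python
import numpy as np
from scipy.ndimage import laplace

# A smooth synthetic image standing in for a natural photo.
xx, yy = np.meshgrid(np.linspace(0, 1, 640), np.linspace(0, 1, 426))
image = np.sin(4 * xx) * np.cos(3 * yy)

zero_pad = np.pad(image, 32, mode="constant")    # hard jump to zero at the seam
reflect_pad = np.pad(image, 32, mode="reflect")  # soft transition at the seam

# Zero padding produces much larger Laplacian values near the border.
print(np.abs(laplace(zero_pad)).max())
print(np.abs(laplace(reflect_pad)).max())
```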
Next we can check whether these periodic extensions of the image actually reduce boundary artifacts when deconvolving. Below we see the three methods for both the Wiener and Richardson-Lucy (RL) deconvolution in action on an image distorted with Gaussian blur.
We can see that Liu-Jia’s method gives a significant improvement, especially for the Wiener filter. More strikingly, the reflective padding works even better. This is because the convolution that distorted the image implicitly used reflective padding as well. If you change the settings of the convolution blurring the image, then the results will not be as good. Liu-Jia’s method probably works best out of the box on images blurred by natural means.
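A sketch of the reflective-padding approach is simple: pad the blurred image by reflection, deconvolve the padded image, and crop the result back (the helper name and pad width below are illustrative; `deconvolve` can be, for instance, one of the functions sketched above):

```python
import numpy as np

def deconvolve_with_reflect_padding(y, k, deconvolve, pad=32):
    """Pad `y` by reflection, deconvolve, then crop back to the original size."""
    padded = np.pad(y, pad, mode="reflect")
    restored = deconvolve(padded, k)
    return restored[pad:-pad, pad:-pad]
```

The pad width should be at least on the order of the kernel size, so that the periodic wrap-around of the deconvolution never touches the actual image content.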
It is interesting to note that Richardson-Lucy deconvolution suffers heavily in quality regardless of the padding method. If we look at motion blur instead of Gaussian blur, however, the roles are somewhat reversed: for the Wiener filter we have to use fairly aggressive regularization to avoid too many artifacts, whereas RL deconvolution works without problems.
We have reiterated the fact that even non-blind deconvolution can be a difficult problem. The relatively simple Wiener filter in general does a good job, and changing it to use a Laplacian for regularization doesn’t seem to help much. The Richardson-Lucy algorithm often performs comparably to the Wiener filter, although it seems to perform relatively better for sparse kernels like the motion blur kernel we used.
Before, we had completely ignored boundary problems, which is not something we can do with real images. Fortunately, we can deal with these issues by appropriately padding the image. Simply using reflections of the image for padding works quite well, although how well depends on how the image was blurred in the first place. Extending the image to be periodic while minimizing the Laplacian is more complicated, but also works well, and probably performs better on natural images.
In the next (and hopefully final) part we will dive into some simple approaches to blind deconvolution, starting with a modification of the Richardson-Lucy algorithm, and then trying to use what we learned about image priors in part 2.