Deconvolution of a High-Resolution Lunar Image

By Juan Conejero (PTeam)
With original raw data acquired by Vicent Peris (PTeam)



Introduction

The Deconvolution and WienerDeconvolution processing tools have been introduced in PixInsight Standard Core version 1.0.25. This document describes a step-by-step processing example and provides some introductory reference material about these important tools.

The new Deconvolution tool implements state-of-the-art regularized Richardson-Lucy and Van Cittert deconvolution algorithms. Using wavelet-based multiscale analysis techniques, these algorithms separate significant image structures from the noise at each deconvolution iteration. Significant structures are kept and enhanced, while the noise is suppressed or attenuated. This allows for simultaneous deconvolution and noise reduction, which leads to robust, adaptive deconvolution procedures that in most cases outperform other implementations based on classical or less sophisticated design principles and techniques. In addition, our implementation includes a deringing algorithm that helps fix the well-known ringing problem (e.g. dark halos around stars).

Our implementation of regularized deconvolution methods is based on many algorithms and ideas presented by Jean-Luc Starck and Fionn Murtagh in their book Astronomical Image and Data Analysis (first and second editions, Springer, 2002 and 2006, respectively), with some additions necessary to build more versatile and adaptable tools, as well as slight modifications in the way noise is evaluated at each iteration.

The WienerDeconvolution tool has been written by our team member Carlos Milovic. Wiener deconvolution is a fast deconvolution algorithm that works in the Fourier domain. It is an interesting tool that can be very useful to process any kind of image with a relatively high signal-to-noise ratio. Wiener deconvolution is particularly suitable for lunar and planetary imaging. It is also good for shaping star profiles, for correcting less-than-perfect stellar images caused by minor optical aberrations or slight tracking errors, and for quickly finding and testing PSFs.
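Conceptually, Wiener deconvolution amounts to a single regularized division in the Fourier domain, which is why it is so fast. The following Python sketch illustrates the classical constant-ratio Wiener filter; it is an illustration of the general technique, not WienerDeconvolution's actual implementation, and the helper names (`pad_to`, `wiener_deconvolve`) and the meaning of `k_noise` are our own assumptions:

```python
import numpy as np

def pad_to(psf, shape):
    """Embed a small PSF kernel at the center of a zero image of `shape`."""
    out = np.zeros(shape)
    py, px = psf.shape
    cy, cx = shape[0] // 2, shape[1] // 2
    out[cy - py // 2:cy - py // 2 + py, cx - px // 2:cx - px // 2 + px] = psf
    return out

def wiener_deconvolve(image, psf, k_noise=1e-5):
    """One-step Wiener deconvolution in the Fourier domain (sketch).

    k_noise acts as a constant noise-to-signal power ratio; larger values
    regularize more strongly at frequencies where the PSF response is weak.
    """
    # Center the PSF at the origin for the FFT convention, then transform.
    H = np.fft.fft2(np.fft.ifftshift(pad_to(psf, image.shape)))
    G = np.fft.fft2(image)
    # Classical Wiener filter: conj(H) / (|H|^2 + k), applied per frequency.
    F = np.conj(H) / (np.abs(H) ** 2 + k_noise) * G
    return np.real(np.fft.ifft2(F))
```

With a very small `k_noise` and noise-free data this approaches a direct inverse filter; increasing `k_noise` trades resolution for noise suppression.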

Both tools include selectable Gaussian, motion blur and custom PSFs, as well as a dynamic range extension feature that is very useful to fix saturated areas.

For general information about deconvolution and its applications to astronomical images, there are many resources in the literature and on the Internet. You can start with a good general description on Wikipedia. If you are interested in deconvolution of deep-sky images, we recommend reading a specific processing example on our website. Finally, if you want to learn more about wavelet-based deconvolution and other multiscale techniques, Jean-Luc Starck's website provides interesting links and some excellent online tutorials. Another essential web resource is multiresolution.com.


The Image

In this document we present a step-by-step processing example with the new deconvolution tools in PixInsight. We'll use a high-resolution lunar image acquired by Vicent Peris under good seeing conditions.

Mouseover comparison between the final deconvolved image and the original. Images scaled to one half of the original size.
Images Copyright © 2007 by Vicent Peris.

Here is a summary of relevant data for this image:

The signal-to-noise ratio of the original image is quite high, and only a moderate amount of uniform noise is present at the smallest scales. This makes it a relatively easy target for deconvolution and a good candidate to build an introductory processing example.


Files and Links

Here you can download full-size JPEG versions, a crop of the original raw image, and a process icon file:

Please note that the above full-size "raw" image is an 8-bit JPEG version that cannot be used to reproduce the actual deconvolution procedures described here. For testing purposes, you should use the linear 32-bit crop.


Step 1: Selecting a Deconvolution Algorithm

The Deconvolution tool in PixInsight provides two iterative deconvolution algorithms: Richardson-Lucy and Van Cittert. Both algorithms are available with and without regularization. In general, Richardson-Lucy is the algorithm of choice for deconvolution of deep-sky images. On the other hand, the Van Cittert algorithm is extremely efficient for deconvolution of high-resolution lunar and planetary images due to its ability to enhance very small image structures. Of course, this is just a generalization based on our experience; it shouldn't be taken as a rule for processing every image in any particular way.
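For reference, the plain (non-regularized) update rules of both iterative algorithms are very simple. Here is a minimal one-dimensional Python sketch of the textbook iterations, assuming a normalized PSF; this is illustrative pseudocode, not PixInsight's implementation (which adds the regularization described later):

```python
import numpy as np

def convolve(f, psf):
    """Linear convolution trimmed to the input length (1-D for clarity)."""
    return np.convolve(f, psf, mode='same')

def van_cittert(g, psf, iterations):
    """Plain Van Cittert: f <- f + (g - psf*f).  Additive correction."""
    f = g.copy()
    for _ in range(iterations):
        f = f + (g - convolve(f, psf))
    return f

def richardson_lucy(g, psf, iterations):
    """Plain Richardson-Lucy: f <- f * (psf_flipped * (g / (psf*f)))."""
    f = np.full_like(g, g.mean())      # flat, positive starting estimate
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        ratio = g / np.maximum(convolve(f, psf), 1e-12)  # avoid /0
        f = f * convolve(ratio, psf_flipped)
    return f
```

Both iterations sharpen a blurred signal toward the original; without regularization, any noise in `g` is amplified along with the signal, which is exactly the problem the regularized versions address.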

In Figure 1 you can see some comparisons between the results of both algorithms applied to a small crop of the image used for this example, plus the Wiener deconvolution algorithm. An important drawback of plain Van Cittert deconvolution is noise intensification. In the presence of even a very small amount of small-scale noise, this algorithm easily leads to a useless result after a relatively small number of iterations (Figure 1.b). Our regularized implementation fixes this problem completely (Figure 1.c), but special care must be taken in fine-tuning regularization parameters to avoid noise intensification without degrading the algorithm's performance. After a sufficiently large number of iterations, the regularized Richardson-Lucy algorithm also does a good job. Finally, Wiener deconvolution with careful tuning of parameters (Figure 1.e) rivals the performance of regularized Van Cittert (1.c) for high SNR data; you may want to save both crops and view them enlarged 4:1 to evaluate the differences.

Generally, we tend to favor regularized Van Cittert deconvolution for lunar images. Compared to Richardson-Lucy, regularized Van Cittert usually yields sharper results and much better separation of tiny details. On the other hand, Wiener deconvolution is a close competitor for high SNR data and, considering its speed and lower memory consumption, an interesting alternative that should be taken seriously.

Figure 1— Comparison between several deconvolution algorithms with a crop of the original image enlarged 2:1. In all cases the PSF used is of the Gaussian family with sigma=2.3 and shape=1.0.


1.a— Raw image


1.b— Van Cittert, 75 iterations.


1.c— Regularized Van Cittert, 200 iterations.


1.d— Regularized Richardson-Lucy, 300 iterations.


1.e— Wiener deconvolution, k-noise = 8×10⁻⁶.


Step 2: Conversion To 32-bit Floating Point Format

In general, we prefer this format to work with complex algorithms and procedures. The PixInsight platform supports 32 and 64-bit floating point formats, as well as 8, 16 and 32-bit integer formats. Of course, 8-bit images should never be used for complex tasks like deconvolution —they shouldn't be used at all, in fact, for any nontrivial processing task. The 16-bit format is normally just sufficient, but a 32-bit format is always preferable, except when severe memory restrictions apply. The 64-bit format is reserved to deal with extremely large dynamic ranges and very complex procedures; of course it isn't necessary in this case. The SampleFormatConversion process (Image category) can be used in PixInsight to transform images to any supported data format. Command-line-oriented users will find the f32 command (a standard alias for SampleFormatConversion -f32) very useful, and much faster.
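Conceptually, the conversion simply rescales integer sample values to the normalized [0,1] real range, gaining numerical headroom for iterative algorithms. A trivial Python illustration of the idea behind the conversion (`to_float32` is a hypothetical helper, not a PixInsight API):

```python
import numpy as np

def to_float32(image_uint16):
    """Convert a 16-bit integer image to normalized 32-bit floating point.

    Integer samples in [0, 65535] map onto the real range [0.0, 1.0],
    which is conceptually what SampleFormatConversion does.
    """
    return image_uint16.astype(np.float32) / 65535.0
```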

Figure 2— Conversion to 32-bit floating point format with the SampleFormatConversion process (click on image to enlarge).


Step 3: Determine the PSF

The choice of an appropriate point-spread function (PSF) is crucial in the deconvolution task, since it characterizes the effects of the instrumental limitations and atmospheric perturbations that we are trying to overcome. In the absence of point or point-like sources, finding the correct PSF for a lunar image is basically a trial-and-error task, unless we have some a priori knowledge that can help (e.g., accurate seeing measurements), which is not the case in the present example.


Standard Deviation of a Gaussian PSF

This problem is actually very easy to solve with the tools currently available in PixInsight. We start by trying out several Gaussian PSFs with different standard deviations. Some comparisons can be seen in Figure 3.

Figure 3— Comparison between several Gaussian PSFs. Regularized Van Cittert deconvolution. All PSFs are Gaussian functions (shape=2).


3.a— Raw image


3.b— Standard deviation = 2.0, 75 iterations.


3.c— Standard deviation = 2.5, 50 iterations.


3.d— Standard deviation = 3.0, 25 iterations.

Figure 3 shows the results of some Gaussian PSFs with the regularized Van Cittert algorithm. Regularization parameters (we'll see later how to define them) are identical in all cases. In our opinion, the PSF with sigma=2.0 (Figure 3.b) is too small: it acts on small-scale features but doesn't seem able to boost image structures as we expect for this image. We definitely need something between sigma=2.5 (Figure 3.c) and 2.0 (Figure 3.b); probably closer to 2.5. Sigma=3 (Figure 3.d) seems to act on structures that are too large (note that we were able to use only 25 iterations for Figure 3.d to get a reasonable result, in contrast to 50 iterations for Figure 3.c). With increasing sigma, on the other hand, some ringing artifacts start becoming objectionable (look at dark regions inside craters).


Varying the Shape of the PSF

So let's refine our search around a Gaussian PSF with sigma=2.5 as a starting point. How about the shape parameter, available in the Deconvolution tool for Gaussian PSFs? This parameter governs the kurtosis of the PSF distribution, or in less technical terms, the peakedness or flatness of the PSF's profile. When shape < 2, we have a leptokurtic PSF with a prominent central peak. When shape > 2, the PSF is platykurtic, with a flatter profile. When shape = 2, we have a pure normal (or Gaussian) distribution. Strictly speaking, when shape ≠ 2 we are not defining a Gaussian distribution at all; however, we informally speak of the Gaussian family of PSFs because their formulations are almost identical.

Peaked PSFs (shape < 2) tend to improve resolution at relatively larger dimensional scales, while flat PSFs (shape > 2) do the same at comparatively smaller scales. This is a somewhat counter-intuitive behavior; to understand it, remember that deconvolution tries to undo a previous convolution with the PSF that you are selecting. With the shape parameter, we can define PSFs of the Gaussian family with high accuracy. Figure 4 includes a few illustrative examples.
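A PSF of this family can be generated with the standard generalized-Gaussian formulation. The sketch below is our assumption about the parametrization (PixInsight's internal normalization may differ), but it reproduces the behavior of the shape parameter described above:

```python
import numpy as np

def generalized_gaussian_psf(size, sigma, shape=2.0):
    """Radially symmetric PSF of the 'Gaussian family' (assumed form).

    shape = 2 gives a true Gaussian; shape < 2 a peaked (leptokurtic)
    profile; shape > 2 a flatter (platykurtic) profile.
    """
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r = np.hypot(xx, yy)                     # radial distance from center
    psf = np.exp(-0.5 * (r / sigma) ** shape)
    return psf / psf.sum()                   # normalize to unit volume
```

For example, `generalized_gaussian_psf(21, 2.3, shape=1.0)` builds a discrete approximation of the peaked PSF finally chosen for this image.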

Figure 4— Comparison between several PSFs of the Gaussian family with different shape values. Regularized Van Cittert deconvolution, 50 iterations and PSF standard deviation = 2.5.


4.a— Standard deviation = 2.5, shape = 0.75 (strongly peaked).


4.b— Standard deviation = 2.5, shape = 1.5 (peaked).


4.c— Standard deviation = 2.5, shape = 2.0 (normal distribution).

4.d— Standard deviation = 2.5, shape = 3.0 (flat).

Since the true PSF is unknown (and since we are not attempting a scrupulously scientific work), the choice is entirely up to the user. Of course, common sense and good taste are necessary companions, as they are in all fields of image processing. On the other hand, nothing prevents you from combining two (or more) resulting images after applying deconvolution with different PSFs. This is very easy to do with PixelMath in PixInsight.

It is worth pointing out that Wiener deconvolution can be used very efficiently to find correct PSF parameters, even if we plan to use a regularized algorithm later. This is a good idea because the WienerDeconvolution tool is extremely fast (it implements a one-step, FFT-based algorithm) and usually performs quite well for lunar and planetary work. It can save us a good amount of time during testing stages.

Our final decision has been a PSF of the Gaussian family with standard deviation = 2.3 and shape = 1.0. In our opinion, this PSF represents a good balance between resolution improvement at small scales and overall brightness/contrast enhancement. The result for the same crop used above, after 200 iterations of the regularized Van Cittert algorithm, can be seen in Figure 5.

Figure 5— Mouseover comparison between a crop of the deconvolved image (regularized Van Cittert, 200 iterations, PSF sigma=2.3, shape=1.0) and the original raw image.


Step 4: Regularization Parameters

The same set of regularization parameters is available for the regularized versions of the Van Cittert and Richardson-Lucy deconvolution algorithms. Basically, these parameters define how the algorithms perform separation between significant image structures and the noise at each deconvolution iteration, and how noise is controlled and suppressed during the whole procedure.


The Regularization Procedure

Before describing the regularization parameters and their roles, it is helpful to understand at least the basics of some key principles behind regularized deconvolution algorithms. This is important to successfully apply our Deconvolution tool with consistent results.

Our implementation of regularized deconvolution utilizes multiscale analysis techniques based on wavelets. Wavelets are mathematical functions that can be used to study data sets (such as images) by dividing them into a number of frequency components that can be analyzed as a function of their characteristic scales.

At each iteration, our deconvolution algorithms perform wavelet decompositions of several working images into a number of wavelet layers. Each wavelet layer isolates image structures corresponding to a given scale. Informally speaking, this means that each wavelet layer contains only those image structures that fall into a given range of sizes, as represented in the wavelet space. For example, if we decompose into four wavelet layers using the dyadic scaling sequence, the first layer will contain only one-pixel image structures, the second layer only structures between one and two pixels, the third layer between two and four pixels, and the fourth layer between four and eight pixels. Of course, this scheme can be extended with further layers at scales of 16, 32, 64, 128, ... pixels, but for our purposes four layers are sufficient.
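The decomposition described above is the à trous (starlet) transform of Starck and Murtagh. A minimal Python sketch follows; it is illustrative, not the Deconvolution tool's internal code:

```python
import numpy as np

def atrous_layers(image, n_layers):
    """À trous (starlet) decomposition into wavelet layers plus a residual.

    Uses the B3-spline kernel [1,4,6,4,1]/16 with 2**j - 1 zeros inserted
    between taps at scale j (the 'holes').  The image must be larger than
    the largest dilated kernel (4 * 2**(n_layers - 1) + 1 samples).
    """
    base = np.array([1, 4, 6, 4, 1]) / 16.0
    layers = []
    c = image.astype(float)
    for j in range(n_layers):
        k = np.zeros(4 * 2 ** j + 1)
        k[::2 ** j] = base                  # dilated smoothing kernel
        # Separable smoothing: rows, then columns.
        smooth = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, c)
        smooth = np.apply_along_axis(lambda col: np.convolve(col, k, 'same'), 0, smooth)
        layers.append(c - smooth)           # structures at scale ~2**j pixels
        c = smooth
    layers.append(c)                        # large-scale residual
    return layers
```

A key property of this transform is that summing all returned layers reconstructs the image exactly, which is what lets a regularized algorithm treat the noise per scale and then recombine the layers without loss.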

Then the algorithms search for significant image structures in each wavelet layer. Structure significance is a function of certain statistical and morphological properties of the data, as represented by the wavelet transform. Normally, most of the noise present in the data is gathered by the first wavelet layer, where it can be identified and isolated very easily. This is what we call small-scale noise (also known as high-frequency noise). Noisy structures are also frequently present at the scale of two pixels, sometimes in the third wavelet layer, and rarely in the fourth layer. For each wavelet layer, those structures that have not been labeled as significant are considered noise, and a special noise reduction procedure is applied to them in order to attenuate or even suppress them completely.

Since we effectively identify and remove the noise from our working data set at each deconvolution iteration, deconvolution can continue across subsequent iterations without any noise intensification, provided that the regularization scheme is well designed and implemented (you can take this for granted in our case), and of course that appropriate regularization parameters have been defined to control the procedure (this is your responsibility).


Available Parameters

Having explained how regularization works, we can now describe the regularization parameters available in our Deconvolution tool:

The most important and critical regularization parameters are, in order: number of wavelet layers, noise threshold, and noise reduction. High-SNR images usually require just two wavelet layers for regularization, and often even a single layer is sufficient. This is because the noise typically occurs at the scales of one and two pixels. Sometimes there are noise structures at the scale of four pixels as well, and only rarely do we have to apply noise reduction at the scale of 8 pixels.

The noise threshold parameter is critical. Too high a threshold will destroy significant structures and/or degrade the performance of the deconvolution procedure. Too low a threshold allows some noise to survive across deconvolution iterations. This is especially important for the regularized Van Cittert algorithm.
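The roles of these two parameters can be sketched as a k-sigma threshold applied to one wavelet layer. The following Python fragment is a simplified illustration under our own assumptions (Deconvolution's actual structure detection is more elaborate, and the exact mapping of the noise reduction parameter onto an attenuation factor is our guess):

```python
import numpy as np

def regularize_layer(layer, noise_threshold, noise_reduction):
    """Sketch of k-sigma regularization of a single wavelet layer.

    Coefficients below noise_threshold * sigma are treated as noise and
    attenuated by noise_reduction (1.0 removes them completely).  The
    noise sigma is estimated robustly from the median absolute deviation.
    """
    sigma = np.median(np.abs(layer - np.median(layer))) / 0.6745
    noise_mask = np.abs(layer) < noise_threshold * sigma
    out = layer.copy()
    out[noise_mask] *= (1.0 - noise_reduction)   # attenuate non-significant coefficients
    return out
```

This makes the trade-off concrete: raising the threshold widens the mask, so more coefficients (eventually including real structures) get attenuated.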


Initial Analysis with Wavelets

In the image used for this example, as we have said, the noise is present only at the smallest scales, that is, almost exclusively in the first wavelet layer. This can easily be verified by inspecting the first wavelet layers with the ATrousWaveletTransform tool working in layer preview mode, as shown in Figure 6.


6.a—First wavelet layer (scale of one pixel).


6.b—Second wavelet layer (scale of two pixels).

Figure 6— Previewing the first two wavelet layers to identify small-scale noise. Click on the thumbnail images to enlarge. 

Figure 6.a shows that at the scale of one pixel the image contains noise structures whose spatial distribution is essentially uniform. This can be a problem when applying deconvolution, and is a good reason to use a regularized algorithm. The second wavelet layer (6.b) includes no noticeable noise, and obviously there is no noise at larger scales. This initial study of the noise distribution is very helpful to define deconvolution regularization parameters, but we warn you that there is a little surprise in store; if you are impatient, scroll down to Figure 8, but don't forget to return here.


Adjusting Noise Threshold and Noise Reduction

In Figure 7 you can compare the results of several noise threshold values. In all cases, 100 iterations of the regularized Van Cittert algorithm have been applied with a PSF of the Gaussian family with sigma = 2.3 and shape = 1.0.

Figure 7— Comparison between several noise threshold values (images enlarged 2:1). Regularized Van Cittert deconvolution, 100 iterations, PSF of the Gaussian family with sigma = 2.3 and shape = 1.0. In all cases regularization parameters are as follows:

  • Number of wavelet layers: 2
  • First layer: noise threshold = as specified for each case; noise reduction = 1.0
  • Second layer: noise threshold = 2.0, noise reduction = 0.7 (default values)


7.a— Raw image.


7.b— 1st layer: Noise threshold = 1.0, noise reduction = 1.0.


7.c— 1st layer: Noise threshold = 2.0, noise reduction = 1.0.


7.d— 1st layer: Noise threshold = 4.0, noise reduction = 1.0.

As can be seen in Figure 7, a noise threshold value of at least 4 sigma is necessary for the first wavelet layer to completely cut propagation of small-scale noise across deconvolution iterations. We have applied a noise reduction value of 1.0, which means that all detected noisy structures are completely removed at each iteration. This is our recommended noise reduction value for the first wavelet layer with the regularized Van Cittert algorithm.

What about the second wavelet layer? Our initial study in the wavelet space (see Figure 6) showed apparently no noise at scales of 2 pixels and above. However, as Figure 8 demonstrates, that study only reflects the initial state of the image, and doesn't necessarily represent what happens after many deconvolution iterations.

Figure 8— Comparison between regularization on two and one wavelet layers (images enlarged 4:1). Regularized Van Cittert deconvolution, 200 iterations, PSF of the Gaussian family with sigma = 2.3 and shape = 1.0.


8.a—With regularization applied to the first two wavelet layers:


8.b— Same as 8.a, but with regularization applied to the first wavelet layer only (number of wavelet layers = 1).

Considering the instrumental and environmental conditions under which the original data were acquired, we think that most of the smallest structures shown in Figure 8.b are not real: they are just noisy structures that have been enhanced after 200 deconvolution iterations. In our opinion, Figure 8.a shows a more honest result. Consequently, our final choice is to apply regularization to the first two wavelet scales, with the parameters shown in Figure 8.a.


Step 5: Number of Deconvolution Iterations

When dealing with high SNR images, as in the present example, regularized deconvolution algorithms work essentially with noise-free data most of the time. This allows us to apply large numbers of deconvolution iterations. If you are accustomed to working with implementations of the classical (plain) Van Cittert and Richardson-Lucy algorithms, or with other simpler implementations, you will probably be surprised to hear us speak of 200 or 300 Van Cittert iterations, for example.

Once a reasonable set of regularization parameters has been found, regularized deconvolution guarantees that the overall process is convergent. Among other nice things, this means that the final image always contains less noise and more significant structures than the input image. For this reason, we can apply so many iterations: because the noise is being tightly controlled during the whole procedure.

Of course, more restrictions apply to noisier input data, and despite the efficiency of regularization, deconvolution cannot be applied as liberally.

As usual, let's look at a practical example. Figure 9 presents a mouseover comparison showing the results of applying 10, 25, 50, 100, 200 and (just because these wild things have to be done from time to time) 500 iterations. Watch the images carefully, because the differences, especially between the last three images, are quite subtle. As you can verify, there is a clear and quick improvement up to 100 iterations. Above this level, the regularized algorithm slows down considerably in terms of resolution improvement versus number of iterations; this is just a practical consequence of the fact that regularized deconvolution always converges to a stable solution.

Figure 9— Mouseover comparison between different amounts of deconvolution iterations (images enlarged 2:1). Regularized Van Cittert deconvolution, PSF of the Gaussian family with sigma = 2.3 and shape = 1.0. Regularization parameters are as specified on Figure 8.a, except for 500 iterations, where noise reduction has been increased to 0.85 for the second wavelet layer.

Number of iterations: [10]  [25]   [50]   [100]   [200]   [500]


Step 6: Fixing Saturated Areas: Dynamic Range Extension

An important drawback common to all edge-enhancement procedures is saturation: as a result of improving local contrast, some structures that were already highly contrasted in the original image may become completely saturated on the transformed image. Of course, deconvolution is no exception.

Saturation occurs when some processed pixels take values beyond the available dynamic range. When this happens, the corresponding pixels become pure black or pure white, causing a total loss of the information they originally carried. Figure 10 shows a good example of saturation: the bright side of a crater has been severely saturated after deconvolution.

Figure 10— Saturation of a bright image feature after deconvolution (images enlarged 4:1). Regularized Van Cittert deconvolution, 200 iterations, PSF of the Gaussian family with sigma = 2.3 and shape = 1.0.


10.a— Raw image. The bright feature at the center is not saturated in the original: there are no white pixels.


10.b— Deconvolved image. Note the saturated bright feature: some pixels are now pure white, with total loss of information.

Our solution to this problem is conceptually straightforward: extend the effective dynamic range of the transformed image to make room for those pixels that would otherwise take too high (or too low) values. This is the dynamic range extension section found on many standard processing tools in PixInsight. For example, UnsharpMask and ATrousWaveletTransform include dynamic range extension parameters, as do, of course, Deconvolution and WienerDeconvolution.
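Conceptually, dynamic range extension is just a linear remapping of an extended range back onto [0, 1], instead of clipping. The following Python sketch illustrates the idea under our own assumptions; the exact formulation used by the PixInsight tools may differ:

```python
import numpy as np

def extend_dynamic_range(image, high_extension=0.0, low_extension=0.0):
    """Sketch of dynamic-range extension (hypothetical formulation).

    Instead of clipping pixels outside [0, 1], remap the extended range
    [-low_extension, 1 + high_extension] back onto [0, 1], so
    out-of-range values produced by deconvolution are preserved.
    """
    low = -low_extension
    high = 1.0 + high_extension
    return np.clip((image - low) / (high - low), 0.0, 1.0)
```

Note how pixels that deconvolution pushed above 1.0 remain distinguishable after remapping, while plain clipping would have collapsed them all to pure white.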

You can see our dynamic range extension algorithm in action in Figure 11. The bright feature no longer has any white pixels, so the whole data range of the original image is now represented.

Figure 11— Dynamic range extension applied to fix saturation with the Deconvolution tool:

Mouseover: No dynamic range extension.
Mouseout: High dynamic range extension = 0.300.

It is important to point out that dynamic range extension does not vary the linearity (or nonlinearity) of the original data.

A side effect of high dynamic range extension is that the overall brightness of the image decreases. This is a logical consequence of extending the effective dynamic range at the highlights, which shifts the original midtones toward the shadows. However, this is very easy to fix with a simple nonlinear transform, such as HistogramTransform or CurvesTransform.
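For example, the midtones transfer function applied by HistogramTransform can undo this brightness shift. A Python sketch of the standard midtones transfer function (MTF), where a midtones balance below 0.5 brightens the image:

```python
def mtf(m, x):
    """Midtones transfer function with midtones balance m in (0, 1).

    m < 0.5 brightens, m > 0.5 darkens, m = 0.5 is the identity;
    mtf(m, m) == 0.5 for any m.
    """
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
```

Applying `mtf` with a suitably small `m` after a strong dynamic range extension restores the original midtone level while keeping shadows and highlights anchored.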


Conclusion

We have applied our new Deconvolution and WienerDeconvolution tools to deconvolve a high-resolution lunar image in PixInsight. In our opinion, the results are excellent and demonstrate the performance of state-of-the-art deconvolution algorithms and implementations.

This has been an exercise to introduce these important tools along with some guidelines for their practical application. Although many of the procedures and techniques described in this document are directly applicable to deconvolution of planetary and deep-sky images, those fields pose specific problems that also require specific solutions, techniques, and learning resources.

In an upcoming document we'll present a step-by-step processing example of deconvolution with a deep-sky image. We'll explain how our Deconvolution tool can be used to deconvolve raw linear images, and how our deringing algorithm fixes the ringing artifacts problem (the Gibbs phenomenon), an important drawback of deconvolution applied to deep-sky images.