Despite its advantages, the R-L method has some serious shortcomings. In particular, noise amplification can be a problem. This is a generic problem for all maximum likelihood techniques, which attempt to fit the data as closely as possible. If one performs many R-L iterations on an image containing an extended object such as a galaxy, the extended emission usually develops a ``speckled'' appearance (Fig. 1). The speckles are not representative of any real structure in the image, but are instead the result of fitting the noise in the data too closely. In order to reproduce a small noise bump in the data it is necessary for the unblurred image to have a very large noise spike; pixels near the bright spike must then be very black (near zero brightness) in order to conserve flux.
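The mechanism is easy to see in the update formula itself. A minimal sketch of the standard R-L iteration in Python (using NumPy and SciPy; the variable names and the flat starting model are our own choices, not from the text):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter):
    """Plain Richardson-Lucy deconvolution, with no noise control.

    Each pass multiplies the current estimate by the blurred ratio of
    data to model, so the fit to the data keeps tightening -- including
    onto the noise, which is what produces the speckles after many passes.
    """
    psf = psf / psf.sum()                    # normalize to conserve flux
    psf_mirror = psf[::-1, ::-1]             # flipped (transposed) PSF
    estimate = np.full_like(data, data.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)  # guard divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Because every factor in the update is non-negative, the estimate can never become negative, which is the positivity constraint discussed next.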
The only limit on the amount of noise amplification in the R-L method is the requirement that the image not become negative. Thus, once the compensating holes in the image are pushed down to zero flux, nearby spikes cannot grow any further and noise amplification ceases. The positivity constraint alone is sufficient to control noise amplification in images of star fields on a black background; in that case one can perform thousands of R-L iterations without generating an unacceptable amount of noise. However, for smooth objects observed at low signal-to-noise, even a modest number of R-L iterations (20--30) can produce objectionable noise.
The usual practical approach to limiting noise amplification is simply to stop the iteration when the restored image appears to become too noisy. However, the question of where to stop is a difficult one. The approach suggested by Lucy (1974) was to stop when the reduced $\chi^2$ between the data and the blurred model is about 1 per degree of freedom. Unfortunately, one does not really know how many degrees of freedom have been used to fit the data. If one stops after a very few iterations then the model is still very smooth and the resulting $\chi^2$ should be comparable to the number of pixels. If one performs many iterations, however, then the model image develops a great deal of structure and so the effective number of degrees of freedom used is large; in that case, the fit to the data ought to be considerably better. There is no criterion for the R-L method that tells how close the fit ought to be. Note that there is such a criterion built into the MEMSYS 5 maximum entropy package (Gull and Skilling 1991), and the pixon approach of Piña and Puetter (1993) uses similar ideas.
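Lucy's stopping rule can be monitored during the iteration. A hedged sketch, assuming Gaussian pixel noise with known variance `var` and naively counting one degree of freedom per pixel — which is exactly the approximation whose breakdown is described above:

```python
import numpy as np
from scipy.signal import fftconvolve

def reduced_chi2(data, model, psf, var):
    """Chi-squared per pixel between the data and the blurred model.

    Using the pixel count as the number of degrees of freedom is only
    an approximation: a heavily iterated model has spent many effective
    degrees of freedom fitting the data, so its fit should by then be
    better than chi^2/N = 1.
    """
    blurred = fftconvolve(model, psf, mode="same")
    return np.sum((data - blurred) ** 2 / var) / data.size
```

One would iterate until this statistic drops to about 1, bearing in mind that the threshold itself is the weak point of the criterion.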
Another problem is that the appropriate number of iterations often differs from one part of the image to another. It may require hundreds of iterations to get a good fit to the high signal-to-noise image of a bright star, while a smooth, extended object may be fitted well after only a few iterations. In Fig. 1, note how the images of both the central star and the bright star at the top center continue to improve as the number of iterations increases, while the noise amplification in the extended nebulosity grows much worse. Thus, one would like to be able to slow or stop the iteration automatically in regions where a smooth model fits the data adequately, while continuing to iterate in regions where there are sharp features (edges or point sources).
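One illustrative way to express this idea — our own sketch, not the method of any of the papers cited here — is to damp the multiplicative correction wherever the residual is already consistent with the noise, so converged regions stop changing while sharp, poorly fitted features keep iterating:

```python
import numpy as np

def damped_ratio(data, blurred, sigma, n_sigma=1.0):
    """Replace the R-L correction factor by 1 wherever the blurred
    model already fits the data to within `n_sigma` standard deviations.

    A pixel whose correction factor is 1 is left unchanged by the
    multiplicative update, so the iteration effectively stops there
    while continuing elsewhere.
    """
    ratio = data / np.maximum(blurred, 1e-12)
    converged = np.abs(data - blurred) < n_sigma * sigma
    return np.where(converged, 1.0, ratio)
```

In a real scheme one would want a smooth transition rather than this hard threshold, to avoid discontinuities at the mask boundary.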
Another approach to controlling noise amplification is to smooth the final restored image. This method has been developed and mathematically justified by Snyder and his co-workers (Snyder and Miller 1985, Snyder et al. 1987). Unfortunately, for HST images the amount of smoothing required to reduce the noise amplification is very large. Fig. 2 shows the effect of various amounts of smoothing on the restored planetary nebula image. By the time the noise amplification in the nebulosity has been reduced to a visually acceptable level, the images of stars have been grossly blurred. For most purposes, one must pay far too high a price to avoid noise amplification using this method.
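The trade-off is easy to reproduce with generic Gaussian smoothing (a sketch standing in for the smoothing discussed in the text, not necessarily the specific method of Snyder and co-workers; the kernel width is arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_restoration(restored, sigma):
    """Post-restoration Gaussian smoothing.

    The same kernel acts on noise spikes and on stars alike: a sigma
    large enough to suppress the speckles in extended emission also
    spreads the flux of every point source over the kernel width.
    """
    return gaussian_filter(restored, sigma=sigma)
```

Varying `sigma` reproduces the dilemma of Fig. 2: small kernels leave the speckles, large kernels re-blur the stars.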