The restoration of Hubble Space Telescope (HST) image data is currently performed using several different algorithms. One of these is the Richardson-Lucy (R-L) algorithm (Richardson 1972, Lucy 1974), a realization of the Expectation-Maximization (EM) algorithm (Dempster et al. 1977). The R-L method is iterative and converges to the maximum likelihood solution (Shepp and Vardi 1982). Unfortunately, because of the noise inherent in the problem, this maximum likelihood solution may not be the most visually pleasing one. This noise amplification problem has been observed in other iterative algorithms, and stopping the algorithm before it reaches convergence has been shown to be a valid means of regularizing the solution (Biemond et al. 1990). The reasoning behind stopping the iterations is as follows: the algorithm attempts to fit the solution to the observed data, but the observed data are degraded by noise. Thus, at some point in the process, the solution is being fit more to the noise than to the image data, and the iterations should therefore be stopped at the point where there is a balance between the fit to the image data and the amplification of the noise. The randomized generalized cross-validation (RGCV) criterion has been used as a stopping rule for linear iterative restoration algorithms (Reeves and Perry 1992). In this paper, we show how the RGCV criterion can be applied to nonlinear algorithms, specifically the R-L method.
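The R-L iteration and the role of early stopping can be illustrated with a minimal 1-D sketch. The function name, the flat starting estimate, and the example PSF below are illustrative choices, not taken from the paper; real HST restorations operate on 2-D images with a measured point-spread function.

```python
import numpy as np

def richardson_lucy(g, psf, n_iter):
    """Richardson-Lucy deconvolution, 1-D sketch.

    g: observed (blurred, noisy) signal; psf: blur kernel summing to 1;
    n_iter: iteration count -- stopping early regularizes the estimate.
    """
    psf_mirror = psf[::-1]                      # adjoint (flipped) kernel
    f = np.full_like(g, g.mean())               # flat, nonnegative start
    for _ in range(n_iter):
        blurred = np.convolve(f, psf, mode="same")
        ratio = g / np.maximum(blurred, 1e-12)  # guard against divide-by-zero
        f = f * np.convolve(ratio, psf_mirror, mode="same")
    return f
```

Each iteration multiplies the current estimate by the back-projected ratio of the data to the model prediction; run for too many iterations on noisy data, the update fits the noise, which is what motivates a stopping rule such as RGCV.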
For our purposes, the observed image is modeled as the result of a blur operation upon the original image, with an additive term to model any noise from the physical system or the observation system. Therefore,

$$g = Hf + n,$$

where the original image $f$, the observed image $g$, and the noise $n$ are ordered as vectors and $H$ is the blur operation. The problem is then to determine $f$ given the observation $g$ and, often, the blur operation $H$. Some information about the statistical nature of the noise may also be available. The problem is generally ill-posed.
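The ill-posedness can be seen in a small discrete version of this model (the banded blur matrix and noise level below are illustrative assumptions): direct inversion of the blur matrix amplifies even a tiny noise term.

```python
import numpy as np

n = 64
kernel = np.array([0.25, 0.5, 0.25])            # assumed example blur kernel
# Build the blur matrix H (banded Toeplitz): row i averages pixel i
# with its immediate neighbors.
H = np.zeros((n, n))
for i in range(n):
    for j, k in zip(range(i - 1, i + 2), kernel):
        if 0 <= j < n:
            H[i, j] = k

rng = np.random.default_rng(0)
f = rng.random(n)                               # "original" image as a vector
noise = 1e-3 * rng.standard_normal(n)           # small additive noise term
g = H @ f + noise                               # observation: g = Hf + n

f_naive = np.linalg.solve(H, g)                 # naive inversion of the blur
amplification = np.linalg.norm(f_naive - f) / np.linalg.norm(noise)
```

Since $f_{\text{naive}} - f = H^{-1}n$, the reconstruction error is the noise filtered through $H^{-1}$, whose large condition number stretches the noise far beyond its original size; this is why regularization (here, stopping the iterations early) is needed rather than direct inversion.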