A major question in the minds of most astronomical users was discussed by Ivan King at this workshop - the issue of validation of the output from deconvolution. Astronomers are deeply concerned not only with morphology but also with photometric integrity. It is of limited use to distinguish thousands of stars accurately in a restored cluster image if their positions and intensities are subject to unknown biases produced by the reconstruction procedure. Several presentations here have dealt with photometric issues for stellar images. For regimes useful in analyzing galaxy structure, existing techniques already yield sufficient accuracy to learn interesting and otherwise inaccessible parameters.

Fig. 1 shows an example of cross-validation, using data taken from Keel & Windhorst (1993). The image is of the faint radio galaxy 53W044 at redshift . The surface-brightness profile along the local major axis has been evaluated in three quite different ways. Direct measurements were made on deconvolved images, one made using the STSDAS implementation of the Lucy-Richardson algorithm run to , and the other using a hybrid CLEAN with noise model. The CLEAN result was constrained to have a resolution equal to the Nyquist limit for sampling in the WFC, and the Lucy-Richardson result is demonstrably very close to the same effective resolution. For galaxies with smooth, symmetric structure, modelling allows an independent comparison. In this case, a family of models with various bulge:disk intensity ratios and scale lengths for the bulge and disk components was generated and convolved with the empirical PSF (from a foreground star projected 5 arcseconds away). The model from this family that fit best in a sense was numerically realized and measured in the same way as the deconvolved images. All three surface-brightness profiles track one another to within a few per cent at all radii, over a dynamic range exceeding 100:1 (5 magnitudes).
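The Lucy-Richardson scheme mentioned above can be sketched as follows. This is a generic form of the multiplicative iteration in Python/NumPy, not the STSDAS implementation used for the measurements; the iteration count, starting estimate, and small-denominator guard are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Generic Lucy-Richardson deconvolution (multiplicative update).

    Illustrative sketch only: n_iter is a placeholder, and a real
    application would also choose a convergence criterion.
    """
    psf = psf / psf.sum()            # normalize PSF to unit flux
    psf_mirror = psf[::-1, ::-1]     # 180-degree rotated PSF for the update
    estimate = np.full_like(image, image.mean())  # flat starting estimate
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # guard against divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

In practice the number of iterations controls the trade-off between effective resolution and noise amplification, which is why the text quotes the iteration count alongside the result.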
The three techniques could scarcely have more different basic principles and sensitivity to details of application, so this agreement is an empirical demonstration that HST, in its aberrated state, can deliver reliable surface-brightness profiles for galaxies at substantial redshifts; that is, we can do reliable quantitative studies of galaxy morphology.
The two-dimensional galaxy modelling used in this validation exercise introduces a crucial issue. When is deconvolution the analysis tool of choice, and, just as important, when is it the wrong tool? Frequently, the science we seek is not directly in the image, however crisp. In imaging a star cluster, the desired result may be a Hertzsprung-Russell diagram; in observing the inner regions of an elliptical galaxy, the goal may be a high-resolution azimuthally averaged intensity profile. Problems of this kind, where the astrophysical background gives us strong a priori knowledge of the size or shape of the target objects, lend themselves to modelling and direct comparison with the data in the observed domain, thus avoiding any possible biases from deconvolution. Specific tools for these applications exist, both for stars (Stetson 1994) and galaxies (Keel & Windhorst 1993, Windhorst et al. 1993b, Ratnatunga 1994). However, there are many problems for which the universe is not so cooperative: the targets may have unknown or complex structure. In these cases, deconvolution is the only way to retrieve a faithful representation of the object's properties. It may also be the only way to match measurements made with various instruments. As a concrete example, if we observe some galaxy with a small aperture and the FOS, the only way to find what fraction of the galaxy's total light was included, given an HST image, is to deconvolve the image and reconvolve it with a PSF appropriate to the wavelength range observed spectroscopically. Finally, producing faithful images for public release is not a trivial need, for use in education and in letting the taxpayers know they're still getting something for their money.
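The deconvolve-and-reconvolve matching described above might look roughly like the following. The function name, the circular-aperture geometry, and the array conventions are all illustrative assumptions, not any specific pipeline; the deconvolved image would come from a procedure such as those discussed earlier.

```python
import numpy as np
from scipy.signal import fftconvolve

def aperture_fraction(deconvolved, psf, center, radius):
    """Reconvolve a deconvolved image with a PSF appropriate to the
    spectroscopic wavelength range, then return the fraction of the
    total light falling inside a circular aperture (a stand-in for a
    spectrograph entrance aperture).

    Illustrative sketch: real apertures are not circles of whole-pixel
    radius, and sub-pixel centering would matter in practice.
    """
    smoothed = fftconvolve(deconvolved, psf / psf.sum(), mode="same")
    yy, xx = np.indices(smoothed.shape)
    cy, cx = center
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return smoothed[mask].sum() / smoothed.sum()
```

The point of the reconvolution step is that the aperture fraction must be evaluated at the resolution of the spectroscopic observation, not at the (sharper) resolution of the deconvolved image.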
In seeing how different investigators use or avoid deconvolution algorithms, and in their choices among algorithms for various problems, one can see some philosophical differences in how the results are approached and in what deconvolution is supposed to do. Some users see it as an operation on the data, approximating what would have been seen with a more favorable PSF, including noise and artifacts so that the quality of the processed data is immediately apparent. Other users want to go straight to ``truth'', and want to see a best-estimate model for the object with the noise suppressed as irrelevant. Algorithms are certainly available to do both, and to cover cases in between as well.
Before discussing specific science results, it is useful to consider various regimes of deconvolution, where its practice may be affected by angular extent (through the choice of instrument, mode, sampling, and the extent of PSF changes) and signal-to-noise ratio (controlling the available dynamic range). For example, a planetary image with a good PSF needs only Fourier reconstruction (Cunningham & Anthony 1993; note that their reconstruction tests used only a space-invariant PSF and are thus applicable only to small objects), while the noise properties of such a reconstruction are completely unacceptable for faint galaxies. As size increases and S/N drops, the degree of difficulty (measured in computational expense, investigator's time, trouble with subtle instrumental effects, and perhaps number of false starts) grows. Contrast issues are also important in determining which classes of problems are amenable to deconvolution. Fig. 2 shows where some well-known observations fall in these terms. The faint-galaxy results I will stress fall across the bottom of this diagram, which makes them relatively forgiving subjects for restoration; their dynamic range is limited by signal-to-noise ratio rather than by PSF errors or sampling.
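The Fourier reconstruction appropriate to the high-S/N, well-sampled regime can be illustrated with a simple Wiener-regularized inverse filter. This is a generic sketch under the same assumption the text notes for Cunningham & Anthony: a space-invariant PSF. The scalar noise term k is an illustrative placeholder for a real noise power spectrum, which is exactly the quantity that makes this approach fail for faint, noisy galaxy images.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """Fourier-domain reconstruction via a Wiener-regularized inverse filter.

    Illustrative sketch: the constant k stands in for a frequency-dependent
    noise-to-signal power ratio. The output is translated by the PSF's
    center offset, since the PSF is not re-centered on the array origin.
    """
    H = np.fft.fft2(psf / psf.sum(), s=image.shape)  # PSF transfer function
    G = np.fft.fft2(image)
    # Suppress frequencies where the PSF response is weak relative to k,
    # instead of dividing by near-zero values as a naive inverse filter would.
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

At high S/N one can choose k small and recover nearly the full sampled resolution; at faint-galaxy noise levels the same filter either amplifies noise catastrophically (small k) or gives back little resolution (large k), which is why iterative, noise-aware methods are used there instead.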