With corrective optics in place, some consider the very thought of deconvolving HST images to be akin to insanity. However, even if HST had absolutely perfect optics, deconvolution would often help increase image sharpness and contrast. For an imperfect telescope, it is even more useful.
The PSF produced by the HST+WFPC2 combination has a sharp core containing about 70% of the light within a 0.1 arcsec radius. The remaining 30%, however, is spread into the PSF wings and diffraction spikes. This reduces contrast and can fill in dark gaps (see "Convolving object models with model PSFs"). Deconvolution can help sort out what is due to the PSF and what is real.
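As a toy illustration of how PSF wings fill in dark gaps (this is a 1-D sketch with a made-up core-plus-wings PSF, not an actual WFPC2 PSF), the following NumPy snippet convolves two point sources separated by a truly dark gap with a PSF that puts 70% of its light in a sharp core and 30% in broad wings:

```python
import numpy as np

def gaussian(x, sigma):
    """Discrete Gaussian normalized to unit total flux."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

x = np.arange(-25, 26)

# Toy PSF: ~70% of the light in a sharp core, ~30% in broad wings.
psf = 0.7 * gaussian(x, 1.5) + 0.3 * gaussian(x, 8.0)

# Scene: two point sources with a genuinely dark gap between them.
scene = np.zeros(101)
scene[40] = scene[60] = 1.0

observed = np.convolve(scene, psf, mode="same")

# The pixels between the sources are no longer dark: the PSF wings
# have scattered light into the gap, reducing contrast.
print(observed[46:55].min())
```

Every pixel in the gap now has nonzero flux even though the true scene is black there, which is exactly the effect deconvolution tries to undo.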
Another problem with WFPC2 is undersampling. Some of the "lost" resolution can be regained by using subsampled PSFs with deconvolution programs that support them. The mem package in STSDAS is pretty good and directly supports subsampled PSFs, which Tiny Tim can produce (those who know me know that this is high praise, considering how I feel about IRAF and such).
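To show the general idea of PSF-driven iterative restoration, here is a minimal 1-D Richardson-Lucy sketch in NumPy. Note the hedge: mem is a maximum-entropy method, which is a different algorithm from Richardson-Lucy; this toy (with a made-up core-plus-wings PSF and noiseless data) only illustrates how a known PSF is used to iteratively sharpen a blurred image.

```python
import numpy as np

def gaussian(x, sigma):
    """Discrete Gaussian normalized to unit total flux."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def richardson_lucy(observed, psf, n_iter=100):
    # Richardson-Lucy iteration: repeatedly compare the observed data
    # to the current estimate convolved with the PSF, and use the
    # ratio (correlated with the flipped PSF) to correct the estimate.
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        model = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(model, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

x = np.arange(-25, 26)
psf = 0.7 * gaussian(x, 1.5) + 0.3 * gaussian(x, 8.0)

# Two close point sources, blurred by the PSF.
scene = np.zeros(101)
scene[45] = scene[55] = 1.0
observed = np.convolve(scene, psf, mode="same")

restored = richardson_lucy(observed, psf, n_iter=100)
```

After the iterations, the flux scattered into the gap is pushed back onto the point sources: the restored peaks are higher and the space between them is darker than in the blurred image.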
The image below of Supernova 1987A was taken with the WFPC2 Planetary Camera (0.0455"/pixel) in filter F656N. A subsampled Tiny Tim PSF was used with the mem program. Notice that in the original image the interior of the central ring appears to be filled with nebulosity. That, however, is simply the result of the PSF distributing light from the ring into the center. Deconvolution removes this scattered light, providing a more accurate representation of the object itself. Also notice that it cleanly separates the stars from the rings. The outer rings appear thinner, too; they are actually unresolved, so their apparent thickness in the original image is just the width of the PSF.
On a related subject, it is important to include PSF effects when modelling objects and comparing the models with observed images. An example is given in "Convolving object models with model PSFs", including a case involving the above supernova images.