The "Drizzle" algorithm was invented by Andrew Fruchter and Richard Hook to combine multiple stacks of dithered, geometrically distorted, undersampled image frames. It is being used to produce the combined output images of the HDF project from the flat-fielded stacked frames, cleaned of cosmic rays and hot pixels, which correspond to the different dither positions.
The drizzling code is a relatively well developed prototype, but it has not yet been fully documented. This note is intended to fill in the explanatory gap until a more complete description of the algorithm becomes available.
The algorithm is conceptually simple: with each value in an input frame a "footprint" or "drop" is associated. The footprint may be the "characteristic function" of the geometrical area covered by the pixel (1 inside and 0 outside the pixel area), or it may be the characteristic function of a square area smaller than the pixel size. This "drop" area is then subdivided into nx × ny subpixel areas called "droplets". The idea behind this metaphor is that the pixel acts as a "light bucket".
The center of each droplet is computed and its value is averaged into the value found in the output pixel. A weighted average is used, exploiting a weight associated with the input pixel value and a weight associated with the output pixel. The weight of the output pixel is then updated before the next droplet is considered.
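The drop-and-accumulate step described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual HDF code: the function name `drizzle_frame`, its arguments, and the nearest-pixel droplet placement are simplifications chosen here (the real code distributes droplets with a full geometric distortion mapping).

```python
import numpy as np

def drizzle_frame(data, wmap, out, out_wt, mapping, pixfrac=0.6, nsub=4):
    """Drizzle one input frame onto a running output image (illustrative sketch).

    data, wmap  : input pixel values and their weights
    out, out_wt : output image and weight map, updated in place
    mapping     : function (x, y) -> (xo, yo) giving output coordinates
    pixfrac     : linear size of the "drop" as a fraction of the input pixel
    nsub        : each drop is subdivided into nsub x nsub "droplets"
    """
    ny, nx = data.shape
    # Droplet-center offsets within the drop, relative to the pixel center.
    step = pixfrac / nsub
    offs = (np.arange(nsub) - (nsub - 1) / 2.0) * step
    for j in range(ny):
        for i in range(nx):
            v = data[j, i]
            w = wmap[j, i] / nsub**2          # weight shared equally by droplets
            for dy in offs:
                for dx in offs:
                    xo, yo = mapping(i + dx, j + dy)   # geometric transform
                    io, jo = int(round(xo)), int(round(yo))
                    if 0 <= jo < out.shape[0] and 0 <= io < out.shape[1]:
                        tot = out_wt[jo, io] + w
                        # Weighted average of old output value and droplet value,
                        # then update the output weight, as described in the text.
                        out[jo, io] = (out[jo, io] * out_wt[jo, io] + v * w) / tot
                        out_wt[jo, io] = tot
```

Note that the nested loops make the logic explicit at the cost of speed; a production implementation would vectorize or work in a compiled language.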
In order to preserve resolution, typically a drop size is used which is smaller than the input pixel size. Because individual values are pushed from the input frame to the output frame, an output pixel may receive no droplets when drizzling an individual input frame. In the above figure, the top left output pixel represents such a situation. These "zero valleys" are not a concern as long as there are enough input frames with different sub-pixel dither positions to fill in the image. It is, in the end, the placement of the dither positions which determines how small the drop size can be made.

The observed variations could have been reduced by using a larger final pixel size. However, doing so would have meant either using a different pixel scale for the PC than for the WF or suffering even further degradation of the PC resolution. While the high-frequency scatter will affect attempts to fit the PSF, it does not significantly affect aperture photometry. Remember that 5 pixels in the drizzled image correspond to 2 pixels in the original WF images. By a radius of 2 WF pixels, an aperture frequently favored by those doing HST photometry in crowded regions, the scatter is essentially gone. Furthermore, the scatter is only seen where the variations in surface brightness of an object are so rapid that they are undersampled in the original image. Users for whom the scatter is a problem may wish to convolve the image with a narrow Gaussian, and thus trade a bit of resolution for a smoother PSF. In the future we may investigate developing other image recombination schemes which may trade a modest amount of signal-to-noise for better high-frequency behavior. It is unlikely, however, that any interpolation scheme that does not substantially broaden the PSF will entirely remove the scatter seen above, as it is largely a feature caused by the undersampling of the image with large square pixels.
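The narrow-Gaussian smoothing suggested above can be done with any convolution routine. As one possible sketch (the function `gaussian_smooth` and its defaults are invented here, not part of any HDF software), a separable convolution in pure NumPy:

```python
import numpy as np

def gaussian_smooth(img, sigma=0.5, radius=3):
    """Convolve an image with a normalized Gaussian, one axis at a time.

    sigma  : Gaussian width in (output) pixels; keep it narrow to limit
             the loss of resolution traded for a smoother PSF
    radius : kernel half-width in pixels
    """
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()                       # normalize so flux is preserved
    # Separable convolution: rows first, then columns.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out
```

Because the kernel is normalized, aperture sums well inside the frame are unchanged; only the peak is lowered and spread.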
In order to obtain optimal signal-to-noise on faint sources, photometry and detection should be performed using the weight maps that are produced by the drizzling algorithm. In the case of the HDF, the standard deviation of the pixel weights is 20-25% of the average weight value, and so ignoring the weight files produces a penalty in final S/N of about 10%. These weights were created assuming that the image is background limited. The S/N of bright sources is determined by Poisson noise, and therefore the proper weighting for these objects would have been the exposure time of each image, not the background variance. However, as S/N is not a problem for the bright sources, we have provided only the one set of weight maps. A Fortran program is being provided with the version 2 release which can be installed as a task in IRAF and which performs a weighted block average of the images using the weight images.
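The weighted block average that the program performs can be outlined as follows. This is an illustrative sketch only (the function name `weighted_block_average` is hypothetical; the actual task is the Fortran/IRAF program mentioned above): each output value is the weight-map-weighted mean of the pixels in a b × b block.

```python
import numpy as np

def weighted_block_average(img, wmap, b):
    """Block-average img by an integer factor b, weighting by wmap.

    Assumes the image dimensions are divisible by b. Returns the
    averaged image and the summed weight of each block.
    """
    ny, nx = img.shape
    # Sum value*weight and weight over each b x b block.
    num = (img * wmap).reshape(ny // b, b, nx // b, b).sum(axis=(1, 3))
    wt = wmap.reshape(ny // b, b, nx // b, b).sum(axis=(1, 3))
    # Guard against blocks with zero total weight.
    avg = np.where(wt > 0, num / np.where(wt > 0, wt, 1), 0.0)
    return avg, wt
```

Pixels with zero weight (no input droplets) simply drop out of the average, which is exactly why ignoring the weight maps costs S/N.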
In addition to producing variable subtle changes in the PSF, drizzling also causes the noise in one pixel to be correlated with the noise in an adjacent pixel. This is because a single pixel from an input image typically affects the value in several output pixels, even though most of the power often goes into a single output pixel. One can approximate the noise characteristics of the final HDF drizzled images by convolving an image which has independent noise in each pixel with the matrix
.06 .18 .06
.18 .93 .18
.06 .18 .06
The expected standard deviation of an NxN box of pixels (where N is much larger than 1) is therefore 1/N times the sum of the above matrix elements, or about 1.9/N. Note, however, that this only simulates the correlation of pixels with equal intrinsic noise. To completely simulate the HDF images one would need also to allow pixel-to-pixel variations in the standard deviation of the noise.
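The quoted figure can be checked numerically: convolve unit-variance white noise with the matrix above and measure the scatter of the NxN box means. A rough simulation (edge handling is simplified here by wrap-around, which is harmless for this purpose):

```python
import numpy as np

# The correlation kernel quoted in the text; its elements sum to 1.89.
kernel = np.array([[0.06, 0.18, 0.06],
                   [0.18, 0.93, 0.18],
                   [0.06, 0.18, 0.06]])

rng = np.random.default_rng(1)
white = rng.standard_normal((1024, 1024))   # independent noise per pixel

# 3x3 convolution built from shifted copies (edges wrap around).
corr = np.zeros_like(white)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        corr += kernel[dy + 1, dx + 1] * np.roll(np.roll(white, dy, axis=0),
                                                 dx, axis=1)

# Means of N x N boxes; the text predicts a scatter of about 1.89/N.
N = 16
means = corr.reshape(1024 // N, N, 1024 // N, N).mean(axis=(1, 3))
```

For N = 16 the measured standard deviation of `means` comes out close to 1.89/16, slightly below it because boxes of finite size lose some power across their borders.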
In some sense the present situation with the Drizzle algorithm can be likened to the situation that developers and users found themselves in when the Clean restoration method was invented. The Clean algorithm is intuitively very appealing, but its properties were not easy to characterize, and it took some time to develop an understanding of them. One of the reasons that the Drizzle process is somewhat difficult to understand is that it attempts so much in one step: geometric distortion correction and statistically optimum co-addition of multiple dithered frames.
Copyright © 1997 The Association of Universities for Research in Astronomy, Inc. All Rights Reserved.