Next: Results Up: Approximate Shift-Invariance by Warping Previous: Introduction

Sample Shifting

Moving the sample locations to achieve the desired warping results in an irregularly sampled image plane. These irregularly spaced samples are then brought back to regularly spaced positions, thereby warping the image. Typical restoration techniques can then be performed on these regularly spaced samples. A method for finding these sample positions optimally is given below. Only the one-dimensional case is considered here; the two-dimensional case is left for future work.
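The pipeline just described can be sketched as follows. The warp function, grid size, and test signal here are all hypothetical stand-ins, since the paper's own mapping is not reproduced in this section:

```python
import numpy as np

# Hypothetical warp: a smooth, strictly monotonic displacement of positions.
def warp(x):
    return x + 0.05 * np.sin(2 * np.pi * x)

n = 64
x_regular = np.linspace(0.0, 1.0, n)     # regular sample grid
x_warped = warp(x_regular)               # irregular (warped) sample locations
samples = np.cos(4 * np.pi * x_warped)   # image values taken at warped positions

# Bring the irregular samples back to regular positions by interpolation,
# which warps the image; restoration would then proceed on this regular grid.
resampled = np.interp(x_regular, x_warped, samples)
```

Note that `np.interp` requires the warped positions to be increasing, which the monotonic warp guarantees; this foreshadows the non-crossing constraint discussed later in the section.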

Error Metric

The linear mapping of Eq. 1 is put into discrete form, with uniformly spaced samples of the object and the image. In one dimension, Eq. 1 then becomes

where uniform sampling is often assumed. The sampled impulse response is a function of the sample positions. In particular, it describes the amount by which an object sample affects an image sample a given number of samples away from the corresponding image sample.

The sample locations are to be moved so that the resulting impulse response is approximately space-invariant. To this end, the invariant term of Eq. 3 can be found in sampled form for a one-dimensional mapping. Since the relevant quantities are not predetermined, their explicit dependence on the sample positions is unnecessary, and abbreviated notation is used. A space-invariant impulse response should not depend on the coordinate of the object; that is, it should be the same at every object sample. If this equality does not hold, then the difference between the impulse responses at different positions describes an error from the assumption of shift-invariance.

A metric which measures the total error over the field is

where the summation ranges are set by the limits of the image. The limits on the summations keep the sample points within the image boundaries. Allowing the sum to go outside the image boundaries includes some ``anchor'' samples outside the image, which tends to stabilize the solution trajectory when the metric is minimized. For example, if all of the sample positions may be moved, there is a global minimum of zero when all of the samples are co-located. If all of the sample positions which start off inside the image are forced to remain in the image, then keeping a fixed sample position outside the image removes this degenerate zero point. Note also that the inner summation index describes the distance from the ``center'' of the space-variant impulse response. The limits on this index could therefore be constrained for PSFs with support more compact than that of the entire image (which is usually the case). However, it is the actual distance between sample locations, not just the index offset, which needs to be within the support of the PSF. Since the solution will necessarily change the sample locations, the limits on the index will depend upon the resulting sample locations.
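Since the original equation is not reproduced in this conversion, the following is only a plausible sketch of such a metric: a sum of squared differences between the sampled impulse responses at every pair of object positions, which vanishes exactly when the response is space-invariant. The response model and its parameters are hypothetical:

```python
import numpy as np

# Hypothetical sampled impulse response: h[i, k+K] is the effect of object
# sample i on the image sample k positions away (k runs over -K..K).
def impulse_response(x, K=3):
    n = len(x)
    h = np.zeros((n, 2 * K + 1))
    for i in range(n):
        width = 0.05 + 0.02 * x[i]          # space-variant width (toy model)
        for k in range(-K, K + 1):
            h[i, k + K] = np.exp(-(k * width) ** 2)
    return h / h.sum(axis=1, keepdims=True)

# One plausible form of the metric: squared differences between the impulse
# responses at every pair of sample positions (zero iff space-invariant).
def shift_invariance_error(x):
    h = impulse_response(x)
    diffs = h[:, None, :] - h[None, :, :]
    return float(np.sum(diffs ** 2))

x = np.linspace(0.0, 1.0, 16)
E = shift_invariance_error(x)   # positive: the toy response is space-variant
```

Because the toy response narrows with position, the responses at different positions differ and the metric is strictly positive; moving the samples to equalize the responses would drive it toward zero.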

The goal here is to minimize this error metric. Doing so provides a sampling scheme which yields the most space-invariant impulse response.


For minimization, the gradient elements are needed.

An application of the chain rule gives

When expressed in the form of Eq. 6, the values inside the summations are symmetric with respect to the two summation indices, so there are redundant terms when forming the product in the summations of the gradient. Accounting for these redundant terms with a scaling factor of two, and rearranging,

The lower term sifts through its sum in a straightforward manner. The upper term can sift through either of the two sums. Since one sum is already sifted in the lower equation, and that sum will typically have more terms than the other (making sifting through it computationally beneficial), the same sum will be sifted through in the upper term as well.

The sifting for the remaining term is still not straightforward, since the limits of the inner sum are a function of the outer summation index. The bookkeeping is as follows:

  1. The limits on the outer summation index are set by the image boundaries, and over this full range the inner index attains its maximal range. For any fixed value of the outer index, the limits on the inner index fall within this maximal range.
  2. For each admissible value of the inner index, the possible values of the outer index run over a corresponding shifted range.
  3. The net permissible range is the range for which both of the above conditions hold, i.e., the intersection of the above ranges.
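The intersection step in item 3 can be sketched as follows; the concrete limits are hypothetical, since the actual ranges depend on the equations not reproduced here:

```python
# Sketch of the range bookkeeping: the permissible indices are the
# intersection of the full range (condition 1) with the shifted range
# implied by condition 2. The limits used below are hypothetical.
def intersect(lo1, hi1, lo2, hi2):
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    return (lo, hi) if lo <= hi else None   # None: empty intersection

# e.g., indices 0..9 intersected with a shifted window 4..14:
print(intersect(0, 9, 4, 14))  # -> (4, 9)
```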

With these new limits, the substitution completes the sifting process. If compact support of the impulse response is also included, then the permissible values of the summation index are

The gradient elements then become


The minimization used is a version of the steepest descent technique for finding zeros of functions. At each iteration, every sample position is incremented along the negative gradient, with a step length scaled by the current value of the error metric. This expression describes the shortest Euclidean line between the current point and the zero plane, along the hyperplane which is tangent to the error surface. This technique was used because of its simplicity: errors in the software were easily tracked because each step could be followed. This method is designed to find zeros, not minima, so it was adapted by decreasing the size (but maintaining the direction) of sample position jumps which would create an error larger than that of the previous iteration. When minima were reached, the magnitude of the sample position jumps was cut drastically. If this was suspected to be a local minimum, then a large increment was permitted, effectively searching a new region of the solution space (similar to simulated annealing).
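Consistent with the geometric description above (the shortest step, along the tangent hyperplane, to the plane where the function is zero), the update is a Newton-like step proportional to the function value divided by the squared gradient magnitude. The following is a minimal sketch with the step-shrinking adaptation; a toy quadratic stands in for the actual error metric:

```python
import numpy as np

def newton_like_descent(E, gradE, x0, iters=50):
    """Step toward the zero of the tangent hyperplane: dx = -E * g / |g|^2.
    If a step would increase E, shrink it (keeping its direction) and retry."""
    x = np.asarray(x0, dtype=float)
    e = E(x)
    for _ in range(iters):
        g = gradE(x)
        gg = float(g @ g)
        if gg < 1e-30:                   # gradient vanished: stop
            break
        dx = -e * g / gg
        scale = 1.0
        while E(x + scale * dx) > e and scale > 1e-12:
            scale *= 0.5                 # decrease size, maintain direction
        x = x + scale * dx
        e = E(x)
    return x, e

# Toy quadratic with a zero-valued minimum at x = 1 (stand-in for the metric):
x_min, e_min = newton_like_descent(lambda x: float(np.sum((x - 1.0) ** 2)),
                                   lambda x: 2.0 * (x - 1.0),
                                   np.zeros(3))
```

For this quadratic each step halves the distance to the minimum, so the iteration converges; the annealing-like escape from local minima described in the text is omitted from this sketch.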

Another constraint added to this procedure was that sample points were not to cross over one another: if the initial sample positions form an increasing sequence, then the sample positions at every subsequent iteration must also form an increasing sequence. If any pair of sample locations approached each other too fast in any given iteration step, then one or the other of the corresponding increments, as appropriate, was decreased so that the ordering was preserved.
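A simplified sketch of this constraint follows. For brevity it shrinks the whole proposed step rather than the individual increments described in the text, and the minimum allowed gap is a hypothetical parameter:

```python
import numpy as np

# Shrink a proposed update until the sample positions remain a strictly
# increasing sequence (no pair of samples crosses or collides).
def constrain_step(x, dx, min_gap=1e-6):
    scale = 1.0
    new = x + dx
    while np.any(np.diff(new) < min_gap) and scale > 1e-12:
        scale *= 0.5                     # reduce magnitude, keep direction
        new = x + scale * dx
    return new

x = np.array([0.0, 1.0, 2.0])
dx = np.array([0.9, -0.9, 0.0])          # full step would make samples cross
x_new = constrain_step(x, dx)            # -> [0.45, 0.55, 2.0]
```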

Fri Apr 15 19:07:34 EDT 1994