
Bayesian Paradigm

The philosophy within statistics known as Bayesian inference has a very long history. It is distinguished from the perhaps more familiar classical statistical approach by its use of prior information about the images being studied.

Bayesian methods start with a prior distribution, a probability distribution $p(f)$ over images $f$; it is here that we incorporate information on the expected structure within an image. It is also necessary to specify $p(g \mid f)$, the probability distribution of the observed image $g$ if $f$ were the true image. The Bayesian paradigm dictates that inference about the true $f$ should be based on the posterior distribution $p(f \mid g)$ given by
\[
p(f \mid g) = \frac{p(g \mid f)\, p(f)}{p(g)} \propto p(g \mid f)\, p(f).
\]

To show just one restoration, it is common to choose the mode of this posterior distribution, that is, to display the image $\hat{f}$ which satisfies
\[
\hat{f} = \arg\max_{f}\, p(f \mid g) = \arg\max_{f}\, p(g \mid f)\, p(f).
\]

This is known as the maximum a posteriori (MAP) estimate of $f$.

Equivalently, since $p(g)$ does not depend on $f$ and the logarithm is monotone, we can choose $\hat{f}$ to minimize
\[
-\log p(g \mid f) - \log p(f). \tag{1}
\]
The first term in (1) is the negative of the familiar log likelihood of $f$. The second term can be thought of as a roughness penalty, as images which do not correspond to our prior conceptions will be assigned a small $p(f)$ and hence a large penalty $-\log p(f)$.
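To make the two terms concrete, consider one illustrative instance of (1); the linear blur operator $H$, noise variance $\sigma^2$, penalty matrix $C$, and weight $\lambda$ below are assumptions chosen for this sketch, not specifications taken from the text. If the observation is $g = Hf$ plus Gaussian noise of variance $\sigma^2$, and the prior is Gaussian, so that
\[
p(g \mid f) \propto \exp\Bigl\{-\frac{1}{2\sigma^2}\,\|g - Hf\|^2\Bigr\},
\qquad
p(f) \propto \exp\bigl\{-\lambda\, \|Cf\|^2\bigr\},
\]
then (1) becomes, up to an additive constant,
\[
\frac{1}{2\sigma^2}\,\|g - Hf\|^2 + \lambda\, \|Cf\|^2,
\]
a quadratic data-fit term plus a quadratic roughness penalty.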

In statistical physics it is common to define probabilities by the energy $U(f)$ of a system, so that
\[
p(f) \propto \exp\{-\beta\, U(f)\},
\]
where $\beta = 1/kT$, $T$ being the temperature and $k$ Boltzmann's constant. If we adopt this notation, the MAP estimate $\hat{f}$ minimizes
\[
-\log p(g \mid f) + \beta\, U(f).
\]
We can recognize this as a Lagrangian form, so its solution is equivalent, for suitable constants $c$ and $c'$, to solving
\[
\min_{f}\; U(f) \quad \text{subject to} \quad -\log p(g \mid f) \le c,
\]
and to
\[
\min_{f}\; -\log p(g \mid f) \quad \text{subject to} \quad U(f) \le c',
\]
which correspond to the regularization approach to image restoration.
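As a minimal numerical sketch of computing a MAP estimate by minimizing $-\log p(g \mid f) + \beta\, U(f)$, the following Python fragment assumes a linear blur supplied as a pair of callables H and HT (the operator and its transpose), Gaussian noise, and a quadratic smoothness energy $U(f)$ equal to half the sum of squared neighbouring-pixel differences. All of these choices, and every name in the code, are illustrative assumptions rather than the method of the text.

    import numpy as np

    def map_restore(g, H, HT, beta, n_iter=200, step=0.1):
        """Gradient-descent sketch of a MAP estimate: minimizes
        0.5*||g - H(f)||^2 + beta*U(f), where U(f) is half the sum of
        squared differences between neighbouring pixels."""
        f = g.copy()                      # initial guess: the observed image
        for _ in range(n_iter):
            grad_data = HT(H(f) - g)      # gradient of the data-fit term
            # Gradient of U(f) is minus a discrete Laplacian (periodic edges).
            lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
            f -= step * (grad_data - beta * lap)
        return f

    # Usage sketch: denoising only (H taken as the identity operator).
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[24:40, 24:40] = 1.0
    g = truth + 0.2 * rng.standard_normal(truth.shape)
    identity = lambda x: x
    f_hat = map_restore(g, identity, identity, beta=1.0)
    print("RMS error:", np.sqrt(np.mean((f_hat - truth) ** 2)))

Larger values of beta weight the roughness penalty more heavily, giving smoother restorations; the step size must be kept small enough for the descent to remain stable.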

Many other deconvolution principles fit into one of these forms; in particular, as we will see later, the Richardson-Lucy (R-L) restoration method. Maximum entropy methods also fit into this framework (Molina et al. 1992a).
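For example, one way maximum entropy methods can be placed in this framework (a sketch only; see Molina et al. 1992a for the precise formulation) is to take the energy to be the negative image entropy, $U(f) = \sum_i f_i \log f_i$, so that minimizing $-\log p(g \mid f) + \beta\, U(f)$ trades fidelity to the data against the entropy of the restored image.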

Having described the Bayesian paradigm, let us move on to examine the two ingredients of this paradigm: the observation process, $p(g \mid f)$, and the prior model or image model, $p(f)$.


