
M.Sc. 4.4: Digital Image Processing, Dr. R. S. Hegadi

5. IMAGE RESTORATION
As in image enhancement, the ultimate goal of restoration techniques is to improve an
image in some predefined sense. Although there are areas of overlap, image enhancement
is largely a subjective process, while image restoration is for the most part an objective
process. Restoration attempts to reconstruct or recover an image that has been degraded
by using a priori knowledge of the degradation phenomenon. Thus restoration techniques
are oriented toward modeling the degradation and applying the inverse process in order to
recover the original image.
This approach usually involves formulating a criterion of goodness that will yield an
optimal estimate of the desired result. By contrast, enhancement techniques basically are
heuristic procedures designed to manipulate an image in order to take advantage of the
psychophysical aspects of the human visual system. For example, contrast stretching is
considered an enhancement technique because it is based primarily on the pleasing
aspects it might present to the viewer, whereas removal of image blur by applying a de-
blurring function is considered a restoration technique.
The material developed in this chapter is strictly introductory. We consider the
restoration problem only from the point where a degraded, digital image is given: thus we
consider topics dealing with sensor, digitizer, and display degradations only superficially.
These subjects, although of importance in the overall treatment of image restoration
applications, are beyond the scope of the present discussion.
As in previous Chapters, some restoration techniques are best formulated in the spatial
domain, while others are better suited for the frequency domain. For example, spatial
processing is applicable when the only degradation is additive noise. On the other hand,
degradations such as image blur are difficult to approach in the spatial domain using
small masks. In this case, frequency domain filters based on various criteria of optimality
are the approaches of choice. These filters also take into account the presence of noise. A
restoration filter that solves a given application in the frequency domain often is used as
the basis for generating a digital filter that will be more suitable for routine operation
using a hardware/firmware implementation.

Fig. 5-1: A model of the image degradation/restoration process

5.1 A Model of the Image Degradation/Restoration Process


As Fig. 5-1 shows, the degradation process is modeled in this chapter as a degradation
function that, together with an additive noise term, operates on an input image f(x, y) to
produce a degraded image g(x, y). Given g(x, y), some knowledge about the degradation
function H, and some knowledge about the additive noise term η(x, y), the objective of


restoration is to obtain an estimate fˆ(x, y) of the original image. We want the estimate to
be as close as possible to the original input image and, in general, the more we know about
H and η, the closer fˆ(x, y) will be to f(x, y). The approach used throughout most of this
chapter is based on various types of image restoration filters.
If H is a linear, position-invariant process, then the degraded image is given in the spatial
domain by
g(x, y) = h(x, y) * f(x, y) + η(x, y) (5-1)
where h(x, y) is the spatial representation of the degradation function, and the symbol "*"
indicates convolution. We know that convolution in the spatial domain is equal to
multiplication in the frequency domain, so we may write the model in Eq. (5-1) in an
equivalent frequency domain representation:
G(u, v) = H(u, v)F(u, v) + N(u, v) (5-2)
where the terms in capital letters are the Fourier transforms of the corresponding terms in
Eq. (5-1). These two equations are the bases for most of the material in this chapter.
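As a quick illustration, the sketch below simulates the model of Eqs. (5-1) and (5-2) in Python with NumPy. The Gaussian PSF, its width, the noise level, and the random test image are illustrative assumptions, not values from the text; convolution is implemented circularly through the FFT so that Eq. (5-2) holds exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 256
f = rng.random((M, N))                       # stand-in for the original image f(x, y)

# Build a centered Gaussian PSF h(x, y) (an assumed degradation) and normalize it
x, y = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2, indexing="ij")
h = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
h /= h.sum()

eta = rng.normal(0.0, 0.05, (M, N))          # additive noise term eta(x, y)

# Frequency-domain model G = H F + N (Eq. 5-2); ifftshift puts the PSF origin at (0, 0)
H = np.fft.fft2(np.fft.ifftshift(h))
G = H * np.fft.fft2(f) + np.fft.fft2(eta)
g = np.real(np.fft.ifft2(G))                 # degraded image g(x, y) of Eq. (5-1)
```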

5.2 Noise Models


The principal sources of noise in digital images arise during image acquisition
(digitization) and/or transmission. The performance of imaging sensors is affected by a
variety of factors, such as environmental conditions during image acquisition, and by the
quality of the sensing elements themselves. For instance, in acquiring images with a CCD
camera, light levels and sensor temperature are major factors affecting the amount of noise
in the resulting image. Images are corrupted during transmission principally due to
interference in the channel used for transmission. For example, an image transmitted using
a wireless network might be corrupted as a result of lightning or other atmospheric
disturbance.

Spatial and Frequency Properties of Noise


Relevant to our discussion are parameters that define the spatial characteristics of noise,
and whether the noise is correlated with the image. Frequency properties refer to the
frequency content of noise in the Fourier sense (i.e. as opposed to the electromagnetic
spectrum). For example, when the Fourier spectrum of noise is constant, the noise usually
is called white noise. This terminology is a carryover from the physical properties of white
light, which contains nearly all frequencies in the visible spectrum in equal proportions.
From the discussion in Chapter 4, it is not difficult to show that the Fourier spectrum of a
function containing all frequencies in equal proportions is a constant.

Some Important Noise Probability Density Functions


Based on the assumptions in the previous section, the spatial noise descriptor with which
we shall be concerned is the statistical behavior of the gray-level values in the noise
component of the model in Fig. 5-1. These may be considered random variables,
characterized by a probability density function (PDF). The following are among the most
common PDFs found in image processing applications.


Fig. 5-2: Some important probability density functions

Gaussian noise: Because of its mathematical tractability in both the spatial and frequency
domains, Gaussian (also called normal) noise models are used frequently in practice. In
fact, this tractability is so convenient that it often results in Gaussian models being used in
situations in which they are marginally applicable at best. The PDF of a Gaussian random
variable, z, is given by
p(z) = (1/(√(2π) σ)) e^(−(z−µ)²/2σ²)        (5-3)
where z represents gray level, µ is the mean or average value of z, and σ is its standard
deviation. The standard deviation squared, σ2, is called the variance of z. A plot of this
function is shown in Fig. 5-2(a). When z is described by Eq. (5-3) approximately 70% of
its values will be in the range [(µ-σ), (µ + σ)], and about 95% will be in the range [(µ - 2σ),
(µ + 2σ)].

Rayleigh noise: The PDF of Rayleigh noise is given by


p(z) = (2/b)(z − a) e^(−(z−a)²/b)   for z ≥ a
p(z) = 0                            for z < a        (5-4)
The mean and variance of this density are given by
µ = a + √(πb/4)
and
σ² = b(4 − π)/4
Figure 5-2(b) shows a plot of the Rayleigh density. Note the displacement from the origin
and the fact that the basic shape of this density is skewed to the right. The Rayleigh density
can be quite useful for approximating skewed histograms.

Exponential noise: The PDF of exponential noise is given by


p(z) = a e^(−az)   for z ≥ 0
p(z) = 0           for z < 0        (5-5)
where a > 0. The mean and variance of this density function are
µ = 1/a
and
σ² = 1/a²
Note that this PDF is a special case of the Erlang PDF, with b = 1. Figure 5-2(d) shows a
plot of this density function.


Uniform noise: The PDF of uniform noise is given by


 1
 for a ≤ z ≤ b
p( z ) =  b − a (5-6)
0 otherwise
The mean of this density function is given by
µ = (a + b)/2
and its variance by
σ² = (b − a)²/12
Figure 5-2(e) shows a plot of the uniform density.

Impulse (salt-and-pepper) noise: The PDF of (bipolar) impulse noise is given by


p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    otherwise
If b>a, gray-level b will appear as a light dot in the image. Conversely, level a will appear
like a dark dot. If either Pa or Pb is zero, the impulse noise is called unipolar. If neither
probability is zero, and especially if they are approximately equal, impulse noise values
will resemble salt-and-pepper granules randomly distributed over the image. For this
reason, bipolar impulse noise also is called salt-and-pepper noise. Shot and spike noise also
are terms used to refer to this type of noise. In our discussion we will use the terms impulse
or salt-and-pepper noise interchangeably.
Noise impulses can be negative or positive. Scaling usually is part of the image digitizing
process. Because impulse corruption usually is large compared with the strength of the
image signal, impulse noise generally is digitized as extreme (pure black or white) values
in an image. Thus, the assumption usually is that a and b are "saturated" values, in the
sense that they are equal to the minimum and maximum allowed values in the digitized
image. As a result, negative impulses appear as black (pepper) points in an image. For the
same reason, positive impulses appear as white (salt) points. For an 8-bit image this means that
a = 0 (black) and b = 255 (white). Figure 5.2(f) shows the PDF of impulse noise.
As a group, the preceding PDFs provide useful tools for modeling a broad range of noise
corruption situations found in practice. For example, Gaussian noise arises in an image due
to factors such as electronic circuit noise and sensor noise due to poor illumination and/or
high temperature. The Rayleigh density is helpful in characterizing noise phenomena in
range imaging. The exponential and gamma densities find application in laser imaging.
Impulse noise is found in situations where quick transients, such as faulty switching, take
place during imaging, as mentioned in the previous paragraph. The uniform density is
perhaps the least descriptive of practical situations. However, the uniform density is quite
useful as the basis for numerous random number generators that are used in simulations.
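For simulation purposes, the noise models above can be sampled directly with NumPy's random generators. The sketch below is a minimal illustration: all parameter values are assumed for the example, and note that NumPy's Rayleigh generator uses a scale parametrization that differs from the (a, b) form of Eq. (5-4).

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (256, 256)

gaussian    = rng.normal(loc=0.0, scale=20.0, size=shape)      # Eq. (5-3)
rayleigh    = 2.0 + rng.rayleigh(scale=10.0, size=shape)       # shifted by a = 2
exponential = rng.exponential(scale=1.0 / 0.05, size=shape)    # a = 0.05, mean 1/a
uniform     = rng.uniform(low=-10.0, high=10.0, size=shape)    # Eq. (5-6)

def add_impulse(img, pa=0.05, pb=0.05, a=0, b=255):
    """Bipolar impulse noise: probability pa of pepper (a), pb of salt (b)."""
    out = img.copy()
    r = rng.random(img.shape)
    out[r < pa] = a                      # pepper (dark) impulses
    out[(r >= pa) & (r < pa + pb)] = b   # salt (bright) impulses
    return out
```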


Periodic Noise: Periodic noise in an image arises typically from electrical or


electromechanical interference during image acquisition. Periodic noise can be reduced
significantly via frequency domain filtering.

5.3 Restoration in the Presence of Noise Only-Spatial Filtering


When the only degradation present in an image is noise, Eqs. (5-1) and (5-2) become
g(x,y) = f(x, y) + η(x, y)
and
G(u, v) = F(u, v) + N(u, v)
The noise terms are unknown, so subtracting them from g(x, y) or G(u, v) is not a realistic
option. In the case of periodic noise, it usually is possible to estimate N(u, v) from the
spectrum of G(u, v). In this case N(u, v) can be subtracted from G(u, v) to obtain an
estimate of the original image. In general, however, this type of knowledge is the exception
rather than the rule.

Mean Filters
In this section we discuss briefly the noise-reduction spatial filters introduced in chapter 3
and develop several other filters whose performance is in many cases superior to the filters
discussed in that section.

Arithmetic mean filter: This is the simplest of the mean filters. Let Sxy represent the set of
coordinates in a rectangular subimage window of size m × n, centered at point (x, y). The
arithmetic mean filtering process computes the average value of the corrupted image g(x,
y) in the area defined by Sxy. The value of the restored image fˆ at any point (x, y) is
simply the arithmetic mean computed using the pixels in the region defined by Sxy. In other
words
fˆ(x, y) = (1/mn) ∑_{(s,t)∈Sxy} g(s, t)

This operation can be implemented using a convolution mask in which all coefficients
have value 1/mn. A mean filter simply smoothes local variations in an image. Noise is
reduced as a result of blurring.
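A minimal sketch of the arithmetic mean filter follows, assuming a NumPy array g; since the filter is just an m × n box average, scipy.ndimage.uniform_filter implements it directly (the reflecting boundary mode is an implementation assumption).

```python
import numpy as np
from scipy import ndimage

def arithmetic_mean(g, m=3, n=3):
    # Box average over an m x n window, i.e. (1/mn) * sum of g(s, t) over Sxy
    return ndimage.uniform_filter(g.astype(float), size=(m, n), mode="reflect")
```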

Geometric mean filter: An image restored using a geometric mean filter is given by the
expression
fˆ(x, y) = [ ∏_{(s,t)∈Sxy} g(s, t) ]^(1/mn)
Here, each restored pixel is given by the product of the pixels in the subimage window,
raised to the power 1/mn. A geometric mean filter achieves smoothing comparable to the
arithmetic mean filter, but it tends to lose less image detail in the process.


Harmonic mean filter: The harmonic mean filtering operation is given by the expression
fˆ(x, y) = mn / ∑_{(s,t)∈Sxy} (1/g(s, t))
The harmonic mean filter works well for salt noise, but fails for pepper noise. It does well
also with other types of noise like Gaussian noise.
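The geometric and harmonic means can be computed with the same box-average machinery, as sketched below; the small epsilon guarding the logarithm and the reciprocal against zero-valued pixels is an implementation assumption.

```python
import numpy as np
from scipy import ndimage

def geometric_mean(g, m=3, n=3, eps=1e-8):
    # Product of the window pixels to the power 1/mn == exp(mean of the logs)
    log_mean = ndimage.uniform_filter(np.log(g.astype(float) + eps), size=(m, n))
    return np.exp(log_mean)

def harmonic_mean(g, m=3, n=3, eps=1e-8):
    # mn / sum(1/g) == 1 / mean(1/g) over the window
    recip_mean = ndimage.uniform_filter(1.0 / (g.astype(float) + eps), size=(m, n))
    return 1.0 / recip_mean
```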

Order-Statistics Filters
Order-statistics filters were introduced in previous chapter. We now expand the discussion
in that section and introduce some additional order-statistics filters. Order-statistics filters
are spatial filters whose response is based on ordering (ranking) the pixels contained in the
image area encompassed by the filter. The response of the filter at any point is determined
by the ranking result.

Median filter: The best-known order-statistics filter is the median filter, which, as its
name implies, replaces the value of a pixel by the median of the gray levels in the
neighborhood of that pixel:
fˆ(x, y) = median_{(s,t)∈Sxy} {g(s, t)}

The original value of the pixel is included in the computation of the median. Median filters
are quite popular because, for certain types of random noise, they provide excellent noise-
reduction capabilities, with considerably less blurring than linear smoothing filters of
similar size. Median filters are particularly effective in the presence of both bipolar and
unipolar impulse noise. Computation of the median and implementation of this filter are
discussed in detail in chapter 3.

Max and min filters: Although the median filter is by far the order-statistics filter most
used in image processing, it is by no means the only one. The median represents the 50th
percentile of a ranked set of numbers, but the reader will recall from basic statistics that
ranking lends itself to many other possibilities. For example, using the 100th percentile
results in the so-called max filter, given by
fˆ(x, y) = max_{(s,t)∈Sxy} {g(s, t)}

This filter is useful for finding the brightest points in an image. Also, because pepper noise
has very low values, it is reduced by this filter as a result of the max selection process in
the subimage area Sxy. The 0th percentile filter is the min filter:
fˆ(x, y) = min_{(s,t)∈Sxy} {g(s, t)}

This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as
a result of the min operation.

Midpoint filter: The midpoint filter simply computes the midpoint between the maximum
and minimum values in the area encompassed by the filter:


fˆ(x, y) = ½ [ max_{(s,t)∈Sxy} {g(s, t)} + min_{(s,t)∈Sxy} {g(s, t)} ]
Note that this filter combines order statistics and averaging. This filter works best for
randomly distributed noise, like Gaussian or uniform noise.
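The order-statistics filters above map directly onto the rank filters in scipy.ndimage, as sketched below; the 3 × 3 window is an illustrative choice.

```python
from scipy import ndimage

def median_filter(g, size=3):      # 50th percentile: good for impulse noise
    return ndimage.median_filter(g, size=size)

def max_filter(g, size=3):         # 100th percentile: reduces pepper noise
    return ndimage.maximum_filter(g, size=size)

def min_filter(g, size=3):         # 0th percentile: reduces salt noise
    return ndimage.minimum_filter(g, size=size)

def midpoint_filter(g, size=3):    # average of the local max and min
    g = g.astype(float)
    return 0.5 * (ndimage.maximum_filter(g, size=size)
                  + ndimage.minimum_filter(g, size=size))
```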

Adaptive Filters
Once selected, the filters discussed thus far are applied to an image without regard for how
image characteristics vary from one point to another. In this section we take a look at two
simple adaptive filters whose behavior changes based on statistical characteristics of the
image inside the filter region defined by the m × n rectangular window Sxy. Adaptive filters
are capable of performance superior to that of the filters discussed thus far. The price paid
for improved filtering power is an increase in filter complexity. Keep in mind that we still
are dealing with the case in which the degraded image is equal to the original image plus
noise. No other types of degradations are being considered yet.

Adaptive, local noise reduction filter: The simplest statistical measures of a random
variable are its mean and variance. These are reasonable parameters on which to base an
adaptive filter because they are quantities closely related to the appearance of an image.
The mean gives a measure of average gray level in the region over which the mean is
computed, and the variance gives a measure of average contrast in that region. Our filter is
to operate on a local region, Sxy. The response of the filter at any point (x, y) on which the
region is centered is to be based on four quantities: (a) g(x, y), the value of the noisy image
at (x, y); (b) ση², the variance of the noise corrupting f(x, y) to form g(x, y); (c) mL, the local
mean of the pixels in Sxy; and (d) σL², the local variance of the pixels in Sxy. We want the
behavior of the filter to be as follows:
1. If ση² is zero, the filter should return simply the value of g(x, y). This is the
trivial, zero-noise case in which g(x, y) is equal to f(x, y).
2. If the local variance is high relative to ση², the filter should return a value close
to g(x, y). A high local variance typically is associated with edges, and these
should be preserved.
3. If the two variances are equal, we want the filter to return the arithmetic mean
value of the pixels in Sxy. This condition occurs when the local area has the
same properties as the overall image, and local noise is to be reduced simply by
averaging.

An adaptive expression for obtaining fˆ(x, y) based on these assumptions may be written


as
fˆ(x, y) = g(x, y) − (ση²/σL²) [g(x, y) − mL]


The only quantity that needs to be known or estimated is the variance of the overall noise,
ση². The other parameters are computed from the pixels in Sxy at each location (x, y) on
which the filter window is centered. A tacit assumption in the above equation is that ση² ≤ σL².
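A sketch of this adaptive filter is given below; the 7 × 7 window is an illustrative choice, the noise variance is assumed to be supplied by the caller, and the variance ratio is clipped at 1 to enforce the tacit assumption ση² ≤ σL².

```python
import numpy as np
from scipy import ndimage

def adaptive_local_filter(g, noise_var, size=7):
    g = g.astype(float)
    local_mean = ndimage.uniform_filter(g, size=size)                # mL
    local_sqr_mean = ndimage.uniform_filter(g * g, size=size)
    local_var = local_sqr_mean - local_mean**2                       # sigma_L^2
    # Clip the ratio at 1 so the filter never overshoots where noise_var > local_var
    ratio = np.minimum(noise_var / np.maximum(local_var, 1e-8), 1.0)
    return g - ratio * (g - local_mean)
```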

5.4 Periodic Noise Reduction by Frequency Domain Filtering


In Chapter 4 we discussed lowpass and highpass frequency domain filters as fundamental
tools for image enhancement. In this section we discuss the more specialized bandreject,
bandpass, and notch filters as tools for periodic noise reduction or removal.

Bandreject Filters
Bandreject filters remove or attenuate a band of frequencies about the origin of the Fourier
transform. An ideal bandreject filter is given by the expression
H(u, v) = 1   if D(u, v) < D0 − W/2
H(u, v) = 0   if D0 − W/2 ≤ D(u, v) ≤ D0 + W/2
H(u, v) = 1   if D(u, v) > D0 + W/2
where D(u, v) is the distance from the origin of the centered frequency rectangle, W is the
width of the band, and D0 is its radial center. Similarly, a Butterworth bandreject filter of
order n is given by the expression
H(u, v) = 1 / [ 1 + ( D(u, v)W / (D²(u, v) − D0²) )^(2n) ]
and a Gaussian bandreject filter is given by
H(u, v) = 1 − e^( −½ [ (D²(u, v) − D0²) / (D(u, v)W) ]² )
Figure below shows perspective plots of these three filters.

Fig. 5-3: From left to right, perspective plots of ideal, Butterworth (of order 1), and
Gaussian bandreject filters.
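The sketch below builds all three bandreject transfer functions on a centered frequency grid; the grid size, D0, and W are assumed inputs, and small epsilons guard the divisions at D(u, v) = 0 and D(u, v) = D0.

```python
import numpy as np

def bandreject_filters(M, N, d0, w, order=1):
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D = np.sqrt(u[:, None]**2 + v[None, :]**2)   # distance from the center

    ideal = np.where(np.abs(D - d0) <= w / 2, 0.0, 1.0)
    butter = 1.0 / (1.0 + ((D * w) / (D**2 - d0**2 + 1e-8))**(2 * order))
    gauss = 1.0 - np.exp(-0.5 * ((D**2 - d0**2) / (D * w + 1e-8))**2)
    return ideal, butter, gauss
```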

Bandpass Filters
A bandpass filter performs the opposite operation of a bandreject filter. In the previous chapter
we showed how a highpass filter can be obtained from a corresponding lowpass filter.


Similarly, the transfer function Hbp(u, v) of a bandpass filter is obtained from a
corresponding bandreject filter with transfer function Hbr(u, v) by using the equation
Hbp(u, v) = 1 − Hbr(u, v)

Notch Filters
A notch filter rejects (or passes) frequencies in predefined neighborhoods about a center
frequency. Figure 5-4 shows 3-D plots of ideal, Butterworth, and Gaussian notch (reject)
filters. Due to the symmetry of the Fourier transform, notch filters must appear in
symmetric pairs about the origin in order to obtain meaningful results. The one exception
to this rule is if the notch filter is located at the origin, in which case it appears by itself.
Although we show only one pair for illustrative purposes, the number of pairs of notch
filters that can be implemented is arbitrary. The shape of the notch areas also can be
arbitrary (e.g. rectangular).

Fig. 5-4: Perspective plots of (a) ideal, (b) Butterworth (of order 2), and (c) Gaussian notch
(reject) filters

The transfer function of an ideal notch reject filter of radius D0, with centers at (u0, v0) and,
by symmetry, at (−u0, −v0) is
H(u, v) = 0   if D1(u, v) ≤ D0 or D2(u, v) ≤ D0
H(u, v) = 1   otherwise
where
D1(u, v) = [ (u − M/2 − u0)² + (v − N/2 − v0)² ]^(1/2)
and
D2(u, v) = [ (u − M/2 + u0)² + (v − N/2 + v0)² ]^(1/2)
As usual, the assumption is that the center of the frequency rectangle has been shifted to
the point (M/2, N/2). Therefore, the values of (u0, v0) are with respect to the shifted
center.
The transfer function of a Butterworth notch reject filter of order n is given by
H(u, v) = 1 / [ 1 + ( D0² / (D1(u, v) D2(u, v)) )^n ]

A Gaussian notch reject filter has the form
H(u, v) = 1 − e^( −½ [ D1(u, v) D2(u, v) / D0² ] )


It is interesting to note that these three filters become highpass filters if u0 = v0 = 0.


As shown in the previous section for bandpass filters, we can obtain notch filters that pass,
rather than suppress, the frequencies contained in the notch areas. Since these filters
perform exactly the opposite function as the notch reject filters, their transfer functions are
given by
Hnp(u, v) = 1 - Hnr(u, v)
where Hnp(u, v) is the transfer function of the notch pass filter corresponding to the notch
reject filter with transfer function Hnr(u, v).
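A minimal sketch of an ideal notch reject pair, together with its notch pass counterpart, is given below; the center (u0, v0), radius D0, and grid size are assumed inputs.

```python
import numpy as np

def ideal_notch_reject(M, N, u0, v0, d0):
    u = np.arange(M)[:, None]
    v = np.arange(N)[None, :]
    d1 = np.sqrt((u - M / 2 - u0)**2 + (v - N / 2 - v0)**2)
    d2 = np.sqrt((u - M / 2 + u0)**2 + (v - N / 2 + v0)**2)
    # Reject (set to 0) everything within d0 of either notch center
    return np.where((d1 <= d0) | (d2 <= d0), 0.0, 1.0)

def ideal_notch_pass(M, N, u0, v0, d0):
    return 1.0 - ideal_notch_reject(M, N, u0, v0, d0)   # Hnp = 1 - Hnr
```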

5.5 Linear, Position Invariant Degradations


The input-output relationship before the restoration stage is expressed as
g(x,y) = H[f(x,y)] + η(x,y)
For the moment, let us assume that η(x, y) = 0 so that g(x, y) = H[f(x, y)]. H is linear if
H[af1(x, y) + bf2(x, y)] = aH[f1(x, y)] + bH[f2(x, y)]
where a and b are scalars and f1(x, y) and f2(x, y) are any two input images.
If a = b = 1, the above equation becomes
H[f1(x, y) + f2(x, y)] = H[f1(x, y)] + H[f2(x, y)]
which is called the property of additivity. This property simply says that, if H is a linear
operator, the response to a sum of two inputs is equal to the sum of the two responses.
With f2(x, y) = 0, the equation becomes
H[af1(x, y)]= aH[f1(x, y)]
which is called the property of homogeneity. It says that the response to a constant multiple
of any input is equal to the response to that input multiplied by the same constant. Thus a
linear operator possesses both the property of additivity and the property of homogeneity.
An operator having the input-output relationship g(x, y) = H[f(x, y)] is said to be position
(or space) invariant if
H[f(x - α, y - β)] = g(x - α, y - β)
for any f(x, y) and any α and β. This definition indicates that the response at any point in
the image depends only on the value of the input at that point, not on its position.
With a slight (but equivalent) change in notation in the definition of the discrete impulse
function, f(x, y) can be expressed in terms of a continuous impulse function:
f(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) δ(x − α, y − β) dα dβ
This, in fact, is the definition using continuous variables of a unit impulse located at
coordinates (x, y).
Assume again for a moment that η(x, y) = 0.Then,
g(x, y) = H[f(x, y)] = H[ ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) δ(x − α, y − β) dα dβ ]
If H is a linear operator and we extend the additivity property to integrals, then


g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} H[f(α, β) δ(x − α, y − β)] dα dβ
Because f(α, β) is independent of x and y, and using the homogeneity property, it follows
that
g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) H[δ(x − α, y − β)] dα dβ
The term
h(x, α, y, β) = H[δ(x − α, y − β)]
is called the impulse response of H. In other words, if η(x, y) = 0, then h(x, α, y, β) is the
response of H to an impulse of strength 1 at coordinates (x, y). In optics, the impulse
becomes a point of light and h(x, α, y, β) is commonly referred to as the point spread
function (PSF). This name arises from the fact that all physical optical systems blur
(spread) a point of light to some degree, with the amount of blurring being determined by
the quality of the optical components. From the previous two equations:
g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) h(x, α, y, β) dα dβ
which is called the superposition (or Fredholm) integral of the first kind. This expression is
a fundamental result that is at the core of linear system theory. It states that if the response
of H to an impulse is known, the response to any input f(α, β) can be calculated by
means of the above equation. In other words, a linear system H is completely characterized
by its impulse response.
If H is position invariant, then,
H[δ(x − α, y − β)] = h(x − α, y − β)
hence
g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) h(x − α, y − β) dα dβ
This expression is called the convolution integral; it is the continuous-variable equivalent
of the discrete convolution expression. This integral tells us that knowing the impulse
response of a linear system allows us to compute its response, g, to any input f. The result
is simply the convolution of the impulse response and the input function.
In the presence of additive noise, the expression of the linear degradation model becomes
g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) h(x, α, y, β) dα dβ + η(x, y)
If H is position invariant, then
g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) h(x − α, y − β) dα dβ + η(x, y)

The values of the noise term η(x, y) are random, and are assumed to be independent of
position. Using the familiar notation for convolution, we can write
g(x, y) = h(x, y) * f(x, y) + η(x, y)
or, based on the convolution theorem, we can express it in the frequency domain as
G(u, v) = H(u, v)F(u, v) + N(u, v)
In summary, the preceding discussion indicates that a linear, spatially-invariant
degradation system with additive noise can be modeled in the spatial domain as the


convolution of the degradation (point spread) function with an image, followed by the
addition of noise. Based on the convolution theorem, the same process can be expressed in
the frequency-domain as the product of the transforms of the image and degradation,
followed by the addition of the transform of the noise. When working in the frequency
domain, we make use of an FFT algorithm. Keep in mind also the need for function
padding in the implementation of discrete Fourier transforms.
Many types of degradations can be approximated by linear, position-invariant processes.
The advantage of this approach is that the extensive tools of linear system theory then
become available for the solution of image restoration problems. Nonlinear and position-
dependent techniques, although more general (and usually more accurate), introduce
difficulties that often have no known solution or are very difficult to solve
computationally. This chapter focuses on linear, space-invariant restoration techniques.
Because degradations are modeled as being the result of convolution, and restoration seeks
to find filters that apply the process in reverse, the term image deconvolution is used
frequently to signify linear image restoration. Similarly, the filters used in the restoration
process often are called deconvolution filters.

5.6 Estimating the Degradation Function


There are three principal ways to estimate the degradation function for use in image
restoration: (1) observation, (2) experimentation, and (3) mathematical modeling. These
methods are discussed in the following sections. The process of restoring an image by
using a degradation function that has been estimated in some way sometimes is called
blind deconvolution, due to the fact that the true degradation function is seldom known
completely.

Estimation by Image Observation


Suppose that we are given a degraded image without any knowledge about the degradation
function H. One way to estimate this function is to gather information from the image
itself. For example, if the image is blurred, we can look at a small section of the image
containing simple structures, like part of an object and the background. In order to reduce
the effect of noise in our observation, we would look for areas of strong signal content.
Using sample gray levels of the object and background, we can construct an unblurred
image of the same size and characteristics as the observed subimage. Let the observed
subimage be denoted by gs(x, y), and let the constructed subimage (which in reality is our
estimate of the original image in that area) be denoted by fˆs(x, y). Then, assuming that the
effect of noise is negligible because of our choice of a strong-signal area, it follows that
Hs(u, v) = Gs(u, v) / Fˆs(u, v)
From the characteristics of this function we then deduce the complete function H(u, v) by
making use of the fact that we are assuming position invariance. For example, suppose that
a radial plot of Hs(u, v) turns out to have the shape of a Butterworth lowpass filter. We can
use that information to construct a function H(u, v) on a larger scale, but having the same
shape.


Estimation by Experimentation
If equipment similar to the equipment used to acquire the degraded image is available, it is
possible in principle to obtain an accurate estimate of the degradation. Images similar to
the degraded image can be acquired with various system settings until they are degraded as
closely as possible to the image we wish to restore. Then the idea is to obtain the impulse
response of the degradation by imaging an impulse (small dot of light) using the same
system settings. A linear, space-invariant system is described completely by its impulse
response. An impulse is simulated by a bright dot of light, as bright as possible to reduce
the effect of noise. Then, recalling that the Fourier transform of an impulse is a constant:
H(u, v) = G(u, v) / A
where, as before, G(u, v) is the Fourier transform of the observed image and A is a constant
describing the strength of the impulse. Figure below shows an example.

Fig: Degradation estimation by impulse characterization. (a) An impulse of light
(magnified), (b) imaged (degraded) impulse.

Estimation by Modeling
Degradation modeling has been used for many years because of the insight it affords into
the image restoration problem. In some cases, the model can even take into account
environmental conditions that cause degradations. For example, a degradation model
proposed by Hufnagel and Stanley [1964] is based on the physical characteristics of
atmospheric turbulence. This model has a familiar form:
H(u, v) = e^( −k(u² + v²)^(5/6) )
where k is a constant that depends on the nature of the turbulence. With the exception of
the 5/6 power on the exponent, this equation has the same form as the Gaussian lowpass
filter. In fact, the Gaussian LPF is used sometimes to model mild, uniform blurring.
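A sketch of this turbulence model on a centered frequency grid follows; the value k = 0.0025 is an assumed illustrative setting, not one prescribed by the text.

```python
import numpy as np

def turbulence_otf(M, N, k=0.0025):
    # H(u, v) = exp(-k (u^2 + v^2)^(5/6)) on a centered frequency grid
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    d2 = u[:, None]**2 + v[None, :]**2
    return np.exp(-k * d2**(5.0 / 6.0))
```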

5.7 Inverse Filtering


The material in this section is our first step in studying restoration of images degraded by a
degradation function H, which is given or obtained by a method such as those discussed in
the previous section. The simplest approach to restoration is direct inverse filtering, where
we compute an estimate, Fˆ(u, v), of the transform of the original image simply by dividing
the transform of the degraded image, G(u, v), by the degradation function:
Fˆ(u, v) = G(u, v) / H(u, v)


The divisions are between individual elements of the functions. Substituting the right side
of Eq. (5-2) for G(u, v) in the above equation yields
Fˆ(u, v) = F(u, v) + N(u, v) / H(u, v)
This is an interesting expression. It tells us that even if we know the degradation function
we cannot recover the undegraded image [the inverse Fourier transform of F(u, v)] exactly
because N(u, v) is a random function whose Fourier transform is not known. There is more
bad news. If the degradation has zero or very small values, then the ratio N(u, v)/H(u, v)
could easily dominate the estimate Fˆ(u, v).
One approach to get around the zero or small-value problem is to limit the filter
frequencies to values near the origin. We know that H(0,0) is equal to the average value of
h(x, y) and that this is usually the highest value of H(u, v) in the frequency domain. Thus,
by limiting the analysis to frequencies near the origin, we reduce the probability of
encountering zero values.
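One simple realization of this idea is sketched below: the inverse filter is applied only inside a radius r0 of the origin, and the degraded spectrum is passed through unchanged elsewhere. The cutoff radius, the centered-spectrum convention, and the small epsilon in the division are assumptions of the sketch.

```python
import numpy as np

def inverse_filter(G, H, r0):
    # G, H: centered (fftshifted) spectra of the same shape
    M, N = G.shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D = np.sqrt(u[:, None]**2 + v[None, :]**2)
    # Divide by H only near the origin; elsewhere keep G to avoid noise blow-up
    return np.where(D <= r0, G / (H + 1e-8), G)
```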

5.8 Minimum Mean Square Error (Wiener) Filtering


The inverse filtering approach discussed in the previous section makes no explicit
provision for handling noise. In this section we discuss an approach that incorporates both
the degradation function and statistical characteristics of noise into the restoration process.
The method is founded on considering images and noise as random processes, and the
objective is to find an estimate fˆ of the uncorrupted image f such that the mean square
error between them is minimized. This error measure is given by

e² = E{ (f − fˆ)² }

where E{.} is the expected value of the argument. It is assumed that the noise and the
image are uncorrelated; that one or the other has zero mean; and that the gray levels in the
estimate are a linear function of the levels in the degraded image. Based on these
conditions, the minimum of the error function in above Eq. is given in the frequency
domain by the expression
Fˆ(u, v) = [ H*(u, v) Sf(u, v) / ( Sf(u, v)|H(u, v)|² + Sη(u, v) ) ] G(u, v)
         = [ H*(u, v) / ( |H(u, v)|² + Sη(u, v)/Sf(u, v) ) ] G(u, v)
         = [ (1/H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + Sη(u, v)/Sf(u, v) ) ] G(u, v)

where we used the fact that the product of a complex quantity with its conjugate is equal to
the magnitude of the complex quantity squared. This result is known as the Wiener filter,
after N. Wiener, who first proposed the concept in 1942. The filter, which
consists of the terms inside the brackets, also is commonly referred to as the minimum
mean square error filter or the least square error filter. Note from the first line in the equation above


that the Wiener filter does not have the same problem as the inverse filter with zeros in the
degradation function, unless both H(u, v) and Sη(u, v) are zero for the same value(s) of u
and v. The terms in above Eq. are as follows:
H(u,v) = degradation function
H*(u,v) = complex conjugate of H(u, v)
|H(u, v)|² = H*(u, v) H(u, v)
Sη(u, v) = |N(u, v)|² = power spectrum of the noise
Sf(u, v) = |F(u, v)|² = power spectrum of the undegraded image.

As before, H(u, v) is the transform of the degradation function and G(u, v) is the transform
of the degraded image. The restored image in the spatial domain is given by the inverse
Fourier transform of the frequency-domain estimate Fˆ (u , v ) . Note that if the noise is zero,
then the noise power spectrum vanishes and the Wiener filter reduces to the inverse filter.
When we are dealing with spectrally white noise, the spectrum |N(u, v)|² is a constant,
which simplifies things considerably. However, the power spectrum of the undegraded
image seldom is known. An approach used frequently when these quantities are not known
or cannot be estimated is to approximate above Eq. by the expression
Fˆ(u, v) = [ (1/H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + K ) ] G(u, v)
where K is a specified constant.
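A sketch of this constant-K approximation follows, using the identity (1/H)|H|²/(|H|² + K) = H*/(|H|² + K); centered spectra and the value of K are assumptions of the sketch.

```python
import numpy as np

def wiener_filter(G, H, K=0.01):
    # (1/H) |H|^2 / (|H|^2 + K) simplifies to H* / (|H|^2 + K)
    F_hat = (np.conj(H) / (np.abs(H)**2 + K)) * G
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))
```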

5.9 Constrained Least Squares Filtering


The problem of having to know something about the degradation function H is common to
all methods discussed in this chapter. However, the Wiener filter presents an additional
difficulty: The power spectra of the undegraded image and noise must be known. We
showed in the previous section that it is possible to achieve excellent results using the
approximation given in Eq. above. However, a constant estimate of the ratio of the power
spectra is not always a suitable solution.
The method discussed in this section requires knowledge of only the mean and variance of
the noise. These parameters usually can be calculated from a given degraded image, so this
is an important advantage. Another difference is that the Wiener filter is based on
minimizing a statistical criterion and, as such, it is optimal in an average sense. The
algorithm presented in this section has the notable feature that it yields an optimal result
for each image to which it is applied. Of course, it is important to keep in mind that these
optimality criteria, while satisfying from a theoretical point of view, are not related to the
dynamics of visual perception. As a result, the choice of one algorithm over the other will
almost always be determined (at least partially) by the perceived visual quality of the
resulting images.
By using the definition of convolution, we can express the degradation Eq. in vector-
matrix form, as follows:
g = Hf + η
For example, suppose that g(x, y) is of size M × N. Then we can form the first N elements of
the vector g by using the image elements in the first row of g(x, y), the next N elements from


the second row, and so on. The resulting vector will have dimensions MN × 1. These are
also the dimensions of f and η, as these vectors are formed in the same manner. The matrix
H then has dimensions MN × MN.
It would be reasonable to come to the conclusion that the restoration problem can now be
reduced to simple matrix manipulations. Unfortunately, this is not the case. For instance,
suppose that we are working with images of medium size: say M = N = 512. Then the
vectors in the above equation would be of dimension 262,144 × 1, and matrix H would be of
dimensions 262,144 × 262,144. Manipulating vectors and matrices of these sizes is not a
trivial task. The problem is complicated further by the fact that H is highly sensitive to noise
(after the experiences we had with the effect of noise in the previous two sections, this
should not be a surprise). However, formulating the restoration problem in matrix form
does facilitate derivation of restoration techniques.
Although we do not fully derive the method of constrained least squares that we are about
to present, this method has its roots in a matrix formulation. Central to the method is the
issue of the sensitivity of H to noise. One way to alleviate the noise sensitivity problem is
to base optimality of restoration on a measure of smoothness, such as the second derivative
of an image (our old friend the Laplacian). To be meaningful, the restoration must be
constrained by the parameters of the problems at hand. Thus, what is desired is to find the
minimum of a criterion function, C, defined as

C = ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} [ ∇²f(x, y) ]²

subject to the constraint


||g − Hfˆ||² = ||η||²
where ||w||2 = wTw is the Euclidean vector norm and fˆ is the estimate of the undegraded
image.
The frequency domain solution to this optimization problem is given by the expression
Fˆ(u, v) = [ H*(u, v) / ( |H(u, v)|² + γ |P(u, v)|² ) ] G(u, v)
where γ is a parameter that must be adjusted so that the constraint in previous Eq. is
satisfied, and P(u, v) is the Fourier transform of the function
 0 -1 0 
p ( x, y ) = - 1 4 - 1
 0 - 1 0 
We recognize this function as the Laplacian operator. As noted earlier, it is important to
keep in mind that p(x, y), as well as all other relevant spatial domain functions, must be
properly padded with zeros prior to computing their Fourier transforms for use in above
Eq. Note that above Eq. reduces to inverse filtering if γ is zero.
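A sketch of the constrained least squares filter follows; it assumes unshifted FFT arrays G and H of the same shape, pads the 3 × 3 Laplacian into a full-size array before transforming it, and leaves the iterative adjustment of γ against the noise constraint to the caller.

```python
import numpy as np

def cls_filter(G, H, gamma):
    # G, H: unshifted FFTs of the degraded image and degradation function
    p = np.zeros(G.shape)
    p[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]   # zero-padded Laplacian
    P = np.fft.fft2(p)
    F_hat = (np.conj(H) / (np.abs(H)**2 + gamma * np.abs(P)**2)) * G
    return np.real(np.fft.ifft2(F_hat))
```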

5.10 Geometric Mean Filter


It is possible to generalize slightly the Wiener filter discussed in the previous section. The
generalization is in the form of the so-called geometric mean filter:


Fˆ(u, v) = [ H*(u, v) / |H(u, v)|² ]^α [ H*(u, v) / ( |H(u, v)|² + β (Sη(u, v)/Sf(u, v)) ) ]^(1−α) G(u, v)

with α and β being positive, real constants. The geometric mean filter consists of the two
expressions in brackets raised to the powers α and 1 − α, respectively. When α = 1 this filter
reduces to the inverse filter. With α = 0 the filter becomes the so-called parametric Wiener
filter, which reduces to the standard Wiener filter when β = 1. If α = ½, the filter becomes a
product of the two quantities raised to the same power, which is the definition of the
geometric mean, thus giving the filter its name. With β = 1, as α decreases below ½, the
filter performance will tend more toward the inverse filter. Similarly, when α increases
above ½, the filter will behave more like the Wiener filter. When α =½ and β = 1, the filter
also is commonly referred to as the spectrum equalization filter. Above Equation is quite
useful when implementing restoration filters because it really represents a family of filters
combined into a single expression.
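The family can be written down almost verbatim, as sketched below; the constant snr standing in for Sη/Sf, the small epsilon, and the use of NumPy's principal-branch complex powers are assumptions of the sketch.

```python
import numpy as np

def geometric_mean_restoration(G, H, alpha=0.5, beta=1.0, snr=0.01):
    H_conj = np.conj(H)
    H_mag2 = np.abs(H)**2 + 1e-8
    inverse_part = H_conj / H_mag2                 # alpha = 1 recovers this alone
    wiener_part = H_conj / (H_mag2 + beta * snr)   # alpha = 0, beta = 1: Wiener
    # Complex fractional powers use NumPy's principal branch
    return (inverse_part**alpha) * (wiener_part**(1 - alpha)) * G
```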

5.11 Geometric Transforms


We conclude this chapter with an introductory discussion on the use of geometric
transformations for image restoration. Unlike the techniques discussed so far, geometric
transformations modify the spatial relationships between pixels in an image. Geometric
transformations often are called rubber-sheet transformations, because they may be viewed
as the process of "printing" an image on a sheet of rubber and then stretching this sheet
according to some predefined set of rules.
In terms of digital image processing, a geometric transformation consists of two basic
operations: (1) a spatial transformation, which defines the "re-arrangement" of pixels on
the image plane; and (2) gray-level interpolation, which deals with the assignment of gray
levels to pixels in the spatially transformed image. We discuss in the following sections the
fundamental ideas underlying these concepts, and their use in the context of image
restoration.

Spatial Transformations
Suppose that an image f with pixel coordinates (x, y) undergoes geometric distortion to
produce an image g with coordinates (x', y'). This transformation may be expressed as
x' = r(x,y)
and
y' = s(x,y)
where r(x, y) and s(x, y) are the spatial transformations that produced the geometrically
distorted image g(x', y'). For example, if r(x, y) = x/2 and s(x, y) = y/2, the "distortion" is
simply a shrinking of the size of f(x, y) by one-half in both spatial directions.
If r(x, y) and s(x, y) were known analytically, recovering f(x, y) from the distorted image
g(x',y') by applying the transformations in reverse might be possible theoretically. In
practice, however, formulating a single set of analytical functions r(x, y) and s(x, y) that
describe the geometric distortion process over the entire image plane generally is not
possible. The method used most frequently to overcome this difficulty is to formulate the


spatial relocation of pixels by the use of tiepoints, which are a subset of pixels whose
location in the input (distorted) and output (corrected) images is known precisely.
Below Figure shows quadrilateral regions in a distorted and corresponding corrected
image. The vertices of the quadrilaterals are corresponding tiepoints.

FIGURE: Corresponding tiepoints in two image segments.

Suppose that the geometric distortion process within the quadrilateral regions is modeled
by a pair of bilinear equations so that
r(x, y) = c1x + c2y + c3xy + c4
and
s(x, y) = c5x + c6y + c7xy + c8
Then,
x' = c1x + c2y + c3xy + c4
and
y' = c5x + c6y + c7xy + c8.
Since there are a total of eight known tiepoints, these equations can be solved for the eight
coefficients ci, i = 1, 2, ..., 8. The coefficients constitute the geometric distortion model used
to transform all pixels within the quadrilateral region defined by the tiepoints used to
obtain the coefficients. In general, enough tiepoints are needed to generate a set of
quadrilaterals that cover the entire image, with each quadrilateral having its own set of
coefficients.
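Solving for the eight coefficients is a pair of 4 × 4 linear systems, one for r(x, y) and one for s(x, y), as sketched below; the tiepoint arrays are assumed to hold the four quadrilateral vertices in corresponding order.

```python
import numpy as np

def solve_bilinear(pts, pts_prime):
    # pts, pts_prime: 4x2 arrays of corresponding (x, y) and (x', y') vertices
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, x * y, np.ones(4)])   # rows: [x, y, xy, 1]
    c1_to_c4 = np.linalg.solve(A, pts_prime[:, 0])   # x' = c1 x + c2 y + c3 xy + c4
    c5_to_c8 = np.linalg.solve(A, pts_prime[:, 1])   # y' = c5 x + c6 y + c7 xy + c8
    return c1_to_c4, c5_to_c8
```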
Once we have the coefficients, the procedure used to generate the corrected (i.e., restored)
image is not difficult. If we want to find the value of the undistorted image at any point (x0,
y0), we simply need to know where in the distorted image f(x0, y0) was mapped. This we
find out by substituting (x0, y0) into the above equations to obtain the geometrically distorted
coordinates (x0', y0'). The value of the point in the undistorted image that was mapped to
(x0', y0') is g(x0', y0'). So we obtain the restored image value simply by letting fˆ(x0, y0) =
g(x0', y0'). For example, to generate fˆ(0, 0), we substitute (x, y) = (0, 0) into the above two
equations to obtain a pair of coordinates (x', y') from those equations. Then we let fˆ(0, 0) =
g(x', y'), where x' and y' are the coordinate values just obtained. Next, we substitute (x, y) =
(0, 1) into the above two equations, obtain another pair of values (x', y'), and let fˆ(0, 1) = g(x', y') for


those coordinate values. The procedure continues pixel by pixel and row by row until an
array whose size does not exceed the size of image g is obtained. A column (rather than a
row) scan would yield identical results. Also, a bookkeeping procedure is needed to keep
track of which quadrilaterals apply at a given pixel location in order to use the proper
coefficients.
Tiepoints are established by a number of different techniques, depending on the
application. For instance, some image generation systems have physical artifacts (such as
metallic points) embedded on the imaging sensor itself. These produce a known set of
points (called reseau marks) directly on the image as it is acquired. If the image is distorted
later by some other process (such as an image display or image reconstruction process),
then the image can be geometrically corrected using the technique just described.

Gray-Level Interpolation
The method discussed in the preceding section steps through integer values of the
coordinates (x, y) to yield the restored image fˆ (x, y). However, depending on the values
of the coefficients ci, above two Eqs. can yield noninteger values for x' and y'. Because the
distorted image g is digital, its pixel values are defined only at integer coordinates. Thus
using noninteger values for x' and y' causes a mapping into locations of g for which no gray
levels are defined. Inferring what the gray-level values at those locations should be, based
only on the pixel values at integer coordinate locations, then becomes necessary. The
technique used to accomplish this is called gray-level interpolation.

Figure: Gray-level interpolation based on the nearest neighbor concept.

The simplest scheme for gray-level interpolation is based on a nearest neighbor approach.
This method, also called zero-order interpolation, is illustrated in Fig. above. This figure
shows (1) the mapping of integer (x, y) coordinates into fractional coordinates (x', y') by
means of above Eqs.; (2) the selection of the closest integer coordinate neighbor to (x', y');
and (3) the assignment of the gray level of this nearest neighbor to the pixel located at (x,
y).
Although nearest neighbor interpolation is simple to implement, this method often has the
drawback of producing undesirable artifacts, such as distortion of straight edges in images
of high resolution. Smoother results can be obtained by using more sophisticated
techniques, such as cubic convolution interpolation, which fits a surface of the sin(z)/z type


through a much larger number of neighbors (say, 16) in order to obtain a smooth estimate
of the gray level at any desired point. Typical areas in which smoother approximations
generally are required include 3-D graphics and medical imaging. The price paid for
smoother approximations is additional computational burden. For general-purpose image
processing a bilinear interpolation approach that uses the gray levels of the four nearest
neighbors usually is adequate. This approach is straightforward. Because the gray level of
each of the four integral nearest neighbors of a nonintegral pair of coordinates (x', y') is
known, the gray-level value at these coordinates, denoted v(x', y'), can be interpolated from
the values of its neighbors by using the relationship
v(x', y') = ax' + by' + cx'y' + d
where the four coefficients are easily determined from the four equations in four unknowns
that can be written using the four known neighbors of (x', y'). When these coefficients have
been determined, v(x', y') is computed and this value is assigned to the location in f(x, y)
that yielded the spatial mapping into location (x', y'). It is easy to visualize this procedure
with the aid of Fig. above. The exception is that, instead of using the gray-level value of
the nearest neighbor to (x', y'), we actually interpolate a value at location (x', y') and use this
value for the gray-level assignment at (x, y).
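A sketch of this bilinear scheme follows: it forms the four equations from the integer neighbors of (x', y'), solves for a, b, c, d, and evaluates v(x', y'). Row-major (x = row, y = column) indexing of the image array is an assumption of the sketch.

```python
import numpy as np

def bilinear_interp(g, xp, yp):
    # Four integer neighbors of the fractional point (xp, yp)
    x0, y0 = int(np.floor(xp)), int(np.floor(yp))
    corners = [(x0, y0), (x0, y0 + 1), (x0 + 1, y0), (x0 + 1, y0 + 1)]
    A = np.array([[x, y, x * y, 1.0] for x, y in corners])
    rhs = np.array([g[x, y] for x, y in corners], dtype=float)
    a, b, c, d = np.linalg.solve(A, rhs)       # v = a x' + b y' + c x'y' + d
    return a * xp + b * yp + c * xp * yp + d
```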
