
Module 4 : Solving Linear Algebraic Equations

Section 5 : Iterative Solution Techniques



5 Iterative Solution Techniques
In this approach, we start with an initial guess solution, say x^{(0)}, and generate an improved solution estimate x^{(k+1)} from the previous approximation x^{(k)}. This approach is very effective for solving the linear systems that arise from differential equations, integral equations and related problems [Kelley]. Let the residue vector r be defined as
r^{(k)} = b - A x^{(k)}        --------(57)
i.e. r_i^{(k)} = b_i - \sum_{j=1}^{n} a_{ij} x_j^{(k)} for i = 1, 2, ..., n. The iteration sequence is terminated when some norm of the residue becomes sufficiently small, i.e.
|| r^{(k)} || < \varepsilon        --------(58)
where \varepsilon > 0 is an arbitrarily small tolerance. Another possible termination criterion can be
|| x^{(k+1)} - x^{(k)} || / || x^{(k+1)} || < \varepsilon        --------(59)
It may be noted that the latter condition is practically equivalent to the previous termination condition.
A simple way to form an iterative scheme is the Richardson iteration [Kelley]
x^{(k+1)} = ( I - A ) x^{(k)} + b        --------(60)
or Richardson iterations preconditioned with approximate inversion
x^{(k+1)} = ( I - M A ) x^{(k)} + M b        --------(61)
where matrix M is called an approximate inverse of A if || I - M A || < 1. A question that naturally arises
is 'will the iterations converge to the solution of Ax = b?'. In this section, to begin with, some well
known iterative schemes are presented. Their convergence analysis is presented next. In the derivations
that follow, it is implicitly assumed that the diagonal elements of matrix A are non-zero, i.e. a_{ii} \neq 0 for all i. If
this is not the case, a simple row exchange is often sufficient to satisfy this condition.
5.1 Iterative Algorithms
5.1.1 Jacobi Method
Suppose we have a guess solution, say x^{(k)} = [ x_1^{(k)}  x_2^{(k)}  ...  x_n^{(k)} ]^T, for x. To generate an improved estimate x^{(k+1)} starting from x^{(k)}, consider the first equation in the set
of equations Ax = b, i.e.,
a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1        --------(62)
Rearranging this equation, we can arrive at an iterative formula for computing x_1^{(k+1)}, as
x_1^{(k+1)} = \left[ b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - ... - a_{1n} x_n^{(k)} \right] / a_{11}        --------(63)
Similarly, using the second equation from Ax = b, we can derive
x_2^{(k+1)} = \left[ b_2 - a_{21} x_1^{(k)} - a_{23} x_3^{(k)} - ... - a_{2n} x_n^{(k)} \right] / a_{22}        --------(64)

Table 1: Algorithm for Jacobi Iterations
INITIALIZE: x^{(0)}, k = 0, \delta = large positive number, \varepsilon = tolerance
WHILE ( \delta > \varepsilon )
    FOR i = 1 TO n
        x_i^{(k+1)} = \left[ b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right] / a_{ii}
    END FOR
    \delta = || x^{(k+1)} - x^{(k)} || ,  k = k + 1
END WHILE
In general, using the i'th row of Ax = b, we can generate an improved guess for the i'th element of x as
follows
x_i^{(k+1)} = \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] / a_{ii}        --------(65)
The above equation can also be rearranged as follows
x_i^{(k+1)} = x_i^{(k)} + r_i^{(k)} / a_{ii}
where r_i^{(k)} is defined by equation (57). The algorithm for implementing the Jacobi iteration scheme is
summarized in Table 1.
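As a concrete illustration of equation (65) and the algorithm in Table 1, here is a minimal Jacobi iteration sketch in Python/NumPy; the function name, tolerance and the small test system are illustrative assumptions, not taken from the text.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Minimal Jacobi sketch: x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    D = np.diag(A)                    # diagonal elements a_ii
    R = A - np.diagflat(D)            # off-diagonal part of A
    for k in range(max_iter):
        x_new = (b - R @ x) / D       # all components updated from x^(k) simultaneously
        if np.linalg.norm(b - A @ x_new) < tol:   # residual-based termination, cf. eq. (58)
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Usage on a small, strictly diagonally dominant system (illustrative data)
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x, iters = jacobi(A, b, x0=[0.0, 0.0, 0.0])
print(x, iters)
```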
5.1.2 Gauss-Seidel Method
When matrix A is large, there is a practical difficulty with the Jacobi method: all
components of x^{(k)} must be stored in the computer memory (as separate variables) until the calculation of x^{(k+1)} is over.
The Gauss-Seidel method overcomes this difficulty by using each x_i^{(k+1)} immediately in the next equation
while computing x_{i+1}^{(k+1)}. This modification leads to the following set of equations
x_1^{(k+1)} = \left[ b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - ... - a_{1n} x_n^{(k)} \right] / a_{11}        --------(66)

Table 2: Algorithm for Gauss-Seidel Iterations
INITIALIZE: x^{(0)}, k = 0, \delta = large positive number, \varepsilon = tolerance
WHILE ( \delta > \varepsilon )
    FOR i = 1 TO n
        x_i^{(k+1)} = \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] / a_{ii}
    END FOR
    \delta = || x^{(k+1)} - x^{(k)} || ,  k = k + 1
END WHILE
x_2^{(k+1)} = \left[ b_2 - a_{21} x_1^{(k+1)} - a_{23} x_3^{(k)} - ... - a_{2n} x_n^{(k)} \right] / a_{22}        --------(67)
x_3^{(k+1)} = \left[ b_3 - a_{31} x_1^{(k+1)} - a_{32} x_2^{(k+1)} - a_{34} x_4^{(k)} - ... - a_{3n} x_n^{(k)} \right] / a_{33}        --------(68)
In general, for the i'th element of x, we have
x_i^{(k+1)} = \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] / a_{ii}
To simplify programming, the above equation can be rearranged as follows
x_i^{(k+1)} = x_i^{(k)} + \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)} \right]        --------(69)
where the term in brackets is the residual of the i'th equation evaluated using the most recently updated components of x.
The algorithm for implementing the Gauss-Seidel iteration scheme is summarized in Table 2.
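A minimal Gauss-Seidel sketch corresponding to Table 2 is given below; note how each x_i^{(k+1)} is used immediately within the same sweep. The function name, tolerance and termination test are illustrative assumptions.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=500):
    """Gauss-Seidel sweep: newly computed components are used immediately."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for k in range(max_iter):
        for i in range(n):
            # x[:i] already holds x^(k+1); x[i+1:] still holds x^(k)
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:   # residual-based termination, cf. eq. (58)
            return x, k + 1
    return x, max_iter
```

Compared with the Jacobi sketch above, only a single vector x needs to be kept in memory, which is exactly the practical advantage noted in the text.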
5.1.3 Relaxation Method
Suppose we have a starting value, say y, of a quantity and we wish to approach a target value, say y*,
by some method. Let an application of the method change the value from y to \tilde{y}. If \tilde{y} lies between y
and y*, i.e. it is closer to y* than y is, then we can approach y* faster by magnifying the change
( \tilde{y} - y ) [Strang]. In order to achieve this, we need to apply a magnifying factor \omega > 1 and get
Table 3: Algorithms for Over-Relaxation Iterations
INITIALIZE: x^{(0)}, k = 0, \omega, \delta = large positive number, \varepsilon = tolerance
WHILE ( \delta > \varepsilon )
    FOR i = 1 TO n
        z_i = \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] / a_{ii}
        x_i^{(k+1)} = ( 1 - \omega ) x_i^{(k)} + \omega z_i
    END FOR
    \delta = || x^{(k+1)} - x^{(k)} || ,  k = k + 1
END WHILE
y_{new} = y + \omega ( \tilde{y} - y ) = ( 1 - \omega ) y + \omega \tilde{y}        --------(70)
This amplification process is an extrapolation and is an example of over-relaxation. If the
intermediate value \tilde{y} tends to overshoot the target y*, then we may have to use \omega < 1; this is called
under-relaxation.
Application of over-relaxation to the Gauss-Seidel method leads to the following set of equations
x_i^{(k+1)} = ( 1 - \omega ) x_i^{(k)} + \omega z_i^{(k+1)}        --------(71)
where z_i^{(k+1)} are generated using the Gauss-Seidel method, i.e.,
z_i^{(k+1)} = \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] / a_{ii}        --------(72)
The steps in the implementation of the over-relaxation iteration scheme are summarized in Table 3.
It may be noted that \omega is a tuning parameter, which is chosen such that 0 < \omega < 2.
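A sketch of the over-relaxation scheme of equations (71)-(72) in Python/NumPy follows; the default \omega, the function name and the termination test are illustrative assumptions.

```python
import numpy as np

def sor(A, b, x0, omega=1.2, tol=1e-8, max_iter=500):
    """Successive over-relaxation: blend the Gauss-Seidel value with the old value."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for k in range(max_iter):
        for i in range(n):
            z = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]   # Gauss-Seidel value, cf. eq. (72)
            x[i] = (1.0 - omega) * x[i] + omega * z                          # relaxation step, cf. eq. (71)
        if np.linalg.norm(b - A @ x) < tol:
            return x, k + 1
    return x, max_iter
```

Setting omega = 1 recovers the Gauss-Seidel sweep, omega > 1 gives over-relaxation, and omega < 1 gives under-relaxation.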
5.2 Convergence Analysis of Iterative Methods [3, 2]
5.2.1 Vector-Matrix Representation
When Ax = b is to be solved iteratively, a question that naturally arises is 'under what conditions do the
iterations converge?'. The convergence analysis can be carried out if the above sets of iterative
equations are expressed in vector-matrix notation. For example, the iterative equations of the
Gauss-Seidel method can be arranged as follows
a_{11} x_1^{(k+1)} = b_1 - a_{12} x_2^{(k)} - ... - a_{1n} x_n^{(k)}
a_{21} x_1^{(k+1)} + a_{22} x_2^{(k+1)} = b_2 - a_{23} x_3^{(k)} - ... - a_{2n} x_n^{(k)}
.....
a_{n1} x_1^{(k+1)} + a_{n2} x_2^{(k+1)} + ... + a_{nn} x_n^{(k+1)} = b_n        (73)
Let D, L and U denote the diagonal, strictly lower triangular and strictly upper triangular parts of A, respectively, i.e.,
A = L + D + U        --------(74)
(The representation given by equation (74) should NOT be confused with the matrix factorization
A = LU.) Using these matrices, the Gauss-Seidel iteration can be expressed as follows
( L + D ) x^{(k+1)} = -U x^{(k)} + b        --------(75)
or
x^{(k+1)} = -( L + D )^{-1} U x^{(k)} + ( L + D )^{-1} b        --------(76)
Similarly, rearranging the iterative equations of the Jacobi method, we arrive at
x^{(k+1)} = -D^{-1} ( L + U ) x^{(k)} + D^{-1} b        --------(77)
and for the relaxation method we get
x^{(k+1)} = ( D + \omega L )^{-1} \left[ ( 1 - \omega ) D - \omega U \right] x^{(k)} + \omega ( D + \omega L )^{-1} b        --------(78)
Thus, in general, an iterative method can be developed by splitting matrix A. If A is expressed as
A = S - T        --------(79)
then equation Ax = b can be expressed as
S x = T x + b
Starting from a guess solution
x^{(0)} = \left[ x_1^{(0)}  x_2^{(0)}  ...  x_n^{(0)} \right]^T        --------(80)
we generate a sequence of approximate solutions as follows
x^{(k+1)} = S^{-1} T x^{(k)} + S^{-1} b ,  k = 0, 1, 2, ...        --------(81)
Requirements on the S and T matrices are as follows [3]: matrix A should be decomposed into A = S - T
such that
Matrix S should be easily invertible
The sequence { x^{(k)} } should converge to x*, where x* is the solution of Ax = b.
The popular iterative formulations correspond to the following choices of matrices S and T [3, 4]
Jacobi Method:
S = D ,  T = -( L + U )        --------(82)
Forward Gauss-Seidel Method
S = L + D ,  T = -U        --------(83)
Relaxation Method:
S = \frac{1}{\omega} ( D + \omega L ) ,  T = \frac{1}{\omega} \left[ ( 1 - \omega ) D - \omega U \right]        --------(84)
Backward Gauss-Seidel: In this case, the iteration begins the update of x with the n'th coordinate
rather than the first. This results in the following splitting of matrix A [4]
S = U + D ,  T = -L        --------(85)
In the Symmetric Gauss-Seidel approach, a forward Gauss-Seidel iteration is followed by a backward
Gauss-Seidel iteration.
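The splitting matrices and the associated iteration matrix S^{-1}T can be formed explicitly for a given A; the sketch below does this for the Jacobi, forward Gauss-Seidel and relaxation splittings. The function name and the use of a dense solve are illustrative choices suitable only for small matrices.

```python
import numpy as np

def iteration_matrices(A, omega=1.2):
    """Return the iteration matrix S^{-1}T for the Jacobi, Gauss-Seidel and relaxation splittings."""
    A = np.asarray(A, float)
    D = np.diag(np.diag(A))   # diagonal part of A
    L = np.tril(A, k=-1)      # strictly lower triangular part
    U = np.triu(A, k=1)       # strictly upper triangular part
    B_jacobi = -np.linalg.solve(D, L + U)                   # S = D, T = -(L + U)
    B_gs = -np.linalg.solve(L + D, U)                       # S = L + D, T = -U
    B_sor = np.linalg.solve(D + omega * L,
                            (1.0 - omega) * D - omega * U)  # relaxation splitting, cf. eq. (78)
    return B_jacobi, B_gs, B_sor
```

These iteration matrices are exactly the objects whose spectral radius governs convergence in the analysis that follows.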
5.2.2 Iterative Scheme as a Linear Difference Equation
In order to solve the equation Ax = b, we have formulated an iterative scheme
x^{(k+1)} = S^{-1} T x^{(k)} + S^{-1} b        --------(86)
Let the true solution x* of Ax = b satisfy
x* = S^{-1} T x* + S^{-1} b        --------(87)
Defining error vector
e^{(k)} = x^{(k)} - x*        --------(88)
and subtracting equation (87) from equation (86), we get
e^{(k+1)} = S^{-1} T e^{(k)}        --------(89)
Thus, if we start with some e^{(0)}, then after k iterations we have
e^{(1)} = S^{-1} T e^{(0)}        --------(90)
e^{(2)} = ( S^{-1} T ) e^{(1)} = ( S^{-1} T )^2 e^{(0)}        --------(91)
e^{(3)} = ( S^{-1} T ) e^{(2)} = ( S^{-1} T )^3 e^{(0)}        --------(92)
e^{(k)} = ( S^{-1} T )^k e^{(0)}        --------(93)
The convergence of the iterative scheme is assured if
\lim_{k \rightarrow \infty} e^{(k)} = \bar{0}        --------(94)
i.e.
\lim_{k \rightarrow \infty} ( S^{-1} T )^k e^{(0)} = \bar{0}        --------(95)
for any initial guess vector e^{(0)}.
Alternatively, consider the application of the general iteration equation (86) k times starting from the initial
guess x^{(0)}. At the k'th iteration step, we have
x^{(k)} = ( S^{-1} T )^k x^{(0)} + \left[ I + ( S^{-1} T ) + ( S^{-1} T )^2 + ... + ( S^{-1} T )^{k-1} \right] S^{-1} b        --------(96)
If we select S and T such that
\lim_{k \rightarrow \infty} ( S^{-1} T )^k = [0]        --------(97)
where [0] represents the null matrix, then, using the identity
\left[ I - S^{-1} T \right]^{-1} = I + ( S^{-1} T ) + ( S^{-1} T )^2 + ...
we can write
x^{(k)} \cong \left[ I - S^{-1} T \right]^{-1} S^{-1} b = \left[ S - T \right]^{-1} b = A^{-1} b
for large k. The above expression clearly explains how the iteration sequence generates a numerical
approximation to A^{-1} b, provided condition (97) is satisfied.
5.2.3 Convergence Criteria for Iteration Schemes
It may be noted that equation (89) is a linear difference equation of the form
z^{(k+1)} = B z^{(k)}        --------(98)
with a specified initial condition z^{(0)}. Here, z^{(k)} = e^{(k)} and B = S^{-1} T is an n \times n matrix. In Appendix A, we
analyzed the behavior of the solutions of linear difference equations of type (98). The criterion for
convergence of iteration equation (89) can be derived using the results derived in Appendix A. The
necessary and sufficient condition for convergence of (89) can be stated as
\rho ( S^{-1} T ) < 1
i.e. the spectral radius of matrix S^{-1} T should be less than one.
The necessary and sufficient condition for convergence stated above requires computation of the
eigenvalues of S^{-1} T, which is a computationally demanding task when the matrix dimension is large.
For a large dimensional matrix, if we have to compute the eigenvalues just to check this condition before starting the iterations, then we
might as well solve the problem by a direct method rather than use an iterative approach to save
computations. Thus, there is a need to derive some alternate criteria for convergence, which can be
checked easily before starting the iterations. Theorem 14 in Appendix A states that the spectral radius of a
matrix is smaller than or equal to any induced norm of the matrix. Thus, for matrix S^{-1} T we have
\rho ( S^{-1} T ) \leq || S^{-1} T ||
where || \cdot || is any induced matrix norm. Using this result, we can arrive at the following sufficient conditions for
the convergence of the iterations
|| S^{-1} T ||_1 < 1   or   || S^{-1} T ||_{\infty} < 1
Evaluating the 1 or \infty norms of S^{-1} T is significantly easier than evaluating the spectral radius of S^{-1} T.
Satisfaction of either of the above conditions implies \rho ( S^{-1} T ) < 1. However, it may be noted that these
are only sufficient conditions. Thus, if || S^{-1} T ||_1 > 1 or || S^{-1} T ||_{\infty} > 1, we cannot conclude anything
about the convergence of the iterations.
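These checks are straightforward to carry out numerically for a given splitting; the sketch below computes the spectral radius together with the cheaper 1- and infinity-norm bounds of an iteration matrix (the helper name is an assumption).

```python
import numpy as np

def convergence_indicators(B):
    """Spectral radius and induced norms of an iteration matrix B = S^{-1}T."""
    B = np.asarray(B, float)
    rho = max(abs(np.linalg.eigvals(B)))   # necessary and sufficient: rho < 1
    norm_1 = np.linalg.norm(B, 1)          # sufficient if < 1 (column-sum norm)
    norm_inf = np.linalg.norm(B, np.inf)   # sufficient if < 1 (row-sum norm)
    return rho, norm_1, norm_inf
```

If either norm is below one, convergence is guaranteed; if both exceed one, the test is inconclusive and only the spectral radius settles the question.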
If the matrix A has some special properties, such as diagonal dominance or symmetry and positive
definiteness, then convergence is ensured for some of the iterative techniques. Some of the important
convergence results available in the literature are summarized here.
Definition 1: A matrix A is called strictly diagonally dominant if
| a_{ii} | > \sum_{j = 1, j \neq i}^{n} | a_{ij} | ,   i = 1, 2, ..., n        --------(99)
Theorem 2 [2]: A sufficient condition for the convergence of the Jacobi and Gauss-Seidel methods is that
the matrix A of the linear system Ax = b is strictly diagonally dominant.
Proof: Refer to Appendix B.
Theorem 3 [5]: The Gauss-Seidel iterations converge if matrix A is symmetric and positive definite.
Proof: Refer to Appendix B.
Theorem 4 [3]: For an arbitrary matrix A, the necessary condition for the convergence of the relaxation
method is 0 < \omega < 2.
Proof: Refer to Appendix B.
Theorem 5 [2]: When matrix A is strictly diagonally dominant, a sufficient condition for the
convergence of the relaxation method is that 0 < \omega \leq 1.
Proof: Left to reader as an exercise.
Theorem 6 [2]: For a symmetric and positive definite matrix A, the relaxation method converges if and
only if 0 < \omega < 2.
Proof: Left to reader as an exercise.
Theorems 3 and 6 guarantee convergence of the Gauss-Seidel method or the relaxation method when
matrix A is symmetric and positive definite. Now, what do we do if matrix A in Ax = b is not
symmetric and positive definite? We can multiply both sides of the equation by A^T and transform
the original problem as follows
( A^T A ) x = A^T b        --------(100)
If matrix A is non-singular, then matrix A^T A is always symmetric and positive definite as
x^T ( A^T A ) x = ( A x )^T ( A x ) = || A x ||_2^2 > 0   for any x \neq \bar{0}        --------(101)
Now, for the transformed problem, we are guaranteed convergence if we use the Gauss-Seidel method
or the relaxation method with 0 < \omega < 2.
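A self-contained sketch of this transformation is given below: the normal equations ( A^T A ) x = A^T b are formed and then solved with a plain Gauss-Seidel sweep. The small non-symmetric matrix and the tolerance are illustrative assumptions, not the data of the examples that follow.

```python
import numpy as np

# Illustrative non-symmetric system (not taken from the text)
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([3.0, 4.0])

# Normal-equation transformation: A^T A is symmetric positive definite when A is non-singular,
# so Theorem 3 guarantees convergence of the Gauss-Seidel iterations.
AtA, Atb = A.T @ A, A.T @ b

x = np.zeros(2)
for k in range(1000):
    for i in range(len(Atb)):
        x[i] = (Atb[i] - AtA[i, :i] @ x[:i] - AtA[i, i+1:] @ x[i+1:]) / AtA[i, i]
    if np.linalg.norm(Atb - AtA @ x) < 1e-10:
        break

print(x, np.linalg.solve(A, b))   # iterative result vs. direct solution
```

Note that forming A^T A squares the condition number of the problem, so this transformation trades guaranteed convergence for a possibly slower rate.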
Example 7 [3]: Consider the system Ax = b where
--------(102)
For the Jacobi method

--------(103)

--------(104)
Thus, the error norm at each iteration is reduced by a factor of 0.5. For the Gauss-Seidel method
--------(105)
--------(106)
Thus, the error norm at each iteration is reduced by a factor of 1/4. This implies that, for the example
under consideration,
\left[ \rho ( S^{-1} T ) \right]_{GS} = \left( \left[ \rho ( S^{-1} T ) \right]_{JAC} \right)^2        --------(107)
For the relaxation method,


--------(108)


--------(109)

--------(110)
--------(111)

--------(112)
Now, if we plot \rho ( S^{-1} T ) v/s \omega, then it is observed that the spectral radius attains a minimum at an optimum value of \omega. From equation (110), it
follows that
--------(113)
at the optimum \omega. Now,
--------(114)
--------(115)
--------(116)
--------(117)
This is a major reduction in the spectral radius when compared to the Gauss-Seidel method. Thus, the error
norm at each iteration is reduced by a factor of 1/16 if we choose the optimum value of \omega.
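The qualitative behaviour described here (a spectral radius that decreases with \omega and attains a minimum at an optimum value) can be reproduced numerically for any given matrix by sweeping \omega; the sketch below does this for an illustrative 2 x 2 matrix, since the matrix of Example 7 is not reproduced in the text.

```python
import numpy as np

def sor_spectral_radius(A, omega):
    """Spectral radius of the relaxation iteration matrix (D + wL)^{-1}[(1 - w)D - wU]."""
    A = np.asarray(A, float)
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    B = np.linalg.solve(D + omega * L, (1.0 - omega) * D - omega * U)
    return max(abs(np.linalg.eigvals(B)))

A = np.array([[4.0, 1.0], [1.0, 4.0]])   # illustrative symmetric positive definite matrix
omegas = np.linspace(0.05, 1.95, 191)
radii = [sor_spectral_radius(A, w) for w in omegas]
w_opt = omegas[int(np.argmin(radii))]
print(w_opt, min(radii))                 # optimum relaxation parameter and the minimum spectral radius
```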
Example 8: Consider the system Ax = b where
--------(118)
If we use the Gauss-Seidel method to solve for x, the iterations do not converge as
--------(119)
--------(120)
Now, let us modify the problem by pre-multiplying both sides by A^T, i.e. the modified
problem is ( A^T A ) x = A^T b. The modified problem becomes
--------(121)
The matrix A^T A is symmetric and positive definite and, according to Theorem 3, the iterations should
converge if the Gauss-Seidel method is used. For the transformed problem, we have
--------(122)
--------(123)
and within 220 iterations (for the specified termination criterion), we get the following solution
--------(124)
which is close to the solution
--------(125)
computed as x* = A^{-1} b.
Table 4: Rate of Convergence of Iterative Methods
Example 9: Consider the system Ax = b where
--------(126)
If it is desired to solve the resulting problem using the Jacobi method / Gauss-Seidel method, will the
iterations converge? To establish the convergence of the Jacobi / Gauss-Seidel method, we can check whether
A is strictly diagonally dominant. Since the inequalities in Definition 1 hold for every row of A,
matrix A is strictly diagonally dominant, which is a sufficient condition for the convergence of the Jacobi /
Gauss-Seidel iterations (Theorem 2). Thus, the Jacobi / Gauss-Seidel iterations will converge to the solution
starting from any initial guess.
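The strict diagonal dominance test of Definition 1 is easy to automate; the sketch below checks it row by row for an illustrative matrix (the matrix of Example 9 is not reproduced in the text, so the data here is an assumption).

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum_{j != i} |a_ij| for every row i (Definition 1)."""
    A = np.abs(np.asarray(A, float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

A = [[5.0, 1.0, 2.0],
     [1.0, 6.0, 3.0],
     [2.0, 1.0, 7.0]]   # illustrative matrix only
print(is_strictly_diagonally_dominant(A))   # True => Jacobi / Gauss-Seidel converge (Theorem 2)
```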
From these examples, we can clearly see that the rate of convergence depends on the spectral radius of the
iteration matrix S^{-1} T. A comparison of the rates of convergence obtained from the analysis of some simple
problems is presented in Table 4 [2].
