Iterative techniques in matrix algebra


Tsung-Ming Huang

Department of Mathematics National Taiwan Normal University, Taiwan

August 28, 2011


Outline

1 Norms of vectors and matrices

2 Eigenvalues and eigenvectors

3 Iterative techniques for solving linear systems

4 Error bounds and iterative refinement

5 The conjugate gradient method


Definition 1

$\| \cdot \| : \mathbb{R}^n \to \mathbb{R}$ is a vector norm if

(i) $\|x\| \ge 0$, $\forall\, x \in \mathbb{R}^n$;
(ii) $\|x\| = 0$ if and only if $x = 0$;
(iii) $\|\alpha x\| = |\alpha| \|x\|$, $\forall\, \alpha \in \mathbb{R}$ and $x \in \mathbb{R}^n$;
(iv) $\|x + y\| \le \|x\| + \|y\|$, $\forall\, x, y \in \mathbb{R}^n$.

Definition 2

The $\ell_2$ and $\ell_\infty$ norms for $x = [x_1, x_2, \cdots, x_n]^T$ are defined by
$$\|x\|_2 = (x^T x)^{1/2} = \Big( \sum_{i=1}^n x_i^2 \Big)^{1/2} \quad \text{and} \quad \|x\|_\infty = \max_{1 \le i \le n} |x_i|.$$

The $\ell_2$ norm is also called the Euclidean norm.
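As a quick MATLAB illustration of Definition 2 (an added sketch; the example vector is arbitrary):

% Compute the l2 and l-infinity norms of a sample vector (Definition 2).
x = [3; -4; 1];
n2   = sqrt(x' * x);          % ||x||_2 = (x^T x)^(1/2)
ninf = max(abs(x));           % ||x||_inf = max_i |x_i|
fprintf('l2 = %g (norm(x,2) = %g), linf = %g (norm(x,inf) = %g)\n', ...
        n2, norm(x,2), ninf, norm(x,inf));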

Theorem 3 (Cauchy-Bunyakovsky-Schwarz inequality)

For each $x = [x_1, x_2, \cdots, x_n]^T$ and $y = [y_1, y_2, \cdots, y_n]^T$ in $\mathbb{R}^n$,
$$x^T y = \sum_{i=1}^n x_i y_i \le \Big( \sum_{i=1}^n x_i^2 \Big)^{1/2} \Big( \sum_{i=1}^n y_i^2 \Big)^{1/2} = \|x\|_2 \cdot \|y\|_2.$$

Proof: If x = 0 or y = 0, the result is immediate.

Suppose $x \ne 0$ and $y \ne 0$. For each $\alpha \in \mathbb{R}$,
$$0 \le \|x - \alpha y\|_2^2 = \sum_{i=1}^n (x_i - \alpha y_i)^2 = \sum_{i=1}^n x_i^2 - 2\alpha \sum_{i=1}^n x_i y_i + \alpha^2 \sum_{i=1}^n y_i^2,$$
and hence
$$2\alpha \sum_{i=1}^n x_i y_i \le \sum_{i=1}^n x_i^2 + \alpha^2 \sum_{i=1}^n y_i^2 = \|x\|_2^2 + \alpha^2 \|y\|_2^2.$$

Since $\|x\|_2 > 0$ and $\|y\|_2 > 0$, we can let $\alpha = \|x\|_2 / \|y\|_2$ to give
$$2 \frac{\|x\|_2}{\|y\|_2} \Big( \sum_{i=1}^n x_i y_i \Big) \le \|x\|_2^2 + \frac{\|x\|_2^2}{\|y\|_2^2} \|y\|_2^2 = 2\|x\|_2^2.$$
Thus
$$x^T y = \sum_{i=1}^n x_i y_i \le \|x\|_2 \|y\|_2.$$

For each $x, y \in \mathbb{R}^n$,
$$\|x + y\|_\infty = \max_{1 \le i \le n} |x_i + y_i| \le \max_{1 \le i \le n} (|x_i| + |y_i|) \le \max_{1 \le i \le n} |x_i| + \max_{1 \le i \le n} |y_i| = \|x\|_\infty + \|y\|_\infty$$
and
$$\|x + y\|_2^2 = \sum_{i=1}^n (x_i + y_i)^2 = \sum_{i=1}^n x_i^2 + 2 \sum_{i=1}^n x_i y_i + \sum_{i=1}^n y_i^2 \le \|x\|_2^2 + 2\|x\|_2 \|y\|_2 + \|y\|_2^2 = (\|x\|_2 + \|y\|_2)^2,$$
which gives
$$\|x + y\|_2 \le \|x\|_2 + \|y\|_2.$$

Definition 4

A sequence $\{x^{(k)}\}_{k=1}^{\infty} \subset \mathbb{R}^n$ is convergent to $x$ with respect to the norm $\| \cdot \|$ if $\forall\, \varepsilon > 0$, $\exists$ an integer $N(\varepsilon)$ such that
$$\|x^{(k)} - x\| < \varepsilon, \quad \forall\, k \ge N(\varepsilon).$$

Theorem 5

$\{x^{(k)}\}_{k=1}^{\infty} \subset \mathbb{R}^n$ converges to $x$ with respect to $\| \cdot \|_\infty$ if and only if
$$\lim_{k \to \infty} x_i^{(k)} = x_i, \quad \forall\, i = 1, 2, \ldots, n.$$

Proof: "$\Rightarrow$" Given any $\varepsilon > 0$, $\exists$ an integer $N(\varepsilon)$ such that
$$\max_{1 \le i \le n} |x_i^{(k)} - x_i| = \|x^{(k)} - x\|_\infty < \varepsilon, \quad \forall\, k \ge N(\varepsilon).$$

This result implies that
$$|x_i^{(k)} - x_i| < \varepsilon, \quad \forall\, i = 1, 2, \ldots, n.$$
Hence
$$\lim_{k \to \infty} x_i^{(k)} = x_i, \quad \forall\, i.$$

"$\Leftarrow$" For a given $\varepsilon > 0$, let $N_i(\varepsilon)$ represent an integer with
$$|x_i^{(k)} - x_i| < \varepsilon, \quad \text{whenever } k \ge N_i(\varepsilon).$$
Define
$$N(\varepsilon) = \max_{1 \le i \le n} N_i(\varepsilon).$$
If $k \ge N(\varepsilon)$, then
$$\max_{1 \le i \le n} |x_i^{(k)} - x_i| = \|x^{(k)} - x\|_\infty < \varepsilon.$$
This implies that $\{x^{(k)}\}$ converges to $x$ with respect to $\| \cdot \|_\infty$.

Theorem 6

For each $x \in \mathbb{R}^n$,
$$\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\, \|x\|_\infty.$$

Proof: Let $x_j$ be a coordinate of $x$ such that $|x_j| = \max_{1 \le i \le n} |x_i| = \|x\|_\infty$. Then
$$\|x\|_\infty^2 = |x_j|^2 = x_j^2 \le \sum_{i=1}^n x_i^2 = \|x\|_2^2,$$
so $\|x\|_\infty \le \|x\|_2$, and
$$\|x\|_2^2 = \sum_{i=1}^n x_i^2 \le \sum_{i=1}^n x_j^2 = n x_j^2 = n \|x\|_\infty^2,$$
so $\|x\|_2 \le \sqrt{n}\, \|x\|_\infty$.
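As a numerical sanity check of Theorem 6 (an added sketch, not part of the original slides), the inequalities can be observed on random vectors:

% Verify ||x||_inf <= ||x||_2 <= sqrt(n)*||x||_inf on random vectors.
n = 5;
for trial = 1:3
    x = randn(n, 1);                  % random test vector
    fprintf('%g <= %g <= %g\n', norm(x, inf), norm(x, 2), sqrt(n)*norm(x, inf));
end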

Definition 7

A matrix norm $\| \cdot \|$ on the set of all $n \times n$ matrices is a real-valued function satisfying, for all $n \times n$ matrices $A$ and $B$ and all real numbers $\alpha$:

(i) $\|A\| \ge 0$;
(ii) $\|A\| = 0$ if and only if $A = 0$;
(iii) $\|\alpha A\| = |\alpha| \|A\|$;
(iv) $\|A + B\| \le \|A\| + \|B\|$;
(v) $\|AB\| \le \|A\| \|B\|$.

Theorem 8

If $\| \cdot \|$ is a vector norm on $\mathbb{R}^n$, then
$$\|A\| = \max_{\|x\| = 1} \|Ax\|$$
is a matrix norm.

For any $z \ne 0$, we have $x = z / \|z\|$ as a unit vector. Hence
$$\|A\| = \max_{\|x\| = 1} \|Ax\| = \max_{z \ne 0} \left\| A \Big( \frac{z}{\|z\|} \Big) \right\| = \max_{z \ne 0} \frac{\|Az\|}{\|z\|}.$$

Corollary 9

$$\|Az\| \le \|A\| \cdot \|z\|.$$

Theorem 10

If $A = [a_{ij}]$ is an $n \times n$ matrix, then
$$\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|.$$

Proof: Let $x$ be an $n$-dimensional vector with
$$1 = \|x\|_\infty = \max_{1 \le i \le n} |x_i|.$$
Then
$$\|Ax\|_\infty = \max_{1 \le i \le n} \Big| \sum_{j=1}^n a_{ij} x_j \Big| \le \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}| \max_{1 \le j \le n} |x_j| = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|.$$
Consequently,
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty \le \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|.$$
On the other hand, let $p$ be an integer with
$$\sum_{j=1}^n |a_{pj}| = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|,$$

and let $x$ be the vector with
$$x_j = \begin{cases} 1, & \text{if } a_{pj} \ge 0, \\ -1, & \text{if } a_{pj} < 0. \end{cases}$$
Then
$$\|x\|_\infty = 1 \quad \text{and} \quad a_{pj} x_j = |a_{pj}|, \quad \forall\, j = 1, 2, \ldots, n,$$
so
$$\|Ax\|_\infty = \max_{1 \le i \le n} \Big| \sum_{j=1}^n a_{ij} x_j \Big| \ge \Big| \sum_{j=1}^n a_{pj} x_j \Big| = \sum_{j=1}^n |a_{pj}| = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|.$$
This result implies that
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty \ge \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|,$$
which gives
$$\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|.$$
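As an illustrative check of Theorem 10 (an added sketch; the test matrix is the one from Example 18 below), the maximum absolute row sum can be compared with MATLAB's built-in infinity norm:

% ||A||_inf equals the maximum absolute row sum (Theorem 10).
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];
rowSums = sum(abs(A), 2);             % absolute row sums
fprintf('max row sum = %g, norm(A,inf) = %g\n', max(rowSums), norm(A, inf));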

Eigenvalues and eigenvectors

Definition 11 (Characteristic polynomial)

If $A$ is a square matrix, the characteristic polynomial of $A$ is defined by $p(\lambda) = \det(A - \lambda I)$.

Definition 12 (Eigenvalue and eigenvector)

If $p$ is the characteristic polynomial of the matrix $A$, the zeros of $p$ are eigenvalues of $A$. If $\lambda$ is an eigenvalue of $A$ and $x \ne 0$ satisfies $(A - \lambda I)x = 0$, then $x$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$.

Definition 13 (Spectrum and Spectral Radius)

The set of all eigenvalues of a matrix A is called the spectrum of A.

The spectral radius of $A$ is
$$\rho(A) = \max\{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}.$$

Theorem 14

If $A$ is an $n \times n$ matrix, then

(i) $\|A\|_2 = \sqrt{\rho(A^T A)}$;
(ii) $\rho(A) \le \|A\|$ for any matrix norm.

Proof (of the second part): Suppose $\lambda$ is an eigenvalue of $A$ and $x \ne 0$ is a corresponding eigenvector with $Ax = \lambda x$ and $\|x\| = 1$. Then
$$|\lambda| = |\lambda| \|x\| = \|\lambda x\| = \|Ax\| \le \|A\| \|x\| = \|A\|,$$
that is, $|\lambda| \le \|A\|$. Since $\lambda$ is arbitrary, this implies that $\rho(A) = \max |\lambda| \le \|A\|$.
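Both parts of Theorem 14 can be checked numerically; the following MATLAB fragment is an added sketch with an arbitrary sample matrix:

% Spectral radius and Theorem 14 on a sample matrix.
A = [4 3 0; 3 4 -1; 0 -1 4];
rho = max(abs(eig(A)));                            % rho(A)
fprintf('norm(A,2) = %g, sqrt(rho(A''A)) = %g\n', ...
        norm(A, 2), sqrt(max(abs(eig(A' * A)))));  % part (i)
fprintf('rho(A) = %g <= norm(A,inf) = %g\n', rho, norm(A, inf));  % part (ii)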

Theorem 15

For any matrix $A$ and any $\varepsilon > 0$, there exists a matrix norm $\| \cdot \|$ such that
$$\rho(A) \le \|A\| < \rho(A) + \varepsilon.$$

Definition 16

We call an $n \times n$ matrix $A$ convergent if
$$\lim_{k \to \infty} (A^k)_{ij} = 0, \quad \forall\, i = 1, 2, \ldots, n \text{ and } j = 1, 2, \ldots, n.$$

Theorem 17

The following statements are equivalent.

1 $A$ is a convergent matrix;
2 $\lim_{k \to \infty} \|A^k\| = 0$ for some matrix norm;
3 $\lim_{k \to \infty} \|A^k\| = 0$ for all matrix norms;
4 $\rho(A) < 1$;
5 $\lim_{k \to \infty} A^k x = 0$ for any $x$.
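A small MATLAB sketch (an illustrative addition, with an arbitrary test matrix) checking statement 4 and watching the powers decay:

% A matrix is convergent iff rho(A) < 1; watch ||A^k|| shrink.
A = [0.5 0.25; 0 0.5];
fprintf('rho(A) = %g\n', max(abs(eig(A))));       % < 1, so A is convergent
for k = [1 5 10 20]
    fprintf('||A^%d||_inf = %g\n', k, norm(A^k, inf));
end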

Iterative techniques for solving linear systems

For linear systems of small dimension, direct techniques are preferable.

For large systems, iterative techniques are efficient in terms of both computer storage and computation.

The basic idea of iterative techniques is to split the coefficient matrix $A$ into
$$A = M - (M - A)$$
for some matrix $M$, which is called the splitting matrix. Here we assume that $A$ and $M$ are both nonsingular. Then the original problem is rewritten in the equivalent form
$$Mx = (M - A)x + b.$$

This suggests an iterative process
$$x^{(k)} = (I - M^{-1}A) x^{(k-1)} + M^{-1} b \equiv T x^{(k-1)} + c,$$
where $T$ is usually called the iteration matrix. The initial vector $x^{(0)}$ can be arbitrary or be chosen according to certain conditions.

Two criteria for choosing the splitting matrix $M$ are:

1 $x^{(k)}$ is easily computed; more precisely, the system $M x^{(k)} = y$ is easy to solve;
2 the sequence $\{x^{(k)}\}$ converges rapidly to the exact solution.

Note that one way to achieve the second goal is to choose $M$ so that $M^{-1}$ approximates $A^{-1}$.
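A minimal MATLAB sketch of the generic splitting iteration (an added illustration; the test system is the one from Example 18 below, and the stopping rule is an assumption):

% Generic splitting iteration: x_k = M \ ((M - A) x_{k-1} + b).
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];
b = [6; 25; -11; 15];
M = diag(diag(A));                    % Jacobi choice M = D
x = zeros(4, 1);                      % arbitrary initial vector
for k = 1:50
    xnew = M \ ((M - A) * x + b);     % one splitting step
    if norm(xnew - x, inf) < 1e-10, x = xnew; break; end
    x = xnew;
end
disp(x')                              % approximates [1 2 -1 1]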

In the following subsections, we will introduce some of the most commonly used classic iterative methods.

Jacobi Method

If we decompose the coefficient matrix $A$ as
$$A = L + D + U,$$
where $D$ is the diagonal part, $L$ is the strictly lower triangular part, and $U$ is the strictly upper triangular part of $A$, and choose $M = D$, then we derive the iterative formulation for the Jacobi method:
$$x^{(k)} = -D^{-1}(L + U) x^{(k-1)} + D^{-1} b.$$
With this method, the iteration matrix is $T_J = -D^{-1}(L + U)$ and $c = D^{-1} b$.

Each component $x_i^{(k)}$ can be computed by
$$x_i^{(k)} = \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k-1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big) \Big/ a_{ii}.$$

$$\begin{aligned}
a_{11} x_1^{(k)} + a_{12} x_2^{(k-1)} + a_{13} x_3^{(k-1)} + \cdots + a_{1n} x_n^{(k-1)} &= b_1 \\
a_{21} x_1^{(k-1)} + a_{22} x_2^{(k)} + a_{23} x_3^{(k-1)} + \cdots + a_{2n} x_n^{(k-1)} &= b_2 \\
&\ \,\vdots \\
a_{n1} x_1^{(k-1)} + a_{n2} x_2^{(k-1)} + a_{n3} x_3^{(k-1)} + \cdots + a_{nn} x_n^{(k)} &= b_n.
\end{aligned}$$

Algorithm 1 (Jacobi Method)

Given $x^{(0)}$, tolerance $TOL$, maximum number of iterations $M$.
Set $k = 1$.
While $k \le M$ and $\|x - x^{(0)}\|_2 \ge TOL$
    Set $k = k + 1$, $x^{(0)} = x$.
    For $i = 1, 2, \ldots, n$
        $x_i = \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(0)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(0)} \Big) \big/ a_{ii}$
    End For
End While
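Equivalently, one Jacobi sweep can be written in vectorized MATLAB form (an added sketch, using the data of Example 18 below):

% Vectorized Jacobi iteration: x <- D^{-1} (b - (L+U) x).
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];
b = [6; 25; -11; 15];
d = diag(A);                          % diagonal of A
x = zeros(4, 1);
for k = 1:25
    x = (b - (A - diag(d)) * x) ./ d; % one Jacobi sweep
end
disp(x')                              % approaches [1 2 -1 1]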

Example 18

Consider the linear system $Ax = b$ given by

$E_1$: $10x_1 - x_2 + 2x_3 = 6$,
$E_2$: $-x_1 + 11x_2 - x_3 + 3x_4 = 25$,
$E_3$: $2x_1 - x_2 + 10x_3 - x_4 = -11$,
$E_4$: $3x_2 - x_3 + 8x_4 = 15$,

which has the unique solution $x = [1, 2, -1, 1]^T$. Solving equation $E_i$ for $x_i$, for $i = 1, 2, 3, 4$, we obtain

$x_1 = \tfrac{1}{10} x_2 - \tfrac{1}{5} x_3 + \tfrac{3}{5}$,
$x_2 = \tfrac{1}{11} x_1 + \tfrac{1}{11} x_3 - \tfrac{3}{11} x_4 + \tfrac{25}{11}$,
$x_3 = -\tfrac{1}{5} x_1 + \tfrac{1}{10} x_2 + \tfrac{1}{10} x_4 - \tfrac{11}{10}$,
$x_4 = -\tfrac{3}{8} x_2 + \tfrac{1}{8} x_3 + \tfrac{15}{8}$.

Then $Ax = b$ can be rewritten in the form $x = Tx + c$ with
$$T = \begin{bmatrix} 0 & 1/10 & -1/5 & 0 \\ 1/11 & 0 & 1/11 & -3/11 \\ -1/5 & 1/10 & 0 & 1/10 \\ 0 & -3/8 & 1/8 & 0 \end{bmatrix} \quad \text{and} \quad c = \begin{bmatrix} 3/5 \\ 25/11 \\ -11/10 \\ 15/8 \end{bmatrix},$$
and the iterative formulation for the Jacobi method is
$$x^{(k)} = T x^{(k-1)} + c, \quad k = 1, 2, \ldots.$$
The numerical results of this iteration are listed as follows:

k    x1      x2      x3       x4
0    0.0000  0.0000  0.0000   0.0000
1    0.6000  2.2727  -1.1000  1.8750
2    1.0473  1.7159  -0.8052  0.8852
3    0.9326  2.0533  -1.0493  1.1309
4    1.0152  1.9537  -0.9681  0.9738
5    0.9890  2.0114  -1.0103  1.0214
6    1.0032  1.9922  -0.9945  0.9944
7    0.9981  2.0023  -1.0020  1.0036
8    1.0006  1.9987  -0.9990  0.9989
9    0.9997  2.0004  -1.0004  1.0006
10   1.0001  1.9998  -0.9998  0.9998

Matlab code of Example

clear all; delete rslt.dat; diary rslt.dat; diary on;
n = 4; xold = zeros(n,1); xnew = zeros(n,1); T = zeros(n,n);
% Jacobi iteration matrix T and vector c from Example 18
T(1,2) = 1/10;  T(1,3) = -1/5;   T(2,1) = 1/11;
T(2,3) = 1/11;  T(2,4) = -3/11;  T(3,1) = -1/5;
T(3,2) = 1/10;  T(3,4) = 1/10;   T(4,2) = -3/8;  T(4,3) = 1/8;
c(1,1) = 3/5;  c(2,1) = 25/11;  c(3,1) = -11/10;  c(4,1) = 15/8;
xnew = T * xold + c;  k = 0;
fprintf(' k    x1     x2     x3     x4 \n');
while ( k <= 100 & norm(xnew-xold) > 1.0d-14 )
    xold = xnew;  xnew = T * xold + c;  k = k + 1;   % one Jacobi step
    fprintf('%3.0f ', k);
    for jj = 1:n
        fprintf('%5.4f ', xold(jj));
    end
    fprintf('\n');
end

Gauss-Seidel Method

When computing $x_i^{(k)}$ for $i > 1$, the components $x_1^{(k)}, \ldots, x_{i-1}^{(k)}$ have already been computed and are likely to be better approximations to the exact $x_1, \ldots, x_{i-1}$ than $x_1^{(k-1)}, \ldots, x_{i-1}^{(k-1)}$. It seems reasonable to compute $x_i^{(k)}$ using these most recently computed values. That is,
$$\begin{aligned}
a_{11} x_1^{(k)} + a_{12} x_2^{(k-1)} + a_{13} x_3^{(k-1)} + \cdots + a_{1n} x_n^{(k-1)} &= b_1 \\
a_{21} x_1^{(k)} + a_{22} x_2^{(k)} + a_{23} x_3^{(k-1)} + \cdots + a_{2n} x_n^{(k-1)} &= b_2 \\
a_{31} x_1^{(k)} + a_{32} x_2^{(k)} + a_{33} x_3^{(k)} + \cdots + a_{3n} x_n^{(k-1)} &= b_3 \\
&\ \,\vdots \\
a_{n1} x_1^{(k)} + a_{n2} x_2^{(k)} + a_{n3} x_3^{(k)} + \cdots + a_{nn} x_n^{(k)} &= b_n.
\end{aligned}$$
This improvement induces the Gauss-Seidel method.

The Gauss-Seidel method sets $M = D + L$ and defines the iteration as
$$x^{(k)} = -(D + L)^{-1} U x^{(k-1)} + (D + L)^{-1} b.$$

That is, the Gauss-Seidel method uses $T_G = -(D + L)^{-1} U$ as the iteration matrix. The formulation above can be rewritten as
$$x^{(k)} = -D^{-1} \big( L x^{(k)} + U x^{(k-1)} - b \big).$$
Hence each component $x_i^{(k)}$ can be computed by
$$x_i^{(k)} = \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big) \Big/ a_{ii}.$$

For the Jacobi method, only the components of $x^{(k-1)}$ are used to compute $x^{(k)}$. Hence the components $x_i^{(k)}$, $i = 1, \ldots, n$, can be computed in parallel at each iteration $k$.

At each iteration of the Gauss-Seidel method, since $x_i^{(k)}$ cannot be computed until $x_1^{(k)}, \ldots, x_{i-1}^{(k)}$ are available, the method is not a parallel algorithm in nature.

Algorithm 2 (Gauss-Seidel Method)

Given $x^{(0)}$, tolerance $TOL$, maximum number of iterations $M$.
Set $k = 1$.
For $i = 1, 2, \ldots, n$
    $x_i = \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij} x_j^{(0)} \Big) \big/ a_{ii}$
End For
While $k \le M$ and $\|x - x^{(0)}\|_2 \ge TOL$
    Set $k = k + 1$, $x^{(0)} = x$.
    For $i = 1, 2, \ldots, n$
        $x_i = \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij} x_j^{(0)} \Big) \big/ a_{ii}$
    End For
End While

Example 19

Consider the linear system $Ax = b$ given by

$E_1$: $10x_1 - x_2 + 2x_3 = 6$,
$E_2$: $-x_1 + 11x_2 - x_3 + 3x_4 = 25$,
$E_3$: $2x_1 - x_2 + 10x_3 - x_4 = -11$,
$E_4$: $3x_2 - x_3 + 8x_4 = 15$,

which has the unique solution $x = [1, 2, -1, 1]^T$. The Gauss-Seidel method gives the equations
$$\begin{aligned}
x_1^{(k)} &= \tfrac{1}{10} x_2^{(k-1)} - \tfrac{1}{5} x_3^{(k-1)} + \tfrac{3}{5}, \\
x_2^{(k)} &= \tfrac{1}{11} x_1^{(k)} + \tfrac{1}{11} x_3^{(k-1)} - \tfrac{3}{11} x_4^{(k-1)} + \tfrac{25}{11}, \\
x_3^{(k)} &= -\tfrac{1}{5} x_1^{(k)} + \tfrac{1}{10} x_2^{(k)} + \tfrac{1}{10} x_4^{(k-1)} - \tfrac{11}{10}, \\
x_4^{(k)} &= -\tfrac{3}{8} x_2^{(k)} + \tfrac{1}{8} x_3^{(k)} + \tfrac{15}{8}.
\end{aligned}$$

The numerical results of this iteration are listed as follows:

k    x1      x2      x3       x4
0    0.0000  0.0000  0.0000   0.0000
1    0.6000  2.3273  -0.9873  0.8789
2    1.0302  2.0369  -1.0145  0.9843
3    1.0066  2.0036  -1.0025  0.9984
4    1.0009  2.0003  -1.0003  0.9998
5    1.0001  2.0000  -1.0000  1.0000

The results of this example appear to imply that the Gauss-Seidel method is superior to the Jacobi method. This is almost always true, but there are linear systems for which the Jacobi method converges and the Gauss-Seidel method does not. See Exercises 17 and 18.

Matlab code of Example

clear all; delete rslt.dat; diary rslt.dat; diary on;
n = 4; xold = zeros(n,1); xnew = zeros(n,1); A = zeros(n,n);
% Coefficient matrix and right-hand side of Example 19
A(1,1)=10; A(1,2)=-1; A(1,3)=2;
A(2,1)=-1; A(2,2)=11; A(2,3)=-1; A(2,4)=3;
A(3,1)=2;  A(3,2)=-1; A(3,3)=10; A(3,4)=-1;
A(4,2)=3;  A(4,3)=-1; A(4,4)=8;
b(1)=6; b(2)=25; b(3)=-11; b(4)=15;
% First Gauss-Seidel sweep
for ii = 1:n
    xnew(ii) = b(ii);
    for jj = 1:ii-1
        xnew(ii) = xnew(ii) - A(ii,jj) * xnew(jj);   % newest values
    end
    for jj = ii+1:n
        xnew(ii) = xnew(ii) - A(ii,jj) * xold(jj);   % previous values
    end
    xnew(ii) = xnew(ii) / A(ii,ii);
end
k = 0; fprintf(' k    x1     x2     x3     x4 \n');
while ( k <= 100 & norm(xnew-xold) > 1.0d-14 )
    xold = xnew; k = k + 1;
    for ii = 1:n
        xnew(ii) = b(ii);
        for jj = 1:ii-1
            xnew(ii) = xnew(ii) - A(ii,jj) * xnew(jj);
        end
        for jj = ii+1:n
            xnew(ii) = xnew(ii) - A(ii,jj) * xold(jj);
        end
        xnew(ii) = xnew(ii) / A(ii,ii);
    end
    fprintf('%3.0f ', k);
    for jj = 1:n
        fprintf('%5.4f ', xold(jj));
    end
    fprintf('\n');
end
diary off

Lemma 20

If $\rho(T) < 1$, then $(I - T)^{-1}$ exists and
$$(I - T)^{-1} = \sum_{i=0}^{\infty} T^i = I + T + T^2 + \cdots.$$

Proof: Let $\lambda$ be an eigenvalue of $T$; then $1 - \lambda$ is an eigenvalue of $I - T$. But $|\lambda| \le \rho(T) < 1$, so $1 - \lambda \ne 0$ and $0$ is not an eigenvalue of $I - T$, which means $I - T$ is nonsingular.

Next we show that $(I - T)^{-1} = I + T + T^2 + \cdots$. Since
$$(I - T) \Big( \sum_{i=0}^{m} T^i \Big) = I - T^{m+1},$$
and $\rho(T) < 1$ implies $\|T^m\| \to 0$ as $m \to \infty$, we have
$$(I - T) \lim_{m \to \infty} \Big( \sum_{i=0}^{m} T^i \Big) = (I - T) \Big( \sum_{i=0}^{\infty} T^i \Big) = I.$$
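A quick MATLAB illustration of Lemma 20 (an added sketch with an arbitrary contraction $T$): partial sums of the Neumann series approach $(I - T)^{-1}$ when $\rho(T) < 1$.

% Partial sums of I + T + T^2 + ... converge to inv(I - T) when rho(T) < 1.
T = [0.2 0.1; 0.3 0.4];
fprintf('rho(T) = %g\n', max(abs(eig(T))));
S = eye(2); P = eye(2);
for i = 1:50
    P = P * T;                        % P = T^i
    S = S + P;                        % partial sum of the Neumann series
end
disp(norm(S - inv(eye(2) - T)))       % close to zero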

Theorem 21

For any $x^{(0)} \in \mathbb{R}^n$, the sequence produced by
$$x^{(k)} = T x^{(k-1)} + c, \quad k = 1, 2, \ldots,$$
converges to the unique solution of $x = Tx + c$ if and only if $\rho(T) < 1$.

Proof: Suppose $\rho(T) < 1$. The vectors $x^{(k)}$ produced by the iterative formulation are
$$\begin{aligned}
x^{(1)} &= T x^{(0)} + c \\
x^{(2)} &= T x^{(1)} + c = T^2 x^{(0)} + (T + I) c \\
x^{(3)} &= T x^{(2)} + c = T^3 x^{(0)} + (T^2 + T + I) c \\
&\ \,\vdots
\end{aligned}$$
In general,
$$x^{(k)} = T^k x^{(0)} + (T^{k-1} + T^{k-2} + \cdots + T + I) c.$$

Since $\rho(T) < 1$, $\lim_{k \to \infty} T^k x^{(0)} = 0$ for any $x^{(0)} \in \mathbb{R}^n$. By Lemma 20,
$$(T^{k-1} + T^{k-2} + \cdots + T + I) c \to (I - T)^{-1} c, \quad \text{as } k \to \infty.$$
Therefore
$$\lim_{k \to \infty} x^{(k)} = \lim_{k \to \infty} T^k x^{(0)} + \Big( \sum_{j=0}^{\infty} T^j \Big) c = (I - T)^{-1} c.$$

Conversely, suppose $\{x^{(k)}\} \to x = (I - T)^{-1} c$. Since
$$x - x^{(k)} = Tx + c - T x^{(k-1)} - c = T(x - x^{(k-1)}) = T^2 (x - x^{(k-2)}) = \cdots = T^k (x - x^{(0)}),$$
let $z = x - x^{(0)}$. Then
$$\lim_{k \to \infty} T^k z = \lim_{k \to \infty} (x - x^{(k)}) = 0.$$
Since $x^{(0)}$, and hence $z$, is arbitrary, it follows from Theorem 17 that $\rho(T) < 1$.

Theorem 22

If $\|T\| < 1$, then the sequence $x^{(k)}$ converges to $x$ for any initial $x^{(0)}$ and

1 $\|x - x^{(k)}\| \le \|T\|^k \|x - x^{(0)}\|$;
2 $\|x - x^{(k)}\| \le \frac{\|T\|^k}{1 - \|T\|} \|x^{(1)} - x^{(0)}\|$.

Proof: Since $x = Tx + c$ and $x^{(k)} = T x^{(k-1)} + c$,
$$x - x^{(k)} = Tx + c - T x^{(k-1)} - c = T(x - x^{(k-1)}) = T^2 (x - x^{(k-2)}) = \cdots = T^k (x - x^{(0)}).$$
The first statement can then be derived:
$$\|x - x^{(k)}\| = \|T^k (x - x^{(0)})\| \le \|T\|^k \|x - x^{(0)}\|.$$
For the second result, we first show that
$$\|x^{(n)} - x^{(n-1)}\| \le \|T\|^{n-1} \|x^{(1)} - x^{(0)}\| \quad \text{for any } n \ge 1.$$

Since
$$x^{(n)} - x^{(n-1)} = T x^{(n-1)} + c - T x^{(n-2)} - c = T(x^{(n-1)} - x^{(n-2)}) = T^2 (x^{(n-2)} - x^{(n-3)}) = \cdots = T^{n-1} (x^{(1)} - x^{(0)}),$$
we have
$$\|x^{(n)} - x^{(n-1)}\| \le \|T\|^{n-1} \|x^{(1)} - x^{(0)}\|.$$
Let $m \ge k$. Then
$$\begin{aligned}
x^{(m)} - x^{(k)} &= \big( x^{(m)} - x^{(m-1)} \big) + \big( x^{(m-1)} - x^{(m-2)} \big) + \cdots + \big( x^{(k+1)} - x^{(k)} \big) \\
&= T^{m-1} \big( x^{(1)} - x^{(0)} \big) + T^{m-2} \big( x^{(1)} - x^{(0)} \big) + \cdots + T^k \big( x^{(1)} - x^{(0)} \big) \\
&= \big( T^{m-1} + T^{m-2} + \cdots + T^k \big) \big( x^{(1)} - x^{(0)} \big),
\end{aligned}$$

hence
$$\|x^{(m)} - x^{(k)}\| \le \big( \|T\|^{m-1} + \|T\|^{m-2} + \cdots + \|T\|^k \big) \|x^{(1)} - x^{(0)}\| = \|T\|^k \big( \|T\|^{m-k-1} + \|T\|^{m-k-2} + \cdots + 1 \big) \|x^{(1)} - x^{(0)}\|.$$
Since $\lim_{m \to \infty} x^{(m)} = x$,
$$\begin{aligned}
\|x - x^{(k)}\| &= \lim_{m \to \infty} \|x^{(m)} - x^{(k)}\| \\
&\le \lim_{m \to \infty} \|T\|^k \big( \|T\|^{m-k-1} + \|T\|^{m-k-2} + \cdots + 1 \big) \|x^{(1)} - x^{(0)}\| \\
&= \|T\|^k \|x^{(1)} - x^{(0)}\| \lim_{m \to \infty} \big( \|T\|^{m-k-1} + \|T\|^{m-k-2} + \cdots + 1 \big) \\
&= \|T\|^k \frac{1}{1 - \|T\|} \|x^{(1)} - x^{(0)}\|.
\end{aligned}$$
This proves the second result.
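The bound in statement 2 can be observed numerically; the following MATLAB sketch (an added illustration) applies it to the Jacobi iteration of Example 18, where $\|T\|_\infty = 0.5$:

% Check ||x - x_k|| <= ||T||^k/(1-||T||) ||x_1 - x_0|| in the inf-norm.
T = [0 1/10 -1/5 0; 1/11 0 1/11 -3/11; -1/5 1/10 0 1/10; 0 -3/8 1/8 0];
c = [3/5; 25/11; -11/10; 15/8];
x  = (eye(4) - T) \ c;                % exact fixed point of x = Tx + c
x0 = zeros(4, 1); x1 = T*x0 + c; nT = norm(T, inf);
xk = x0;
for k = 1:10
    xk = T*xk + c;
    bound = nT^k / (1 - nT) * norm(x1 - x0, inf);
    fprintf('k=%2d  error=%.2e  bound=%.2e\n', k, norm(x - xk, inf), bound);
end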

Theorem 23

If $A$ is strictly diagonally dominant, then both the Jacobi and Gauss-Seidel methods converge for any initial vector $x^{(0)}$.

Proof: By assumption, $A$ is strictly diagonally dominant, hence $a_{ii} \ne 0$ (otherwise $A$ is singular) and
$$|a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|, \quad i = 1, 2, \ldots, n.$$
For the Jacobi method, the iteration matrix $T_J = -D^{-1}(L + U)$ has entries
$$[T_J]_{ij} = \begin{cases} -\dfrac{a_{ij}}{a_{ii}}, & i \ne j, \\ 0, & i = j. \end{cases}$$
Hence
$$\|T_J\|_\infty = \max_{1 \le i \le n} \sum_{j=1, j \ne i}^{n} \left| \frac{a_{ij}}{a_{ii}} \right| = \max_{1 \le i \le n} \frac{1}{|a_{ii}|} \sum_{j=1, j \ne i}^{n} |a_{ij}| < 1,$$
and this implies that the Jacobi method converges.

For the Gauss-Seidel method, the iteration matrix is $T_G = -(D + L)^{-1} U$. Let $\lambda$ be any eigenvalue of $T_G$ and let $y$, with $\|y\|_\infty = 1$, be a corresponding eigenvector. Thus
$$T_G y = \lambda y \implies -Uy = \lambda (D + L) y.$$
Hence, for $i = 1, \ldots, n$,
$$-\sum_{j=i+1}^{n} a_{ij} y_j = \lambda a_{ii} y_i + \lambda \sum_{j=1}^{i-1} a_{ij} y_j.$$
This gives
$$\lambda a_{ii} y_i = -\lambda \sum_{j=1}^{i-1} a_{ij} y_j - \sum_{j=i+1}^{n} a_{ij} y_j$$
and
$$|\lambda| |a_{ii}| |y_i| \le |\lambda| \sum_{j=1}^{i-1} |a_{ij}| |y_j| + \sum_{j=i+1}^{n} |a_{ij}| |y_j|.$$

Choose the index $k$ such that $|y_k| = 1 \ge |y_j|$ for all $j$ (this index can always be found since $\|y\|_\infty = 1$). Then
$$|\lambda| |a_{kk}| \le |\lambda| \sum_{j=1}^{k-1} |a_{kj}| + \sum_{j=k+1}^{n} |a_{kj}|,$$
which gives
$$|\lambda| \le \frac{\sum_{j=k+1}^{n} |a_{kj}|}{|a_{kk}| - \sum_{j=1}^{k-1} |a_{kj}|} < \frac{\sum_{j=k+1}^{n} |a_{kj}|}{\sum_{j=k+1}^{n} |a_{kj}|} = 1.$$
Since $\lambda$ is arbitrary, $\rho(T_G) < 1$. This means the Gauss-Seidel method converges.

The rate of convergence depends on the spectral radius of the iteration matrix associated with the method. One way to select a procedure to accelerate convergence is to choose a method whose associated matrix has minimal spectral radius.
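A small MATLAB helper (an added sketch) to test the hypothesis of Theorem 23 before choosing Jacobi or Gauss-Seidel:

% Test strict diagonal dominance: |a_ii| > sum_{j~=i} |a_ij| for every row.
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];
d = abs(diag(A));
offDiag = sum(abs(A), 2) - d;         % row sums without the diagonal
if all(d > offDiag)
    disp('A is strictly diagonally dominant: Jacobi and Gauss-Seidel converge.');
else
    disp('A is not strictly diagonally dominant.');
end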

Successive over-relaxation (SOR) method

Definition 24

Suppose $\tilde{x} \in \mathbb{R}^n$ is an approximate solution of $Ax = b$. The residual vector $r$ for $\tilde{x}$ is $r = b - A\tilde{x}$.

Let the approximate solution $x^{(k,i)}$ produced by the Gauss-Seidel method be defined by
$$x^{(k,i)} = \big[ x_1^{(k)}, \ldots, x_{i-1}^{(k)}, x_i^{(k-1)}, \ldots, x_n^{(k-1)} \big]^T$$
and let
$$r_i^{(k)} = \big[ r_{1i}^{(k)}, r_{2i}^{(k)}, \ldots, r_{ni}^{(k)} \big]^T = b - A x^{(k,i)}$$
be the corresponding residual vector. Then the $m$th component of $r_i^{(k)}$ is
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)},$$

or, equivalently,
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj} x_j^{(k-1)} - a_{mi} x_i^{(k-1)},$$
for each $m = 1, 2, \ldots, n$.

In particular, the $i$th component of $r_i^{(k)}$ is
$$r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)},$$
so
$$a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} = a_{ii} x_i^{(k)}.$$

Consequently, the Gauss-Seidel method can be characterized as choosing $x_i^{(k)}$ to satisfy
$$x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}.$$

The relaxation method modifies the Gauss-Seidel procedure to
$$\begin{aligned}
x_i^{(k)} &= x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}} \\
&= x_i^{(k-1)} + \frac{\omega}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)} \Big) \\
&= (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big) \quad (1)
\end{aligned}$$
for certain choices of positive $\omega$ such that the norm of the residual vector is reduced and the convergence is significantly faster.

These methods are called:

$\omega < 1$: under-relaxation,
$\omega = 1$: Gauss-Seidel method,
$\omega > 1$: over-relaxation.

Over-relaxation methods are called SOR (successive over-relaxation) methods. To determine the matrix form of the SOR method, we rewrite (1) as
$$a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i,$$
so that if $A = L + D + U$, then we have
$$(D + \omega L) x^{(k)} = [(1 - \omega) D - \omega U] x^{(k-1)} + \omega b$$
or
$$x^{(k)} = (D + \omega L)^{-1} [(1 - \omega) D - \omega U] x^{(k-1)} + \omega (D + \omega L)^{-1} b \equiv T_\omega x^{(k-1)} + c_\omega.$$

Example 25

The linear system $Ax = b$ given by
$$\begin{aligned}
4x_1 + 3x_2 &= 24, \\
3x_1 + 4x_2 - x_3 &= 30, \\
-x_2 + 4x_3 &= -24,
\end{aligned}$$
has the solution $[3, 4, -5]^T$.

Numerical results of the Gauss-Seidel method with $x^{(0)} = [1, 1, 1]^T$:

k   x1         x2         x3
0   1.0000000  1.0000000  1.0000000
1   5.2500000  3.8125000  -5.0468750
2   3.1406250  3.8828125  -5.0292969
3   3.0878906  3.9267578  -5.0183105
4   3.0549316  3.9542236  -5.0114441
5   3.0343323  3.9713898  -5.0071526
6   3.0214577  3.9821186  -5.0044703
7   3.0134110  3.9888241  -5.0027940

Numerical results of the SOR method with $\omega = 1.25$ and $x^{(0)} = [1, 1, 1]^T$:

k   x1         x2         x3
0   1.0000000  1.0000000  1.0000000
1   6.3125000  3.5195313  -6.6501465
2   2.6223145  3.9585266  -4.6004238
3   3.1333027  4.0102646  -5.0966863
4   2.9570512  4.0074838  -4.9734897
5   3.0037211  4.0029250  -5.0057135
6   2.9963276  4.0009262  -4.9982822
7   3.0000498  4.0002586  -5.0003486

Numerical results of the SOR method with $\omega = 1.6$ and $x^{(0)} = [1, 1, 1]^T$:

k   x1         x2         x3
0   1.0000000  1.0000000  1.0000000
1   7.8000000  2.4400000  -9.2240000
2   1.9920000  4.4560000  -2.2832000
3   3.0576000  4.7440000  -6.3324800
4   2.0726400  4.1334400  -4.1471360
5   3.3962880  3.7855360  -5.5975040
6   3.0195840  3.8661760  -4.6950272
7   3.1488384  4.0236774  -5.1735127

Matlab code of SOR

clear all; delete rslt.dat; diary rslt.dat; diary on;
n = 3; xold = zeros(n,1); xnew = zeros(n,1);
A = zeros(n,n); DL = zeros(n,n); DU = zeros(n,n);
A(1,1)=4; A(1,2)=3; A(2,1)=3; A(2,2)=4; A(2,3)=-1; A(3,2)=-1; A(3,3)=4;
b(1,1)=24; b(2,1)=30; b(3,1)=-24; omega=1.25;
% Build DL = D + omega*L and DU = (1-omega)*D - omega*U
for ii = 1:n
    DL(ii,ii) = A(ii,ii);
    for jj = 1:ii-1
        DL(ii,jj) = omega * A(ii,jj);
    end
    DU(ii,ii) = (1-omega) * A(ii,ii);
    for jj = ii+1:n
        DU(ii,jj) = - omega * A(ii,jj);
    end
end
c = omega * (DL \ b); xnew = DL \ ( DU * xold ) + c;
k = 0; fprintf(' k    x1     x2     x3 \n');
while ( k <= 100 & norm(xnew-xold) > 1.0d-14 )
    xold = xnew; k = k + 1;
    xnew = DL \ ( DU * xold ) + c;    % one SOR step
    fprintf('%3.0f ', k);
    for jj = 1:n
        fprintf('%5.4f ', xold(jj));
    end
    fprintf('\n');
end
diary off

Theorem 26 (Kahan)

If $a_{ii} \ne 0$ for each $i = 1, 2, \ldots, n$, then $\rho(T_\omega) \ge |\omega - 1|$. This implies that the SOR method can converge only if $0 < \omega < 2$.

Theorem 27 (Ostrowski-Reich)

If $A$ is positive definite and the relaxation parameter $\omega$ satisfies $0 < \omega < 2$, then the SOR iteration converges for any initial vector $x^{(0)}$.

Theorem 28

If $A$ is positive definite and tridiagonal, then $\rho(T_G) = [\rho(T_J)]^2 < 1$, and the optimal choice of $\omega$ for the SOR iteration is
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}}.$$
With this choice of $\omega$, $\rho(T_\omega) = \omega - 1$.
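The optimal $\omega$ of Theorem 28 is easy to compute in MATLAB (an added sketch, using the matrix of Example 29 below):

% Optimal SOR parameter: omega = 2/(1 + sqrt(1 - rho(TJ)^2)).
A = [4 3 0; 3 4 -1; 0 -1 4];          % positive definite and tridiagonal
D = diag(diag(A));
TJ = -D \ (A - D);                    % Jacobi iteration matrix -D^{-1}(L+U)
rhoJ  = max(abs(eig(TJ)));
omega = 2 / (1 + sqrt(1 - rhoJ^2));
fprintf('rho(TJ) = %g, optimal omega = %g\n', rhoJ, omega);  % ~0.7906, ~1.24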

Example 29

The matrix
$$A = \begin{bmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{bmatrix},$$
given in the previous example, is positive definite and tridiagonal. Since
$$T_J = -D^{-1}(L + U) = \begin{bmatrix} \tfrac{1}{4} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{4} \end{bmatrix} \begin{bmatrix} 0 & -3 & 0 \\ -3 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -0.75 & 0 \\ -0.75 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{bmatrix},$$

we have
$$T_J - \lambda I = \begin{bmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{bmatrix},$$
so
$$\det(T_J - \lambda I) = -\lambda (\lambda^2 - 0.625).$$
Thus
$$\rho(T_J) = \sqrt{0.625}$$
and
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}} = \frac{2}{1 + \sqrt{1 - 0.625}} \approx 1.24.$$
This explains the rapid convergence obtained in the previous example when using $\omega = 1.25$.

Symmetric Successive Over Relaxation (SSOR) Method

Let $A$ be symmetric and $A = D + L + L^T$. The idea is to implement the SOR formulation twice, once forward and once backward, at each iteration. That is, the SSOR method defines
$$\begin{aligned}
(D + \omega L) x^{(k - \frac{1}{2})} &= \big[ (1 - \omega) D - \omega L^T \big] x^{(k-1)} + \omega b, \quad &(2) \\
(D + \omega L^T) x^{(k)} &= \big[ (1 - \omega) D - \omega L \big] x^{(k - \frac{1}{2})} + \omega b. \quad &(3)
\end{aligned}$$
Define
$$M_\omega := D + \omega L, \qquad N_\omega := (1 - \omega) D - \omega L^T.$$
Then from the iterations (2) and (3), it follows that
$$x^{(k)} = \big( M_\omega^{-T} N_\omega^T M_\omega^{-1} N_\omega \big) x^{(k-1)} + \omega \big( M_\omega^{-T} N_\omega^T M_\omega^{-1} + M_\omega^{-T} \big) b \equiv T(\omega) x^{(k-1)} + M(\omega)^{-1} b.$$

But
$$\begin{aligned}
\big( (1 - \omega) D - \omega L \big) (D + \omega L)^{-1} + I
&= (-\omega L - D - \omega D + 2D)(D + \omega L)^{-1} + I \\
&= -I + (2 - \omega) D (D + \omega L)^{-1} + I \\
&= (2 - \omega) D (D + \omega L)^{-1}.
\end{aligned}$$
Thus
$$M(\omega)^{-1} = \omega \big( D + \omega L^T \big)^{-1} (2 - \omega) D (D + \omega L)^{-1},$$
so the splitting matrix is
$$M(\omega) = \frac{1}{\omega (2 - \omega)} (D + \omega L) D^{-1} \big( D + \omega L^T \big).$$
The iteration matrix is
$$T(\omega) = (D + \omega L^T)^{-1} \big[ (1 - \omega) D - \omega L \big] (D + \omega L)^{-1} \big[ (1 - \omega) D - \omega L^T \big].$$
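A compact MATLAB sketch of the SSOR iteration via the two half-steps (2)-(3) (an added illustration; the matrix, $\omega$, and iteration count are assumptions borrowed from the earlier SOR example):

% One SSOR iteration = forward SOR half-step, then backward SOR half-step.
A = [4 3 0; 3 4 -1; 0 -1 4]; b = [24; 30; -24];
D = diag(diag(A)); L = tril(A, -1);   % A = D + L + L' with A symmetric
omega = 1.25; x = zeros(3, 1);
for k = 1:50
    xhalf = (D + omega*L)  \ (((1-omega)*D - omega*L') * x     + omega*b);
    x     = (D + omega*L') \ (((1-omega)*D - omega*L)  * xhalf + omega*b);
end
disp(x')                              % approaches [3 4 -5]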

Error bounds and iterative refinement

Example 30

The linear system $Ax = b$ given by
$$\begin{bmatrix} 1 & 2 \\ 1.0001 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 3.0001 \end{bmatrix}$$
has the unique solution $x = [1, 1]^T$.

The poor approximation $\tilde{x} = [3, 0]^T$ has the residual vector
$$r = b - A\tilde{x} = \begin{bmatrix} 3 \\ 3.0001 \end{bmatrix} - \begin{bmatrix} 1 & 2 \\ 1.0001 & 2 \end{bmatrix} \begin{bmatrix} 3 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -0.0002 \end{bmatrix},$$
so $\|r\|_\infty = 0.0002$. Although the norm of the residual vector is small, the approximation $\tilde{x} = [3, 0]^T$ is obviously quite poor; in fact, $\|x - \tilde{x}\|_\infty = 2$.

The solution of the above example represents the intersection of the lines
$$\ell_1 : x_1 + 2x_2 = 3 \quad \text{and} \quad \ell_2 : 1.0001 x_1 + 2x_2 = 3.0001.$$
$\ell_1$ and $\ell_2$ are nearly parallel. The point $(3, 0)$ lies on $\ell_1$, which implies that $(3, 0)$ also lies close to $\ell_2$, even though it differs significantly from the intersection point $(1, 1)$.

Theorem 31

Suppose that $\tilde{x}$ is an approximate solution of $Ax = b$, $A$ is a nonsingular matrix, and $r = b - A\tilde{x}$. Then
$$\|x - \tilde{x}\| \le \|r\| \cdot \|A^{-1}\|,$$
and if $x \ne 0$ and $b \ne 0$,
$$\frac{\|x - \tilde{x}\|}{\|x\|} \le \|A\| \cdot \|A^{-1}\| \frac{\|r\|}{\|b\|}.$$

Proof: Since
$$r = b - A\tilde{x} = Ax - A\tilde{x} = A(x - \tilde{x})$$
and $A$ is nonsingular, we have
$$\|x - \tilde{x}\| = \|A^{-1} r\| \le \|A^{-1}\| \cdot \|r\|. \quad (4)$$
Moreover, since $b = Ax$, we have
$$\|b\| \le \|A\| \cdot \|x\|.$$

This implies that
$$\frac{1}{\|x\|} \le \frac{\|A\|}{\|b\|}. \quad (5)$$
Combining (4) and (5), we have
$$\frac{\|x - \tilde{x}\|}{\|x\|} \le \|A\| \cdot \|A^{-1}\| \frac{\|r\|}{\|b\|}.$$

Definition 32 (Condition number)

The condition number of a nonsingular matrix $A$ is
$$\kappa(A) = \|A\| \cdot \|A^{-1}\|.$$
For any nonsingular matrix $A$,
$$1 = \|I\| = \|A \cdot A^{-1}\| \le \|A\| \cdot \|A^{-1}\| = \kappa(A).$$

Definition 33

A matrix $A$ is well-conditioned if $\kappa(A)$ is close to $1$, and is ill-conditioned when $\kappa(A)$ is significantly greater than $1$.

In the previous example,
$$A = \begin{bmatrix} 1 & 2 \\ 1.0001 & 2 \end{bmatrix}.$$
Since
$$A^{-1} = \begin{bmatrix} -10000 & 10000 \\ 5000.5 & -5000 \end{bmatrix},$$
we have
$$\kappa(A) = \|A\|_\infty \cdot \|A^{-1}\|_\infty = 3.0001 \times 20000 = 60002 \gg 1.$$
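This can be confirmed in MATLAB (an added check):

% Condition number in the infinity norm for the ill-conditioned example.
A = [1 2; 1.0001 2];
kappa = norm(A, inf) * norm(inv(A), inf);
fprintf('kappa_inf(A) = %g, cond(A,inf) = %g\n', kappa, cond(A, inf));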

How can the effective condition number be estimated in $t$-digit arithmetic without having to invert the matrix $A$?

If the approximate solution $\tilde{x}$ of $Ax = b$ is determined using $t$-digit arithmetic and Gaussian elimination, then
$$\|r\| = \|b - A\tilde{x}\| \approx 10^{-t} \|A\| \cdot \|\tilde{x}\|.$$

All the arithmetic operations in the Gaussian elimination technique are performed in $t$-digit arithmetic, but the residual vector $r$ is computed in double-precision (i.e., $2t$-digit) arithmetic.

Use the Gaussian elimination factorization which has already been calculated to solve
$$Ay = r.$$
Let $\tilde{y}$ be the approximate solution.

Then
$$\tilde{y} \approx A^{-1} r = A^{-1} (b - A\tilde{x}) = x - \tilde{x}$$
and
$$x \approx \tilde{x} + \tilde{y}.$$
Moreover,
$$\|\tilde{y}\| \approx \|x - \tilde{x}\| = \|A^{-1} r\| \le \|A^{-1}\| \cdot \|r\| \approx \|A^{-1}\| \big( 10^{-t} \|A\| \cdot \|\tilde{x}\| \big) = 10^{-t} \|\tilde{x}\| \kappa(A).$$
This implies that
$$\kappa(A) \approx \frac{\|\tilde{y}\|}{\|\tilde{x}\|} \, 10^{t}.$$

Iterative refinement

In general, $\tilde{x} + \tilde{y}$ is a more accurate approximation to the solution of $Ax = b$ than $\tilde{x}$.

Algorithm 3 (Iterative refinement)

Given tolerance $TOL$, maximum number of iterations $M$, number of digits of precision $t$.
Solve $Ax = b$ by using Gaussian elimination in $t$-digit arithmetic.
Set $k = 1$.
While $k \le M$
    Compute $r = b - Ax$ in $2t$-digit arithmetic.
    Solve $Ay = r$ by using Gaussian elimination in $t$-digit arithmetic.
    If $\|y\|_\infty < TOL$, then stop.
    Set $k = k + 1$ and $x = x + y$.
End While
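A minimal MATLAB sketch of Algorithm 3 (an added illustration; since MATLAB works in fixed double precision, the $t$-digit versus $2t$-digit split is mimicked here with the single type, which is an assumption of this sketch):

% Iterative refinement: low-precision solves corrected by
% residuals computed in higher precision.
A = single([1 2; 1.0001 2]); b = single([3; 3.0001]);
Ad = double(A); bd = double(b);       % "2t-digit" copies
x = A \ b;                            % t-digit solve
for k = 1:5
    r = bd - Ad * double(x);          % residual in 2t-digit arithmetic
    y = A \ single(r);                % correction in t-digit arithmetic
    if norm(y, inf) < 1e-6, break; end
    x = x + y;
end
disp(double(x)')                      % approaches [1 1]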

Example 34

The linear system given by
$$\begin{bmatrix} 3.3330 & 15920 & -10.333 \\ 2.2220 & 16.710 & 9.6120 \\ 1.5611 & 5.1791 & 1.6852 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 15913 \\ 28.544 \\ 8.4254 \end{bmatrix}$$
has the exact solution $x = [1, 1, 1]^T$.

Using Gaussian elimination and five-digit rounding arithmetic leads successively to the augmented matrices
$$\begin{bmatrix} 3.3330 & 15920 & -10.333 & 15913 \\ 0 & -10596 & 16.501 & -10580 \\ 0 & -7451.4 & 6.5250 & -7444.9 \end{bmatrix}$$
and
$$\begin{bmatrix} 3.3330 & 15920 & -10.333 & 15913 \\ 0 & -10596 & 16.501 & -10580 \\ 0 & 0 & -5.0790 & -4.7000 \end{bmatrix}.$$
