# Iterative Techniques in Matrix Algebra



### Hung-Yuan Fan (范洪源)

Department of Mathematics, National Taiwan Normal University, Taiwan


## Norms of Vectors and Matrices


## Vector Norms (向量的範數)

Def 7.1

A vector norm on Rn is a function ∥ · ∥ : Rn → R with the following properties:

(i) ∥x∥ ≥ 0 for all x ∈ Rn;
(ii) ∥x∥ = 0 ⇐⇒ x = 0;
(iii) ∥αx∥ = |α| ∥x∥ for all α ∈ R and x ∈ Rn;
(iv) ∥x + y∥ ≤ ∥x∥ + ∥y∥ for all x, y ∈ Rn.

### Note: The n-dimensional vector x is often denoted by

x = [x1, x2, · · · , xn]T = (x1, x2, · · · , xn)T.


## Useful Vector Norms

Def 7.2

The l2 and l∞ norms for the vector x = [x1, x2, · · · , xn]T ∈ Rn are defined by

∥x∥2 = ( ∑_{i=1}^{n} xi² )^{1/2} and ∥x∥∞ = max_{1≤i≤n} |xi|.

The l1 norm of x ∈ Rn is defined by

∥x∥1 = ∑_{i=1}^{n} |xi|.
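For concreteness, the three norms of Def 7.2 translate directly into code (a minimal pure-Python sketch, not part of the text):

```python
import math

def norm_1(x):
    # l1 norm: sum of absolute values
    return sum(abs(xi) for xi in x)

def norm_2(x):
    # l2 (Euclidean) norm: square root of the sum of squares
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    # l-infinity norm: largest absolute component
    return max(abs(xi) for xi in x)

x = [3.0, -4.0, 0.0]
print(norm_1(x), norm_2(x), norm_inf(x))  # 7.0 5.0 4.0
```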


## Distance between Vectors in Rn

Def 7.4

Let x = [x1, · · · , xn]T and y = [y1, · · · , yn]T be two vectors in Rn. The l2 and l∞ distances between x and y are defined by

∥x − y∥2 = { ∑_{i=1}^{n} (xi − yi)² }^{1/2} and ∥x − y∥∞ = max_{1≤i≤n} |xi − yi|.

The l1 distance between x and y is given by

∥x − y∥1 = ∑_{i=1}^{n} |xi − yi|.


Example 2, p. 435

The 3 × 3 linear system

3.3330x1 + 15920x2 − 10.333x3 = 15913,
2.2220x1 + 16.710x2 + 9.6120x3 = 28.544,
1.5611x1 + 5.1791x2 − 1.6852x3 = 8.4254

has the exact sol. x = [1, 1, 1]T. If the system is solved by GE with partial pivoting using 5-digit rounding arithmetic, we obtain the computed sol. x̃ = [1.2001, 0.99991, 0.92538]T. So, the l∞ and l2 distances between x and x̃ are

∥x − x̃∥∞ = max{|1 − 1.2001|, |1 − 0.99991|, |1 − 0.92538|} = 0.2001,
∥x − x̃∥2 = { (0.2001)² + (0.00009)² + (0.07462)² }^{1/2} ≈ 0.21356.
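The two distances can be computed directly from Def 7.4 (a quick Python sketch):

```python
def dist_inf(x, y):
    # l-infinity distance: largest componentwise gap
    return max(abs(a - b) for a, b in zip(x, y))

def dist_2(x, y):
    # l2 distance between two vectors
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

x  = [1.0, 1.0, 1.0]              # exact solution
xt = [1.2001, 0.99991, 0.92538]   # computed solution from Example 2
print(round(dist_inf(x, xt), 4))  # 0.2001
print(round(dist_2(x, xt), 5))    # 0.21356
```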


## Convergence for Sequences of Vectors

Def 7.5 (Convergence of Vector Sequences)

A seq. {x(k)}_{k=1}^∞ of vectors in Rn is said to converge to x ∈ Rn with respect to the norm ∥ · ∥ if ∀ ϵ > 0, ∃ N(ϵ) ∈ N s.t.

∥x(k) − x∥ < ϵ ∀ k ≥ N(ϵ).

Thm 7.6

The seq. of vectors {x(k)}_{k=1}^∞ converges to x ∈ Rn with respect to the l∞ norm

⇐⇒ lim_{k→∞} x(k)_i = xi for i = 1, 2, . . . , n.

### pf: It is easily seen that

∀ ϵ > 0, ∃ N(ϵ) ∈ N s.t. ∥x(k) − x∥∞ < ϵ ∀ k ≥ N(ϵ)

⇐⇒ |x(k)_i − xi| < ϵ ∀ k ≥ N(ϵ) and 1 ≤ i ≤ n.


Example 3, p. 436

The sequence of vectors in R4,

x(k) = [1, 2 + 1/k, 3/k², e^{−k} sin k]T,

converges to x = [1, 2, 0, 0]T ∈ R4 with respect to the l∞ norm, since

lim_{k→∞} (2 + 1/k) = 2, lim_{k→∞} 3/k² = 0, lim_{k→∞} e^{−k} sin k = 0.

Question

Does the given sequence converge to x with respect to the l2 norm?
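The rate at which ∥x(k) − x∥∞ shrinks can be tabulated directly (a small Python sketch of the sequence in Example 3):

```python
import math

def xk(k):
    # k-th term of the sequence in Example 3
    return [1.0, 2.0 + 1.0 / k, 3.0 / k**2, math.exp(-k) * math.sin(k)]

x = [1.0, 2.0, 0.0, 0.0]   # the limit vector
for k in (1, 10, 100, 1000):
    err = max(abs(a - b) for a, b in zip(xk(k), x))
    print(k, round(err, 6))   # the l-infinity error shrinks like 1/k
```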


Thm 7.7 (The Equivalence of Vector Norms)

For each x ∈ Rn,

∥x∥∞ ≤ ∥x∥2 ≤ √n ∥x∥∞.

In this case, we say that the l∞ and l2 norms are equivalent.

### pf: For any x = [x1, · · · , xn]T ∈ Rn, let |x_{i0}| = max_{1≤i≤n} |xi| = ∥x∥∞. Then we see that

1. ∥x∥∞ = |x_{i0}| = √(x_{i0}²) ≤ √(x1² + · · · + xn²) = ∥x∥2.
2. ∥x∥2 = ( ∑_{i=1}^{n} xi² )^{1/2} ≤ ( ∑_{i=1}^{n} x_{i0}² )^{1/2} = (n ∥x∥∞²)^{1/2} = √n ∥x∥∞.

These prove the desired inequalities.
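Thm 7.7 can be sanity-checked on random vectors (a quick Python sketch; the small tolerance guards against rounding):

```python
import random, math

def check_equivalence(x):
    # verifies Thm 7.7: ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
    n = len(x)
    ninf = max(abs(v) for v in x)
    n2 = math.sqrt(sum(v * v for v in x))
    return ninf <= n2 <= math.sqrt(n) * ninf + 1e-12

random.seed(0)
print(all(check_equivalence([random.uniform(-1, 1) for _ in range(5)])
          for _ in range(1000)))  # True
```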


Example 4, p. 437

Show that the sequence of vectors in Example 3,

x(k) = [1, 2 + 1/k, 3/k², e^{−k} sin k]T ∈ R4,

converges to x = [1, 2, 0, 0]T ∈ R4 with respect to the l2 norm.

### pf: In Example 3, we know that lim_{k→∞} ∥x(k) − x∥∞ = 0. So, for any ϵ > 0, ∃ N0 ∈ N s.t.

∥x(k) − x∥∞ < ϵ/2 ∀ k ≥ N0,

and furthermore, it follows from Thm 7.7 that

∥x(k) − x∥2 ≤ √4 ∥x(k) − x∥∞ = 2∥x(k) − x∥∞ < ϵ ∀ k ≥ N0.


Remarks

Any two vector norms ∥ · ∥ and ∥ · ∥′ on Rn are equivalent, i.e., ∃ c1 > 0 and c2 > 0 s.t.

c1∥x∥ ≤ ∥x∥′ ≤ c2∥x∥ ∀ x ∈ Rn.

A seq. {x(k)}_{k=1}^∞ converges to the limit x ∈ Rn with respect to the norm ∥ · ∥ ⇐⇒ it converges to x with respect to the norm ∥ · ∥′. (Convergence of a vector sequence is independent of the norm!)

For any x ∈ Rn, the relations between the l1, l2 and l∞ norms are

∥x∥2 ≤ ∥x∥1 ≤ √n ∥x∥2,
∥x∥∞ ≤ ∥x∥1 ≤ n ∥x∥∞.


## Matrix Norms and Distances

Def 7.8 (Matrix Norms)

A matrix norm on Rn×n is a function ∥ · ∥ : Rn×n → R satisfying for all A, B ∈ Rn×n and all α ∈ R:

(i) ∥A∥ ≥ 0;

(ii) ∥A∥ = 0 ⇐⇒ A = 0 (zero matrix);

(iii) ∥αA∥ = |α| ∥A∥;

(iv) ∥A + B∥ ≤ ∥A∥ + ∥B∥;

(v) ∥AB∥ ≤ ∥A∥ ∥B∥.

Definition (Distances of Two Matrices)

The distance between two n × n matrices A and B with respect to a matrix norm ∥ · ∥ is defined by ∥A − B∥.


Thm 7.9 (Natural Matrix Norms)

If ∥ · ∥ is a vector norm on Rn, then

∥A∥ = max_{∥x∥=1} ∥Ax∥

is a matrix norm on Rn×n. (See Exercise 13 for the proof.)

### pf: Here we only prove that ∥AB∥ ≤ ∥A∥ ∥B∥ for any A, B ∈ Rn×n.

For any unit vector x ∈ Rn with Bx ≠ 0, we have

∥A(Bx)∥ = ∥Bx∥ · ∥A(Bx/∥Bx∥)∥ ≤ ∥A∥ · ∥Bx∥.

Thus, we conclude that

∥AB∥ = max_{∥x∥=1} ∥(AB)x∥ = max_{∥x∥=1} ∥A(Bx)∥
     ≤ max_{∥x∥=1} (∥A∥ ∥Bx∥) = ∥A∥ · max_{∥x∥=1} ∥Bx∥ = ∥A∥ ∥B∥.


Remarks

Matrix norms defined by vector norms are called the natural (or induced) matrix norms associated with the vector norm.

Since x = z/∥z∥ is a unit vector for z ≠ 0, Thm 7.9 can be rewritten as

∥A∥ = max_{∥x∥=1} ∥Ax∥ = max_{z≠0} ∥A(z/∥z∥)∥ = max_{z≠0} ∥Az∥/∥z∥.


Cor 7.10

For any A ∈ Rn×n, 0 ≠ z ∈ Rn and any natural norm ∥ · ∥,

∥Az∥ ≤ ∥A∥ · ∥z∥.

Some Natural Matrix Norms

1. ∥A∥∞ = max_{∥x∥∞=1} ∥Ax∥∞ = max_{z≠0} ∥Az∥∞/∥z∥∞. (the l∞ norm)
2. ∥A∥2 = max_{∥x∥2=1} ∥Ax∥2 = max_{z≠0} ∥Az∥2/∥z∥2. (the l2 norm)
3. ∥A∥1 = max_{∥x∥1=1} ∥Ax∥1 = max_{z≠0} ∥Az∥1/∥z∥1. (the l1 norm)


Thm 7.11 (Formula for the Matrix ∞-Norm)

If A = [aij] ∈ Rn×n, then

∥A∥∞ = max_{1≤i≤n} ∑_{j=1}^{n} |aij|. (the maximum row sum of |A| = [ |aij| ])

### pf: The proof is separated into two parts.

(1) Assume ∥A∥∞ = ∥Ax∥∞ for some x ∈ Rn with ∥x∥∞ = max_{1≤j≤n} |xj| = 1. Then we have

∥A∥∞ = ∥Ax∥∞ = max_{1≤i≤n} |(Ax)_i| = max_{1≤i≤n} | ∑_{j=1}^{n} aij xj | ≤ max_{1≤i≤n} ∑_{j=1}^{n} |aij| |xj| ≤ max_{1≤i≤n} ∑_{j=1}^{n} |aij|,

since |xj| ≤ ∥x∥∞ = 1 for all j.


(2) Fix a row i for which ∑_{j=1}^{n} |aij| is maximal, and let y = [y1, y2, . . . , yn]T ∈ Rn, where each component

yj = 1 if aij ≥ 0, and yj = −1 if aij < 0.

Then ∥y∥∞ = 1 and aij yj = |aij| for all j, so

(Ay)_i = ∑_{j=1}^{n} aij yj = ∑_{j=1}^{n} |aij| = max_{1≤i≤n} ∑_{j=1}^{n} |aij|.

Furthermore, it follows that

∥A∥∞ = max_{∥x∥∞=1} ∥Ax∥∞ ≥ ∥Ay∥∞ ≥ |(Ay)_i| = max_{1≤i≤n} ∑_{j=1}^{n} |aij|.

Parts (1) and (2) together complete the proof.


Exercise 6, p. 442 (Formula for the Matrix 1-Norm)

If A = [aij] ∈ Rn×n, then

∥A∥1 = max_{1≤j≤n} ∑_{i=1}^{n} |aij|. (the maximum column sum of |A| = [ |aij| ])


Example 5, p. 441

For the 3 × 3 matrix

A =
[ 1   2  −1 ]
[ 0   3  −1 ]
[ 5  −1   1 ],

it follows that

∥A∥∞ = max_{i=1,2,3} ∑_{j=1}^{3} |aij| = max{4, 4, 7} = 7,
∥A∥1 = max_{j=1,2,3} ∑_{i=1}^{3} |aij| = max{6, 6, 3} = 6.
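Both formulas are one-liners in Python (a sketch using Thm 7.11 and Exercise 6, applied to the matrix of Example 5):

```python
def norm_inf_mat(A):
    # ||A||_inf: maximum absolute row sum (Thm 7.11)
    return max(sum(abs(a) for a in row) for row in A)

def norm_1_mat(A):
    # ||A||_1: maximum absolute column sum (Exercise 6)
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1, 2, -1],
     [0, 3, -1],
     [5, -1, 1]]
print(norm_inf_mat(A), norm_1_mat(A))  # 7 6
```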


## Eigenvalues and Eigenvectors


Def 7.12 (Characteristic Polynomial)

The characteristic polynomial of A ∈ Rn×n is defined by

p(λ) = det(A − λI),

where I is the n × n identity matrix.

### Note: The characteristic poly. p is an nth-degree poly. with real coefficients. So, it has at most n distinct zeros in C.


Def 7.13 (Eigenvalues and Eigenvectors)

Let p(λ) be the characteristic poly. of A ∈ Rn×n.

The number λ ∈ C is called an eigenvalue (or characteristic value) of A if p(λ) = 0.

The spectrum of A, denoted by σ(A), is the set of all eigenvalues of A.

If ∃ 0 ≠ x ∈ Rn s.t. Ax = λx or (A − λI)x = 0 for λ ∈ σ(A), then x is called an eigenvector (or characteristic vector) of A corresponding to λ.


Def 7.14

The spectral radius of A ∈ Rn×n is defined by

ρ(A) = max{ |λ| : λ ∈ σ(A) }.

(For complex λ = α + βi, we define |λ| = √(α² + β²).)

Thm 7.15 (Formula for the Matrix 2-Norm)

If A is an n × n matrix, then

(i) ∥A∥2 = √ρ(ATA).
(ii) ρ(A) ≤ ∥A∥ for any natural matrix norm ∥ · ∥.
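Both parts of Thm 7.15 can be illustrated with a 2 × 2 example, where the eigenvalues of the symmetric matrix ATA are available in closed form from the characteristic polynomial (a hand-rolled sketch; the matrix A below is an arbitrary illustration, not from the text):

```python
import math

def eig_sym_2x2(B):
    # eigenvalues of a symmetric 2x2 matrix via its characteristic polynomial
    tr = B[0][0] + B[1][1]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[1.0, 1.0],
     [0.0, 1.0]]   # triangular, eigenvalues {1, 1}, so rho(A) = 1
# form A^T A explicitly: here A^T A = [[1, 1], [1, 2]]
AtA = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
       [A[0][0]*A[0][1] + A[1][0]*A[1][1], A[0][1]**2 + A[1][1]**2]]
lam1, _ = eig_sym_2x2(AtA)
norm2 = math.sqrt(lam1)          # Thm 7.15(i): ||A||_2 = sqrt(rho(A^T A))
norm_inf = max(sum(abs(a) for a in row) for row in A)
print(round(norm2, 6))           # 1.618034 (the golden ratio)
print(1.0 <= norm2 <= norm_inf)  # Thm 7.15(ii): rho(A) <= every natural norm
```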


Review from Linear Algebra

Let B = ATA with A ∈ Rn×n and v ∈ Rn.

BT = (ATA)T = AT(AT)T = ATA = B, i.e., B is symmetric.

For any λ ∈ σ(B), λ ≥ 0.

B is orthogonally diagonalizable, i.e., ∃ orthogonal Q ∈ Rn×n s.t.

QTBQ = QT(ATA)Q = diag(λ1, λ2, · · · , λn) ≡ D, where λ1 ≥ λ2 ≥ · · · ≥ λn ≥ 0.

Since ∥v∥2² = vTv, we have

∥Ax∥2² = (Ax)T(Ax) = xT(ATA)x = xTBx.


Proof of Thm 7.15 (1/2)

(i) Since ATA is symmetric, ∃ orthogonal Q ∈ Rn×n s.t.

QT(ATA)Q = diag(λ1, λ2, · · · , λn) ≡ D,

where λ1 ≥ λ2 ≥ · · · ≥ λn ≥ 0. Hence,

∥A∥2² = max_{∥x∥2=1} ∥Ax∥2² = max_{∥x∥2=1} xT(ATA)x = max_{∥x∥2=1} xTQDQTx = max_{∥y∥2=1} yTDy,

where we let y = QTx. So, ∥A∥2² ≤ λ1. Moreover, the maximum value of yTDy is achieved at the vector y = [1, 0, · · · , 0]T ∈ Rn, and thus ∥A∥2² = λ1, or

∥A∥2 = √λ1 = √ρ(ATA).


Proof of Thm 7.15 (2/2)

(ii) Let A ∈ Rn×n and ∥ · ∥ be any natural norm. For each λ ∈ σ(A), ∃ 0 ≠ x ∈ Rn s.t.

Ax = λx with ∥x∥ = 1.

Hence, we know that

|λ| = |λ| · ∥x∥ = ∥λx∥ = ∥Ax∥ ≤ ∥A∥ ∥x∥ = ∥A∥.

So, the spectral radius of A satisfies ρ(A) ≤ ∥A∥.


Remarks

If AT = A ∈ Rn×n, then ∥A∥2 = ρ(A).

For any A ∈ Rn×n and any ϵ > 0, ∃ a natural norm ∥ · ∥ϵ s.t.

ρ(A) ≤ ∥A∥ϵ < ρ(A) + ϵ.


Def 7.16 (Convergent Matrices)

We say that a matrix A ∈ Rn×n is convergent if

lim_{k→∞} (A^k)_{ij} = 0 for i, j = 1, 2, . . . , n.

Example 4, p. 448

The 2 × 2 matrix

A =
[ 1/2   0  ]
[ 1/4  1/2 ]

is a convergent matrix, since we have

A^k =
[ (1/2)^k       0     ]
[ k/2^{k+1}  (1/2)^k ]  ∀ k ≥ 1.
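The closed form for A^k can be confirmed by repeated multiplication (a pure-Python sketch):

```python
def matmul(A, B):
    # plain 2x2 (or n x n) matrix product
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.5, 0.0],
     [0.25, 0.5]]
Ak = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity
for k in range(1, 30):
    Ak = matmul(Ak, A)          # after the loop, Ak = A^29
# entry (2,1) matches the closed form k / 2^(k+1); every entry tends to 0
print(abs(Ak[1][0] - 29 / 2**30) < 1e-15)  # True
```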


Thm 7.17 (Equivalent Conditions for Convergent Matrices)

Let A ∈ Rn×n. The following statements are equivalent.

(i) A is a convergent matrix.
(ii) lim_{n→∞} ∥A^n∥ = 0 for some natural norm.
(iii) lim_{n→∞} ∥A^n∥ = 0 for all natural norms.
(iv) ρ(A) < 1.
(v) lim_{n→∞} A^n x = 0 for every x ∈ Rn.


## Jacobi and Gauss-Seidel Iterative Techniques


## Derivation of Jacobi Method

Basic Idea

From the ith eq. of a linear system Ax = b,

ai1x1 + ai2x2 + · · · + aiixi + · · · + ainxn = bi,

solving for the ith component xi, we get

xi = (1/aii) [ ∑_{j=1, j≠i}^{n} (−aij xj) + bi ],

provided that aii ≠ 0 for i = 1, 2, . . . , n.


## The Jacobi Method

Component Form

For each k ≥ 1, we may consider the Jacobi iterative method:

x(k)_i = (1/aii) [ ∑_{j=1, j≠i}^{n} (−aij x(k−1)_j) + bi ] for i = 1, . . . , n, (1)

where an initial approx. x(0) = [x(0)_1, · · · , x(0)_n]T ∈ Rn is given.


Example 1, p. 451

The following linear system

10x1 − x2 + 2x3 = 6,
−x1 + 11x2 − x3 + 3x4 = 25,
2x1 − x2 + 10x3 − x4 = −11,
3x2 − x3 + 8x4 = 15

has a unique solution x = [1, 2, −1, 1]T ∈ R4. Use Jacobi's iterative technique to find an approx. x(k) to x starting with x(0) = [0, 0, 0, 0]T ∈ R4 until

∥x(k) − x(k−1)∥∞ / ∥x(k)∥∞ < 10^{−3}.


Solution (1/2)

The given linear system can be rewritten as

x1 = (1/10)x2 − (1/5)x3 + 3/5,
x2 = (1/11)x1 + (1/11)x3 − (3/11)x4 + 25/11,
x3 = −(1/5)x1 + (1/10)x2 + (1/10)x4 − 11/10,
x4 = −(3/8)x2 + (1/8)x3 + 15/8.


Solution (2/2)

For each k ≥ 1, we apply the Jacobi method:

x(k)_1 = (1/10)x(k−1)_2 − (1/5)x(k−1)_3 + 3/5,
x(k)_2 = (1/11)x(k−1)_1 + (1/11)x(k−1)_3 − (3/11)x(k−1)_4 + 25/11,
x(k)_3 = −(1/5)x(k−1)_1 + (1/10)x(k−1)_2 + (1/10)x(k−1)_4 − 11/10,
x(k)_4 = −(3/8)x(k−1)_2 + (1/8)x(k−1)_3 + 15/8,

with the initial guess x(0) = [0, 0, 0, 0]T ∈ R4.


## Numerical Results

After 10 iterations of the Jacobi method, we have

∥x(10) − x(9)∥∞ / ∥x(10)∥∞ = (8.0 × 10^{−4}) / 1.9998 = 4.0 × 10^{−4} < 10^{−3}.

In fact, the absolute error is ∥x(10) − x∥∞ = 2 × 10^{−4}.
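These numbers can be reproduced with a short script (a minimal pure-Python sketch of the component form, not the book's Algorithm 7.1; the update formulas are the rewritten equations from the Solution above):

```python
def jacobi_step(x):
    # one Jacobi sweep for the 4x4 system of Example 1
    return [x[1] / 10 - x[2] / 5 + 3 / 5,
            x[0] / 11 + x[2] / 11 - 3 * x[3] / 11 + 25 / 11,
            -x[0] / 5 + x[1] / 10 + x[3] / 10 - 11 / 10,
            -3 * x[1] / 8 + x[2] / 8 + 15 / 8]

x = [0.0, 0.0, 0.0, 0.0]
for k in range(10):
    x = jacobi_step(x)
exact = [1.0, 2.0, -1.0, 1.0]
err = max(abs(a - b) for a, b in zip(x, exact))
print(err)   # about 2e-4, matching the reported absolute error
```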


Equivalent Matrix-Vector Forms

As in Chapter 2, every root-finding problem f(x) = 0 is converted into an equivalent fixed-point form

x = g(x), x ∈ I = [a, b],

for some differentiable function g. Similarly, we also try to convert the original linear system Ax = b into an equivalent matrix-vector form

x = Tx + c, x ∈ Rn,

where T ∈ Rn×n and c ∈ Rn are fixed. For k = 1, 2, . . ., compute

x(k) = Tx(k−1) + c

with an initial approx. x(0) ∈ Rn to the unique sol. x.


A Useful Splitting of A

The iterative techniques for solving Ax = b will be derived by first splitting A into its diagonal and off-diagonal parts, i.e.,

A = D − L − U, (2)

where

D = diag(a11, a22, . . . , ann) is the diagonal part of A,
L is the strictly lower-triangular matrix with entries L_ij = −aij for i > j (and 0 elsewhere),
U is the strictly upper-triangular matrix with entries U_ij = −aij for i < j (and 0 elsewhere).
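The splitting (2) is easy to compute directly; a minimal Python sketch (the names D, L, U mirror the text):

```python
def split(A):
    # A = D - L - U as in (2): diagonal, strictly lower, strictly upper parts
    n = len(A)
    D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
    L = [[-A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
    U = [[-A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]
    return D, L, U

A = [[10.0, -1.0, 2.0],
     [-1.0, 11.0, -1.0],
     [2.0, -1.0, 10.0]]
D, L, U = split(A)
# check that D - L - U reproduces A entrywise
print(all(abs(D[i][j] - L[i][j] - U[i][j] - A[i][j]) < 1e-15
          for i in range(3) for j in range(3)))  # True
```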


## The Jacobi Method Revisited

From the splitting of A as in (2), the linear system Ax = b is transformed to

(D − L − U)x = b ⇔ Dx = (L + U)x + b ⇔ x = Tj x + cj,

where Tj ≡ D^{−1}(L + U) and cj ≡ D^{−1}b. It is easily seen that the component form of the Jacobi method

x(k)_i = (1/aii) [ ∑_{j=1, j≠i}^{n} (−aij x(k−1)_j) + bi ] for i = 1, . . . , n

is equivalent to the following matrix-vector form:

x(k) = Tj x(k−1) + cj ∀ k ≥ 1.


Example 2, p. 453

The 4 × 4 linear system in Example 1 can be rewritten in the form

x1 = (1/10)x2 − (1/5)x3 + 3/5,
x2 = (1/11)x1 + (1/11)x3 − (3/11)x4 + 25/11,
x3 = −(1/5)x1 + (1/10)x2 + (1/10)x4 − 11/10,
x4 = −(3/8)x2 + (1/8)x3 + 15/8.

So, the unique sol. x ∈ R4 satisfies x = Tj x + cj with

Tj =
[  0     1/10  −1/5    0   ]
[ 1/11    0    1/11  −3/11 ]
[ −1/5   1/10   0     1/10 ]
[  0    −3/8   1/8     0   ]

and cj = [ 3/5, 25/11, −11/10, 15/8 ]T.


Algorithm 7.1: Jacobi Method

INPUT dim. n; A = [aij] ∈ Rn×n; b ∈ Rn; X0 = x(0) ∈ Rn; tol. TOL; max. no. of iter. N0.

OUTPUT an approx. sol. x1, x2, . . . , xn to Ax = b.

Step 1 Set k = 1.
Step 2 While (k ≤ N0) do Steps 3–6.
Step 3 For i = 1, . . . , n set xi = (1/aii) [ ∑_{j=1, j≠i}^{n} (−aij X0_j) + bi ].
Step 4 If ∥x − X0∥ < TOL then OUTPUT(x1, · · · , xn); STOP.
Step 5 Set k = k + 1.
Step 6 Set X0 = x.
Step 7 OUTPUT('Maximum number of iterations exceeded'); STOP.


If some aii = 0 and A is nonsingular, choose p ≠ i s.t. |api| is as large as possible, and then perform (Ep) ↔ (Ei) to ensure that no aii = 0 before applying the Jacobi method.

In Step 4, a better stopping criterion should be

∥x(k) − x(k−1)∥ / ∥x(k)∥ < TOL,

where the vector norm ∥ · ∥ is the l1, l2 or l∞ norm.


Jacobi Method vs. Gauss-Seidel Method

For the Jacobi method, the ith component x(k)_i of x(k) is determined by x(k−1)_1, · · · , x(k−1)_{i−1} and x(k−1)_{i+1}, · · · , x(k−1)_n. At the kth step of the Gauss-Seidel method, the ith component x(k)_i is computed from x(k)_1, · · · , x(k)_{i−1} and x(k−1)_{i+1}, · · · , x(k−1)_n. Notice that the most recently computed values x(k)_1, · · · , x(k)_{i−1} are better approxs. to x than the values x(k−1)_1, · · · , x(k−1)_{i−1}.


Component Form of Gauss-Seidel Method

For each k ≥ 1, the ith component of x(k) is determined by

x(k)_i = (1/aii) [ ∑_{j=1}^{i−1} (−aij x(k)_j) + ∑_{j=i+1}^{n} (−aij x(k−1)_j) + bi ]
       = (1/aii) [ bi − ∑_{j=1}^{i−1} aij x(k)_j − ∑_{j=i+1}^{n} aij x(k−1)_j ], (3)

where an initial vector x(0) is given and i = 1, 2, . . . , n.
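Component form (3) translates into a short sweep that overwrites each component as soon as it is updated (a pure-Python sketch, applied here to the 4 × 4 system of Example 1):

```python
def gauss_seidel_step(A, b, x):
    # one Gauss-Seidel sweep, component form (3):
    # newly updated components x_j^(k), j < i, are used immediately
    n = len(A)
    x = x[:]
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
x = [0.0] * 4
for k in range(5):
    x = gauss_seidel_step(A, b, x)
print([round(v, 4) for v in x])   # close to [1, 2, -1, 1] after 5 sweeps
```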


Algorithm 7.2: Gauss-Seidel Method

INPUT dim. n; A = [aij] ∈ Rn×n; b ∈ Rn; X0 = x(0) ∈ Rn; tol. TOL; max. no. of iter. N0.

OUTPUT an approx. sol. x1, x2, . . . , xn to Ax = b.

Step 1 Set k = 1.
Step 2 While (k ≤ N0) do Steps 3–6.
Step 3 For i = 1, . . . , n set xi = (1/aii) [ bi − ∑_{j=1}^{i−1} aij xj − ∑_{j=i+1}^{n} aij X0_j ].
Step 4 If ∥x − X0∥ < TOL then OUTPUT(x1, · · · , xn); STOP.
Step 5 Set k = k + 1.
Step 6 Set X0 = x.
Step 7 OUTPUT('Maximum number of iterations exceeded'); STOP.


Example 3, p. 455

The following linear system

10x1 − x2 + 2x3 = 6,
−x1 + 11x2 − x3 + 3x4 = 25,
2x1 − x2 + 10x3 − x4 = −11,
3x2 − x3 + 8x4 = 15

has a unique solution x = [1, 2, −1, 1]T ∈ R4. Use the Gauss-Seidel method to find an approx. x(k) to x starting with x(0) = [0, 0, 0, 0]T ∈ R4 until

∥x(k) − x(k−1)∥∞ / ∥x(k)∥∞ < 10^{−3}.


Solution

For each k ≥ 1, we apply the Gauss-Seidel method:

x(k)_1 = (1/10)x(k−1)_2 − (1/5)x(k−1)_3 + 3/5,
x(k)_2 = (1/11)x(k)_1 + (1/11)x(k−1)_3 − (3/11)x(k−1)_4 + 25/11,
x(k)_3 = −(1/5)x(k)_1 + (1/10)x(k)_2 + (1/10)x(k−1)_4 − 11/10,
x(k)_4 = −(3/8)x(k)_2 + (1/8)x(k)_3 + 15/8,

with the initial guess x(0) = [0, 0, 0, 0]T ∈ R4.


## Numerical Results of Example 3

After 5 iterations of the Gauss-Seidel method, we have

∥x(5) − x(4)∥∞ / ∥x(5)∥∞ = (8.0 × 10^{−4}) / 2.000 = 4.0 × 10^{−4} < 10^{−3}.

In fact, the absolute error is ∥x(5) − x∥∞ = 1.0 × 10^{−4}. The numerical results are shown in the following table.


Matrix-Vector Form of Gauss-Seidel Method (1/2)

From the component form as in (3),

x(k)_i = (1/aii) [ ∑_{j=1}^{i−1} (−aij x(k)_j) + ∑_{j=i+1}^{n} (−aij x(k−1)_j) + bi ],

we immediately obtain

ai1 x(k)_1 + · · · + aii x(k)_i = −a_{i,i+1} x(k−1)_{i+1} − · · · − a_{in} x(k−1)_n + bi

for each i = 1, 2, . . . , n. Thus we have the following matrix form for the Gauss-Seidel method:

[ a11    0    · · ·    0   ] [ x(k)_1 ]   [ 0  −a12  · · ·    −a1n    ] [ x(k−1)_1 ]
[ a21   a22   · · ·    0   ] [ x(k)_2 ] = [ 0   0    · · ·    −a2n    ] [ x(k−1)_2 ] + b,
[  ⋮           ⋱       ⋮   ] [   ⋮    ]   [ ⋮          ⋱   −a_{n−1,n} ] [    ⋮     ]
[ an1  · · · a_{n,n−1} ann ] [ x(k)_n ]   [ 0  · · ·  · · ·     0     ] [ x(k−1)_n ]

where the matrix on the left is D − L and the matrix on the right is U.


Matrix-Vector Form of Gauss-Seidel Method (2/2)

For each k ≥ 1, the above matrix equation can be rewritten as

(D − L)x(k) = Ux(k−1) + b
⇐⇒ x(k) = (D − L)^{−1}Ux(k−1) + (D − L)^{−1}b
⇐⇒ x(k) = Tg x(k−1) + cg,

where Tg ≡ (D − L)^{−1}U and cg ≡ (D − L)^{−1}b.

Recall the Jacobi method given by

x(k) = Tj x(k−1) + cj, k = 1, 2, . . . ,

where Tj = D^{−1}(L + U) and cj = D^{−1}b.
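To see the matrix-vector form in action, here is a small sketch that builds Tj and cj for the 4 × 4 system of Example 1 and iterates x(k) = Tj x(k−1) + cj (pure Python, not the book's algorithm):

```python
def jacobi_T_c(A, b):
    # T_j = D^{-1}(L + U), c_j = D^{-1} b:
    # off-diagonal entries of A enter T_j with sign flipped and scaled by a_ii
    n = len(A)
    T = [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)]
         for i in range(n)]
    c = [b[i] / A[i][i] for i in range(n)]
    return T, c

A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
T, c = jacobi_T_c(A, b)

x = [0.0] * 4
for k in range(10):
    # matrix-vector form x^(k) = T_j x^(k-1) + c_j
    x = [sum(T[i][j] * x[j] for j in range(4)) + c[i] for i in range(4)]
print([round(v, 4) for v in x])   # approaches [1, 2, -1, 1]
```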


## General Iteration Methods

Some Questions

1. When does a general iteration of the form x(k) = Tx(k−1) + c, k = 1, 2, . . . , converge to a solution x ∈ Rn of the matrix equation x = Tx + c?

2. What is the rate of convergence for this iterative method?

3. Does the Gauss-Seidel method always converge faster than the Jacobi method?


Lemma 7.18

If T ∈ Rn×n satisfies ρ(T) < 1, then (I − T)^{−1} exists and

(I − T)^{−1} = I + T + T² + · · · = ∑_{j=0}^{∞} T^j,

with T^0 ≡ I defined by convention.


Proof of Lemma 7.18

If λ ∈ σ(T), then ∃ 0 ≠ x ∈ Rn s.t.

Tx = λx or (I − T)x = (1 − λ)x.

So, 1 − λ ∈ σ(I − T). Because ρ(T) < 1, |λ| ≤ ρ(T) < 1. This means that I − T does not have any zero eigenvalues, and hence (I − T)^{−1} exists.

Let Sm = ∑_{j=0}^{m} T^j. Then we have

(I − T)Sm = ∑_{j=0}^{m} T^j − ∑_{j=0}^{m} T^{j+1} = I − T^{m+1}.

Since ρ(T) < 1, T is convergent, i.e., lim_{m→∞} T^m = 0 by Thm 7.17. Hence

(I − T)^{−1} = lim_{m→∞} Sm = ∑_{j=0}^{∞} T^j.
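Lemma 7.18 can be checked numerically by truncating the Neumann series: the partial sums Sm approximate (I − T)^{−1} when ρ(T) < 1. A small sketch with an illustrative 2 × 2 matrix T (chosen so that ∥T∥∞ = 0.7 < 1, which forces ρ(T) < 1 by Thm 7.15(ii)):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[0.5, 0.2],
     [0.1, 0.3]]        # rho(T) <= ||T||_inf = 0.7 < 1
n = 2
S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # S_0 = I
P = [row[:] for row in S]    # current power T^j, starting with T^0 = I
for _ in range(200):
    P = matmul(P, T)                                        # next power of T
    S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]  # partial sum
# S should approximate (I - T)^{-1}; verify that (I - T) S is close to I
IminusT = [[(1.0 if i == j else 0.0) - T[i][j] for j in range(n)]
           for i in range(n)]
R = matmul(IminusT, S)
print(all(abs(R[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(n) for j in range(n)))  # True
```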


Thm 7.19 (Necessary and Sufficient Condition for Convergence)

For any x(0) ∈ Rn, the sequence {x(k)}_{k=0}^∞ defined by

x(k) = Tx(k−1) + c ∀ k ≥ 1

converges to the unique solution of x = Tx + c ⇐⇒ ρ(T) < 1.

### pf: The proof is illustrated as follows.

(⇐) Suppose that ρ(T) < 1. By induction,

x(k) = Tx(k−1) + c = T(Tx(k−2) + c) + c = T²x(k−2) + (T + I)c
     = · · · = T^k x(0) + (T^{k−1} + · · · + T + I)c.


Since ρ(T) < 1, lim_{k→∞} T^k x(0) = 0 by Thm 7.17. Thus, it follows from Lemma 7.18 that

x ≡ lim_{k→∞} x(k) = 0 + ( ∑_{j=0}^{∞} T^j ) c = (I − T)^{−1}c.

Hence, the limit x ∈ Rn is the unique solution of the equation

x = (I − T)^{−1}c ⇐⇒ (I − T)x = c ⇐⇒ x = Tx + c.

(⇒) Assume that lim_{k→∞} x(k) = x for any initial vector x(0), where x ∈ Rn is the unique sol. of x = Tx + c. Now, we want to claim that

ρ(T) < 1 ⇐⇒ lim_{k→∞} T^k z = 0 ∀ z ∈ Rn

by applying Thm 7.17.


For any z ∈ Rn, let x(0) = x − z. Then by induction,

x − x(k) = (Tx + c) − (Tx(k−1) + c) = T(x − x(k−1)) = · · · = T^k(x − x(0)) = T^k z,

since z = x − x(0). So, it follows from the assumption that

lim_{k→∞} T^k z = x − lim_{k→∞} x(k) = x − x = 0.

Since z ∈ Rn is given arbitrarily, we have ρ(T) < 1.
