Special types of matrices


Since $Ax = 0$, for the fixed $k$, we have
$$\sum_{j=1}^{n} a_{kj}x_j = 0 \;\Longrightarrow\; a_{kk}x_k = -\sum_{j=1,\,j\neq k}^{n} a_{kj}x_j \;\Longrightarrow\; |a_{kk}|\,|x_k| \le \sum_{j=1,\,j\neq k}^{n} |a_{kj}|\,|x_j|,$$
which implies
$$|a_{kk}| \le \sum_{j=1,\,j\neq k}^{n} |a_{kj}|\,\frac{|x_j|}{|x_k|} \le \sum_{j=1,\,j\neq k}^{n} |a_{kj}|.$$

But this contradicts the assumption that A is diagonally dominant. Therefore A must be nonsingular.
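As a quick numerical illustration (a sketch of mine, not part of the original notes), strict diagonal dominance is easy to test; any matrix that passes the test is nonsingular by the argument above.

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Return True if |a_kk| > sum_{j != k} |a_kj| holds for every row k."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_row_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_row_sums))

A = np.array([[ 7.0, 2.0, -3.0],
              [ 1.0, 5.0,  2.0],
              [-2.0, 1.0,  6.0]])
print(is_strictly_diagonally_dominant(A))   # True, so A is nonsingular
```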


Theorem 12

Gaussian elimination without pivoting preserves the diagonal dominance of a matrix.

Proof: Let $A \in \mathbb{R}^{n\times n}$ be a diagonally dominant matrix, and let $A^{(2)} = [a^{(2)}_{ij}]$ be the result of applying one step of Gaussian elimination to $A^{(1)} = A$ without any pivoting strategy.

After one step of Gaussian elimination, $a^{(2)}_{i1} = 0$ for $i = 2, \ldots, n$, and the first row is unchanged. Therefore, the property
$$|a^{(2)}_{11}| > \sum_{j=2}^{n} |a^{(2)}_{1j}|$$
is preserved, and all we need to show is that
$$|a^{(2)}_{ii}| > \sum_{j=2,\,j\neq i}^{n} |a^{(2)}_{ij}|, \quad \text{for } i = 2, \ldots, n.$$


Using the Gaussian elimination formula (2), we have

$$
\begin{aligned}
|a^{(2)}_{ii}| &= \left| a^{(1)}_{ii} - \frac{a^{(1)}_{i1}}{a^{(1)}_{11}}\, a^{(1)}_{1i} \right|
= \left| a_{ii} - \frac{a_{i1}}{a_{11}}\, a_{1i} \right|
\;\ge\; |a_{ii}| - \frac{|a_{i1}|}{|a_{11}|}\,|a_{1i}| \\
&= |a_{ii}| - |a_{i1}| + |a_{i1}| - \frac{|a_{i1}|}{|a_{11}|}\,|a_{1i}|
= |a_{ii}| - |a_{i1}| + \frac{|a_{i1}|}{|a_{11}|}\bigl(|a_{11}| - |a_{1i}|\bigr) \\
&> \sum_{j=2,\,j\neq i}^{n} |a_{ij}| + \frac{|a_{i1}|}{|a_{11}|} \sum_{j=2,\,j\neq i}^{n} |a_{1j}|
= \sum_{j=2,\,j\neq i}^{n} \left( |a_{ij}| + \frac{|a_{i1}|}{|a_{11}|}\,|a_{1j}| \right) \\
&\ge \sum_{j=2,\,j\neq i}^{n} \left| a_{ij} - \frac{a_{i1}}{a_{11}}\, a_{1j} \right|
= \sum_{j=2,\,j\neq i}^{n} |a^{(2)}_{ij}|,
\end{aligned}
$$
where the strict inequality uses the diagonal dominance of $A$ in rows $i$ and $1$, namely $|a_{ii}| - |a_{i1}| > \sum_{j=2,\,j\neq i}^{n} |a_{ij}|$ and $|a_{11}| - |a_{1i}| > \sum_{j=2,\,j\neq i}^{n} |a_{1j}|$.


Thus $A^{(2)}$ is still diagonally dominant. Since the subsequent steps of Gaussian elimination mimic the first, except for being applied to submatrices of smaller size, we conclude that Gaussian elimination without pivoting preserves the diagonal dominance of a matrix.
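The following sketch (my own illustration, not from the notes) performs one step of Gaussian elimination without pivoting on a strictly diagonally dominant matrix and checks that the trailing submatrix is again strictly diagonally dominant, as the proof predicts.

```python
import numpy as np

def eliminate_first_column(A):
    """One step of Gaussian elimination without pivoting:
    zero out the entries below the (1,1) pivot."""
    A2 = np.asarray(A, dtype=float).copy()
    for i in range(1, A2.shape[0]):
        m = A2[i, 0] / A2[0, 0]      # multiplier m_{i1}
        A2[i, :] -= m * A2[0, :]     # row_i <- row_i - m_{i1} * row_1
    return A2

A = np.array([[ 7.0, 2.0, -3.0],
              [ 1.0, 5.0,  2.0],
              [-2.0, 1.0,  6.0]])
B = eliminate_first_column(A)[1:, 1:]   # trailing (n-1) x (n-1) block
row_sums = np.sum(np.abs(B), axis=1) - np.abs(np.diag(B))
print(np.all(np.abs(np.diag(B)) > row_sums))   # True: dominance is preserved
```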

Theorem 13

Let A be strictly diagonally dominant. Then Gaussian elimination can be performed on Ax = b to obtain its unique solution without row or column interchanges.

Definition 14

A matrix $A$ is positive definite if it is symmetric and $x^T A x > 0$ for all $x \neq 0$.


Theorem 15

If $A$ is an $n \times n$ positive definite matrix, then

(a) $A$ has an inverse;

(b) $a_{ii} > 0$ for all $i = 1, \ldots, n$;

(c) $\max_{1 \le k, j \le n} |a_{kj}| \le \max_{1 \le i \le n} |a_{ii}|$;

(d) $(a_{ij})^2 < a_{ii} a_{jj}$ for all $i \neq j$.

Proof:

(a) If $x$ satisfies $Ax = 0$, then $x^T A x = 0$. Since $A$ is positive definite, this implies $x = 0$. Consequently, $Ax = 0$ has only the zero solution, and $A$ is nonsingular.

(b) Since $A$ is positive definite, $a_{ii} = e_i^T A e_i > 0$, where $e_i$ is the $i$-th column of the $n \times n$ identity matrix.


(c) For $k \neq j$, define $x = [x_i]$ by
$$x_i = \begin{cases} 0, & \text{if } i \neq j \text{ and } i \neq k, \\ 1, & \text{if } i = j, \\ -1, & \text{if } i = k. \end{cases}$$
Since $x \neq 0$,
$$0 < x^T A x = a_{jj} + a_{kk} - a_{jk} - a_{kj}.$$
But $A^T = A$, so
$$2a_{kj} < a_{jj} + a_{kk}. \qquad (5)$$
Now define $z = [z_i]$ by
$$z_i = \begin{cases} 0, & \text{if } i \neq j \text{ and } i \neq k, \\ 1, & \text{if } i = j \text{ or } i = k. \end{cases}$$


Then $z^T A z > 0$, so
$$-2a_{kj} < a_{jj} + a_{kk}. \qquad (6)$$
Inequalities (5) and (6) imply that for each $k \neq j$,
$$|a_{kj}| < \frac{a_{kk} + a_{jj}}{2} \le \max_{1 \le i \le n} |a_{ii}|,$$
so
$$\max_{1 \le k, j \le n} |a_{kj}| \le \max_{1 \le i \le n} |a_{ii}|.$$

(d) For $i \neq j$, define $x = [x_k]$ by
$$x_k = \begin{cases} 0, & \text{if } k \neq j \text{ and } k \neq i, \\ \alpha, & \text{if } k = i, \\ 1, & \text{if } k = j, \end{cases}$$
where $\alpha$ represents an arbitrary real number.


Since $x \neq 0$,
$$0 < x^T A x = a_{ii}\alpha^2 + 2a_{ij}\alpha + a_{jj} \equiv P(\alpha), \quad \forall\, \alpha \in \mathbb{R}.$$
That is, the quadratic polynomial $P(\alpha)$ has no real roots, so its discriminant is negative:
$$4a_{ij}^2 - 4a_{ii}a_{jj} < 0, \quad \text{and hence} \quad a_{ij}^2 < a_{ii}a_{jj}.$$

Definition 16 (Leading principal minor)

Let $A$ be an $n \times n$ matrix. The upper left $k \times k$ submatrix, denoted as
$$A_k = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kk} \end{bmatrix},$$
is called the leading $k \times k$ principal submatrix, and the determinant of $A_k$, $\det(A_k)$, is called the leading principal minor.


Theorem 17

A symmetric matrix A is positive definite if and only if each of its leading principal submatrices has a positive determinant.
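Theorem 17 translates into a simple (if not the cheapest) numerical test; the sketch below is my own illustration using NumPy determinants and is not part of the original notes.

```python
import numpy as np

def is_positive_definite_by_minors(A):
    """Test from Theorem 17: a symmetric matrix is positive definite
    iff every leading principal minor det(A_k) is positive."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(is_positive_definite_by_minors(A))   # True
```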

Theorem 18

The symmetric matrix A is positive definite if and only if Gaussian elimination without row interchanges can be performed on Ax = b with all pivot elements positive.

Corollary 19

The matrix A is positive definite if and only if A can be factored in the form LDLT, where L is lower triangular with 1’s on its diagonal and D is a diagonal matrix with positive diagonal entries.
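Theorem 18 and Corollary 19 can be seen together in a small LDL^T sketch (my own illustration, assuming no row interchanges are needed, which holds for positive definite matrices): the entries of D are the pivots of Gaussian elimination, and they are all positive exactly when A is positive definite.

```python
import numpy as np

def ldlt(A):
    """Factor a symmetric A as L D L^T, with L unit lower triangular,
    assuming all pivots are nonzero (no row interchanges needed)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]           # j-th pivot
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L, d = ldlt(A)
print(np.allclose(L @ np.diag(d) @ L.T, A))   # True
print(d)                                      # all pivots positive, so A is positive definite
```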


Theorem 20

If all leading principal submatrices of $A \in \mathbb{R}^{n \times n}$ are nonsingular, then $A$ has an LU factorization.

Proof: By mathematical induction.

1. For $n = 1$: $A_1 = [a_{11}]$ is nonsingular, so $a_{11} \neq 0$. Let $L_1 = [1]$ and $U_1 = [a_{11}]$. Then $A_1 = L_1 U_1$, and the theorem holds.

2. Assume that the leading principal submatrices $A_1, \ldots, A_k$ are nonsingular and that $A_k$ has an LU factorization $A_k = L_k U_k$, where $L_k$ is unit lower triangular and $U_k$ is upper triangular.

3. Show that there exist a unit lower triangular matrix $L_{k+1}$ and an upper triangular matrix $U_{k+1}$ such that $A_{k+1} = L_{k+1} U_{k+1}$.


Write
$$A_{k+1} = \begin{bmatrix} A_k & v_k \\ w_k^T & a_{k+1,k+1} \end{bmatrix},
\quad \text{where} \quad
v_k = \begin{bmatrix} a_{1,k+1} \\ a_{2,k+1} \\ \vdots \\ a_{k,k+1} \end{bmatrix}
\quad \text{and} \quad
w_k = \begin{bmatrix} a_{k+1,1} \\ a_{k+1,2} \\ \vdots \\ a_{k+1,k} \end{bmatrix}.$$

Since $A_k$ is nonsingular, both $L_k$ and $U_k$ are nonsingular. Therefore, $L_k y_k = v_k$ has a unique solution $y_k \in \mathbb{R}^k$, and $z_k^T U_k = w_k^T$ has a unique solution $z_k \in \mathbb{R}^k$. Let
$$L_{k+1} = \begin{bmatrix} L_k & 0 \\ z_k^T & 1 \end{bmatrix}
\quad \text{and} \quad
U_{k+1} = \begin{bmatrix} U_k & y_k \\ 0 & a_{k+1,k+1} - z_k^T y_k \end{bmatrix}.$$


Then $L_{k+1}$ is unit lower triangular, $U_{k+1}$ is upper triangular, and
$$L_{k+1} U_{k+1}
= \begin{bmatrix} L_k U_k & L_k y_k \\ z_k^T U_k & z_k^T y_k + a_{k+1,k+1} - z_k^T y_k \end{bmatrix}
= \begin{bmatrix} A_k & v_k \\ w_k^T & a_{k+1,k+1} \end{bmatrix}
= A_{k+1}.$$
This proves the theorem.
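The inductive bordering argument is itself an algorithm. The sketch below (my own Python/NumPy illustration, not from the notes) grows $L$ and $U$ one leading principal submatrix at a time, solving $L_k y_k = v_k$ and $z_k^T U_k = w_k^T$ at each step; it assumes all leading principal submatrices are nonsingular.

```python
import numpy as np

def lu_by_bordering(A):
    """LU factorization A = L U built by bordering, as in the proof of
    Theorem 20.  Assumes every leading principal submatrix is nonsingular.
    Illustrative only; not optimized."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.array([[1.0]])
    U = np.array([[A[0, 0]]])
    for k in range(1, n):
        v = A[:k, k]                      # new column v_k
        w = A[k, :k]                      # new row w_k^T
        y = np.linalg.solve(L, v)         # L_k y_k = v_k
        z = np.linalg.solve(U.T, w)       # U_k^T z_k = w_k, i.e. z_k^T U_k = w_k^T
        alpha = A[k, k] - z @ y           # new diagonal entry of U_{k+1}
        L = np.vstack([np.hstack([L, np.zeros((k, 1))]),
                       np.hstack([z, [1.0]]).reshape(1, k + 1)])
        U = np.vstack([np.hstack([U, y.reshape(k, 1)]),
                       np.hstack([np.zeros(k), [alpha]]).reshape(1, k + 1)])
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_by_bordering(A)
print(L)                      # unit lower triangular: [[1, 0], [1.5, 1]]
print(U)                      # upper triangular:      [[4, 3], [0, -1.5]]
print(np.allclose(L @ U, A))  # True
```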


Theorem 21

If $A$ is nonsingular and the LU factorization exists, then the LU factorization is unique.

Proof: Suppose both
$$A = L_1 U_1 \quad \text{and} \quad A = L_2 U_2$$
are LU factorizations. Since $A$ is nonsingular, $L_1, U_1, L_2, U_2$ are all nonsingular, and
$$A = L_1 U_1 = L_2 U_2 \;\Longrightarrow\; L_2^{-1} L_1 = U_2 U_1^{-1}.$$
Since $L_1$ and $L_2$ are unit lower triangular, $L_2^{-1} L_1$ is also unit lower triangular. On the other hand, since $U_1$ and $U_2$ are upper triangular, $U_2 U_1^{-1}$ is also upper triangular. Therefore,
$$L_2^{-1} L_1 = I = U_2 U_1^{-1},$$
which implies that $L_1 = L_2$ and $U_1 = U_2$.


Lemma 22

If $A \in \mathbb{R}^{n \times n}$ is positive definite, then all leading principal submatrices of $A$ are nonsingular.

Proof: For $1 \le k \le n$, let
$$z_k = [x_1, \ldots, x_k]^T \in \mathbb{R}^k \quad \text{and} \quad x = [x_1, \ldots, x_k, 0, \ldots, 0]^T \in \mathbb{R}^n,$$
where $x_1, \ldots, x_k \in \mathbb{R}$ are not all zero. Since $A$ is positive definite,
$$z_k^T A_k z_k = x^T A x > 0,$$
where $A_k$ is the $k \times k$ leading principal submatrix of $A$. This shows that each $A_k$ is also positive definite and hence nonsingular.

logo

Corollary 23

The matrix $A$ is positive definite if and only if
$$A = G G^T, \qquad (7)$$
where $G$ is lower triangular with positive diagonal entries.

Proof: “⇒” $A$ is positive definite

⇒ all leading principal submatrices of $A$ are nonsingular

⇒ $A$ has the LU factorization $A = LU$, where $L$ is unit lower triangular and $U$ is upper triangular.

Since $A$ is symmetric,
$$LU = A = A^T = U^T L^T \;\Longrightarrow\; U (L^T)^{-1} = L^{-1} U^T.$$
$U(L^T)^{-1}$ is upper triangular and $L^{-1} U^T$ is lower triangular

⇒ $U(L^T)^{-1}$ must be a diagonal matrix, say $U(L^T)^{-1} = D$

⇒ $U = D L^T$. Hence
$$A = L D L^T.$$


Since $A$ is positive definite,
$$x^T A x > 0 \;\Longrightarrow\; x^T L D L^T x = (L^T x)^T D (L^T x) > 0.$$
This means $D$ is also positive definite, and hence $d_{ii} > 0$. Thus $D^{1/2}$ is well defined and we have
$$A = L D L^T = L D^{1/2} D^{1/2} L^T \equiv G G^T,$$
where $G \equiv L D^{1/2}$. Since the LU factorization is unique, $G$ is unique.

“⇐”

Since $G$ is lower triangular with positive diagonal entries, $G$ is nonsingular. This implies that
$$G^T x \neq 0, \quad \forall\, x \neq 0.$$
Hence
$$x^T A x = x^T G G^T x = \|G^T x\|_2^2 > 0, \quad \forall\, x \neq 0,$$
which implies that $A$ is positive definite.


The factorization (7) is referred to as the Cholesky factorization.

Derive an algorithm for computing the Cholesky factorization:

Let
$$A \equiv [a_{ij}] \quad \text{and} \quad
G = \begin{bmatrix} g_{11} & 0 & \cdots & 0 \\ g_{21} & g_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{bmatrix}.$$
Assume the first $k-1$ columns of $G$ have been determined after $k-1$ steps. By componentwise comparison with
$$[a_{ij}] =
\begin{bmatrix} g_{11} & 0 & \cdots & 0 \\ g_{21} & g_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{bmatrix}
\begin{bmatrix} g_{11} & g_{21} & \cdots & g_{n1} \\ 0 & g_{22} & \cdots & g_{n2} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & g_{nn} \end{bmatrix},$$
one has
$$a_{kk} = \sum_{j=1}^{k} g_{kj}^2,$$


which gives
$$g_{kk}^2 = a_{kk} - \sum_{j=1}^{k-1} g_{kj}^2.$$

Moreover,
$$a_{ik} = \sum_{j=1}^{k} g_{ij} g_{kj}, \quad i = k+1, \ldots, n,$$

hence the k-th column of G can be computed by

$$g_{ik} = \left( a_{ik} - \sum_{j=1}^{k-1} g_{ij} g_{kj} \right) \bigg/ \, g_{kk}, \quad i = k+1, \ldots, n.$$


Algorithm 6 (Cholesky Factorization)

Given an $n \times n$ symmetric positive definite matrix $A$, this algorithm computes the Cholesky factorization $A = G G^T$.

Initialize $G = 0$
For $k = 1, \ldots, n$
    $G(k, k) = \sqrt{A(k, k) - \sum_{j=1}^{k-1} G(k, j)\,G(k, j)}$
    For $i = k + 1, \ldots, n$
        $G(i, k) = \Bigl( A(i, k) - \sum_{j=1}^{k-1} G(i, j)\,G(k, j) \Bigr) \big/ \, G(k, k)$
    End For
End For

In addition to $n$ square root operations, the algorithm requires approximately
$$\sum_{k=1}^{n} \bigl[ 2k - 2 + (2k - 1)(n - k) \bigr] = \frac{1}{3}n^3 + \frac{1}{2}n^2 - \frac{5}{6}n$$
floating-point operations.
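As a concrete companion to Algorithm 6, here is a direct NumPy transcription (an illustrative sketch, not part of the original notes); it assumes $A$ is symmetric positive definite, so every square-root argument and every pivot $G(k, k)$ is positive.

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = G G^T following Algorithm 6.
    A is assumed symmetric positive definite; G comes out lower
    triangular with positive diagonal entries."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    G = np.zeros((n, n))
    for k in range(n):
        # G(k,k) = sqrt( A(k,k) - sum_{j<k} G(k,j)^2 )
        G[k, k] = np.sqrt(A[k, k] - G[k, :k] @ G[k, :k])
        # G(i,k) = ( A(i,k) - sum_{j<k} G(i,j) G(k,j) ) / G(k,k)
        for i in range(k + 1, n):
            G[i, k] = (A[i, k] - G[i, :k] @ G[k, :k]) / G[k, k]
    return G

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
G = cholesky(A)
print(np.allclose(G @ G.T, A))                  # True
print(np.allclose(G, np.linalg.cholesky(A)))    # agrees with NumPy's lower-triangular factor
```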
