
Since Ax = 0, for the fixed k, we have

∑_{j=1}^{n} a_{kj}x_{j} = 0 ⇒ a_{kk}x_{k} = −∑_{j=1, j≠k}^{n} a_{kj}x_{j} ⇒ |a_{kk}||x_{k}| ≤ ∑_{j=1, j≠k}^{n} |a_{kj}||x_{j}|,

which implies

|a_{kk}| ≤ ∑_{j=1, j≠k}^{n} |a_{kj}| |x_{j}|/|x_{k}| ≤ ∑_{j=1, j≠k}^{n} |a_{kj}|,

where the last inequality uses |x_{j}| ≤ |x_{k}| for all j, by the choice of k.

But this contradicts the assumption that A is diagonally dominant. Therefore A must be nonsingular.
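The theorem is easy to exercise numerically: any matrix forced to be strictly diagonally dominant should come out nonsingular. A minimal sketch (the random construction and the helper name are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def is_strictly_diagonally_dominant(A):
    """Check |a_kk| > sum_{j != k} |a_kj| for every row k."""
    abs_A = np.abs(A)
    off_diag = abs_A.sum(axis=1) - np.diag(abs_A)
    return bool(np.all(np.diag(abs_A) > off_diag))

# Build a random matrix and inflate the diagonal to force dominance.
n = 5
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)

assert is_strictly_diagonally_dominant(A)
# Nonsingular, as the theorem guarantees: det(A) is far from zero.
assert abs(np.linalg.det(A)) > 1e-8
```

By Gershgorin's theorem every eigenvalue of such a matrix is bounded away from zero, which is another way of phrasing the proof above.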


**Theorem 12**

Gaussian elimination without pivoting preserves the diagonal dominance of a matrix.

Proof: Let A ∈ R^{n×n} be a diagonally dominant matrix and let
A^{(2)} = [a^{(2)}_{ij}] be the result of applying one step of Gaussian
elimination to A^{(1)} = A without any pivoting strategy.

After one step of Gaussian elimination, a^{(2)}_{i1} = 0 for i = 2, . . . , n,
and the first row is unchanged. Therefore, the property

|a^{(2)}_{11}| > ∑_{j=2}^{n} |a^{(2)}_{1j}|

is preserved, and all we need to show is that

|a^{(2)}_{ii}| > ∑_{j=2, j≠i}^{n} |a^{(2)}_{ij}|, for i = 2, . . . , n.


Using the Gaussian elimination formula (2), we have

|a^{(2)}_{ii}| = |a^{(1)}_{ii} − (a^{(1)}_{i1}/a^{(1)}_{11}) a^{(1)}_{1i}| = |a_{ii} − (a_{i1}/a_{11}) a_{1i}|

≥ |a_{ii}| − (|a_{i1}|/|a_{11}|) |a_{1i}|

= |a_{ii}| − |a_{i1}| + |a_{i1}| − (|a_{i1}|/|a_{11}|) |a_{1i}|

= |a_{ii}| − |a_{i1}| + (|a_{i1}|/|a_{11}|) (|a_{11}| − |a_{1i}|)

> ∑_{j=2, j≠i}^{n} |a_{ij}| + (|a_{i1}|/|a_{11}|) ∑_{j=2, j≠i}^{n} |a_{1j}|

= ∑_{j=2, j≠i}^{n} ( |a_{ij}| + (|a_{i1}|/|a_{11}|) |a_{1j}| )

≥ ∑_{j=2, j≠i}^{n} |a_{ij} − (a_{i1}/a_{11}) a_{1j}|

= ∑_{j=2, j≠i}^{n} |a^{(2)}_{ij}|,

where the strict inequality uses the diagonal dominance of rows i and 1.


Thus A^{(2)} is still diagonally dominant. Since the subsequent
steps of Gaussian elimination mimic the first, except that they are
applied to submatrices of smaller size, we conclude that Gaussian
elimination without pivoting preserves the diagonal dominance of a
matrix.
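The preservation property can be observed directly: apply one elimination step to a strictly diagonally dominant matrix and check that the trailing submatrix is still dominant. A small sketch (the example matrix is an arbitrary illustration):

```python
import numpy as np

def eliminate_first_column(A):
    """One step of Gaussian elimination without pivoting: zero out
    column 1 below the pivot, leaving row 1 unchanged."""
    A2 = A.astype(float).copy()
    n = A.shape[0]
    for i in range(1, n):
        m = A2[i, 0] / A2[0, 0]
        A2[i, :] -= m * A2[0, :]   # a_ij^(2) = a_ij - (a_i1/a_11) a_1j
        A2[i, 0] = 0.0             # set an exact zero, avoiding roundoff residue
    return A2

def is_sdd(A):
    """Strict diagonal dominance test, row by row."""
    d = np.abs(np.diag(A))
    return bool(np.all(d > np.abs(A).sum(axis=1) - d))

A = np.array([[ 4.0, 1.0, -1.0],
              [ 1.0, 5.0,  2.0],
              [-1.0, 2.0,  6.0]])
assert is_sdd(A)
A2 = eliminate_first_column(A)
assert is_sdd(A2[1:, 1:])   # the trailing (n-1)x(n-1) block is still dominant
```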

**Theorem 13**

Let A be strictly diagonally dominant. Then Gaussian elimination can be performed on Ax = b to obtain its unique solution without row or column interchanges.

**Definition 14**

A matrix A is positive definite if it is symmetric and x^{T}Ax > 0, ∀ x ≠ 0.


**Theorem 15**

If A is an n × n positive definite matrix, then
**(a)** A has an inverse;

**(b)** a_{ii} > 0, ∀ i = 1, . . . , n;

**(c)** max_{1≤k,j≤n} |a_{kj}| ≤ max_{1≤i≤n} |a_{ii}|;

**(d)** (a_{ij})^{2} < a_{ii}a_{jj}, ∀ i ≠ j.

Proof:

**(a)** If x satisfies Ax = 0, then x^{T}Ax = 0. Since A is
positive definite, this implies x = 0. Consequently,
Ax = 0 has only the zero solution, and A is
nonsingular.

**(b)** Since A is positive definite,

a_{ii} = e^{T}_{i} A e_{i} > 0,

where e_{i} is the i-th column of the n × n identity matrix.


**(c)** For k ≠ j, define x = [x_{i}] by

x_{i} = { 0, if i ≠ j and i ≠ k;  1, if i = j;  −1, if i = k }.

Since x ≠ 0,

0 < x^{T}Ax = a_{jj} + a_{kk} − a_{jk} − a_{kj}.

But A^{T} = A, so

2a_{kj} < a_{jj} + a_{kk}. (5)

Now define z = [z_{i}] by

z_{i} = { 0, if i ≠ j and i ≠ k;  1, if i = j or i = k }.


Then z^{T}Az > 0, so

−2a_{kj} < a_{jj} + a_{kk}. (6)

Inequalities (5) and (6) imply that for each k ≠ j,

|a_{kj}| < (a_{kk} + a_{jj})/2 ≤ max_{1≤i≤n} |a_{ii}|,

so

max_{1≤k,j≤n} |a_{kj}| ≤ max_{1≤i≤n} |a_{ii}|.

**(d)** For i ≠ j, define x = [x_{k}] by

x_{k} = { 0, if k ≠ j and k ≠ i;  α, if k = i;  1, if k = j },

where α represents an arbitrary real number.


Since x ≠ 0,

0 < x^{T}Ax = a_{ii}α^{2} + 2a_{ij}α + a_{jj} ≡ P(α), ∀ α ∈ R.

That is, the quadratic polynomial P(α) has no real roots, so its discriminant is negative:

4a^{2}_{ij} − 4a_{ii}a_{jj} < 0, and hence a^{2}_{ij} < a_{ii}a_{jj}.
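Properties (b)–(d) are easy to verify numerically. The sketch below manufactures a positive definite matrix as B^{T}B + nI (a standard construction, used here only for illustration) and checks each property:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
A = B.T @ B + n * np.eye(n)      # symmetric positive definite by construction

# (b) positive diagonal entries
assert np.all(np.diag(A) > 0)

# (c) the largest entry in magnitude sits on the diagonal
assert np.max(np.abs(A)) <= np.max(np.abs(np.diag(A)))

# (d) a_ij^2 < a_ii a_jj for every off-diagonal pair
for i in range(n):
    for j in range(n):
        if i != j:
            assert A[i, j] ** 2 < A[i, i] * A[j, j]
```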
**Definition 16 (Leading principal minor)**

Let A be an n × n matrix. The upper left k × k submatrix

A_{k} =
[ a_{11} a_{12} · · · a_{1k} ]
[ a_{21} a_{22} · · · a_{2k} ]
[   ⋮      ⋮    ⋱     ⋮    ]
[ a_{k1} a_{k2} · · · a_{kk} ]

is called the leading k × k principal submatrix, and the
determinant of A_{k}, det(A_{k}), is called the leading principal
minor.


**Theorem 17**

A symmetric matrix A is positive definite if and only if each of its leading principal submatrices has a positive determinant.

**Theorem 18**

The symmetric matrix A is positive definite if and only if Gaussian elimination without row interchanges can be performed on Ax = b with all pivot elements positive.

**Corollary 19**

The matrix A is positive definite if and only if A can be factored
in the form LDL^{T}, where L is lower triangular with 1’s on its
diagonal and D is a diagonal matrix with positive diagonal
entries.
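Theorem 17 and Corollary 19 suggest two equivalent numerical tests for positive definiteness: check the leading principal minors, or run elimination and inspect the pivots of an LDL^{T} factorization. A sketch of both (the helper names and the example matrix are illustrative, not from the text):

```python
import numpy as np

def pd_by_leading_minors(A):
    """Theorem 17: a symmetric A is positive definite iff every
    leading principal minor det(A_k) is positive."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

def ldl_factorization(A):
    """Plain elimination without pivoting, recorded as A = L D L^T
    (Corollary 19). Assumes the factorization exists."""
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    U = A.astype(float).copy()
    for k in range(n):
        D[k] = U[k, k]                    # pivot = d_kk
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    return L, D

A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
assert pd_by_leading_minors(A)
L, D = ldl_factorization(A)
assert np.all(D > 0)                         # positive pivots (Theorem 18)
assert np.allclose(L @ np.diag(D) @ L.T, A)  # A = L D L^T
```

Determinant-based minors are convenient for small examples; in practice the pivot test is far cheaper, since it falls out of a single elimination pass.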


**Theorem 20**

If all leading principal submatrices of A ∈ R^{n×n} are nonsingular,
then A has an LU factorization.

Proof: By mathematical induction.

**1** For n = 1, A_{1} = [a_{11}] is nonsingular, so a_{11} ≠ 0. Let L_{1} = [1]
and U_{1} = [a_{11}]. Then A_{1} = L_{1}U_{1}, and the theorem holds.

**2** Assume that the leading principal submatrices A_{1}, . . . , A_{k}
are nonsingular and A_{k} has an LU factorization
A_{k} = L_{k}U_{k}, where L_{k} is unit lower triangular and U_{k} is
upper triangular.

**3** Show that there exist a unit lower triangular matrix L_{k+1}
and an upper triangular matrix U_{k+1} such that
A_{k+1} = L_{k+1}U_{k+1}.


Write

A_{k+1} =
[ A_{k}      v_{k}       ]
[ w_{k}^{T}  a_{k+1,k+1} ],

where

v_{k} = [a_{1,k+1}, a_{2,k+1}, . . . , a_{k,k+1}]^{T} and w_{k} = [a_{k+1,1}, a_{k+1,2}, . . . , a_{k+1,k}]^{T}.

Since A_{k} is nonsingular, both L_{k} and U_{k} are nonsingular.
Therefore, L_{k}y_{k} = v_{k} has a unique solution y_{k} ∈ R^{k}, and
z_{k}^{T}U_{k} = w_{k}^{T} has a unique solution z_{k} ∈ R^{k}. Let

L_{k+1} =
[ L_{k}      0 ]
[ z_{k}^{T}  1 ]
and U_{k+1} =
[ U_{k}  y_{k}                        ]
[ 0      a_{k+1,k+1} − z_{k}^{T}y_{k} ].


Then L_{k+1} is unit lower triangular, U_{k+1} is upper triangular, and

L_{k+1}U_{k+1} =
[ L_{k}U_{k}       L_{k}y_{k}                                     ]
[ z_{k}^{T}U_{k}   z_{k}^{T}y_{k} + a_{k+1,k+1} − z_{k}^{T}y_{k}  ]
=
[ A_{k}      v_{k}       ]
[ w_{k}^{T}  a_{k+1,k+1} ]
= A_{k+1}.

This proves the theorem.
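The induction is constructive: grow the factorization one border at a time, solving the two triangular systems L_{k}y_{k} = v_{k} and U_{k}^{T}z_{k} = w_{k} at each step. A direct transcription as a sketch (the function name and the test matrix are illustrative; a general-purpose solver is used for the triangular systems for brevity):

```python
import numpy as np

def bordered_lu(A):
    """LU factorization built exactly as in the induction proof:
    extend (L_k, U_k) to (L_{k+1}, U_{k+1}) one border at a time.
    Assumes all leading principal submatrices are nonsingular."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.array([[1.0]])
    U = np.array([[A[0, 0]]])
    for k in range(1, n):
        v = A[:k, k]                     # new column v_k of A_{k+1}
        w = A[k, :k]                     # new row w_k^T of A_{k+1}
        y = np.linalg.solve(L, v)        # L_k y_k = v_k
        z = np.linalg.solve(U.T, w)      # U_k^T z_k = w_k
        L = np.block([[L, np.zeros((k, 1))],
                      [z[None, :], np.ones((1, 1))]])
        U = np.block([[U, y[:, None]],
                      [np.zeros((1, k)), np.array([[A[k, k] - z @ y]])]])
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = bordered_lu(A)
assert np.allclose(L @ U, A)
assert np.allclose(np.tril(L), L) and np.allclose(np.triu(U), U)
```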


**Theorem 21**

If A is nonsingular and the LU factorization exists, then the LU factorization is unique.

Proof: Suppose both

A = L_{1}U_{1} and A = L_{2}U_{2}

are LU factorizations. Since A is nonsingular, L_{1}, U_{1}, L_{2}, U_{2} are
all nonsingular, and

A = L_{1}U_{1} = L_{2}U_{2} ⇒ L_{2}^{−1}L_{1} = U_{2}U_{1}^{−1}.

Since L_{1} and L_{2} are unit lower triangular, L_{2}^{−1}L_{1}
is also unit lower triangular. On the other hand, since U_{1} and U_{2}
are upper triangular, U_{2}U_{1}^{−1} is also upper triangular. A matrix
that is both unit lower triangular and upper triangular must be the
identity, so

L_{2}^{−1}L_{1} = I = U_{2}U_{1}^{−1},

which implies that L_{1} = L_{2} and U_{1} = U_{2}.


**Lemma 22**

If A ∈ R^{n×n} is positive definite, then all leading principal
submatrices of A are nonsingular.

Proof: For 1 ≤ k ≤ n, let

z_{k} = [x_{1}, . . . , x_{k}]^{T} ∈ R^{k} and x = [x_{1}, . . . , x_{k}, 0, . . . , 0]^{T} ∈ R^{n},

where x_{1}, . . . , x_{k} ∈ R are not all zero. Since A is positive
definite,

z_{k}^{T}A_{k}z_{k} = x^{T}Ax > 0,

where A_{k} is the k × k leading principal submatrix of A. This
shows that each A_{k} is also positive definite, and hence
nonsingular.


**Corollary 23**

The matrix A is positive definite if and only if

A = GG^{T}, (7)

where G is lower triangular with positive diagonal entries.

Proof: “⇒” Suppose A is positive definite.

⇒ All leading principal submatrices of A are nonsingular.

⇒ A has the LU factorization A = LU, where L is unit lower triangular and U is upper triangular.

Since A is symmetric,

LU = A = A^{T} = U^{T}L^{T} ⇒ U(L^{T})^{−1} = L^{−1}U^{T}.

Since U(L^{T})^{−1} is upper triangular and L^{−1}U^{T} is lower triangular,
U(L^{T})^{−1} must be a diagonal matrix, say U(L^{T})^{−1} = D.

⇒ U = DL^{T}. Hence

A = LDL^{T}.


Since A is positive definite, for any x ≠ 0,

0 < x^{T}Ax = x^{T}LDL^{T}x = (L^{T}x)^{T}D(L^{T}x).

This means D is also positive definite, and hence d_{ii} > 0. Thus
D^{1/2} is well-defined and we have

A = LDL^{T} = LD^{1/2}D^{1/2}L^{T} ≡ GG^{T},

where G ≡ LD^{1/2}. Since the LU factorization is unique, G is
unique.

“⇐”

Since G is lower triangular with positive diagonal entries, G is nonsingular. It follows that

G^{T}x ≠ 0, ∀ x ≠ 0.

Hence

x^{T}Ax = x^{T}GG^{T}x = ‖G^{T}x‖_{2}^{2} > 0, ∀ x ≠ 0,

which implies that A is positive definite.


The factorization (7) is referred to as the Cholesky factorization.

**Derive an algorithm for computing the Cholesky factorization:**

Let

A ≡ [a_{ij}] and G =
[ g_{11}  0       · · ·  0     ]
[ g_{21}  g_{22}  ⋱      ⋮    ]
[   ⋮       ⋮     ⋱      0    ]
[ g_{n1}  g_{n2}  · · ·  g_{nn} ].

Assume the first k − 1 columns of G have been determined after
k − 1 steps. By componentwise comparison with [a_{ij}] = GG^{T},
one has

a_{kk} = ∑_{j=1}^{k} g_{kj}^{2},


which gives

g_{kk}^{2} = a_{kk} − ∑_{j=1}^{k−1} g_{kj}^{2}.

Moreover,

a_{ik} = ∑_{j=1}^{k} g_{ij}g_{kj}, i = k + 1, . . . , n,

hence the k-th column of G can be computed by

g_{ik} = ( a_{ik} − ∑_{j=1}^{k−1} g_{ij}g_{kj} ) / g_{kk}, i = k + 1, . . . , n.


**Algorithm 6 (Cholesky Factorization)**

Given an n × n symmetric positive definite matrix A, this
algorithm computes the Cholesky factorization A = GG^{T}.

Initialize G = 0
For k = 1, . . . , n
    G(k, k) = sqrt( A(k, k) − ∑_{j=1}^{k−1} G(k, j)G(k, j) )
    For i = k + 1, . . . , n
        G(i, k) = ( A(i, k) − ∑_{j=1}^{k−1} G(i, j)G(k, j) ) / G(k, k)
    End For
End For

In addition to n square root operations, the algorithm requires approximately

∑_{k=1}^{n} [2k − 2 + (2k − 1)(n − k)] = (1/3)n^{3} + (1/2)n^{2} − (5/6)n

floating-point operations.
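Algorithm 6 translates almost line for line into code. A sketch (the test matrix is an arbitrary symmetric positive definite example):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = G G^T, following Algorithm 6.
    Assumes A is symmetric positive definite, so every square-root
    argument is positive and no pivoting is needed."""
    n = A.shape[0]
    G = np.zeros((n, n))
    for k in range(n):
        # G(k,k) = sqrt( A(k,k) - sum_{j<k} G(k,j)^2 )
        G[k, k] = np.sqrt(A[k, k] - G[k, :k] @ G[k, :k])
        for i in range(k + 1, n):
            # G(i,k) = ( A(i,k) - sum_{j<k} G(i,j)G(k,j) ) / G(k,k)
            G[i, k] = (A[i, k] - G[i, :k] @ G[k, :k]) / G[k, k]
    return G

A = np.array([[ 4.0,  2.0, -2.0],
              [ 2.0, 10.0,  2.0],
              [-2.0,  2.0,  5.0]])
G = cholesky(A)
assert np.allclose(G @ G.T, A)
assert np.all(np.diag(G) > 0)
```

The inner sums are expressed as dot products over the already-computed columns, which is exactly the ∑_{j=1}^{k−1} terms of the algorithm.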
