
1. Operator Norms on the Space of Linear Maps

Let A be an n × n real matrix and x_0 a vector in R^n. We would like to use the Picard iteration method to solve the following system of linear ordinary differential equations:

(1.1)  x'(t) = Ax(t),  x(0) = x_0.

Integrating the differential equation (1.1), we obtain

x(t) = x_0 + ∫_0^t Ax(s) ds.

Let x_0(t) = x_0 for t ∈ [0, b] be the constant function and define x_n : [0, b] → R^n recursively by

x_n(t) = x_0 + ∫_0^t Ax_{n-1}(s) ds.

By induction, we obtain that for any k ∈ N,

x_k(t) = x_0 + (tA/1!) x_0 + ⋯ + ((tA)^k/k!) x_0.

We can rewrite x_k(t) as

x_k(t) = ( I_n + (tA)/1! + ⋯ + (tA)^k/k! ) x_0,  for any k ≥ 1.

Here I_n denotes the n × n identity matrix. It is natural for us to ask: can we define

I_n + Σ_{k=1}^∞ (tA)^k/k! ?
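Before answering this, the Picard iterates x_k(t) themselves can be computed numerically and compared against a known solution. A minimal pure-Python sketch (the test matrix A, the helper names, and the truncation order are illustrative choices, not from the notes):

```python
import math

def mat_vec(A, x):
    # multiply the matrix A (a list of rows) by the vector x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def picard_iterate(A, x0, t, k):
    """Compute x_k(t) = sum_{j=0}^{k} (t^j / j!) A^j x0,
    the k-th Picard iterate for x'(t) = A x(t), x(0) = x0."""
    term = x0[:]            # current term (t^j / j!) A^j x0
    total = x0[:]
    for j in range(1, k + 1):
        term = [t / j * c for c in mat_vec(A, term)]
        total = [s + c for s, c in zip(total, term)]
    return total

# illustrative choice: the rotation generator, whose flow is known in closed
# form; the exact solution with x0 = (1, 0) is (cos t, -sin t)
A = [[0.0, 1.0], [-1.0, 0.0]]
x = picard_iterate(A, [1.0, 0.0], 1.0, 20)
```

With k = 20 terms the iterate agrees with (cos 1, −sin 1) to machine precision, which is what the factorial decay of the series suggests.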

To define an infinite series of matrices, we need to define a norm on M_n(R), where M_n(R) is the space of n × n real matrices. Let us define a norm on the space M_{m×n}(R) of m × n matrices, where m and n are not necessarily equal.

Let A be an m × n matrix. We define a function L_A : R^n → R^m by L_A(x) = Ax. It is routine to check that L_A is a linear map. Conversely, given a linear map T : R^n → R^m, can we find a unique m × n matrix A such that T = L_A?

Let T : R^n → R^m be a linear map. For each x ∈ R^n, we write T(x) = (T_1(x), …, T_m(x)). Since T is linear, T_i : R^n → R is linear for each 1 ≤ i ≤ m.

Lemma 1.1. Let S : R^n → R be a linear map. Then there exists a vector a ∈ R^n such that S(x) = ⟨a, x⟩ for any x ∈ R^n.

Proof. Let β = {e_1, …, e_n} be the standard basis for R^n. We write x = Σ_{i=1}^n x_i e_i. By linearity of S, S(x) = x_1 S(e_1) + ⋯ + x_n S(e_n). Let a_i = S(e_i) for 1 ≤ i ≤ n. For a = (a_1, …, a_n), we have

S(x) = a_1 x_1 + ⋯ + a_n x_n = ⟨a, x⟩.  □

Since T_i : R^n → R is linear for each 1 ≤ i ≤ m, we can choose b_i = (a_{i1}, …, a_{in}) ∈ R^n such that T_i(x) = ⟨b_i, x⟩ for any x ∈ R^n. Let A be the m × n matrix

A = [ a_{11} a_{12} ⋯ a_{1n}
      a_{21} a_{22} ⋯ a_{2n}
        ⋮      ⋮   ⋱    ⋮
      a_{m1} a_{m2} ⋯ a_{mn} ],

i.e. A is the matrix whose i-th row vector is b_i. By definition, T = L_A.

Proposition 1.1. Let L(R^n, R^m) be the space of linear maps from R^n to R^m. Then L(R^n, R^m) is a vector subspace of the space F(R^n, R^m) of functions from R^n to R^m.

Theorem 1.1. Define φ : M_{m×n}(R) → L(R^n, R^m) by A ↦ L_A. Then φ is a linear isomorphism.

Proposition 1.2. Let T : R^n → R^m be a linear map. Then there exists M > 0 such that ‖T(x)‖ ≤ M‖x‖ for any x ∈ R^n. Hence T is a Lipschitz function.

Proof. Let us write T = (T_1, …, T_m). For each 1 ≤ i ≤ m, we choose b_i so that T_i(x) = ⟨b_i, x⟩ for all x ∈ R^n. For any x ∈ R^n,

‖T(x)‖ = √( |T_1(x)|² + ⋯ + |T_m(x)|² ) = √( |⟨b_1, x⟩|² + ⋯ + |⟨b_m, x⟩|² ).

By the Cauchy–Schwarz inequality, |⟨b_i, x⟩|² ≤ ‖b_i‖² ‖x‖². Hence for any x ∈ R^n,

‖T(x)‖² ≤ ( ‖b_1‖² + ⋯ + ‖b_m‖² ) ‖x‖².

Let M = √( ‖b_1‖² + ⋯ + ‖b_m‖² ). This proves the assertion. □

Let T : R^n → R^m be a linear map and M > 0 be as above. For ‖x‖ = 1, we find ‖T(x)‖ ≤ M.
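The constant M in the proof of Proposition 1.2 is explicit (the square root of the sum of the squared row norms), so the bound is easy to check numerically. A small sketch; the matrix A, the sample count, and the helper names are illustrative choices:

```python
import random

def frobenius_bound(A):
    """M = sqrt(||b_1||^2 + ... + ||b_m||^2) from the proof of
    Proposition 1.2, where b_i are the rows of A."""
    return sum(sum(a * a for a in row) for row in A) ** 0.5

def sampled_sup(A, samples=2000):
    # crude lower estimate of sup_{||x|| = 1} ||Ax|| by sampling unit vectors
    random.seed(0)
    best = 0.0
    n = len(A[0])
    for _ in range(samples):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        r = sum(c * c for c in x) ** 0.5
        x = [c / r for c in x]
        Ax = [sum(a * c for a, c in zip(row, x)) for row in A]
        best = max(best, sum(c * c for c in Ax) ** 0.5)
    return best

A = [[1.0, 2.0], [3.0, 4.0]]   # illustrative matrix; here M = sqrt(30)
```

Every sampled value of ‖Ax‖ stays below M, as the proposition guarantees; the sampled supremum is only a lower estimate of the true operator norm defined next.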

This allows us to define the operator norm of T by

‖T‖_op = sup_{‖x‖ = 1} ‖T(x)‖.

Theorem 1.2. The space L(R^n, R^m) of linear maps from R^n to R^m together with ‖·‖_op is a real Banach space.

Proof. We leave it to the reader to verify that ‖·‖_op is a norm on L(R^n, R^m). The completeness of L(R^n, R^m) with respect to ‖·‖_op will be proved later. □

It is not difficult to prove the following properties.

Proposition 1.3. Let T ∈ L(R^n, R^m) and S ∈ L(R^m, R^p). Then

(1) ‖T(x)‖ ≤ ‖T‖_op ‖x‖ for any x ∈ R^n, and
(2) ‖S ∘ T‖_op ≤ ‖S‖_op ‖T‖_op.


Proof. Let us prove (1). When x = 0, the statement is obvious. When x ≠ 0, let y = x/‖x‖. Then ‖T(y)‖ ≤ ‖T‖_op. By linearity of T,

‖T(y)‖ = ‖T( x/‖x‖ )‖ = (1/‖x‖) ‖T(x)‖ ≤ ‖T‖_op.

Multiplying both sides of the above inequality by ‖x‖, we obtain (1).

For (2), we use (1): for any x ∈ R^n,

‖(S ∘ T)(x)‖ ≤ ‖S‖_op ‖T(x)‖ ≤ ‖S‖_op ‖T‖_op ‖x‖.

Hence ‖S ∘ T‖_op ≤ ‖S‖_op ‖T‖_op. □

Corollary 1.1. If T : R^n → R^n is linear, then ‖T^k‖_op ≤ ‖T‖_op^k for any k ∈ N.

Proof. This can be proved by induction. □

For A ∈ M_{m×n}(R), we define the matrix norm of A to be the operator norm of L_A, i.e.

‖A‖ = ‖L_A‖_op.

Theorem 1.3. The space M_{m×n}(R) of m × n real matrices together with the matrix norm ‖·‖ is a real Banach space.

Proof. For each A = [a_{ij}] ∈ M_{m×n}(R), let

‖A‖_2 = √( Σ_{i=1}^m Σ_{j=1}^n |a_{ij}|² ),  ‖A‖_∞ = max{ |a_{ij}| : 1 ≤ i ≤ m, 1 ≤ j ≤ n }.

For any A ∈ M_{m×n}(R),

‖A‖_∞ ≤ ‖A‖ ≤ ‖A‖_2.

(See the class note for the details of the proof of this inequality.) Let (A_k) be a Cauchy sequence in M_{m×n}(R). For any ε > 0, there exists K ∈ N such that ‖A_k − A_l‖ < ε/4mn whenever k, l ≥ K. Denote A_k by [a_{ij}(k)]. For any k, l ≥ K,

|a_{ij}(k) − a_{ij}(l)| ≤ ‖A_k − A_l‖ < ε/4mn.

This implies that (a_{ij}(k))_k is a Cauchy sequence in R for each 1 ≤ i ≤ m and 1 ≤ j ≤ n. By completeness of R, (a_{ij}(k))_k is convergent in R. Denote a_{ij} = lim_{k→∞} a_{ij}(k) and let A = [a_{ij}]. When k ≥ K,

|a_{ij}(k) − a_{ij}| = lim_{l→∞} |a_{ij}(k) − a_{ij}(l)| ≤ ε/4mn.

For k ≥ K,

‖A_k − A‖ ≤ ‖A_k − A‖_2 = √( Σ_{i,j} |a_{ij}(k) − a_{ij}|² ) ≤ √(mn) · ε/4mn ≤ ε/4 < ε.

This shows that lim_{k→∞} A_k = A in M_{m×n}(R). Hence M_{m×n}(R) is a real Banach space. □


This theorem implies that L(Rn, Rm) forms a Banach space with respect to the operator norm.

Given an n × n real matrix A, it makes sense to ask whether the limit of (s_k) exists in M_n(R), where

(1.2)  s_k = I_n + A/1! + ⋯ + A^k/k!,  k ≥ 1.

Theorem 1.4. Let (V, ‖·‖) be a real Banach space and (v_n) a sequence of vectors in V. Suppose that Σ_{n=1}^∞ ‖v_n‖ is convergent in R. Then Σ_{n=1}^∞ v_n is convergent in (V, ‖·‖).

Proof. Exercise. □

Theorem 1.5 (Comparison Test). Let (a_n) and (b_n) be sequences of nonnegative real numbers. Suppose that

(1) 0 ≤ a_n ≤ b_n for any n ≥ 1, and
(2) Σ_{n=1}^∞ b_n is convergent in R.

Then Σ_{n=1}^∞ a_n is convergent.

Proof. Exercise. □

Corollary 1.2 (Comparison Test in a Banach Space). Let (V, ‖·‖) be a real Banach space and (v_n) a sequence of vectors in V. Suppose that there exists a sequence of nonnegative real numbers (b_n) such that

(1) ‖v_n‖ ≤ b_n for any n ≥ 1, and
(2) Σ_{n=1}^∞ b_n is convergent in R.

Then Σ_{n=1}^∞ v_n is convergent in (V, ‖·‖).

Let v_1 = I_n and v_k = A^{k-1}/(k−1)! for k ≥ 2, and let b_1 = 1 and b_k = ‖A‖^{k-1}/(k−1)! for k ≥ 2. Since ‖A^k‖ ≤ ‖A‖^k for any k ≥ 1, we find ‖v_k‖ ≤ b_k for any k ≥ 1. Using elementary calculus, we know Σ_{k=1}^∞ b_k is convergent in R (to e^{‖A‖}). By the comparison test in a Banach space, Σ_{k=1}^∞ v_k is convergent in (M_n(R), ‖·‖), i.e. (s_k) is convergent in (M_n(R), ‖·‖). We define e^A to be the limit of (s_k) in (M_n(R), ‖·‖), i.e.

e^A = lim_{k→∞} s_k = I_n + Σ_{k=1}^∞ A^k/k!.

Here the right hand side is the expression for lim_{k→∞} s_k. Similarly, we can define cos A and sin A for any n × n real matrix A by

cos A = Σ_{k=0}^∞ ((−1)^k/(2k)!) A^{2k},  sin A = Σ_{k=0}^∞ ((−1)^k/(2k+1)!) A^{2k+1}.

We leave it to the reader to verify that cos A and sin A are well defined. (The infinite series of n × n matrices are convergent in M_n(R).)
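The partial sums s_k from (1.2) can be computed directly. A minimal pure-Python sketch; the nilpotent test matrix and the helper names are illustrative choices:

```python
def mat_mul(A, B):
    # product of two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=30):
    """Partial sum s_terms = I + A/1! + ... + A^terms/terms!
    of the exponential series."""
    n = len(A)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I_n
    P = [row[:] for row in S]        # running term A^j / j!
    for j in range(1, terms + 1):
        P = [[c / j for c in row] for row in mat_mul(P, A)]
        S = [[s + c for s, c in zip(rs, rp)] for rs, rp in zip(S, P)]
    return S

# nilpotent example: A^2 = 0, so the series terminates and e^A = I + A exactly
E = mat_exp([[0.0, 1.0], [0.0, 0.0]])
```

For this A the partial sums stabilize after two terms, giving e^A = [[1, 1], [0, 1]], which matches the closed form I + A.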

Theorem 1.6. Let x(t) = e^{tA} x_0 for t ∈ [0, b]. Then x : [0, b] → R^n is the unique differentiable function which solves (1.1).

To prove this theorem, we need to introduce the space of vector-valued continuous functions. This theorem will be proved later. Let us study the space M_n(R) further.

In calculus, we have seen that for each x ∈ R with |x| < 1,

1/(1 − x) = Σ_{k=0}^∞ x^k.

We may ask whether the analogous equality holds for any n × n matrix A with ‖A‖ < 1. Since ‖A^k‖ ≤ ‖A‖^k and ‖A‖ < 1, Σ_{k=0}^∞ ‖A^k‖ is convergent in R (here A^0 = I_n, the identity matrix). Since M_n(R) is a real Banach space, Σ_{k=0}^∞ A^k is convergent in M_n(R).

Proposition 1.4. Let A ∈ M_n(R) with ‖A‖ < 1 and B = Σ_{k=0}^∞ A^k. Then

B(I_n − A) = (I_n − A)B = I_n.

This implies that I_n − A is invertible with B = (I_n − A)^{-1}.

Proof. Let s_k = I_n + A + ⋯ + A^{k-1} for each k ≥ 1. For each k ≥ 1,

A s_k = A + ⋯ + A^k = s_{k+1} − I_n = s_k A.

Using the inequalities

‖A s_k − AB‖ = ‖A(s_k − B)‖ ≤ ‖A‖ ‖s_k − B‖,
‖s_k A − BA‖ = ‖(s_k − B)A‖ ≤ ‖s_k − B‖ ‖A‖,

we know lim_{k→∞} A s_k = AB and lim_{k→∞} s_k A = BA. On the other hand, lim_{k→∞}(s_{k+1} − I_n) = B − I_n. We find that AB = B − I_n = BA. This implies that

(I_n − A)B = B(I_n − A) = I_n.

This proves our assertion. □
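Proposition 1.4 can be checked numerically by multiplying I − A against a partial sum of the series. A minimal sketch, assuming an illustrative matrix with small norm (the helper names and truncation order are also illustrative):

```python
def mat_mul(A, B):
    # product of two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def neumann_sum(A, terms=60):
    """Partial sum I + A + ... + A^terms, which approximates (I - A)^{-1}
    when the matrix norm of A is < 1."""
    n = len(A)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in S]            # running power A^k
    for _ in range(terms):
        P = mat_mul(P, A)
        S = [[s + c for s, c in zip(rs, rp)] for rs, rp in zip(S, P)]
    return S

A = [[0.0, 0.5], [0.25, 0.0]]            # illustrative matrix with norm < 1
B = neumann_sum(A)
I_minus_A = [[1.0 - A[0][0], -A[0][1]], [-A[1][0], 1.0 - A[1][1]]]
C = mat_mul(I_minus_A, B)                # close to the identity
```

The residual of the product is of size ‖A‖^{61}, which is far below floating-point precision here.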

As an application of this proposition, let us prove the following.

Theorem 1.7. Let GLn(R) be the set of all real n × n invertible matrices. Then GLn(R) forms an open subset of Mn(R).

Proof. Let A ∈ GL_n(R). Choose ε = 1/‖A^{-1}‖. We claim that B(A, ε) ⊆ GL_n(R); equivalently, if ‖B − A‖ < ε, i.e. B ∈ B(A, ε), then B is invertible. Observe that

B = (B − A) + A = ( (B − A)A^{-1} + I ) A.

Let C = (B − A)A^{-1}. Then B = (I + C)A and

‖C‖ ≤ ‖B − A‖ ‖A^{-1}‖ < (1/‖A^{-1}‖) · ‖A^{-1}‖ = 1.

By the previous proposition (applied to −C, noting ‖−C‖ = ‖C‖ < 1), I + C is invertible. Since A is invertible and the product of any two invertible matrices is again invertible, B = (I + C)A is invertible. Hence B ∈ GL_n(R). We proved that B(A, ε) ⊆ GL_n(R), and hence A is an interior point of GL_n(R). Since A is arbitrary in GL_n(R), GL_n(R) is open. □

Theorem 1.8. Let φ : GL_n(R) → GL_n(R) be the map φ(A) = A^{-1} for A ∈ GL_n(R). Then φ is continuous.

Proof. Let A ∈ GL_n(R) and choose d = 1/(2‖A^{-1}‖). For any B ∈ B(A, d),

‖B − A‖ < d < 1/‖A^{-1}‖.

Hence B is invertible (by the proof of Theorem 1.7), so B(A, d) ⊆ GL_n(R). Observe that for B ∈ B(A, d),

φ(B) − φ(A) = B^{-1}(A − B)A^{-1},

and hence

‖φ(B) − φ(A)‖ ≤ ‖B^{-1}‖ ‖A − B‖ ‖A^{-1}‖.


Let us estimate ‖B^{-1}‖ when B ∈ B(A, d). For each y ∈ R^n,

‖y‖ = ‖A^{-1}(Ay)‖ ≤ ‖A^{-1}‖ ‖Ay‖.

Hence ‖A^{-1}‖^{-1} ‖y‖ ≤ ‖Ay‖. By the triangle inequality and the norm inequality,

‖Ay‖ = ‖(A − B)y + By‖ ≤ ‖(A − B)y‖ + ‖By‖ ≤ ‖A − B‖ ‖y‖ + ‖By‖.

This shows that for any y ∈ R^n,

( ‖A^{-1}‖^{-1} − ‖A − B‖ ) ‖y‖ ≤ ‖By‖.

Since ‖A − B‖ < d = ‖A^{-1}‖^{-1}/2, we have ‖A^{-1}‖^{-1} − ‖A − B‖ > ‖A^{-1}‖^{-1}/2. This shows that for any y ∈ R^n,

( ‖A^{-1}‖^{-1}/2 ) ‖y‖ ≤ ‖By‖.

Since B is invertible, for any x ∈ R^n there exists a unique y ∈ R^n so that By = x. We find that y = B^{-1}x and hence ‖B^{-1}x‖ ≤ 2‖A^{-1}‖ ‖x‖. We find that ‖B^{-1}‖ ≤ 2‖A^{-1}‖. Thus for any B ∈ B(A, d),

‖φ(B) − φ(A)‖ ≤ 2‖A^{-1}‖² ‖A − B‖.

For any ε > 0, we choose

δ_{A,ε} = min{ ε/(2‖A^{-1}‖²), d }.

Then for any B ∈ B(A, δ_{A,ε}),

‖φ(B) − φ(A)‖ ≤ 2‖A^{-1}‖² δ_{A,ε} ≤ 2‖A^{-1}‖² · ε/(2‖A^{-1}‖²) = ε.

This proves that φ is continuous. □


2. Spectral Theorem for Symmetric Matrices with an Application to the Computation of Operator Norms

Let A : R^n → R^m be a linear map. The operator norm of A is defined to be

‖A‖ = sup_{x ∈ S^{n-1}} ‖Ax‖,

where S^{n-1} = {x ∈ R^n : ‖x‖ = 1}. Then

‖A‖² = sup_{x ∈ S^{n-1}} ‖Ax‖² = sup_{x ∈ S^{n-1}} ⟨Ax, Ax⟩ = sup_{x ∈ S^{n-1}} ⟨AᵗAx, x⟩.

Denote T = AᵗA. Then T : R^n → R^n is a linear map such that

(1) T is symmetric, i.e. Tᵗ = T, and
(2) ⟨Tx, x⟩ = ‖Ax‖² ≥ 0 for any x ∈ R^n.

Definition 2.1. A linear map T : Rn → Rn is nonnegative definite if it satisfies (1) and (2).

For any nonnegative definite linear map T : R^n → R^n, we define a function Q_T : R^n → R by

Q_T(x) = ⟨Tx, x⟩,  x ∈ R^n.

If we denote T by [T_{ij}] and x = (x_1, …, x_n), then

Q_T(x) = Σ_{i,j=1}^n T_{ij} x_i x_j.

We see that Q_T is a real-valued continuous function. Since S^{n-1} is closed and bounded, by the Bolzano–Weierstrass Theorem, S^{n-1} is sequentially compact. By the Extreme Value Theorem, Q_T attains its maximum (and also its minimum) on S^{n-1}. Let λ_1 be the maximum of Q_T on S^{n-1} and v_1 ∈ S^{n-1} so that Q_T(v_1) = λ_1. Since Q_T(x) ≥ 0 for any x ∈ R^n, λ_1 ≥ 0. Since ⟨Tv_1, v_1⟩ = λ_1, we have ⟨(λ_1 I − T)v_1, v_1⟩ = 0. Let us prove that, in fact,

⟨(λ_1 I − T)v_1, w⟩ = 0 for any w ∈ R^n.

If the statement is true, then

T v_1 = λ_1 v_1,

i.e. λ_1 is an eigenvalue of T and v_1 is an eigenvector of T corresponding to the eigenvalue λ_1. Let B = λ_1 I − T. Then ⟨Bv_1, v_1⟩ = 0. We want to show that ⟨Bv_1, w⟩ = 0 for any w ∈ R^n. Since any w ∈ R^n can be written uniquely as w = a v_1 + y for a ∈ R and y ∈ {v_1}^⊥,

⟨Bv_1, w⟩ = a⟨Bv_1, v_1⟩ + ⟨Bv_1, y⟩ = ⟨Bv_1, y⟩.

Hence if we can show that ⟨Bv_1, y⟩ = 0 for any y ∈ {v_1}^⊥, then ⟨Bv_1, w⟩ = 0 for any w ∈ R^n. Thus we only need to prove that ⟨Bv_1, w⟩ = 0 for w ∈ {v_1}^⊥. Since Q_T(v) ≤ λ_1 for any v ∈ S^{n-1}, Q_T(x) ≤ λ_1 ‖x‖² for any x ∈ R^n. This implies that ⟨Bx, x⟩ ≥ 0 for any x ∈ R^n. For any t ∈ R and any w ∈ R^n,

0 ≤ ⟨B(v_1 + tw), v_1 + tw⟩ = ⟨Bv_1, v_1⟩ + 2⟨Bv_1, w⟩t + ⟨Bw, w⟩t² = 2⟨Bv_1, w⟩t + ⟨Bw, w⟩t².

A quadratic in t with nonnegative leading coefficient and zero constant term can be nonnegative for all t ∈ R only if its linear coefficient vanishes; otherwise it is negative for small t of a suitable sign. This shows that ⟨Bv_1, w⟩² ≤ 0, and hence ⟨Bv_1, w⟩ = 0.

Theorem 2.1. Let T : R^n → R^n be a nonnegative definite linear map. The number

λ_1 = max_{x ∈ S^{n-1}} ⟨Tx, x⟩

is an eigenvalue of T, and any v ∈ S^{n-1} with ⟨Tv, v⟩ = λ_1 is an eigenvector of T corresponding to λ_1.

Lemma 2.1. Let T, λ_1, and v_1 be as above. Let V_1 = span{v_1} and W_1 = V_1^⊥. Then R^n = V_1 ⊕ W_1 and T(W_1) ⊆ W_1.

Proof. If w ∈ W_1, then ⟨v_1, w⟩ = 0 and hence

⟨v_1, Tw⟩ = ⟨Tv_1, w⟩ = λ_1⟨v_1, w⟩ = 0.

This shows that Tw ∈ V_1^⊥ = W_1. □

For any x ∈ R^n, we can write x = a v_1 + w with a ∈ R and w ∈ W_1. Hence T(x) = aλ_1 v_1 + T(w). Let T_1 : W_1 → W_1 be the map defined by T_1(w) = T(w). Since T is linear, T_1 is linear. Furthermore, since T is symmetric, for any w_1, w_2 ∈ W_1,

⟨T_1 w_1, w_2⟩ = ⟨T w_1, w_2⟩ = ⟨w_1, T w_2⟩ = ⟨w_1, T_1 w_2⟩.

This shows that T_1 is also symmetric. For any w ∈ W_1, ⟨T_1 w, w⟩ = ⟨Tw, w⟩ ≥ 0. This shows that T_1 is nonnegative definite. Set

λ_2 = max_{w ∈ W_1, ‖w‖ = 1} ⟨T_1 w, w⟩.

Remark. The set {w ∈ W_1 : ‖w‖ = 1} is the intersection of W_1 and S^{n-1}. If we denote v_1 by (a_1, …, a_n), then

W_1 = { (x_1, …, x_n) ∈ R^n : a_1 x_1 + ⋯ + a_n x_n = 0 }.

Hence W_1 is a closed subset of R^n. Therefore W_1 ∩ S^{n-1} is closed. Since S^{n-1} is bounded, W_1 ∩ S^{n-1} is bounded. Being closed and bounded, {w ∈ W_1 : ‖w‖ = 1} is sequentially compact by the Bolzano–Weierstrass Theorem. We apply the Extreme Value Theorem to find λ_2.

Then λ_1 ≥ λ_2 ≥ 0. Choose v_2 ∈ W_1 so that ‖v_2‖ = 1 and λ_2 = ⟨T_1 v_2, v_2⟩. Then v_1 ⊥ v_2 and T_1 v_2 = λ_2 v_2. This shows that

T v_2 = λ_2 v_2.

By induction, we can find finitely many nonincreasing nonnegative real numbers

λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n ≥ 0

and an orthonormal basis {v_i : 1 ≤ i ≤ n} for R^n such that T v_i = λ_i v_i for 1 ≤ i ≤ n. Since {v_i : 1 ≤ i ≤ n} is an orthonormal basis for R^n, for any x ∈ R^n we can write x = Σ_{i=1}^n ⟨x, v_i⟩ v_i. By linearity of T, we obtain T x = Σ_{i=1}^n λ_i ⟨x, v_i⟩ v_i. Let Λ = diag(λ_1, …, λ_n) and let V be the n × n matrix whose i-th column vector is v_i. Then TV = VΛ, which implies T = VΛVᵗ since VᵗV = I.


Theorem 2.2 (Spectral Theorem for nonnegative definite linear maps). Let T : R^n → R^n be a nonnegative definite linear map. Then there exist finitely many nonincreasing nonnegative real numbers

λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n ≥ 0

and an orthonormal basis {v_i : 1 ≤ i ≤ n} for R^n such that

(1) T v_i = λ_i v_i for 1 ≤ i ≤ n,
(2) T x = Σ_{i=1}^n λ_i ⟨x, v_i⟩ v_i for any x ∈ R^n, and
(3) T = VΛVᵗ, where Λ = diag(λ_1, …, λ_n) and V is the n × n matrix whose i-th column vector is v_i.
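The two-step construction above (maximize ⟨Tx, x⟩ over the unit sphere, then restrict to the orthogonal complement of the maximizer) can be mimicked numerically. A rough sketch, using power iteration as a stand-in for the Extreme Value Theorem argument; the test matrix T, the helper names, and the iteration counts are illustrative choices:

```python
def power_pair(T, start, iters=500):
    """Power iteration: approximates the largest eigenvalue of a symmetric
    nonnegative definite matrix T and a unit eigenvector for it."""
    n = len(T)
    x = start[:]
    for _ in range(iters):
        y = [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = sum(c * c for c in y) ** 0.5
        if r == 0.0:
            break
        x = [c / r for c in y]
    # Rayleigh quotient <Tx, x> at the (unit) iterate
    lam = sum(x[i] * sum(T[i][j] * x[j] for j in range(n)) for i in range(n))
    return lam, x

T = [[2.0, 1.0], [1.0, 2.0]]             # symmetric, eigenvalues 3 and 1
lam1, v1 = power_pair(T, [1.0, 0.5])
# deflation: T - lam1 v1 v1^t kills the top eigendirection, mirroring the
# restriction of T to W_1 = {v_1}^⊥ in the text
T2 = [[T[i][j] - lam1 * v1[i] * v1[j] for j in range(2)] for i in range(2)]
lam2, v2 = power_pair(T2, [1.0, 0.0])
```

The two recovered eigenvectors come out orthogonal, as Lemma 2.1 predicts.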

Let S : R^n → R^n be any symmetric linear map. We consider

m = min_{x ∈ S^{n-1}} ⟨Sx, x⟩.

Set T = S − mI. Then T is symmetric and ⟨Tx, x⟩ = ⟨Sx, x⟩ − m‖x‖² ≥ 0 for any x ∈ R^n (by the definition of m and homogeneity), so T is nonnegative definite. By the spectral theorem for T, we choose λ_i ∈ R and v_i ∈ R^n such that T v_i = λ_i v_i as above. Then S v_i = (λ_i + m) v_i for 1 ≤ i ≤ n. Let µ_i = λ_i + m. Then

S v_i = µ_i v_i for 1 ≤ i ≤ n.

We see that v_i is also an eigenvector of S, with eigenvalue µ_i. Furthermore,

S x = Σ_{i=1}^n µ_i ⟨x, v_i⟩ v_i.

Thus we have proved:

Corollary 2.1. The Spectral Theorem holds for any symmetric linear map.

Let us go back to the computation of ‖A‖. It follows from the definition that ‖A‖² is the largest eigenvalue of AᵗA. Let us denote by λ_1(AᵗA) the largest eigenvalue of AᵗA. Then ‖A‖ = √(λ_1(AᵗA)).

Definition 2.2. Let λ_i(AᵗA) be the eigenvalues of AᵗA, ordered so that

λ_1(AᵗA) ≥ ⋯ ≥ λ_n(AᵗA) ≥ 0.

The i-th singular value of A is defined to be s_i(A) = √(λ_i(AᵗA)).

Corollary 2.2. Let A : R^n → R^m be a linear map. Then ‖A‖ = s_1(A).
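Corollary 2.2 suggests a practical way to compute ‖A‖: power iteration on T = AᵗA (a standard numerical method, not part of the notes; the helper name and iteration count are illustrative). For the matrix of Example 2.1 below this recovers √10 ≈ 3.1623:

```python
def op_norm(A, iters=200):
    """Estimate ||A|| = s_1(A) = sqrt(lambda_1(A^t A)) by power iteration
    on T = A^t A."""
    m, n = len(A), len(A[0])
    # form T = A^t A explicitly
    T = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = sum(c * c for c in y) ** 0.5
        if r == 0.0:
            return 0.0
        x = [c / r for c in y]
    lam = sum(x[i] * sum(T[i][j] * x[j] for j in range(n)) for i in range(n))
    return lam ** 0.5

norm_A = op_norm([[1.0, 1.0], [2.0, 2.0]])   # close to sqrt(10)
```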

Now let us introduce the singular value decomposition for an arbitrary linear map. Let us recall a basic theorem in linear algebra.

Proposition 2.1. Let A : R^n → R^m be a linear map. Then

(1) (Im A)^⊥ = ker Aᵗ, and
(2) (Im Aᵗ)^⊥ = ker A.

Proof. Let z ∈ (Im A)^⊥. Then ⟨Ax, z⟩ = 0 for any x ∈ R^n. Since ⟨x, Aᵗz⟩ = ⟨Ax, z⟩ = 0 for any x ∈ R^n, we have Aᵗz = 0, i.e. z ∈ ker Aᵗ. Conversely, if z ∈ ker Aᵗ, then Aᵗz = 0. Hence ⟨x, Aᵗz⟩ = 0 for any x ∈ R^n, which implies ⟨Ax, z⟩ = 0 for any x ∈ R^n. Therefore z ∈ (Im A)^⊥. This proves (1); (2) follows by applying (1) to Aᵗ. □


Choose an orthonormal basis {v_i : 1 ≤ i ≤ n} such that AᵗA v_i = s_i² v_i for 1 ≤ i ≤ n. Let us assume that s_i = 0 for i > k and s_k ≠ 0. Let z_i = A v_i for 1 ≤ i ≤ k. We find Aᵗ z_i = s_i² v_i for 1 ≤ i ≤ k. Let us compute ⟨z_i, z_j⟩ for any 1 ≤ i, j ≤ k:

⟨z_i, z_j⟩ = ⟨A v_i, A v_j⟩ = ⟨AᵗA v_i, v_j⟩ = s_i² ⟨v_i, v_j⟩ = s_i² δ_{ij}.

Hence {z_i : 1 ≤ i ≤ k} forms an orthogonal subset of Im A. Since ‖A v_j‖² = ⟨AᵗA v_j, v_j⟩ = 0 for k+1 ≤ j ≤ n, we have A v_j = 0 for k+1 ≤ j ≤ n, so nullity A = dim ker A = n − k. By the rank–nullity theorem,

rank A + nullity A = n,

which shows that rank A = k. Since {z_i : 1 ≤ i ≤ k} is orthogonal, it is linearly independent. We see that {z_i : 1 ≤ i ≤ k} forms an orthogonal basis for Im A. Let u_i = z_i/s_i for 1 ≤ i ≤ k. Then {u_i : 1 ≤ i ≤ k} forms an orthonormal basis for Im A. We write

A x = Σ_{i=1}^k ⟨A x, u_i⟩ u_i = Σ_{i=1}^k ⟨x, Aᵗ u_i⟩ u_i.

Since Aᵗ u_i = Aᵗ(z_i/s_i) = s_i v_i, we find

A x = Σ_{i=1}^k s_i ⟨x, v_i⟩ u_i.

We can extend {u_i : 1 ≤ i ≤ k} to an orthonormal basis {u_i : 1 ≤ i ≤ m} for R^m. Let U be the m × m matrix whose i-th column vector is u_i and V be the n × n matrix whose j-th column vector is v_j. Then AV = UΣ. Since VᵗV = I_n (V is an orthogonal matrix), we obtain the matrix form of the singular value decomposition

A = UΣVᵗ.

Here Σ is the m × n matrix whose (i, i) entry is s_i for 1 ≤ i ≤ k and whose other entries are zero.

Theorem 2.3 (Singular Value Decomposition). Let A : R^n → R^m be any linear map. Suppose k = rank A. There exist an orthonormal basis {v_i : 1 ≤ i ≤ n} for R^n, an orthonormal basis {u_i : 1 ≤ i ≤ m} for R^m whose first k vectors form an orthonormal basis for Im A, and a finite nonincreasing sequence of positive real numbers s_1 ≥ ⋯ ≥ s_k > 0 such that

(1) AᵗA v_i = s_i² v_i for 1 ≤ i ≤ k and AᵗA v_j = 0 for k+1 ≤ j ≤ n,
(2) A v_i = s_i u_i for 1 ≤ i ≤ k, A v_j = 0 for k+1 ≤ j ≤ n, and Aᵗ u_i = s_i v_i for 1 ≤ i ≤ k,
(3) A x = Σ_{i=1}^k s_i ⟨x, v_i⟩ u_i for any x ∈ R^n, and
(4) A = UΣVᵗ, where U is the m × m matrix whose i-th column vector is u_i, V is the n × n matrix whose j-th column vector is v_j, and Σ is the m × n matrix whose (i, i) entry is s_i for 1 ≤ i ≤ k and whose other entries are zero.

Example 2.1. Let

A = [ 1 1
      2 2 ].

(1) Find ‖A‖.
(2) Find the singular value decomposition of A.

The rank of A is one, and the image of A is spanned by (1, 2)ᵗ. Take

u_1 = (1/√5)(1, 2)ᵗ and v_1 = (1/√2)(1, 1)ᵗ.

Then A v_1 = √10 u_1. Take

v_2 = (1/√2)(−1, 1)ᵗ and u_2 = (1/√5)(−2, 1)ᵗ.

Then A v_2 = 0. The largest singular value of A is √10, and hence ‖A‖ = √10. We see that AV = UΣ, where

V = [ 1/√2 −1/√2
      1/√2  1/√2 ],  U = [ 1/√5 −2/√5
                           2/√5  1/√5 ],  Σ = [ √10 0
                                                 0  0 ].

Since VᵗV = I, we obtain the singular value decomposition of A in matrix form:

A = UΣVᵗ.
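As a sanity check, the three factors of Example 2.1 can be multiplied back together. A small pure-Python sketch (the helper name is illustrative):

```python
def mat_mul(A, B):
    # product of two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

r2, r5, s = 2 ** 0.5, 5 ** 0.5, 10 ** 0.5
U = [[1 / r5, -2 / r5], [2 / r5, 1 / r5]]        # columns u_1, u_2
Sigma = [[s, 0.0], [0.0, 0.0]]                   # diag(sqrt(10), 0)
Vt = [[1 / r2, 1 / r2], [-1 / r2, 1 / r2]]       # rows v_1^t, v_2^t
A = mat_mul(mat_mul(U, Sigma), Vt)               # recovers [[1, 1], [2, 2]]
```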
