
The Study of the Kronecker Product

National Taiwan Normal University, Department of Mathematics — Master's Thesis

Advisor: Dr. Shih-Feng Shieh (謝世峰)

The Study of the Kronecker Product

Graduate student: Yu-Wen Cheng (鄭育雯)

August 2011


Acknowledgements

With the completion of this thesis, my two years of study at the mathematics institute come to a close. Of all the moments of this period, some are memories and some are hard to let go of; both will give my life courage. That this thesis could be completed at all I owe to the guidance and instruction of Professor Shih-Feng Shieh, who patiently and carefully taught and corrected me — in the shaping of concepts, the structuring of the thesis, the provision of material, and the attitude of a student — and to whom I offer my deepest respect and gratitude. Thank you, truly. During the oral defense, the committee members, Professor 王偉仲 and Professor 黃聰明, offered encouragement and pointed out omissions, making this thesis more complete; I sincerely thank them. For the knowledge imparted in my graduate coursework I thank Professors 陳賢修, 林延輯, 朱亮儒 and 胡舉卿, as well as my classmates in M408 and M410 for two years of discussion and encouragement. Finally, I dedicate this thesis to my dearest mother, 胡姊 — thank you for raising me without complaint or regret — and to my brother 友奕, who cares about me at all times. The spiritual support and companionship of the department women's basketball team was one of my main sources of happiness outside my studies, and let me stay focused on my research.

August 2011

Abstract (in Chinese)

This thesis is chiefly an introduction to the Kronecker product and a study of its details. We explain how the Kronecker product can be used to account for certain properties of matrices. Overall, the thesis consists of three parts. At the outset we briefly note that the Kronecker product plays an extremely important role in scientific and engineering computation. In the second part we describe the Kronecker product operation precisely — it must not be confused with ordinary matrix multiplication — introduce its properties step by step, and examine the matrix identities, different from those of ordinary matrix algebra, to which it gives rise. In the final part we use the properties of the Kronecker product to prove some matrix equations of interest; we will find that the Kronecker product can resolve some otherwise rather unwieldy matrix computations.

Keywords: Kronecker product (克羅內克積), Kronecker sum, Sylvester equation.

The Study of the Kronecker Product

By Yu-Wen Cheng

Advisor: Shih-Feng Shieh

Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan
August 2011

Contents

Abstract ------------------------------------------------------------ 1
Introduction --------------------------------------------------------- 1
Known properties of the Kronecker product
  The Kronecker product ---------------------------------------------- 2
  Linear matrix equations and Kronecker products ---------------------- 15
  Kronecker sums and the equation AX + XB = C ------------------------- 28
  Additive and multiplicative commutators and linear preservers ------- 39
Application of the Kronecker product --------------------------------- 45
References ----------------------------------------------------------- 53

The Study of the Kronecker Product

Yu-Wen Cheng∗

August 26, 2011

Abstract

Most familiar matrix operations require the two matrices involved to have matching sizes. In this thesis we study the Kronecker product in detail. It should not be confused with ordinary matrix multiplication, for they are two completely different mathematical operations. We explain how to use the Kronecker product to calculate with matrices and to prove properties of matrices. Overall, the thesis can be divided into three parts. In the first part we briefly introduce the Kronecker product. In the second part we describe the Kronecker product operation and its nature, and try to understand the various identities related to it. In the final part we use the Kronecker product to prove the matrix equations in which we are interested.

1. Introduction

In the history of science and engineering, matrix analysis has played an important role in many research fields. In this thesis we survey an uncommon matrix product: whereas the ordinary product AB requires the column size of A to equal the row size of B, the sizes of the factors of a Kronecker product are allowed to be arbitrary. The Kronecker product of A = (a_ij) ∈ M_{m,n} and B, denoted by A ⊗ B, is defined by

$$A \otimes B \equiv \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{pmatrix}.$$

∗ Department of Mathematics, National Taiwan Normal University, 88 Sec. 4, Ting Chou Road, Taipei 11677, Taiwan. E-mail: yuwen0614@gmail.com
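As a first illustration (not part of the original thesis), the block definition above can be checked against NumPy's np.kron; the 2 × 2 matrices are arbitrary examples.

```python
import numpy as np

# Arbitrary example matrices.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Build A ⊗ B directly from the definition: block (i, j) equals a_ij * B.
blocks = [[A[i, j] * B for j in range(A.shape[1])] for i in range(A.shape[0])]
by_hand = np.block(blocks)

# np.kron implements the same block construction.
assert np.array_equal(by_hand, np.kron(A, B))
```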

The Kronecker product is a tool for solving matrix equations, such as the Lyapunov equation

$$AX + XA^H + Q = 0,$$

the Riccati-type equation

$$X'' - AX' + BX = 0,$$

and the generalized Sylvester equation

$$AX + XB + CXD + PXQ = F.$$

Once we know the Kronecker product well, we can deal with the solutions of matrix equations such as these. In addition, we discuss the rank, the nullspace, the determinant, the trace, the eigenvalues and the eigenspaces of Kronecker products. In solving the above-mentioned matrix equations, the Kronecker product plays a very important role.

This thesis is organized as follows. We introduce the Kronecker product operation and its basic properties. Furthermore, we introduce the Kronecker sum, how it differs from the direct sum, and the vec operator; used in conjunction, the Kronecker product and the vec operator play a significant role in solving matrix equations. We therefore develop the Kronecker product progressively: at the beginning, a complete definition and the nature of the Kronecker product; next, linear matrix equations and Kronecker products; then Kronecker sums and the equation AX + XB = C; and finally additive and multiplicative commutators and linear preservers.

2. Known properties of the Kronecker product

2.1 The Kronecker product

Definition 2.1. The Kronecker product of A = (a_ij) ∈ M_{m,n}(F) and B = (b_ij) ∈ M_{p,q}(F), read "A tensor B" and denoted by A ⊗ B, is defined to be the block matrix

$$A \otimes B \equiv \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{pmatrix} \in M_{mp,nq}(\mathbb{F}).$$

Lemma 2.1. In general, A ⊗ B ≠ B ⊗ A.

Definition 2.2. Let A ∈ M_n(F). The k-th Kronecker power A^{⊗k} is defined inductively for all positive integers k by

$$A^{\otimes 1} \equiv A, \qquad A^{\otimes k} \equiv A \otimes A^{\otimes(k-1)}, \quad k = 2, 3, \ldots$$

Proposition 2.2. For all α ∈ F, A ∈ M_{m,n}(F), B ∈ M_{p,q}(F), C ∈ M_{r,s}(F):

(a) (αA) ⊗ B = A ⊗ (αB).
(b) (A ⊗ B)^T = A^T ⊗ B^T.
(c) (A ⊗ B)* = A* ⊗ B*.
(d) (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C).
(e) (A + B) ⊗ C = A ⊗ C + B ⊗ C for all A, B ∈ M_{m,n}(F) and C ∈ M_{p,q}(F).
(f) A ⊗ (B + C) = A ⊗ B + A ⊗ C for all A ∈ M_{m,n}(F) and B, C ∈ M_{p,q}(F).

Definition 2.3. With each matrix A = (a_ij) ∈ M_{m,n}(F) we associate the vector vec(A), defined by stacking the columns of A:

$$\operatorname{vec}(A) \equiv (a_{11} \cdots a_{m1},\; a_{12} \cdots a_{m2},\; \cdots,\; a_{1n} \cdots a_{mn})^T \in \mathbb{F}^{mn}.$$

Definition 2.4. For given A ∈ M_{n_1} and B ∈ M_{n_2}, the direct sum of A and B is the matrix

$$A \oplus B \equiv \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \in M_{n_1+n_2}. \tag{2.1}$$

Lemma 2.3 (mixed-product property). Let A ∈ M_{m,n}(F), B ∈ M_{p,q}(F), C ∈ M_{n,k}(F), D ∈ M_{q,r}(F). Then

$$(A \otimes B)(C \otimes D) = AC \otimes BD. \tag{2.2}$$

Proof. Let A = (a_ih), 1 ≤ i ≤ m, 1 ≤ h ≤ n, and C = (c_hj), 1 ≤ h ≤ n, 1 ≤ j ≤ k; then A ⊗ B = (a_ih B) and C ⊗ D = (c_hj D) as block matrices. The (i, j) block of (A ⊗ B)(C ⊗ D) is

$$\sum_{h=1}^{n} (a_{ih}B)(c_{hj}D) = \Big(\sum_{h=1}^{n} a_{ih}c_{hj}\Big)BD,$$

which is precisely the (i, j) block of AC ⊗ BD. □
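The mixed-product property (2.2) is easy to spot-check numerically; the sketch below (sizes and random matrices chosen arbitrarily, not from the thesis) does so with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes chosen so that both AC and BD are defined:
# A is m x n, C is n x k, B is p x q, D is q x r.
A = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 2))
D = rng.standard_normal((2, 5))

# Mixed-product property (2.2): (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```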

Corollary 2.4. If A ∈ M_m(F) and B ∈ M_n(F) are nonsingular, then so is A ⊗ B, and

$$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}. \tag{2.3}$$

Proof. Since A and B are nonsingular, A^{-1} and B^{-1} exist. By (2.2),

$$(A^{-1} \otimes B^{-1})(A \otimes B) = (A^{-1}A) \otimes (B^{-1}B) = I \otimes I = I.$$

Consequently A ⊗ B is nonsingular and A^{-1} ⊗ B^{-1} = (A ⊗ B)^{-1}. □

Theorem 2.5. Let A ∈ M_n and B ∈ M_m. If λ ∈ σ(A) = {λ_1, …, λ_n} and x ∈ C^n is a corresponding eigenvector of A, and if μ ∈ σ(B) = {μ_1, …, μ_m} and y ∈ C^m is a corresponding eigenvector of B, then λμ ∈ σ(A ⊗ B) = {λ_iμ_j : i = 1, …, n, j = 1, …, m}, and x ⊗ y ∈ C^{nm} is a corresponding eigenvector of A ⊗ B.

Proof. Assume Ax = λx and By = μy with x, y ≠ 0. Then by (2.2),

$$(A \otimes B)(x \otimes y) = Ax \otimes By = \lambda x \otimes \mu y = \lambda\mu\,(x \otimes y). \qquad \Box$$
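Theorem 2.5 lends itself to a quick numerical sanity check; the sketch below (random matrices, not part of the thesis) compares the spectrum of A ⊗ B with the set of pairwise products λ_iμ_j.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))

# Spectrum of A ⊗ B versus all pairwise products λ_i * μ_j (Theorem 2.5).
eig_kron = np.sort_complex(np.linalg.eigvals(np.kron(A, B)))
eig_pairs = np.sort_complex(
    np.outer(np.linalg.eigvals(A), np.linalg.eigvals(B)).ravel())
assert np.allclose(eig_kron, eig_pairs)
```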

Theorem 2.6. If A ∈ M_{m,n} and B ∈ M_{p,q} have singular value decompositions A = V_1Σ_1W_1^* and B = V_2Σ_2W_2^*, with rank(A) = r_1 and rank(B) = r_2, then

$$A \otimes B = (V_1 \otimes V_2)(\Sigma_1 \otimes \Sigma_2)(W_1 \otimes W_2)^*.$$

Theorem 2.7. Let A ∈ M_m and B ∈ M_n, let F(·) denote the field of values and Co(·) the convex hull. Then:
(a) F(A ⊗ B) ⊇ Co(F(A)F(B)) ⊇ F(A)F(B);
(b) if A is normal, then F(A ⊗ B) = Co(F(A)F(B));
(c) if e^{iθ}A is positive semidefinite for some θ ∈ [0, 2π), then F(A ⊗ B) = F(A)F(B).

Proposition 2.8. Let A ∈ M_n and B ∈ M_m be given. Then

$$\det(A \otimes B) = (\det A)^m(\det B)^n = \det(B \otimes A).$$

Thus A ⊗ B is nonsingular if and only if both A and B are nonsingular.

Proof. Let A ∈ M_n and B ∈ M_m. Using (2.2), we have A ⊗ B = (A ⊗ I_m)(I_n ⊗ B). First we show det(I_n ⊗ B) = (det B)^n: since I_n ⊗ B = diag[B, B, …, B] (n copies),

$$\det(I_n \otimes B) = \det(\operatorname{diag}[B, \ldots, B]) = \det(B) \times \cdots \times \det(B) = (\det B)^n.$$

Second, since P^T(A ⊗ I_m)P = I_m ⊗ A for a permutation matrix P with det(P) = ±1,

$$\det(A \otimes I_m) = \det(P^T)\det(I_m \otimes A)\det(P) = \det(P^TP)\det(I_m \otimes A) = \det(\operatorname{diag}[A, \ldots, A]) = (\det A)^m.$$

Combining the two steps gives det(A ⊗ B) = (det A)^m(det B)^n, and by symmetry this also equals det(B ⊗ A); in particular, A ⊗ B is invertible if and only if both A and B are invertible. □

Proposition 2.9. If A ∈ M_n is similar to B ∈ M_n via a nonsingular matrix S, and C ∈ M_m is similar to E ∈ M_m via a nonsingular matrix T, then A ⊗ C is similar to B ⊗ E via S ⊗ T.

Proof. Since A is similar to B, there is a nonsingular matrix S such that A = S^{-1}BS; similarly, C is similar to E, so there is a nonsingular matrix T such that C = T^{-1}ET. Then

$$A \otimes C = (S^{-1}BS) \otimes (T^{-1}ET) = (S^{-1} \otimes T^{-1})(B \otimes E)(S \otimes T) = (S \otimes T)^{-1}(B \otimes E)(S \otimes T).$$

Thus A ⊗ C is similar to B ⊗ E. □

Proposition 2.10. Let A, B ∈ M_{m,n} be given. Then A ⊗ B = B ⊗ A if and only if either A = cB or B = cA for some c ∈ C.

Proof. (⇐) If A = cB for some c ∈ C, then A ⊗ B = cB ⊗ B = c(B ⊗ B) = B ⊗ cB = B ⊗ A. If instead B = cA, then A ⊗ B = A ⊗ cA = c(A ⊗ A) = cA ⊗ A = B ⊗ A.

(⇒) Write A = (a_ij) and B = (b_kl) ∈ M_{m,n}. The hypothesis A ⊗ B = B ⊗ A means (a_ij B) = (b_kl A) as block matrices:

$$\begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{pmatrix} = \begin{pmatrix} b_{11}A & \cdots & b_{1n}A \\ \vdots & & \vdots \\ b_{m1}A & \cdots & b_{mn}A \end{pmatrix}.$$

Comparing the (h, t) blocks for 1 ≤ h ≤ m, 1 ≤ t ≤ n gives a_ht B = b_ht A for every h, t. For example, with m = 3 and n = 2 this reads

$$\begin{pmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \\ a_{31}B & a_{32}B \end{pmatrix} = \begin{pmatrix} b_{11}A & b_{12}A \\ b_{21}A & b_{22}A \\ b_{31}A & b_{32}A \end{pmatrix},$$

so a_{11}B = b_{11}A, a_{12}B = b_{12}A, …, a_{32}B = b_{32}A. From these blockwise equations we can discover that

if some entry b_uv ≠ 0 (1 ≤ u ≤ m, 1 ≤ v ≤ n), then A = (a_uv/b_uv)B, and we may choose c = a_uv/b_uv such that A = cB; if some entry a_xy ≠ 0 (1 ≤ x ≤ m, 1 ≤ y ≤ n), then B = (b_xy/a_xy)A, and we may choose c = b_xy/a_xy such that B = cA. □

Proposition 2.11. Suppose A ∈ M_n is congruent to B ∈ M_n via a nonsingular matrix U, and C ∈ M_m is congruent to E ∈ M_m via a nonsingular matrix T. Then A ⊗ C is congruent to B ⊗ E via U ⊗ T.

Proof. If A is congruent to B, there exists a nonsingular matrix U such that A = UBU^T; similarly, C is congruent to E, so there exists a nonsingular matrix T such that C = TET^T. Then

$$A \otimes C = (UBU^T) \otimes (TET^T) = (U \otimes T)(B \otimes E)(U^T \otimes T^T) = (U \otimes T)(B \otimes E)(U \otimes T)^T,$$

so A ⊗ C is congruent to B ⊗ E via U ⊗ T. □

Proposition 2.12. If A ∈ M_n and B ∈ M_m are normal, then A ⊗ B is normal.

Proof. Because A and B are normal, we have A*A = AA* and B*B = BB*. Now

$$(A \otimes B)^*(A \otimes B) = (A^* \otimes B^*)(A \otimes B) = A^*A \otimes B^*B = AA^* \otimes BB^* = (A \otimes B)(A^* \otimes B^*) = (A \otimes B)(A \otimes B)^*.$$

Thus A ⊗ B is normal. □

Proposition 2.13. Let A ∈ M_{m,n} and B ∈ M_{p,q} be given with mp = nq, so that A ⊗ B and B ⊗ A are square and their traces are defined. An example shows that tr(A ⊗ B) and tr(B ⊗ A) need not be equal under these conditions. If m = n and p = q, however, direct calculation shows that tr(A ⊗ B) = tr(A)tr(B) = tr(B ⊗ A).

Proof. (a) For the case mp = nq with A and B not square, let

$$A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 2 & 0 \end{pmatrix} \in M_{3,2}, \qquad B = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 4 & 1 \end{pmatrix} \in M_{2,3}.$$

Then

$$\operatorname{tr}(A \otimes B) = \operatorname{tr}\begin{pmatrix} 1&2&0&0&0&0\\ 1&4&1&0&0&0\\ 1&2&0&1&2&0\\ 1&4&1&1&4&1\\ 2&4&0&0&0&0\\ 2&8&2&0&0&0 \end{pmatrix} = 6, \qquad \operatorname{tr}(B \otimes A) = \operatorname{tr}\begin{pmatrix} 1&0&2&0&0&0\\ 1&1&2&2&0&0\\ 2&0&4&0&0&0\\ 1&0&4&0&1&0\\ 1&1&4&4&1&1\\ 2&0&8&0&2&0 \end{pmatrix} = 7.$$

Since 6 ≠ 7, tr(A ⊗ B) ≠ tr(B ⊗ A) in general.

(b) If m = n and p = q, write A = (a_ij) ∈ M_n and B = (b_ij) ∈ M_p. Then

$$\operatorname{tr}(A \otimes B) = \operatorname{tr}\begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nn}B \end{pmatrix} = a_{11}\sum_{i=1}^{p} b_{ii} + a_{22}\sum_{i=1}^{p} b_{ii} + \cdots + a_{nn}\sum_{i=1}^{p} b_{ii} = (a_{11} + \cdots + a_{nn})\sum_{i=1}^{p} b_{ii} = \operatorname{tr}(A)\operatorname{tr}(B),$$

and symmetrically

$$\operatorname{tr}(B)\operatorname{tr}(A) = \Big(\sum_{i=1}^{p} b_{ii}\Big)(a_{11} + \cdots + a_{nn}) = b_{11}\sum_{j=1}^{n} a_{jj} + b_{22}\sum_{j=1}^{n} a_{jj} + \cdots + b_{pp}\sum_{j=1}^{n} a_{jj} = \operatorname{tr}(B \otimes A). \qquad \Box$$
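The 6 ≠ 7 example in part (a) is easy to reproduce; the following quick check (not part of the thesis) uses NumPy.

```python
import numpy as np

# The example matrices from part (a) of the proof of Proposition 2.13.
A = np.array([[1, 0],
              [1, 1],
              [2, 0]])
B = np.array([[1, 2, 0],
              [1, 4, 1]])

print(np.trace(np.kron(A, B)))  # 6
print(np.trace(np.kron(B, A)))  # 7
```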

Proposition 2.14. Let A ∈ M_{m,n} and B ∈ M_{p,q} be given with mp = nq (so that A ⊗ B and B ⊗ A are square and their determinants are defined). Then det(A ⊗ B) = det(B ⊗ A); note, however, that A ⊗ B can be nonsingular only if both A and B are square.

Proof. Suppose A ⊗ B is nonsingular. Then

$$mp = \operatorname{rank}(A \otimes B) = \operatorname{rank}(A)\operatorname{rank}(B) \le \min(m,n)\min(p,q).$$

If m > n, then min(m,n)min(p,q) ≤ np < mp, a contradiction; if m < n, then min(m,n)min(p,q) ≤ mq < nq = mp, again a contradiction. Hence m = n, and mp = nq then forces p = q; that is, A and B are both square. □

Proposition 2.15. Let A ∈ M_n and B ∈ M_m, and let ‖·‖₂ denote the usual Euclidean norm. Then ‖x ⊗ y‖₂ = ‖x‖₂‖y‖₂ and ‖(A ⊗ B)(x ⊗ y)‖₂ = ‖Ax‖₂‖By‖₂ for all x ∈ C^n and y ∈ C^m.

Proof. For x = (x_1, …, x_n)^T ∈ C^n and y = (y_1, …, y_m)^T ∈ C^m the Euclidean norm gives

$$\|x\|_2 = \sqrt{|x_1|^2 + \cdots + |x_n|^2} = \sqrt{x_1\bar{x}_1 + \cdots + x_n\bar{x}_n}, \qquad \|y\|_2 = \sqrt{|y_1|^2 + \cdots + |y_m|^2} = \sqrt{y_1\bar{y}_1 + \cdots + y_m\bar{y}_m}.$$

Since

$$x \otimes y = (x_1y_1, \ldots, x_1y_m,\; x_2y_1, \ldots, x_2y_m,\; \ldots,\; x_ny_1, \ldots, x_ny_m)^T,$$

we obtain

$$\|x \otimes y\|_2 = \sqrt{|x_1y_1|^2 + \cdots + |x_1y_m|^2 + \cdots + |x_ny_1|^2 + \cdots + |x_ny_m|^2} = \sqrt{|x_1|^2\|y\|_2^2 + |x_2|^2\|y\|_2^2 + \cdots + |x_n|^2\|y\|_2^2} = \sqrt{|x_1|^2 + \cdots + |x_n|^2}\;\sqrt{\|y\|_2^2} = \|x\|_2\|y\|_2.$$

Consequently ‖(A ⊗ B)(x ⊗ y)‖₂ = ‖(Ax) ⊗ (By)‖₂ = ‖Ax‖₂·‖By‖₂ for all x ∈ C^n and y ∈ C^m. □

Proposition 2.16. If A ∈ M_n, B ∈ M_m and C ∈ M_p, then (A ⊕ B) ⊗ C = (A ⊗ C) ⊕ (B ⊗ C), but it need not be true that A ⊗ (B ⊕ C) = (A ⊗ B) ⊕ (A ⊗ C).

Proof. For the first part, using (2.1),

$$(A \oplus B) \otimes C = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \otimes C = \begin{pmatrix} A \otimes C & 0 \otimes C \\ 0 \otimes C & B \otimes C \end{pmatrix} = \begin{pmatrix} A \otimes C & 0 \\ 0 & B \otimes C \end{pmatrix} = (A \otimes C) \oplus (B \otimes C).$$

For the second part we show that A ⊗ (B ⊕ C) ≠ (A ⊗ B) ⊕ (A ⊗ C) by example. Take

$$A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$

Then

$$A \otimes (B \oplus C) = A \otimes \begin{pmatrix} 2&1&0&0\\ 2&3&0&0\\ 0&0&1&0\\ 0&0&1&1 \end{pmatrix} = \begin{pmatrix} 2&1&0&0&2&1&0&0\\ 2&3&0&0&2&3&0&0\\ 0&0&1&0&0&0&1&0\\ 0&0&1&1&0&0&1&1\\ 2&1&0&0&-2&-1&0&0\\ 2&3&0&0&-2&-3&0&0\\ 0&0&1&0&0&0&-1&0\\ 0&0&1&1&0&0&-1&-1 \end{pmatrix} \in M_8,$$

but

$$(A \otimes B) \oplus (A \otimes C) = \begin{pmatrix} 2&1&2&1\\ 2&3&2&3\\ 2&1&-2&-1\\ 2&3&-2&-3 \end{pmatrix} \oplus \begin{pmatrix} 1&0&1&0\\ 1&1&1&1\\ 1&0&-1&0\\ 1&1&-1&-1 \end{pmatrix} = \begin{pmatrix} 2&1&2&1&0&0&0&0\\ 2&3&2&3&0&0&0&0\\ 2&1&-2&-1&0&0&0&0\\ 2&3&-2&-3&0&0&0&0\\ 0&0&0&0&1&0&1&0\\ 0&0&0&0&1&1&1&1\\ 0&0&0&0&1&0&-1&0\\ 0&0&0&0&1&1&-1&-1 \end{pmatrix} \in M_8.$$

Evidently the two are not the same. □

Theorem 2.17. Let A ∈ M_{m,n} and B ∈ M_{n,k} be given. There is a very useful identity:

$$\operatorname{vec}(AB) = (I_k \otimes A)\operatorname{vec}(B). \tag{2.4}$$

Proposition 2.18. For given A ∈ M_{m,n}, B ∈ M_{p,q}, X ∈ M_{q,n} and Y ∈ M_{p,m},

$$(\operatorname{vec} Y)^T(A \otimes B)(\operatorname{vec} X) = \operatorname{tr}(A^TY^TBX).$$

Proof. Since tr(AB) = vec(A^T)^T vec(B), we have

$$\operatorname{tr}(A^TY^TBX) = \operatorname{tr}\big((YA)^TBX\big) = \operatorname{vec}\big(((YA)^T)^T\big)^T\operatorname{vec}(BX) = \operatorname{vec}(YA)^T\operatorname{vec}(BX) = \big((A^T \otimes I)\operatorname{vec}(Y)\big)^T(I \otimes B)\operatorname{vec}(X) = \operatorname{vec}(Y)^T(A \otimes I)(I \otimes B)\operatorname{vec}(X) = \operatorname{vec}(Y)^T(A \otimes B)\operatorname{vec}(X).$$

This completes the proof. □

Proposition 2.19. Let A ∈ M_n and B ∈ M_m be given.
(a) (A ⊗ I)^k = A^k ⊗ I and (I ⊗ B)^k = I ⊗ B^k for k = 1, 2, ….
(b) For any polynomial p(t), p(A ⊗ I) = p(A) ⊗ I and p(I ⊗ B) = I ⊗ p(B).
(c) The power-series definition of the matrix exponential, e^X = I + X + X²/2! + ⋯, implies e^{A⊗I} = e^A ⊗ I and e^{I⊗B} = I ⊗ e^B.
(d) If C, D ∈ M_n commute, then e^{C+D} = e^Ce^D; this fact shows that e^{A⊗I+I⊗B} = e^A ⊗ e^B.

Proof. (a) We show (A ⊗ I)^k = A^k ⊗ I for all k = 1, 2, … by induction. For k = 1 we have (A ⊗ I)^1 = A^1 ⊗ I. Assume (A ⊗ I)^n = A^n ⊗ I holds. Then, using (2.2),

$$(A \otimes I)^{n+1} = (A \otimes I)^n(A \otimes I) = (A^n \otimes I)(A \otimes I) = (A^nA) \otimes (II) = A^{n+1} \otimes I.$$

Thus, by mathematical induction, (A ⊗ I)^k = A^k ⊗ I for all k. The same induction gives (I ⊗ B)^k = I ⊗ B^k: for k = 1, (I ⊗ B)^1 = I ⊗ B^1, and assuming (I ⊗ B)^n = I ⊗ B^n,

$$(I \otimes B)^{n+1} = (I \otimes B)^n(I \otimes B) = (I \otimes B^n)(I \otimes B) = (II) \otimes (B^nB) = I \otimes B^{n+1}.$$

(b) Let p(t) = a_0 + a_1t + a_2t² + ⋯ + a_{n−1}t^{n−1} + a_nt^n. Then

$$p(A \otimes I) = a_0I + a_1(A \otimes I) + a_2(A \otimes I)^2 + \cdots + a_n(A \otimes I)^n = a_0I + a_1(A \otimes I) + a_2(A^2 \otimes I) + \cdots + a_n(A^n \otimes I) = \big(a_0A^0 + a_1A + a_2A^2 + \cdots + a_nA^n\big) \otimes I = p(A) \otimes I,$$

and

$$p(I \otimes B) = a_0I + a_1(I \otimes B) + a_2(I \otimes B^2) + \cdots + a_n(I \otimes B^n) = I \otimes \big(a_0B^0 + a_1B + a_2B^2 + \cdots + a_nB^n\big) = I \otimes p(B).$$

(c) From e^X = I + X + X²/2! + ⋯,

$$e^{A \otimes I} = I + (A \otimes I) + \tfrac{1}{2!}(A \otimes I)^2 + \tfrac{1}{3!}(A \otimes I)^3 + \cdots = I + (A \otimes I) + \tfrac{1}{2!}(A^2 \otimes I) + \cdots = \big(I + A + \tfrac{1}{2!}A^2 + \cdots\big) \otimes I = e^A \otimes I,$$

$$e^{I \otimes B} = I + (I \otimes B) + \tfrac{1}{2!}(I \otimes B^2) + \tfrac{1}{3!}(I \otimes B^3) + \cdots = I \otimes \big(I + B + \tfrac{1}{2!}B^2 + \cdots\big) = I \otimes e^B.$$

(d) Suppose CD = DC. We first prove e^{C+D} = e^Ce^D:

$$e^{C+D} = \sum_{k=0}^{\infty} \frac{1}{k!}(C+D)^k = \sum_{k=0}^{\infty} \frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}C^jD^{k-j} = \sum_{k=0}^{\infty}\sum_{j=0}^{k} \frac{1}{j!(k-j)!}C^jD^{k-j}$$

(the binomial expansion of (C + D)^k is valid because C and D commute); substituting p = k − j and q = j,

$$= \sum_{q=0}^{\infty}\sum_{p=0}^{\infty} \frac{1}{q!}\frac{1}{p!}C^qD^p = \Big(\sum_{q=0}^{\infty}\frac{1}{q!}C^q\Big)\Big(\sum_{p=0}^{\infty}\frac{1}{p!}D^p\Big) = e^Ce^D.$$

Now, since (A ⊗ I)(I ⊗ B) = A ⊗ B = (I ⊗ B)(A ⊗ I), the matrices A ⊗ I and I ⊗ B commute, and therefore

$$e^{A \otimes I + I \otimes B} = e^{A \otimes I}e^{I \otimes B} = (e^A \otimes I)(I \otimes e^B) = e^A \otimes e^B. \qquad \Box$$

2.2 Linear matrix equations and Kronecker products

Theorem 2.20. Let A ∈ M_{m,n}(F), B ∈ M_{p,q}(F) and C ∈ M_{m,q}(F) be given, and let X ∈ M_{n,p}(F) be unknown. The matrix equation AXB = C is equivalent to the system of qm equations in np unknowns given by

$$(B^T \otimes A)\operatorname{vec}(X) = \operatorname{vec}(C). \tag{2.5}$$

Proof. Firstly, for vectors a and b with b = (b_1, …, b_m)^T, the columns of ab^T are b_1a, …, b_ma, so

$$\operatorname{vec}(ab^T) = \operatorname{vec}(b_1a, b_2a, \cdots, b_ma) = \begin{pmatrix} b_1a \\ \vdots \\ b_ma \end{pmatrix} = b \otimes a.$$

Secondly, write X = (x_1, …, x_p) ∈ M_{n,p} columnwise and I_p = (e_1, …, e_p), so that X = Σ_{i=1}^p x_ie_i^T. Then

$$\operatorname{vec}(AXB) = \operatorname{vec}\Big(A\sum_{i=1}^{p} x_ie_i^TB\Big) = \sum_{i=1}^{p} \operatorname{vec}\big((Ax_i)(B^Te_i)^T\big) = \sum_{i=1}^{p} (B^Te_i) \otimes (Ax_i) = (B^T \otimes A)\sum_{i=1}^{p} e_i \otimes x_i = (B^T \otimes A)\sum_{i=1}^{p} \operatorname{vec}(x_ie_i^T) = (B^T \otimes A)\operatorname{vec}(X). \qquad \Box$$

Conclusion: we may rewrite each of the linear matrix equations (a)–(e) using the Kronecker product and the vec(·) notation (a numerical check of (c) follows the list):

(a) AX = B becomes (I ⊗ A) vec(X) = vec(B).
(b) AX + XB = C becomes ((I ⊗ A) + (B^T ⊗ I)) vec(X) = vec(C).
(c) AXB = C becomes (B^T ⊗ A) vec(X) = vec(C).
(d) A_1XB_1 + ⋯ + A_kXB_k = C becomes (B_1^T ⊗ A_1 + ⋯ + B_k^T ⊗ A_k) vec(X) = vec(C).
(e) AX + YB = C becomes (I ⊗ A) vec(X) + (B^T ⊗ I) vec(Y) = vec(C).
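Identity (2.5) is easy to verify numerically, keeping in mind that vec stacks columns, i.e. column-major ('F') order in NumPy; the sketch below (arbitrary random matrices, not from the thesis) checks rewriting (c).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 3, 4, 5, 2
A = rng.standard_normal((m, n))
X = rng.standard_normal((n, p))
B = rng.standard_normal((p, q))

# vec stacks the columns of a matrix: column-major ('F') order in NumPy.
vec = lambda M: M.flatten(order='F')

# Theorem 2.20 / rewriting (c): vec(AXB) = (B^T ⊗ A) vec(X).
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```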

Lemma 2.21. The mapping vec : M_{m,n} → C^{mn} given by X ↦ vec(X) is an isomorphism.

Lemma 2.22. Let T : M_{m,n} → M_{p,q} be a linear transformation. Then there is a unique matrix K(T) ∈ M_{pq,mn} such that

$$\operatorname{vec}(T(X)) = K(T)\operatorname{vec}(X) \tag{2.6}$$

for all X ∈ M_{m,n}.

Definition 2.5. A linear transformation T : M_n → M_n is called a derivation if

$$T(XY) = T(X)Y + XT(Y) \tag{2.7}$$

for all X, Y ∈ M_n.

Theorem 2.23. Let T : M_n → M_n be a linear transformation. Then T is a derivation if and only if there is some C ∈ M_n such that

$$T(X) = CX - XC \tag{2.8}$$

for all X ∈ M_n.

Proof. (⇐) Suppose T(X) = CX − XC for all X ∈ M_n, as in (2.8). Then for all X, Y ∈ M_n,

$$T(XY) = C(XY) - (XY)C = (CX)Y - (XC)Y + X(CY) - X(YC) = (CX - XC)Y + X(CY - YC) = T(X)Y + XT(Y),$$

so T is a derivation.

(⇒) Suppose T is a derivation, i.e. T(XY) = T(X)Y + XT(Y) for all X, Y ∈ M_n. We focus on the equation

$$\operatorname{vec}(T(XY)) = \operatorname{vec}(T(X)Y) + \operatorname{vec}(XT(Y)). \tag{2.9}$$

Owing to (2.6) and (2.4), the three terms in (2.9) become, respectively,

$$\operatorname{vec}(T(XY)) = K(T)\operatorname{vec}(XY) = K(T)(I \otimes X)\operatorname{vec}(Y), \tag{2.10}$$

$$\operatorname{vec}(T(X)Y) = (I \otimes T(X))\operatorname{vec}(Y), \tag{2.11}$$

$$\operatorname{vec}(XT(Y)) = (I \otimes X)\operatorname{vec}(T(Y)) = (I \otimes X)K(T)\operatorname{vec}(Y). \tag{2.12}$$

In view of these, (2.9) turns into

$$K(T)(I \otimes X)\operatorname{vec}(Y) = (I \otimes T(X))\operatorname{vec}(Y) + (I \otimes X)K(T)\operatorname{vec}(Y) \tag{2.13}$$

for all X, Y ∈ M_n, hence

$$K(T)(I \otimes X) = (I \otimes T(X)) + (I \otimes X)K(T), \tag{2.14}$$

which rearranges to

$$K(T)(I \otimes X) - (I \otimes X)K(T) = I \otimes T(X) \tag{2.15}$$

for all X ∈ M_n. Write K(T) = (K_ij) in n × n blocks K_ij ∈ M_n; then (2.15) reads

$$(K_{ij})(I \otimes X) - (I \otimes X)(K_{ij}) = I \otimes T(X),$$

which tells us that

$$K_{ii}X - XK_{ii} = T(X), \quad i = 1, 2, \ldots, n; \qquad K_{ij}X - XK_{ij} = 0, \quad i, j = 1, 2, \ldots, n, \; i \ne j.$$

Taking C = K_{11}, we conclude T(X) = CX − XC for all X ∈ M_n. □

Theorem 2.24. Let m, n be positive integers. Then there is a unique matrix P(m,n) ∈ M_{mn} such that

$$\operatorname{vec}(X^T) = P(m,n)\operatorname{vec}(X) \tag{2.16}$$

for all X ∈ M_{m,n}. It is given by

$$P(m,n) = \sum_{i=1}^{m}\sum_{j=1}^{n} E_{ij} \otimes E_{ij}^T,$$

where E_ij ∈ M_{m,n} has a 1 in position (i, j) and zeros elsewhere. Moreover, P(m,n) = P(n,m)^T = P(n,m)^{-1}.

Proof. Above all, for X = (x_ij) ∈ M_{m,n} and E_ij ∈ M_{m,n} one finds

$$E_{ij}^TXE_{ij}^T = x_{ij}E_{ij}^T, \quad i = 1, \ldots, m, \; j = 1, \ldots, n, \tag{2.17}$$

and hence

$$X^T = \sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}E_{ij}^T = \sum_{i=1}^{m}\sum_{j=1}^{n} E_{ij}^TXE_{ij}^T.$$

Thus

$$\operatorname{vec}(X^T) = \sum_{i=1}^{m}\sum_{j=1}^{n} \operatorname{vec}(E_{ij}^TXE_{ij}^T) = \sum_{i=1}^{m}\sum_{j=1}^{n} \big((E_{ij}^T)^T \otimes E_{ij}^T\big)\operatorname{vec}(X) = \sum_{i=1}^{m}\sum_{j=1}^{n} (E_{ij} \otimes E_{ij}^T)\operatorname{vec}(X) = P(m,n)\operatorname{vec}(X).$$

On account of (X^T)^T = X and X^T ∈ M_{n,m}, we have

$$\operatorname{vec}(X) = P(n,m)\operatorname{vec}(X^T) = P(n,m)P(m,n)\operatorname{vec}(X),$$

which means P(n,m) = P(m,n)^{-1}. Moreover, writing F_ij ∈ M_{n,m} for the unit matrices of size n × m and noting F_ij = E_ji^T,

$$P(n,m) = \sum_{i=1}^{n}\sum_{j=1}^{m} F_{ij} \otimes F_{ij}^T = \sum_{j=1}^{m}\sum_{i=1}^{n} E_{ji}^T \otimes E_{ji} = \sum_{i=1}^{m}\sum_{j=1}^{n} (E_{ij} \otimes E_{ij}^T)^T = P(m,n)^T.$$

Wherefore this shows that P(n,m) = P(m,n)^{-1} = P(m,n)^T. □
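A short sketch (not from the thesis) that assembles P(m,n) = Σ_{ij} E_ij ⊗ E_ij^T and confirms (2.16) together with P(n,m) = P(m,n)^T:

```python
import numpy as np

def perm_matrix(m, n):
    """P(m, n) = sum over i, j of E_ij ⊗ E_ij^T, with E_ij in M_{m,n}."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n))
            E[i, j] = 1.0
            P += np.kron(E, E.T)
    return P

rng = np.random.default_rng(3)
m, n = 3, 5
X = rng.standard_normal((m, n))
P = perm_matrix(m, n)
vec = lambda M: M.flatten(order='F')

assert np.allclose(P @ vec(X), vec(X.T))      # identity (2.16)
assert np.allclose(perm_matrix(n, m), P.T)    # P(n, m) = P(m, n)^T
```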

Corollary 2.25. For any four positive integers m, n, p, q, with P(m,p) ∈ M_{mp} and P(n,q) ∈ M_{nq} the permutation matrices of Theorem 2.24,

$$B \otimes A = P(m,p)^T(A \otimes B)P(n,q)$$

for all A ∈ M_{m,n} and B ∈ M_{p,q}. When m = n and p = q this becomes B ⊗ A = P(n,q)^T(A ⊗ B)P(n,q); that is, B ⊗ A is permutation-similar to A ⊗ B whenever both A and B are square. More generally, for A_1, …, A_k ∈ M_{m,n} and B_1, …, B_k ∈ M_{p,q},

$$B_1 \otimes A_1 + \cdots + B_k \otimes A_k = P(m,p)^T(A_1 \otimes B_1 + \cdots + A_k \otimes B_k)P(n,q). \tag{2.18}$$

Corollary 2.26. Let A_1, …, A_r and B_1, …, B_s be given square complex matrices. Then (A_1 ⊕ A_2 ⊕ ⋯ ⊕ A_r) ⊗ (B_1 ⊕ B_2 ⊕ ⋯ ⊕ B_s) is permutation-similar to the direct sum of the A_i ⊗ B_j, i = 1, …, r, j = 1, …, s.

Proof. We use the example r = s = 2 to illustrate. By Proposition 2.16,

$$(A_1 \oplus A_2) \otimes (B_1 \oplus B_2) = \big(A_1 \otimes (B_1 \oplus B_2)\big) \oplus \big(A_2 \otimes (B_1 \oplus B_2)\big),$$

and by (2.18) each summand A_i ⊗ (B_1 ⊕ B_2) is permutation-similar to (B_1 ⊕ B_2) ⊗ A_i = (B_1 ⊗ A_i) ⊕ (B_2 ⊗ A_i), which in turn is permutation-similar to (A_i ⊗ B_1) ⊕ (A_i ⊗ B_2). It follows that (A_1 ⊕ A_2) ⊗ (B_1 ⊕ B_2) is permutation-similar to (A_1 ⊗ B_1) ⊕ (A_1 ⊗ B_2) ⊕ (A_2 ⊗ B_1) ⊕ (A_2 ⊗ B_2). □

Proposition 2.27. Let A, B, C ∈ M_n be given. We can learn two things. The first is that the equation AXB = C has a unique solution X ∈ M_n for every given C if and only if both A and B are nonsingular. The second is that if either A or B is singular, then a solution X exists if and only if rank(B^T ⊗ A) = rank([B^T ⊗ A | vec(C)]).

Proof. For the first part:
(⇐) If A and B are nonsingular, then multiplying AXB = C by A^{-1} on the left and B^{-1} on the right gives X = A^{-1}CB^{-1}, so a solution exists for every C. For uniqueness, assume there is another solution X̃ with AX̃B = C; then AXB − AX̃B = C − C = 0, i.e. A(X − X̃)B = 0, and since A^{-1} and B^{-1} both exist, X − X̃ = 0, i.e. X = X̃.
(⇒) Conversely, if A (say) were singular, then the homogeneous equation AXB = 0 would have a nontrivial solution, so for any C for which a solution exists it could not be unique; uniqueness for every C therefore forces both A and B to be nonsingular.

For the second part: for A ∈ M_{m,n} and b ∈ F^{m×1} we have the three standard characterizations

$$\operatorname{rank}(A) \ne \operatorname{rank}((A \mid b)) \iff Ax = b \text{ has no solution};$$
$$\operatorname{rank}(A) = \operatorname{rank}((A \mid b)) = n \iff Ax = b \text{ has a unique solution};$$
$$\operatorname{rank}(A) = \operatorname{rank}((A \mid b)) < n \iff Ax = b \text{ has infinitely many solutions}. \tag{2.19}$$

Let G = B^T ⊗ A, b = vec(C) and x = vec(X); then AXB = C may be rewritten as Gx = b. By (2.19), Gx = b has a solution x if and only if rank(G) = rank((G | b)), which means AXB = C has a solution if and only if rank(B^T ⊗ A) = rank([B^T ⊗ A | vec(C)]). □

Proposition 2.28. Let S ∈ M_n be nonsingular. The similarity map T : M_n → M_n given by T(X) ≡ S^{-1}XS is a linear transformation on M_n, and 1 is an eigenvalue of T of multiplicity at least n.

Proof. Part one: first, for X, Y ∈ M_n,

$$T(X + Y) = S^{-1}(X + Y)S = S^{-1}(XS + YS) = S^{-1}XS + S^{-1}YS = T(X) + T(Y);$$

second, for all a ∈ F and X ∈ M_n,

$$T(aX) = S^{-1}(aX)S = aS^{-1}XS = aT(X).$$

Thus T is a linear transformation.
Part two: we begin with

$$\operatorname{vec}(T(X)) = K(T)\operatorname{vec}(X) \quad \text{and} \quad \operatorname{vec}(T(X)) = \operatorname{vec}(S^{-1}XS)$$

for T(X) = S^{-1}XS; by (2.5), vec(S^{-1}XS) = (S^T ⊗ S^{-1}) vec(X), so the matrix of T relative to the basis β = {E_ij}, i, j = 1, …, n, is [T]_β = S^T ⊗ S^{-1}. If {λ_1, …, λ_n} are the eigenvalues of S, then the eigenvalues of S^{-1} are {1/λ_1, …, 1/λ_n}, so by Theorem 2.5 the eigenvalues of S^T ⊗ S^{-1} are the products {λ_i · (1/λ_j)}; the n products λ_1 · (1/λ_1) = ⋯ = λ_n · (1/λ_n) = 1 show that 1 is an eigenvalue of T of multiplicity at least n. □

Proposition 2.29. Let T : M_n → M_n be a given linear derivation. In view of (2.7) and (2.8) we obtain the following three facts:
(a) T(I) = 0;
(b) T(A^{-1}) = −A^{-1}T(A)A^{-1} for all nonsingular A ∈ M_n;
(c) T(ABC) = T(A)BC + AT(B)C + ABT(C) for all A, B, C ∈ M_n.

Proof. We prove these directly, writing T(X) = CX − XC as in (2.8).

(a)

$$T(I) = T(I \cdot I) = T(I)I + IT(I) = (CI - IC)I + I(CI - IC) = CI^2 - ICI + ICI - I^2C = C - C = 0.$$

(b)

$$T(A^{-1}) = CA^{-1} - A^{-1}C = A^{-1}ACA^{-1} - A^{-1}CAA^{-1} = -A^{-1}(CA - AC)A^{-1} = -A^{-1}T(A)A^{-1}$$

for all nonsingular matrices A ∈ M_n.

(c) Writing C̃ for the fixed matrix in the representation T(X) = C̃X − XC̃,

$$T(ABC) = \tilde{C}ABC - ABC\tilde{C} = \tilde{C}ABC - A\tilde{C}BC + A\tilde{C}BC - AB\tilde{C}C + AB\tilde{C}C - ABC\tilde{C} = (\tilde{C}A - A\tilde{C})BC + A(\tilde{C}B - B\tilde{C})C + AB(\tilde{C}C - C\tilde{C}) = T(A)BC + AT(B)C + ABT(C)$$

for all A, B, C ∈ M_n. □

Proposition 2.30. Let A ∈ M_n and B ∈ M_m be given square matrices. Then:
(a) if A ⊗ B is normal, then so is B ⊗ A;
(b) if A ⊗ B is unitary, then so is B ⊗ A.

Proof. We know two things: if A ⊗ B is normal, then (A ⊗ B)*(A ⊗ B) = (A ⊗ B)(A ⊗ B)*; and if A ⊗ B is unitary, then (A ⊗ B)*(A ⊗ B) = I. Recall from Corollary 2.25 that B ⊗ A = P^T(A ⊗ B)P for a real permutation matrix P, so that P* = P^T and P^TP = I.

(a)

$$(B \otimes A)^*(B \otimes A) = (P^T(A \otimes B)P)^*(P^T(A \otimes B)P) = P^T(A \otimes B)^*(A \otimes B)P = P^T(A \otimes B)(A \otimes B)^*P = (P^T(A \otimes B)P)(P^T(A \otimes B)P)^* = (B \otimes A)(B \otimes A)^*.$$

(b) B ⊗ A = P^T(A ⊗ B)P implies

$$(B \otimes A)^*(B \otimes A) = (P^T(A \otimes B)P)^*(P^T(A \otimes B)P) = P^T(A \otimes B)^*PP^T(A \otimes B)P = P^T(A \otimes B)^*(A \otimes B)P = P^TIP = P^TP = I.$$

Thus B ⊗ A is unitary. □

Proposition 2.31. Let A ∈ M_m and B ∈ M_n be given, and suppose neither A nor B is the zero matrix. Then A ⊗ B is diagonalizable if and only if both A and B are diagonalizable.

Proof. (⇒) Suppose A ⊗ B is diagonalizable, with a diagonalizing similarity that may be taken of the form T ⊗ S: (T ⊗ S)^{-1}(A ⊗ B)(T ⊗ S) = D_{A⊗B} with D_{A⊗B} diagonal. Then

$$(T^{-1} \otimes S^{-1})(A \otimes B)(T \otimes S) = (T^{-1}AT) \otimes (S^{-1}BS) = D_A \otimes D_B,$$

where D_A and D_B are diagonal matrices. Thus both A and B are diagonalizable.
(⇐) Suppose both A and B are diagonalizable: there exist nonsingular matrices T and S such that T^{-1}AT = D_A and S^{-1}BS = D_B, with D_A and D_B diagonal. Clearly A = TD_AT^{-1} and B = SD_BS^{-1}, so

$$A \otimes B = (TD_AT^{-1}) \otimes (SD_BS^{-1}) = (T \otimes S)(D_A \otimes D_B)(T^{-1} \otimes S^{-1}) = (T \otimes S)(D_A \otimes D_B)(T \otimes S)^{-1}.$$

Thus A ⊗ B is diagonalizable. □

Proposition 2.32. When m = n, the permutation matrix P(n,n) defined in Theorem 2.24 is symmetric and has eigenvalues ±1 with respective multiplicities n(n ± 1)/2. It follows that tr(P(n,n)) = n and det(P(n,n)) = (−1)^{n(n−1)/2}.

Proof. We prove this briefly. Recall P(m,n) = Σ_{i=1}^m Σ_{j=1}^n E_ij ⊗ E_ij^T, with i = 1, 2, …, m and j = 1, 2, …, n. When m = n,

$$P(n,n) = \sum_{i=1}^{n}\sum_{j=1}^{n} E_{ij} \otimes E_{ij}^T = \sum_{i=1}^{n}\sum_{j=1}^{n} E_{ji} \otimes E_{ji}^T, \tag{2.20}$$

and because (A ⊗ B)^T = A^T ⊗ B^T and E_ji = E_ij^T for these square unit matrices, (2.20) turns into

$$P(n,n) = \sum_{i=1}^{n}\sum_{j=1}^{n} (E_{ij} \otimes E_{ij}^T)^T = P(n,n)^T,$$

so P(n,n) is symmetric. Applying (2.16) twice gives P(n,n)² = I, so the eigenvalues are 1 and −1, with respective multiplicities n(n + 1)/2 and n(n − 1)/2: for λ = 1 the eigenspace of P(n,n) is the vec-image of the symmetric matrices, and for λ = −1 it is the vec-image of the skew-symmetric matrices. Hence, first,

$$\operatorname{tr}(P(n,n)) = 1 \times \frac{n(n+1)}{2} + (-1) \times \frac{n(n-1)}{2} = n,$$

and second,

$$\det(P(n,n)) = 1^{\frac{n(n+1)}{2}} \cdot (-1)^{\frac{n(n-1)}{2}} = (-1)^{\frac{n(n-1)}{2}}. \qquad \Box$$

Proposition 2.33. For two vectors x ∈ C^m and y ∈ C^n it follows that x ⊗ y^T = xy^T = y^T ⊗ x; moreover P(m,1) = I ∈ M_m and P(1,n) = I ∈ M_n.

Proof. Write x = (x_1, x_2, …, x_m)^T ∈ C^m and y = (y_1, y_2, …, y_n)^T ∈ C^n. For the first part,

$$x \otimes y^T = \begin{pmatrix} x_1y^T \\ x_2y^T \\ \vdots \\ x_my^T \end{pmatrix} = \begin{pmatrix} x_1y_1 & x_1y_2 & \cdots & x_1y_n \\ x_2y_1 & x_2y_2 & & x_2y_n \\ \vdots & & \ddots & \vdots \\ x_my_1 & x_my_2 & \cdots & x_my_n \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}\big(y_1\;\; y_2\;\; \cdots\;\; y_n\big) = xy^T.$$

Identically,

$$xy^T = \begin{pmatrix} x_1y_1 & \cdots & x_1y_n \\ \vdots & & \vdots \\ x_my_1 & \cdots & x_my_n \end{pmatrix} = \big(y_1x\;\; y_2x\;\; \cdots\;\; y_nx\big) = \big(y_1\;\; y_2\;\; \cdots\;\; y_n\big) \otimes \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix} = y^T \otimes x.$$

This shows that x ⊗ y^T = xy^T = y^T ⊗ x.
For the second part, by (2.16) and (2.17) we have P(m,1) = Σ_{i=1}^m E_{i1} ⊗ E_{i1}^T with E_{i1} ∈ M_{m,1}, and vec(X^T) = P(m,1) vec(X) for all X ∈ M_{m,1} = C^m. But for a column vector x ∈ C^m, vec(x^T) = vec(x) = x, so P(m,1)x = x for every x ∈ C^m, i.e. P(m,1) = I ∈ M_m. Similarly, P(1,n) = I ∈ M_n. □

Proposition 2.34. Let A = (a_ij), B = (b_ij) ∈ M_{m,n} be given. Then

$$\operatorname{tr}\big(P(m,n)(A^T \otimes B)\big) = \operatorname{tr}(A^TB) = \sum_{i,j} a_{ij}b_{ij},$$

where P(m,n) is the permutation matrix of Theorem 2.24.

Proof. Thanks to (2.17) we compute directly:

$$P(m,n)(A^T \otimes B) = \Big(\sum_{i=1}^{m}\sum_{j=1}^{n} E_{ij} \otimes E_{ij}^T\Big)(A^T \otimes B) = \sum_{i=1}^{m}\sum_{j=1}^{n} (E_{ij} \otimes E_{ij}^T)(A^T \otimes B) = \sum_{i=1}^{m}\sum_{j=1}^{n} (E_{ij}A^T) \otimes (E_{ij}^TB).$$

It follows that

$$\operatorname{tr}\big(P(m,n)(A^T \otimes B)\big) = \sum_{i=1}^{m}\sum_{j=1}^{n} \operatorname{tr}\big((E_{ij}A^T) \otimes (E_{ij}^TB)\big) = \sum_{i,j} \operatorname{tr}(E_{ij}A^T)\operatorname{tr}(E_{ij}^TB) = \sum_{i,j} a_{ij}b_{ij} = \operatorname{tr}(A^TB),$$

where, for each fixed i and j, tr(E_ijA^T) = a_ij and tr(E_ij^TB) = b_ij. □

2.3 Kronecker sums and the equation AX + XB = C

Definition 2.6. Given any two matrices A ∈ M_n and B ∈ M_m, the Kronecker sum of A and B, denoted by A ⊞ B (not to be confused with the direct sum A ⊕ B), is defined by

$$(I_m \otimes A) + (B^T \otimes I_n). \tag{2.21}$$

Theorem 2.35. Given any two matrices A ∈ M_n and B ∈ M_m: if λ ∈ σ(A) and x ∈ C^n is a corresponding eigenvector of A, and if μ ∈ σ(B) and y ∈ C^m is a corresponding eigenvector of B^T (recall σ(B^T) = σ(B)), then λ + μ is an eigenvalue of the Kronecker sum (I_m ⊗ A) + (B^T ⊗ I_n), and y ⊗ x ∈ C^{nm} is a corresponding eigenvector. If σ(A) = {λ_1, …, λ_n} and σ(B) = {μ_1, …, μ_m}, then σ((I_m ⊗ A) + (B^T ⊗ I_n)) = {λ_i + μ_j}.

Proof. On account of (2.21),

$$(A \boxplus B)(y \otimes x) = \big((I_m \otimes A) + (B^T \otimes I_n)\big)(y \otimes x) = (y \otimes Ax) + (B^Ty \otimes x) = (y \otimes \lambda x) + (\mu y \otimes x) = (\lambda + \mu)(y \otimes x). \qquad \Box$$

Theorem 2.36. Let A ∈ M_n and B ∈ M_m be two matrices. The equation AX + XB = C has a unique solution X ∈ M_{n,m} for each C ∈ M_{n,m} if and only if σ(A) ∩ σ(−B) = ∅.

Proof. We can rewrite AX + XB = C as

$$(I_m \otimes A)\operatorname{vec}(X) + (B^T \otimes I_n)\operatorname{vec}(X) = \operatorname{vec}(C),$$

which as a result of (2.21) becomes

$$(A \boxplus B)\operatorname{vec}(X) = \operatorname{vec}(C). \tag{2.22}$$

Let G = A ⊞ B, c = vec(C) and x = vec(X); then (2.22) turns into Gx = c, which has a unique solution for every c if and only if G is nonsingular, i.e. all eigenvalues of G are nonzero. By Theorem 2.35 the eigenvalues of G are {λ_i + μ_j}, where the λ_i are the eigenvalues of A and the μ_j are the eigenvalues of B^T (B and B^T have the same eigenvalues); they are all nonzero exactly when no λ_i equals −μ_j, that is, when σ(A) ∩ σ(−B) = ∅. □
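Theorem 2.36 translates directly into a (dense) solution method for AX + XB = C; the sketch below (random matrices, not from the thesis) solves the equation through the Kronecker sum.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))
# For random A and B, σ(A) ∩ σ(−B) = ∅ almost surely, so by Theorem 2.36
# the Kronecker sum below is nonsingular and the solution is unique.

G = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))   # A ⊞ B, as in (2.21)
x = np.linalg.solve(G, C.flatten(order='F'))          # G vec(X) = vec(C)
X = x.reshape((n, m), order='F')

assert np.allclose(A @ X + X @ B, C)
```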

Lemma 2.37. Let J_r(0) ∈ M_r and J_s(0) ∈ M_s be singular Jordan blocks. Then X ∈ M_{r,s} is a solution of J_r(0)X − XJ_s(0) = 0 if and only if

$$X = (0,\, Y), \; Y \in M_r, \; 0 \in M_{r,s-r}, \quad \text{if } r \le s; \qquad X = \begin{pmatrix} Y \\ 0 \end{pmatrix}, \; Y \in M_s, \; 0 \in M_{r-s,s}, \quad \text{if } r \ge s,$$

where

$$Y \equiv \begin{pmatrix} a_0 & a_1 & a_2 & \cdots \\ & a_0 & a_1 & \ddots \\ & & a_0 & \ddots \\ & & & \ddots & a_1 \\ & & & & a_0 \end{pmatrix} = (y_{ij})$$

is an arbitrary upper triangular Toeplitz matrix, with y_ij = a_{j−i} for j ≥ i and y_ij = 0 for j < i. In either case the dimension of the nullspace of the linear transformation X ↦ J_r(0)X − XJ_s(0) is min{r, s}.

Theorem 2.38. Let A ∈ M_n and B ∈ M_m, and let

$$A = S\big(J_{n_1}(\lambda_1) \oplus J_{n_2}(\lambda_2) \oplus \cdots \oplus J_{n_p}(\lambda_p)\big)S^{-1}, \quad n_1 + n_2 + \cdots + n_p = n,$$

$$B = R\big(J_{m_1}(\mu_1) \oplus J_{m_2}(\mu_2) \oplus \cdots \oplus J_{m_q}(\mu_q)\big)R^{-1}, \quad m_1 + m_2 + \cdots + m_q = m,$$

be the respective Jordan canonical forms of A and B. The dimension of the nullspace of the linear transformation L : M_{n,m} → M_{n,m} given by

$$L : X \mapsto AX + XB \tag{2.23}$$

is Σ_{i=1}^p Σ_{j=1}^q ν_ij, where

$$\nu_{ij} = \begin{cases} 0, & \lambda_i \ne -\mu_j, \\ \min\{n_i, m_j\}, & \lambda_i = -\mu_j. \end{cases} \tag{2.24}$$

Definition 2.7. We say A ∈ M_n is nonderogatory if every eigenvalue of A has geometric multiplicity 1.

Corollary 2.39. Let A ∈ M_n be a given matrix. The set of matrices in M_n that commute with A is a subspace of M_n, and it has dimension at least n; the dimension equals n if and only if A is nonderogatory.

Proof. In (2.23) take B = −A, so that p = q, n_i = m_i and μ_i = −λ_i for i = 1, 2, …, p; the linear transformation becomes

$$L : X \mapsto AX - XA, \tag{2.25}$$

whose nullspace is exactly the set of matrices commuting with A. By Theorem 2.38 and (2.24) the dimension of this nullspace is

$$\sum_{i,j=1}^{p} \nu_{ij} \ge \sum_{i=1}^{p} \nu_{ii} = \sum_{i=1}^{p} n_i = n.$$

Equality holds precisely when ν_ij = 0 for all i ≠ j, i.e. when λ_i ≠ λ_j for i ≠ j, so that each eigenvalue of A appears in exactly one Jordan block — in other words, when every eigenvalue of A has geometric multiplicity 1, i.e. A is nonderogatory. Conversely, if some eigenvalue occurs in two or more Jordan blocks (A derogatory), then some ν_ij with i ≠ j is positive and the dimension strictly exceeds n. □

Definition 2.8. Let A ∈ M_n be a given matrix. The centralizer of A is the set

$$C(A) \equiv \{B \in M_n : AB = BA\} \tag{2.26}$$

of all matrices that commute with A. The set of all polynomials in A is the set

$$P(A) \equiv \{p(A) : p(t) \text{ is a polynomial}\}.$$

Theorem 2.40. Let A ∈ M_n be a given matrix and let q_A(t) denote the minimal polynomial of A. Then we have the following:
(a) P(A) and C(A) are both subspaces of M_n;
(b) P(A) ⊆ C(A);
(c) degree(q_A(t)) = dim(P(A)) ≤ n;
(d) dim(C(A)) ≥ n, with equality if and only if A is nonderogatory.

Proof. (a) One checks easily that P(A) and C(A) are both subspaces of M_n.
(b) For any polynomial p(t) = a_0 + a_1t + ⋯ + a_nt^n,

$$Ap(A) = A(a_0I + a_1A + \cdots + a_nA^n) = a_0A + a_1A^2 + \cdots + a_nA^{n+1} = (a_0I + a_1A + \cdots + a_nA^n)A = p(A)A,$$

so p(A) ∈ C(A).
(c) Let m = degree(q_A(t)). For any polynomial p(t), division with remainder gives p(t) = q_A(t)f(t) + r(t) with deg(r(t)) < m, and since q_A(A) = 0,

$$p(A) = q_A(A)f(A) + r(A) = r(A) \in \operatorname{span}\{I, A, A^2, \ldots, A^{m-1}\},$$

so dim(P(A)) ≤ m; as {I, A, …, A^{m−1}} is linearly independent (by minimality of q_A), in fact dim(P(A)) = m. Finally, by the Cayley–Hamilton theorem q_A(t) divides the characteristic polynomial of A, so m ≤ n.
(d) By Corollary 2.39, dim(C(A)) ≥ n with equality if and only if A is nonderogatory. □

Corollary 2.41. A matrix A ∈ M_n is nonderogatory if and only if every matrix that commutes with A is a polynomial in A.

Theorem 2.42. Let A ∈ M_m, B ∈ M_n and C ∈ M_{m,n} be given. There is some X ∈ M_{m,n} such that AX − XB = C if and only if

$$\begin{pmatrix} A & C \\ 0 & B \end{pmatrix} \text{ is similar to } \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}.$$

Proof. (⇐) If AX − XB = C for some X ∈ M_{m,n}, then

$$\begin{pmatrix} I & -X \\ 0 & I \end{pmatrix}\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}\begin{pmatrix} I & X \\ 0 & I \end{pmatrix} = \begin{pmatrix} A & AX - XB \\ 0 & B \end{pmatrix} = \begin{pmatrix} A & C \\ 0 & B \end{pmatrix},$$

and since

$$\begin{pmatrix} I & X \\ 0 & I \end{pmatrix}^{-1} = \begin{pmatrix} I & -X \\ 0 & I \end{pmatrix},$$

this exhibits the required similarity.
(⇒) Suppose (A C; 0 B) is similar to (A 0; 0 B); then there is a nonsingular matrix S ∈ M_{m+n} such that S^{-1}(A C; 0 B)S = (A 0; 0 B). Define T_i : M_{m+n} → M_{m+n}, i = 1, 2, by

$$T_1(X) \equiv \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}X - X\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}, \qquad T_2(X) \equiv \begin{pmatrix} A & C \\ 0 & B \end{pmatrix}X - X\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}.$$

Then

$$S^{-1}T_2(SX) = S^{-1}\begin{pmatrix} A & C \\ 0 & B \end{pmatrix}SX - X\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}X - X\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = T_1(X).$$

Markedly, T_1(X) = 0 if and only if T_2(SX) = 0, and Ker(T_2) = {SX : X ∈ Ker(T_1)}. This means that T_1 and T_2 have nullspaces of the same dimension. For X = (X_11 X_12; X_21 X_22) we have

$$T_1(X) = \begin{pmatrix} AX_{11} - X_{11}A & AX_{12} - X_{12}B \\ BX_{21} - X_{21}A & BX_{22} - X_{22}B \end{pmatrix}, \qquad T_2(X) = \begin{pmatrix} AX_{11} - X_{11}A + CX_{21} & AX_{12} - X_{12}B + CX_{22} \\ BX_{21} - X_{21}A & BX_{22} - X_{22}B \end{pmatrix}.$$

Consider elements of the nullspace of T_2 of the special form

$$Z = \begin{pmatrix} X_{11} & X_{12} \\ 0 & -I \end{pmatrix}.$$

Then

$$T_1(Z) = \begin{pmatrix} AX_{11} - X_{11}A & AX_{12} - X_{12}B \\ 0 & 0 \end{pmatrix}, \qquad T_2(Z) = \begin{pmatrix} AX_{11} - X_{11}A & AX_{12} - X_{12}B - C \\ 0 & 0 \end{pmatrix}.$$

Hence if such a Z lies in the nullspace of T_2, then AX_12 − X_12B = C, i.e. X = X_12 solves our equation. It remains to prove that the nullspace of T_2 contains a matrix of this special form. Define the two linear transformations

$$\varsigma_i : \operatorname{nullspace}(T_i) \to M_{n,m+n}, \quad i = 1, 2,$$

by

$$\varsigma_i : \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} \mapsto \begin{pmatrix} X_{21} & X_{22} \end{pmatrix}, \quad i = 1, 2.$$

Then we can find:

1. X ∈ nullspace(ς_1) if and only if

X_21 = 0, X_22 = 0, AX_11 − X_11A = 0 and AX_12 − X_12B = 0;

2. X ∈ nullspace(ς_2) if and only if

X_21 = 0, X_22 = 0, AX_11 − X_11A = 0 and AX_12 − X_12B = 0;

3. range(ς_1) = {(X_21 X_22) : BX_21 − X_21A = 0 and BX_22 − X_22B = 0};

4. range(ς_2) = {(X_21 X_22) : BX_21 − X_21A = 0, BX_22 − X_22B = 0, and there exist X_11, X_12 such that AX_11 − X_11A = −CX_21 and AX_12 − X_12B = −CX_22}.

Because the conditions defining range(ς_2) include those defining range(ς_1), we have range(ς_2) ⊆ range(ς_1). Also nullspace(ς_2) = nullspace(ς_1), and dim(nullspace(T_1)) = dim(nullspace(T_2)) as shown above; so by the dimension theorem,

$$\dim(\operatorname{nullspace}(\varsigma_i)) + \dim(\operatorname{range}(\varsigma_i)) = \dim(\operatorname{nullspace}(T_i)), \quad i = 1, 2,$$

the two ranges have the same dimension, and therefore range(ς_2) = range(ς_1). Now (0, −I) ∈ range(ς_1), since B·0 − 0·A = 0 and B(−I) − (−I)B = 0; hence (0, −I) ∈ range(ς_2), which yields X_11, X_12 with (X_11 X_12; 0 −I) ∈ nullspace(T_2) — exactly the special form required. □

Proposition 2.43. Consider A = I ∈ M_n. A matrix B that commutes with A need not be a polynomial in A; but if B commutes with every matrix that commutes with A, then B must be a polynomial in A.

Proof. For the first part: since A = I ∈ M_n, every polynomial in A is a scalar matrix,

$$p(I) = a_0I + a_1I + \cdots + a_mI^m = (a_0 + a_1 + \cdots + a_m)I = p(1)I.$$

Every B ∈ M_n commutes with A = I, but only the scalar matrices are polynomials in I; hence a matrix that commutes with A need not be a polynomial in A. On the other hand, a matrix B that commutes with every matrix that commutes with I commutes with all of M_n, and must therefore be a scalar matrix, B = cI — which is a polynomial in I.
For the second part, we note in general that for A ∈ M_n and any polynomial p(t) = a_0 + a_1t + ⋯ + a_mt^m, A commutes with p(A):

$$Ap(A) = A(a_0I + a_1A + \cdots + a_mA^m) = a_0A + a_1A^2 + \cdots + a_mA^{m+1} = (a_0I + a_1A + \cdots + a_mA^m)A = p(A)A. \qquad \Box$$

Proposition 2.44. Let A, B ∈ M_n be given matrices. The equation AX − XB = 0 has a nonsingular solution X ∈ M_n if and only if A and B are similar.

Proof. (⇒) If AX − XB = 0 with X nonsingular, then AX = XB and X^{-1}AX = X^{-1}XB = B; thus A is similar to B.
(⇐) Since A and B are similar, there exists a nonsingular matrix X such that X^{-1}AX = B. This implies AX = XB, so AX − XB = 0 has the nonsingular solution X. □

Proposition 2.45. Let A ∈ M_n be a given matrix, and let λ_1, …, λ_n be the eigenvalues of A. The matrix equation AX − XA = λX has a nontrivial (X ≠ 0) solution X ∈ M_n if and only if λ = λ_i − λ_j for some i, j.

Proof. AX − XA = λX is equivalent to

$$\big((I_n \otimes A) - (A^T \otimes I_n)\big)\operatorname{vec}(X) = \lambda\operatorname{vec}(X). \tag{2.27}$$

Let H = I_n ⊗ A − A^T ⊗ I_n and x = vec(X); then (2.27) becomes Hx = λx, i.e. (λI − H)x = 0, which has a nontrivial solution if and only if λ is an eigenvalue of H. Since the eigenvalues of H are {λ_i − λ_j}, the equation AX − XA = λX has a nontrivial solution if and only if λ = λ_i − λ_j for some i, j. □

Lemma 2.46. Let A ∈ M_n be a given matrix with p distinct eigenvalues {λ_1, λ_2, …, λ_p}, and suppose that the Jordan form of A with respect to each eigenvalue λ_i is a direct sum of the form

$$J_{n_{i,1}}(\lambda_i) \oplus \cdots \oplus J_{n_{i,k_i}}(\lambda_i),$$

arranged such that n_{i,1} ≥ n_{i,2} ≥ ⋯ ≥ n_{i,k_i} ≥ 1. The set C(A) which collects the matrices X ∈ M_n that commute with A is a subspace of M_n of dimension

$$\nu = \sum_{r=1}^{p}\sum_{i,j=1}^{k_r} \min\{n_{r,i}, n_{r,j}\} = \sum_{r=1}^{p}\sum_{i,j=1}^{k_r} n_{r,\max\{i,j\}} = \sum_{r=1}^{p} \big(n_{r,1} + 3n_{r,2} + 5n_{r,3} + \cdots + (2k_r - 1)n_{r,k_r}\big).$$

Notice that the values of the distinct eigenvalues of A play no role in this formula: the dimension of C(A) is determined only by the sizes of the Jordan blocks of A associated with its eigenvalues.

Proof. C(A) is the nullspace of X ↦ AX − XA, so we may apply (2.24) with B = −A. Two Jordan blocks interact exactly when they belong to the same eigenvalue λ_r, and a pair of blocks of sizes n_{r,i} and n_{r,j} contributes min{n_{r,i}, n_{r,j}} to the dimension. Summing over the eigenvalues and over the pairs of blocks of each eigenvalue, and using n_{r,1} ≥ ⋯ ≥ n_{r,k_r},

$$\nu = \sum_{r=1}^{p}\sum_{i,j=1}^{k_r} \min\{n_{r,i}, n_{r,j}\} = \sum_{r=1}^{p}\sum_{i,j=1}^{k_r} n_{r,\max\{i,j\}} = \sum_{r=1}^{p} \big(n_{r,1} + 3n_{r,2} + 5n_{r,3} + \cdots + (2k_r - 1)n_{r,k_r}\big),$$

where the last equality holds because, for fixed r, the block size n_{r,t} appears once for each pair (i, j) with max{i, j} = t, and there are 2t − 1 such pairs. □

Corollary 2.47. A matrix A ∈ M_n is normal if and only if C(A) = C(A*).

Proposition 2.48. Let A ∈ M_n be a given nonderogatory matrix, so that C(A) = P(A). We are interested in an upper bound on the dimension of the nullspace of the linear transformation T(X) = AX − XA. This dimension is at least n, since the independent set {I, A, A², …, A^{n−1}} lies in the nullspace, and it equals the dimension of the nullspace of the Kronecker sum I ⊗ A − A^T ⊗ I. Since A is similar to the companion matrix of its characteristic polynomial,

$$C = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & & \vdots & -a_1 \\ 0 & 1 & \ddots & & \vdots \\ \vdots & & \ddots & 0 & -a_{n-2} \\ 0 & \cdots & 0 & 1 & -a_{n-1} \end{pmatrix},$$

it suffices to consider I ⊗ C − C^T ⊗ I, which by explicit calculation is easily seen to have rank at least n(n − 1) (focus on the −I blocks in the n − 1 upper-diagonal block positions). Thus the dimension of the nullspace of I ⊗ C − C^T ⊗ I is at most n² − n(n − 1) = n.

Proof. Let A ∈ M_n; the centralizer of A is the set C(A) ≡ {B ∈ M_n : AB = BA}, which collects all matrices that commute with A, and the set of all polynomials in A is P(A) ≡ {p(A) : p(t) is a polynomial}. Since A is nonderogatory, its minimal polynomial equals its (monic) characteristic polynomial p_A(t) = a_0 + a_1t + ⋯ + a_{n−1}t^{n−1} + t^n, and p_A(A) = 0 implies

$$A^n = -(a_0I + a_1A + \cdots + a_{n-1}A^{n-1}),$$

so {I, A, A², …, A^{n−1}} spans the nullspace of T(X) = AX − XA, which is also the nullspace of I ⊗ A − A^T ⊗ I because vec(AX − XA) = (I ⊗ A − A^T ⊗ I) vec(X).
Now we illustrate that C(A) = P(A). The inclusion P(A) ⊆ C(A) is obvious. Conversely, C(A) ≡ {X ∈ M_n : AX = XA} is the nullspace of T, so dim(C(A)) = n by Corollary 2.39; and dim(P(A)) = n by Theorem 2.40(c). Since P(A) ⊆ C(A) and the dimensions agree, C(A) = P(A).
Next it should be noted that I ⊗ A − A^T ⊗ I is similar to I ⊗ C − C^T ⊗ I. Since A is similar to C, there exists a nonsingular matrix S such that A = SCS^{-1}; then

$$I \otimes A - A^T \otimes I = I \otimes SCS^{-1} - (SCS^{-1})^T \otimes I = (S^{-T}S^T) \otimes (SCS^{-1}) - (S^{-T}C^TS^T) \otimes (SS^{-1}) = (S^{-T} \otimes S)(I \otimes C - C^T \otimes I)(S^T \otimes S^{-1}).$$

Thus I ⊗ A − A^T ⊗ I is similar to I ⊗ C − C^T ⊗ I. □

2.4 Additive and multiplicative commutators and linear preservers

Definition 2.9. For given A, B ∈ M_n, a matrix of the form AB − BA is called an additive commutator. If A and B are nonsingular, then a matrix of the form ABA^{-1}B^{-1} is called a multiplicative commutator.

Lemma 2.49. For each A ∈ M_n there is a unitary matrix U ∈ M_n such that all the diagonal entries of U*AU have the same value,

$$\frac{\operatorname{tr}(A)}{n}. \tag{2.28}$$

Theorem 2.50. A matrix C ∈ M_n may be written as C = XY − YX for some X, Y ∈ M_n if and only if tr(C) = 0.

Proof. (⇒) Since tr(AB) = tr(BA) when A and B are square,

$$\operatorname{tr}(C) = \operatorname{tr}(XY - YX) = \operatorname{tr}(XY) - \operatorname{tr}(YX) = 0.$$

(⇐) Suppose tr(C) = 0. First we reduce to the case of zero diagonal: by (2.28) there exists a unitary matrix U such that W = U*CU has all diagonal entries equal to tr(C)/n = 0. If we can find X, Y with W = XY − YX, then from U*CU = XY − YX,

$$C = U(XY - YX)U^* = UXYU^* - UYXU^* = (UXU^*)(UYU^*) - (UYU^*)(UXU^*),$$

so X̃ = UXU* and Ỹ = UYU* give C = X̃Ỹ − ỸX̃. We may therefore assume without loss of generality that C = (c_ij) with c_11 = c_22 = ⋯ = c_nn = 0. Now fix X = diag[x_1, x_2, …, x_n] in which x_1, x_2, …, x_n are pairwise distinct. With X fixed and C given, seek Y = (y_ij):

$$XY - YX = \big((x_i - x_j)y_{ij}\big) = C = (c_{ij}),$$

so it suffices to take

$$y_{ij} = \frac{c_{ij}}{x_i - x_j} \;\text{ if } i \ne j, \qquad y_{ii} \text{ arbitrary}. \qquad \Box$$
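The construction in the second half of the proof is completely explicit; the sketch below (not part of the thesis) carries it out for a random zero-diagonal matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
# A matrix with zero diagonal (hence trace zero) -- the case to which the
# proof of Theorem 2.50 reduces by unitary similarity.
C = rng.standard_normal((n, n))
np.fill_diagonal(C, 0.0)

x = np.arange(1.0, n + 1.0)        # pairwise distinct diagonal entries
X = np.diag(x)
Y = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            Y[i, j] = C[i, j] / (x[i] - x[j])   # y_ij = c_ij / (x_i - x_j)

assert np.allclose(X @ Y - Y @ X, C)
```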

Lemma 2.51. Suppose that A ∈ M_n is not a scalar matrix, and let α ∈ C be given. Then there is a matrix similar to A that has α in its (1,1) position and at least one nonzero entry below the diagonal in its first column.

Theorem 2.52. Let A ∈ M_n be given with rank(A) = k ≤ n; if k = n, assume that A is not a scalar matrix. Let b_1, b_2, …, b_n and c_1, c_2, …, c_n be given complex numbers, exactly n − k of which are zero. If k = n, assume that b_1b_2⋯b_nc_1c_2⋯c_n = det(A). Then there is a matrix B ∈ M_n with eigenvalues b_1, b_2, …, b_n and a matrix C ∈ M_n with eigenvalues c_1, c_2, …, c_n such that A = BC.

Proof. If k = 0, then A = 0, and c_1, c_2, …, c_n may be reordered as c_{j_1}, …, c_{j_n} so that b_ic_{j_i} = 0 for i = 1, …, n; the choice is then B = diag[b_1, …, b_n], C = diag[c_{j_1}, c_{j_2}, …, c_{j_n}].
If k ≥ 1 then rank(A) ≥ 1, and without loss of generality b_1c_1 ≠ 0.
For n = 1 the claim is clear. For n = 2 it is a straightforward calculation:

$$A = P\begin{pmatrix} b_1c_1 & x \\ y & z \end{pmatrix}P^{-1} = P\begin{pmatrix} b_1 & 0 \\ b_{21} & b_2 \end{pmatrix}\begin{pmatrix} c_1 & c_{12} \\ 0 & c_2 \end{pmatrix}P^{-1} = P\begin{pmatrix} b_1c_1 & b_1c_{12} \\ c_1b_{21} & b_{21}c_{12} + b_2c_2 \end{pmatrix}P^{-1},$$

with x = b_1c_{12} and y = c_1b_{21}, i.e. c_{12} = x/b_1 and b_{21} = y/c_1; thus

$$B = P\begin{pmatrix} b_1 & 0 \\ y/c_1 & b_2 \end{pmatrix}P^{-1}, \qquad C = P\begin{pmatrix} c_1 & x/b_1 \\ 0 & c_2 \end{pmatrix}P^{-1}.$$

Now let n ≥ 3 (we spell out n = 3). Without loss of generality, by Lemma 2.51, we may assume a_11 = b_1c_1 and that the first column below the diagonal, A_21 = (a_21, …, a_n1)^T, is nonzero. For A = (b_1c_1, A_12; A_21, A_22) we guess

$$B = \begin{pmatrix} b_1 & 0 \\ B_{21} & B_{22} \end{pmatrix}, \qquad C = \begin{pmatrix} c_1 & C_{12} \\ 0 & C_{22} \end{pmatrix}, \qquad \text{so that} \quad A = BC = \begin{pmatrix} b_1c_1 & b_1C_{12} \\ c_1B_{21} & B_{21}C_{12} + B_{22}C_{22} \end{pmatrix}$$

requires

$$A_{12} = b_1C_{12}, \qquad A_{21} = c_1B_{21}, \qquad A_{22} = B_{21}C_{12} + B_{22}C_{22},$$

with b_2, b_3 the eigenvalues of B_22 and c_2, c_3 the eigenvalues of C_22. This infers

$$C_{12} = \frac{A_{12}}{b_1}, \qquad B_{21} = \frac{A_{21}}{c_1}, \qquad B_{22}C_{22} = A_{22} - B_{21}C_{12} = A_{22} - \frac{A_{21}A_{12}}{b_1c_1}.$$

We need to check three things. But first we record a useful fact: if x, y ∈ C^n and B ∈ M_n are given and

$$A \equiv \begin{pmatrix} 1 & x^T \\ y & B \end{pmatrix} \in M_{n+1},$$

then

$$\det(A) = \det\begin{pmatrix} 1 & 0 \\ y & B - yx^T \end{pmatrix} = \det(B - yx^T), \qquad \operatorname{rank}(A) = \operatorname{rank}(B - yx^T) + 1.$$

The first thing is that

$$\det\Big(A_{22} - \frac{A_{21}A_{12}}{b_1c_1}\Big) = \frac{\det(A)}{b_1c_1} = \frac{b_1b_2b_3\,c_1c_2c_3}{b_1c_1}.$$

The second thing is that

$$\operatorname{rank}(B_{22}C_{22}) = \operatorname{rank}\Big(A_{22} - \frac{A_{21}A_{12}}{b_1c_1}\Big) = n - 1.$$

Clearly, these two are easily verified. The third thing has to be divided into two parts. First, if B_22C_22 = A_22 − A_21A_12/(b_1c_1) ≠ αI for every α ∈ C, then the proof is complete: the induction can go on. Second, if B_22C_22 = A_22 − A_21A_12/(b_1c_1) = αI for some α ∈ C, the induction cannot proceed directly, and we first need to apply a suitable similarity. We

choose w ∈ C^{n−1} such that w^TA_21 = 0 and w^TA_22 ≠ 0. Thus, if B_22C_22 = A_22 − A_21A_12/(b_1c_1) is a scalar matrix, we conjugate by the similarity matrix

$$S = \begin{pmatrix} 1 & w^T \\ 0 & I \end{pmatrix}: \qquad S^{-1}AS = \begin{pmatrix} b_1c_1 & A_{12} + b_1c_1w^T - w^TA_{22} \\ A_{21} & A_{22} + A_{21}w^T \end{pmatrix} \equiv Z.$$

Since Z_11 = b_1c_1 we decompose as before,

$$Z = B'C', \qquad B' = \begin{pmatrix} b_1 & 0 \\ B'_{21} & B'_{22} \end{pmatrix}, \qquad C' = \begin{pmatrix} c_1 & C'_{12} \\ 0 & C'_{22} \end{pmatrix}.$$

But we still have to check whether B'_22C'_22 = Z_22 − Z_21Z_12/(b_1c_1) is a scalar matrix or not:

$$Z_{22} - \frac{Z_{21}Z_{12}}{b_1c_1} = A_{22} + A_{21}w^T - \frac{A_{21}(A_{12} + b_1c_1w^T - w^TA_{22})}{b_1c_1} = A_{22} + A_{21}w^T - \frac{A_{21}A_{12}}{b_1c_1} - A_{21}w^T + \frac{A_{21}w^TA_{22}}{b_1c_1} = \alpha I + Y,$$

where Y = A_21w^TA_22/(b_1c_1); because A_21 ≠ 0 and w^TA_22 ≠ 0 we have rank(Y) = 1, and thus Z_22 − Z_21Z_12/(b_1c_1) is not a scalar matrix. Hence S^{-1}AS = Z = B'C' implies

$$A = SB'C'S^{-1} = (SB'S^{-1})(SC'S^{-1}),$$

and we let B = SB'S^{-1}, with eigenvalues b_1, b_2, b_3, and C = SC'S^{-1}, with eigenvalues c_1, c_2, c_3. This completes the proof. □

Theorem 2.53. A matrix A ∈ M_n may be written as A = XYX^{-1}Y^{-1} for some nonsingular X, Y ∈ M_n if and only if det(A) = 1.

Proof. (⇒) Since det(AB) = det(A)det(B) for A, B ∈ M_n,

$$\det(A) = \det(XYX^{-1}Y^{-1}) = \det(X)\det(Y)\det(X^{-1})\det(Y^{-1}) = \det(XX^{-1})\det(YY^{-1}) = 1 \times 1 = 1.$$

(⇐) Case 1: A is not a scalar matrix. Let b_1, …, b_n be distinct nonzero scalars and let c_i = b_i^{-1}, i = 1, 2, …, n. By Theorem 2.52 write A = XZ with σ(X) = {b_1, …, b_n} and σ(Z) = {c_1, …, c_n}. Since Z and X^{-1} are both diagonalizable (having distinct eigenvalues) with the same spectrum, Z ∼ X^{-1}; thus there exists a nonsingular Y ∈ M_n such that Z = YX^{-1}Y^{-1}, and A = XYX^{-1}Y^{-1}.
Case 2: A is a scalar matrix, A = αI for some α ∈ C, and

$$\det(A) = \det\begin{pmatrix} \alpha & & \\ & \ddots & \\ & & \alpha \end{pmatrix} = \alpha^n = 1.$$

Then we let

$$X = \begin{pmatrix} \alpha & & & \\ & \alpha^2 & & \\ & & \ddots & \\ & & & \alpha^n \end{pmatrix}, \qquad Z = \alpha X^{-1} = \begin{pmatrix} 1 & & & \\ & \alpha^{-1} & & \\ & & \ddots & \\ & & & \alpha^{1-n} \end{pmatrix},$$

so that XZ = αI = A. Since α^n = 1, the diagonal entries of Z are, as a multiset, {1, α^{n−1}, α^{n−2}, …, α}, which

coincide with the diagonal entries of X^{-1} = diag(α^{−1}, α^{−2}, …, α^{−n}) = diag(α^{n−1}, α^{n−2}, …, 1); this implies Z ∼ X^{-1}. Therefore there exists a nonsingular matrix Y such that Z = YX^{-1}Y^{-1}, and A = XZ = XYX^{-1}Y^{-1}. □

3. Application of the Kronecker product

In this section the matrix in which we are interested is introduced in [3]; we shall give the proofs of the relevant propositions. Let us focus on the matrices

$$C = \begin{pmatrix} 0 & \frac{1}{h_3}K_3 & \frac{-1}{h_2}\operatorname{diag}(K_2, \cdots, K_2) \\ \frac{-1}{h_3}K_3 & 0 & \frac{1}{h_1}\operatorname{diag}(K_1, \cdots, K_1) \\ \frac{1}{h_2}\operatorname{diag}(K_2, \cdots, K_2) & \frac{-1}{h_1}\operatorname{diag}(K_1, \cdots, K_1) & 0 \end{pmatrix}, \tag{3.1}$$

where C ∈ M_{3N×3N}, and

$$D = \begin{pmatrix} \frac{1}{h_1}\operatorname{diag}(K_1, \cdots, K_1) \\ \frac{1}{h_2}\operatorname{diag}(K_2, \cdots, K_2) \\ \frac{1}{h_3}K_3 \end{pmatrix}. \tag{3.2}$$

Then D has full column rank and CD = 0. Here,

$$K_1 = \begin{pmatrix} 1 & -1 & 0 & \cdots & 0 \\ 0 & 1 & -1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & & \ddots & 1 & -1 \\ -e^{iN_1h_1k_1} & 0 & \cdots & 0 & 1 \end{pmatrix} \in M_{N_1 \times N_1}, \tag{3.3}$$

$$K_2 = \begin{pmatrix} I_{N_1} & -I_{N_1} & 0 & \cdots & 0 \\ 0 & I_{N_1} & -I_{N_1} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & & \ddots & I_{N_1} & -I_{N_1} \\ -e^{iN_2h_2k_2}I_{N_1} & 0 & \cdots & 0 & I_{N_1} \end{pmatrix} \in M_{N_1N_2 \times N_1N_2}, \tag{3.4}$$

$$K_3 = \begin{pmatrix} I_{N_1N_2} & -I_{N_1N_2} & 0 & \cdots & 0 \\ 0 & I_{N_1N_2} & -I_{N_1N_2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & & \ddots & I_{N_1N_2} & -I_{N_1N_2} \\ -e^{iN_3h_3k_3}I_{N_1N_2} & 0 & \cdots & 0 & I_{N_1N_2} \end{pmatrix} \in M_{N \times N}, \tag{3.5}$$

where k = (k_1, k_2, k_3)^T, and h_1, h_2 and h_3 denote the mesh lengths along the x, y and z axial directions, respectively. The constants N_1, N_2 and N_3 are the numbers of grid points in the x, y and z directions, respectively, with N = N_1N_2N_3. What interests us here is the case k = 0. In our analysis we will quote some notation and corollaries to facilitate later use. When k = 0, the matrices C and D defined in (3.1) and (3.2), respectively, can be rewritten as

$$C = \begin{pmatrix} 0 & \frac{1}{h_3}Y_3 \otimes I_{N_2} \otimes I_{N_1} & \frac{-1}{h_2}I_{N_3} \otimes Y_2 \otimes I_{N_1} \\ \frac{-1}{h_3}Y_3 \otimes I_{N_2} \otimes I_{N_1} & 0 & \frac{1}{h_1}I_{N_3} \otimes I_{N_2} \otimes Y_1 \\ \frac{1}{h_2}I_{N_3} \otimes Y_2 \otimes I_{N_1} & \frac{-1}{h_1}I_{N_3} \otimes I_{N_2} \otimes Y_1 & 0 \end{pmatrix} \tag{3.6}$$

and

$$D = \begin{pmatrix} \frac{1}{h_1}I_{N_3} \otimes I_{N_2} \otimes Y_1 \\ \frac{1}{h_2}I_{N_3} \otimes Y_2 \otimes I_{N_1} \\ \frac{1}{h_3}Y_3 \otimes I_{N_2} \otimes I_{N_1} \end{pmatrix}, \tag{3.7}$$

where

$$Y_i = \begin{pmatrix} 1 & -1 & 0 & \cdots & 0 \\ 0 & 1 & -1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & & \ddots & 1 & -1 \\ -1 & 0 & \cdots & 0 & 1 \end{pmatrix} \in \mathbb{R}^{N_i \times N_i}, \quad i = 1, 2, 3. \tag{3.8}$$

Next, let 1_k denote the all-ones vector (1, …, 1)^T ∈ R^k. One can immediately see that

$$\operatorname{rank}(Y_i) = N_i - 1 \tag{3.9}$$

and

$$Y_i\,1_{N_i} = 0 \tag{3.10}$$

for i = 1, 2, 3.
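Properties (3.9) and (3.10) are easy to confirm numerically; the following sketch (grid size arbitrary, not from the thesis) builds Y_i and checks both.

```python
import numpy as np

def Y(N):
    """The circulant difference matrix of (3.8): 1 on the diagonal,
    -1 on the superdiagonal, -1 in the lower-left corner."""
    S = np.roll(np.eye(N), 1, axis=1)   # cyclic forward shift
    return np.eye(N) - S

N = 6
assert np.linalg.matrix_rank(Y(N)) == N - 1       # (3.9)
assert np.allclose(Y(N) @ np.ones(N), 0)          # (3.10)
```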

Proposition 3.1. Let P ∈ R^{n×n}, Q ∈ R^{m×m} and R ∈ R^{l×l}, and suppose Q and R are invertible. If P is singular and the columns of V form a basis of the nullspace of P, then:
(a) the columns of V ⊗ I_m form a basis of the nullspace of P ⊗ Q;
(b) the columns of I_m ⊗ V form a basis of the nullspace of Q ⊗ P;
(c) the columns of I_m ⊗ V ⊗ I_l form a basis of the nullspace of Q ⊗ P ⊗ R.

Proof. (a) Because the columns of V are a basis of the nullspace of P, we have PV = 0. We look at

$$(P \otimes Q)(V \otimes I_m) = (PV) \otimes (QI_m) = 0 \otimes (QI_m) = 0,$$

so the columns of V ⊗ I_m lie in the nullspace of P ⊗ Q; they are linearly independent, and since Q is invertible the nullity of P ⊗ Q equals m times the nullity of P, which is exactly their number. Thus the columns of V ⊗ I_m form a basis of the nullspace of P ⊗ Q.
(b) By the same reasoning,

$$(Q \otimes P)(I_m \otimes V) = (QI_m) \otimes (PV) = (QI_m) \otimes 0 = 0,$$

which completes the proof of (b).
(c) Combining parts (a) and (b),

$$(Q \otimes P \otimes R)(I_m \otimes V \otimes I_l) = (QI_m) \otimes (PV) \otimes (RI_l) = 0,$$

and it follows as before that the columns of I_m ⊗ V ⊗ I_l form a basis of the nullspace of Q ⊗ P ⊗ R. □

Proposition 3.2. For any u ∈ C^n and v ∈ C^m,

$$\operatorname{span}\{u \otimes v\} = \operatorname{span}\{I_n \otimes v\} \cap \operatorname{span}\{u \otimes I_m\}, \tag{3.11}$$

$$\operatorname{span}\{I_k \otimes u \otimes v\} = \operatorname{span}\{I_k \otimes I_n \otimes v\} \cap \operatorname{span}\{I_k \otimes u \otimes I_m\}, \tag{3.12}$$

where span{·} of a matrix denotes the span of its columns.

Proof. (a) Take any c ∈ span{I_n ⊗ v} ∩ span{u ⊗ I_m}. Then there exist a ∈ C^n and b ∈ C^m such that

$$(I_n \otimes v)a = c = (u \otimes I_m)b.$$

Additionally, (I_n ⊗ v)a = a ⊗ v and (u ⊗ I_m)b = u ⊗ b, so a ⊗ v = u ⊗ b. That means a = βu and b = βv for some β ∈ C. Hence c = βu ⊗ v.
(b) Because

$$I_k \otimes I_n \otimes v = (e_1 \otimes I_n \otimes v,\; e_2 \otimes I_n \otimes v,\; \cdots,\; e_k \otimes I_n \otimes v),$$

we can see that

$$\operatorname{span}\{I_k \otimes I_n \otimes v\} = \operatorname{span}\{(e_1 \otimes I_n \otimes v, \cdots, e_k \otimes I_n \otimes v)\}, \qquad \operatorname{span}\{I_k \otimes u \otimes I_m\} = \operatorname{span}\{(e_1 \otimes u \otimes I_m, \cdots, e_k \otimes u \otimes I_m)\}.$$

And we can find that for i ≠ j,

$$\operatorname{span}\{e_i \otimes I_n \otimes v\} \cap \operatorname{span}\{e_j \otimes I_n \otimes v\} = \{0\}, \qquad \operatorname{span}\{e_i \otimes u \otimes I_m\} \cap \operatorname{span}\{e_j \otimes u \otimes I_m\} = \{0\}, \qquad \operatorname{span}\{e_i \otimes I_n \otimes v\} \cap \operatorname{span}\{e_j \otimes u \otimes I_m\} = \{0\}.$$

We therefore have

$$\operatorname{span}\{I_k \otimes I_n \otimes v\} \cap \operatorname{span}\{I_k \otimes u \otimes I_m\} = \bigoplus_{i=1}^{k} \big(\operatorname{span}\{e_i \otimes I_n \otimes v\} \cap \operatorname{span}\{e_i \otimes u \otimes I_m\}\big) = \bigoplus_{i=1}^{k} \operatorname{span}\{e_i \otimes u \otimes v\} \;\;\text{(by (3.11))}\;\; = \operatorname{span}\{I_k \otimes u \otimes v\}. \qquad \Box$$

Proposition 3.3. For the matrix D defined in (3.7), we have rank(D) = N − 1.

Proof. For the matrix D defined in (3.7),

$$D^T = \Big(\tfrac{1}{h_1}(I_{N_3} \otimes I_{N_2} \otimes Y_1)^T \;\; \tfrac{1}{h_2}(I_{N_3} \otimes Y_2 \otimes I_{N_1})^T \;\; \tfrac{1}{h_3}(Y_3 \otimes I_{N_2} \otimes I_{N_1})^T\Big).$$

One can see very clearly that

$$D^TD = \frac{1}{h_1^2}I_{N_3} \otimes I_{N_2} \otimes Y_1^TY_1 + \frac{1}{h_2^2}I_{N_3} \otimes Y_2^TY_2 \otimes I_{N_1} + \frac{1}{h_3^2}Y_3^TY_3 \otimes I_{N_2} \otimes I_{N_1},$$

and by the equations (3.9), (3.10) and Proposition 3.1 we can find that
the columns of I_{N_3} ⊗ I_{N_2} ⊗ 1_{N_1} form a basis of the nullspace of I_{N_3} ⊗ I_{N_2} ⊗ Y_1^TY_1,
the columns of I_{N_3} ⊗ 1_{N_2} ⊗ I_{N_1} form a basis of the nullspace of I_{N_3} ⊗ Y_2^TY_2 ⊗ I_{N_1},
the columns of 1_{N_3} ⊗ I_{N_2} ⊗ I_{N_1} form a basis of the nullspace of Y_3^TY_3 ⊗ I_{N_2} ⊗ I_{N_1}.
Since D^TD is a sum of positive semidefinite terms, its nullspace is the intersection of the three nullspaces:

$$\operatorname{span}\{I_{N_3} \otimes I_{N_2} \otimes 1_{N_1}\} \cap \operatorname{span}\{I_{N_3} \otimes 1_{N_2} \otimes I_{N_1}\} \cap \operatorname{span}\{1_{N_3} \otimes I_{N_2} \otimes I_{N_1}\} = \operatorname{span}\{I_{N_3} \otimes 1_{N_2} \otimes 1_{N_1}\} \cap \operatorname{span}\{1_{N_3} \otimes I_{N_2} \otimes I_{N_1}\} \;\;\text{(by (3.12))}\;\; = \operatorname{span}\{1_{N_3} \otimes 1_{N_2} \otimes 1_{N_1}\},$$

which is the nullspace of D^TD. Thence, rank(D) = rank(D^TD) = N − 1. □
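Proposition 3.3 can be confirmed numerically on a small grid; the sketch below (arbitrary sizes, h_i = 1, not from the thesis) builds D from (3.7) and checks its rank and null vector.

```python
import numpy as np

def Y(N):
    # Circulant difference matrix of (3.8).
    return np.eye(N) - np.roll(np.eye(N), 1, axis=1)

N1, N2, N3 = 3, 3, 2          # arbitrary small grid; h1 = h2 = h3 = 1
N = N1 * N2 * N3
I = np.eye

# D stacked as in (3.7).
D = np.vstack([np.kron(I(N3), np.kron(I(N2), Y(N1))),
               np.kron(I(N3), np.kron(Y(N2), I(N1))),
               np.kron(Y(N3), np.kron(I(N2), I(N1)))])

assert np.linalg.matrix_rank(D) == N - 1          # Proposition 3.3
assert np.allclose(D @ np.ones(N), 0)             # 1_N spans the nullspace
```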

Theorem 3.4. Let

$$U = \big(D \quad I_3 \otimes 1_N\big). \tag{3.13}$$

Then rank(U) = N + 2 and CU = 0, where C and D are defined in (3.6) and (3.7), respectively.

Proof. With C and D as in (3.6) and (3.7),

$$U = \begin{pmatrix} \frac{1}{h_1}I_{N_3} \otimes I_{N_2} \otimes Y_1 & 1_N & & \\ \frac{1}{h_2}I_{N_3} \otimes Y_2 \otimes I_{N_1} & & 1_N & \\ \frac{1}{h_3}Y_3 \otimes I_{N_2} \otimes I_{N_1} & & & 1_N \end{pmatrix}.$$

Direct calculation gives CU = 0: the blocks of CD cancel by the mixed-product property (2.2), and each block of C annihilates the corresponding 1_N column by (3.10); for example, (Y_3 ⊗ I_{N_2} ⊗ I_{N_1})1_N = (Y_31_{N_3}) ⊗ 1_{N_2} ⊗ 1_{N_1} = 0. Moreover,

$$D^T(I_3 \otimes 1_N) = \Big(\tfrac{1}{h_1}(I_{N_3} \otimes I_{N_2} \otimes Y_1)^T \;\; \tfrac{1}{h_2}(I_{N_3} \otimes Y_2 \otimes I_{N_1})^T \;\; \tfrac{1}{h_3}(Y_3 \otimes I_{N_2} \otimes I_{N_1})^T\Big)(I_3 \otimes 1_N) = 0,$$

since each column of Y_i contains exactly one +1 and one −1, so Y_i^T1_{N_i} = 0 as well. This shows that the columns of D are orthogonal to the (clearly linearly independent) columns of I_3 ⊗ 1_N. By Proposition 3.3, rank(D) = N − 1, so

$$\operatorname{rank}(U) = \operatorname{rank}(D) + 3 = N + 2. \qquad \Box$$

(57) It represents the column of D are orthogonal to I3 ⊗ ~1N . By the Proposition3.3 rank(D) = N − 1 we have. rank(U ) = rank(D) + 3 infers. rank(U ) = N + 2.. Theorem 3.5. The matrix B −1 C T is B − orthogonal to U and the rank of B −1 C T is. 2N − 2. Here, the matrices C and U are dened in (3.6) and (3.13), respectively.. Proof. The Theorem3.4 tells us that CU = 0 thus, we are absorbed in the matrix. B −1 C T .. B −1 C T. T. BU = CB −1 BU = CU = 0.. Namely, B −1 C T is B − orthogonal to U . Next we let   C e C= ∈ R4N ×3N . DT Owing to the Proposition3.3 and the Theorem3.4 we know that. e = rank(C) + rank(DT ) rank(C) = rank(C) + rank(D) = rank(C) + (N − 1). The results obtained using the following.  (IN3 ⊗ IN2 ⊗ Y1 )T (IN3 ⊗ Y2 ⊗ IN1 ) = INT 3 ⊗ INT 2 ⊗ Y1T (IN3 ⊗ Y2 ⊗ IN1 )    = INT 3 IN3 ⊗ INT 2 Y2 ⊗ Y1T IN1    = IN3 INT 3 ⊗ Y2 INT 2 ⊗ IN1 Y1T  = (IN3 ⊗ Y2 ⊗ IN1 ) INT 3 ⊗ INT 2 ⊗ Y1T = (IN3 ⊗ Y2 ⊗ IN1 ) (IN3 ⊗ IN2 ⊗ Y1 )T . 51. (3.14).

Similarly,

$$(I_{N_3} \otimes I_{N_2} \otimes Y_1)^T(Y_3 \otimes I_{N_2} \otimes I_{N_1}) = (Y_3 \otimes I_{N_2} \otimes I_{N_1})(I_{N_3} \otimes I_{N_2} \otimes Y_1)^T,$$

$$(I_{N_3} \otimes Y_2 \otimes I_{N_1})^T(Y_3 \otimes I_{N_2} \otimes I_{N_1}) = (Y_3 \otimes I_{N_2} \otimes I_{N_1})(I_{N_3} \otimes Y_2 \otimes I_{N_1})^T.$$

We get

$$\tilde{C}^T\tilde{C} = \operatorname{diag}(Q_1, Q_2, Q_3),$$

where

$$Q_1 = \frac{1}{h_1^2}I_{N_3} \otimes I_{N_2} \otimes Y_1Y_1^T + \frac{1}{h_2^2}I_{N_3} \otimes Y_2^TY_2 \otimes I_{N_1} + \frac{1}{h_3^2}Y_3^TY_3 \otimes I_{N_2} \otimes I_{N_1},$$

$$Q_2 = \frac{1}{h_1^2}I_{N_3} \otimes I_{N_2} \otimes Y_1^TY_1 + \frac{1}{h_2^2}I_{N_3} \otimes Y_2Y_2^T \otimes I_{N_1} + \frac{1}{h_3^2}Y_3^TY_3 \otimes I_{N_2} \otimes I_{N_1},$$

$$Q_3 = \frac{1}{h_1^2}I_{N_3} \otimes I_{N_2} \otimes Y_1^TY_1 + \frac{1}{h_2^2}I_{N_3} \otimes Y_2^TY_2 \otimes I_{N_1} + \frac{1}{h_3^2}Y_3Y_3^T \otimes I_{N_2} \otimes I_{N_1}.$$

Using the results of (3.9), (3.10) and Proposition 3.1 (note that Y_iY_i^T and Y_i^TY_i have the same rank N_i − 1 and the same null vector 1_{N_i}, since Y_i1_{N_i} = Y_i^T1_{N_i} = 0), we get

$$\operatorname{rank}(Q_1) = \operatorname{rank}(Q_2) = \operatorname{rank}(Q_3) = N - 1,$$

and we discover that 1_{N_3} ⊗ 1_{N_2} ⊗ 1_{N_1} spans the nullspace of each of Q_1, Q_2 and Q_3; that is, Q_1, Q_2 and Q_3 have the same nullspace. This means that

$$\operatorname{rank}(\tilde{C}) = \operatorname{rank}(\tilde{C}^T\tilde{C}) = 3(N - 1) = 3N - 3,$$

and together with (3.14),

$$\operatorname{rank}(C) = \operatorname{rank}(\tilde{C}) - (N - 1) = 3N - 3 - N + 1 = 2N - 2.$$

Additionally, since B is a positive diagonal matrix, rank(B^{-1}C^T) = rank(C) = 2N − 2. □

References
