
A Numerical Method for Quadratic Eigenvalue Problems of

Gyroscopic Systems

Jiang Qian Wen-Wei Lin

Abstract

We consider the quadratic eigenvalue problem (QEP) of gyroscopic systems (λ^2 M + λG + K)x = 0, with M = M^T positive definite, G = −G^T, and K = K^T negative semidefinite. In [1], it is shown that all eigenvalues of the QEP can be found by finding the maximal solution of a nonlinear matrix equation Z + A^T Z^{-1} A = Q under the assumption that the QEP has no eigenvalues on the imaginary axis. Although the algorithm proposed in [1] also works in some cases when the QEP has eigenvalues on the imaginary axis, the convergence is then much slower. In this paper, we consider the general case in which the QEP has eigenvalues on the imaginary axis. We propose an eigenvalue shifting technique that modifies the original gyroscopic system to a new gyroscopic system, moving the eigenvalues on the imaginary axis to eigenvalues with nonzero real parts while keeping the other eigenpairs unchanged. This ensures that the algorithm for the maximal solution of the nonlinear matrix equation converges quadratically. Numerical examples illustrate the efficiency of our method.

Key words. quadratic eigenvalue problem, gyroscopic system, nonlinear matrix equation, eigenvalue shifting

1 Introduction

The quadratic eigenvalue problem (QEP) is to find scalars λ ∈ C and nonzero vectors x ∈ C^n satisfying

Q(λ)x = (λ^2 M + λC + K)x = 0,    (1.1)

where M, C, K are n × n matrices. The scalar λ and the nonzero vector x are called, respectively, the eigenvalue and the (right) eigenvector corresponding to λ. The nonzero vector

School of Sciences, Beijing University of Posts and Telecommunications, Beijing, 100876, China; Department of Mathematics, National Tsinghua University, Hsinchu, 300, Taiwan (jqian104@gmail.com). This author's research was partly supported by Project NSFC 10571007.

Department of Mathematics, National Tsinghua University, Hsinchu, 300, Taiwan (wwlin@am.nthu.edu.tw).


y ∈ C^n satisfying

y^* Q(λ) = y^*(λ^2 M + λC + K) = 0

is called the left eigenvector corresponding to λ. The quadratic matrix polynomial Q(λ) in (1.1) is generally known as a quadratic pencil. The QEP arises in a wide variety of applications, such as the dynamic analysis of mechanical systems, fluid mechanics and so on, and has received much attention in recent years. A good survey of applications, spectral theory, perturbation analysis and numerical approaches can be found in [2] and the references therein.
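For concreteness, all 2n eigenvalues of a small dense QEP can be computed through the standard companion linearization. The following Python/NumPy sketch (our illustration, not part of the paper) does this for a trivially verifiable example:

```python
import numpy as np

def quadeig(M, C, K):
    """All 2n eigenvalues of (lambda^2 M + lambda C + K) x = 0
    via the first companion linearization A z = lambda B z, z = [x; lambda x]."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
    return np.linalg.eigvals(np.linalg.solve(B, A))

# lambda^2 I + K with K = diag(-4, -9): eigenvalues are +-2 and +-3.
lam = np.sort_complex(quadeig(np.eye(2), np.zeros((2, 2)), np.diag([-4.0, -9.0])))
```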

In this paper, we study a numerical method for a special QEP of the form

G(λ)x = (λ^2 M + λG + K)x = 0,    (1.2)

where M = M^T, G = −G^T, K = K^T ∈ R^{n×n} with M positive definite. Associated with (1.2) is the second-order differential equation

M q̈(t) + G q̇(t) + K q(t) = f(t),

which is known as a gyroscopic system. Gyroscopic systems correspond to spinning structures where the Coriolis inertia forces are taken into account, and they are widely known to exhibit instabilities [3, 4].

It is well known that the 2n eigenvalues of G(λ) have a Hamiltonian structure, i.e., they are symmetrically located with respect to both the real and imaginary axes. Furthermore, if x is the eigenvector corresponding to an eigenvalue λ, then x̄ is the eigenvector (or left eigenvector) corresponding to λ̄ (or −λ). When K is positive definite, all the eigenvalues of G(λ) are purely imaginary and semisimple. In this case the system is stable. Strong stability, which refers to a system and all its neighbours being stable, has been investigated in [4]. When K is negative definite, the eigenvalues of G(λ) are not necessarily purely imaginary and semisimple, hence the system is not guaranteed to be stable [2].
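This fourfold symmetry is easy to observe numerically. The sketch below (ours; the random matrices are illustrative assumptions) builds a gyroscopic triple with M > 0, G skew-symmetric and K ≤ 0, and checks that the computed spectrum is invariant under both λ ↦ −λ and λ ↦ λ̄:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n))
M = W @ W.T + n * np.eye(n)      # M = M^T positive definite
S = rng.standard_normal((n, n))
G = S - S.T                      # G = -G^T
V = rng.standard_normal((n, n))
K = -(V @ V.T)                   # K = K^T negative semidefinite

# 2n eigenvalues via the companion linearization of lambda^2 M + lambda G + K
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -G]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
eigs = np.linalg.eigvals(np.linalg.solve(B, A))
```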

In this paper, we consider the QEP (1.2) with K negative semidefinite, and we are interested in finding all 2n eigenvalues and the associated eigenvectors. A standard approach for finding all eigenpairs of the QEP is to work with a 2n × 2n linearized form and compute its generalized Schur form. However, such approaches cannot generally keep the Hamiltonian structure. To this end, we can transform the QEP (1.2) to a Hamiltonian eigenvalue problem [2], and then apply the structure-preserving methods [5, 6]. The skew-Hamiltonian structure is preserved in both methods by the use of symplectic orthogonal transformations. Note that these methods work on matrices of dimension 2n and produce no eigenvectors. Mehrmann and Watkins [7] proposed a structure-preserving shift-and-invert Krylov subspace method for the computation of a few eigenvalues and eigenvectors of large, sparse skew-Hamiltonian/Hamiltonian pencils, and applied the method to the QEP (1.2). Recently, Guo [1] showed that all the eigenpairs of the QEP (1.2) can be found by finding a proper solvent S of the quadratic matrix equation MS^2 + GS + K = 0, which is equivalent to finding the maximal solution Z_+ of a nonlinear matrix equation (NME) Z + A^T Z^{-1} A = Q. The numerical approach for the NME developed in [1] is based on the cyclic reduction method [8]. The method is quadratically convergent if φ(λ) ≡ λ^2 A^T + λQ + A has no unimodular eigenvalues, and is linearly convergent if all unimodular eigenvalues of Z_+^{-1}A are semisimple. A more efficient method for solving the NME, based on the SDA algorithm [9], is shown to be linearly convergent if the unimodular eigenvalues of Z_+^{-1}A have half of the partial multiplicities of the associated unimodular eigenvalues of φ(λ).

Linear convergence is poor. The main purpose of this paper is to provide a new technique to shift the purely imaginary eigenvalues while keeping the other eigenpairs unchanged. We then apply the structure-preserving methods in [1] or [9] to find the maximal solution of the resulting NME with quadratic convergence.

The paper is organized as follows. In Section 2, we review some results in [1] on how to solve the QEP by the solvent approach, and the SDA algorithm and the related convergence analysis in [10, 9]. The main technique of shifting the purely imaginary eigenvalues of the QEP is developed in Section 3. Some numerical examples are shown in Section 4 to illustrate the efficiency of our approach. Some conclusions are given in Section 5.

2 Solving the QEP

As described in Section 1, the classical approach for finding all 2n eigenpairs of the QEP is to use linearizations and solve the resulting 2n × 2n standard or generalized eigenvalue problem. These methods operate in dimension twice that of the original problem. Another approach is to factorize G(λ) in (1.2). It is well known that G(λ) has the factorization

G(λ) = (λM + MS + G)(λI − S),    (2.1)

if and only if S is a solution of the quadratic matrix equation

MS^2 + GS + K = 0.    (2.2)

Such an S is called a solvent of (2.2) [3]. If (2.2) has a solvent S, then the eigenvalues of G(λ) are those of S together with those of the matrix pencil λM + MS + G. However, (2.2) may not have any solvent, and even if a solvent exists, its computation may be difficult. Fortunately, for the special QEP (1.2), Guo ([1], Lemma 1) proved that if the QEP (1.2) has no eigenvalues on the imaginary axis, then the matrix equation (2.2) has a real solvent whose eigenvalues are in the right half plane. Therefore, if we can find such an S, the remaining n eigenvalues of G(λ) are obtained by symmetry without any computation. Moreover, since S is real, the complex eigenvalues of S must appear in complex conjugate pairs. Together with the eigenvalues in the left half plane obtained by symmetry, the Hamiltonian structure of the eigenvalues of the QEP (1.2) is preserved. The eigenvectors can be obtained from the factorization (2.1) and the properties of eigenvectors of the QEP (1.2). If x_i and y_i are, respectively, the right and left eigenvectors of S corresponding to the eigenvalue λ_i, i.e.,

S x_i = λ_i x_i,  y_i^* S = λ_i y_i^*,

then x_i and (λ_i M + MS + G)^{-T} ȳ_i are eigenvectors of the QEP (1.2) corresponding to λ_i and −λ_i, respectively.

It seems difficult to find a solvent of (2.2) whose eigenvalues are in the right half plane directly. Instead, the Cayley transformation S = (I + Y)(I − Y)^{-1} is used (see [1]). Equation (2.2) then becomes

A^T Y^2 + QY + A = 0,    (2.3)

where

A = M + K + G,  Q = 2(M − K) is positive definite.    (2.4)

Recall that the eigenvalues of S lie in the right half plane. With the Cayley transformation, we are now interested in the solution Y of (2.3) whose eigenvalues are inside the unit circle. With Y = −Z^{-1}A in (2.3), we arrive at the NME

Z + A^T Z^{-1} A = Q.    (2.5)

The solution Z of (2.5) then satisfies ρ(Z^{-1}A) < 1 or ρ(Z^{-1}A) ≤ 1 under appropriate assumptions. Here ρ(·) denotes the spectral radius of a matrix.

Equation (2.5) has been well studied [11, 12, 8, 10]. We are interested in finding the maximal symmetric positive definite solution of (2.5). A symmetric positive definite solution Z_+ is called a maximal solution if Z_+ ≥ Z for any symmetric positive definite solution Z of (2.5). Here Z_1 ≥ Z_2 (Z_1 > Z_2), where Z_1, Z_2 are symmetric, means that Z_1 − Z_2 is positive semidefinite (definite). The following result about the maximal solution was given in [11, 12].

Theorem 2.1. Equation (2.5) has a symmetric positive definite solution if and only if ψ(λ) = λA + Q + λ^{-1}A^T is regular (i.e., the determinant of ψ(λ) is not identically zero) and ψ(λ) ≥ 0 for all λ on the unit circle. Furthermore, if (2.5) has a symmetric positive definite solution, it has a maximal solution Z_+ with ρ(Z_+^{-1}A) ≤ 1. For any other symmetric positive definite solution Z, it holds that ρ(Z^{-1}A) > 1. Moreover, ρ(Z_+^{-1}A) < 1 if and only if ψ(λ) > 0 for all λ on the unit circle.

The following two theorems in [1] show the relations between the NME (2.5) and the QEP (1.2).

Theorem 2.2. ([1, Theorem 6]) The QEP (1.2) has no eigenvalues on the imaginary axis if and only if ψ(λ) = λA + Q + λ^{-1}A^T > 0 for all λ on the unit circle, where the matrices A and Q are defined in (2.4).

Theorem 2.3. ([1, Theorem 7]) Assume that ψ(λ) ≥ 0 for all λ on the unit circle and let Z_+ be the maximal solution of (2.5). Then the eigenvalues of −Z_+^{-1}A are precisely the eigenvalues of φ(λ) = λ^2 A^T + λQ + A inside or on the unit circle, with the same partial multiplicities for each eigenvalue inside the unit circle and with half of the partial multiplicities for each eigenvalue on the unit circle.

Guo [1] also proves that μ_i is an eigenvalue of φ(μ) if and only if λ_i = (1 + μ_i)/(1 − μ_i) is an eigenvalue of G(λ), and that the partial multiplicities of μ_i are the same as those of λ_i. Together with the two theorems above, we see that all eigenvalues of the QEP (1.2) can be found by computing the eigenvalues of −Z_+^{-1}A, where Z_+ is the maximal solution of (2.5), under the assumption that ψ(λ) ≥ 0 for all λ on the unit circle. Moreover, if x_i and y_i are, respectively, the right and left eigenvectors of −Z_+^{-1}A corresponding to μ_i, then x_i and y_i are, respectively, the right and left eigenvectors of S = (I − Z_+^{-1}A)(I + Z_+^{-1}A)^{-1} corresponding to λ_i = (1 + μ_i)/(1 − μ_i). Hence, x_i and (λ_i M + MS + G)^{-T} ȳ_i are the eigenvectors of the QEP corresponding to λ_i and −λ_i, respectively.
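As a sanity check of this eigenvalue correspondence, the following sketch (our own small example with K negative definite, so that no eigenvalues lie on the imaginary axis) compares the eigenvalues of φ(μ) = μ^2 A^T + μQ + A with the Cayley images μ_i = (λ_i − 1)/(λ_i + 1) of the eigenvalues of G(λ):

```python
import numpy as np

def quad_eigs(C2, C1, C0):
    """Eigenvalues of the quadratic pencil t^2 C2 + t C1 + C0 (C2 nonsingular)."""
    n = C2.shape[0]
    top = np.block([[np.zeros((n, n)), np.eye(n)], [-C0, -C1]])
    bot = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), C2]])
    return np.linalg.eigvals(np.linalg.solve(bot, top))

M = np.eye(2)
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.diag([-5.0, -1.0])        # negative definite: no imaginary eigenvalues
A = M + G + K
Q = 2 * (M - K)

lam = quad_eigs(M, G, K)         # eigenvalues of G(lambda)
mu = quad_eigs(A.T, Q, A)        # eigenvalues of phi(mu)
mapped = (lam - 1) / (lam + 1)   # Cayley images of the QEP eigenvalues
```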

The problem is now reduced to computing the maximal solution of (2.5) efficiently. Several numerical methods have been proposed, including the cyclic reduction method [8] and the structure-preserving doubling algorithm (SDA) [9]. These algorithms are similar, but the convergence analysis of the structure-preserving doubling algorithm is simpler. The structure-preserving doubling algorithm is as follows.

Algorithm 2.1:

A_0 = A,  Q_0 = Q,  P_0 = 0,
A_{k+1} = A_k (Q_k − P_k)^{-1} A_k,
Q_{k+1} = Q_k − A_k^T (Q_k − P_k)^{-1} A_k,
P_{k+1} = P_k + A_k (Q_k − P_k)^{-1} A_k^T.

To ensure that the iteration in Algorithm 2.1 is well defined, Q_k − P_k must be symmetric positive definite for all k. This is guaranteed by the following theorem from [9], which describes the convergence of Algorithm 2.1.
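A direct transcription of Algorithm 2.1 is straightforward. The sketch below (ours, with an illustrative gyroscopic example whose eigenvalues all have nonzero real parts) iterates until Q_k stagnates and returns the approximation to Z_+:

```python
import numpy as np

def sda(A, Q, tol=1e-14, maxit=50):
    """Structure-preserving doubling (Algorithm 2.1) for Z + A^T Z^{-1} A = Q.
    Returns the limit of Q_k, the maximal solution Z_+ when rho(Z_+^{-1} A) < 1."""
    Ak, Qk, Pk = A.copy(), Q.copy(), np.zeros_like(Q)
    for _ in range(maxit):
        W = np.linalg.inv(Qk - Pk)           # Q_k - P_k stays positive definite
        A1 = Ak @ W @ Ak
        Q1 = Qk - Ak.T @ W @ Ak
        P1 = Pk + Ak @ W @ Ak.T
        if np.linalg.norm(Q1 - Qk) <= tol * np.linalg.norm(Qk):
            return Q1
        Ak, Qk, Pk = A1, Q1, P1
    return Qk

# A small gyroscopic example with no imaginary eigenvalues (our own choice):
M = np.eye(2)
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.diag([-5.0, -1.0])
A = M + G + K
Q = 2 * (M - K)
Z = sda(A, Q)
```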

Theorem 2.4. Assume that Z > 0 is a solution of (2.5), and let R = Z^{-1}A. Then the matrix sequences {A_k}, {Q_k} and {P_k} generated by Algorithm 2.1 are well defined and satisfy

1. A_k = (Z − P_k) R^{2^k};
2. 0 ≤ P_k ≤ P_{k+1} < Z and Q_k − P_k = (Z − P_k) + A_k^T (Z − P_k)^{-1} A_k > 0;
3. Z ≤ Q_{k+1} ≤ Q_k ≤ Q and Q_k − Z = (R^T)^{2^k} (Z − P_k) R^{2^k} ≤ (R^T)^{2^k} Z R^{2^k}.

Moreover, if the maximal solution Z_+ satisfies ρ(S_+) < 1, where S_+ = Z_+^{-1}A, we have

‖A_k‖_2 ≤ ‖Z_+‖_2 ‖S_+^{2^k}‖_2 → 0,  as k → ∞;
‖Z_+ − Q_k‖_2 ≤ ‖Z_+‖_2 ‖S_+^{2^k}‖_2^2 → 0,  as k → ∞.

Theorem 2.4 shows that Algorithm 2.1 converges quadratically when no eigenvalue of Z_+^{-1}A lies on the unit circle. When ρ(Z_+^{-1}A) = 1, Chu et al. [10] proved the following result.

Theorem 2.5. Assume that Z_+ is the maximal solution of (2.5), and ρ(Z_+^{-1}A) = 1. If the partial multiplicities of Z_+^{-1}A associated with the unimodular eigenvalues are all even, then the matrix sequence {Q_k} generated by Algorithm 2.1 converges to Z_+ with convergence rate 1/2.

3 Eigenvalue Shifting

Theorem 2.2, together with Theorems 2.1 and 2.4, shows that if the QEP (1.2) has no eigenvalues on the imaginary axis, the matrix sequence {Q_k} generated by Algorithm 2.1 converges quadratically to the maximal solution Z_+ of equation (2.5). However, if the QEP (1.2) has eigenvalues on the imaginary axis, φ(λ) = λ^2 A^T + λQ + A has eigenvalues on the unit circle (because of the Cayley transformation involved). Hence, if the conditions in Theorem 2.5 hold, Algorithm 2.1 still converges, although at a slower rate. It is therefore desirable to shift the purely imaginary eigenvalues while keeping the remaining eigenpairs.

Chu et al. [13] discussed model updating with no spill-over for quadratic models, which incorporates some measured data into the original quadratic model. The updated model matches the measured data while preserving part of the original eigenstructure. Based on this approach, we modify the original gyroscopic model (M, G, K) to a new gyroscopic model (M̃, G̃, K̃) ≡ (M + ∆M, G + ∆G, K + ∆K), with ∆M = ∆M^T, ∆G = −∆G^T, ∆K = ∆K^T, such that the original eigenvalues on the imaginary axis are shifted away while the other part of the original eigenstructure is preserved.

It is well known that for G(λ) there exist an n × 2n matrix X and a 2n × 2n Jordan canonical form matrix J such that

the 2n × 2n matrix [X; XJ] (X stacked above XJ) is nonsingular, and M X J^2 + G X J + K X = 0,    (3.1)

and (X, J) is referred to as a Jordan pair of G(λ). Suppose that J and X are partitioned as J = diag(J_1, J_2), X = [X_1, X_2], where X_1 contains the eigenvectors and generalized eigenvectors corresponding to J_1. The following theorem gives a relationship between eigenvalues and eigenvectors.


Theorem 3.1. Suppose that (X, J) is a Jordan pair of G(λ), i.e., M X J^2 + G X J + K X = 0, and that (X_1, J_1), (X_2, J_2) are defined as above. If σ(J_1) ∩ σ(−J_2) = ∅, we have

(1) J_1^T X_1^T M X_2 J_2 + X_1^T K X_2 = 0;    (3.2)
(2) J_1^T X_1^T G X_2 J_2 − X_1^T K X_2 J_2 + J_1^T X_1^T K X_2 = 0;    (3.3)
(3) X_1^T M X_2 J_2 − J_1^T X_1^T M X_2 + X_1^T G X_2 = 0.    (3.4)

Proof. We first prove (3.4). Obviously, (X_1, J_1) and (X_2, J_2) satisfy

M X_2 J_2^2 + G X_2 J_2 + K X_2 = 0,    (3.5)

(J_1^T)^2 X_1^T M − J_1^T X_1^T G + X_1^T K = 0.    (3.6)

Multiplying (3.5) by X_1^T on the left, multiplying (3.6) by X_2 on the right and eliminating X_1^T K X_2, we obtain

X_1^T M X_2 J_2^2 + X_1^T G X_2 J_2 = (J_1^T)^2 X_1^T M X_2 − J_1^T X_1^T G X_2,

which can be rewritten as

(X_1^T M X_2 J_2 − J_1^T X_1^T M X_2 + X_1^T G X_2) J_2 + J_1^T (X_1^T M X_2 J_2 − J_1^T X_1^T M X_2 + X_1^T G X_2) = 0.

By the assumption σ(J_1) ∩ σ(−J_2) = ∅, the equation above implies that

X_1^T M X_2 J_2 − J_1^T X_1^T M X_2 + X_1^T G X_2 = 0.

The proofs of (3.2) and (3.3) are given in a similar way by eliminating the 'G-term' and the 'M-term', respectively, from (3.5) and (3.6).
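These identities use only the two second-order relations (3.5)–(3.6) and the spectral-disjointness assumption, so they can be verified numerically from any eigenvector partition with σ(J_1) ∩ σ(−J_2) = ∅. A small Python check (ours; the 2 × 2 gyroscopic data is an illustrative assumption) follows:

```python
import numpy as np

M = np.eye(2)
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.diag([-5.0, -1.0])
n = 2

# Eigenpairs of the QEP via linearization; here all four eigenvalues are real.
At = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -G]])
Bt = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
lam, Zv = np.linalg.eig(np.linalg.solve(Bt, At))
order = np.argsort(lam.real)
lam, X = lam[order].real, Zv[:n, order].real   # x is the top block of z = [x; lambda*x]

# Partition so that sigma(J1) and sigma(-J2) are disjoint:
# J1 = {-1.90..., +1.90...}, J2 = {-1.17..., +1.17...}
J1, X1 = np.diag(lam[[0, 3]]), X[:, [0, 3]]
J2, X2 = np.diag(lam[[1, 2]]), X[:, [1, 2]]

r32 = J1.T @ X1.T @ M @ X2 @ J2 + X1.T @ K @ X2                               # (3.2)
r33 = J1.T @ X1.T @ G @ X2 @ J2 - X1.T @ K @ X2 @ J2 + J1.T @ X1.T @ K @ X2   # (3.3)
r34 = X1.T @ M @ X2 @ J2 - J1.T @ X1.T @ M @ X2 + X1.T @ G @ X2               # (3.4)
```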

Now we assume that J_1 contains all the eigenvalues on the imaginary axis. To avoid complex arithmetic, we assume that J_1 ∈ R^{k×k} and X_1 ∈ R^{n×k} are of the forms

J_1 = diag(P_1, ..., P_s; N_1, ..., N_t),  X_1 = [Y_1, ..., Y_s; Z_1, ..., Z_t],

with

P_j = [ Λ_j  I            ]
      [      ...  ...     ]
      [           Λ_j  I  ]
      [               Λ_j ] ∈ R^{2m_j × 2m_j},   Λ_j = [  0    α_j ]
                                                       [ −α_j  0   ],

Y_j = [y_{1R}^{(j)}, y_{1I}^{(j)}, ..., y_{m_j R}^{(j)}, y_{m_j I}^{(j)}] ∈ R^{n × 2m_j},  j = 1, ..., s,

and

N_j = [ 0  1         ]
      [     ...  ... ]
      [          0  1]
      [             0] ∈ R^{n_j × n_j},   Z_j = [z_1, ..., z_{n_j}] ∈ R^{n × n_j},  j = 1, ..., t.

Here P_j contains the purely imaginary eigenvalues ±iα_j with partial multiplicities m_j, and y_{1R}^{(j)} ± i y_{1I}^{(j)}, ..., y_{m_j R}^{(j)} ± i y_{m_j I}^{(j)} are the (generalized) eigenvectors associated with ±iα_j (j = 1, ..., s); N_j contains the zero eigenvalues with multiplicities n_j, and z_1, ..., z_{n_j} are the corresponding (generalized) eigenvectors (j = 1, ..., t). Here we say that x_1, ..., x_m are (generalized) eigenvectors associated with the eigenvalue λ_0 if they satisfy

G(λ_0) x_1 = 0,
G(λ_0) x_2 + (2λ_0 M + G) x_1 = 0,
G(λ_0) x_3 + (2λ_0 M + G) x_2 + M x_1 = 0,
...
G(λ_0) x_m + (2λ_0 M + G) x_{m−1} + M x_{m−2} = 0.

On the other hand, (X_2, J_2) is the part of the eigenstructure to be preserved, so (∆M, ∆G, ∆K) must satisfy

(M + ∆M) X_2 J_2^2 + (G + ∆G) X_2 J_2 + (K + ∆K) X_2 = 0.

Together with (3.5), this implies

∆M X_2 J_2^2 + ∆G X_2 J_2 + ∆K X_2 = 0.    (3.7)

The following theorem gives a sufficient condition for (3.7).

Theorem 3.2. Let

∆M = M X_1 Φ X_1^T M,    (3.8)
∆G = M X_1 Φ X_1^T G + G X_1 Φ X_1^T M − M X_1 Φ J_1^T X_1^T M + M X_1 J_1 Φ X_1^T M,    (3.9)
∆K = (M X_1 J_1 + G X_1) Φ (−J_1^T X_1^T M + X_1^T G).    (3.10)

Then for any real symmetric matrix Φ, (∆M, ∆G, ∆K) defined by (3.8)–(3.10) is a solution of (3.7) satisfying the constraints ∆M = ∆M^T, ∆G = −∆G^T, ∆K = ∆K^T.
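The symmetry constraints in Theorem 3.2 hold for any real symmetric Φ and any X_1, J_1 (only equation (3.7) itself needs the Jordan-pair structure), so they can be checked with random data. A sketch (ours; all matrices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
W = rng.standard_normal((n, n)); M = W @ W.T + n * np.eye(n)   # M = M^T > 0
S = rng.standard_normal((n, n)); G = S - S.T                   # G = -G^T
V = rng.standard_normal((n, n)); K = -(V @ V.T)                # K = K^T <= 0
X1 = rng.standard_normal((n, k))
J1 = rng.standard_normal((k, k))
F = rng.standard_normal((k, k)); Phi = F @ F.T                 # symmetric (here even PSD)

# (3.8)-(3.10)
dM = M @ X1 @ Phi @ X1.T @ M
dG = (M @ X1 @ Phi @ X1.T @ G + G @ X1 @ Phi @ X1.T @ M
      - M @ X1 @ Phi @ J1.T @ X1.T @ M + M @ X1 @ J1 @ Phi @ X1.T @ M)
dK = (M @ X1 @ J1 + G @ X1) @ Phi @ (-J1.T @ X1.T @ M + X1.T @ G)
```

Since Φ was chosen positive semidefinite, the check also confirms that ∆M is positive semidefinite and ∆K is negative semidefinite.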

Proof. It is easy to verify that if Φ is symmetric, then (∆M, ∆G, ∆K) defined by (3.8)–(3.10) satisfies the constraints ∆M = ∆M^T, ∆G = −∆G^T, ∆K = ∆K^T. Since J_1 contains all eigenvalues on the imaginary axis, the condition σ(J_1) ∩ σ(−J_2) = ∅ is satisfied automatically, and thus (3.4) holds. By direct substitution and (3.4), we have

∆M X_2 J_2^2 + ∆G X_2 J_2 + ∆K X_2
= M X_1 Φ X_1^T M X_2 J_2^2 + M X_1 Φ X_1^T G X_2 J_2 − M X_1 Φ J_1^T X_1^T M X_2 J_2 + G X_1 Φ X_1^T M X_2 J_2 + M X_1 J_1 Φ X_1^T M X_2 J_2 + (M X_1 J_1 + G X_1) Φ (−J_1^T X_1^T M + X_1^T G) X_2
= M X_1 Φ (X_1^T M X_2 J_2 − J_1^T X_1^T M X_2 + X_1^T G X_2) J_2 + (M X_1 J_1 + G X_1) Φ (X_1^T M X_2 J_2 − J_1^T X_1^T M X_2 + X_1^T G X_2)
= 0.    (3.11)

Hence, the theorem is proved.

Remark (i): In Theorem 3.2, we use (3.4) to show that (3.11) holds. Similarly, we can use (3.2) to show that (∆M, ∆G, ∆K) of the forms

∆M = M X_1 J_1 Φ J_1^T X_1^T M,
∆G = M X_1 J_1 Φ X_1^T K − K X_1 Φ J_1^T X_1^T M,
∆K = −K X_1 Φ X_1^T K,

with Φ symmetric, also satisfies (3.7). Unfortunately, if K is singular, then K + ∆K = (I − K X_1 Φ X_1^T) K is also singular. Consequently, the zero eigenvalues of the original model are preserved in the updated model, which is not what we require.

Remark (ii): Theorem 3.2 shows that when (∆M, ∆G, ∆K) is given by (3.8)–(3.10), the eigenstructure (X_2, J_2) of the original model (M, G, K) is preserved in the updated model (M̃, G̃, K̃).

The following theorem concerns the remaining eigenvalues of the updated model (M̃, G̃, K̃) ≡ (M + ∆M, G + ∆G, K + ∆K).

Theorem 3.3. Let (∆M, ∆G, ∆K) be given by (3.8)–(3.10). Then the eigenvalues of the QEP G̃(λ) = λ^2 M̃ + λ G̃ + K̃ are those of J_2 together with those of the matrix pencil

(J_1 + Φ J_1^T X_1^T M X_1 − Φ X_1^T G X_1,  I + Φ X_1^T M X_1).

Proof. From M X_1 J_1^2 + G X_1 J_1 + K X_1 = 0, we have

G(λ) X_1 = (λ^2 M + λG + K) X_1 = (λ M X_1 + M X_1 J_1 + G X_1)(λI − J_1),

which implies that, for λ ∉ σ(J_1),

λ M X_1 + M X_1 J_1 + G X_1 = G(λ) X_1 (λI − J_1)^{-1}.

From (3.8)–(3.10), we have

G̃(λ) = λ^2 (M + ∆M) + λ(G + ∆G) + (K + ∆K)
= G(λ) + λ^2 M X_1 Φ X_1^T M + λ(M X_1 Φ X_1^T G + G X_1 Φ X_1^T M − M X_1 Φ J_1^T X_1^T M + M X_1 J_1 Φ X_1^T M) + (M X_1 J_1 + G X_1) Φ (−J_1^T X_1^T M + X_1^T G)
= G(λ) + (λ M X_1 + M X_1 J_1 + G X_1) Φ (λ X_1^T M − J_1^T X_1^T M + X_1^T G)
= G(λ) + G(λ) X_1 (λI − J_1)^{-1} Φ (λ X_1^T M − J_1^T X_1^T M + X_1^T G)
= G(λ) (I + X_1 (λI − J_1)^{-1} Φ (λ X_1^T M − J_1^T X_1^T M + X_1^T G)).

Hence, using det(I + RS) = det(I + SR) for R ∈ C^{n×m} and S ∈ C^{m×n}, we obtain

det(G̃(λ)) = det(G(λ)) det(I + X_1 (λI − J_1)^{-1} Φ (λ X_1^T M − J_1^T X_1^T M + X_1^T G))
= det(G(λ)) det(I + (λI − J_1)^{-1} Φ (λ X_1^T M X_1 − J_1^T X_1^T M X_1 + X_1^T G X_1))
= det(G(λ)) det((λI − J_1)^{-1}) det(λI − J_1 + Φ(λ X_1^T M X_1 − J_1^T X_1^T M X_1 + X_1^T G X_1))
= det(G(λ)) det((λI − J_1)^{-1}) det(λ(I + Φ X_1^T M X_1) − (J_1 + Φ J_1^T X_1^T M X_1 − Φ X_1^T G X_1)).

Since the eigenvalues of J_1 are among those of G(λ), the above equality shows that G̃(λ) and G(λ) share the same spectrum, except that the eigenvalues of J_1 are replaced by those of the matrix pencil

(J_1 + Φ J_1^T X_1^T M X_1 − Φ X_1^T G X_1,  I + Φ X_1^T M X_1).    (3.12)

Remark (i): Theorem 3.3 shows that G̃(λ) keeps the eigenvalues of G(λ) with nonzero real parts and changes all purely imaginary eigenvalues of G(λ) to the eigenvalues of the pencil in (3.12). For a randomly chosen symmetric matrix Φ, the updated model G̃(λ) generically has no eigenvalues on the imaginary axis.

Remark (ii): If Φ is chosen to be positive semidefinite, then ∆M defined by (3.8) is positive semidefinite and ∆K in (3.10) is negative semidefinite, which ensures that the corresponding Q̃ = 2(M̃ − K̃) is still positive definite.

For the computation of J_1 and X_1, we use Newton's method [14]. Let iω be a purely imaginary eigenvalue of G(λ). Then det((iω)^2 M + iωG + K) = 0 if and only if

det(D(ω)) ≡ det( [ −ω^2 M + K    ωG         ]
                 [ −ωG           −ω^2 M + K ] ) = 0.    (3.13)

For any ω = ω^(k), let Θ_k D(ω^(k)) = L_k Q_k^T be the LQ-factorization with row pivoting of D(ω^(k)), where L_k is lower triangular, Q_k is orthogonal and Θ_k is an appropriate permutation. We then have the Newton iteration

ω^(k+1) = ω^(k) − 1/x_n^(k),    (3.14)

where x_n^(k) is the last component of the solution x of the system

L_k x = Θ_k D′(ω^(k)) Q_k e_n.    (3.15)

Here e_n is the last column of the identity matrix.
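The iteration (3.14)–(3.15) can be prototyped with SciPy's column-pivoted QR applied to D(ω)^T, which yields a row-pivoted LQ factorization of D(ω). The sketch below (ours) runs it on the data of Example 2 in Section 4, whose purely imaginary eigenvalues are ±1.09205421274186i; the starting value ω^(0) = 1 is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

M = np.eye(2)
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.diag([-5.0, 1.0])   # this QEP has eigenvalues +-1.09205421274186i

def D(w):       # the real 2n x 2n matrix in (3.13)
    return np.block([[-w * w * M + K, w * G], [-w * G, -w * w * M + K]])

def Dprime(w):  # d D(w) / d w
    return np.block([[-2 * w * M, G], [-G, -2 * w * M]])

w = 1.0
for _ in range(50):
    # Column-pivoted QR of D(w)^T gives D(w)[p, :] = L @ Qf^T, L lower triangular.
    Qf, R, p = qr(D(w).T, pivoting=True)
    L = R.T
    x = solve_triangular(L, Dprime(w)[p, :] @ Qf[:, -1], lower=True)
    step = 1.0 / x[-1]     # Newton correction (3.14)
    w -= step
    if abs(step) < 1e-13:
        break
```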

Once the Newton iteration (3.14) has converged to ω, we apply Algorithm 3.1 in [15]. Setting

α := iω,   A := [ −K  0 ]     B := [ G  M ]
                [  0  M ],         [ M  0 ],    (3.16)

the following algorithm computes the Jordan basis X_1 and the Jordan block J_1 corresponding to iω.

Algorithm 3.1:

Initialization: J_1 = ∅, X_1 = ∅, T = ∅, R ≫ 1, ω^(0) = 0.

1. If K is singular, then T = T ∪ {0};
2. apply Algorithm 3.1 of [15] to compute the Jordan block and the Jordan basis (J_{1,0}, X_{1,0}) corresponding to 0 by setting α = 0 and A, B as in (3.16);
3. set X_1 := X_{1,0}, J_1 := J_{1,0}; else set ω^(0) = ε > 0 (small).
4. Repeat: compute ω^(k+1) = ω^(k) − 1/x_n^(k), k = 0, 1, 2, ..., until convergence (by (3.14));
5. set ω = ω^(k+1), T = T ∪ {iω};
6. apply Algorithm 3.1 of [15] to compute the Jordan block and the Jordan basis (J_{1,ω}, X_{1,ω}) by setting α = iω and A, B as in (3.16);
7. X_1 := [X_1, X_{1,ω}], J_1 := J_1 ⊕ J_{1,ω};
8. if ω^(0) ≥ R, stop; else go to Step 4.

Note that the shift strategy proposed in [16], ω^(0) := ω^(0) + 2|ω − ω^(0)|, can be used.

In summary, we have the following algorithm for solving the QEP (1.2):

Algorithm 3.2:

1. Compute X_1, J_1 by using Algorithm 3.1, and choose a real symmetric positive semidefinite matrix Φ;
2. compute the eigenvalues of the matrix pencil (3.12);
3. compute ∆M, ∆G, ∆K from (3.8)–(3.10), and set A = M + ∆M + G + ∆G + K + ∆K, Q = 2(M + ∆M − K − ∆K);
4. apply Algorithm 2.1 to find the maximal solution Z_+ of equation (2.5);
5. compute the eigenvalues μ_i of −Z_+^{-1}A;
6. use λ_i = (1 + μ_i)/(1 − μ_i) to find the eigenvalues in the right half plane, and obtain the eigenvalues in the left half plane by symmetry;
7. replace the eigenvalues of the matrix pencil (3.12) by those of J_1;
8. if eigenvectors are also required, find the right and left eigenvectors x_i and y_i of −Z_+^{-1}A corresponding to μ_i in Step 5; then x_i and (λ_i M + MS + G)^{-T} ȳ_i are the eigenvectors of G(λ) corresponding to λ_i and −λ_i;
9. replace the eigenvectors corresponding to the eigenvalues of the matrix pencil (3.12) by the columns of X_1.

4 Numerical Examples

To illustrate the performance of Algorithm 3.2, we present three numerical examples, computed in MATLAB 6.5 with machine accuracy ε = 2.22 × 10^{-16}.

Example 1. Consider the following example with

M = [ 1  0 ]    G = [  0  1 ]    K = [ −5  0 ]
    [ 0  1 ],       [ −1  0 ],       [  0  0 ].

The QEP has four eigenvalues 2, −2, 0, 0, and the eigenvector and generalized eigenvector associated with the zero eigenvalue are [0, 1]^T and [0.2, 2]^T, respectively. Thus

J_1 = [ 0  1 ]    X_1 = [ 0  0.2 ]
      [ 0  0 ],         [ 1  2   ].
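These numbers are easy to reproduce; a quick NumPy check of the spectrum and of the Jordan chain at 0 (ours, not part of the paper):

```python
import numpy as np

M = np.eye(2)
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.diag([-5.0, 0.0])

# Eigenvalues of the QEP via linearization: expect 2, -2, 0, 0.
A = np.block([[np.zeros((2, 2)), np.eye(2)], [-K, -G]])
B = np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), M]])
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

# Jordan chain at lambda_0 = 0:  G(0) x1 = K x1 = 0  and  K x2 + G x1 = 0.
x1 = np.array([0.0, 1.0])
x2 = np.array([0.2, 2.0])
```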

With a randomly chosen real symmetric positive semidefinite matrix Φ, Algorithm 3.2 converges in 6 iterations. The computed eigenvalues of the updated QEP are

±2, ±0.18947265349913,

and the eigenvalues of the matrix pencil (3.12) are

±0.18947265349913.

From here on, the number of iteration steps refers to that of the algorithm used to solve (2.5). If Algorithm 3 in [1] is applied directly to the original QEP, it does not converge within 1000 iterations. This is because the zero eigenvalues of the QEP are transformed into −1 by the Cayley transformation.

Example 2. Consider the following example with

M = [ 1  0 ]    G = [  0  1 ]    K = [ −5  0 ]
    [ 0  1 ],       [ −1  0 ],       [  0  1 ].

The MATLAB function 'polyeig' computes the four eigenvalues as

±2.04757964523172, ±1.09205421274186i,

and the eigenvectors associated with the purely imaginary eigenvalues are

[−0.17366897685339, 0.98480408532799i]^T and [−0.17366897685339, −0.98480408532799i]^T,

respectively. Thus

J_1 = [ 0                  1.09205421274186 ]    X_1 = [ −0.17366897685339  0                ]
      [ −1.09205421274186  0                ],         [  0                 0.98480408532799 ].

With a randomly chosen real symmetric positive semidefinite matrix Φ, Algorithm 3.2 converges in 5 iterations, and the computed eigenvalues of the updated QEP are

±2.04757964523172, ±0.76662856860975.

The eigenvalues of the matrix pencil (3.12) are

±0.76662856860975.

Again, without the eigenvalue-shifting technique, Algorithm 3 in [1] fails to converge within 1000 iterations. The two purely imaginary eigenvalues of the QEP are transformed into eigenvalues on the unit circle with partial multiplicity 1 by the Cayley transformation, violating the conditions in Theorem 2.5.

Example 3. ([1, Example 2] with g = 3) The 8 eigenvalues of the QEP are

±0.70710679886422 ± 0.70710676350888i, ±√2 i (multiplicity 2),

and the eigenvector and generalized eigenvector associated with √2 i are [0, 0, 1, −√2 i]^T and [0, 0, 0, 1/3]^T. Thus

J_1 = [ 0    √2   1    0  ]    X_1 = [ 0  0    0    0 ]
      [ −√2  0    0    1  ]          [ 0  0    0    0 ]
      [ 0    0    0    √2 ]          [ 1  0    0    0 ]
      [ 0    0    −√2  0  ],         [ 0  −√2  1/3  0 ].

With a randomly chosen real symmetric positive semidefinite matrix Φ, Algorithm 3.2 converges in 6 iterations, and the computed eigenvalues of the updated QEP are

±0.70710679899162 ± 0.70710676339740i, ±0.79537763655493 ± 1.53342432908264i.

The eigenvalues of the matrix pencil (3.12) are

±0.79537763655493 ± 1.53342432908264i.

Algorithm 3 in [1] converges in 27 iterations, and the computed eigenvalues are

±0.70710679913448 ± 0.70710676325454i, ±0.00000000225659 ± 1.41421356237310i.


5 Conclusions

In [1], Guo proposes an algorithm for computing all eigenvalues of a quadratic eigenvalue problem arising from gyroscopic systems, by finding a proper solvent of the quadratic matrix equation MS^2 + GS + K = 0. Using a Cayley transformation and a proper substitution, the problem can be solved by finding the maximal solution of the nonlinear matrix equation Z + A^T Z^{-1} A = Q with A = M + G + K and Q = 2(M − K). A cyclic reduction method, or the equivalent SDA method, can be applied. This approach preserves the Hamiltonian structure of the spectrum of the QEP, and is less expensive than the linearization approach followed by the QZ algorithm. However, it is based on the condition that the QEP has no eigenvalues on the imaginary axis. Although the approach still works in some cases when this condition is violated, the cyclic reduction or SDA method for the maximal solution of the nonlinear matrix equation then converges much more slowly. In this paper, using the concept of model updating, we have proposed an eigenvalue shifting technique that modifies the original gyroscopic system to a new gyroscopic system in which all the purely imaginary eigenvalues are shifted. Hence the SDA method applied to the new system converges quadratically. Numerical examples illustrate the efficiency of our approach.

References

[1] C.H. Guo, Numerical solution of a quadratic eigenvalue problem, Linear Algebra Appl., 385, 391–406, 2004.

[2] F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Review, 43, 235–286, 2001.

[3] P. Lancaster, Lambda-Matrices and Vibrating Systems, Pergamon Press, Oxford, UK, 1966.

[4] P. Lancaster, Strongly stable gyroscopic systems, Electron. J. Linear Algebra, 5, 53–66, 1999.

[5] P. Benner, V. Mehrmann and H. Xu, A numerically stable, structure preserving method for computing the eigenvalues of real Hamiltonian or symplectic pencils, Numer. Math., 78, 329–358, 1998.

[6] C.F. Van Loan, A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix, Linear Algebra Appl., 61, 233–251, 1984.

[7] V. Mehrmann and D. Watkins, Structure-preserving methods for computing eigenpairs of large sparse skew-Hamiltonian/Hamiltonian pencils, SIAM J. Sci. Comput., 22, 1905– 1925, 2001.

[8] B. Meini, Efficient computation of the extreme solutions of X + A^*X^{-1}A = Q and X − A^*X^{-1}A = Q, Math. Comp., 71, 1189–1204, 2002.

[9] W.W. Lin and S.F. Xu, Convergence analysis of structure-preserving doubling algorithms for Riccati-type matrix equations, SIAM J. Matrix Anal. Appl., 28(1), 26–39, 2006.

[10] E. K.-w. Chu, T.M. Hwang, W.W. Lin and S.F. Xu, On a doubling algorithm for the nonlinear matrix equation X + A^T X^{-1}A = Q when |λ(X^{-1}A)| ≤ 1, National Centre for Theoretical Sciences, National Tsing Hua University, Preprints in Mathematics, 2006.

[11] J.C. Engwerda, A.C.M. Ran and A.L. Rijkeboer, Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation X + A^*X^{-1}A = Q, Linear Algebra Appl., 186, 255–275, 1993.

[12] C.H. Guo and P. Lancaster, Iterative solution of two matrix equations, Math. Comp., 68, 1589–1603, 1999.

[13] M.T. Chu, W.W. Lin and S.F. Xu, Updating quadratic models with no spill-over, submitted.

[14] V.N. Kublanovskaja, On an application of Newton’s method to the determination of eigenvalues of λ-matrices, Soviet Math. Dokl., 10(5), 1240–1241, 1969.

[15] P. Van Dooren, The computation of Kronecker's canonical form of a singular pencil, Linear Algebra Appl., 27, 103–140, 1979.

[16] T. Ericsson and A. Ruhe, The spectral transformation Lanczos method for the numerical solution of large sparse generalized symmetric eigenvalue problems, Math. Comp., 35(152), 1251–1268, 1980.
