**NUMERICAL SOLUTION OF QUADRATIC EIGENVALUE**
**PROBLEMS WITH STRUCTURE-PRESERVING METHODS**^{∗}

TSUNG-MIN HWANG^{†}, WEN-WEI LIN^{‡}, AND VOLKER MEHRMANN^{§}

**Abstract.** Numerical methods for the solution of large scale structured quadratic eigenvalue problems are discussed. We describe a new extraction procedure for the computation of eigenvectors and invariant subspaces of skew-Hamiltonian/Hamiltonian pencils using the recently proposed skew-Hamiltonian isotropic implicitly restarted Arnoldi method (SHIRA).

As an application we discuss damped gyroscopic systems. For this problem we first solve the eigenvalue problem for the undamped system using the structure-preserving method and then use the quadratic Jacobi–Davidson method as a correction procedure. We also illustrate the properties of the new approach for several other application problems.

**Key words.** quadratic eigenvalue problems, skew-Hamiltonian/Hamiltonian pencils, invariant subspace, gyroscopic system, quadratic Jacobi–Davidson method, nonequivalence deflation technique

**AMS subject classifications.** 15A18, 47A15, 47J10

**PII.** S106482750139220X

**1. Introduction.** In this paper we study numerical methods for computing eigenpairs or invariant subspaces of the quadratic eigenvalue problem

(λ^{2}M + λ(G + εD) + K)x = 0,    (1)

where M, G, D, K are square n × n real matrices with M = M^{T}, G = −G^{T}, D = D^{T}, and K = K^{T}. Typically, the matrices M and K or −K represent the mass matrix and the stiffness matrix, respectively; G and εD represent the gyroscopic forces and the damping, respectively, of the system. We not only concentrate on the case in which ε is a small parameter, but also discuss in detail the undamped case ε = 0.

Quadratic eigenvalue problems (1) arise in the solution of initial or boundary value problems for second order systems of the form

M ẍ + (G + εD) ẋ + Kx = f    (2)

and in numerous other applications. These include ﬁnite element discretization in structural analysis [34], in acoustic simulation of poro-elastic materials [22, 32, 35], or in the elastic deformation of anisotropic materials [2, 19, 33]. See also [42] for a recent survey.

The classical approach to solving quadratic eigenvalue problems is to turn them into linear eigenvalue problems by introducing a new vector y = λx. In the case

^{∗}Received by the editors July 10, 2001; accepted for publication (in revised form) August 19, 2002; published electronically February 6, 2003. This research was partially supported by the National Center for Theoretical Science, National Tsing Hua University, Hsinchu, Taiwan.

http://www.siam.org/journals/sisc/24-4/39220.html

^{†}Department of Mathematics, National Taiwan Normal University, Taipei 116, Taiwan, Republic of China (min@math.ntnu.edu.tw).

^{‡}Department of Mathematics, National Tsing Hua University, Hsinchu 300, Taiwan, Republic of China (wwlin@am.nthu.edu.tw).

^{§}Institut für Mathematik, MA 4-5, Technische Universität Berlin, Str. des 17. Juni 136, D-10623 Berlin, Germany (mehrmann@math.tu-berlin.de). This author was also partially supported by Deutsche Forschungsgemeinschaft research grant Me 790/11-3.


of (1) this leads to the linearized generalized eigenvalue problem

(λ [M  G + εD; 0  I] − [0  −K; I  0]) [y; x] = 0.    (3)

If (λ, [y; x]) is an eigenpair of (3), then x is an eigenvector of (1) associated with the eigenvalue λ. The approach (3) allows us to determine eigenpairs numerically, since for generalized eigenvalue problems like (3) the mathematical theory, the numerical methods, and the perturbation theory are well established [1, 11, 39]. However, a difficulty of the linearization approach is that, due to the embedding into a problem of double size, the condition number, i.e., the sensitivity of the eigenvalues and eigenvectors with respect to perturbations in the data matrices M, G, D, K, may increase. This is because the set of admissible perturbations for (3) is larger than for (1). If, however, the perturbations respected the specific structure of the blocks in (3), i.e., the zero and identity blocks and the symmetries, then the perturbation results would be the same. Furthermore, there are many different ways to perform the linearization, with different conditioning, as has been demonstrated in [40]. In view of these remarks it would be ideal to have a numerical method that works directly with the original data of the quadratic eigenvalue problem and that avoids the problem of increased condition numbers. It is difficult to develop such a method, and there are only a few methods that partially fulfill these requirements, such as the quadratic or polynomial Jacobi–Davidson method [14, 36, 37].
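To make the linearization step concrete, the following NumPy sketch (with hypothetical random coefficients carrying the symmetries of (1)) builds the pencil (3) and checks that each of its eigenpairs yields an eigenpair of the quadratic problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random coefficients with the symmetry structure of (1)
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)   # M = M^T (SPD here)
K = rng.standard_normal((n, n)); K = (K + K.T) / 2             # K = K^T
G = rng.standard_normal((n, n)); G = (G - G.T) / 2             # G = -G^T
eD = 0.01 * np.eye(n)                                          # small damping eps*D

# Linearization (3): (lam*A - B) [y; x] = 0 with y = lam*x
A = np.block([[M, G + eD], [np.zeros((n, n)), np.eye(n)]])
B = np.block([[np.zeros((n, n)), -K], [np.eye(n), np.zeros((n, n))]])

# A is invertible, so the pencil reduces to an ordinary eigenproblem
lam, V = np.linalg.eig(np.linalg.solve(A, B))

max_res = 0.0
for j in range(2 * n):
    x = V[n:, j]                  # lower block is the eigenvector of (1)
    r = (lam[j] ** 2 * M + lam[j] * (G + eD) + K) @ x
    max_res = max(max_res, np.linalg.norm(r) /
                  (np.linalg.norm(x) * (1 + abs(lam[j]) ** 2)))
```

The relative residual `max_res` of the recovered quadratic eigenpairs is at machine-precision level, illustrating the eigenvector relation stated after (3).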

In many applications the original quadratic eigenvalue problem has extra structure that should be reflected in its linearization. A typical structure is the Hamiltonian eigenvalue symmetry, i.e., the spectrum is symmetric with respect to the real and imaginary axes. This symmetry occurs if ε = 0 in (1), which is the case, for example, in gyroscopic systems [18] and in elasticity problems [2, 26]; see [42] for a recent survey. A similar eigenvalue symmetry arises in optimal control problems, for which the variational principle leads to a skew-Hamiltonian/Hamiltonian (SHH) pencil that has properties similar to those of (3) in the undamped case; see [5, 25, 26].

In these applications the generalized eigenvalue problem arises directly and does not stem from the linearization of a quadratic eigenvalue problem.

Another structure that should be reﬂected in a proper method is preservation of the sparsity structure inherited from ﬁnite element or ﬁnite diﬀerence discretizations [3, 26, 34].

In this paper we first discuss the solution of large sparse generalized eigenvalue problems that have a Hamiltonian eigenvalue symmetry and then allow perturbations of this structure. The use of structure-preserving linearizations for problem (1) in the undamped case was first suggested and successfully employed in [10, 26] to design structure-preserving methods for large sparse linearized quadratic eigenvalue problems with a Hamiltonian eigenvalue symmetry. We make use of these techniques, in particular the skew-Hamiltonian isotropic implicitly restarted Arnoldi algorithm (SHIRA) proposed in [26]. We briefly recall this algorithm in section 2.

The key feature of the SHIRA algorithm is that it inherits the convergence properties, the implicit restart, and the reorthogonalization techniques of the standard shift-and-invert Arnoldi algorithm [20, 31, 38], while being, in general, more efficient.

Furthermore, it generates isotropic Krylov subspaces to guarantee that the computed spectrum has the correct eigenvalue symmetry. In [41] the stability properties of the methods that preserve the Hamiltonian structure (although not for this algorithm) are analyzed.

A disadvantage of the SHIRA algorithm is that it does not directly generate the eigenvectors or invariant subspaces of the linearization. Instead, they are determined in [26] using a few steps of inverse or subspace iteration [11, 44], respectively. For these iterations one has to solve a linear system in every step, and hence extra sparse matrix factorizations are needed. This creates a bottleneck in the computation and also limits the possible system size. Even though, in practice, only a few iterations are needed, these extra factorizations add substantial computational overhead to the method.

In this paper we develop a new extraction method for invariant subspaces in the SHIRA algorithm that avoids extra sparse matrix factorizations. This method is a modiﬁcation of an idea suggested in [45, 46] for the computation of the stable invariant subspace of a Hamiltonian matrix. We compare the performance of this extraction procedure with that of classical inverse or subspace iteration in section 5.

As an application we then discuss numerical methods for problems like (1) that have a damping term with ε ≠ 0. In this problem the Hamiltonian eigenvalue symmetry is destroyed, and therefore structure-preserving methods cannot be used. On the other hand, if the damping is small, then we can expect that the damped system has a spectrum and invariant subspaces close to those of the structured system.

We discuss this topic in section 4 and introduce a method that first computes the eigenvalues, eigenvectors, or invariant subspaces of the structured system and then uses them as initial vectors in the quadratic Jacobi–Davidson procedure [3, 36] to compute the eigenvalues, eigenvectors, or invariant subspaces of the damped system.

A similar approach for problems with small damping in structural mechanics was suggested in [27]. In order to treat eigenvalue problems with clusters of eigenvalues, a nonequivalence low-rank transformation technique [9, 12] is used to deﬂate the computed eigenpairs. We compare this approach with the classical shift-and-invert subspace iteration (SISI) for several application problems in section 5.

**2. Structure-preserving algorithms.** In this section we briefly recall the SHIRA algorithm of [26], which is designed for the computation of a small number of specified eigenvalues of real large scale generalized eigenvalue problems for SHH pencils of the form

αN − βH = α [F_1  G_1; H_1  F_1^{T}] − β [F_2  G_2; H_2  −F_2^{T}],    (4)

where G_1 = −G_1^{T}, H_1 = −H_1^{T}, G_2 = G_2^{T}, and H_2 = H_2^{T}.

Matrices N and H in (4) are called skew-Hamiltonian and Hamiltonian, respectively. If J = [0  I_n; −I_n  0] with I_n the n × n identity matrix, then skew-Hamiltonian matrices satisfy (NJ)^{T} = −NJ and Hamiltonian matrices satisfy (HJ)^{T} = HJ. The special structure of SHH pencils ensures [23, 24, 25] that the eigenvalues occur in quadruples λ, λ̄, −λ, −λ̄ if λ is complex, or in pairs λ, −λ if λ is real. Moreover, if x is a right eigenvector associated with an eigenvalue λ, then Jx is a left eigenvector associated with −λ.
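These structure conditions are easy to check numerically. The following NumPy sketch (hypothetical random data) builds a Hamiltonian matrix in the block form suggested by (4) and verifies the defining identity and the ± eigenvalue pairing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# A Hamiltonian matrix H = [[F, G], [Hm, -F^T]] with G, Hm symmetric
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = (G + G.T) / 2
Hm = rng.standard_normal((n, n)); Hm = (Hm + Hm.T) / 2
H = np.block([[F, G], [Hm, -F.T]])

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

ham_err = np.linalg.norm((H @ J).T - H @ J)     # (HJ)^T = HJ

# eigenvalues occur in pairs lam, -lam (quadruples when complex)
ev = np.linalg.eigvals(H)
pair_err = max(np.min(np.abs(ev + mu)) for mu in ev)
```

`ham_err` vanishes by construction, and `pair_err` is small because for each eigenvalue λ the value −λ is also in the spectrum.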

If the skew-Hamiltonian matrix N in the SHH pencil αN − βH in (4) is invertible and given in the factored form N = Z_1 Z_2 with Z_2^{T} J = ±J Z_1, then the pencil is equivalent to a pencil of the form αI − βW, where

W = ±Z_1^{−1} H Z_2^{−1}

is Hamiltonian. The factorization of N can be easily determined via a Cholesky-like decomposition for skew-symmetric matrices developed in [7]; see also [4].

Suppose that one wishes to compute the eigenvalues of W nearest to some target value λ_0. Then typically one would use a shift-and-invert transformation of the form (W − λ_0 I)^{−1} to get the eigenvalues near λ_0 via a Krylov subspace method. But this matrix is no longer Hamiltonian. In order to preserve the structure, [26] suggests a rational transformation with four shifts (λ_0, λ̄_0, −λ_0, −λ̄_0) given by

R_1(λ_0, W) = (W − λ_0 I)^{−1}(W + λ_0 I)^{−1}(W − λ̄_0 I)^{−1}(W + λ̄_0 I)^{−1}.    (5)

If the target λ_0 is either real or purely imaginary, one may use the simpler transformation

R_2(λ_0, W) = (W − λ_0 I)^{−1}(W + λ_0 I)^{−1}.    (6)

If the matrix W is real and Hamiltonian, then both matrices R_1 and R_2 are real and skew-Hamiltonian, and hence their eigenvalues have algebraic multiplicity at least two if they are not purely imaginary. For a real skew-Hamiltonian matrix K and a given vector q_1, an Arnoldi iteration applied to K would generate the Krylov space

V(K, q_1, k) = span{[q_1, Kq_1, . . . , K^{k−1} q_1]}.

Using an appropriate orthogonal basis of this space, given by the columns of an orthogonal matrix Q_k, one produces a "Ritz"-projection

K_k = Q_k^{T} K Q_k.

The "Ritz"-values, i.e., the eigenvalues of K_k, are then used as eigenvalue approximations.

In order to obtain a structure-preserving method, one needs a "Ritz"-projection that is again skew-Hamiltonian. For this one needs an isotropic subspace V, i.e., a subspace for which x^{T} J y = 0 for all x, y ∈ V; see [26]. Let V_k ∈ R^{2n×k} and let

Q^{T} V_k = [R_k; 0; T_k; 0]    (7)

be the symplectic QR-factorization [8] of V_k, where Q ∈ R^{2n×2n} is orthogonal and symplectic and where R_k, T_k ∈ R^{k×k} are upper and strictly upper triangular, respectively. If V_k is isotropic and of full column rank, then it is shown in [8] that R_k^{−1} exists and that T_k = 0. This means, in particular, that if V_n is of full column rank, then there exists an orthogonal symplectic matrix Q ≡ [Q_n | J Q_n] with Q_n ∈ R^{2n×n} such that

[Q_n^{T}; Q_n^{T} J^{T}] K [Q_n | J Q_n] = [H_n  K_n; 0  H_n^{T}].    (8)

Here H_n = Q_n^{T} K Q_n is an upper Hessenberg matrix and K_n = Q_n^{T} K J Q_n is skew-symmetric. This idea is the basis for the structure-preserving Arnoldi algorithm SHIRA introduced in [26], which is given by the recursion

K Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^{T},    (9)

where H_k is the leading k × k principal submatrix of H_n and e_k = [0, . . . , 0, 1]^{T} ∈ R^{k×1}. Since Q_k is orthogonal and Q_k^{T} J Q_k = 0 for k < n, it is easily seen that [Q_k | J Q_k] ∈ R^{2n×2k} is orthogonal. This implies that

[Q_k^{T}; Q_k^{T} J^{T}] K [Q_k | J Q_k] = [H_k  G_k; 0  H_k^{T}] ∈ R^{2k×2k}    (10)

is skew-Hamiltonian and G_k = Q_k^{T} K J Q_k is skew-symmetric.
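The isotropic Arnoldi idea behind (9) and (10) can be sketched in a few lines. The following NumPy illustration (dense matrices, no implicit restarts, hypothetical random data, so not the actual SHIRA implementation) runs an Arnoldi iteration on the skew-Hamiltonian operator K = W^2 and additionally orthogonalizes each new vector against J q_1, . . . , J q_j, which enforces isotropy against roundoff:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = (S + S.T) / 2
R = rng.standard_normal((n, n)); R = (R + R.T) / 2
W = np.block([[A, S], [R, -A.T]])               # Hamiltonian
Kop = W @ W                                     # skew-Hamiltonian operator
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

k = 4
Q = np.zeros((2 * n, k + 1))
H = np.zeros((k + 1, k))
q = rng.standard_normal(2 * n)
Q[:, 0] = q / np.linalg.norm(q)

for j in range(k):
    w = Kop @ Q[:, j]
    for i in range(j + 1):          # standard Arnoldi orthogonalization
        H[i, j] = Q[:, i] @ w
        w = w - H[i, j] * Q[:, i]
    for i in range(j + 1):          # extra isotropic orthogonalization against J q_i
        Jqi = J @ Q[:, i]           # (coefficients vanish in exact arithmetic)
        w = w - (Jqi @ w) * Jqi
    H[j + 1, j] = np.linalg.norm(w)
    Q[:, j + 1] = w / H[j + 1, j]

Qk = Q[:, :k]
iso_err = np.linalg.norm(Qk.T @ J @ Qk)          # isotropy: Q_k^T J Q_k ~ 0
orth_err = np.linalg.norm(Qk.T @ Qk - np.eye(k))
arnoldi_err = np.linalg.norm(Kop @ Qk - Q @ H)   # recursion (9)
```

Because the extra coefficients (J q_i)^{T} K q_j are zero in exact arithmetic (Remark 1(b) below), the Arnoldi recursion (9) still holds to working precision.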

Let (θ_i, v_i) be an eigenpair of H_k, i.e., H_k v_i = θ_i v_i, and let x_i = Q_k v_i be a "Ritz"-vector of the eigenvalue problem Kx = µx corresponding to the "Ritz"-value θ_i. Then we have the following residual bound (see [31]) for the "Ritz"-pair (θ_i, x_i):

‖K x_i − θ_i x_i‖_2 = ‖K Q_k v_i − θ_i Q_k v_i‖_2 = ‖(Q_k H_k + q_{k+1} h_{k+1,k} e_k^{T}) v_i − θ_i Q_k v_i‖_2 = ‖q_{k+1} h_{k+1,k} (e_k^{T} v_i)‖_2 = |h_{k+1,k}| |e_k^{T} v_i|.

Remark 1.

(a) The transformations R_1(λ_0, W) and R_2(λ_0, W) can be viewed as a structure-preserving shift-and-invert strategy to accelerate the convergence of the desired eigenvalues.

(b) In exact arithmetic, the values t_{i,k} = (J q_i)^{T} K q_k in the kth step of the SHIRA algorithm are zero, but in practice roundoff errors cause them to be nonzero. It has been shown in [26] how to design an isotropic orthogonalization scheme to ensure that the spaces span{q_1, . . . , q_k} are isotropic to working precision.

(c) To avoid the reorthogonalization, after a number of steps the SHIRA algorithm also uses implicit restarts as in [38].

(d) A minor disadvantage of the SHIRA method is that due to the rational trans-
*formations (5) or (6) the eigenvectors associated with pairs of eigenvalues λ*
*and −λ in the real case or λ and −¯λ in the complex case are both mapped to*
*the same invariant subspace associated with |λ|*^{2}. This has the consequence
that in many applications the desired invariant subspace associated with the
stable eigenvalues is not obtained directly from the method. In section 3
we discuss an extraction method that allows us to compute this subspace
directly.

**3. An extraction method for stable eigenspaces.** In this section we discuss a method for computing specific invariant subspaces of a Hamiltonian matrix W directly from an isotropic invariant subspace of the skew-Hamiltonian matrix W^{2}, without using inverse or subspace iteration. In many applications, we are interested in a subspace associated with eigenvalues in the left half plane. The same method can also be applied to the skew-Hamiltonian functions R_1, R_2 introduced in section 2.

The method that we describe is a modification of a technique first suggested in [45, 46]. Suppose that W has no eigenvalues on the imaginary axis and let Q_n be as in (8) for K = W^{2}. Then

W^{2} Q_n = Q_n H_n.    (11)

Let Q_k ∈ R^{2n×k} (k ≤ n) be a basis of an isotropic invariant subspace of W^{2} such that

W^{2} Q_k = Q_k Ω_k,    (12)

where for the spectrum σ we have σ(Ω_k) ⊆ σ(H_n) and

σ(Ω_k) ∩ {σ(H_n) \ σ(Ω_k)} = ∅,

i.e., the complete multiplicity of a multiple eigenvalue is included. Since we have
*assumed that W has no purely imaginary eigenvalues, it follows that there exist two*
*bases V*_{k}^{−}*and V*_{k}^{+}*, both of dimension k, associated with the stable and unstable*
*isotropic, invariant subspaces of W, respectively, such that*

W V_k^{−} = V_k^{−} Λ_k^{−},    W V_k^{+} = V_k^{+} Λ_k^{+},    (13)

where

σ(Λ_k^{−}) = {λ | Re(λ) < 0, λ^{2} ∈ σ(Ω_k)},    σ(Λ_k^{+}) = {λ | Re(λ) > 0, λ^{2} ∈ σ(Ω_k)}.    (14)

Since by assumption Ω_k has no purely imaginary eigenvalue, it follows that Ω_k has a unique square root X_k [16] satisfying X_k^{2} = Ω_k such that all its eigenvalues have positive real part. We call this root the positive square root. It can, for example, be computed via the MATLAB function sqrtm [13].
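The paper computes this root with MATLAB's sqrtm; as an illustration only, and under the additional assumption that the matrix is diagonalizable, the positive square root can also be formed from an eigendecomposition, since the principal complex square root maps each eigenvalue into the right half plane:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4
# Build Omega = Y^2 from a matrix Y whose spectrum lies in the right half plane,
# so the positive square root of Omega is known to be Y itself.
Y = rng.standard_normal((k, k)) + 3 * np.eye(k)
Omega = Y @ Y

d, V = np.linalg.eig(Omega.astype(complex))
X = ((V * np.sqrt(d)) @ np.linalg.inv(V)).real   # np.sqrt picks the Re >= 0 branch

sq_err = np.linalg.norm(X @ X - Omega)           # X is a square root
min_re = np.linalg.eigvals(X).real.min()         # spectrum in right half plane
root_err = np.linalg.norm(X - Y)                 # uniqueness: X recovers Y
```

For defective or badly conditioned Ω_k one would use a Schur-based method (as sqrtm does) rather than this eigendecomposition sketch.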

The following theorem shows how to extract the stable eigenspace span(V_k^{−}) from the isotropic invariant subspace span(Q_k).

Theorem 1. Let W ∈ R^{2n×2n} be Hamiltonian and let Q_k ∈ R^{2n×k} be as in (12). Suppose that there exist nonsingular matrices L_k^{−} and L_k^{+} ∈ R^{k×k} such that

Q_k = V_k^{+} L_k^{+} + V_k^{−} L_k^{−},    (15)

where V_k^{+} and V_k^{−} are as in (13). Then for X_k, the positive square root of Ω_k, we have

span(W Q_k − Q_k X_k) = span(V_k^{−}).    (16)

Proof. From (13) and (15) we have

W Q_k − Q_k X_k = W(V_k^{+} L_k^{+} + V_k^{−} L_k^{−}) − (V_k^{+} L_k^{+} + V_k^{−} L_k^{−}) X_k = V_k^{+}(Λ_k^{+} L_k^{+} − L_k^{+} X_k) + V_k^{−}(Λ_k^{−} L_k^{−} − L_k^{−} X_k).

If we can prove that Λ_k^{+} L_k^{+} − L_k^{+} X_k = 0 and that Λ_k^{−} L_k^{−} − L_k^{−} X_k is nonsingular, then assertion (16) follows. To do this, we show first that

X_k = (L_k^{+})^{−1} Λ_k^{+} L_k^{+}.    (17)

From (15) and (13) we have

W^{2} Q_k = W^{2}(V_k^{+} L_k^{+} + V_k^{−} L_k^{−}) = V_k^{+}(Λ_k^{+})^{2} L_k^{+} + V_k^{−}(Λ_k^{−})^{2} L_k^{−}.    (18)

On the other hand, from (12) and (15) we have

W^{2} Q_k = Q_k Ω_k = (V_k^{+} L_k^{+} + V_k^{−} L_k^{−}) Ω_k.    (19)

Subtracting (19) from (18) we obtain

[V_k^{+}  V_k^{−}] [(Λ_k^{+})^{2} L_k^{+} − L_k^{+} Ω_k; (Λ_k^{−})^{2} L_k^{−} − L_k^{−} Ω_k] = 0.

Since [V_k^{+}  V_k^{−}] has full rank, it follows immediately that (Λ_k^{+})^{2} L_k^{+} − L_k^{+} Ω_k = 0, and (17) follows due to the uniqueness of X_k.

We also have (Λ_k^{−})^{2} L_k^{−} − L_k^{−} Ω_k = 0, and the uniqueness of the positive square root of Ω_k implies that

X_k = (L_k^{−})^{−1} Z_k L_k^{−},    (20)

where Z_k denotes the unique positive square root of (Λ_k^{−})^{2}. Hence, Λ_k^{−} L_k^{−} − L_k^{−} X_k = (Λ_k^{−} − Z_k) L_k^{−} is nonsingular. This completes the proof.

Theorem 1 indicates a way to determine the desired stable invariant subspace. We apply SHIRA to the skew-Hamiltonian operators R_1(λ_0, W) and R_2(λ_0, W) as in (5) and (6), respectively, and determine the associated isotropic invariant subspace Q_k.

If the target λ_0 ∉ σ(W) is real or purely imaginary, and if the entry h_{k+1,k} of H_k is negligible, then the space

span(Q_k) = span{q_1, . . . , q_k}

generated by SHIRA is a good approximation of an isotropic invariant subspace of R_2(λ_0, W). Then Q_k satisfies (approximately)

R_2(λ_0, W) Q_k = (W^{2} − λ_0^{2} I)^{−1} Q_k = Q_k H_k,    (21)

where H_k is an upper Hessenberg matrix. This implies that

Ω_k = Q_k^{T} W^{2} Q_k = H_k^{−1} + λ_0^{2} I.    (22)

Applying Theorem 1, we can extract the stable isotropic eigenspace V_k^{−} of W from Ω_k by computing its positive square root.

If the target λ_0 ∉ σ(W) is neither real nor purely imaginary and if h_{k+1,k} is negligible, then the space span(Q_k) = span{q_1, . . . , q_k} generated by SHIRA is a good approximation to an isotropic invariant subspace of R_1(λ_0, W). However, as has been shown in [26], it may happen that this subspace fails to be invariant under W^{2}.

It is therefore necessary to check whether or not span(Q_k) is invariant under W^{2} by computing the residual

‖W^{2} Q_k − Q_k Ω_k‖_F,

with Ω_k = Q_k^{T} W^{2} Q_k. If this residual is small, then the procedure to compute the
invariant subspace is as before.

We summarize the extraction method in the following algorithm.

Algorithm 1 (Extraction method (EM) for the stable invariant subspace or eigenvectors of a Hamiltonian matrix).

Input: Hamiltonian matrix W and a target value λ_0 with negative real part.

Output: Approximate invariant subspace V_ℓ^{−} of W associated with eigenvalues of negative real part nearest to λ_0.

(i) IF λ_0 = α or λ_0 = iα for α ∈ R, THEN apply SHIRA to R_2(λ_0, W), i.e.,
    (a) generate Arnoldi vectors as columns of Q_k = [q_1, . . . , q_k] and an upper Hessenberg matrix H_k ∈ R^{k×k} such that (W^{2} − λ_0^{2} I)^{−1} Q_k = Q_k H_k.
    (b) Compute Ω_k = H_k^{−1} + λ_0^{2} I.
(ii) ELSE IF λ_0 = α + iβ, where α, β ∈ R, THEN apply SHIRA to R_1(λ_0, W), i.e.,
    (a) generate Arnoldi vectors as columns of Q_k = [q_1, . . . , q_k] and an upper Hessenberg matrix H_k ∈ R^{k×k} such that R_1(λ_0, W) Q_k = Q_k H_k.
    (b) Compute Ω_k = Q_k^{T} W^{2} Q_k.
END IF
(iii) Compute the real Schur decomposition of Ω_k as Ω_k = U_k T_k U_k^{T}.
(iv) Reorder the stable eigenvalues of T_k to the top of T_k using the reordering method for the Schur form of [1], i.e.,

T_k = V_k [T_{11}  T_{12}; 0  T_{22}] V_k^{T},

where T_{11} ∈ R^{ℓ×ℓ} has the stable eigenvalues.
(v) Set Q̃_ℓ := Q_k U_k V_k [I_ℓ; 0].
(vi) Compute the unique positive square root T_{11}^{1/2} of T_{11} using the method sqrtm of [21].
(vii) Compute the stable invariant subspace V_ℓ^{−} = W Q̃_ℓ − Q̃_ℓ T_{11}^{1/2}.

The computational effort for the computation of the square root of Ω_k is of order k^{3}. If k is small compared to n, then this cost is negligible and it does not add much to the cost of the SHIRA iteration.

Remark 2. In a worst case analysis this procedure may suffer from a loss of accuracy due to the fact that we are computing the square root of the Schur form of a function of W that contains at least quadratic terms in W. Thus, effects similar to those of the square reduced method for the Hamiltonian eigenvalue problem may occur (see [43]), where the squaring of W leads to a loss of accuracy for small eigenvalues. See also the analysis in [45]. We will demonstrate this in Example 2 and also show that one step of inverse iteration fixes this accuracy loss.

Algorithm 1 can be applied directly to large sparse problems of the form (4).

An important application here is the linear quadratic optimal control problem for descriptor systems, where the pencil has the form [5, 6, 25]

α [E  0; 0  E^{T}] − β [A  BB^{T}; C^{T}C  −A^{T}].    (23)

Such problems arise in the optimal control of semidiscretized parabolic partial diﬀer- ential equations [15, 28, 29, 30].

Assume that the skew-Hamiltonian matrix N in (4) is given in factored form

N = Z_1 Z_2, where Z_2^{T} J = ±J Z_1.    (24)

*Such a factorization (called J-Cholesky factorization) exists for all real skew-Hamiltonian*
matrices [5] and it can be obtained either trivially or numerically via a Cholesky-like
factorization of skew-symmetric matrices [7].

If N is invertible, then using this factorization we can (at least formally) transform the SHH pencil αN − βH to a standard eigenvalue problem αI − βW, where W = ±Z_1^{−1} H Z_2^{−1} is Hamiltonian. If in (23) the matrix E is invertible, then the resulting Hamiltonian eigenvalue problem is

α [I  0; 0  I] − β [E^{−1}A  E^{−1}BB^{T}E^{−T}; C^{T}C  −A^{T}E^{−T}].    (25)

Numerical examples of this type are presented in section 5.
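As a sanity check on (25), the following NumPy sketch (hypothetical dimensions and random data for E, A, B, C, with E invertible) forms the matrix of the reduced problem and verifies both the Hamiltonian identity and the ± eigenvalue symmetry:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 4, 2, 2
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

Ei = np.linalg.inv(E)
W = np.block([[Ei @ A, Ei @ B @ B.T @ Ei.T],
              [C.T @ C, -A.T @ Ei.T]])              # the matrix of (25)
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

ham_err = np.linalg.norm((W @ J).T - W @ J)          # (WJ)^T = WJ

# Hamiltonian eigenvalue symmetry: the spectrum comes in +/- pairs
ev = np.linalg.eigvals(W)
pair_err = max(np.min(np.abs(ev + mu)) for mu in ev)
```

In large scale applications E^{−1} would of course never be formed explicitly; the operator would be applied through sparse factorizations of E.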

**4. Damped gyroscopic systems.** In this section we show how the extraction procedure given by Algorithm 1 can be used to solve quadratic eigenvalue problems for systems of the form (1). To do this we first study the undamped case (ε = 0), i.e.,

λ^{2}Mx + λGx + Kx = 0,    (26)

with M = M^{T} and K = K^{T} positive definite and G = −G^{T}. Using the linearization (3) and the factorization

N = Z_1 Z_2 = [I  ½G; 0  M] [M  ½G; 0  I]

yields the Hamiltonian eigenvalue problem

*(λI − W)x = (λI − Z*_{1}^{−1}*HZ*_{2}^{−1}*)x = 0.*

(27)

The matrix (W − λI)^{−1} can be factored as

(W − λI)^{−1} = [M  ½G; 0  I] [I  λI; 0  I] [0  M^{−1}; −Q(λ)^{−1}  0] [I  ½G + λM; 0  M],    (28)

where

Q(λ) = λ^{2}M + λG + K.    (29)

To this problem we can directly apply the extraction procedure given by Algorithm 1.
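The factorization (28) is what makes the shift-and-invert steps cheap: applying (W − λI)^{−1} costs one solve with the sparse quadratic matrix Q(λ) plus solves with M. The following NumPy sketch (small dense hypothetical M, G, K) verifies the factorization by checking that the product of the four factors is indeed the inverse of W − λI:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
M = rng.standard_normal((n, n)); M = M @ M.T + np.eye(n)   # SPD mass
K = rng.standard_normal((n, n)); K = K @ K.T + np.eye(n)   # SPD stiffness
G = rng.standard_normal((n, n)); G = G - G.T               # gyroscopic, skew

Z1 = np.block([[np.eye(n), G / 2], [np.zeros((n, n)), M]])
Z2 = np.block([[M, G / 2], [np.zeros((n, n)), np.eye(n)]])
H = np.block([[np.zeros((n, n)), -K], [M, np.zeros((n, n))]])  # (30) with eps = 0
W = np.linalg.inv(Z1) @ H @ np.linalg.inv(Z2)

lam = 0.3 + 0.2j                      # arbitrary non-eigenvalue shift
Q = lam ** 2 * M + lam * G + K        # Q(lambda) of (29)

F = (Z2 @ np.block([[np.eye(n), lam * np.eye(n)],
                    [np.zeros((n, n)), np.eye(n)]])
        @ np.block([[np.zeros((n, n)), np.linalg.inv(M)],
                    [-np.linalg.inv(Q), np.zeros((n, n))]])
        @ np.block([[np.eye(n), G / 2 + lam * M],
                    [np.zeros((n, n)), M]]))

fact_err = np.linalg.norm(F @ (W - lam * np.eye(2 * n)) - np.eye(2 * n))
```

In practice only the action of the factors on a vector is used, so one never inverts Q(λ) or M explicitly; this dense example exists only to confirm the algebra of (28).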

If we include the damping term in the gyroscopic system as in (1), then we cannot use the structured SHIRA method directly. But we can still linearize the system (1) and obtain the perturbed SHH system (3) or, in the diﬀerent form,

(λ [M  G; 0  M] − [−εD  −K; M  0]) [y; x] = 0,    (30)

which is now a pencil with one skew-Hamiltonian and one perturbed Hamiltonian matrix. We can still use the factorization (27) of the skew-Hamiltonian matrix and obtain the perturbed Hamiltonian eigenvalue problem

(λI − Ŵ)x = (λI − Z_1^{−1} Ĥ Z_2^{−1})x = 0,    (31)

with

Ŵ = [−(εD + ½G)M^{−1}  −K + ¼GM^{−1}G + ½εDM^{−1}G; M^{−1}  −½M^{−1}G] = W + ε [−DM^{−1}  ½DM^{−1}G; 0  0].

*If the perturbation ε is small, then eigenvalue/eigenvector pairs (or invariant*
subspaces) of the problem (1) can be regarded as small perturbations of eigenvalue/

eigenvector pairs (or invariant subspaces) of problem (26). In this situation it is natural to use an eigenvalue/eigenvector pair of the undamped problem (26) computed via Algorithm 1 as start for a correction method. This method could be any method that can be used for eigenvalue, eigenvector, or subspace correction, such as subspace iteration, inverse iteration, Newton’s method, or the Jacobi–Davidson method. Here we present results that are obtained with the quadratic Jacobi–Davidson method [3, 36] as correction. This method is typically very eﬃcient and well suited for the given problem. However, if the desired eigenvalues of (1) form a cluster of nearby eigenvalues, then the quadratic Jacobi–Davidson method sometimes has diﬃculties in detecting and resolving such a cluster. The undesired eﬀect is that, in this case for diﬀerent starting values (eigenvalue/eigenvector pairs) obtained from the unperturbed problem, it converges to the same eigenvalue/eigenvector pair of (1). It is known that implicit deﬂation techniques based on Schur forms (see, e.g., section 4.7 and section 8.4 of [3]) combined with the Jacobi–Davidson method perform well for linear eigenvalue problems. However, in the quadratic eigenvalue problem, it is not clear how to incorporate an implicit deﬂation technique because a quadratic Schur form, in general, does not exist for quadratic matrix polynomials.

For this reason we have also analyzed the use of an explicit nonequivalence low-rank transformation deflation technique that was suggested in [9, 12] for quadratic eigenvalue problems. Let us briefly recall this technique for the polynomial

L(λ) = λ^{2}M + λC + K with C = G + εD.    (32)

We study the two cases of real eigenvalues or complex conjugate pairs separately.

Suppose that we have computed a real eigenvalue λ_1 as well as the associated right and left eigenvectors x_1 and z_1, respectively, with z_1^{T} K x_1 = 1, such that L(λ_1) x_1 = 0 and z_1^{T} L(λ_1) = 0. Let θ_1 = (z_1^{T} M x_1)^{−1}. We then introduce a new deflated quadratic eigenproblem as in [12] via

L̃(λ)x ≡ (λ^{2}M̃ + λC̃ + K̃)x = 0,    (33)

where

M̃ = M − θ_1 M x_1 z_1^{T} M,
C̃ = C + (θ_1/λ_1)(M x_1 z_1^{T} K + K x_1 z_1^{T} M),    (34)
K̃ = K − (θ_1/λ_1^{2}) K x_1 z_1^{T} K.

Suppose that we have computed a complex eigenvalue λ_1 = α_1 + iβ_1 as well as the associated right and left eigenvectors x_1 = x_{1R} + i x_{1I} and z_1 = z_{1R} + i z_{1I}, respectively, such that Z_1^{T} K X_1 = I_2, where X_1 = [x_{1R}, x_{1I}] and Z_1 = [z_{1R}, z_{1I}]. Let Θ_1 = (Z_1^{T} M X_1)^{−1}. We then introduce a new deflated quadratic eigenproblem as in [9] via (33), where

M̃ = M − M X_1 Θ_1 Z_1^{T} M,
C̃ = C + M X_1 Θ_1 Λ_1^{−T} Z_1^{T} K + K X_1 Λ_1^{−1} Θ_1^{T} Z_1^{T} M,    (35)
K̃ = K − K X_1 Λ_1^{−1} Θ_1 Λ_1^{−T} Z_1^{T} K,

in which Λ_1 = [α_1  β_1; −β_1  α_1].

Note that if the matrices M and K are symmetric, then the matrices M̃ and K̃ in (34) or (35) are symmetric as well. The results in [9, 12] then imply the following proposition.

Proposition 1. (i) Let λ_1 be a simple real eigenvalue of L(λ) as in (32) and let x_1 and z_1 with z_1^{T} K x_1 = 1 be the right and left eigenvectors, respectively. Then the spectrum of L̃(λ) in (33) with coefficients as in (34) is given by σ(L̃(λ)) = (σ(L(λ)) \ {λ_1}) ∪ {∞}, provided that λ_1^{2} ≠ θ_1.

(ii) Let λ_1 be a simple complex eigenvalue of L(λ) as in (32) and let X_1 = [x_{1R}, x_{1I}] and Z_1 = [z_{1R}, z_{1I}] with Z_1^{T} K X_1 = I_2, where x_1 = x_{1R} + i x_{1I} and z_1 = z_{1R} + i z_{1I} are the associated right and left eigenvectors, respectively. Then the spectrum of L̃(λ) in (33) with coefficients as in (35) is given by σ(L̃(λ)) = (σ(L(λ)) \ {λ_1, λ̄_1}) ∪ {∞, ∞}, provided that Λ_1 Λ_1^{T} ≠ Θ_1.

Furthermore, in both cases (i) and (ii), if λ_2 ≠ λ_1 and (λ_2, x_2) is an eigenpair of L(λ), then the pair (λ_2, x_2) is also an eigenpair of L̃(λ).
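The real-eigenvalue case (34) can be exercised on a small hypothetical example. The sketch below plants a known real eigenvalue λ_1 into random coefficient matrices, builds the deflated triple (M̃, C̃, K̃), and checks the two statements of Proposition 1: λ_1 is removed (L̃(λ_1) is no longer singular) while every other eigenpair of the original problem remains an eigenpair of the deflated one:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)   # SPD mass
C = rng.standard_normal((n, n))                                # plays G + eps*D
K0 = rng.standard_normal((n, n))
lam1 = -1.5                                    # plant a known real eigenvalue
x1 = rng.standard_normal(n); x1 /= np.linalg.norm(x1)
K = K0 - np.outer((lam1 ** 2 * M + lam1 * C + K0) @ x1, x1)    # now L(lam1) x1 = 0

def quad(lam, Mq, Cq, Kq):
    return lam ** 2 * Mq + lam * Cq + Kq

# Left eigenvector from the SVD of L(lam1), scaled so that z1^T K x1 = 1
U, sv, Vt = np.linalg.svd(quad(lam1, M, C, K))
z1 = U[:, -1]
z1 = z1 / (z1 @ K @ x1)
theta1 = 1.0 / (z1 @ M @ x1)

# Deflated coefficients, eq. (34)
Mt = M - theta1 * np.outer(M @ x1, M.T @ z1)
Ct = C + (theta1 / lam1) * (np.outer(M @ x1, K.T @ z1) + np.outer(K @ x1, M.T @ z1))
Kt = K - (theta1 / lam1 ** 2) * np.outer(K @ x1, K.T @ z1)

# lam1 is removed: the deflated polynomial is far from singular there ...
removed = np.linalg.svd(quad(lam1, Mt, Ct, Kt), compute_uv=False)[-1]
# ... while all other eigenpairs of the original problem survive unchanged
evals, V = np.linalg.eig(np.block([[np.zeros((n, n)), np.eye(n)],
                                   [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]]))
max_kept = 0.0
for j in range(2 * n):
    l, x = evals[j], V[:n, j]
    if abs(l - lam1) > 1e-3:
        scale = (abs(l) ** 2 * np.linalg.norm(Mt) + abs(l) * np.linalg.norm(Ct)
                 + np.linalg.norm(Kt)) * np.linalg.norm(x)
        max_kept = max(max_kept, np.linalg.norm(quad(l, Mt, Ct, Kt) @ x) / scale)
```

Note that M̃ is singular by construction (M̃ x_1 = 0), which is how the deflated problem acquires the infinite eigenvalue of Proposition 1.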

Using this deflation procedure we have now presented all the ingredients for our algorithm. We solve the damped gyroscopic system (1) by the quadratic Jacobi–Davidson method combined with the explicit nonequivalence deflation technique (33), (34) or (33), (35), using eigenpairs computed via Algorithm 1 as starting values. We summarize this approach in the following algorithm.

Algorithm 2 (Quadratic Jacobi–Davidson method with deflation).

Input: Matrices M, G, D, and K and parameter ε as in (1). Target shift λ_0 and number ℓ of desired eigenvalue/eigenvector pairs nearest to λ_0. Tolerance Tol for the stopping criterion.

Output: The eigenpairs {(λ_j, x_j)}_{j=1}^{ℓ} of

L(λ)x = (λ^{2}M + λ(G + εD) + K)x = 0,

associated with the eigenvalues {λ_j}_{j=1}^{ℓ} that are closest to the target λ_0.

(i) Compute the eigenpairs {(λ_j^{(0)}, x_j^{(0)})}_{j=1}^{ℓ} of

(λ^{2}M + λG + K)x = 0

by Algorithm 1, where {λ_j^{(0)}}_{j=1}^{ℓ} are the closest eigenvalues to the target λ_0 and (x_j^{(0)})^{H} x_j^{(0)} = 1.

(ii) For j = 1, . . . , ℓ
    Compute the eigenpair (λ_j, (x_j, z_j)) of L(λ) by the quadratic Jacobi–Davidson method with target λ_j^{(0)} and initial vector x_j^{(0)}, where x_j and z_j are the associated right and left eigenvectors satisfying the relations as in (i) or (ii), respectively, of Proposition 1.
    If ‖x_j − x_i‖ ≤ Tol for some i < j, then
        Compute the eigenpair (λ_j, (x_j, z_j)) of L̃(λ) as in (33), (34) if λ_i is real, and as in (33), (35) if λ_i is complex, by the quadratic Jacobi–Davidson method with target λ_j^{(0)} and initial vector x_j^{(0)}.
    End if
End for

Note that the recently proposed locking technique for the solution of quadratic eigenvalue problems suggested in [22] might also be adapted in place of our nonequivalence low-rank deflation technique in Algorithm 2.
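For small dense test problems, eigenpairs returned by a procedure like Algorithm 2 can be cross-checked against a direct companion linearization of L(λ). The following sketch is such a baseline check; the random structured test matrices are our assumption, and this is deliberately not the structure-preserving machinery of the paper:

```python
import numpy as np

def quadratic_eigs(M, G, D, K, eps):
    """All eigenpairs of (lam^2*M + lam*(G + eps*D) + K) x = 0 via the
    companion linearization  lam*[I 0; 0 M] z = [0 I; -K -(G+eps*D)] z,
    with z = [x; lam*x]."""
    n = M.shape[0]
    C = G + eps * D
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
    lam, Z = np.linalg.eig(np.linalg.solve(B, A))
    return lam, Z[:n, :]  # eigenvalues and the x-parts of the eigenvectors

# small random test problem with the symmetry structure of (1)
rng = np.random.default_rng(0)
n, eps = 5, 0.01
R = rng.standard_normal((n, n)); M = R @ R.T + n * np.eye(n)   # M = M^T > 0
S = rng.standard_normal((n, n)); G = S - S.T                   # G = -G^T
D = rng.standard_normal((n, n)); D = D + D.T                   # D = D^T
K = rng.standard_normal((n, n)); K = K + K.T                   # K = K^T

lam, X = quadratic_eigs(M, G, D, K, eps)
res = [np.linalg.norm((l**2 * M + l * (G + eps * D) + K) @ x)
       for l, x in zip(lam, X.T)]
print(max(res))  # all 2n residuals are at roundoff level
```

The linearized problem is 2n-dimensional, so this is only feasible for small n; for the large sparse problems of this paper it serves purely as a correctness reference.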

**5. Numerical results. In this section we present some numerical tests for the**
algorithms proposed in this paper. All computations were done in MATLAB 6.0 [21]

or in Fortran 90 on a Compaq DS20E workstation.

We ﬁrst discuss the new extraction method and compare it with several other iterative methods. These methods are as follows:

EM: the SHIRA algorithm combined with the extraction method given by Algorithm 1;

*SISI(q):* *the SHIRA algorithm followed by q steps of shift-and-invert subspace*
iteration;

EM_SISI: the SHIRA algorithm combined with the extraction method EM, followed by one additional step of shift-and-invert subspace iteration;

IPI: one step of the inverse power iteration.

Here, in SISI the target value λ_0 with negative real part is taken as a fixed shift and the iteration starts with the subspace span(Q_k) generated by SHIRA, which includes the eigenvectors associated with eigenvalues near λ_0 and −λ_0. SISI then converges to a subspace span(Q_k^−) associated with stable eigenvalues, and the eigenpairs are computed from the "Ritz" pairs of (Q_k^−)^T W Q_k^−.

In the EM_SISI variant, the target value with negative real part is again taken as a fixed shift and one extra step of SISI is performed, starting with span(V_k^−) obtained by the new extraction method EM. With the resulting subspace span(V_k), the eigenpairs are then computed from the "Ritz" pairs of V_k^T W V_k.

In the inverse power iteration (IPI) an approximate eigenvalue computed by
*SHIRA is taken as shift and the associated Arnoldi vector q**i* is used as starting
vector.
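The IPI step just described can be sketched on a small dense problem. Everything below is an illustrative assumption rather than the paper's implementation: a random test pencil, a linearization standing in for SHIRA to supply the approximate eigenvalue, and a one-dimensional Rayleigh functional to recover λ from the converged vector:

```python
import numpy as np

def Q(lam, M, G, K):
    """Quadratic matrix polynomial Q(lam) = lam^2*M + lam*G + K."""
    return lam**2 * M + lam * G + K

rng = np.random.default_rng(1)
n = 8
R = rng.standard_normal((n, n)); M = R @ R.T + n * np.eye(n)  # M = M^T > 0
S = rng.standard_normal((n, n)); G = S - S.T                  # G = -G^T
K = rng.standard_normal((n, n)); K = K + K.T                  # K = K^T

# reference spectrum via linearization (standing in for SHIRA's output)
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -G]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
lams = np.linalg.eigvals(np.linalg.solve(B, A))
lam_star = lams[np.argmin(np.abs(lams + 0.1))]  # eigenvalue nearest -0.1
shift = lam_star + 1e-3                         # approximate eigenvalue as shift

# inverse power iteration: x <- Q(shift)^{-1} x, normalized each step
x = rng.standard_normal(n) + 0j
Qs = Q(shift, M, G, K)
for _ in range(30):
    x = np.linalg.solve(Qs, x)
    x /= np.linalg.norm(x)

# recover the eigenvalue from the scalar quadratic x^H Q(lam) x = 0
coeffs = [x.conj() @ M @ x, x.conj() @ G @ x, x.conj() @ K @ x]
roots = np.roots(coeffs)
lam_hat = roots[np.argmin(np.abs(roots - shift))]
print(abs(lam_hat - lam_star))  # small: the IPI vector reproduces lam_star
```

Note that a single LU factorization of Q(shift) suffices for all iterations; this is the cost pattern compared against EM in Table 5.1 below.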

In the following we present results for three problem classes: quadratic eigenvalue problems from elasticity theory, linear quadratic optimal control problems, and damped gyroscopic systems.

**5.1. Quadratic eigenproblems from elasticity theory. Consider the quadratic**
eigenvalue problem

*λ*^{2}*Mx + λGx + Kx = 0,*
(36)

where M, G, and K are defined as follows. As in [26], let B ∈ R^{m×m} be the matrix with ones on the subdiagonal and zeros elsewhere, and set

M̃ = (1/6)(4I_m + B + B^T),  G̃ = B − B^T,  K̃ = −(2I_m − B − B^T).

Define

M = c_{11} I_m ⊗ M̃ + c_{12} M̃ ⊗ I_m,
G = c_{21} I_m ⊗ G̃ + c_{22} G̃ ⊗ I_m,
K = c_{31} I_m ⊗ K̃ + c_{32} K̃ ⊗ I_m,
(37)

*Fig. 1. Eigenvalues and the corresponding residuals: (1a) the computed eigenvalues and the target value; (1b) residuals by EM, SISI(10), and IPI.*

*where c**ij* *are positive constants. Then we have M = M*^{T}*> 0, G = −G*^{T}*, and*
*K = K** ^{T}*.
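The Kronecker-product construction (37) is straightforward to reproduce. The sketch below builds M, G, and K for a small m, using the constants c_{ij} from Example 1 below, and verifies the claimed symmetry structure:

```python
import numpy as np

def elasticity_matrices(m, c):
    """Build M, G, K of (37) from the m-by-m subdiagonal-shift matrix B."""
    B = np.diag(np.ones(m - 1), -1)       # ones on the subdiagonal
    I = np.eye(m)
    Mt = (4.0 * I + B + B.T) / 6.0        # M-tilde
    Gt = B - B.T                          # G-tilde
    Kt = -(2.0 * I - B - B.T)             # K-tilde
    M = c[0][0] * np.kron(I, Mt) + c[0][1] * np.kron(Mt, I)
    G = c[1][0] * np.kron(I, Gt) + c[1][1] * np.kron(Gt, I)
    K = c[2][0] * np.kron(I, Kt) + c[2][1] * np.kron(Kt, I)
    return M, G, K

# the constants of (38), with a small m instead of m = 90
c = [[1.00, 1.30], [0.10, 1.10], [1.00, 1.20]]
M, G, K = elasticity_matrices(6, c)
print(np.allclose(M, M.T), np.allclose(G, -G.T), np.allclose(K, K.T))  # True True True
print(np.all(np.linalg.eigvalsh(M) > 0))  # M is positive definite: True
```

Since M̃ is symmetric positive definite and the c_{ij} are positive, the Kronecker sum M is positive definite as well, as the eigenvalue check confirms.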

*Example 1 (see [26, Example 6.2]). We take m = 90, so that W is a 16200×16200*
matrix, and

c_{11} = 1.00,  c_{12} = 1.30,
c_{21} = 0.10,  c_{22} = 1.10,
c_{31} = 1.00,  c_{32} = 1.20.
(38)

For this example, the twelve eigenvalues closest to the target λ_0 = −0.1 computed by the SHIRA algorithm are depicted in Figure 1(a). The residuals ‖(λ_i^2 M + λ_i G + K)x_i‖_2 for the eigenpairs (λ_i, x_i), where x_i is computed via EM, SISI(10), and IPI, respectively, are given in Figure 1(b).

We see from Figure 1(b) that the methods EM and IPI yield residuals of the same magnitude, ≈ 10^{−10}, while SISI(10) yields smaller residuals for some eigenvalues, but the residuals for the eigenvalues farther from the target are much larger.

In order to evaluate the computational complexity for this problem class, we
compare the major computational tasks per iteration step. One iteration of the inverse
power iteration for (36) requires one forward/backward substitution for evaluating
*Q(λ)*^{−1}*z (assuming that an LU-factorization for Q(λ) is available). On the other*
*hand, from (29), we see that the matrix W can be factored as*

W = \begin{bmatrix} I & -\tfrac{1}{2}GM^{-1} \\ 0 & M^{-1} \end{bmatrix} \begin{bmatrix} 0 & -K \\ I & 0 \end{bmatrix} \begin{bmatrix} I & -\tfrac{1}{2}G \\ 0 & I \end{bmatrix}.
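Multiplying out this factorization for a small random structured triple (M, G, K) confirms that the product W is skew-Hamiltonian, i.e., JW is symmetric for J = [0 I; −I 0]; the test data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
R = rng.standard_normal((n, n)); M = R @ R.T + n * np.eye(n)  # M = M^T > 0
S = rng.standard_normal((n, n)); G = S - S.T                  # G = -G^T
K = rng.standard_normal((n, n)); K = K + K.T                  # K = K^T

I, Z = np.eye(n), np.zeros((n, n))
Minv = np.linalg.inv(M)
# the three factors of W, assembled as dense blocks
W = (np.block([[I, -0.5 * G @ Minv], [Z, Minv]])
     @ np.block([[Z, -K], [I, Z]])
     @ np.block([[I, -0.5 * G], [Z, I]]))

# W is skew-Hamiltonian: J @ W is symmetric for J = [0 I; -I 0]
J = np.block([[Z, I], [-I, Z]])
print(np.allclose(J @ W, (J @ W).T))  # True
```

In the large sparse setting the three factors are of course never multiplied out; they are applied factor by factor, which is what makes the cost comparison below favorable.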

The dominant cost of the extraction method is to determine the eigenvector in step (vii) of Algorithm 1. It consists of matrix-vector products for evaluating Kz and Gz, respectively, and forward/backward substitutions for evaluating M^{−1}z (assuming that an LU or Cholesky factorization of M is available), as well as 2 + k SAXPY operations. The costs for the evaluation of Ω_k, U_k, and V_k in Algorithm 1 are negligible if k ≪ n.

Table 5.1
*Computational costs for EM and IPI; here k = ℓ = 12.*

|                                | EM               | IPI                 |
|--------------------------------|------------------|---------------------|
| evaluations of Q(λ_i)          | 0                | ℓ                   |
| sparse LU factorizations       | 1 (of M)         | ℓ (of Q(λ_i))       |
| forward/backward substitutions | ℓ (with M^{−1}z) | ℓ (with Q(λ_i)^{−1}z) |
| matrix-vector products Kz      | ℓ                | 0                   |
| matrix-vector products Gz      | 2ℓ               | 0                   |
| SAXPY operations               | (2 + k)ℓ         | 0                   |

In Table 5.1 we compare the computational cost for the computation of eigenvec-
tors using the EM and one step of the IPI, respectively. This comparison shows that
the computational cost for the new extraction method compares favorably with that
of one step of inverse power iteration. The major savings in computational time arise
*from the fact that no further factorizations of Q(λ) are needed.*

**5.2. Optimal control problems. The second class of problems arises from**
continuous-time linear quadratic control problems of the form

min_{x,u} ∫_0^∞ (u^T R u + x^T C^T C x) dt

subject to

ẋ = Ax + Bu,
y = Cx,
(39)

where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, R ∈ R^{m×m} with p, m ≪ n, and where R is symmetric positive definite.

As test cases we consider the spatial (central diﬀerence) discretization in polar coordinates of a reaction-diﬀusion equation with Dirichlet boundary conditions on the two-dimensional unit disk; see [17] for details.

Performing a semidiscretization in space, one obtains a continuous-time system as in (39),

*A = I + T,*
(40)

and there exists an orthogonal matrix W, see [17], such that the matrix A in (40) can be transformed to a tridiagonal matrix Ã ≡ W^T A W. Thus, the LU-factorization of Ã is easily computed in O(n) operations [11]. Let

*W*^{T}*(A − λ*_{0}*I)W = L*_{1}*U*_{1}*,* *−W*^{T}*(A*^{T}*+ λ*_{0}*I)W = L*_{2}*U*_{2}
(41)

*be LU-factorizations of W*^{T}*(A − λ*0*I)W and −W*^{T}*(A*^{T}*+ λ*0*I)W , respectively. For*
*this problem, (W − λ*_{0}*I)** ^{−1}* can be factored as

(W − λ_0 I)^{−1} = \begin{bmatrix} W & 0 \\ 0 & W \end{bmatrix} \begin{bmatrix} F^{-1} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} I & -\tilde{B}\tilde{B}^T \\ 0 & I \end{bmatrix} \begin{bmatrix} F & 0 \\ 0 & D^{-1} \end{bmatrix} × \begin{bmatrix} I & 0 \\ -\tilde{C}^T\tilde{C} & I \end{bmatrix} \begin{bmatrix} F^{-1} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} W^T & 0 \\ 0 & W^T \end{bmatrix},
(42)
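A factorization like (42) can be sanity-checked by multiplying it out on a small random instance. The sketch below takes W = I (so F = A − λ_0 I) and assumes, with R = I, that the matrix being inverted has the block form [F, B̃B̃^T; C̃^T C̃, −A^T − λ_0 I]; these simplifications are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 6, 2, 2
A = rng.standard_normal((n, n))
Bt = rng.standard_normal((n, m))   # B-tilde (taking W = I)
Ct = rng.standard_normal((p, n))   # C-tilde
lam0 = -0.048

I, Z = np.eye(n), np.zeros((n, n))
F = A - lam0 * I
# Schur complement D as defined after (42)
D = -(A.T + lam0 * I) - Ct.T @ Ct @ np.linalg.solve(F, Bt @ Bt.T)

# the shifted block matrix and the five inner factors of (42) (W = I)
T = np.block([[F, Bt @ Bt.T], [Ct.T @ Ct, -(A.T + lam0 * I)]])
Finv, Dinv = np.linalg.inv(F), np.linalg.inv(D)
P = (np.block([[Finv, Z], [Z, I]])
     @ np.block([[I, -Bt @ Bt.T], [Z, I]])
     @ np.block([[F, Z], [Z, Dinv]])
     @ np.block([[I, Z], [-Ct.T @ Ct, I]])
     @ np.block([[Finv, Z], [Z, I]]))
print(np.allclose(P @ T, np.eye(2 * n)))  # True: P is the inverse
```

Because every factor is block triangular or block diagonal, applying (42) to a vector costs only a few tridiagonal solves and low-rank updates rather than a factorization of the full 2n × 2n matrix.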

*Fig. 2. Eigenvalues and corresponding residuals: (2a) real and imaginary parts of the eigenvalues near the target value; (2b) residuals by EM, SISI(6), and EM_SISI.*

where F = W^T(A − λ_0 I)W, B̃ = W^T B, C̃ = CW, and D = −W^T(A^T + λ_0 I)W − C̃^T C̃ F^{−1} B̃ B̃^T. To evaluate D^{−1}, we use the LU-factorizations in (41) combined with the Sherman–Morrison–Woodbury formula [11] to obtain

D^{−1} = U_2^{−1} (I + L_2^{−1} C̃^T S^{−1} C̃ U_1^{−1} L_1^{−1} B̃ B̃^T U_2^{−1}) L_2^{−1},
(43)

where

S = I − C̃ U_1^{−1} L_1^{−1} B̃ B̃^T U_2^{−1} L_2^{−1} C̃^T.
(44)
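Formulas (43)-(44) are the Sherman–Morrison–Woodbury identity applied to D = L_2U_2 − C̃^T(C̃F^{−1}B̃B̃^T). The following sketch checks them numerically on random data with W = I; an unpivoted LU stands in for the factorizations (41), which is an illustrative simplification:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting (adequate for this generic test matrix)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, U

rng = np.random.default_rng(4)
n, m, p = 6, 2, 2
A = rng.standard_normal((n, n))
Bt = rng.standard_normal((n, m))   # B-tilde (taking W = I)
Ct = rng.standard_normal((p, n))   # C-tilde
lam0 = -0.048
I = np.eye(n)

L1, U1 = lu_nopivot(A - lam0 * I)        # first factorization in (41)
L2, U2 = lu_nopivot(-(A.T + lam0 * I))   # second factorization in (41)
F = A - lam0 * I
D = -(A.T + lam0 * I) - Ct.T @ Ct @ np.linalg.solve(F, Bt @ Bt.T)

# Sherman-Morrison-Woodbury form (43)-(44) of D^{-1}
U1i, L1i = np.linalg.inv(U1), np.linalg.inv(L1)
U2i, L2i = np.linalg.inv(U2), np.linalg.inv(L2)
S = np.eye(p) - Ct @ U1i @ L1i @ Bt @ Bt.T @ U2i @ L2i @ Ct.T
Dinv = U2i @ (np.eye(n) + L2i @ Ct.T @ np.linalg.solve(S, Ct) @ U1i @ L1i
              @ Bt @ Bt.T @ U2i) @ L2i
print(np.allclose(Dinv @ D, np.eye(n)))  # True
```

The point of (43)-(44) is that only the small p × p matrix S has to be formed and inverted explicitly; every other inverse acts through triangular substitutions with the factors from (41).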

Example 2. With the data in [17] we get a system of size n = 8100. We chose B, C^T ∈ R^{n×2} randomly with uniform distribution in the interval (0, 1) and used SHIRA to compute the 16 eigenvalues closest to λ_0 = −0.048. The eigenvalues
are depicted in Figure 2(a). The associated eigenvectors are then computed by EM,
EM SISI, SISI(6), and IPI, respectively. The residuals for the eigenpairs computed
by EM, EM SISI, and SISI(6), respectively, are shown in Figure 2(b).

In this example the IPI had convergence problems for the vectors x_j associated with the eigenvalues with index j = 2, 3, 5, 6, 7, 8, 10, 11, 15, 16, because these complex conjugate pairs have relatively small imaginary parts and their use as shifts is not adequate. Here we see the loss of accuracy in the extraction method, and we also observe that, as expected, this is easily compensated by adding a step of SISI.

Let us briefly compare the costs for computing an ℓ-dimensional invariant subspace associated with stable eigenvalues for optimal control problems via EM_SISI and SISI.

The matrix S in (44) is computed in SHIRA; it can be reused for SISI. It follows from the factorizations (42) and (43) that one iteration of SISI with reorthogonalization requires 5ℓ forward/backward substitutions, assuming that the