
Residual Arnoldi Method
for solving large eigenvalue problems

Che-Rung Lee
roger@umd.edu
University of Maryland, College Park

Outline

• Introduction
• Theory
• Experiments
• Conclusion

Introduction

• Eigenproblem
• Subspace methods
• Krylov subspace and the Arnoldi process
• Problems of the Arnoldi process
• Residual Arnoldi method

Eigenproblem

Let A be a matrix of order n. If a scalar λ and a nonzero vector x satisfy

    Ax = λx,

• λ is an eigenvalue of A, and
• x is the corresponding (right) eigenvector.
• (λ, x) is called an eigenpair.

Eigenproblem: find all or some eigenpairs of A.

• In this presentation, we assume that ‖x‖ = 1, and …
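To make the definition concrete, here is a minimal NumPy check (our addition, not from the slides) that the pairs returned by numpy.linalg.eig satisfy Ax = λx; the 2 × 2 matrix is an arbitrary example.

```python
import numpy as np

# Minimal check: each pair (lam, x) returned by eig satisfies A x = lam x.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lams, X = np.linalg.eig(A)        # columns of X are right eigenvectors
for lam, x in zip(lams, X.T):
    x = x / np.linalg.norm(x)     # normalize so that ||x|| = 1, as assumed above
    assert np.allclose(A @ x, lam * x)
```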

Subspace methods

When A is large (and possibly sparse), subspace methods are usually used.

Steps
1. Generate a subspace.
2. Extract approximations from that subspace.
3. Test for convergence.
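One concrete instance of this generate/extract/test loop, as a sketch of ours (orthogonal subspace iteration with Rayleigh–Ritz extraction; the function name and defaults are illustrative, not from the talk):

```python
import numpy as np

def subspace_iteration(A, p=4, tol=1e-10, maxit=500):
    """Illustrative subspace method: generate, extract, test."""
    rng = np.random.default_rng(1)
    U, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(maxit):
        U, _ = np.linalg.qr(A @ U)       # 1. generate (improve) a subspace
        H = U.T @ A @ U                  # 2. extract: Rayleigh-Ritz projection
        mu, Y = np.linalg.eig(H)
        Z = U @ Y                        #    Ritz vectors
        R = A @ Z - Z * mu               # 3. convergence test via residual norms
        if np.linalg.norm(R, axis=0).max() < tol:
            break
    return mu, Z
```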

Krylov subspace

Given a unit vector u_1, the Krylov subspace K_k(A, u_1) is the subspace spanned by

    u_1, Au_1, . . . , A^{k−1}u_1.

A Krylov subspace usually contains good approximations to the eigenvectors corresponding to the eigenvalues on the edge of the spectrum.

The Arnoldi process

An algorithm that generates orthonormal bases of a series of Krylov subspaces.

Steps
1. U_1 = u_1
2. For i = 1, 2, . . .
3.    Compute v = A u_i
4.    Orthogonalization: u_{i+1} = (I − U_i U_i^*) v
5.    Normalization: u_{i+1} = u_{i+1} / ‖u_{i+1}‖
6.    Expand U by u: U_{i+1} = (U_i  u_{i+1})

Arnoldi relation: A U_k = U_k H_k + β_k u_{k+1} e_k^*.
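A minimal NumPy sketch of steps 1–6 (ours, not RAPACK's code; classical Gram–Schmidt, with no breakdown check or reorthogonalization safeguards):

```python
import numpy as np

def arnoldi(A, u1, k):
    """Build an orthonormal basis U of K_{k+1}(A, u1) and the
    (k+1) x k Hessenberg matrix H with A @ U[:, :k] == U @ H."""
    n = len(u1)
    U = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    U[:, 0] = u1 / np.linalg.norm(u1)        # step 1: U_1 = u_1
    for i in range(k):
        v = A @ U[:, i]                      # step 3: v = A u_i
        H[:i + 1, i] = U[:, :i + 1].T @ v    # projection coefficients
        v -= U[:, :i + 1] @ H[:i + 1, i]     # step 4: orthogonalize against U_i
        H[i + 1, i] = np.linalg.norm(v)      # beta_i
        U[:, i + 1] = v / H[i + 1, i]        # steps 5-6: normalize and expand
    return U, H
```

Here H[:k, :k] plays the role of H_k and H[k, k−1] the role of β_k in the Arnoldi relation above.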

Problem of the Arnoldi process

When there are errors in the computation of Au, convergence will stagnate.

Example: Let A be a 100 × 100 nonsymmetric matrix with eigenvalues 1, 0.95, . . . , 0.95^{99}.

[Figure: convergence of the Arnoldi process with exact and with perturbed products Au; x-axis: dimension of subspace, y-axis: error. A relative error ε = 10^{−3} is added to Au.]
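A sketch of ours that reproduces this experiment; the random similarity transform is an assumption, since the slides do not specify how the test matrix is built:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in test matrix (our construction): nonsymmetric, with the slide's
# eigenvalues 1, 0.95, ..., 0.95^99, via a random similarity transform.
n = 100
S = rng.standard_normal((n, n))
A = S @ np.diag(0.95 ** np.arange(n)) @ np.linalg.inv(S)

def noisy(Av, eps):
    """Add a random perturbation of relative size eps to a computed product Av."""
    e = rng.standard_normal(Av.shape)
    return Av + eps * np.linalg.norm(Av) * e / np.linalg.norm(e)

# Replacing "v = A @ U[:, i]" in arnoldi() above by "v = noisy(A @ U[:, i], 1e-3)"
# makes the error of the leading Ritz pair stagnate near 1e-3 instead of
# decreasing to machine precision.
```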

Residual Arnoldi method

The subspace is expanded by residuals.

Let (µ, z) be an eigenpair approximation. Its residual is

    r = Az − µz.

The pair (µ, z) is called a candidate.

[Figure: convergence of the residual Arnoldi method under the same 10^{−3} relative error; x-axis: dimension of subspace, y-axis: error.]
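The only change relative to the Arnoldi process is the expansion vector: the candidate's residual instead of Au. A sketch of ours for one expansion step (the name ra_expand, the candidate-selection rule, and the real-part truncation are all illustrative simplifications):

```python
import numpy as np

def ra_expand(A, U):
    """One residual-Arnoldi expansion step: extend the orthonormal basis U
    by the (orthogonalized, normalized) residual of the selected candidate."""
    H = U.T @ A @ U                   # Rayleigh quotient H_k = U* A U
    mu, Y = np.linalg.eig(H)
    j = np.argmax(np.abs(mu))         # candidate selection: our simple choice
    z = (U @ Y[:, j]).real            # candidate vector z (real part, sketch only)
    r = A @ z - mu[j].real * z        # residual r = A z - mu z
    r -= U @ (U.T @ r)                # orthogonalize against the current basis
    return np.column_stack([U, r / np.linalg.norm(r)])
```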

Change candidate

What happens if we select a different candidate during the computation?

Example: the candidate is changed at iteration 30.

[Figure: convergence before and after the candidate change; x-axis: dimension of subspace, y-axis: error.]

Shift-invert enhancement

Given a shift σ, the subspace is generated by the following steps:
1. Select a candidate and compute its residual r.
2. Solve the linear system (A − σI)v = r.
3. Use v in the subspace expansion.

Example: use σ = 1.3 and solve the linear systems to precision 10^{−3}.

[Figure: convergence of the shift-invert residual Arnoldi method; x-axis: dimension of subspace, y-axis: error.]
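A sketch of ours for one SIRA expansion step; a dense direct solve stands in for the inexact iterative solve (e.g. GMRES to precision 10^{−3}) used in the talk, and the candidate rule is illustrative:

```python
import numpy as np

def sira_expand(A, U, sigma):
    """One SIRA expansion step: solve (A - sigma I) v = r for the candidate's
    residual r and expand the basis with v. (Direct solve here; in practice
    an inexact iterative solver with a loose tolerance is used.)"""
    H = U.T @ A @ U
    mu, Y = np.linalg.eig(H)
    j = np.argmin(np.abs(mu - sigma))   # candidate nearest the shift (our choice)
    z = (U @ Y[:, j]).real
    r = A @ z - mu[j].real * z
    v = np.linalg.solve(A - sigma * np.eye(A.shape[0]), r)
    v -= U @ (U.T @ v)                  # orthogonalize before expanding
    return np.column_stack([U, v / np.linalg.norm(v)])
```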

Theory

• Perturbation theory of eigenproblem
• Algorithm of residual Arnoldi method
• Residual Arnoldi relation
• The backward error
• Convergence theory
• Shift-invert enhancement

Perturbation theory

Let Ã = A + E. For an eigenpair (λ, x) of A, there exists an eigenpair (λ̃, x̃) of Ã such that, when ‖E‖ is small enough,

    λ̃ ≃ λ + y^*Ex,
    x̃ ≃ x + X(λI − L)^{−1}Y^*Ex,

where (x X) is a nonsingular matrix, (y Y)^* is its inverse, and L = Y^*AX.
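A first-order sketch of where these formulas come from (our addition; it assumes the similarity block-diagonalizes A, i.e. y^*AX = 0, which holds when X spans the invariant subspace complementary to x):

```latex
% Write \tilde{x} = x + Xp, expand (A+E)(x+Xp) = \tilde{\lambda}(x+Xp),
% and drop second-order terms. The inverse relation (y\ Y)^*(x\ X) = I gives
% y^*x = 1, \; y^*X = 0, \; Y^*x = 0.
% Left-multiply by y^* (using y^*Ax = \lambda and y^*AX = 0):
\tilde{\lambda} \simeq \lambda + y^* E x .
% Left-multiply by Y^* (using Y^*Ax = 0 and Y^*AX = L):
L p + Y^* E x \simeq \lambda p
\;\Longrightarrow\;
p \simeq (\lambda I - L)^{-1} Y^* E x ,
\quad\text{hence}\quad
\tilde{x} \simeq x + X(\lambda I - L)^{-1} Y^* E x .
```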

Condition number

    λ̃ ≃ λ + y^*Ex,
    x̃ ≃ x + X(λI − L)^{−1}Y^*Ex.

• cond(λ) = ‖y‖‖x‖, and
• cond(x) = sep^{−1}(λ, L) = ‖(λI − L)^{−1}‖.

Therefore,

    |λ̃ − λ| ≤ cond(λ)‖E‖,
    ‖x̃ − x‖ ≤ C1 cond(x)‖E‖.

If E has some special structure with ‖Ex‖ ≪ ‖E‖, then

    |λ̃ − λ| ≤ cond(λ)‖Ex‖.

Start from the algorithm

1. Compute an eigenpair approximation.
   • Use the Rayleigh–Ritz method.
   • Rayleigh quotient: H_k = U_k^* A U_k.
   • Ritz pair: (µ_k, U_k y_k), where H_k y_k = µ_k y_k.
2. Compute its residual (inexactly).
   • r̃_k = r_k + f_k = A U_k y_k − µ_k U_k y_k + f_k.
   • Relative error condition: ‖f_k‖ ≤ ε‖r_k‖.
3. Orthogonalize the residual against U_k.
   • r_k + f_k^⊥ = U_k g_k + β_k u_{k+1}
   • f_k^⊥ …
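Putting steps 1–3 into a runnable sketch (ours): the residual is computed, perturbed by a relative error of size ε to model the inexact computation, orthogonalized against U_k, and used to expand the basis. The candidate rule and the noise model are illustrative assumptions.

```python
import numpy as np

def residual_arnoldi(A, u1, k, eps, rng):
    """Inexact residual Arnoldi sketch: each step computes a candidate's
    residual with relative error eps (||f|| <= eps * ||r||), then
    orthogonalizes it against the current basis and expands."""
    n = len(u1)
    U = (u1 / np.linalg.norm(u1)).reshape(n, 1)
    for _ in range(k):
        H = U.T @ A @ U                   # 1. Rayleigh-Ritz extraction
        mu, Y = np.linalg.eig(H)
        j = np.argmax(np.abs(mu))         #    candidate (our selection rule)
        z = (U @ Y[:, j]).real
        r = A @ z - mu[j].real * z        # 2. residual, then perturb it
        f = rng.standard_normal(n)
        r += eps * np.linalg.norm(r) * f / np.linalg.norm(f)
        r -= U @ (U.T @ r)                # 3. orthogonalize against U_k
        U = np.column_stack([U, r / np.linalg.norm(r)])
    return U
```

Measuring ‖x_1 − U U^T x_1‖ against the dominant eigenvector x_1 at each step reproduces, per the experiments in the talk, an error that keeps decreasing far below ε.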

Residual Arnoldi relation

For i = 1, · · · , k:
• Put all y_i into an upper triangular matrix Y_k.
• Put all µ_i into a diagonal matrix M_k.
• Put all g_i and β_i (except β_k) into an upper Hessenberg matrix G_k.
• Put all f_i^⊥ into F_k.

Then

    A U_k + F_k Y_k^{−1} = U_k (G_k + Y_k M_k) Y_k^{−1} + (β_k/η_k) u_{k+1} e_k^T.

• (G_k + Y_k M_k) Y_k^{−1} is upper Hessenberg.

Backward error

    A U_k + F_k Y_k^{−1} = U_k (G_k + Y_k M_k) Y_k^{−1} + (β_k/η_k) u_{k+1} e_k^T.

Let E_k = F_k Y_k^{−1} U_k^*. Then

    (A + E_k) U_k = U_k (G_k + Y_k M_k) Y_k^{−1} + (β_k/η_k) u_{k+1} e_k^T.

By the uniqueness of the Arnoldi relation, E_k is the backward error of the residual Arnoldi method, and U_k spans a Krylov subspace of A + E_k.

Empirically, ‖E_k‖ is around the level of ε. With that, we can prove ‖E_k x‖ ≤ C2 ε ‖r_k‖.
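The step from the definition of E_k to the perturbed relation is a one-line verification (our addition), using that U_k has orthonormal columns:

```latex
% Since U_k^* U_k = I,
(A + E_k) U_k = A U_k + F_k Y_k^{-1} U_k^* U_k = A U_k + F_k Y_k^{-1},
% which is exactly the left-hand side of the residual Arnoldi relation above.
```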

Some notation

Let (λ̃_k, x̃_k) be an eigenpair of A + E_k corresponding to (λ, x) of A, and let (µ̃_k, z̃_k) be the candidate …

Convergence theory

Some assumptions for our proof:

1. The target eigenpair (λ, x) is simple.
2. There exists a constant C3 > 0 such that sep^{−1}(µ_k, L) < C3, where L = X^*AX and X is an orthonormal basis of the range of I − xx^*.
3. There exists a positive constant C4 such that if ‖E‖ ≤ C4, then there are descending constants κ̃_1, κ̃_2, . . ., independent of E, with lim_k κ̃_k = 0, such that ‖z̃_k − x̃_k‖ ≤ κ̃_k.
4. ‖E_k‖ ≤ εC5.

Put everything together

• Perturbation theory: ‖x̃_k − x‖ ≤ C6‖E_k x‖.
• Backward error: ‖E_k x‖ ≤ C2 ε ‖r_k‖.
• Assumption 3: ‖z̃_k − x̃_k‖ ≤ κ̃_k.
• Residual bound: ‖r_k‖ ≤ 2‖z_k − x‖.
• Invariant property: z̃_k = z_k.

If ε ≤ 1/(2C2C6), then

    ‖r_k‖ ≤ 2κ̃_k / (1 − 2C2C6 ε).
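The slide lists the ingredients; the chain that combines them (our spelled-out version) is:

```latex
\|r_k\| \le 2\|z_k - x\|                                       % residual bound
        =   2\|\tilde z_k - x\|                                 % invariant property
        \le 2\|\tilde z_k - \tilde x_k\| + 2\|\tilde x_k - x\|  % triangle inequality
        \le 2\tilde\kappa_k + 2 C_6 \|E_k x\|                   % assumption 3, perturbation
        \le 2\tilde\kappa_k + 2 C_2 C_6 \,\epsilon\, \|r_k\| ,  % backward error
% and moving the last term to the left (valid when 2 C_2 C_6 \epsilon < 1):
\|r_k\| \le \frac{2\tilde\kappa_k}{1 - 2 C_2 C_6 \epsilon} .
```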

Shift-invert enhancement

Let S = (A − σI)^{−1} and T_k = (σI − M_k)^{−1} Y_k^{−1}. The SIRA relation is

    S U_k + F_k T_k = U_k (H_k − Y_k) T_k + β_k / ((σ − µ_k) η_k) · u_{k+1} e_k^T.

• Backward error: E_k = F_k T_k U_k^*.
• We can prove that ‖E_k x‖ ≤ C8 ε ‖r_k‖.
• If ε < 1/C9 for some constant C9, then

    ‖r_k‖ ≤ 2κ̃_k / (1 − C9 ε).

Experiments

• RAPACK
• Compare with ARPACK
• Compare with SRRIT
• Inexact Krylov method

RAPACK

A numerical package implementing the residual Arnoldi method.

• Two computational modes: RA and SIRA.
• Uses reverse communication to get matrix operation results.
• Implements the Krylov–Schur restarting method for memory management.
• Allows an arbitrary initial subspace.

ARPACK

• Implements the implicitly restarted Arnoldi method.
• Can solve standard eigenproblems, generalized eigenproblems, and singular value decompositions for symmetric, nonsymmetric, and complex matrices.
• Four computational modes:
  Mode 1: standard eigenproblem.
  Mode 2: generalized eigenproblem.
  Modes 3, 4: with shift-invert enhancement.
• We only compare mode 1 with the RA mode, and mode 3 with the SIRA mode.

Test problem

A real nonsymmetric eigenmat A of order 10000.
• The first 100 eigenvalues are 1, 0.95, · · · , 0.95^{99}.
• The other eigenvalues are in (0.25, 0.75).
• The condition number is around 10^5.

Tasks
• Compute the 6 largest eigenvalues using mode 1 of ARPACK and the RA mode of RAPACK.
• Compute the 6 smallest eigenvalues using mode 3 of ARPACK and the SIRA mode of RAPACK.

Settings
• Maximum dimension of subspace: 20.
• Convergence precision: 10^{−13}.

Mode 1 and the RA mode

                     Mode 1    RA mode
    Etime (seconds)  4.6860    8.4242
    MVM              113       138

Let x_i be the i-th eigenvector of A. The error is measured by ‖x_i − UU^*x_i‖.

[Figure: error of the computed eigenvectors for mode 1 and the RA mode.]

Mode 3 and the SIRA mode

Use GMRES to solve the linear systems, with shift = 0.

                           Mode 3     SIRA mode
    Etime (seconds)        378        168
    MVM                    11842      4606
    Outer iterations       68         144
    Precision for solving  10^{−13}   10^{−3}

[Figure: matrix-vector multiplication counts and convergence for mode 3 and the SIRA mode.]

SRRIT

• Implements the Schur–Rayleigh–Ritz iteration method.
• Can compute the dominant invariant subspace.
• Can use an arbitrary subspace to start the process.
• Compare it with the RA mode for using an existing subspace as initialization.
• Use the matrix S = A^{−1}, where A is previously defined, to compute the 6 smallest eigenvalues.

Successive inner-outer process

Properties of Krylov subspaces:
1. Stagnation around the error level.
2. Invariant convergence curves.
3. Superlinear convergence.

[Figure: convergence curves for relative errors 1.e−3, 1.e−6, 1.e−9, 1.e−12.]

Algorithm (sketched in code below):
1. Divide the process into 4 stages with increasing precision requirements 10^{−3}, 10^{−6}, 10^{−9}, 10^{−12}.
2. Stage i computes the matrix-vector multiplication (solves the linear system) to precision ε_i.
3. Each stage uses the previously generated subspace as an initial subspace.
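A driver-level sketch of ours of this schedule; expand_subspace and converged are hypothetical callables (for example, an RA expansion step whose inner solves are carried out to precision eps):

```python
def staged_process(U0, expand_subspace, converged):
    """Successive inner-outer driver: four stages of increasing precision,
    each starting from the subspace the previous stage generated."""
    U = U0
    for eps in (1e-3, 1e-6, 1e-9, 1e-12):  # stage precisions eps_i
        while not converged(U, eps):        # run stage i to its target precision
            U = expand_subspace(U, eps)     # inner computations done to precision eps
    return U
```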

SRRIT and the RA mode

    Stage    10^{−3}  10^{−6}  10^{−9}  10^{−12}  Total
    SRRIT    305      120      152      213       783
    RA mode  39       40       40       34        153

[Figure: convergence of SRRIT and the RA mode.]

Inexact Krylov method

Allows increasing errors in the matrix-vector multiplication.

Implemented in the RA mode with S = A^{−1}, and tolerable error size

    max( ε, τ / (m‖r_{i−1}‖) ),

where
• ε: relative error (= 10^{−3})
• τ: convergence precision (= 10^{−12})
• m: maximum dimension of the subspace (= 50)
• r_i: residual in the i-th iteration

Compare it with mode 3 and the SIRA mode to compute the 6 smallest eigenvalues of A.
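A one-liner (ours) implementing this tolerance rule; the formula above is our reading of a garbled equation in the source, so treat the exact form as indicative:

```python
def tolerable_error(eps, tau, m, r_norm):
    """Allowed relative error for the next matrix-vector product:
    eps = base relative error, tau = convergence precision,
    m = maximum subspace dimension, r_norm = ||r_{i-1}||.
    (Our reading of the slide's garbled formula.)"""
    return max(eps, tau / (m * r_norm))

# With eps=1e-3, tau=1e-12, m=50: the bound stays at 1e-3 while the residual
# is large, and grows as the residual norm approaches tau.
```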

Result of inexact Krylov method

                 Inexact   Mode 3   SIRA
    Etime (s)    80        106      48
    TMVM         5240      7083     2829
    Iterations   43        50       89

[Figure: matrix-vector multiplication counts and convergence for the inexact Krylov method, mode 3, and the SIRA mode.]

Conclusion

• The residual Arnoldi method for eigenproblems allows errors in the computation, and can work with an appropriate initial subspace.
• With shift-invert enhancement, the residual Arnoldi method can greatly reduce the computational cost.
• RAPACK can compute a few selected eigenpairs of real matrices efficiently, and requires only moderate memory.
• Many other algorithms can be implemented in RAPACK to obtain better performance.

Future work

• Block residual Arnoldi method.
• Using other eigenvector approximations, such as refined Ritz vectors or harmonic Ritz vectors.
• More inexact Krylov subspace methods.
• Extension of RAPACK to solve other eigenproblems, such as the generalized eigenproblem.
• Parallelization of RAPACK.
