
Efficient methods of computing interior transmission eigenvalues for the elastic waves

Wei-Chen Chang

Wen-Wei Lin

Jenn-Nan Wang

Abstract

We study the interior transmission eigenvalue problem for elastic wave scattering in this paper. We aim to show the distribution of positive eigenvalues by efficient numerical algorithms. Here the elastic waves are scattered by perturbations of the medium parameters, which include the elasticity tensor $C$ and the density $\rho$. Let us denote by $(C_0,\rho_0)$ and $(C_1,\rho_1)$ the background and the perturbed medium parameters, respectively. We consider two cases of perturbations, $C_0=C_1$, $\rho_1\neq\rho_0$ (case 1) and $C_0\neq C_1$, $\rho_1=\rho_0$ (case 2). After discretizing the associated PDEs by the FEM, we are faced with the computation of generalized eigenvalue problems (GEP) with matrices of large size. These GEPs contain a huge number of nonphysical zeros (for case 1) or nonphysical infinities (for case 2). In order to locate several hundred positive eigenvalues effectively, we convert the GEPs to suitable quadratic eigenvalue problems (QEP). We then implement a quadratic Jacobi-Davidson method combined with partial locking or partial deflation techniques to compute 500 positive eigenvalues.

Keywords: Interior transmission eigenvalues, elastic waves, generalized eigenvalue problems, quadratic eigenvalue problems, quadratic Jacobi-Davidson method.

1 Introduction

We study the interior transmission eigenvalue problem (ITEP) for elastic wave scattering in this paper. Our purpose here is to propose an efficient numerical algorithm to compute as many transmission eigenvalues as possible for the time-harmonic elastic waves. Let $D$ be an open bounded domain in $\mathbb{R}^2$ with smooth boundary $\partial D$. Let $u(x)=[u_1(x),u_2(x)]^\top$ be the two-dimensional vector representing the displacement and let its infinitesimal strain tensor be

Department of Mathematics, National Taiwan University, Taipei 106, Taiwan. Email: d04221003@ntu.edu.tw

Department of Applied Mathematics, National Chiao Tung University, Hsinchu 300, Taiwan. E-mail: wwlin@math.nctu.edu.tw

Institute of Applied Mathematical Sciences, NCTS, National Taiwan University, Taipei 106, Taiwan. Email: jnwang@math.ntu.edu.tw

given by $\varepsilon(u) = ((\nabla u)^\top + \nabla u)/2$. We consider linear elasticity, that is, the stress tensor $\sigma(u)$ is defined by $\varepsilon(u)$ via Hooke's law:

$\sigma_C(u) = C\varepsilon(u),$

where $C$ is the elasticity tensor. The elasticity tensor $C=(C_{ijkl})$, $1\le i,j,k,l\le 2$, is a fourth-rank tensor satisfying two symmetry properties:

$C_{ijkl} = C_{klij}$ (major symmetry), $\quad C_{ijkl} = C_{jikl}$ (minor symmetry).  (1)

We also require that $C$ satisfies the strong convexity condition: there exists $\kappa>0$ such that for any symmetric matrix $A$

$CA : A \ge \kappa|A|^2 \quad \forall\, x\in D,$  (2)

where for two matrices $A$, $B$, $A:B=\sum_{ij}a_{ij}b_{ij}$ and $|A|^2=A:A$. In particular, the elastic body is called isotropic if

$C_{ijkl} = \lambda\delta_{ij}\delta_{kl} + \mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}),$

where $\mu$ and $\lambda$ are called the Lamé coefficients. In other words, for an isotropic elastic body, the stress-strain relation is given by

$\sigma_C(u) = 2\mu\varepsilon(u) + \lambda\,\mathrm{tr}(\varepsilon(u))I = 2\mu\varepsilon(u) + \lambda(\nabla\cdot u)I,$  (3)

where $I$ stands for the identity matrix. The convexity condition (2) is then equivalent to

$\mu(x) > 0, \quad \lambda + \mu > 0.$  (4)
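As a quick illustration of the isotropic stress-strain law (3), the following MATLAB snippet (a minimal sketch of ours, not part of the paper; the function name and inputs are illustrative) evaluates $\sigma_C(u)$ from a given $2\times 2$ displacement gradient and Lamé coefficients satisfying (4).

% Minimal sketch of the isotropic Hooke law (3): given the 2x2 displacement
% gradient G = grad(u) and Lame coefficients mu, lambda satisfying (4),
% return sigma = 2*mu*strain + lambda*trace(strain)*I.
function sigma = isotropic_stress(G, mu, lambda)
  strain = (G + G.')/2;                          % infinitesimal strain tensor
  sigma  = 2*mu*strain + lambda*trace(strain)*eye(2);
end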

Let $C_i$, $i=0,1$, be two elasticity tensors satisfying (1), (2), and denote by $\sigma_{C_i}$ the associated stress tensors. The ITEP is to find $\omega^2\in\mathbb{C}$ such that there exists a nontrivial solution $(u,v)\in[H^1(D)]^2\times[H^1(D)]^2$ solving

$\nabla\cdot\sigma_{C_0}(u) + \rho_0\omega^2 u = 0$ in $D$,  (5a)
$\nabla\cdot\sigma_{C_1}(v) + \rho_1\omega^2 v = 0$ in $D$,  (5b)
$u = v$ on $\partial D$,  (5c)
$\sigma_{C_0}(u)\nu = \sigma_{C_1}(v)\nu$ on $\partial D$,  (5d)

where $\rho_0$, $\rho_1$ are density functions and $\nu$ is the outer normal of $\partial D$. Physically, $\sigma_{C_0}(u)\nu$ (or $\sigma_{C_1}(v)\nu$) denotes the traction acting on $\partial D$.

The study of the ITEP originates from the validity of some qualitative approaches to inverse scattering problems in an inhomogeneous medium, such as the linear sampling method [13] and the factorization method [22]. To describe its origin, we assume that $\rho_0=1$ and $\sigma_{C_0}$ is the isotropic stress tensor (3) with constant Lamé coefficients $\lambda$, $\mu$. Let the incident field $u^{in}_e(x)$ with $e=p$ or $s$ be given by

$u^{in}_p(x) = \xi e^{ik_p x\cdot\xi}$ or $u^{in}_s(x) = \xi^{\perp} e^{ik_s x\cdot\xi}$

with $\xi\in\Omega:=\{\|\xi\|_2=1\}$, where $k_p:=\omega/\sqrt{\lambda+2\mu}$ and $k_s:=\omega/\sqrt{\mu}$ represent the compressional and the shear wave numbers, respectively. It is easily seen that $u^{in}_e$ satisfies the elastic wave equation

$\nabla\cdot\sigma_{C_0}(u^{in}_e) + \omega^2 u^{in}_e = 0$ in $\mathbb{R}^2$.

Suppose that the homogeneous medium is perturbed by a penetrable object $D$ with elasticity parameters $(C_1,\rho_1)$. Denote $\widetilde C = C_1\chi_D + C_0\chi_{\mathbb{R}^2\setminus\bar D}$ and $\tilde\rho = \rho_1\chi_D + \chi_{\mathbb{R}^2\setminus\bar D}$; then the total field $u^{to}_e(x) = u^{in}_e(x) + u^{sc}_e(x)$ solves

$\nabla\cdot\sigma_{\widetilde C}(u^{to}_e) + \tilde\rho\,\omega^2 u^{to}_e = 0$ in $\mathbb{R}^2$.

Here, by the Helmholtz-Hodge decomposition, we can write $u^{sc}_e(x) = u^{sc}_{ep}(x) + u^{sc}_{es}(x)$. The scattered fields $u^{sc}_{ep}(x)$, $u^{sc}_{es}(x)$ are required to satisfy the Kupradze radiation conditions as $|x|\to\infty$,

$\dfrac{\partial u^{sc}_{ep}}{\partial|x|} - ik_p u^{sc}_{ep} = o(|x|^{-1/2}), \qquad \dfrac{\partial u^{sc}_{es}}{\partial|x|} - ik_s u^{sc}_{es} = o(|x|^{-1/2}),$

uniformly in $\hat x = x/|x|$. It is well known that every radiating solution $u^{sc}_e(x)$ has an asymptotic behavior of the form

$u^{sc}_e(x) = \dfrac{e^{ik_p|x|}}{\sqrt{|x|}}\,u_{ep}(\hat x,\xi)\,\hat x + \dfrac{e^{ik_s|x|}}{\sqrt{|x|}}\,u_{es}(\hat x,\xi)\,\hat x^{\perp} + O(|x|^{-3/2}),$

where $u_e(\hat x,\xi) := (u_{ep}(\hat x,\xi), u_{es}(\hat x,\xi))$ is called the far-field pattern. The inverse problem we are interested in is to reconstruct the shape of the penetrable object $D$ from the far-field patterns $u_p(\hat x,\xi) = (u_{pp}(\hat x,\xi), u_{ps}(\hat x,\xi))$ and $u_s(\hat x,\xi) = (u_{sp}(\hat x,\xi), u_{ss}(\hat x,\xi))$ for all $\hat x,\xi\in\Omega$.

In the linear sampling method and the factorization method, the following far-field operator $F:[L^2(\Omega)]^2\to[L^2(\Omega)]^2$ plays an important role:

$(Fg)(\hat x) = e^{-i\pi/4}\displaystyle\int_{\Omega}\left(\sqrt{\dfrac{k_p}{\omega}}\,u_p(\hat x,\xi)g_p(\xi) + \sqrt{\dfrac{k_s}{\omega}}\,u_s(\hat x,\xi)g_s(\xi)\right)d\xi,$

where $g(\xi) = (g_p(\xi), g_s(\xi))$. Due to the superposition principle, $Fg$ is the far-field pattern of the scattered field generated by the elastic Herglotz wave function

$u_g(x) := e^{-i\pi/4}\displaystyle\int_{\Omega}\left(\sqrt{\dfrac{k_p}{\omega}}\,\xi e^{ik_p x\cdot\xi}g_p(\xi) + \sqrt{\dfrac{k_s}{\omega}}\,\xi^{\perp} e^{ik_s x\cdot\xi}g_s(\xi)\right)d\xi.$

A well-known result is that the far-field operator $F$ is injective and has dense range if and only if $\omega^2$ is not an eigenvalue of (5) with an eigenfunction of the form $(u_g, v)$. Roughly speaking, if $\omega^2$ is an eigenvalue of (5) with corresponding eigenfunction $(u_g, v)$, then the penetrable object $D$ does not scatter the incident field $u_g$, i.e., it behaves as a non-scattering object.


The ITEP has recently enjoyed a rapid development in the study of direct/inverse scattering problems for acoustic and electromagnetic waves in inhomogeneous media [6, 7, 8, 9, 14, 15, 16, 23, 29]. We refer to the two monographs [5, 24] for detailed accounts of the ITEP for acoustic and electromagnetic waves. For the ITEP for elastic waves, there are a few theoretical results [1, 2, 10, 11, 12]. In particular, in [1], the fundamental questions of existence and discreteness of the transmission eigenvalues associated with (5) were established.

The main focus of this work is to develop a numerical method to compute the transmission eigenvalues of (5). For acoustic and electromagnetic waves, several numerical algorithms for computing transmission eigenvalues have been proposed recently. Three finite element methods and a coupled boundary element method were proposed for solving the 2D/3D ITEP [15, 17, 31]. Two iterative methods, together with a convergence analysis based on the existence theory for the fourth-order reformulation of the transmission eigenvalue problem in [8], were considered in [30]. A mixed finite element method for the 2D ITEP was suggested in [20], and the corresponding non-Hermitian quadratic eigenvalue problem (QEP) was solved by the classical secant iteration with an adaptive Arnoldi method. The multilevel correction method was used to transform the transmission eigenvalue problem into a series of linear boundary value problems, which were then solved by the multigrid method [21].

In many cases, we are mainly interested in locating positive transmission eigenvalues. However, for general inhomogeneous media, the desired positive transmission eigenvalues are surrounded by complex ones. We would like to mention that an accurate numerical method for solving the corresponding nonlinear eigenvalue problems, based on a surface integral formulation of the ITEP, was presented in [25] for many different obstacles in 3D, but only a constant index of refraction and smooth domains can be treated. In [28] (for the 2D ITEP) and [18] (for the 3D ITEP), the QEP is rewritten as a particular parametrized symmetric definite generalized eigenvalue problem (GEP) whose eigenvalue curves are arranged in a monotonic order, so that the desired curves can be solved sequentially with a new secant-type iteration. In [26, 27], a quadratic Jacobi-Davidson method with nonequivalence deflation is applied to compute a large number of positive eigenvalues of the corresponding QEPs.

As far as we know, there is only one result [19] concerning the numerical computation of the ITEP for elastic waves. In [19], a numerical method was presented to compute a few of the smallest positive transmission eigenvalues of (5). The ITEP was reformulated as locating the roots of a nonlinear function whose values are generalized eigenvalues of a series of self-adjoint fourth-order problems. After discretizing the fourth-order eigenvalue problems by an $H^2$-conforming finite element, a secant-type method was employed to compute the roots of the nonlinear function. Our method here is based on the ideas in [26, 27]. We first discretize (5) by the finite element method and then apply a quadratic Jacobi-Davidson method with nonequivalence deflation to compute the eigenvalues of the resulting QEP. In contrast to [19], our method is able to locate a large number of positive transmission eigenvalues of (5). Also, in [19], the authors only studied the isotropic case with the same Lamé coefficients and different densities, i.e., $(\lambda_0,\mu_0)=(\lambda_1,\mu_1)$, $\rho_0\neq\rho_1$. In this paper, we consider anisotropic elasticity with $C_0=C_1$, $\rho_0\neq\rho_1$ or $C_0\neq C_1$, $\rho_0=\rho_1$. The latter case is more practical, since it represents the situation where the background and the perturbed bodies have different elastic behaviors.

Before ending the introduction, we would like to summarize the main contributions of this paper.

(i) We study the interior transmission eigenvalue problem for elastic waves in which either $\rho_0\neq\rho_1$ or $C_0\neq C_1$. For these two cases, we reduce the PDE problems to GEPs by the FEM. Due to the existence of nonphysical zeros or infinities, and in order to compute several hundred positive eigenvalues efficiently, the GEPs are transformed to QEPs. The reduction of the GEP to a QEP in the first case was derived before. We then implement a quadratic Jacobi-Davidson algorithm to locate 500 positive eigenvalues of the QEP in the first case.

(ii) The second case, where the elasticity tensors are different, has never been studied numerically before. Mimicking the method for the first case, we try to convert the associated GEP to a QEP. Unfortunately, this naive attempt meets a special challenge: a crucial matrix is degenerate in the second case. Consequently, a straightforward generalization of the transformations used for the first case fails. Here we devise a trick to circumvent this difficulty and derive a QEP that can be solved by the quadratic Jacobi-Davidson algorithm suitably combined with partial locking and partial deflation.

(iii) We are also interested in the behaviors of the interior transmission eigenfunctions for elastic waves. For the classical acoustic wave scattering, it was proved theoretically in [4] and demonstrated numerically in [3] that the interior transmission eigenfunction vanishes near a corner with interior angle less than $\pi$ and is localized near a corner with interior angle greater than $\pi$. Our numerical results for the first case indicate similar behaviors of the interior transmission eigenfunctions. We point out that the ITEP for elastic waves with only a density jump is analogous to the acoustic wave scattering considered in [4] and [3]. Supported by this numerical evidence, it is a natural question to verify the phenomenon rigorously.

(iv) For the second case, the behaviors of the interior transmission eigenfunctions near the boundary have never been investigated, either theoretically or numerically. Intuitively, since the jump occurs in the leading order (i.e., in the elasticity tensor), one should study the behaviors of $\nabla u$ and $\nabla v$ near the boundary rather than those of $u$ and $v$. Our numerical simulations demonstrate that $\nabla u$ and $\nabla v$ are localized near the point where the interior angle is larger than $\pi$. We believe that proving this phenomenon rigorously is a daunting task.


(v) The resulting matrices derived from the FEM are sparse and of large size. To compute eigenvalues effectively, it is crucial to maintain sparsity in all matrix computations. In particular, in the second case, we have to deal with a logically sparse matrix given as the sum of a sparse matrix and a low-rank perturbation. The inverse of such a matrix must be applied carefully using the Sherman-Morrison-Woodbury formula.

The paper is organized as follows. In Section 2, we introduce a GEP by discretizing the ITEP with the FEM. In the two cases considered in this paper, the GEPs contain a huge number of nonphysical eigenvalues. We therefore convert the GEPs to QEPs in Section 3. An efficient quadratic Jacobi-Davidson method is discussed in Section 4. Numerical results are presented in Section 5. We end the paper with some conclusions and discussions in Section 6.

2 Discretization of ITEP

We first review the discretization of the ITEP (5) based on the standard piecewise linear FEM (see [15] for details). Let

$S_h$ = the space of continuous piecewise linear functions on $D$,
$S_h^I$ = the subspace of functions in $S_h$ with vanishing DoF on $\partial D$,
$S_h^B$ = the subspace of functions in $S_h$ with vanishing DoF in $D$,

where DoF stands for degrees of freedom. Let $\{\Phi_i\}_{i=1}^{n}$ and $\{\Psi_i\}_{i=1}^{m}$ denote the standard nodal bases for the finite element spaces $S_h^I$ and $S_h^B$, respectively. Then

$u = u_h^I + u_h^B = \sum_{j=1}^{n} u_j\Phi_j + \sum_{j=1}^{m} w_j\Psi_j,$
$v = v_h^I + u_h^B = \sum_{j=1}^{n} v_j\Phi_j + \sum_{j=1}^{m} w_j\Psi_j.$

The choice of $u_h^B$ in $v$ is to ensure that (5c) is satisfied. Applying the standard piecewise linear finite element method to (5a) and using integration by parts, we obtain

$\sum_{j=1}^{n} u_j(\sigma_{C_0}(\Phi_j),\nabla\Phi_i) + \sum_{j=1}^{m} w_j(\sigma_{C_0}(\Psi_j),\nabla\Phi_i) = \omega^2\left[\sum_{j=1}^{n} u_j(\rho_0\Phi_j,\Phi_i) + \sum_{j=1}^{m} w_j(\rho_0\Psi_j,\Phi_i)\right].$  (6)


Matrix | Dimension | Definition
$K_l\succ 0$, $l=0,1$ | $n\times n$ | interior space stiffness matrices: $(K_l)_{ij}=(\sigma_{C_l}(\Phi_j),\nabla\Phi_i)$
$E_l$, $l=0,1$ | $n\times m$ | interior/boundary stiffness matrices: $(E_l)_{ij}=(\sigma_{C_l}(\Psi_j),\nabla\Phi_i)$
$M_l\succ 0$, $l=0,1$ | $n\times n$ | interior space mass matrices: $(M_l)_{ij}=(\rho_l\Phi_j,\Phi_i)$
$F_l$, $l=0,1$ | $n\times m$ | interior/boundary mass matrices: $(F_l)_{ij}=(\rho_l\Psi_j,\Phi_i)$
$G_l\succ 0$, $l=0,1$ | $m\times m$ | boundary space stiffness matrices: $(G_l)_{ij}=(\sigma_{C_l}(\Psi_j),\nabla\Psi_i)$
$H_l\succ 0$, $l=0,1$ | $m\times m$ | boundary space mass matrices: $(H_l)_{ij}=(\rho_l\Psi_j,\Psi_i)$

Table 1: Stiffness and mass matrices for (6), (7), and (8).

Similarly, applying the standard piecewise linear finite element method to (5b), we have

$\sum_{j=1}^{n} v_j(\sigma_{C_1}(\Phi_j),\nabla\Phi_i) + \sum_{j=1}^{m} w_j(\sigma_{C_1}(\Psi_j),\nabla\Phi_i) = \omega^2\left[\sum_{j=1}^{n} v_j(\rho_1\Phi_j,\Phi_i) + \sum_{j=1}^{m} w_j(\rho_1\Psi_j,\Phi_i)\right].$  (7)

Recall that

$(\sigma_C(\Phi_j),\nabla\Phi_i) = \displaystyle\int_D C\varepsilon(\Phi_j):\varepsilon(\Phi_i)\,dx,$

and, for the isotropic elasticity,

$(\sigma_C(\Phi_j),\nabla\Phi_i) = \displaystyle\int_D \{2\mu\,\varepsilon(\Phi_j):\varepsilon(\Phi_i) + \lambda(\nabla\cdot\Phi_j)(\nabla\cdot\Phi_i)\}\,dx,$

where for matrices $A$ and $B$, $A:B=\sum_{i,j}a_{ij}b_{ij}$. Finally, taking (5c), (5d) into account, applying the linear finite element method to the difference of (5a) and (5b), and performing integration by parts yields

$\sum_{j=1}^{n}\big(u_j(\sigma_{C_0}(\Phi_j),\nabla\Psi_i) - v_j(\sigma_{C_1}(\Phi_j),\nabla\Psi_i)\big) + \sum_{j=1}^{m} w_j(\sigma_{C_0}(\Psi_j)-\sigma_{C_1}(\Psi_j),\nabla\Psi_i) = \omega^2\left[\sum_{j=1}^{n}\big(u_j(\rho_0\Phi_j,\Psi_i) - v_j(\rho_1\Phi_j,\Psi_i)\big) + \sum_{j=1}^{m} w_j((\rho_0-\rho_1)\Psi_j,\Psi_i)\right].$  (8)

Hereafter, we define the stiffness matrices $K_l$, $E_l$, $G_l$ and the mass matrices $M_l$, $F_l$, $H_l$ as in Table 1. In addition, we set $u=[u_1,\ldots,u_n]^\top$, $v=[v_1,\ldots,v_n]^\top$, and $w=[w_1,\ldots,w_m]^\top$. Then the discretizations (6), (7) and (8) give rise to a generalized eigenvalue problem (GEP)

$Az = \lambda Bz$  (9a)

with $\lambda=\omega^2$,

$A = \begin{bmatrix} K_0 & 0 & E_0 \\ 0 & K_1 & E_1 \\ E_0^\top & -E_1^\top & G_0-G_1 \end{bmatrix}, \quad B = \begin{bmatrix} M_0 & 0 & F_0 \\ 0 & M_1 & F_1 \\ F_0^\top & -F_1^\top & H_0-H_1 \end{bmatrix}, \quad z = \begin{bmatrix} u \\ v \\ w \end{bmatrix}.$  (9b)
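For concreteness, once the blocks of Table 1 are assembled as sparse matrices, the GEP (9) can be put together directly; the following MATLAB lines are a minimal sketch of ours (the variable names are assumptions, not the authors' code).

% Minimal sketch: assemble the GEP (9) A*z = lambda*B*z from the sparse FEM
% blocks of Table 1 (K0,K1,E0,E1,G0,G1,M0,M1,F0,F1,H0,H1 assumed given).
n = size(K0, 1);                 % number of interior DoF (doubled for 2D elasticity)
Z = sparse(n, n);
A = [K0,   Z,     E0;
     Z,    K1,    E1;
     E0.', -E1.', G0 - G1];
B = [M0,   Z,     F0;
     Z,    M1,    F1;
     F0.', -F1.', H0 - H1];
% In principle, a few eigenvalues near a positive shift could be checked with
% eigs(A, B, 10, 0.05), though this is not the approach taken in the paper.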

3 GEP to QEP

Our next aim is to convert the GEP (9) into a quadratic eigenvalue problem (QEP). We now discuss two cases separately.

Case 1. Here we assume that $C_0=C_1$ and $\rho_0>\rho_1$. Thus we have $K_0=K_1=K$, $E_0=E_1=E$, $G_0=G_1$. This is exactly the same system considered in [26]. The difficulty in locating a group of eigenvalues near the left end of the positive real axis lies in the fact that there are too many zero eigenvalues.

To see this, we compute the null space of $A$. It is clear that $Az=0$ is equivalent to

$Ku + Ew = 0 \Leftrightarrow u = -K^{-1}Ew, \qquad Kv + Ew = 0 \Leftrightarrow v = -K^{-1}Ew, \qquad E^\top u - E^\top v = 0.$  (10)

Therefore, for any $w\neq 0$, $(-K^{-1}Ew, -K^{-1}Ew, w)$ belongs to $\mathrm{null}(A)$. In other words, $\dim(\mathrm{null}(A)) = m$. Since we have a cluster of zero eigenvalues, the usual GEP solvers are ineffective in finding the positive eigenvalues we are interested in. We therefore follow the ideas in [26] (more precisely, in [18]) and convert the GEP to a QEP. We now write

$H = H_0-H_1, \quad M = M_0-M_1, \quad F = F_0-F_1,$
$\widehat M_1 = M_1 - F_1H^{-1}F^\top, \quad \widehat M = M - FH^{-1}F^\top, \quad \widehat K = K - EH^{-1}F^\top,$

and

$S = \begin{bmatrix} K & E \end{bmatrix}, \quad T_1 = \begin{bmatrix} M_1 & F_1 \end{bmatrix}, \quad \mathcal{M} = \begin{bmatrix} M & F \\ F^\top & H \end{bmatrix}.$

Since $\rho_0>\rho_1$, we can see that $H\succ 0$, $M\succ 0$, and $\mathcal{M}\succ 0$. We also have $\widehat M\succ 0$. The $\lambda$-matrix $\mathcal{L}(\lambda)$ of (9) is now written as

$\mathcal{L}(\lambda) = \begin{bmatrix} K-\lambda(M+M_1) & 0 & E-\lambda(F+F_1) \\ 0 & K-\lambda M_1 & E-\lambda F_1 \\ E^\top-\lambda(F^\top+F_1^\top) & -E^\top+\lambda F_1^\top & -\lambda H \end{bmatrix}.$  (11)

Introducing the two invertible matrices

$\mathcal{J} = \begin{bmatrix} I_n & 0 & 0 \\ 0 & -I_n & 0 \\ 0 & 0 & I_m \end{bmatrix}, \quad \mathcal{P} = \begin{bmatrix} 0 & I_n & 0 \\ -I_n & I_n & 0 \\ 0 & 0 & I_m \end{bmatrix},$

we can transform $\mathcal{L}(\lambda)$ to a symmetric $\lambda$-matrix

$\mathcal{J}\mathcal{P}\mathcal{L}(\lambda)\mathcal{P} = \begin{bmatrix} -K+\lambda M_1 & K-\lambda M_1 & E-\lambda F_1 \\ K-\lambda M_1 & -\lambda M & -\lambda F \\ E^\top-\lambda F_1^\top & -\lambda F^\top & -\lambda H \end{bmatrix} = \begin{bmatrix} -K+\lambda M_1 & S-\lambda T_1 \\ (S-\lambda T_1)^\top & -\lambda\mathcal{M} \end{bmatrix}.$

Furthermore, let

$C(\lambda) = \begin{bmatrix} I_n & 0 \\ \lambda^{-1}\mathcal{M}^{-1}(S-\lambda T_1)^\top & I_{n+m} \end{bmatrix};$

then we can compute

$(C(\lambda))^\top(\mathcal{J}\mathcal{P}\mathcal{L}(\lambda)\mathcal{P})C(\lambda) = \begin{bmatrix} -K+\lambda M_1+\lambda^{-1}(S-\lambda T_1)\mathcal{M}^{-1}(S-\lambda T_1)^\top & 0 \\ 0 & -\lambda\mathcal{M} \end{bmatrix} = \begin{bmatrix} \lambda^{-1}Q(\lambda) & 0 \\ 0 & -\lambda\mathcal{M} \end{bmatrix},$

where

$Q(\lambda) \equiv \lambda^2 A_2 + \lambda A_1 + A_0,$  (12)

with $A_2$, $A_1$ and $A_0$ being $n\times n$ symmetric matrices given by

$A_2 = M_1 + \widehat M_1\widehat M^{-1}\widehat M_1^\top + F_1H^{-1}F_1^\top = M_1 + T_1\mathcal{M}^{-1}T_1^\top,$  (13a)
$A_1 = -K - \widehat K\widehat M^{-1}\widehat M_1^\top - \widehat M_1\widehat M^{-1}\widehat K^\top - EH^{-1}F_1^\top - F_1H^{-1}E^\top = -K - S\mathcal{M}^{-1}T_1^\top - T_1\mathcal{M}^{-1}S^\top,$  (13b)
$A_0 = \widehat K\widehat M^{-1}\widehat K^\top + EH^{-1}E^\top = S\mathcal{M}^{-1}S^\top.$  (13c)

It has been shown in [18] that the GEP (9) can be reduced to the QEP given by (12) and (13), in which all nonphysical zeros are removed.

Theorem 1 ([18]). Let $\mathcal{L}(\lambda)$ and $Q(\lambda)$ be defined as in (9) and (12), respectively. Then

$\sigma(\mathcal{L}(\lambda)) = \underbrace{\{0,\cdots,0\}}_{m}\,\cup\,\sigma(Q(\lambda)).$

Here, $\sigma(\cdot)$ denotes the spectrum of the associated matrix pencil.
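Since $\mathcal{M}^{-1}$ is dense, the coefficient matrices in (13) are not formed explicitly in a large-scale computation; what an iterative eigensolver needs is their action on a vector. The following MATLAB lines are a minimal sketch of ours (assumed variable names, not the authors' implementation) of the matrix-vector product $x\mapsto Q(\lambda)x$ for Case 1 using only sparse solves with $\mathcal{M}$; in practice a factorization of $\mathcal{M}$ would be computed once and reused.

% Minimal sketch: y = Q(lambda)*x for Case 1 with Q as in (12)-(13), using
% S = [K E], T1 = [M1 F1] and the sparse block matrix Mcal = [M F; F' H].
% None of A0, A1, A2 is formed explicitly.
function y = apply_Q_case1(lambda, x, K, M1, S, T1, Mcal)
  t1x = Mcal \ (T1.' * x);              % Mcal^{-1} * T1' * x
  sx  = Mcal \ (S.'  * x);              % Mcal^{-1} * S'  * x
  A2x = M1 * x + T1 * t1x;              % action of A2 in (13a)
  A1x = -K * x - S * t1x - T1 * sx;     % action of A1 in (13b)
  A0x = S * sx;                         % action of A0 in (13c)
  y   = lambda^2 * A2x + lambda * A1x + A0x;
end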


Case 2. Now we assume $\rho_0=\rho_1$ and $C_0>C_1$ in the sense of (2). Then $M_0=M_1=M$, $F_0=F_1=F$, $H_0-H_1=0$, and (9) becomes

$Az = \lambda Bz$  (14)

with

$A = \begin{bmatrix} K_0 & 0 & E_0 \\ 0 & K_1 & E_1 \\ E_0^\top & -E_1^\top & G_0-G_1 \end{bmatrix}, \quad B = \begin{bmatrix} M & 0 & F \\ 0 & M & F \\ F^\top & -F^\top & 0 \end{bmatrix}.$  (15)

Similar to the observation (10), we can see that $\dim(\mathrm{null}(B)) = m$. Consequently, the GEP (14) has $m$ infinite eigenvalues. Since we are interested in locating eigenvalues at the left end of the positive real axis, those $m$ infinite eigenvalues seem harmless, and it looks as if one could solve the GEP (14) directly. However, even here a direct GEP approach is not effective due to the excessive number of complex eigenvalues. So we will again convert the GEP (14) to a QEP.

Changing $\lambda$ to $\lambda^{-1}$, (14) is equivalent to

$Bz = \lambda Az.$  (16)

In other words, we are now interested in locating eigenvalues of (16) at the right end of the positive real axis. Note that (16) has $m$ nonphysical zeros. As in Case 1, defining $K = K_0-K_1$, $E = E_0-E_1$, and $G = G_0-G_1$, the $\lambda$-matrix $\widetilde{\mathcal{L}}(\lambda)$ corresponding to (16) becomes

$\widetilde{\mathcal{L}}(\lambda) = \begin{bmatrix} M-\lambda(K+K_1) & 0 & F-\lambda(E+E_1) \\ 0 & M-\lambda K_1 & F-\lambda E_1 \\ F^\top-\lambda(E+E_1)^\top & -F^\top+\lambda E_1^\top & -\lambda G \end{bmatrix},$  (17)

which has exactly the same structure as (11). Therefore, as in Case 1, we can transform $\widetilde{\mathcal{L}}(\lambda)$ to a symmetric $\lambda$-matrix

$\mathcal{J}\mathcal{P}\widetilde{\mathcal{L}}(\lambda)\mathcal{P} = \begin{bmatrix} -M+\lambda K_1 & M-\lambda K_1 & F-\lambda E_1 \\ M-\lambda K_1 & -\lambda K & -\lambda E \\ F^\top-\lambda E_1^\top & -\lambda E^\top & -\lambda G \end{bmatrix} = \begin{bmatrix} -M+\lambda K_1 & R-\lambda U_1 \\ (R-\lambda U_1)^\top & -\lambda\mathcal{K} \end{bmatrix},$

where

$R = \begin{bmatrix} M & F \end{bmatrix}, \quad U_1 = \begin{bmatrix} K_1 & E_1 \end{bmatrix}, \quad \mathcal{K} = \begin{bmatrix} K & E \\ E^\top & G \end{bmatrix}.$

Likewise, since $C_0>C_1$, we can see that $G\succ 0$ and $\mathcal{K}\succeq 0$. However, in this case the matrix $\mathcal{K}$ may be degenerate, so we cannot proceed as we did in Case 1.

To further analyze this case, we assume that

$\mathrm{null}(\mathcal{K}) = \mathrm{span}\{v_1,\cdots,v_k\}$

and denote $V_0 = [v_1,\cdots,v_k]\in\mathbb{R}^{(n+m)\times k}$. Then there exists a sequence of Householder transforms

$Q = \left(I - \dfrac{2q_kq_k^\top}{q_k^\top q_k}\right)\cdots\left(I - \dfrac{2q_1q_1^\top}{q_1^\top q_1}\right)$

such that

$QV_0 = \begin{bmatrix} 0 \\ \Gamma \end{bmatrix},$

where $\Gamma$ is a $k\times k$ nonsingular triangular matrix. Note that $Q$ is orthogonal and can be written as

$Q = I - WY^\top$  (18)

with $W, Y\in\mathbb{R}^{(n+m)\times k}$. It is not hard to check that

$Q\mathcal{K}Q^\top = \begin{bmatrix} \widehat{\mathcal{K}} & 0 \\ 0 & 0 \end{bmatrix},$

where

$\widehat{\mathcal{K}} = \begin{bmatrix} I_{(n+m)-k} & 0 \end{bmatrix}\big(\mathcal{K} - \mathcal{K}YW^\top - WY^\top\mathcal{K} + W(Y^\top\mathcal{K}Y)W^\top\big)\begin{bmatrix} I_{(n+m)-k} \\ 0 \end{bmatrix} = \mathcal{K}_1 + \begin{bmatrix} \widetilde Y & \widetilde W \end{bmatrix}\begin{bmatrix} 0 & -I_k \\ -I_k & \widetilde K \end{bmatrix}\begin{bmatrix} \widetilde Y^\top \\ \widetilde W^\top \end{bmatrix}$  (19)

and

$\mathcal{K}_1 = \begin{bmatrix} I_{(n+m)-k} & 0 \end{bmatrix}\mathcal{K}\begin{bmatrix} I_{(n+m)-k} \\ 0 \end{bmatrix}, \quad \widetilde K = Y^\top\mathcal{K}Y, \quad \widetilde Y = \begin{bmatrix} I_{(n+m)-k} & 0 \end{bmatrix}\mathcal{K}Y, \quad \widetilde W = \begin{bmatrix} I_{(n+m)-k} & 0 \end{bmatrix}W.$

Note that $\mathcal{K}_1$ is invertible. It is important to point out that, since $\mathcal{K}$ is sparse, so is $\mathcal{K}_1$. We observe from (19) that $\widehat{\mathcal{K}}$ is the sum of a sparse matrix and a low-rank perturbation; for this reason, $\widehat{\mathcal{K}}$ is called logically sparse. This property is crucial for applying $\widehat{\mathcal{K}}^{-1}$ efficiently. More precisely, in view of (19), $\widehat{\mathcal{K}}^{-1}$ can be applied efficiently by using the Sherman-Morrison-Woodbury formula.
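A minimal MATLAB sketch of this idea (our own illustration with assumed names, not the authors' code): apply $\widehat{\mathcal{K}}^{-1}$ to a vector through (19) and the Sherman-Morrison-Woodbury formula, so that the only large solves involve the sparse matrix $\mathcal{K}_1$; in an actual implementation a factorization of $\mathcal{K}_1$ would be computed once and reused.

% Minimal sketch: x = Khat^{-1} * b with Khat = Kcal1 + U*C*U' as in (19),
% where U = [Ytilde, Wtilde] has 2k columns and C = [0 -I; -I Ktilde] is a
% small 2k-by-2k matrix. Sherman-Morrison-Woodbury:
% (Kcal1 + U*C*U')^{-1} = Kcal1^{-1}
%   - Kcal1^{-1}*U*(inv(C) + U'*Kcal1^{-1}*U)^{-1}*U'*Kcal1^{-1}.
function x = apply_Khat_inv(b, Kcal1, U, C)
  K1invb = Kcal1 \ b;                    % sparse solve
  K1invU = Kcal1 \ U;                    % 2k sparse solves (k is small)
  cap    = inv(C) + U.' * K1invU;        % small capacitance matrix
  x      = K1invb - K1invU * (cap \ (U.' * K1invb));
end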

We now write

$\begin{bmatrix} I_n & 0 \\ 0 & Q \end{bmatrix}\begin{bmatrix} -M+\lambda K_1 & R-\lambda U_1 \\ (R-\lambda U_1)^\top & -\lambda\mathcal{K} \end{bmatrix}\begin{bmatrix} I_n & 0 \\ 0 & Q^\top \end{bmatrix} = \begin{bmatrix} -M+\lambda K_1 & \widehat R-\lambda\widehat U_1 & S-\lambda T \\ (\widehat R-\lambda\widehat U_1)^\top & -\lambda\widehat{\mathcal{K}} & 0 \\ (S-\lambda T)^\top & 0 & 0 \end{bmatrix},$

where

$\begin{bmatrix} \widehat R-\lambda\widehat U_1 & S-\lambda T \end{bmatrix} = (R-\lambda U_1)Q^\top.$  (20)

Thus the GEP (16) is equivalent to

$\begin{bmatrix} -M+\lambda K_1 & \widehat R-\lambda\widehat U_1 & S-\lambda T \\ (\widehat R-\lambda\widehat U_1)^\top & -\lambda\widehat{\mathcal{K}} & 0 \\ (S-\lambda T)^\top & 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = 0,$

namely,

$(-M+\lambda K_1)x + (\widehat R-\lambda\widehat U_1)y + (S-\lambda T)z = 0,$  (21a)
$(\widehat R-\lambda\widehat U_1)^\top x - \lambda\widehat{\mathcal{K}}y = 0,$  (21b)
$(S-\lambda T)^\top x = 0.$  (21c)

It follows from (21b) that

$y = \dfrac{1}{\lambda}\widehat{\mathcal{K}}^{-1}(\widehat R-\lambda\widehat U_1)^\top x.$

Substituting this relation into (21a) and combining it with (21c) gives

$\left(\lambda\begin{bmatrix} -M & S \\ S^\top & 0 \end{bmatrix} + \lambda^2\begin{bmatrix} K_1 & -T \\ -T^\top & 0 \end{bmatrix} + \begin{bmatrix} \widehat R\widehat{\mathcal{K}}^{-1}\widehat R^\top & 0 \\ 0 & 0 \end{bmatrix} - \lambda\begin{bmatrix} \widehat U_1\widehat{\mathcal{K}}^{-1}\widehat R^\top + \widehat R\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & 0 \\ 0 & 0 \end{bmatrix} + \lambda^2\begin{bmatrix} \widehat U_1\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & 0 \\ 0 & 0 \end{bmatrix}\right)\begin{bmatrix} x \\ z \end{bmatrix} = 0,$

that is,

$\left(\lambda^2\begin{bmatrix} K_1+\widehat U_1\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & -T \\ -T^\top & 0 \end{bmatrix} + \lambda\begin{bmatrix} -M-\widehat U_1\widehat{\mathcal{K}}^{-1}\widehat R^\top-\widehat R\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & S \\ S^\top & 0 \end{bmatrix} + \begin{bmatrix} \widehat R\widehat{\mathcal{K}}^{-1}\widehat R^\top & 0 \\ 0 & 0 \end{bmatrix}\right)\begin{bmatrix} x \\ z \end{bmatrix} = 0.$  (22)

As in Theorem 1, the $m$ zero eigenvalues of the GEP (16) have been removed in the QEP (22). It is helpful to keep in mind that (22) was derived with the aim of finding eigenvalues at the right end of the positive real axis, whereas we are interested in locating eigenvalues at the left end of the positive real axis. So we modify (22), reversing the roles of the leading and constant coefficients (which corresponds to changing $\lambda$ back to $\lambda^{-1}$), into

$\left(\lambda^2\begin{bmatrix} \widehat R\widehat{\mathcal{K}}^{-1}\widehat R^\top & 0 \\ 0 & 0 \end{bmatrix} + \lambda\begin{bmatrix} -M-\widehat U_1\widehat{\mathcal{K}}^{-1}\widehat R^\top-\widehat R\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & S \\ S^\top & 0 \end{bmatrix} + \begin{bmatrix} K_1+\widehat U_1\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & -T \\ -T^\top & 0 \end{bmatrix}\right)\begin{bmatrix} x \\ z \end{bmatrix} = 0.$  (23)

The QEP (23) can now be solved by the Jacobi-Davidson method.

4 Efficient quadratic Jacobi-Davidson algorithm

After transforming the GEP to a QEP, our goal now is to find positive eigenvalues of

$Q(\lambda) = \lambda^2A_2 + \lambda A_1 + A_0,$

with $A_2$, $A_1$, $A_0$ given by (13) in Case 1 and by

$A_2 = \begin{bmatrix} \widehat R\widehat{\mathcal{K}}^{-1}\widehat R^\top & 0 \\ 0 & 0 \end{bmatrix}, \quad A_1 = \begin{bmatrix} -M-\widehat U_1\widehat{\mathcal{K}}^{-1}\widehat R^\top-\widehat R\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & S \\ S^\top & 0 \end{bmatrix}, \quad A_0 = \begin{bmatrix} K_1+\widehat U_1\widehat{\mathcal{K}}^{-1}\widehat U_1^\top & -T \\ -T^\top & 0 \end{bmatrix}$

in Case 2. For Case 1, the quadratic polynomial $Q(\lambda)$ is similar to that in [26, 27], so the Jacobi-Davidson algorithm combined with the partial locking scheme developed there can be applied to $Q(\lambda)$ to locate eigenvalues at the left end of the positive real axis.

We will pay more attention to Case 2, since its $Q(\lambda)$ is obtained by modifying sparse matrices with appropriate Householder transforms $Q$. It is important to maintain the sparsity of the matrices for computational efficiency. At each step of the Jacobi-Davidson algorithm for the QEP (23), we have to solve a linear system (with a shift $\theta$) for the correction equation:

$Q(\theta)t := \begin{bmatrix} K_1-\theta M+(\widehat U_1-\theta\widehat R)\widehat{\mathcal{K}}^{-1}(\widehat U_1-\theta\widehat R)^\top & \theta S-T \\ (\theta S-T)^\top & 0 \end{bmatrix}\begin{bmatrix} t_1 \\ t_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_3 \end{bmatrix}.$  (24)

For the sake of computational efficiency, it is not advisable to solve (24) directly.

Here is a trick. Observe that (24) is equivalent to

$\begin{bmatrix} K_1-\theta M & \widehat U_1-\theta\widehat R & \theta S-T \\ (\widehat U_1-\theta\widehat R)^\top & -\widehat{\mathcal{K}} & 0 \\ (\theta S-T)^\top & 0 & 0 \end{bmatrix}\begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ 0 \\ b_3 \end{bmatrix},$

with $t_2 = \widehat{\mathcal{K}}^{-1}(\theta\widehat R-\widehat U_1)^\top t_1$; that is,

$\begin{bmatrix} K_1-\theta M & U_1-\theta R \\ (U_1-\theta R)^\top & -\mathcal{K} \end{bmatrix}\begin{bmatrix} t_1 \\ \tilde t_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ \tilde b_2 \end{bmatrix},$  (25)

where

$\tilde t_2 = Q^\top\begin{bmatrix} t_2 \\ t_3 \end{bmatrix}, \qquad \tilde b_2 = Q^\top\begin{bmatrix} 0 \\ b_3 \end{bmatrix}.$  (26)

Now once $\begin{bmatrix} t_1 \\ \tilde t_2 \end{bmatrix}$ of (25) has been solved with MATLAB in sparse arithmetic, we can recover the solution $\begin{bmatrix} t_1 \\ t_3 \end{bmatrix}$ of (24) using the first formula of (26). Note that

$\begin{bmatrix} t_2 \\ t_3 \end{bmatrix} = Q\tilde t_2 = (I-WY^\top)\tilde t_2 = \tilde t_2 - WY^\top\tilde t_2.$

Therefore, we only need to multiply $\tilde t_2$ by a low-rank matrix to obtain $t_3$.
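Under the assumption that the blocks above are available as MATLAB matrices, the following lines are a minimal sketch of ours of this trick: solve the sparse augmented system (25) and recover $t_3$ through (26). The function name and argument list are illustrative, not the authors' code.

% Minimal sketch: solve the correction system (24) via the sparse augmented
% form (25)-(26). Q = I - W*Y' as in (18); k = size(W,2) null-space vectors.
function [t1, t3] = solve_correction_case2(theta, b1, b3, K1, M, U1, R, Kcal, W, Y)
  n    = size(K1, 1);
  Cbl  = U1 - theta * R;                           % n-by-(n+m) coupling block
  Aug  = [K1 - theta * M,  Cbl;                    % sparse augmented matrix of (25)
          Cbl.',          -Kcal];
  rhs2 = [zeros(size(Kcal,1) - size(b3,1), 1); b3];
  rhs2 = rhs2 - Y * (W.' * rhs2);                  % btilde2 = Q' * [0; b3]
  sol  = Aug \ [b1; rhs2];                         % one sparse solve
  t1   = sol(1:n);
  t2t  = sol(n+1:end);                             % ttilde2
  t23  = t2t - W * (Y.' * t2t);                    % [t2; t3] = Q * ttilde2
  t3   = t23(end - size(W,2) + 1:end);             % last k entries give t3
end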

Let the matrix $V_k$ have been selected in the Jacobi-Davidson algorithm. We then define

$Q_k(\lambda) = \lambda^2V_k^\top A_2V_k + \lambda V_k^\top A_1V_k + V_k^\top A_0V_k := \lambda^2A_2^{(k)} + \lambda A_1^{(k)} + A_0^{(k)}.$

Select a suitable eigenpair $(\hat\theta,\hat s)$ with $Q_k(\hat\theta)\hat s = 0$ and $\|\hat s\|_2 = 1$. We now compute

$u_k = V_k\hat s, \qquad r_k = Q(\hat\theta)u_k, \qquad p_k = Q'(\hat\theta)u_k$

and solve the correction equation for the search direction $t$:

$\left(I - \dfrac{p_ku_k^\top}{u_k^\top p_k}\right)Q(\hat\theta)\left(I - u_ku_k^\top\right)t = -r_k \quad\text{with } t\perp u_k.$  (27)

It turns out that the solution of (27) can be obtained by computing

$t_k = -Q(\hat\theta)^{-1}r_k + \eta_kQ(\hat\theta)^{-1}p_k \quad\text{with}\quad \eta_k = \dfrac{u_k^\top Q(\hat\theta)^{-1}r_k}{u_k^\top Q(\hat\theta)^{-1}p_k}.$  (28)

In (28), the vectors $Q(\hat\theta)^{-1}r_k$ and $Q(\hat\theta)^{-1}p_k$ can be computed efficiently by solving the linear system (24) via (25) and (26).
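Given a routine that applies $Q(\hat\theta)^{-1}$ (for instance through the augmented-system solve sketched above), the correction direction (28) costs two such solves. A minimal sketch with assumed names:

% Minimal sketch of (28): one Jacobi-Davidson correction direction.
% solveQ is a function handle applying Q(theta_hat)^{-1}, e.g. via (25)-(26).
z_r = solveQ(r_k);                       % Q(theta_hat)^{-1} * r_k
z_p = solveQ(p_k);                       % Q(theta_hat)^{-1} * p_k
eta = (u_k.' * z_r) / (u_k.' * z_p);
t_k = -z_r + eta * z_p;                  % by construction u_k' * t_k = 0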

We now use the solution $t_k$ of the correction equation (27) to expand the matrix $V_k$. Let

$\tilde v_{k+1} = t_k - V_k(V_k^\top t_k) \quad\text{and}\quad v_{k+1} = \dfrac{\tilde v_{k+1}}{\|\tilde v_{k+1}\|_2};$

then $v_{k+1}\perp V_k$ and $\|v_{k+1}\|_2 = 1$. Define $V_{k+1} = \begin{bmatrix} V_k & v_{k+1} \end{bmatrix}$ and update

$Q_{k+1}(\lambda) = V_{k+1}^\top Q(\lambda)V_{k+1} = \lambda^2A_2^{(k+1)} + \lambda A_1^{(k+1)} + A_0^{(k+1)},$

where $A_j^{(k+1)}$, $j=0,1,2$, are symmetric matrices whose updated last columns are computed by

last column of $A_j^{(k+1)} = \begin{bmatrix} V_k^\top \\ v_{k+1}^\top \end{bmatrix}A_jv_{k+1}.$
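A minimal MATLAB sketch of ours of this expansion step (assumed names; A0, A1, A2 may also be applied as operators rather than explicit matrices, and Aproj{1:3} hold the current projected coefficients):

% Minimal sketch: expand the search space by t_k and update the projected
% quadratic Q_{k+1}(lambda) = V'*Q(lambda)*V by its new last row/column.
v_new = t_k - V * (V.' * t_k);           % orthogonalize t_k against V
v_new = v_new / norm(v_new);
V     = [V, v_new];
Acoef = {A0, A1, A2};                    % full (or implicitly applied) coefficients
for j = 1:3
  col      = V.' * (Acoef{j} * v_new);   % new last column of the projection
  Aproj{j} = [Aproj{j}, col(1:end-1); col.'];   % symmetric bordered update
end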

Finally, in order to compute a large number of positive eigenvalues successively, partial deflation and partial locking schemes are necessary. We refer the reader to [26] for detailed descriptions of the methods.

For the sake of completeness, we briefly describe the partial deflation scheme. Let $Y_0 = A_0X_1\Lambda_1^{-1}$ and $Y_2 = A_2X_1$, and define

$\tilde A_2 = A_2 - Y_2\Theta_1Y_2^\top,$
$\tilde A_1 = A_1 + Y_2\Theta_1Y_0^\top + Y_0\Theta_1Y_2^\top,$
$\tilde A_0 = A_0 - Y_0\Theta_1Y_0^\top,$

with $\Theta_1 := (X_1^\top A_2X_1)^{-1}$, where $(\Lambda_1, X_1)$ is an eigenmatrix pair of $Q(\lambda)$ in (23). Then the QEP after deflation is

$Q_d(\lambda)p \equiv (\lambda^2\tilde A_2 + \lambda\tilde A_1 + \tilde A_0)p = 0.$

Moreover, setting $U = \theta Y_2 - Y_0$, we can solve the correction equation efficiently by solving the linear system

$Q(\theta)Z = \begin{bmatrix} \tilde p & \tilde r & U \end{bmatrix},$

where $\tilde r = Q_d(\theta)\tilde u$ and $\tilde p = (2\theta\tilde A_2 + \tilde A_1)\tilde u$. We then define the correction vector $t_d$ for the deflated QEP by

$t_d = \dfrac{\tilde u^\top\tilde t_2}{\tilde u^\top\tilde t_1}\,\tilde t_1 - \tilde t_2,$  (29)

where

$\begin{bmatrix} \tilde t_1 & \tilde t_2 \end{bmatrix} = Z(:,1\!:\!2) + Z(:,3\!:\!r+2)\big(\Theta_1^{-1} - U^\top Z(:,3\!:\!r+2)\big)^{-1}U^\top Z(:,1\!:\!2)$

and $r = \operatorname{rank} V_1$.

The algorithms for our methods are given below.

Algorithm 1 Quadratic Jacobi-Davidson with partial locking for Case 1
Input: Coefficient matrices $A_0$, $A_1$, $A_2$, locking number $l$, and an initial orthonormal matrix $V$.
Output: The desired eigenpairs $(\lambda_j, x_j)$ for $j=1,\cdots,p$ and the desired eigenvector of the GEP in Case 1.
1: Set $V_c = [\;]$
2: for $j = 1,\cdots,p$ do
3:   while (the desired eigenpair is not convergent) do
4:     Compute the eigenpair $(\theta, s)$ of $V^\top Q(\theta)Vs = 0$ with $\|s\| = 1$;
5:     Solve the correction equation (see [26, 27]);
6:     Orthogonalize $t$ against $V$; set $V = [V, t/\|t\|]$;
7:   end while
8:   Set $\lambda_j = \theta$ and $x_j = s$;
9:   if $j \le l$ then
10:    Orthogonalize $x_j$ against $V_c$; set $V_c = [V_c, x_j/\|x_j\|]$;
11:  else
12:    Orthogonalize $x_j$ against $V_c(:, 2:l)$; set $V_c = [V_c(:, 2:l), x_j/\|x_j\|]$;
13:  end if
14:  Set $V \equiv [V_c, V_0]$ with $V_0^\top V_0 = I$;
15: end for
16: Evaluate $x = \lambda_1^{-1}\mathcal{M}^{-1}(S - \lambda_1T_1)^\top x_1$;
17: Set $z = [x(1\!:\!n);\; x(1\!:\!n) - x_1;\; x(n+1\!:\!\mathrm{end})]$


Algorithm 2 Quadratic Jacobi-Davidson with partial locking or partial deflation for Case 2
Input: Mass matrices $M$, $F$, stiffness matrices $K_0$, $K_1$, $E_0$, $E_1$, $G$, and locking or deflation number $l$.
Output: The desired eigenpairs $(\lambda_j, x_j)$, $j=1,\cdots,p$, and the associated eigenvector of the GEP in Case 2.
1: Set $\mathcal{K} = \begin{bmatrix} K & E \\ E^\top & G \end{bmatrix}$;
2: Compute $N_K$ by the shift-and-invert Lanczos method; set $N = \mathrm{rank}(N_K)$;
3: for $i = 1,\cdots,N$ do
4:   Set $g = N_K(1\!:\!\mathrm{end}-i+1,\, i)$;
5:   Compute $q = g - \mathrm{sgn}(g(\mathrm{end}))\|g\|e_{(n+m)-i+1}$; set $W(1\!:\!\mathrm{length}(q),\, i) = q$;
6:   Compute $N_K(1\!:\!\mathrm{end}-i+1,\, i\!:\!\mathrm{end}) = N_K(1\!:\!\mathrm{end}-i+1,\, i\!:\!\mathrm{end}) - \dfrac{2qq^\top}{q^\top q}N_K(1\!:\!\mathrm{end}-i+1,\, i\!:\!\mathrm{end})$;
7: end for
8: Compute $Q = \prod_{i=1}^{N}\left(I - \dfrac{2W(:,i)W(:,i)^\top}{W(:,i)^\top W(:,i)}\right)$;
9: Compute the additional matrix $Y$ satisfying (18);
10: Compute the coefficient matrices $A_0$, $A_1$ and $A_2$ according to (23);
11: for $j = 1,\cdots,p$ do
12:   Apply Jacobi-Davidson with the partial locking scheme or the partial deflation scheme based on the correction equation (25) or (29), respectively;
13: end for
14: Set $D = A - \lambda_1B$ with $A$, $B$ defined in (15);
15: Compute the associated eigenvector $z$ by shift-and-invert Lanczos on $D$

5 Numerical results

In our numerical simulations, we consider seven domains. Standard triangular meshes with equal mesh length of about 0.04 are generated for these domains. The numbers of interior points and of boundary points are denoted by $a$ and $b$, respectively, and are shown in the table of meshes below. Since we are dealing with the elasticity system in the plane, the dimensions of the stiffness and mass matrices in Table 1 are doubled, i.e., $n = 2a$ and $m = 2b$. All computations were carried out in MATLAB. The hardware configurations used were two servers: one equipped with Intel Quad-Core Xeon X5560 2.80GHz CPUs, 58GB of main memory, and MATLAB R2014a, and the other with Intel 12-Core Xeon E5-2697 2.70GHz CPUs, 256GB of main memory, and MATLAB R2017b.


Domain (interior points $a$, boundary points $b$):
Disc (143841, 1270) | Ellipse (71590, 970) | Triangle (79800, 1200) | Cone (108203, 1235)
Dumbbell (121829, 1556) | Square (140848, 1400) | Mouth (95885, 1221)

Table 2: Meshes for the different domains.

We consider the isotropic elasticity (3) in the simulations. In our numerical results, we also show the eigenvector corresponding to the first positive eigenvalue for both Case 1 and Case 2. The purpose of showing these results is to explore the behaviors of the interior transmission eigenfunctions near corners for elastic waves. For the classical acoustic waves, it was proved theoretically in [4] and demonstrated numerically in [3] that the interior transmission eigenfunction vanishes near a corner with interior angle less than $\pi$ and is localized near a corner with interior angle greater than $\pi$. Our numerical results indicate that a similar behavior occurs for elastic waves.

Case 1. In the simulations, we choose $\mu_0=\mu_1=5$, $\lambda_0=\lambda_1=5$, $\rho_0=50$, $\rho_1=1$. We apply Algorithm 1 to this case and locate the first 500 positive eigenvalues (i.e., $p=500$) and the eigenvector associated with the first positive eigenvalue. The results are shown in Figure 1.


Figure 1: The distribution of the 500 positive eigenvalues for all domains in Case 1. The $y$-coordinate represents the logarithm of the eigenvalue. The histograms show the distribution of the 500 positive eigenvalues in 12 subintervals of length 0.5 from 0 to 6.

Furthermore, we zoom in on the distribution of positive eigenvalues to show the first 10 positive eigenvalues for all domains in Figure 2. We also list the first positive eigenvalue for each domain in Table 3.


Figure 2: The distributions of the first 10 eigenvalues for the different domains in Case 1 (legend: Circle, Ellipse, Triangle, Cone, Square, Dumbbell, Mouth).

Table 3: Values of the first positive eigenvalue for the different domains: 0.041644, 0.094985, 0.107700, 0.067220, 0.098504, 0.049478, 0.078684.

The graphs of the eigenvectors corresponding to the first positive eigenvalues for the different domains are shown in Figures 3-5. As pointed out above, the elastic waves in Case 1 correspond to the acoustic waves considered in [4] and [3]. Our numerical results show that the interior transmission eigenfunctions exhibit similar behaviors near the singular points, namely, they vanish near a point where the interior angle is less than $\pi$ and are localized near a point where the interior angle is greater than $\pi$.

Figure 3: The interior transmission eigenfunctions associated with the first positive eigenvalues for the disc and ellipse domains. The upper left, upper right, lower left, and lower right subfigures correspond to $u_1$, $u_2$, $v_1$, and $v_2$, respectively.


Figure 4: The interior transmission eigenfunctions associated with the first positive eigenvalues for the triangle, cone, and square domains. Subfigures in the first column are the eigenfunctions; the upper left, upper right, lower left, and lower right subfigures correspond to $u_1$, $u_2$, $v_1$, and $v_2$, respectively. Subfigures in the second and third columns are 3D plots of $u_1$ and $u_2$, respectively. We can observe that the values of $u_1$ and $u_2$ are almost zero near the corners.


Figure 5: The interior transmission eigenfunctions associated with the first positive eigenvalues for the dumbbell and mouth domains. Subfigures in the first column are the eigenfunctions; the upper left, upper right, lower left, and lower right subfigures correspond to $u_1$, $u_2$, $v_1$, and $v_2$, respectively. Subfigures in the second column are 3D plots of $u_1$ for the two domains. Note that the interior angles at the singular points are greater than $\pi$; the localized behaviors near the singular points are clearly observed.


Case 2. For this case, we choose $\mu_0=200$, $\mu_1=2$, $\lambda_0=200$, $\lambda_1=1$, and $\rho_0=\rho_1=50$. The results are shown in Figure 6.

Figure 6: The distribution of the 500 positive eigenvalues for all domains in Case 2. The $y$-coordinate represents the logarithm of the eigenvalue. The histograms show the distribution of the 500 positive eigenvalues in 15 subintervals of length 0.1 from 0 to 1.5.

Again, we draw the first 10 positive eigenvalues for all domains in Figure 7. The first positive eigenvalue for each domain is given in Table 4.

As in Case 1, the graphs of the eigenvectors corresponding to the first positive eigenvalues for the different domains are shown in Figures 8-9. Even for acoustic wave scattering with jumps in the leading order, the behaviors of interior transmission eigenfunctions near singular points have never been studied before. Our numerical simulations demonstrate some possible phenomena of the interior transmission eigenfunctions near the singular points. Intuitively, since the perturbations occur at the leading order, it seems more justified to consider $\nabla u$. In particular, in Figure 9, we can observe the localization behavior of $\nabla u$ in the dumbbell and mouth domains.


Figure 7: The distribution of the first 10 eigenvalues for the different domains in Case 2 (legend: Circle, Ellipse, Triangle, Cone, Square, Dumbbell, Mouth).

Table 4: Values of the first positive eigenvalue for the different domains: 0.011849, 0.019858, 0.021110, 0.016045, 0.013208, 0.010552, 0.018077.

Figure 8: The interior transmission eigenfunctions associated with the first positive eigenvalues for the disc, ellipse, triangle, cone, and square domains. The upper left, upper right, lower left, and lower right subfigures correspond to $u_1$, $u_2$, $v_1$, and $v_2$, respectively. We can observe that the interior transmission eigenfunctions $u$, $v$ in the triangle, cone, and square domains are almost flat near the singular points.
