Carleman estimate for second order elliptic equations with Lipschitz leading coefficients and jumps at an interface

M. Di Cristo, E. Francini, C.-L. Lin, S. Vessella, J.-N. Wang

Abstract

In this paper we prove a Carleman estimate for second order elliptic equations with general anisotropic Lipschitz coefficients having a jump at an interface. Our approach does not rely on the techniques of microlocal analysis. We make use of elementary methods, so we are able to impose almost optimal assumptions on the coefficients and, consequently, on the interface. It is possible that the framework can be applied to other cases.

Contents

1 Introduction
2 Notations and statement of the main theorem
3 Step 1 - A Carleman estimate for leading coefficients depending on y only
3.1 Fourier transform of the conjugate operator and its factorization
3.2 Derivation of the Carleman estimate for the simple case
3.3 Proof of Proposition 3.1
4 Step 2 - The Carleman estimate for general coefficients
4.1 Partition of unity and auxiliary results
4.2 Estimate of the left hand side of the Carleman estimate, I
4.3 Estimate of the left hand side of the Carleman estimate, II

Politecnico di Milano, Italy. Email: michele.dicristo@polimi.it
Università di Firenze, Italy. Email: elisa.francini@unifi.it
National Cheng Kung University, Taiwan. Email: cllin2@mail.ncku.edu.tw
Università di Firenze, Italy. Email: sergio.vessella@unifi.it
National Taiwan University, Taiwan. Email: jnwang@math.ntu.edu.tw


1 Introduction

Since T. Carleman's pioneering work [Car], Carleman estimates have been indispensable tools for proving the unique continuation property for partial differential equations.

Recently, Carleman estimates have been successfully applied to study inverse problems; see for instance [Is], [KSU]. Most Carleman estimates are proved under the assumption that the leading coefficients possess certain regularity. For example, for general second order elliptic operators, Carleman estimates were proved when the leading coefficients are at least Lipschitz [H], [H3]. The restriction of regularity on the leading coefficients also reflects the fact that the unique continuation may fail if the coefficients are only Hölder continuous in R^n with n ≥ 3 (see the examples constructed by Pliś [P] and [M]). In R², the unique continuation property holds for W^{1,2} solutions of second order elliptic equations in either non-divergence or divergence form with essentially bounded coefficients [BJS], [BN], [AM], [S]. It should be noted that the unique continuation property for second order elliptic equations in the plane with essentially bounded coefficients is deduced from the theory of quasiregular mappings. No Carleman estimates are derived in this situation.

From the discussion above, Carleman estimates for second order elliptic operators with general discontinuous coefficients are not likely to hold. However, when the discontinuities occur as jumps at an interface with homogeneous or non-homogeneous transmission conditions, one can still derive useful Carleman estimates. This is the main theme of the paper. There are some excellent works on this subject; we mention several closely related papers, including Le Rousseau-Robbiano [LR1], [LR2] and Le Rousseau-Lerner [LL]. For the development of the problem and other related results, we refer the reader to the papers cited above and the references therein. Our result is close to that of [LL], where the elliptic coefficient is a general anisotropic matrix-valued function. To put our paper in perspective, we would like to point out that the interface is assumed to be a C^∞ hypersurface in [LL] and the coefficients are C^∞ away from the interface. Here we prove a Carleman estimate near a flat interface, from which it is easy to obtain, under a standard change of coordinates, a Carleman estimate for operators with leading coefficients which have a jump discontinuity at a C^{1,1} interface and are Lipschitz continuous apart from such an interface (see Theorem 2.1 for a precise statement). The approach in [LL] is close to Calderón's seminal work on the uniqueness of the Cauchy problem [Cal] as an application of singular integral operators (or pseudo-differential operators). Therefore, the regularity assumptions of [LL] are due to the use of the calculus of pseudo-differential operators and microlocal analysis techniques.

The aim here is to derive the Carleman estimate using more elementary methods. Our approach does not rely on the techniques of microlocal analysis, but rather on the straightforward Fourier transform. Thus we are able to relax the regularity assumptions on the coefficients and the interface. We first consider the simple case where the coefficients depend only on the normal variable. Taking advantage of the simple structure of the coefficients, we are able to derive a Carleman estimate by elementary computations with the help of the Fourier transform in the tangential variables. To handle general coefficients, we rely on a suitable partition of unity. In Section 2, after Theorem 2.1, we give a more detailed outline of our proof. Our approach is very general and is likely to be extendable to other equations or systems.

2 Notations and statement of the main theorem

Define H± = χ_{R^n_±}, where R^n_± = {(x, y) ∈ R^{n−1} × R : y ≷ 0} and χ_{R^n_±} is the characteristic function of R^n_±. Let us stress that for a vector (x, y) of R^n we mean x = (x_1, . . . , x_{n−1}) ∈ R^{n−1} and y ∈ R. In places we will use equivalently the symbols D, ∇, ∂ to denote the gradient of a function, and we will add the index x or y to denote the gradient in R^{n−1} and the derivative with respect to y, respectively.

Let u± ∈ C^∞(R^n). We define

u = H_+u_+ + H_−u_− = Σ_± H±u±,

where hereafter we denote Σ_± a± = a_+ + a_−, and, for (x, y) ∈ R^{n−1} × R,

L(x, y, ∂)u := Σ_± H± div_{x,y}(A±(x, y)∇_{x,y}u±),   (2.1)

where

A±(x, y) = {a^±_{ij}(x, y)}^n_{i,j=1},  x ∈ R^{n−1}, y ∈ R,   (2.2)

is a Lipschitz symmetric matrix-valued function satisfying, for given constants λ0 ∈ (0, 1], M0 > 0,

λ0|z|² ≤ A±(x, y)z · z ≤ λ0^{-1}|z|²,  ∀ (x, y) ∈ R^n, ∀ z ∈ R^n,   (2.3)

and

|A±(x′, y′) − A±(x, y)| ≤ M0(|x′ − x| + |y′ − y|).   (2.4)

We write

h_0(x) := u_+(x, 0) − u_−(x, 0),  ∀ x ∈ R^{n−1},   (2.5)

h_1(x) := A_+(x, 0)∇_{x,y}u_+(x, 0) · ν − A_−(x, 0)∇_{x,y}u_−(x, 0) · ν,  ∀ x ∈ R^{n−1},   (2.6)

where ν = −e_n.

Let us now introduce the weight function. Let ϕ be

ϕ(y) = { ϕ_+(y) := α_+ y + βy²/2,  y ≥ 0,
       { ϕ_−(y) := α_− y + βy²/2,  y < 0,   (2.7)

where α_+, α_− and β are positive numbers which will be determined later. In what follows we denote by ϕ_+ and ϕ_− the restrictions of the weight function ϕ to [0, +∞) and to (−∞, 0), respectively. We use similar notation for any other weight functions.

For any ε > 0 let

ψ_ε(x, y) := ϕ(y) − (ε/2)|x|²,

and let

φ_δ(x, y) := ψ_δ(δ^{-1}x, δ^{-1}y).   (2.8)

For a function h ∈ L²(R^n), we define

ĥ(ξ, y) = ∫_{R^{n−1}} h(x, y) e^{−ix·ξ} dx,  ξ ∈ R^{n−1}.

As usual we denote by H^{1/2}(R^{n−1}) the space of functions f ∈ L²(R^{n−1}) satisfying

∫_{R^{n−1}} |ξ| |f̂(ξ)|² dξ < ∞,

with the norm

||f||²_{H^{1/2}(R^{n−1})} = ∫_{R^{n−1}} (1 + |ξ|²)^{1/2} |f̂(ξ)|² dξ.   (2.9)

Moreover we define

[f]_{1/2,R^{n−1}} = ( ∫_{R^{n−1}} ∫_{R^{n−1}} |f(x) − f(y)|² / |x − y|^n dy dx )^{1/2},

and recall that there is a positive constant C, depending only on n, such that

C^{-1} ∫_{R^{n−1}} |ξ| |f̂(ξ)|² dξ ≤ [f]²_{1/2,R^{n−1}} ≤ C ∫_{R^{n−1}} |ξ| |f̂(ξ)|² dξ,

so that the norm (2.9) is equivalent to the norm ||f||_{L²(R^{n−1})} + [f]_{1/2,R^{n−1}}. We use the letters C, C0, C1, · · · to denote constants. The value of the constants may change from line to line, but it is always greater than 1.

We will denote by B_r(x) the (n − 1)-ball centered at x ∈ R^{n−1} with radius r > 0. Whenever x = 0 we denote B_r = B_r(0).

Theorem 2.1 Let u and A±(x, y) satisfy (2.1)-(2.6). There exist α_+, α_−, β, δ_0, r_0 and C, depending on λ0, M0, such that if δ ≤ δ_0 and τ ≥ C, then

Σ_± Σ_{k=0}^{2} τ^{3−2k} ∫_{R^n_±} |D^k u±|² e^{2τφ_{δ,±}(x,y)} dx dy + Σ_± Σ_{k=0}^{1} τ^{3−2k} ∫_{R^{n−1}} |D^k u±(x, 0)|² e^{2τφ_δ(x,0)} dx
+ Σ_± τ² [e^{τφ_δ(·,0)} u±(·, 0)]²_{1/2,R^{n−1}} + Σ_± [D(e^{τφ_{δ,±}} u±)(·, 0)]²_{1/2,R^{n−1}}
≤ C ( Σ_± ∫_{R^n_±} |L(x, y, ∂)(u±)|² e^{2τφ_{δ,±}(x,y)} dx dy + [e^{τφ_δ(·,0)} h_1]²_{1/2,R^{n−1}}
+ [∇_x(e^{τφ_δ} h_0)(·, 0)]²_{1/2,R^{n−1}} + τ³ ∫_{R^{n−1}} |h_0|² e^{2τφ_δ(x,0)} dx + τ ∫_{R^{n−1}} |h_1|² e^{2τφ_δ(x,0)} dx ),   (2.10)

where u = H_+u_+ + H_−u_−, u± ∈ C^∞(R^n), supp u ⊂ B_{δ/2} × [−δr_0, δr_0], and φ_δ is given by (2.8).

Remark 2.2 Estimate (2.10) is a local Carleman estimate near the flat interface {y = 0}. As mentioned in the Introduction, by an easy change of coordinates one can derive a local Carleman estimate near a C^{1,1} interface from (2.10).

Remark 2.3 Let us point out that the level sets

{(x, y) ∈ B_{δ/2} × (−δr_0, δr_0) : φ_δ(x, y) = t}

have approximately the shape of paraboloids and, in a neighborhood of (0, 0), ∂_y φ_δ > 0, so that the gradient of φ_δ points into the half space R^n_+. These features are crucial to derive from the Carleman estimate (2.10) a Hölder type smallness propagation estimate across the interface {(x, 0) : x ∈ R^{n−1}} for weak solutions to the transmission problem

L(x, y, ∂)u = Σ_± H±(b± · ∇_{x,y}u± + c± u±),
u_+(x, 0) − u_−(x, 0) = 0,
A_+(x, 0)∇_{x,y}u_+(x, 0) · ν − A_−(x, 0)∇_{x,y}u_−(x, 0) · ν = 0,   (2.11)

where b± ∈ L^∞(R^n, R^n) and c± ∈ L^∞(R^n). More precisely, if u is known up to some error of observation in an open subset of R^n_+, then we can find a Hölder control of u in a bounded subset of R^n. For more details about this type of estimate we refer to [LR1, Sect. 3.1].
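Unpacking definitions (2.7) and (2.8) makes these two geometric features explicit; the following short computation is our addition, not part of the original text:

```latex
% From (2.8) with \varepsilon = \delta:
\phi_\delta(x,y) = \varphi(\delta^{-1}y) - \tfrac{\delta}{2}\,|\delta^{-1}x|^2
                 = \frac{\alpha_\pm}{\delta}\,y + \frac{\beta}{2\delta^2}\,y^2 - \frac{1}{2\delta}\,|x|^2
\qquad (y \gtrless 0).
% Hence \partial_y\phi_\delta(0,0^\pm) = \alpha_\pm/\delta > 0, and near (0,0) the level set
% \{\phi_\delta = t\} is approximately the paraboloid
% y \approx \frac{\delta}{\alpha_\pm}\Bigl(t + \frac{1}{2\delta}|x|^2\Bigr).
```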

The proof of Theorem 2.1 is divided into two steps as follows.

Step 1. We first consider the particular case of leading matrices (2.2) independent of x, and we prove (Theorem 3.1), for the corresponding operator L(y, ∂), a Carleman estimate with the weight function φ(x, y) = ϕ(y) + sγ · x, where s is a suitably small number and γ is an arbitrary unit vector of R^{n−1}. The features of the leading matrices and of the weight function φ allow us to factorize the Fourier transform of the conjugate of the operator L(y, ∂) with respect to φ, so that we can follow, roughly speaking, at an elementary level the strategy of [LL] for the operator L(y, ∂). Nevertheless, such an estimate has only a preparatory character for the proof of Theorem 2.1 because, due to the particular feature of the weight φ (i.e., linear with respect to x), the Carleman estimate obtained in Theorem 3.1 cannot yield any kind of significant smallness propagation estimate across the interface.

Step 2. In the second step we adapt the method described in [Tr, Ch. 4.1] to an operator with a jump discontinuity. More precisely, we localize the operator (2.1) with respect to the x variable and linearize the weight function, again with respect to the x variable, and from the Carleman estimate obtained in Step 1 we derive some local Carleman estimates. Subsequently, we put together such local estimates by means of the partition of unity introduced in [Tr].


3 Step 1 - A Carleman estimate for leading coefficients depending on y only

In this section we consider the simple case of leading matrices (2.2) independent of x. Moreover, the weight function that we consider is linear with respect to the x variable, so that, as explained above, the Carleman estimates we get here are only preliminary to the one that we will get in the general case.

Assume that

A±(y) = {a^±_{ij}(y)}^n_{i,j=1}   (3.1)

are symmetric matrix-valued functions satisfying (2.3) and (2.4), i.e.,

λ0|z|² ≤ A±(y)z · z ≤ λ0^{-1}|z|²,  ∀ y ∈ R, ∀ z ∈ R^n,   (3.2)

|A±(y′) − A±(y′′)| ≤ M0|y′ − y′′|,  ∀ y′, y′′ ∈ R.   (3.3)

From (3.2), we have

a^±_{nn}(y) ≥ λ0,  ∀ y ∈ R.   (3.4)

In the present case the differential operator (2.1) becomes

L(y, ∂)u := Σ_± H± div_{x,y}(A±(y)∇_{x,y}u±),   (3.5)

where u = Σ_± H±u±, u± ∈ C^∞(R^n). We also set, for any s ∈ [0, 1] and γ ∈ R^{n−1} with |γ| ≤ 1,

φ(x, y) = ϕ(y) + sγ · x = H_+φ_+ + H_−φ_−,   (3.6)

where ϕ is defined in (2.7). Our aim here is to prove the following Carleman estimate.

Theorem 3.1 There exist τ_0, s_0, r_0, C and β_0, depending only on λ0, M0, such that for τ ≥ τ_0, 0 < s ≤ s_0 < 1, and for every w = Σ_± H±w± with supp w ⊂ B_1 × [−r_0, r_0], we have

Σ_± Σ_{k=0}^{2} τ^{3−2k} ∫_{R^n_±} |D^k w±|² e^{2τφ_±} dx dy
+ Σ_± Σ_{k=0}^{1} τ^{3−2k} ∫_{R^{n−1}} |D^k w(x, 0)|² e^{2τφ(x,0)} dx + Σ_± τ² [(e^{τφ} w±)(·, 0)]²_{1/2,R^{n−1}}
+ Σ_± [∂_y(e^{τφ_±} w±)(·, 0)]²_{1/2,R^{n−1}} + Σ_± [∇_x(e^{τφ} w±)(·, 0)]²_{1/2,R^{n−1}}
≤ C ( Σ_± ∫_{R^n_±} |L(y, ∂)w|² e^{2τφ_±} dx dy + [∇_x(e^{τφ(·,0)}(w_+(·, 0) − w_−(·, 0)))]²_{1/2,R^{n−1}}
+ [e^{τφ(·,0)}(A_+(0)∇_{x,y}w_+(x, 0) · ν − A_−(0)∇_{x,y}w_−(x, 0) · ν)]²_{1/2,R^{n−1}}
+ τ ∫_{R^{n−1}} e^{2τφ(x,0)} |A_+(0)∇_{x,y}w_+(x, 0) · ν − A_−(0)∇_{x,y}w_−(x, 0) · ν|² dx
+ τ³ ∫_{R^{n−1}} e^{2τφ(x,0)} |w_+(x, 0) − w_−(x, 0)|² dx ),   (3.7)

with β ≥ β_0 and α± properly chosen.

3.1 Fourier transform of the conjugate operator and its factorization

To proceed further, we introduce some operators and derive their properties. We use the notation ∂_j = ∂_{x_j} for 1 ≤ j ≤ n − 1. Let us denote by B±(y) = {b^±_{jk}(y)}^{n−1}_{j,k=1} the symmetric matrix satisfying, for z = (z_1, · · · , z_{n−1}, z_n) =: (z′, z_n),

B±(y)z′ · z′ = A±(y)z · z |_{z_n = −Σ_{j=1}^{n−1} a^±_{nj}(y)z_j / a^±_{nn}(y)}.   (3.8)

In view of (3.2) we have

λ1|z′|² ≤ B±(y)z′ · z′ ≤ λ1^{-1}|z′|²,  ∀ y ∈ R, ∀ z′ ∈ R^{n−1},   (3.9)

where λ1 ≤ λ0 depends only on λ0.

Notice that

b^±_{jk}(y) = a^±_{jk}(y) − a^±_{nj}(y)a^±_{nk}(y) / a^±_{nn}(y),  j, k = 1, · · · , n − 1.   (3.10)

We define the operator

T±(y, ∂_x)u± := Σ_{j=1}^{n−1} (a^±_{nj}(y)/a^±_{nn}(y)) ∂_j u±.   (3.11)
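As a sanity check (our addition, not in the original): substituting the constrained value of z_n into A± z · z shows that B± is the Schur complement of a^±_{nn} in A±, which is exactly (3.10); the same bookkeeping, using that the coefficients depend on y only, also gives the factorization (3.12) below.

```latex
% Write S = \sum_{j<n} a_{nj} z_j and z_n = -S/a_{nn} (the \pm signs are suppressed):
A z\cdot z = \sum_{j,k<n} a_{jk} z_j z_k + 2 z_n S + a_{nn} z_n^2
           = \sum_{j,k<n} a_{jk} z_j z_k - \frac{S^2}{a_{nn}}
           = \sum_{j,k<n} \Bigl(a_{jk} - \frac{a_{nj}a_{nk}}{a_{nn}}\Bigr) z_j z_k ,
% which is (3.10). For (3.12), with T = \sum_j (a_{nj}/a_{nn})\partial_j and
% coefficients independent of x, expand
(\partial_y + T)\,a_{nn}(\partial_y + T)u
 = \partial_y(a_{nn}\partial_y u) + \sum_j \partial_y(a_{nj}\partial_j u)
 + \sum_j a_{nj}\partial_j\partial_y u
 + \sum_{j,k} \frac{a_{nj}a_{nk}}{a_{nn}}\,\partial_j\partial_k u ;
% adding \operatorname{div}_x(B\nabla_x u) = \sum_{j,k}(a_{jk}-a_{nj}a_{nk}/a_{nn})\partial_j\partial_k u
% restores \operatorname{div}_{x,y}(A\nabla_{x,y}u).
```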


It is easy to show, by direct calculations ([LL]), that

div_{x,y}(A±(y)∇_{x,y}u±) = (∂_y + T±) a^±_{nn}(y) (∂_y + T±)u± + div_x(B±(y)∇_x u±).   (3.12)

Let w = Σ_± H±w±, where w± ∈ C^∞(R^n), and define

θ_0(x) := w_+(x, 0) − w_−(x, 0) for x ∈ R^{n−1},   (3.13)

θ_1(x) := A_+(0)∇_{x,y}w_+(x, 0) · ν − A_−(0)∇_{x,y}w_−(x, 0) · ν for x ∈ R^{n−1},   (3.14)

where ν = −e_n. By straightforward calculations, we get

a^+_{nn}(y)(∂_y + T_+(y, ∂_x))w_+(x, y)|_{y=0} − a^−_{nn}(y)(∂_y + T_−(y, ∂_x))w_−(x, y)|_{y=0} = −θ_1(x).   (3.15)

In order to derive the Carleman estimate (3.7), we investigate the conjugate operator of L(y, ∂) with e^{τφ}, for φ given by (3.6). Let v = e^{τφ}w and ṽ = e^{−τsγ·x}v; then we have

w = e−τ φv =X

±

H±e−τ φ±v± =X

±

H±e−τ ϕ±±

and therefore

eτ φL(y, ∂)(e−τ φv) = eτ sγ·xeτ ϕL(y, ∂)(e−τ ϕ˜v).

It follows from (3.12) that

e^{τϕ}L(y, ∂)(e^{−τϕ}ṽ) = Σ_± H± (∂_y − τϕ′_± + T±) a^±_{nn}(y) (∂_y − τϕ′_± + T±)ṽ± + Σ_± H± div_x(B±(y)∇_x ṽ±),

which leads to

e^{τφ}L(y, ∂)(e^{−τφ}v) = e^{τsγ·x} e^{τϕ} L(y, ∂)(e^{−τϕ}ṽ)
= e^{τsγ·x} Σ_± H± (∂_y − τϕ′_± + T±) a^±_{nn}(y) (∂_y − τϕ′_± + T±)(e^{−τsγ·x}v±)
+ e^{τsγ·x} Σ_± H± div_x(B±(y)∇_x(e^{−τsγ·x}v±)).   (3.16)

By the definition of T±(y, ∂_x), we get that

T±(y, ∂_x)(e^{−τsγ·x}v±) = e^{−τsγ·x} Σ_{j=1}^{n−1} (a^±_{nj}(y)/a^±_{nn}(y)) (∂_j v± − τsγ_j v±) := e^{−τsγ·x} T±(y, ∂_x − τsγ)v±.

To continue the computation, we observe that

e^{τsγ·x}(∂_y − τϕ′_± + T±(y, ∂_x)) a^±_{nn}(y) (∂_y − τϕ′_± + T±(y, ∂_x))(e^{−τsγ·x}v±)
= (∂_y − τϕ′_± + T±(y, ∂_x − τsγ)) a^±_{nn}(y) (∂_y − τϕ′_± + T±(y, ∂_x − τsγ))v±   (3.17)

and

e^{τsγ·x} div_x(B±(y)∇_x(e^{−τsγ·x}v±))
= Σ_{j,k=1}^{n−1} b^±_{jk}(y)∂²_{jk}v± − 2sτ Σ_{j,k=1}^{n−1} b^±_{jk}(y)(∂_j v±)γ_k + s²τ² Σ_{j,k=1}^{n−1} b^±_{jk}(y)γ_j γ_k v±.   (3.18)

Combining (3.16), (3.17) and (3.18) yields

e^{τφ}L(y, ∂)(e^{−τφ}v)
= Σ_± H± (∂_y − τϕ′_± + T±(y, ∂_x − τsγ)) a^±_{nn}(y) (∂_y − τϕ′_± + T±(y, ∂_x − τsγ))v±
+ Σ_± H± [ Σ_{j,k=1}^{n−1} b^±_{jk}(y)∂²_{jk}v± − 2sτ Σ_{j,k=1}^{n−1} b^±_{jk}(y)(∂_j v±)γ_k + s²τ² Σ_{j,k=1}^{n−1} b^±_{jk}(y)γ_j γ_k v± ].   (3.19)

We now focus on the analysis of e^{τφ}L(y, ∂)(e^{−τφ}v). To simplify it, we introduce some notations:

f(x, y) = e^{τφ}L(y, ∂)(e^{−τφ}v),   (3.20)

B±(ξ, γ, y) = Σ_{j,k=1}^{n−1} b^±_{jk}(y)ξ_j γ_k,  ξ ∈ R^{n−1},   (3.21)

ζ±(ξ, y) = (1/a^±_{nn}(y)) [B±(ξ, ξ, y) + 2isτ B±(ξ, γ, y) − s²τ² B±(γ, γ, y)],   (3.22)

and

t±(ξ, y) = Σ_{j=1}^{n−1} (a^±_{nj}(y)/a^±_{nn}(y)) ξ_j.   (3.23)
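To see how these notations enter (an added remark of ours): taking the Fourier transform in x of the tangential part of (3.19) replaces ∂_j by iξ_j, and the result is precisely −a^±_{nn}(y)ζ±(ξ, y)v̂± with ζ± as in (3.22):

```latex
\sum_{j,k} b_{jk}\,\partial^2_{jk}v \;\longmapsto\; -B(\xi,\xi,y)\,\hat v, \qquad
-2s\tau \sum_{j,k} b_{jk}\,(\partial_j v)\,\gamma_k \;\longmapsto\; -2is\tau\,B(\xi,\gamma,y)\,\hat v, \qquad
s^2\tau^2 \sum_{j,k} b_{jk}\,\gamma_j\gamma_k\,v \;\longmapsto\; s^2\tau^2 B(\gamma,\gamma,y)\,\hat v ,
% so the tangential part of (3.19) becomes
% -\bigl[B(\xi,\xi,y) + 2is\tau B(\xi,\gamma,y) - s^2\tau^2 B(\gamma,\gamma,y)\bigr]\hat v
%  = -a_{nn}(y)\,\zeta(\xi,y)\,\hat v .
```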

By (3.19), we have

f̂(ξ, y) = Σ_± H± P± v̂±,   (3.24)

where

P± v̂± := (∂_y − τϕ′_± + it±(ξ + iτsγ, y)) a^±_{nn}(y) (∂_y − τϕ′_± + it±(ξ + iτsγ, y)) v̂± − a^±_{nn}(y)ζ±(ξ, y)v̂±.   (3.25)

Our aim is to estimate f(x, y) or, equivalently, its Fourier transform f̂(ξ, y). In order to do this, we want to factorize the operators P±. For any z = a + ib with (a, b) ≠ (0, 0), we define the square root of z,

√z = √((a + √(a² + b²))/2) + i b/√(2(a + √(a² + b²))).
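As a numerical sanity check (our addition; `paper_sqrt` is our name for the branch just defined), the formula agrees with the principal square root off the negative real axis and always has nonnegative real part:

```python
import cmath
import math

def paper_sqrt(a: float, b: float) -> complex:
    """Square root of z = a + ib as defined in the text:
    real part sqrt((a + |z|)/2), imaginary part b / sqrt(2(a + |z|)).
    Well defined off the negative real axis, where a + |z| > 0."""
    r = math.hypot(a, b)                    # |z| = sqrt(a^2 + b^2)
    re = math.sqrt((a + r) / 2.0)
    im = b / math.sqrt(2.0 * (a + r))
    return complex(re, im)

# Spot checks: same value as the principal branch, Re >= 0, and (sqrt z)^2 = z.
for z in (3 + 4j, 3 - 4j, -1 + 2j, 0.5 - 0.1j, 2 + 0j):
    w = paper_sqrt(z.real, z.imag)
    assert abs(w - cmath.sqrt(z)) < 1e-12
    assert w.real >= 0
    assert abs(w * w - z) < 1e-12
```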


We remind that the square root √z is defined with a cut along the negative real axis, and note that ℜ√z ≥ 0. Thus, extra care is needed to estimate its derivative. Now we define two operators

E± = ∂_y + it±(ξ + iτsγ, y) − (τϕ′_± + √ζ±),   (3.26)

F± = ∂_y + it±(ξ + iτsγ, y) − (τϕ′_± − √ζ±).   (3.27)

With all the definitions given above, we obtain that

P_+ v̂_+ = E_+ a^+_{nn}(y) F_+ v̂_+ − v̂_+ ∂_y(a^+_{nn}(y)√ζ_+),   (3.28)

P_− v̂_− = F_− a^−_{nn}(y) E_− v̂_− + v̂_− ∂_y(a^−_{nn}(y)√ζ_−).   (3.29)

Let us now introduce some other useful notations and estimates that will be used intensively in the sequel. After taking the Fourier transform, the terms on the interface, (3.13) and (3.15), become

η_0(ξ) := v̂_+(ξ, 0) − v̂_−(ξ, 0) = (e^{τφ(·,0)}θ_0)^(ξ)   (3.30)

and

η_1(ξ) := −(e^{τφ(·,0)}θ_1)^(ξ)
= a^+_{nn}(0)[∂_y v̂_+(ξ, 0) − τα_+ v̂_+(ξ, 0) + it_+(ξ + iτsγ, 0)v̂_+(ξ, 0)]
− a^−_{nn}(0)[∂_y v̂_−(ξ, 0) − τα_− v̂_−(ξ, 0) + it_−(ξ + iτsγ, 0)v̂_−(ξ, 0)].   (3.31)

For simplicity, we denote

V±(ξ) := a^±_{nn}(0)[∂_y v̂±(ξ, 0) − τα± v̂±(ξ, 0) + it±(ξ + iτsγ, 0)v̂±(ξ, 0)],   (3.32)

so that

V_+(ξ) − V_−(ξ) = η_1(ξ).   (3.33)

Moreover, we define

m±(ξ, y) := √(B±(ξ, ξ, y)/a^±_{nn}(y)).

From (3.9) we have

λ1|ξ|² ≤ B±(ξ, ξ, y) ≤ λ1^{-1}|ξ|²,  ∀ y ∈ R, ∀ ξ ∈ R^{n−1},   (3.34)

and, from (3.3),

|∂_y B±(ξ, η, y)| ≤ M1|ξ||η|,  ∀ ξ, η ∈ R^{n−1},   (3.35)

where M1 depends only on λ0 and M0. In a similar way, we list here some useful bounds that can be easily obtained from (3.9) and (3.3):

λ2|ξ| ≤ m±(ξ, y) ≤ λ2^{-1}|ξ|,   (3.36)

|∂_y m±(ξ, y)| ≤ M2|ξ|,   (3.37)

|t±(ξ, y)| ≤ λ3^{-1}|ξ|,   (3.38)

|∂_y t±(ξ, y)| ≤ M3|ξ|,   (3.39)

|ζ±(ξ, y)| ≤ (λ0λ1)^{-1}(|ξ|² + s²τ²),   (3.40)

|∂_y ζ±(ξ, y)| ≤ M4(|ξ|² + s²τ²).   (3.41)

Here λ2 = √(λ0λ1) and λ3 depends only on λ0, while M2, M3 and M4 depend only on λ0 and M0.

3.2 Derivation of the Carleman estimate for the simple case

The derivation of the Carleman estimate (3.7) is a simple consequence of the auxiliary Proposition 3.1, stated below and proved in Section 3.3, via the inverse Fourier transform. We first set

L := sup_{ξ∈R^{n−1}\{0}} m_+(ξ, 0)/m_−(ξ, 0).

Note that, by (3.36), λ2² ≤ L ≤ λ2^{-2}. Now we introduce the fundamental assumption on the coefficients α± in the weight function. As in [LL], we choose positive α_+ and α_− such that

L < α_+/α_−.   (3.42)

This choice will only be conditioned by λ0. These constants will be fixed. Denote

Λ = (|ξ|² + τ²)^{1/2}.

We now state our main tool.

Proposition 3.1 There exist τ_0, s_0, ρ, β and C, depending only on λ0 and M0, such that for τ ≥ τ_0, supp v̂±(ξ, ·) ⊂ [−ρ, ρ] and s ≤ s_0 < 1, we have

(1/τ) Σ_± ||∂²_y v̂±(ξ, ·)||²_{L²(R_±)} + (Λ²/τ) Σ_± ||∂_y v̂±(ξ, ·)||²_{L²(R_±)}
+ (Λ⁴/τ) Σ_± ||v̂±(ξ, ·)||²_{L²(R_±)} + Λ Σ_± |V±(ξ)|² + Λ³ Σ_± |v̂±(ξ, 0)|²
≤ C ( Σ_± ||P± v̂±(ξ, ·)||²_{L²(R_±)} + Λ|η_1(ξ)|² + Λ³|η_0(ξ)|² ).   (3.43)

Here R_± = {y ∈ R : y ≷ 0}.


Proof of Theorem 3.1. Substituting (3.24) and the definitions of η_0, η_1 (see (3.30), (3.31)) into the right hand side of (3.43) implies

(1/τ) Σ_± ||∂²_y v̂±(ξ, ·)||²_{L²(R_±)} + (Λ²/τ) Σ_± ||∂_y v̂±(ξ, ·)||²_{L²(R_±)} + (Λ⁴/τ) Σ_± ||v̂±(ξ, ·)||²_{L²(R_±)}
+ Λ Σ_± |V±(ξ)|² + Λ³ Σ_± |v̂±(ξ, 0)|²
≤ C ( ||f̂(ξ, ·)||²_{L²(R)} + Λ|(e^{τφ(·,0)}θ_1)^(ξ)|² + Λ³|(e^{τφ(·,0)}θ_0)^(ξ)|² ).   (3.44)

Recalling (3.32), it is not hard to see that

Λ Σ_± |∂_y v̂±(ξ, 0)|² ≤ C ( Λ Σ_± |V±(ξ)|² + Λ³ Σ_± |v̂±(ξ, 0)|² ).   (3.45)

Since Λ⁴ ≥ |ξ|²τ² + |ξ|⁴ + τ⁴, |ξ|³ + |ξ|²τ + |ξ|τ² + τ³ ≤ CΛ³ and Λ³ ≤ C′(|ξ|³ + τ³), by integrating in ξ we can deduce from (3.44) and (3.45) that

Σ_± Σ_{k=0}^{2} τ^{3−2k} ∫_{R^n_±} |D^k v±|² + Σ_± [∇_x v±(·, 0)]²_{1/2,R^{n−1}} + Σ_± [∂_y v±(·, 0)]²_{1/2,R^{n−1}}
+ Σ_± τ² [v±(·, 0)]²_{1/2,R^{n−1}} + Σ_± τ ∫_{R^{n−1}} |∇_x v±(x, 0)|² dx
+ Σ_± τ ∫_{R^{n−1}} |∂_y v±(x, 0)|² dx + Σ_± τ³ ∫_{R^{n−1}} |v±(x, 0)|² dx
≤ C ( ||f||²_{L²(R^n)} + [e^{τφ(·,0)}θ_1(·)]²_{1/2,R^{n−1}} + [∇_x(e^{τφ(·,0)}θ_0(·))]²_{1/2,R^{n−1}}
+ τ ∫_{R^{n−1}} e^{2τφ(x,0)}|θ_1|² dx + τ³ ∫_{R^{n−1}} e^{2τφ(x,0)}|θ_0|² dx ).   (3.46)

Replacing v± = e^{τφ_±}w± in (3.46) immediately leads to (3.7).

□

3.3 Proof of Proposition 3.1

Let κ be the positive number

κ = (1/2)(1 − L α_−/α_+),   (3.47)


depending only on λ0 and M0. The proof of Proposition 3.1 will be divided into three cases:

τ ≥ λ2²|ξ|/(2s_0),
m_+(ξ, 0)/((1 − κ)α_+) ≤ τ ≤ λ2²|ξ|/(2s_0),
τ ≤ m_+(ξ, 0)/((1 − κ)α_+).

Recall that λ2 = √(λ0λ1) (from (3.36)) depends only on λ0. Of course, we first choose a small s_0 < 1, depending on λ0 and M0 only, such that

m_+(ξ, 0)/((1 − κ)α_+) ≤ λ2²|ξ|/(2s_0),  ∀ ξ ∈ R^{n−1}.

A smaller value of s_0 will be chosen later in the proof.

We need to introduce some further notation. First of all, let us denote by P⁰±, E⁰± and F⁰± the operators defined by (3.25), (3.26) and (3.27), respectively, in the special case s = 0. We also give special names to the following functions that will be used in the proof:

ω_+(ξ, y) = a^+_{nn}(y)F_+ v̂_+(ξ, y),  ω_−(ξ, y) = a^−_{nn}(y)E_− v̂_−(ξ, y),   (3.48)

and, for the special case s = 0,

ω⁰_+(ξ, y) = a^+_{nn}(y)F⁰_+ v̂_+(ξ, y),  ω⁰_−(ξ, y) = a^−_{nn}(y)E⁰_− v̂_−(ξ, y).   (3.49)

Case 1:

τ ≥ λ2²|ξ|/(2s_0).   (3.50)

Note that, in this case, we have |ξ| ≤ 2λ2^{-2}s_0τ, which implies

τ ≤ Λ ≤ √(1 + 4λ2^{-4}) τ.   (3.51)

We will need several lemmas. In the first one, we estimate the difference P± v̂± − P⁰± v̂±.

Lemma 3.2 Let τ ≥ 1 and assume (3.50). Then we have

|P± v̂±(ξ, y) − P⁰± v̂±(ξ, y)| ≤ Csτ {τ(α± + 1 + β|y|)|v̂±(ξ, y)| + |∂_y v̂±(ξ, y)|},   (3.52)

where C depends only on λ0 and M0.


Proof. It should be noted that

ζ±(ξ, y)|_{s=0} = B±(ξ, ξ, y)/a^±_{nn}(y).

By simple calculations, and dropping ± for the sake of shortness, we can write

P v̂(ξ, y) − P⁰ v̂(ξ, y) = I_1 + I_2 + I_3,   (3.53)

where

I_1 = [it(ξ + iτsγ, y) − it(ξ, y)] a_{nn}(y) [∂_y − τϕ′ + it(ξ + iτsγ, y)]v̂,
I_2 = [∂_y − τϕ′ + it(ξ, y)] a_{nn}(y) [it(ξ + iτsγ, y) − it(ξ, y)]v̂,

and

I_3 = −[a_{nn}(y)ζ(ξ, y) − B(ξ, ξ, y)]v̂.

By the linearity of t with respect to its first argument (see (3.23)) and by (3.38), we have

|t(ξ + iτsγ, y) − t(ξ, y)| = |t(iτsγ, y)| ≤ λ3^{-1}sτ,

which, together with (3.2) and (3.50), gives the estimate

|I_1| ≤ λ3^{-1}λ0^{-1}sτ {|∂_y v̂| + τ(α± + β|y|)|v̂| + λ3^{-1}(|ξ| + sτ)|v̂|}
≤ Csτ {|∂_y v̂| + [τ(α± + β|y|) + sτ]|v̂|},   (3.54)

where C depends on λ0 only. On the other hand, by the linearity of t and by (3.39), we obtain

|∂_y(t(ξ + iτsγ, y) − t(ξ, y))| = |∂_y t(iτsγ, y)| ≤ M3 sτ,

which, together with (3.2), (3.3) and (3.50), gives the estimate

|I_2| ≤ Csτ {|∂_y v̂| + [τ(α± + β|y|) + sτ]|v̂|},   (3.55)

where C depends on λ0 and M0 only. Finally, by (3.22), (3.34) and (3.50),

|I_3| = |2isτ B(ξ, γ, y) − s²τ² B(γ, γ, y)||v̂| ≤ Csτ²|v̂|,   (3.56)

where C depends only on λ0. Putting together (3.53), (3.54), (3.55) and (3.56) gives (3.52). □

Lemma 3.2 allows us to estimate ||P⁰± v̂±(ξ, ·)||_{L²(R_±)} instead of ||P± v̂±(ξ, ·)||_{L²(R_±)}. Let us now go further and note that, similarly to (3.28) and (3.29), we have

P⁰_+ v̂_+ = E⁰_+ a^+_{nn}(y) F⁰_+ v̂_+ − v̂_+ ∂_y(a^+_{nn}(y)m_+(ξ, y)),
P⁰_− v̂_− = F⁰_− a^−_{nn}(y) E⁰_− v̂_− + v̂_− ∂_y(a^−_{nn}(y)m_−(ξ, y)).

We can easily obtain, from (3.3) and (3.37), that

|P⁰_+ v̂_+ − E⁰_+ a^+_{nn}(y) F⁰_+ v̂_+| ≤ C|ξ||v̂_+|   (3.57)

and

|P⁰_− v̂_− − F⁰_− a^−_{nn}(y) E⁰_− v̂_−| ≤ C|ξ||v̂_−|,   (3.58)

where C depends only on λ0 and M0.

Lemma 3.3 Let τ ≥ 1 and assume (3.50). There exists a positive constant C, depending only on λ0 and M0, such that if s_0 ≤ 1/C then we have

Λ|a^+_{nn}(0)F⁰_+ v̂_+(ξ, 0)|² + Λ³|v̂_+(ξ, 0)|² + Λ⁴||v̂_+(ξ, ·)||²_{L²(R_+)} + Λ²||∂_y v̂_+(ξ, ·)||²_{L²(R_+)}
≤ C||P_+ v̂_+(ξ, ·)||²_{L²(R_+)}   (3.59)

and

−Λ|a^−_{nn}(0)E⁰_− v̂_−(ξ, 0)|² − Λ³|v̂_−(ξ, 0)|² + Λ⁴||v̂_−(ξ, ·)||²_{L²(R_−)} + Λ²||∂_y v̂_−(ξ, ·)||²_{L²(R_−)}
≤ C||P_− v̂_−(ξ, ·)||²_{L²(R_−)},   (3.60)

where supp(v̂_+(ξ, ·)) ⊂ [0, 1/β] and supp(v̂_−(ξ, ·)) ⊂ [−α_−/(2β), 0].

Proof. Since supp v̂_+(ξ, ·) is compact, v̂_+(ξ, y) ≡ 0 when |y| is large, and the same holds for the function ω⁰_+(ξ, y) defined in (3.49). We now compute

||E⁰_+ ω⁰_+(ξ, ·)||²_{L²(R_+)}
= ∫_0^∞ |∂_y ω⁰_+(ξ, y) + it_+(ξ, y)ω⁰_+(ξ, y)|² dy + ∫_0^∞ [τα_+ + τβy + m_+(ξ, y)]² |ω⁰_+(ξ, y)|² dy
− 2ℜ ∫_0^∞ [τα_+ + τβy + m_+(ξ, y)] ω̄⁰_+(ξ, y)[∂_y ω⁰_+(ξ, y) + it_+(ξ, y)ω⁰_+(ξ, y)] dy.   (3.61)

Integrating by parts, we easily get

− 2ℜ ∫_0^∞ [τα_+ + τβy + m_+(ξ, y)] ω̄⁰_+(ξ, y)[∂_y ω⁰_+(ξ, y) + it_+(ξ, y)ω⁰_+(ξ, y)] dy
= [τα_+ + m_+(ξ, 0)]|ω⁰_+(ξ, 0)|² + ∫_0^∞ [τβ + ∂_y m_+(ξ, y)]|ω⁰_+(ξ, y)|² dy.   (3.62)
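The integration by parts behind (3.62), spelled out (our addition): for real-valued g and ω(ξ, ·) compactly supported in [0, ∞),

```latex
-2\,\Re \int_0^\infty g(y)\,\bar\omega\,\partial_y\omega \,dy
 = -\int_0^\infty g\,\partial_y|\omega|^2 \,dy
 = g(0)\,|\omega(\xi,0)|^2 + \int_0^\infty g'(y)\,|\omega|^2 \,dy ,
% while \Re\bigl(i\,t_+(\xi,y)\,|\omega|^2\bigr) = 0 because t_+ is real-valued.
% Taking g(y) = \tau\alpha_+ + \tau\beta y + m_+(\xi,y) gives (3.62).
```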

By (3.50) and (3.37), we have that

τβ + ∂_y m_+(ξ, y) ≥ τβ − M2|ξ| ≥ τβ − 2τs_0λ2^{-2}M2 ≥ τβ/2 ≥ 0,   (3.63)

provided 0 < s_0 ≤ βλ2²/(4M2). Combining (3.51), (3.61), (3.62) and (3.63) yields

||E⁰_+ ω⁰_+(ξ, ·)||²_{L²(R_+)} ≥ ∫_0^∞ [τα_+ + τβy + m_+(ξ, y)]²|ω⁰_+(ξ, y)|² dy + [τα_+ + m_+(ξ, 0)]|ω⁰_+(ξ, 0)|²
≥ C^{-1}Λ² ∫_0^∞ |ω⁰_+(ξ, y)|² dy + C^{-1}Λ|ω⁰_+(ξ, 0)|²,   (3.64)


where C depends only on λ0. Similarly, we have that

λ0^{-2}||ω⁰_+(ξ, ·)||²_{L²(R_+)} ≥ ∫_0^∞ |∂_y v̂_+(ξ, y) + it_+(ξ, y)v̂_+(ξ, y)|² dy
+ ∫_0^∞ [τα_+ + τβy − m_+(ξ, y)]²|v̂_+(ξ, y)|² dy + [τα_+ − m_+(ξ, 0)]|v̂_+(ξ, 0)|²
+ ∫_0^∞ [τβ − ∂_y m_+(ξ, y)]|v̂_+(ξ, y)|² dy.   (3.65)

The assumption (3.50) and (3.36) imply

τα_+ + τβy − m_+(ξ, y) ≥ τα_+ − λ2^{-1}|ξ| ≥ τα_+ − 2λ2^{-3}τs_0 ≥ τα_+/2,

provided 0 < s_0 ≤ α_+λ2³/4. Thus, by choosing

0 < s_0 ≤ min{ 1, βλ2²/(4M2), α_+λ2³/4 },

we obtain from (3.63) and (3.65)

C||ω⁰_+(ξ, ·)||²_{L²(R_+)} ≥ ∫_0^∞ |∂_y v̂_+(ξ, y) + it_+(ξ, y)v̂_+(ξ, y)|² dy + Λ² ∫_0^∞ |v̂_+(ξ, y)|² dy + Λ|v̂_+(ξ, 0)|².   (3.66)

Additionally, we can see that

∫_0^∞ |∂_y v̂_+(ξ, y) + it_+(ξ, y)v̂_+(ξ, y)|² dy
≥ ε ∫_0^∞ [ |∂_y v̂_+(ξ, y)|² − 2|∂_y v̂_+(ξ, y)||t_+(ξ, y)v̂_+(ξ, y)| + |t_+(ξ, y)v̂_+(ξ, y)|² ] dy
≥ ε ∫_0^∞ [ (1/2)|∂_y v̂_+(ξ, y)|² − |t_+(ξ, y)|²|v̂_+(ξ, y)|² ] dy
≥ (ε/2) ∫_0^∞ |∂_y v̂_+(ξ, y)|² dy − λ3^{-2}ε|ξ|² ∫_0^∞ |v̂_+(ξ, y)|² dy,   (3.67)

for any 0 < ε < 1. Choosing ε sufficiently small, we obtain, from (3.66) and (3.67),

C||ω⁰_+(ξ, ·)||²_{L²(R_+)} ≥ ∫_0^∞ |∂_y v̂_+(ξ, y)|² dy + Λ² ∫_0^∞ |v̂_+(ξ, y)|² dy + Λ|v̂_+(ξ, 0)|²,   (3.68)

where C depends only on λ0 and M0. Combining (3.64) and (3.68) yields

Λ² ∫_0^∞ |∂_y v̂_+(ξ, y)|² + Λ⁴ ∫_0^∞ |v̂_+(ξ, y)|² + Λ³|v̂_+(ξ, 0)|² + Λ|ω⁰_+(ξ, 0)|²
≤ C||E⁰_+ ω⁰_+(ξ, ·)||²_{L²(R_+)},   (3.69)


where C depends only on λ0 and M0. From (3.52), since supp(v̂_+(ξ, ·)) ⊂ [0, 1/β] and (3.50) holds, we have

||P⁰_+ v̂_+(ξ, ·)||²_{L²(R_+)} ≤ 2||P_+ v̂_+(ξ, ·)||²_{L²(R_+)} + Cs_0² ( Λ² ∫_0^∞ |∂_y v̂_+(ξ, y)|² + Λ⁴ ∫_0^∞ |v̂_+(ξ, y)|² ).   (3.70)

Moreover, by (3.57) and (3.50),

||E⁰_+ ω⁰_+(ξ, ·)||²_{L²(R_+)} ≤ 2||P⁰_+ v̂_+(ξ, ·)||²_{L²(R_+)} + Cs_0²Λ² ∫_0^∞ |v̂_+(ξ, y)|².   (3.71)

Finally, by (3.69), (3.70) and (3.71) we get (3.59), provided s_0 is small enough.

Now we proceed to prove (3.60). Applying the same arguments leading to (3.62), we have that

||F⁰_− ω⁰_−(ξ, ·)||²_{L²(R_−)}
≥ ∫_{−∞}^0 [τα_− + τβy − m_−(ξ, y)]²|ω⁰_−(ξ, y)|² dy − [τα_− − m_−(ξ, 0)]|ω⁰_−(ξ, 0)|²
+ ∫_{−∞}^0 [τβ − ∂_y m_−(ξ, y)]|ω⁰_−(ξ, y)|² dy.   (3.72)

By (3.36) and (3.50), and since supp(v̂_−(ξ, ·)) ⊂ [−α_−/(2β), 0], we can see that

τα_− + τβy − m_−(ξ, y) ≥ τα_−/2 − λ2^{-1}|ξ| ≥ τα_−/2 − 2λ2^{-3}τs_0 ≥ τα_−/4,   (3.73)

provided 0 < s_0 ≤ α_−λ2³/8. From (3.72) and (3.73), it follows that

||F⁰_− ω⁰_−(ξ, ·)||²_{L²(R_−)} ≥ (α_−²/16) τ² ∫_{−∞}^0 |ω⁰_−(ξ, y)|² dy − τα_−|ω⁰_−(ξ, 0)|²
≥ C^{-1}Λ² ∫_{−∞}^0 |ω⁰_−(ξ, y)|² dy − CΛ|ω⁰_−(ξ, 0)|².   (3.74)

Arguing as before and recalling (3.51), we obtain (3.60). □

We now take into account the transmission conditions.

Lemma 3.4 Let τ ≥ 1 and assume (3.50). There exists a positive constant C, depending only on λ0 and M0, such that if s_0 ≤ 1/C then

Λ Σ_± |V±(ξ)|² + Λ³ Σ_± |v̂±(ξ, 0)|² + Λ⁴ Σ_± ||v̂±(ξ, ·)||²_{L²(R_±)} + Λ² Σ_± ||∂_y v̂±(ξ, ·)||²_{L²(R_±)}
≤ C Σ_± ||P± v̂±(ξ, ·)||²_{L²(R_±)} + CΛ|η_1(ξ)|² + CΛ³|η_0(ξ)|²,   (3.75)

where supp(v̂±(ξ, ·)) ⊂ [−c_0/(2β), c_0/β] with c_0 = min(α_−, 1).
