## Carleman estimate for second order elliptic equations with Lipschitz leading coefficients and jumps at an interface

### M. Di Cristo^{∗}, E. Francini^{†}, C.-L. Lin^{‡}, S. Vessella^{§}, J.-N. Wang^{¶}

Abstract

In this paper we prove a Carleman estimate for second order elliptic equations with general anisotropic Lipschitz leading coefficients having a jump at an interface. Our approach does not rely on the techniques of microlocal analysis.

We make use of elementary methods, which allows us to impose almost optimal assumptions on the coefficients and, consequently, on the interface. It is possible that the framework can be applied to other cases.

### Contents

1. Introduction
2. Notations and statement of the main theorem
3. Step 1 - A Carleman estimate for leading coefficients depending on y only
   - 3.1 Fourier transform of the conjugate operator and its factorization
   - 3.2 Derivation of the Carleman estimate for the simple case
   - 3.3 Proof of Proposition 3.1
4. Step 2 - The Carleman estimate for general coefficients
   - 4.1 Partition of unity and auxiliary results
   - 4.2 Estimate of the left hand side of the Carleman estimate, I
   - 4.3 Estimate of the left hand side of the Carleman estimate, II

∗Politecnico di Milano, Italy. Email: michele.dicristo@polimi.it

†Università di Firenze, Italy. Email: elisa.francini@unifi.it

‡National Cheng Kung University, Taiwan. Email: cllin2@mail.ncku.edu.tw

§Università di Firenze, Italy. Email: sergio.vessella@unifi.it

¶National Taiwan University, Taiwan. Email: jnwang@math.ntu.edu.tw

### 1 Introduction

Since T. Carleman's pioneering work [Car], Carleman estimates have been indispensable tools for proving the unique continuation property for partial differential equations.

Recently, Carleman estimates have been successfully applied to the study of inverse problems; see for instance [Is], [KSU]. Most Carleman estimates are proved under the assumption that the leading coefficients possess certain regularity. For example, for general second order elliptic operators, Carleman estimates were proved when the leading coefficients are at least Lipschitz [H], [H3]. The restriction of regularity on the leading coefficients also reflects the fact that unique continuation may fail if the coefficients are only Hölder continuous in $\mathbb{R}^n$ with $n \ge 3$ (see the examples constructed by Pliś [P] and [M]). In $\mathbb{R}^2$, the unique continuation property holds for $W^{1,2}$ solutions of second order elliptic equations in either non-divergence or divergence form with essentially bounded coefficients [BJS], [BN], [AM], [S]. It should be noted that the unique continuation property for second order elliptic equations in the plane with essentially bounded coefficients is deduced from the theory of quasiregular mappings; no Carleman estimates are derived in this situation.

From the discussion above, Carleman estimates for second order elliptic operators with general discontinuous coefficients are not likely to hold. However, when the discontinuities occur as jumps at an interface with homogeneous or non-homogeneous transmission conditions, one can still derive useful Carleman estimates. This is the main theme of the paper. There are some excellent works on this subject. We mention several closely related papers, including Le Rousseau-Robbiano [LR1], [LR2], and Le Rousseau-Lerner [LL]. For the development of the problem and other related results, we refer the reader to the papers cited above and the references therein. Our result is close to that of [LL], where the elliptic coefficient is a general anisotropic matrix-valued function. To put our paper in perspective, we would like to point out that in [LL] the interface is assumed to be a $C^\infty$ hypersurface and the coefficients are $C^\infty$ away from the interface. Here we prove a Carleman estimate near a flat interface, from which it is easy to obtain, under a standard change of coordinates, a Carleman estimate for operators with leading coefficients which have a jump discontinuity at a $C^{1,1}$ interface and are Lipschitz continuous away from such an interface (see Theorem 2.1 for a precise statement). The approach in [LL] is close to Calderón's seminal work on the uniqueness of the Cauchy problem [Cal] as an application of singular integral operators (or pseudo-differential operators). Therefore, the regularity assumptions of [LL] are due to the use of the calculus of pseudo-differential operators and microlocal analysis techniques.

The aim here is to derive the Carleman estimate using more elementary methods.

Our approach does not rely on the techniques of microlocal analysis, but rather on the straightforward Fourier transform. Thus we are able to relax the regularity assumptions on the coefficients and the interface. We first consider the simple case where the coefficients depend only on the normal variable. Taking advantage of this simple structure, we are able to derive a Carleman estimate by elementary computations with the help of the Fourier transform in the tangential variables. To handle general coefficients, we rely on a suitable partition of unity. In Section 2, after Theorem 2.1, we give a more detailed outline of our proof. Our approach is very general and is likely to extend to other equations or systems.

### 2 Notations and statement of the main theorem

Define $H_\pm = \chi_{\mathbb{R}^n_\pm}$, where $\mathbb{R}^n_\pm = \{(x, y) \in \mathbb{R}^{n-1}\times\mathbb{R} : y \gtrless 0\}$ and $\chi_{\mathbb{R}^n_\pm}$ is the characteristic function of $\mathbb{R}^n_\pm$. Let us stress that for a vector $(x, y)$ of $\mathbb{R}^n$, we mean $x = (x_1, \ldots, x_{n-1}) \in \mathbb{R}^{n-1}$ and $y \in \mathbb{R}$. In places we will use the symbols $D$, $\nabla$, $\partial$ interchangeably to denote the gradient of a function, and we will add the index $x$ or $y$ to denote the gradient in $\mathbb{R}^{n-1}$ and the derivative with respect to $y$, respectively.

Let $u_\pm \in C^\infty(\mathbb{R}^n)$. We define

$$u = H_+u_+ + H_-u_- = \sum_\pm H_\pm u_\pm;$$

hereafter, we denote $\sum_\pm a_\pm = a_+ + a_-$, and, for $(x, y) \in \mathbb{R}^{n-1}\times\mathbb{R}$,

$$L(x, y, \partial)u := \sum_\pm H_\pm\,\mathrm{div}_{x,y}\big(A_\pm(x, y)\nabla_{x,y}u_\pm\big), \tag{2.1}$$

where

$$A_\pm(x, y) = \{a^\pm_{ij}(x, y)\}_{i,j=1}^{n}, \qquad x \in \mathbb{R}^{n-1},\ y \in \mathbb{R}, \tag{2.2}$$

is a Lipschitz symmetric matrix-valued function satisfying, for given constants $\lambda_0 \in (0, 1]$, $M_0 > 0$,

$$\lambda_0|z|^2 \le A_\pm(x, y)z\cdot z \le \lambda_0^{-1}|z|^2, \qquad \forall (x, y) \in \mathbb{R}^n,\ \forall z \in \mathbb{R}^n, \tag{2.3}$$

and

$$|A_\pm(x', y') - A_\pm(x, y)| \le M_0\big(|x' - x| + |y' - y|\big). \tag{2.4}$$

We write

$$h_0(x) := u_+(x, 0) - u_-(x, 0), \qquad \forall x \in \mathbb{R}^{n-1}, \tag{2.5}$$

$$h_1(x) := A_+(x, 0)\nabla_{x,y}u_+(x, 0)\cdot\nu - A_-(x, 0)\nabla_{x,y}u_-(x, 0)\cdot\nu, \qquad \forall x \in \mathbb{R}^{n-1}, \tag{2.6}$$

where $\nu = -e_n$.

Let us now introduce the weight function. Let

$$\varphi(y) = \begin{cases} \varphi_+(y) := \alpha_+ y + \beta y^2/2, & y \ge 0, \\ \varphi_-(y) := \alpha_- y + \beta y^2/2, & y < 0, \end{cases} \tag{2.7}$$

where $\alpha_+$, $\alpha_-$ and $\beta$ are positive numbers which will be determined later. In what follows we denote by $\varphi_+$ and $\varphi_-$ the restrictions of the weight function $\varphi$ to $[0, +\infty)$ and to $(-\infty, 0)$ respectively. We use similar notation for any other weight function. For any $\varepsilon > 0$ let

$$\psi_\varepsilon(x, y) := \varphi(y) - \frac{\varepsilon}{2}|x|^2,$$

and let

$$\phi_\delta(x, y) := \psi_\delta\big(\delta^{-1}x, \delta^{-1}y\big). \tag{2.8}$$
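For orientation, unwinding definitions (2.7)-(2.8) gives the explicit form of the rescaled weight (a direct computation, not stated explicitly in the paper):

$$\phi_\delta(x, y) = \varphi(\delta^{-1}y) - \frac{1}{2\delta}|x|^2 = \alpha_\pm\frac{y}{\delta} + \frac{\beta y^2}{2\delta^2} - \frac{|x|^2}{2\delta}, \qquad \pm y \ge 0,$$

so on the scale $|x|, |y| \sim \delta$ the weight is of size one, and its level sets are paraboloids, consistent with Remark 2.3 below.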
For a function $h \in L^2(\mathbb{R}^n)$, we define

$$\hat h(\xi, y) = \int_{\mathbb{R}^{n-1}} h(x, y)e^{-ix\cdot\xi}\,dx, \qquad \xi \in \mathbb{R}^{n-1}.$$

As usual we denote by $H^{1/2}(\mathbb{R}^{n-1})$ the space of functions $f \in L^2(\mathbb{R}^{n-1})$ satisfying

$$\int_{\mathbb{R}^{n-1}} |\xi|\,|\hat f(\xi)|^2\,d\xi < \infty,$$

with the norm

$$\|f\|^2_{H^{1/2}(\mathbb{R}^{n-1})} = \int_{\mathbb{R}^{n-1}} (1 + |\xi|^2)^{1/2}|\hat f(\xi)|^2\,d\xi. \tag{2.9}$$

Moreover we define

$$[f]_{1/2,\mathbb{R}^{n-1}} = \left(\int_{\mathbb{R}^{n-1}}\int_{\mathbb{R}^{n-1}} \frac{|f(x) - f(y)|^2}{|x - y|^n}\,dy\,dx\right)^{1/2},$$

and recall that there is a positive constant $C$, depending only on $n$, such that

$$C^{-1}\int_{\mathbb{R}^{n-1}} |\xi|\,|\hat f(\xi)|^2\,d\xi \le [f]^2_{1/2,\mathbb{R}^{n-1}} \le C\int_{\mathbb{R}^{n-1}} |\xi|\,|\hat f(\xi)|^2\,d\xi,$$

so that the norm (2.9) is equivalent to $\|f\|_{L^2(\mathbb{R}^{n-1})} + [f]_{1/2,\mathbb{R}^{n-1}}$. We use the letters $C, C_0, C_1, \cdots$ to denote constants. The value of the constants may change from line to line, but it is always greater than 1.

We will denote by B_{r}(x) the (n − 1)-ball centered at x ∈ R^{n−1} with radius r > 0.

Whenever $x = 0$ we denote $B_r = B_r(0)$.

**Theorem 2.1** Let $u$ and $A_\pm(x, y)$ satisfy (2.1)-(2.6). There exist $\alpha_+$, $\alpha_-$, $\beta$, $\delta_0$, $r_0$ and $C$ depending on $\lambda_0$, $M_0$ such that if $\delta \le \delta_0$ and $\tau \ge C$, then

$$\begin{aligned}
&\sum_\pm\sum_{k=0}^{2}\tau^{3-2k}\int_{\mathbb{R}^n_\pm}|D^k u_\pm|^2 e^{2\tau\phi_{\delta,\pm}(x,y)}\,dx\,dy + \sum_\pm\sum_{k=0}^{1}\tau^{3-2k}\int_{\mathbb{R}^{n-1}}|D^k u_\pm(x, 0)|^2 e^{2\tau\phi_\delta(x,0)}\,dx\\
&\quad + \sum_\pm\tau^2\big[e^{\tau\phi_\delta(\cdot,0)}u_\pm(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}} + \sum_\pm\big[D(e^{\tau\phi_{\delta,\pm}}u_\pm)(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\le C\Big(\sum_\pm\int_{\mathbb{R}^n_\pm}|L(x, y, \partial)(u_\pm)|^2 e^{2\tau\phi_{\delta,\pm}(x,y)}\,dx\,dy + \big[e^{\tau\phi_\delta(\cdot,0)}h_1\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\qquad + \big[\nabla_x(e^{\tau\phi_\delta}h_0)(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}} + \tau^3\int_{\mathbb{R}^{n-1}}|h_0|^2 e^{2\tau\phi_\delta(x,0)}\,dx + \tau\int_{\mathbb{R}^{n-1}}|h_1|^2 e^{2\tau\phi_\delta(x,0)}\,dx\Big), \tag{2.10}
\end{aligned}$$

where $u = H_+u_+ + H_-u_-$, $u_\pm \in C^\infty(\mathbb{R}^n)$ with $\operatorname{supp} u \subset B_{\delta/2}\times[-\delta r_0, \delta r_0]$, and $\phi_\delta$ is given by (2.8).

**Remark 2.2** Estimate (2.10) is a local Carleman estimate near the flat interface $\{y = 0\}$. As mentioned in the Introduction, by an easy change of coordinates, one can derive from (2.10) a local Carleman estimate near a $C^{1,1}$ interface.

**Remark 2.3** Let us point out that the level sets

$$\big\{(x, y) \in B_{\delta/2}\times(-\delta r_0, \delta r_0) : \phi_\delta(x, y) = t\big\}$$

have approximately the shape of paraboloids and that, in a neighborhood of $(0, 0)$, $\partial_y\phi_\delta > 0$, so that the gradient of $\phi_\delta$ points into the half space $\mathbb{R}^n_+$. These features are crucial to derive from the Carleman estimate (2.10) a Hölder type smallness propagation estimate across the interface $\{(x, 0) : x \in \mathbb{R}^{n-1}\}$ for weak solutions to the transmission problem

$$\begin{cases}
L(x, y, \partial)u = \sum_\pm H_\pm\big(b_\pm\cdot\nabla_{x,y}u_\pm + c_\pm u_\pm\big),\\
u_+(x, 0) - u_-(x, 0) = 0,\\
A_+(x, 0)\nabla_{x,y}u_+(x, 0)\cdot\nu - A_-(x, 0)\nabla_{x,y}u_-(x, 0)\cdot\nu = 0,
\end{cases} \tag{2.11}$$

where $b_\pm \in L^\infty(\mathbb{R}^n, \mathbb{R}^n)$ and $c_\pm \in L^\infty(\mathbb{R}^n)$. More precisely, if the error of observation of $u$ is known in an open set of $\mathbb{R}^n_+$, we can find a Hölder control of $u$ in a bounded set of $\mathbb{R}^n_-$. For more details about this type of estimate we refer to [LR1, Sect. 3.1].

The proof of Theorem 2.1 is divided into two steps as follows.

Step 1. We first consider the particular case of leading matrices (2.2) independent of $x$, and we prove (Theorem 3.1), for the corresponding operator $L(y, \partial)$, a Carleman estimate with the weight function $\phi(x, y) = \varphi(y) + s\gamma\cdot x$, where $s$ is a suitably small number and $\gamma$ is an arbitrary unit vector of $\mathbb{R}^{n-1}$. The features of the leading matrices and of the weight function $\phi$ allow us to factorize the Fourier transform of the conjugate of the operator $L(y, \partial)$ with respect to $\phi$, so that we can follow, roughly speaking, at an elementary level the strategy of [LL] for the operator $L(y, \partial)$. Nevertheless, such an estimate has only a preparatory character towards the proof of Theorem 2.1 because, due to the particular feature of the weight $\phi$ (i.e., linear with respect to $x$), the Carleman estimate obtained in Theorem 3.1 cannot yield any significant smallness propagation estimate across the interface.

Step 2. In the second step we adapt the method described in [Tr, Ch. 4.1] to an operator with jump discontinuity. More precisely, we localize the operator (2.1) with respect to the $x$ variable, we linearize the weight function, again with respect to the $x$ variable, and by the Carleman estimate obtained in Step 1 we derive some local Carleman estimates. Subsequently, we put together such local estimates by means of the partition of unity introduced in [Tr].

### 3 Step 1 - A Carleman estimate for leading coefficients depending on y only

In this section we consider the simple case of leading matrices (2.2) independent of $x$. Moreover, the weight function we consider is linear with respect to the $x$ variable, so that, as explained above, the Carleman estimates we obtain here are only preliminary to the one we will get in the general case.

Assume that

$$A_\pm(y) = \{a^\pm_{ij}(y)\}_{i,j=1}^{n} \tag{3.1}$$

are symmetric matrix-valued functions satisfying (2.3) and (2.4), i.e.,

$$\lambda_0|z|^2 \le A_\pm(y)z\cdot z \le \lambda_0^{-1}|z|^2, \qquad \forall y \in \mathbb{R},\ \forall z \in \mathbb{R}^n, \tag{3.2}$$

$$|A_\pm(y') - A_\pm(y'')| \le M_0|y' - y''|, \qquad \forall y', y'' \in \mathbb{R}. \tag{3.3}$$

From (3.2), we have

$$a^\pm_{nn}(y) \ge \lambda_0 \qquad \forall y \in \mathbb{R}. \tag{3.4}$$

In the present case the differential operator (2.1) becomes

$$L(y, \partial)u := \sum_\pm H_\pm\,\mathrm{div}_{x,y}\big(A_\pm(y)\nabla_{x,y}u_\pm\big), \tag{3.5}$$

where $u = \sum_\pm H_\pm u_\pm$, $u_\pm \in C^\infty(\mathbb{R}^n)$.

We also set, for any $s \in [0, 1]$ and $\gamma \in \mathbb{R}^{n-1}$ with $|\gamma| \le 1$,

$$\phi(x, y) = \varphi(y) + s\gamma\cdot x = H_+\phi_+ + H_-\phi_-, \tag{3.6}$$

where $\varphi$ is defined in (2.7).

Our aim here is to prove the following Carleman estimate.

**Theorem 3.1** There exist $\tau_0$, $s_0$, $r_0$, $C$ and $\beta_0$, depending only on $\lambda_0$, $M_0$, such that for $\tau \ge \tau_0$, $0 < s \le s_0 < 1$, and for every $w = \sum_\pm H_\pm w_\pm$ with $\operatorname{supp} w \subset B_1\times[-r_0, r_0]$, we have

$$\begin{aligned}
&\sum_\pm\sum_{k=0}^{2}\tau^{3-2k}\int_{\mathbb{R}^n_\pm}|D^k w_\pm|^2 e^{2\tau\phi_\pm}\,dx\,dy + \sum_\pm\sum_{k=0}^{1}\tau^{3-2k}\int_{\mathbb{R}^{n-1}}|D^k w(x, 0)|^2 e^{2\tau\phi(x,0)}\,dx\\
&\quad + \sum_\pm\tau^2\big[(e^{\tau\phi}w_\pm)(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}} + \sum_\pm\big[\partial_y(e^{\tau\phi_\pm}w_\pm)(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}} + \sum_\pm\big[\nabla_x(e^{\tau\phi}w_\pm)(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\le C\Big(\sum_\pm\int_{\mathbb{R}^n_\pm}|L(y, \partial)w|^2 e^{2\tau\phi_\pm}\,dx\,dy + \big[\nabla_x\big(e^{\tau\phi(\cdot,0)}(w_+(\cdot, 0) - w_-(\cdot, 0))\big)\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\qquad + \big[e^{\tau\phi(\cdot,0)}\big(A_+(0)\nabla_{x,y}w_+(x, 0)\cdot\nu - A_-(0)\nabla_{x,y}w_-(x, 0)\cdot\nu\big)\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\qquad + \tau\int_{\mathbb{R}^{n-1}} e^{2\tau\phi(x,0)}\big|A_+(0)\nabla_{x,y}w_+(x, 0)\cdot\nu - A_-(0)\nabla_{x,y}w_-(x, 0)\cdot\nu\big|^2\,dx\\
&\qquad + \tau^3\int_{\mathbb{R}^{n-1}} e^{2\tau\phi(x,0)}|w_+(x, 0) - w_-(x, 0)|^2\,dx\Big), \tag{3.7}
\end{aligned}$$

with $\beta \ge \beta_0$ and $\alpha_\pm$ properly chosen.

### 3.1 Fourier transform of the conjugate operator and its factorization

To proceed further, we introduce some operators and establish their properties. We use the notation $\partial_j = \partial_{x_j}$ for $1 \le j \le n-1$. Let us denote by $B_\pm(y) = \{b^\pm_{jk}(y)\}_{j,k=1}^{n-1}$ the symmetric matrix satisfying, for $z = (z_1, \cdots, z_{n-1}, z_n) =: (z', z_n)$,

$$B_\pm(y)z'\cdot z' = A_\pm(y)z\cdot z\,\Big|_{z_n = -\sum_{j=1}^{n-1} a^\pm_{nj}(y)z_j/a^\pm_{nn}(y)}. \tag{3.8}$$

In view of (3.2) we have

$$\lambda_1|z'|^2 \le B_\pm(y)z'\cdot z' \le \lambda_1^{-1}|z'|^2, \qquad \forall y \in \mathbb{R},\ \forall z' \in \mathbb{R}^{n-1}, \tag{3.9}$$

where $\lambda_1 \le \lambda_0$ depends only on $\lambda_0$. Notice that

$$b^\pm_{jk}(y) = a^\pm_{jk}(y) - \frac{a^\pm_{nj}(y)a^\pm_{nk}(y)}{a^\pm_{nn}(y)}, \qquad j, k = 1, \cdots, n-1. \tag{3.10}$$
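As a sanity check of (3.10), the following sketch verifies numerically that the matrix $B$ with entries $b_{jk} = a_{jk} - a_{nj}a_{nk}/a_{nn}$ (the Schur complement of the entry $a_{nn}$ in $A$) reproduces the constrained quadratic form in (3.8); the test matrix and vector below are arbitrary examples, not taken from the paper.

```python
import numpy as np

# Numerical check of (3.8)/(3.10): with z_n = -(sum_j a_nj z_j) / a_nn,
# the full quadratic form A z . z collapses to B z' . z', where B is the
# Schur complement of the entry a_nn in A.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # arbitrary symmetric positive definite test matrix

# B as in (3.10)
B = A[:-1, :-1] - np.outer(A[-1, :-1], A[-1, :-1]) / A[-1, -1]

zp = rng.standard_normal(n - 1)        # z' = (z_1, ..., z_{n-1})
zn = -(A[-1, :-1] @ zp) / A[-1, -1]    # z_n as prescribed in (3.8)
z = np.append(zp, zn)

lhs = zp @ B @ zp   # B z' . z'
rhs = z @ A @ z     # A z . z with the constrained z_n
```

Since $A$ is positive definite, so is its Schur complement $B$, in agreement with the two-sided bound (3.9).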
We define the operator

$$T_\pm(y, \partial_x)u_\pm := \sum_{j=1}^{n-1}\frac{a^\pm_{nj}(y)}{a^\pm_{nn}(y)}\partial_j u_\pm. \tag{3.11}$$

It is easy to show, by direct calculations ([LL]), that

$$\mathrm{div}_{x,y}\big(A_\pm(y)\nabla_{x,y}u_\pm\big) = (\partial_y + T_\pm)\big(a^\pm_{nn}(y)(\partial_y + T_\pm)u_\pm\big) + \mathrm{div}_x\big(B_\pm(y)\nabla_x u_\pm\big). \tag{3.12}$$
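To make the direct calculation behind (3.12) explicit (a verification sketch; every term comes from (3.10) and (3.11)), note that, since the coefficients depend on $y$ only, expanding the first term of the right hand side gives

$$(\partial_y + T_\pm)\big(a^\pm_{nn}(\partial_y + T_\pm)u_\pm\big) = \partial_y\big(a^\pm_{nn}\partial_y u_\pm\big) + \partial_y\Big(\sum_{j=1}^{n-1}a^\pm_{nj}\partial_j u_\pm\Big) + \sum_{j=1}^{n-1}a^\pm_{nj}\partial_j\partial_y u_\pm + \sum_{j,k=1}^{n-1}\frac{a^\pm_{nj}a^\pm_{nk}}{a^\pm_{nn}}\partial^2_{jk}u_\pm,$$

and adding $\mathrm{div}_x(B_\pm\nabla_x u_\pm) = \sum_{j,k=1}^{n-1}b^\pm_{jk}\partial^2_{jk}u_\pm$ restores, by (3.10), the full tangential sum $\sum_{j,k=1}^{n-1}a^\pm_{jk}\partial^2_{jk}u_\pm$, i.e. exactly $\mathrm{div}_{x,y}(A_\pm\nabla_{x,y}u_\pm)$.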
Let $w = \sum_\pm H_\pm w_\pm$, where $w_\pm \in C^\infty(\mathbb{R}^n)$, and define

$$\theta_0(x) := w_+(x, 0) - w_-(x, 0) \quad\text{for } x \in \mathbb{R}^{n-1}, \tag{3.13}$$

$$\theta_1(x) := A_+(0)\nabla_{x,y}w_+(x, 0)\cdot\nu - A_-(0)\nabla_{x,y}w_-(x, 0)\cdot\nu \quad\text{for } x \in \mathbb{R}^{n-1}, \tag{3.14}$$

where $\nu = -e_n$. By straightforward calculations, we get

$$a^+_{nn}(y)\big(\partial_y + T_+(y, \partial_x)\big)w_+(x, y)\Big|_{y=0} - a^-_{nn}(y)\big(\partial_y + T_-(y, \partial_x)\big)w_-(x, y)\Big|_{y=0} = -\theta_1(x). \tag{3.15}$$
(3.15)
In order to derive the Carleman estimate (3.7), we investigate the conjugate operator of $L(y, \partial)$ with $e^{\tau\phi}$, for $\phi$ given by (3.6). Let $v = e^{\tau\phi}w$ and $\tilde v = e^{-\tau s\gamma\cdot x}v$; then we have

$$w = e^{-\tau\phi}v = \sum_\pm H_\pm e^{-\tau\phi_\pm}v_\pm = \sum_\pm H_\pm e^{-\tau\varphi_\pm}\tilde v_\pm$$

and therefore

$$e^{\tau\phi}L(y, \partial)(e^{-\tau\phi}v) = e^{\tau s\gamma\cdot x}e^{\tau\varphi}L(y, \partial)(e^{-\tau\varphi}\tilde v).$$

It follows from (3.12) that

$$e^{\tau\varphi}L(y, \partial)(e^{-\tau\varphi}\tilde v) = \sum_\pm H_\pm(\partial_y - \tau\varphi'_\pm + T_\pm)\big(a^\pm_{nn}(y)(\partial_y - \tau\varphi'_\pm + T_\pm)\tilde v_\pm\big) + \sum_\pm H_\pm\,\mathrm{div}_x\big(B_\pm(y)\nabla_x\tilde v_\pm\big),$$

which leads to

$$\begin{aligned}
e^{\tau\phi}L(y, \partial)(e^{-\tau\phi}v) &= e^{\tau s\gamma\cdot x}e^{\tau\varphi}L(y, \partial)(e^{-\tau\varphi}\tilde v)\\
&= e^{\tau s\gamma\cdot x}\sum_\pm H_\pm(\partial_y - \tau\varphi'_\pm + T_\pm)\big(a^\pm_{nn}(y)(\partial_y - \tau\varphi'_\pm + T_\pm)(e^{-\tau s\gamma\cdot x}v_\pm)\big)\\
&\quad + e^{\tau s\gamma\cdot x}\sum_\pm H_\pm\,\mathrm{div}_x\big(B_\pm(y)\nabla_x(e^{-\tau s\gamma\cdot x}v_\pm)\big). \tag{3.16}
\end{aligned}$$

By the definition of $T_\pm(y, \partial_x)$, we get that

$$T_\pm(y, \partial_x)(e^{-\tau s\gamma\cdot x}v_\pm) = e^{-\tau s\gamma\cdot x}\sum_{j=1}^{n-1}\frac{a^\pm_{nj}(y)}{a^\pm_{nn}(y)}\big(\partial_j v_\pm - \tau s\gamma_j v_\pm\big) := e^{-\tau s\gamma\cdot x}\,T_\pm(y, \partial_x - \tau s\gamma)v_\pm.$$

To continue the computation, we observe that

$$\begin{aligned}
&e^{\tau s\gamma\cdot x}\big(\partial_y - \tau\varphi'_\pm + T_\pm(y, \partial_x)\big)\Big(a^\pm_{nn}(y)\big(\partial_y - \tau\varphi'_\pm + T_\pm(y, \partial_x)\big)(e^{-\tau s\gamma\cdot x}v_\pm)\Big)\\
&= \big(\partial_y - \tau\varphi'_\pm + T_\pm(y, \partial_x - \tau s\gamma)\big)\Big(a^\pm_{nn}(y)\big(\partial_y - \tau\varphi'_\pm + T_\pm(y, \partial_x - \tau s\gamma)\big)v_\pm\Big) \tag{3.17}
\end{aligned}$$

and

$$e^{\tau s\gamma\cdot x}\,\mathrm{div}_x\big(B_\pm(y)\nabla_x(e^{-\tau s\gamma\cdot x}v_\pm)\big) = \sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\partial^2_{jk}v_\pm - 2s\tau\sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\gamma_k\partial_j v_\pm + s^2\tau^2\sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\gamma_j\gamma_k v_\pm. \tag{3.18}$$

Combining (3.16), (3.17) and (3.18) yields

$$\begin{aligned}
e^{\tau\phi}L(y, \partial)(e^{-\tau\phi}v) &= \sum_\pm H_\pm\big(\partial_y - \tau\varphi'_\pm + T_\pm(y, \partial_x - \tau s\gamma)\big)\Big(a^\pm_{nn}(y)\big(\partial_y - \tau\varphi'_\pm + T_\pm(y, \partial_x - \tau s\gamma)\big)v_\pm\Big)\\
&\quad + \sum_\pm H_\pm\Big(\sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\partial^2_{jk}v_\pm - 2s\tau\sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\gamma_k\partial_j v_\pm + s^2\tau^2\sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\gamma_j\gamma_k v_\pm\Big). \tag{3.19}
\end{aligned}$$
We now focus on the analysis of $e^{\tau\phi}L(y, \partial)(e^{-\tau\phi}v)$. To simplify it, we introduce some notation:

$$f(x, y) = e^{\tau\phi}L(y, \partial)(e^{-\tau\phi}v), \tag{3.20}$$

$$B_\pm(\xi, \gamma, y) = \sum_{j,k=1}^{n-1}b^\pm_{jk}(y)\xi_j\gamma_k, \qquad \xi \in \mathbb{R}^{n-1}, \tag{3.21}$$

$$\zeta_\pm(\xi, y) = \frac{1}{a^\pm_{nn}(y)}\Big(B_\pm(\xi, \xi, y) + 2is\tau B_\pm(\xi, \gamma, y) - s^2\tau^2 B_\pm(\gamma, \gamma, y)\Big), \tag{3.22}$$

and

$$t_\pm(\xi, y) = \sum_{j=1}^{n-1}\frac{a^\pm_{nj}(y)}{a^\pm_{nn}(y)}\xi_j. \tag{3.23}$$

By (3.19), we have

$$\hat f(\xi, y) = \sum_\pm H_\pm P_\pm\hat v_\pm, \tag{3.24}$$

where

$$P_\pm\hat v_\pm := \big(\partial_y - \tau\varphi'_\pm + it_\pm(\xi + i\tau s\gamma, y)\big)\Big(a^\pm_{nn}(y)\big(\partial_y - \tau\varphi'_\pm + it_\pm(\xi + i\tau s\gamma, y)\big)\hat v_\pm\Big) - a^\pm_{nn}(y)\zeta_\pm(\xi, y)\hat v_\pm. \tag{3.25}$$

Our aim is to estimate $f(x, y)$ or, equivalently, its Fourier transform $\hat f(\xi, y)$. In order to do this, we want to factorize the operators $P_\pm$. For any $z = a + ib$ with $(a, b) \ne (0, 0)$, we define the square root of $z$ by

$$\sqrt{z} = \sqrt{\frac{a + \sqrt{a^2 + b^2}}{2}} + i\,\frac{b}{\sqrt{2\big(a + \sqrt{a^2 + b^2}\big)}}.$$

We remind the reader that the square root $\sqrt{z}$ is defined with a cut along the negative real axis, and we note that $\Re\sqrt{z} \ge 0$; thus, extra care is needed to estimate its derivative. Now we define two operators:

$$E_\pm = \partial_y + it_\pm(\xi + i\tau s\gamma, y) - \big(\tau\varphi'_\pm + \sqrt{\zeta_\pm}\big), \tag{3.26}$$

$$F_\pm = \partial_y + it_\pm(\xi + i\tau s\gamma, y) - \big(\tau\varphi'_\pm - \sqrt{\zeta_\pm}\big). \tag{3.27}$$

With all the definitions given above, we obtain

$$P_+\hat v_+ = E_+\big(a^+_{nn}(y)F_+\hat v_+\big) - \hat v_+\,\partial_y\big(a^+_{nn}(y)\sqrt{\zeta_+}\big), \tag{3.28}$$

$$P_-\hat v_- = F_-\big(a^-_{nn}(y)E_-\hat v_-\big) + \hat v_-\,\partial_y\big(a^-_{nn}(y)\sqrt{\zeta_-}\big). \tag{3.29}$$
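The explicit square root used in the factorization above can be exercised numerically; this is an illustrative sketch (the sample point $z = 3 - 4i$ is an arbitrary choice), confirming that the formula returns the root with nonnegative real part and agrees with the principal branch.

```python
import cmath

def sqrt_formula(z: complex) -> complex:
    """Square root of z = a + ib as defined above (branch cut along the
    negative real axis, so the formula requires a + |z| > 0)."""
    a, b = z.real, z.imag
    r = (a * a + b * b) ** 0.5
    return complex(((a + r) / 2) ** 0.5, b / (2 * (a + r)) ** 0.5)

# Squaring recovers z, the real part is nonnegative, and the value agrees
# with the principal branch computed by cmath.sqrt.
w = sqrt_formula(3 - 4j)
```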

Let us now introduce some other useful notations and estimates that will be used intensively in the sequel. After taking the Fourier transform, the terms on the interface, (3.13) and (3.15), become

$$\eta_0(\xi) := \hat v_+(\xi, 0) - \hat v_-(\xi, 0) = \widehat{e^{\tau\phi(\cdot,0)}\theta_0}(\xi) \tag{3.30}$$

and

$$\begin{aligned}
\eta_1(\xi) := -\widehat{e^{\tau\phi(\cdot,0)}\theta_1}(\xi) &= a^+_{nn}(0)\big(\partial_y\hat v_+(\xi, 0) - \tau\alpha_+\hat v_+(\xi, 0) + it_+(\xi + i\tau s\gamma, 0)\hat v_+(\xi, 0)\big)\\
&\quad - a^-_{nn}(0)\big(\partial_y\hat v_-(\xi, 0) - \tau\alpha_-\hat v_-(\xi, 0) + it_-(\xi + i\tau s\gamma, 0)\hat v_-(\xi, 0)\big). \tag{3.31}
\end{aligned}$$

For simplicity, we denote

$$V_\pm(\xi) := a^\pm_{nn}(0)\big(\partial_y\hat v_\pm(\xi, 0) - \tau\alpha_\pm\hat v_\pm(\xi, 0) + it_\pm(\xi + i\tau s\gamma, 0)\hat v_\pm(\xi, 0)\big), \tag{3.32}$$

so that

$$V_+(\xi) - V_-(\xi) = \eta_1(\xi). \tag{3.33}$$
Moreover, we define

$$m_\pm(\xi, y) := \sqrt{\frac{B_\pm(\xi, \xi, y)}{a^\pm_{nn}(y)}}.$$

From (3.9) we have

$$\lambda_1|\xi|^2 \le B_\pm(\xi, \xi, y) \le \lambda_1^{-1}|\xi|^2, \qquad \forall y \in \mathbb{R},\ \forall \xi \in \mathbb{R}^{n-1}, \tag{3.34}$$

and, from (3.3),

$$|\partial_y B_\pm(\xi, \eta, y)| \le M_1|\xi||\eta|, \qquad \forall \xi, \eta \in \mathbb{R}^{n-1}, \tag{3.35}$$

where $M_1$ depends only on $\lambda_0$ and $M_0$. In a similar way, we list here some useful bounds, easily obtained from (3.9) and (3.3):

$$\lambda_2|\xi| \le m_\pm(\xi, y) \le \lambda_2^{-1}|\xi|, \tag{3.36}$$

$$|\partial_y m_\pm(\xi, y)| \le M_2|\xi|, \tag{3.37}$$

$$|t_\pm(\xi, y)| \le \lambda_3^{-1}|\xi|, \tag{3.38}$$

$$|\partial_y t_\pm(\xi, y)| \le M_3|\xi|, \tag{3.39}$$

$$|\zeta_\pm(\xi, y)| \le (\lambda_0\lambda_1)^{-1}\big(|\xi|^2 + s^2\tau^2\big), \tag{3.40}$$

$$|\partial_y\zeta_\pm(\xi, y)| \le M_4\big(|\xi|^2 + s^2\tau^2\big). \tag{3.41}$$

Here $\lambda_2 = \sqrt{\lambda_0\lambda_1}$ and $\lambda_3$ depend only on $\lambda_0$, while $M_2$, $M_3$ and $M_4$ depend only on $\lambda_0$ and $M_0$.

### 3.2 Derivation of the Carleman estimate for the simple case

The derivation of the Carleman estimate (3.7) is a simple consequence of the auxiliary Proposition 3.1 stated below, which is proved in Section 3.3 via the inverse Fourier transform. We first set

$$L := \sup_{\xi\in\mathbb{R}^{n-1}\setminus\{0\}}\frac{m_+(\xi, 0)}{m_-(\xi, 0)}.$$

Note that, by (3.36), $\lambda_2^2 \le L \le \lambda_2^{-2}$. Now we introduce the fundamental assumption on the coefficients $\alpha_\pm$ in the weight function. As in [LL], we choose positive $\alpha_+$ and $\alpha_-$ such that

$$L < \frac{\alpha_+}{\alpha_-}. \tag{3.42}$$

This choice is conditioned only by $\lambda_0$ (for instance, $\alpha_- = 1$ and $\alpha_+ = 2\lambda_2^{-2}$ satisfy (3.42)). These constants will be fixed. Denote the factor

$$\Lambda = \big(|\xi|^2 + \tau^2\big)^{1/2}.$$
We now state our main tool.

**Proposition 3.1** There exist $\tau_0$, $s_0$, $\rho$, $\beta$ and $C$, depending only on $\lambda_0$ and $M_0$, such that for $\tau \ge \tau_0$, $\operatorname{supp}\hat v_\pm(\xi, \cdot) \subset [-\rho, \rho]$, $s \le s_0 < 1$, we have

$$\begin{aligned}
&\frac{1}{\tau}\sum_\pm\|\partial^2_y\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)} + \frac{\Lambda^2}{\tau}\sum_\pm\|\partial_y\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)} + \frac{\Lambda^4}{\tau}\sum_\pm\|\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)}\\
&\quad + \Lambda\sum_\pm|V_\pm(\xi)|^2 + \Lambda^3\sum_\pm|\hat v_\pm(\xi, 0)|^2\\
&\le C\Big(\sum_\pm\|P_\pm\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)} + \Lambda|\eta_1(\xi)|^2 + \Lambda^3|\eta_0(\xi)|^2\Big). \tag{3.43}
\end{aligned}$$

Here $\mathbb{R}^\pm = \{y \in \mathbb{R} : y \gtrless 0\}$.

**Proof of Theorem 3.1.** Substituting (3.24) and the definitions of $\eta_0$, $\eta_1$ (see (3.30), (3.31)) into the right hand side of (3.43) implies

$$\begin{aligned}
&\frac{1}{\tau}\sum_\pm\|\partial^2_y\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)} + \frac{\Lambda^2}{\tau}\sum_\pm\|\partial_y\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)} + \frac{\Lambda^4}{\tau}\sum_\pm\|\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)}\\
&\quad + \Lambda\sum_\pm|V_\pm(\xi)|^2 + \Lambda^3\sum_\pm|\hat v_\pm(\xi, 0)|^2\\
&\le C\Big(\sum_\pm\|\hat f(\xi, \cdot)\|^2_{L^2(\mathbb{R}^\pm)} + \Lambda\big|\widehat{e^{\tau\phi(\cdot,0)}\theta_1}\big|^2 + \Lambda^3\big|\widehat{e^{\tau\phi(\cdot,0)}\theta_0}\big|^2\Big). \tag{3.44}
\end{aligned}$$

Recalling (3.32), it is not hard to see that

$$\Lambda\sum_\pm|\partial_y\hat v_\pm(\xi, 0)|^2 \le C\Big(\Lambda\sum_\pm|V_\pm(\xi)|^2 + \Lambda^3\sum_\pm|\hat v_\pm(\xi, 0)|^2\Big). \tag{3.45}$$
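For the reader's convenience, here is the step behind (3.45), which the paper leaves implicit: solving (3.32) for the normal derivative gives

$$a^\pm_{nn}(0)\,\partial_y\hat v_\pm(\xi, 0) = V_\pm(\xi) + a^\pm_{nn}(0)\big(\tau\alpha_\pm - it_\pm(\xi + i\tau s\gamma, 0)\big)\hat v_\pm(\xi, 0),$$

and since $a^\pm_{nn}(0) \ge \lambda_0$ by (3.4), while $\tau\alpha_\pm + |t_\pm(\xi + i\tau s\gamma, 0)| \le C(\tau + |\xi|) \le C\Lambda$ by (3.38), squaring and multiplying by $\Lambda$ yields (3.45).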

Since $\Lambda^4 \ge |\xi|^2\tau^2 + |\xi|^4 + \tau^4$, $|\xi|^3 + |\xi|^2\tau + |\xi|\tau^2 + \tau^3 \le C\Lambda^3$, and $\Lambda^3 \le C'(|\xi|^3 + \tau^3)$, by integrating in $\xi$ we can deduce from (3.44) and (3.45) that

$$\begin{aligned}
&\sum_\pm\sum_{k=0}^{2}\tau^{3-2k}\int_{\mathbb{R}^n_\pm}|D^k v_\pm|^2 + \sum_\pm\big[\nabla_x v_\pm(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}} + \sum_\pm\big[\partial_y v_\pm(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\quad + \sum_\pm\tau^2\big[v_\pm(\cdot, 0)\big]^2_{1/2,\mathbb{R}^{n-1}} + \sum_\pm\tau\int_{\mathbb{R}^{n-1}}|\nabla_x v_\pm(x, 0)|^2\,dx\\
&\quad + \sum_\pm\tau\int_{\mathbb{R}^{n-1}}|\partial_y v_\pm(x, 0)|^2\,dx + \sum_\pm\tau^3\int_{\mathbb{R}^{n-1}}|v_\pm(x, 0)|^2\,dx\\
&\le C\Big(\|f\|^2_{L^2(\mathbb{R}^n)} + \big[e^{\tau\phi(\cdot,0)}\theta_1\big]^2_{1/2,\mathbb{R}^{n-1}} + \big[\nabla_x\big(e^{\tau\phi(\cdot,0)}\theta_0\big)\big]^2_{1/2,\mathbb{R}^{n-1}}\\
&\qquad + \tau\int_{\mathbb{R}^{n-1}}e^{2\tau\phi(x,0)}|\theta_1|^2\,dx + \tau^3\int_{\mathbb{R}^{n-1}}e^{2\tau\phi(x,0)}|\theta_0|^2\,dx\Big). \tag{3.46}
\end{aligned}$$

Replacing $v_\pm = e^{\tau\phi_\pm}w_\pm$ in (3.46) immediately leads to (3.7). □

### 3.3 Proof of Proposition 3.1

Let $\kappa$ be the positive number

$$\kappa = \frac{1}{2}\Big(1 - \frac{L\alpha_-}{\alpha_+}\Big) \tag{3.47}$$

depending only on $\lambda_0$ and $M_0$; note that, thanks to (3.42), $\kappa \in (0, 1/2)$. The proof of Proposition 3.1 will be divided into three cases:

$$\tau \ge \frac{\lambda_2^2|\xi|}{2s_0}, \qquad \frac{m_+(\xi, 0)}{(1 - \kappa)\alpha_+} \le \tau \le \frac{\lambda_2^2|\xi|}{2s_0}, \qquad \tau \le \frac{m_+(\xi, 0)}{(1 - \kappa)\alpha_+}.$$

Recall that $\lambda_2 = \sqrt{\lambda_0\lambda_1}$ (from (3.36)) depends only on $\lambda_0$. Of course, we first choose a small $s_0 < 1$, depending on $\lambda_0$ and $M_0$ only, such that

$$\frac{m_+(\xi, 0)}{(1 - \kappa)\alpha_+} \le \frac{\lambda_2^2|\xi|}{2s_0}, \qquad \forall\,\xi \in \mathbb{R}^{n-1}.$$
A smaller value s_{0} will be chosen later in the proof.

We need to introduce some further notation. First of all, let us denote by $P^0_\pm$, $E^0_\pm$ and $F^0_\pm$ the operators defined by (3.25), (3.26) and (3.27), respectively, in the special case $s = 0$. We also give names to the following functions, which will be used throughout the proof:

$$\omega_+(\xi, y) = a^+_{nn}(y)F_+\hat v_+(\xi, y), \qquad \omega_-(\xi, y) = a^-_{nn}(y)E_-\hat v_-(\xi, y), \tag{3.48}$$

and, for the special case $s = 0$,

$$\omega^0_+(\xi, y) = a^+_{nn}(y)F^0_+\hat v_+(\xi, y), \qquad \omega^0_-(\xi, y) = a^-_{nn}(y)E^0_-\hat v_-(\xi, y). \tag{3.49}$$

**Case 1:**

$$\tau \ge \frac{\lambda_2^2|\xi|}{2s_0}. \tag{3.50}$$

Note that, in this case, we have $|\xi| \le 2\lambda_2^{-2}s_0\tau$, which implies

$$\tau \le \Lambda \le \sqrt{5}\,\lambda_2^{-2}\tau. \tag{3.51}$$
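The upper bound in (3.51) is a one-line computation (using $s_0 < 1$ and $\lambda_2 \le 1$, which follows from $\lambda_1 \le \lambda_0 \le 1$):

$$\Lambda^2 = |\xi|^2 + \tau^2 \le \big(4\lambda_2^{-4}s_0^2 + 1\big)\tau^2 \le 5\lambda_2^{-4}\tau^2,$$

while $\Lambda \ge \tau$ is immediate from the definition of $\Lambda$.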

We will need several lemmas. In the first, we estimate the difference $P_\pm\hat v_\pm - P^0_\pm\hat v_\pm$.
**Lemma 3.2** Let $\tau \ge 1$ and assume (3.50). Then we have

$$|P_\pm\hat v_\pm(\xi, y) - P^0_\pm\hat v_\pm(\xi, y)| \le Cs\tau\big(\tau(\alpha_\pm + 1 + \beta|y|)|\hat v_\pm(\xi, y)| + |\partial_y\hat v_\pm(\xi, y)|\big), \tag{3.52}$$

where $C$ depends only on $\lambda_0$ and $M_0$.

**Proof.** It should be noted that

$$\zeta_\pm(\xi, y)\big|_{s=0} = \frac{B_\pm(\xi, \xi, y)}{a^\pm_{nn}(y)}.$$

By simple calculations, and dropping the subscripts $\pm$ for the sake of shortness, we can write

$$P\hat v(\xi, y) - P^0\hat v(\xi, y) = I_1 + I_2 + I_3, \tag{3.53}$$

where

$$I_1 = \big(it(\xi + i\tau s\gamma, y) - it(\xi, y)\big)\,a_{nn}(y)\big(\partial_y - \tau\varphi' + it(\xi + i\tau s\gamma, y)\big)\hat v,$$

$$I_2 = \big(\partial_y - \tau\varphi' + it(\xi, y)\big)\Big(a_{nn}(y)\big(it(\xi + i\tau s\gamma, y) - it(\xi, y)\big)\hat v\Big),$$

and

$$I_3 = -\big(a_{nn}(y)\zeta(\xi, y) - B(\xi, \xi, y)\big)\hat v.$$

By the linearity of $t$ with respect to its first argument (see (3.23)) and by (3.38), we have

$$|t(\xi + i\tau s\gamma, y) - t(\xi, y)| = |t(i\tau s\gamma, y)| \le \lambda_3^{-1}s\tau,$$

which, together with (3.2) and (3.50), gives the estimate

$$|I_1| \le \lambda_3^{-1}\lambda_0^{-1}s\tau\big\{|\partial_y\hat v| + \tau(\alpha_\pm + \beta|y|)|\hat v| + \lambda_3^{-1}(|\xi| + s\tau)|\hat v|\big\} \le Cs\tau\big\{|\partial_y\hat v| + [\tau(\alpha_\pm + \beta|y|) + s\tau]|\hat v|\big\}, \tag{3.54}$$

where $C$ depends on $\lambda_0$ only. On the other hand, by the linearity of $t$ and by (3.39), we obtain

$$|\partial_y\big(t(\xi + i\tau s\gamma, y) - t(\xi, y)\big)| = |\partial_y t(i\tau s\gamma, y)| \le M_3 s\tau,$$

which, together with (3.2), (3.3) and (3.50), gives the estimate

$$|I_2| \le Cs\tau\big\{|\partial_y\hat v| + [\tau(\alpha_\pm + \beta|y|) + s\tau]|\hat v|\big\}, \tag{3.55}$$

where $C$ depends on $\lambda_0$ and $M_0$ only. Finally, by (3.22), (3.34) and (3.50),

$$|I_3| = \big|2is\tau B(\xi, \gamma, y) - s^2\tau^2 B(\gamma, \gamma, y)\big|\,|\hat v| \le Cs\tau^2|\hat v|, \tag{3.56}$$

where $C$ depends only on $\lambda_0$. Putting together (3.53), (3.54), (3.55) and (3.56) gives (3.52). □

Lemma 3.2 allows us to estimate $\|P^0_\pm\hat v_\pm(\xi, \cdot)\|_{L^2(\mathbb{R}_\pm)}$ instead of $\|P_\pm\hat v_\pm(\xi, \cdot)\|_{L^2(\mathbb{R}_\pm)}$. Let us now go further and note that, similarly to (3.28) and (3.29), we have

$$P^0_+\hat v_+ = E^0_+\big(a^+_{nn}(y)F^0_+\hat v_+\big) - \hat v_+\,\partial_y\big(a^+_{nn}(y)m_+(\xi, y)\big),$$

$$P^0_-\hat v_- = F^0_-\big(a^-_{nn}(y)E^0_-\hat v_-\big) + \hat v_-\,\partial_y\big(a^-_{nn}(y)m_-(\xi, y)\big).$$

We can easily obtain, from (3.3) and (3.37), that

$$\big|P^0_+\hat v_+ - E^0_+\big(a^+_{nn}(y)F^0_+\hat v_+\big)\big| \le C|\xi||\hat v_+| \tag{3.57}$$

and

$$\big|P^0_-\hat v_- - F^0_-\big(a^-_{nn}(y)E^0_-\hat v_-\big)\big| \le C|\xi||\hat v_-|, \tag{3.58}$$

where $C$ depends only on $\lambda_0$ and $M_0$.

**Lemma 3.3** Let $\tau \ge 1$ and assume (3.50). There exists a positive constant $C$ depending only on $\lambda_0$ and $M_0$ such that, if $s_0 \le 1/C$, then we have

$$\Lambda\big|a^+_{nn}(0)F^0_+\hat v_+(\xi, 0)\big|^2 + \Lambda^3|\hat v_+(\xi, 0)|^2 + \Lambda^4\|\hat v_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} + \Lambda^2\|\partial_y\hat v_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} \le C\|P_+\hat v_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} \tag{3.59}$$

and

$$-\Lambda\big|a^-_{nn}(0)E^0_-\hat v_-(\xi, 0)\big|^2 - \Lambda^3|\hat v_-(\xi, 0)|^2 + \Lambda^4\|\hat v_-(\xi, \cdot)\|^2_{L^2(\mathbb{R}_-)} + \Lambda^2\|\partial_y\hat v_-(\xi, \cdot)\|^2_{L^2(\mathbb{R}_-)} \le C\|P_-\hat v_-(\xi, \cdot)\|^2_{L^2(\mathbb{R}_-)}, \tag{3.60}$$

where $\operatorname{supp}(\hat v_+(\xi, \cdot)) \subset [0, \frac{1}{\beta}]$ and $\operatorname{supp}(\hat v_-(\xi, \cdot)) \subset [-\frac{\alpha_-}{2\beta}, 0]$.

**Proof.** Since $\operatorname{supp}\hat v_+(\xi, \cdot)$ is compact, $\hat v_+(\xi, y) \equiv 0$ when $|y|$ is large, and the same holds for the function $\omega^0_+(\xi, y)$ defined in (3.49). We now compute

$$\begin{aligned}
\|E^0_+\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} &= \int_0^\infty\big|\partial_y\omega^0_+(\xi, y) + it_+(\xi, y)\omega^0_+(\xi, y)\big|^2\,dy + \int_0^\infty\big[\tau\alpha_+ + \tau\beta y + m_+(\xi, y)\big]^2|\omega^0_+(\xi, y)|^2\,dy\\
&\quad - 2\Re\int_0^\infty\big[\tau\alpha_+ + \tau\beta y + m_+(\xi, y)\big]\bar\omega^0_+(\xi, y)\big[\partial_y\omega^0_+(\xi, y) + it_+(\xi, y)\omega^0_+(\xi, y)\big]\,dy. \tag{3.61}
\end{aligned}$$

Integrating by parts, we easily get

$$\begin{aligned}
&-2\Re\int_0^\infty\big[\tau\alpha_+ + \tau\beta y + m_+(\xi, y)\big]\bar\omega^0_+(\xi, y)\big[\partial_y\omega^0_+(\xi, y) + it_+(\xi, y)\omega^0_+(\xi, y)\big]\,dy\\
&= \big[\tau\alpha_+ + m_+(\xi, 0)\big]|\omega^0_+(\xi, 0)|^2 + \int_0^\infty\big[\tau\beta + \partial_y m_+(\xi, y)\big]|\omega^0_+(\xi, y)|^2\,dy. \tag{3.62}
\end{aligned}$$
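Unpacking (3.62), a detail left implicit in the computation: writing $g(y) := \tau\alpha_+ + \tau\beta y + m_+(\xi, y)$, which is real valued, and using that $t_+$ is real, so that $\Re\big(it_+|\omega^0_+|^2\big) = 0$, the integrand reduces to a total derivative:

$$-2\Re\big[g\,\bar\omega^0_+\big(\partial_y\omega^0_+ + it_+\omega^0_+\big)\big] = -g\,\partial_y|\omega^0_+|^2, \qquad -\int_0^\infty g\,\partial_y|\omega^0_+|^2\,dy = g(0)|\omega^0_+(\xi, 0)|^2 + \int_0^\infty g'(y)|\omega^0_+(\xi, y)|^2\,dy,$$

where the boundary term at $+\infty$ vanishes because $\omega^0_+(\xi, \cdot)$ is compactly supported, and $g' = \tau\beta + \partial_y m_+$.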

By (3.50) and (3.37), we have that

$$\tau\beta + \partial_y m_+(\xi, y) \ge \tau\beta - M_2|\xi| \ge \tau\beta - 2\tau s_0\lambda_2^{-2}M_2 \ge \tau\beta/2 \ge 0 \tag{3.63}$$

provided $0 < s_0 \le \frac{\beta\lambda_2^2}{4M_2}$. Combining (3.51), (3.61), (3.62) and (3.63) yields

$$\begin{aligned}
\|E^0_+\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} &\ge \int_0^\infty\big[\tau\alpha_+ + \tau\beta y + m_+(\xi, y)\big]^2|\omega^0_+(\xi, y)|^2\,dy + \big[\tau\alpha_+ + m_+(\xi, 0)\big]|\omega^0_+(\xi, 0)|^2\\
&\ge C^{-1}\Lambda^2\int_0^\infty|\omega^0_+(\xi, y)|^2\,dy + C^{-1}\Lambda|\omega^0_+(\xi, 0)|^2, \tag{3.64}
\end{aligned}$$

where $C$ depends only on $\lambda_0$.
Similarly, we have that

$$\begin{aligned}
\lambda_0^{-2}\|\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} &\ge \int_0^\infty\big|\partial_y\hat v_+(\xi, y) + it_+(\xi, y)\hat v_+(\xi, y)\big|^2\,dy\\
&\quad + \int_0^\infty\big[\tau\alpha_+ + \tau\beta y - m_+(\xi, y)\big]^2|\hat v_+(\xi, y)|^2\,dy + \big[\tau\alpha_+ - m_+(\xi, 0)\big]|\hat v_+(\xi, 0)|^2\\
&\quad + \int_0^\infty\big[\tau\beta - \partial_y m_+(\xi, y)\big]|\hat v_+(\xi, y)|^2\,dy. \tag{3.65}
\end{aligned}$$

The assumption (3.50) and (3.36) imply

$$\tau\alpha_+ + \tau\beta y - m_+(\xi, y) \ge \tau\alpha_+ - \lambda_2^{-1}|\xi| \ge \tau\alpha_+ - 2\lambda_2^{-3}\tau s_0 \ge \tau\alpha_+/2$$

provided $0 < s_0 \le \frac{\alpha_+\lambda_2^3}{4}$. Thus, by choosing

$$0 < s_0 \le \min\Big(1,\ \frac{\beta\lambda_2^2}{4M_2},\ \frac{\alpha_+\lambda_2^3}{4}\Big),$$

we obtain from (3.63) and (3.65)

$$C\|\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} \ge \int_0^\infty\big|\partial_y\hat v_+(\xi, y) + it_+(\xi, y)\hat v_+(\xi, y)\big|^2\,dy + \Lambda^2\int_0^\infty|\hat v_+(\xi, y)|^2\,dy + \Lambda|\hat v_+(\xi, 0)|^2. \tag{3.66}$$
Additionally, we can see that, for any $0 < \varepsilon < 1$,

$$\begin{aligned}
\int_0^\infty\big|\partial_y\hat v_+(\xi, y) + it_+(\xi, y)\hat v_+(\xi, y)\big|^2\,dy &\ge \varepsilon\int_0^\infty\Big(|\partial_y\hat v_+(\xi, y)|^2 - 2|\partial_y\hat v_+(\xi, y)||t_+(\xi, y)\hat v_+(\xi, y)| + |t_+(\xi, y)\hat v_+(\xi, y)|^2\Big)\,dy\\
&\ge \varepsilon\int_0^\infty\Big(\frac{1}{2}|\partial_y\hat v_+(\xi, y)|^2 - |t_+(\xi, y)|^2|\hat v_+(\xi, y)|^2\Big)\,dy\\
&\ge \frac{\varepsilon}{2}\int_0^\infty|\partial_y\hat v_+(\xi, y)|^2\,dy - \lambda_3^{-2}\varepsilon|\xi|^2\int_0^\infty|\hat v_+(\xi, y)|^2\,dy. \tag{3.67}
\end{aligned}$$

Choosing $\varepsilon$ sufficiently small, we obtain, from (3.66) and (3.67),

$$C\|\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} \ge \int_0^\infty|\partial_y\hat v_+(\xi, y)|^2\,dy + \Lambda^2\int_0^\infty|\hat v_+(\xi, y)|^2\,dy + \Lambda|\hat v_+(\xi, 0)|^2, \tag{3.68}$$

where $C$ depends only on $\lambda_0$ and $M_0$.

Combining (3.64) and (3.68) yields

$$\Lambda^2\int_0^\infty|\partial_y\hat v_+(\xi, y)|^2 + \Lambda^4\int_0^\infty|\hat v_+(\xi, y)|^2 + \Lambda^3|\hat v_+(\xi, 0)|^2 + \Lambda|\omega^0_+(\xi, 0)|^2 \le C\|E^0_+\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)}, \tag{3.69}$$

where $C$ depends only on $\lambda_0$ and $M_0$. From (3.52), since $\operatorname{supp}(\hat v_+(\xi, \cdot)) \subset [0, 1/\beta]$ and (3.50) holds, we have

$$\|P^0_+\hat v_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} \le 2\|P_+\hat v_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} + Cs_0^2\Big(\Lambda^2\int_0^\infty|\partial_y\hat v_+(\xi, y)|^2 + \Lambda^4\int_0^\infty|\hat v_+(\xi, y)|^2\Big). \tag{3.70}$$

Moreover, by (3.57) and (3.50),

$$\|E^0_+\omega^0_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} \le 2\|P^0_+\hat v_+(\xi, \cdot)\|^2_{L^2(\mathbb{R}_+)} + Cs_0^2\Lambda^2\int_0^\infty|\hat v_+(\xi, y)|^2. \tag{3.71}$$

Finally, by (3.69), (3.70) and (3.71) we get (3.59), provided $s_0$ is small enough.

Now we proceed to prove (3.60). Applying the same arguments leading to (3.62), we have that

$$\begin{aligned}
\|F^0_-\omega^0_-(\xi, \cdot)\|^2_{L^2(\mathbb{R}_-)} &\ge \int_{-\infty}^0\big[\tau\alpha_- + \tau\beta y - m_-(\xi, y)\big]^2|\omega^0_-(\xi, y)|^2\,dy - \big[\tau\alpha_- - m_-(\xi, 0)\big]|\omega^0_-(\xi, 0)|^2\\
&\quad + \int_{-\infty}^0\big[\tau\beta - \partial_y m_-(\xi, y)\big]|\omega^0_-(\xi, y)|^2\,dy. \tag{3.72}
\end{aligned}$$

By (3.36) and (3.50), and since $\operatorname{supp}(\hat v_-(\xi, \cdot)) \subset [-\frac{\alpha_-}{2\beta}, 0]$, we can see that

$$\tau\alpha_- + \tau\beta y - m_-(\xi, y) \ge \tau\alpha_-/2 - \lambda_2^{-1}|\xi| \ge \tau\alpha_-/2 - 2\lambda_2^{-3}\tau s_0 \ge \tau\alpha_-/4 \tag{3.73}$$

provided $0 < s_0 \le \frac{\alpha_-\lambda_2^3}{8}$. From (3.72) and (3.73), it follows that

$$\|F^0_-\omega^0_-(\xi, \cdot)\|^2_{L^2(\mathbb{R}_-)} \ge \frac{\alpha_-^2}{16}\tau^2\int_{-\infty}^0|\omega^0_-(\xi, y)|^2\,dy - \tau\alpha_-|\omega^0_-(\xi, 0)|^2 \ge C^{-1}\Lambda^2\int_{-\infty}^0|\omega^0_-(\xi, y)|^2\,dy - C\Lambda|\omega^0_-(\xi, 0)|^2. \tag{3.74}$$

Arguing as before and recalling (3.51), we obtain (3.60). □

We now take into account the transmission conditions.

**Lemma 3.4** Let $\tau \ge 1$ and assume (3.50). There exists a positive constant $C$ depending only on $\lambda_0$ and $M_0$ such that if $s_0 \le 1/C$, then

$$\begin{aligned}
&\Lambda\sum_\pm|V_\pm(\xi)|^2 + \Lambda^3\sum_\pm|\hat v_\pm(\xi, 0)|^2 + \Lambda^4\sum_\pm\|\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}_\pm)} + \Lambda^2\sum_\pm\|\partial_y\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}_\pm)}\\
&\le C\sum_\pm\|P_\pm\hat v_\pm(\xi, \cdot)\|^2_{L^2(\mathbb{R}_\pm)} + C\Lambda|\eta_1(\xi)|^2 + C\Lambda^3|\eta_0(\xi)|^2, \tag{3.75}
\end{aligned}$$

where $\operatorname{supp}(\hat v_\pm(\xi, \cdot)) \subset [-\frac{c_0}{2\beta}, \frac{c_0}{\beta}]$ with $c_0 = \min(\alpha_-, 1)$.