ATTENUATION COEFFICIENTS

VICTOR ISAKOV, RU-YU LAI, AND JENN-NAN WANG

Abstract. In this work we consider the stability of recovering the conductivity and attenuation coefficients of the stationary Maxwell and Schrödinger equations from a complete set of (Cauchy) boundary data. By using complex geometrical optics solutions we derive bounds which can be viewed as evidence of increasing stability in these inverse problems as the frequency grows.

1. Introduction

The main goal of this paper is to demonstrate increasing stability of the recovery of the conductivity coefficient from results of all possible electromagnetic boundary measurements when the frequency of the stationary waves is growing. When the frequency is large the Maxwell system cannot be reduced to a scalar conductivity equation, so we have to handle the full Maxwell system. We do it by reducing the first order system to a vectorial Schrödinger type equation containing the conductivity coefficient in a matrix potential coefficient as in [3], [13], and use ideas in [7], [9] to derive increasing stability for the potential coefficient involving variable attenuation.

Use of the full system seems to be crucial, since increasing stability for the conductivity in the scalar equation does not seem to hold. We first consider the simpler case of the scalar equation and extend the first and third authors' result [9] on the increasing stability estimate from the case of a constant attenuation to a variable one. Observe that the method used in [9] works only for constant attenuation. Extending this method to the case with variable attenuation is not an obvious routine, so the result for the scalar equation is of independent interest.

The problem of recovering the conductivity from boundary measurements has been well studied since the 1980s. The uniqueness issue was first settled by Sylvester and Uhlmann in [15], where they constructed complex geometrical optics (CGO) solutions and proved global uniqueness of the potential in the Schrödinger equation. A logarithmic stability estimate was obtained by Alessandrini [1]. Later, the log-type stability was shown to be optimal by Mandache [11]. The logarithmic stability makes it impossible to design reconstruction algorithms with high resolution in practice, since small errors in measurements will result in exponentially large errors in the reconstruction of the target material parameters. Nonetheless, in some cases, it has been observed numerically that the stability increases if one increases the frequency in the equation. The study of the increasing stability phenomenon has attracted a lot of attention recently. There are several results [7], [8], [9] and [12] which rigorously demonstrated the increasing stability behaviours in different settings.

For the Schrödinger equation without attenuation, the first author in [7] derived bounds in different ranges of the frequency which can be viewed as evidence of increasing stability behaviour. The idea in [7] is to use complex- and real-valued geometrical optics

The first and the second author were supported in part by the National Science Foundation.

The third author was supported in part by the MOST102-2115-M-002-009-MY3.


solutions to control the low and high frequency ranges, respectively. The proof was simplified in [8] by using only complex-valued geometrical optics solutions. Recently, the first and third authors showed in [9] that the increasing stability also holds for the same problem with constant attenuation. The proofs in [9] use complex and bounded geometrical optics solutions.

Continuing the research in [9], we show in this work that even in the case of variable attenuation the stability improves when we increase the frequency. More precisely, the stability estimate consists of two parts: a logarithmic part and a Lipschitz part (see Theorem 2.1). In the high frequency regime, the logarithmic part becomes weaker and the stability behaves like Lipschitz continuity. Moreover, the constant of the Lipschitz part grows only polynomially in the frequency. We would like to point out that in [12] an increasing stability phenomenon was proved for the acoustic equation, in which the constant associated with the Lipschitz part grows exponentially in the frequency. In order to obtain polynomially growing constants as in [7], [8] and [9], it seems that the smallness assumption on the attenuation is needed.

The main result of the paper is a proof of increasing stability for the Maxwell equations in a conductive medium. The first global uniqueness result for all three electromagnetic parameters of an isotropic medium was proved in [14]. We refer the reader to the recent results in [2], where uniqueness and (log-type) stability were established using local data. Since our aim here is to demonstrate increasing stability for the conductivity, we will assume that both the electric permittivity and the magnetic permeability are known constants. As in the previous results on increasing stability, our proof relies on CGO solutions; here we use the CGO solutions constructed in [3]. Theorem 2.2 shows that the logarithmic part becomes weaker and the Lipschitz part becomes dominant when the frequency is increasing.

In order to derive a polynomially frequency-dependent constant in the Lipschitz part, we impose the smallness assumption on the conductivity. Of course, it is an interesting question whether one can remove the smallness assumption and still obtain a polynomially frequency-dependent constant in the Lipschitz part.

The paper is structured as follows. Section 2 introduces the mathematical settings for the Schrödinger and the Maxwell equations and presents our main results. The proof of the stability estimates for the Schrödinger equation is carried out in Section 3. In Section 4 we derive the stability estimate for the Maxwell equations.

2. Preliminaries and main results

Let Ω be a subdomain of the unit ball B in R^{3} with Lipschitz boundary ∂Ω. By H^{s}(Ω) we denote the Sobolev spaces of functions on Ω with the standard norm ‖ · ‖_{(s)}(Ω). These notations also hold for surfaces instead of Ω and for vector- and matrix-valued functions.

2.1. Schrödinger equation with attenuation. We consider the Schrödinger equation with attenuation

−∆u − ω^{2}u + qu = 0 in Ω, q = iωσ + c, (2.1)
where σ, c are real-valued bounded measurable functions in Ω. Since the boundary value problem for (2.1) does not necessarily have a unique solution, for the study of the inverse problem we consider the Cauchy data set defined by

C(q) = {(u, ∂_ν u)|_{∂Ω} ∈ H^{1/2}(∂Ω) × H^{−1/2}(∂Ω) : u is an H^{1}(Ω)-solution to (2.1)}.

Hereafter, ν is the unit outer normal vector to ∂Ω. Let q_j = iωσ_j + c_j. To measure the distance between two Cauchy data sets, we define

dist(C(q_1), C(q_2)) = sup_{j≠k} sup_{(f_j,g_j)∈C(q_j)} inf_{(f_k,g_k)∈C(q_k)} ‖(f_j, g_j) − (f_k, g_k)‖_{(1/2,−1/2)}(∂Ω) / ‖(f_j, g_j)‖_{(1/2,−1/2)}(∂Ω),

where (f_j, g_j) = (u_j, ∂_ν u_j), j = 1, 2, are the Cauchy data of solutions u_j to the equation (2.1) with q = q_j, and

‖(f, g)‖_{(1/2,−1/2)}(∂Ω) = (‖f‖^{2}_{(1/2)}(∂Ω) + ‖g‖^{2}_{(−1/2)}(∂Ω))^{1/2}.

Denote

ϵ = dist(C(q_1), C(q_2)),   E = −log ϵ.

With these notations, the bound indicating the increasing stability phenomenon for the Schrödinger equation with attenuation can now be stated as follows.

Theorem 2.1. Let s > 3/2 and q_j = iωσ_j + c_j, j = 1, 2. Suppose that supp(q_1 − q_2) ⊂ Ω and c_j, σ_j ∈ H^{2s}(Ω) satisfy ‖c_j‖_{(2s)}(Ω) ≤ M for some constant M > 0. Let ω > 1, E > 1. Then there exist constants m > 0 depending on s, M and C depending on s, Ω, M, such that if

‖σ_j‖_{(2s)}(Ω) ≤ m,

then

‖q_1 − q_2‖_{(−s)}(Ω) ≤ C(ω^{2}ϵ + ω(ω + E)^{−(2s−3)/2}). (2.2)

It is an immediate consequence of Theorem 2.1 that if σ_1 = σ_2 = b in Ω, where b is a fixed constant with |b| ≤ m, then one gets a bound similar to [9].

When s > 5/2, the logarithmic term is decreasing with respect to ω, so the bound (2.2) implies improving (better than logarithmic) stability of the recovery of q from complete boundary data. We discuss this in detail.

Obviously,

ω^{2}ϵ + ω(ω + E)^{−(2s−3)/2} ≤ ω^{2}ϵ + (ω + E)^{−(2s−5)/2} ≤ ω^{2}ϵ + (ω^{2} + E^{2})^{−(2s−5)/4}.

The function F(t) = tϵ + (t + E^{2})^{−(2s−5)/4}, with derivative F′(t) = ϵ − ((2s−5)/4)(t + E^{2})^{−(2s−1)/4}, is increasing on the interval [1, +∞) if a) ((2s−5)/4)(1 + E^{2})^{−(2s−1)/4} ≤ ϵ, and attains its minimum where ((2s−5)/4)(t + E^{2})^{−(2s−1)/4} = ϵ if b) ϵ < ((2s−5)/4)(1 + E^{2})^{−(2s−1)/4}. In the exceptional case a) the minimal value of F on [1, +∞) is

ϵ + (1 + E^{2})^{−(2s−5)/4} ≤ ϵ + (4ϵ/(2s−5))^{(2s−5)/(2s−1)}.

In case b) the minimal value is

ϵ((4ϵ/(2s−5))^{−4/(2s−1)} − E^{2}) + (4ϵ/(2s−5))^{(2s−5)/(2s−1)}
≤ ϵ(4ϵ/(2s−5))^{−4/(2s−1)} + (4ϵ/(2s−5))^{(2s−5)/(2s−1)}
= ((4/(2s−5))^{−4/(2s−1)} + (4/(2s−5))^{(2s−5)/(2s−1)}) ϵ^{(2s−5)/(2s−1)}.

In both cases, given the error level ϵ one can choose the frequency ω to guarantee at least Hölder stability for q, which is far better than logarithmic stability. Moreover, this analysis of the stability estimate (2.2) suggests that for a given ϵ there is an optimal choice of ω for the best reconstruction of q.
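To make the optimization concrete, here is a minimal numerical sketch (the values of s and ϵ are illustrative assumptions, not taken from the paper) that minimizes the majorant F(t) = tϵ + (t + E²)^{−(2s−5)/4} over t = ω² and compares the stationary point with a grid search:

```python
import math

def F(t, eps, E, s):
    # majorant of the bound (2.2), written in the variable t = omega^2
    return t * eps + (t + E**2) ** (-(2 * s - 5) / 4)

def t_star(eps, E, s):
    # stationary point: F'(t) = eps - ((2s-5)/4) (t + E^2)^(-(2s-1)/4) = 0
    return ((2 * s - 5) / (4 * eps)) ** (4 / (2 * s - 1)) - E**2

# illustrative values (assumed): s = 3 and measurement error eps = 1e-6
s, eps = 3.0, 1e-6
E = -math.log(eps)
ts = t_star(eps, E, s)        # optimal t = omega^2 (here case b applies, ts > 1)
omega_opt = math.sqrt(ts)
grid_min = min(F(1.0 + k, eps, E, s) for k in range(0, 200000, 50))
```

With these values the optimal frequency is of order 10², and F(t_star) does not exceed the minimum over the coarse grid, illustrating that a best reconstruction frequency exists.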

2.2. Maxwell equations. Let E and H denote the electric and magnetic fields in a medium with electric permittivity ε, magnetic permeability µ, and electric conductivity σ. The Maxwell equations at frequency ω are

∇ × H + iωγE = 0,

∇ × E − iωµH = 0, γ = ε + iσ/ω. (2.3)

We define the function space

H(curl; Ω) = {v ∈ H^{0}(Ω) : ∇ × v ∈ H^{0}(Ω)}

with the norm

‖v‖_{H(curl;Ω)} = ‖v‖_{(0)}(Ω) + ‖∇ × v‖_{(0)}(Ω).

For any v ∈ H(curl; Ω) the tangential trace of v, ν × v, can be defined as an element of the space H^{−1/2}(∂Ω). Namely, for any w ∈ H^{1}(Ω),

⟨ν × v, w⟩ = ∫_Ω (∇ × v) · w − ∫_Ω v · (∇ × w)

(see [3]). We define the space

T H(∂Ω) = {w ∈ H^{−1/2}(∂Ω) : there exists v ∈ H(curl; Ω) with ν × v = w}

with the norm

‖w‖_{T H(∂Ω)} = inf{‖v‖_{H(curl;Ω)} : ν × v = w}.

The Cauchy data set corresponding to (2.3) is defined by
C = {(ν × E|_{∂Ω}, ν × H|_{∂Ω}) ∈ (T H(∂Ω))^{2} : (E, H) is an (H(curl; Ω))^{2}-solution of (2.3)}.

In this work we assume that ε = µ = 1. Suppose that C_1 and C_2 are two Cauchy data sets of (2.3) corresponding to the conductivities σ_1 and σ_2, respectively. We measure the distance between the Cauchy data sets C_1, C_2 of the Maxwell equations as follows:

dist(C_1, C_2) = sup_{j≠k} sup_{(F(j),G(j))∈C_j} inf_{(F(k),G(k))∈C_k} ‖(F(j), G(j)) − (F(k), G(k))‖_{(T H(∂Ω))^{2}} / ‖F(j)‖_{T H(∂Ω)},

where F(j) = ν × E(j), G(j) = ν × H(j) on ∂Ω and (E(j), H(j)) is an (H(curl; Ω))^{2}-solution to (2.3) with σ = σ_j. Likewise, we denote

ϵ = dist(C_1, C_2),   E = −log ϵ.

Our main result is the following stability estimate for the Maxwell equations.

Theorem 2.2. Let s > 3/2 be an integer. Suppose that supp(σ_1 − σ_2) ⊂ Ω. Assume that E > c (depending on s, Ω) and ω > 1. Then there exist constants m and C depending on s, Ω such that if

‖σ_j‖_{(2s+2)}(Ω) ≤ m (2.4)

for j = 1, 2, then

‖σ_1 − σ_2‖_{(−s)}(Ω) ≤ Cω^{−1}(ω^{2} + E^{2})^{3/2}ϵ^{1/2} + C(ω + E)^{−(2s−3)/2} + C(ω + E)^{−1}. (2.5)

As above, Theorem 2.2 implies increasing stability for the recovery of the conductivity coefficient.

3. Stability estimates for the Schrödinger equation

In this section we prove Theorem 2.1. We will use some ideas from [8]. C will stand for a generic constant depending only on s, Ω, M. First, following the arguments in [5], it is not hard to see that the following inequality holds.

Proposition 3.1. Let u_1 and u_2 be solutions of (2.1) corresponding to the coefficients (σ_1, c_1) and (σ_2, c_2), respectively. Then

|∫_Ω (q_1 − q_2)u_1u_2| ≤ ϵ ‖(u_1, ∂_ν u_1)‖_{(1/2,−1/2)}(∂Ω) ‖(u_2, ∂_ν u_2)‖_{(1/2,−1/2)}(∂Ω). (3.1)
We use CGO solutions for (2.1).

Proposition 3.2. Let ζ ∈ C^{3} satisfy ζ · ζ = ω^{2}. Let s > 3/2. Then there exist constants C_0 and C_1, depending on s and Ω, such that if |ζ| > C_0‖q‖_{(2s)}(Ω), then there exists a solution

u(x) = e^{iζ·x}(1 + ψ(x))

of the equation (2.1) with

‖ψ‖_{(2s)}(Ω) ≤ (C_1/|ζ|)‖q‖_{(2s)}(Ω).

Proof. By extension theorems for Sobolev spaces there is an extension of q from the Lipschitz domain Ω onto the unit ball B (also denoted by q) with ‖q‖_{(2s)}(B) ≤ C(Ω)‖q‖_{(2s)}(Ω). We can assume that q is compactly supported in B.

We observe that since e^{−iζ·x}(−∆)(e^{iζ·x}(1 + ψ(x))) = (−∆ − 2iζ · ∇ + ζ · ζ)(1 + ψ(x)), the function u = e^{iζ·x}(1 + ψ) is a solution of (2.1) if and only if the remainder ψ solves

(−∆ − 2iζ · ∇)ψ = −q(1 + ψ) in Ω. (3.2)

As known from [15], there are constants C_0, C_1, depending on s, such that if C_0‖q‖_{(2s)}(B) < |ζ|, there exists a solution ψ to (3.2) with

‖ψ‖_{(2s)}(B) ≤ (C_1/|ζ|)‖q‖_{(2s)}(B).

Let ξ ∈ R^{3}. We select e(j) ∈ R^{3} satisfying |e(1)| = |e(2)| = 1 and e(j) · ξ = e(1) · e(2) = 0 for j = 1, 2. Let R > 0. We choose

ζ(1) = −(1/2)ξ + i(R^{2}/2)^{1/2}e(1) + (ω^{2} + R^{2}/2 − |ξ|^{2}/4)^{1/2}e(2),
ζ(2) = −(1/2)ξ − i(R^{2}/2)^{1/2}e(1) − (ω^{2} + R^{2}/2 − |ξ|^{2}/4)^{1/2}e(2), (3.3)

provided ω^{2} + R^{2}/2 ≥ |ξ|^{2}/4. It is clear that

ζ(1) + ζ(2) = −ξ,   ζ(j) · ζ(j) = ω^{2},   |ζ(j)|^{2} = R^{2} + ω^{2}   for j = 1, 2,

which implies that √2|ζ(j)| ≥ R + ω. We observe that if

R + ω ≥ √2 C_0(mω + M), (3.4)

then one has

|ζ(j)| ≥ C_0(mω + M) > C_0‖iωσ_j + c_j‖_{(2s)}(Ω).

Note that when

R ≥ √2 C_0M and √2 C_0m ≤ 1, (3.5)

the inequality (3.4) is clearly satisfied. So from Proposition 3.2, there exist CGO solutions

u_j(x) = e^{iζ(j)·x}(1 + ψ_j(x)),   ‖ψ_j‖_{(2s)}(Ω) ≤ (C_1/|ζ(j)|)‖iωσ_j + c_j‖_{(2s)}(Ω). (3.6)
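As a quick numerical sanity check of the algebra behind (3.3) (the vectors ξ, e(1), e(2) below are illustrative choices satisfying the stated orthogonality conditions):

```python
import numpy as np

omega, R = 4.0, 3.0
xi = np.array([1.0, 2.0, 2.0])                  # |xi| = 3
e1 = np.array([2.0, -1.0, 0.0]) / np.sqrt(5.0)  # unit vector orthogonal to xi
e2 = np.cross(xi, e1)
e2 /= np.linalg.norm(e2)                        # unit, orthogonal to xi and e1
# the choice requires omega^2 + R^2/2 >= |xi|^2/4
t = np.sqrt(omega**2 + R**2 / 2 - xi @ xi / 4)
z1 = -xi / 2 + 1j * np.sqrt(R**2 / 2) * e1 + t * e2
z2 = -xi / 2 - 1j * np.sqrt(R**2 / 2) * e1 - t * e2
```

The assertions below confirm ζ(1) + ζ(2) = −ξ, ζ(j) · ζ(j) = ω², and |ζ(j)|² = R² + ω² for this sample choice.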
To derive the stability estimate, we would like to estimate the Fourier transform of q_1 − q_2. Before doing so, we need the following estimate on the Cauchy data of the CGO solutions u_j, j = 1, 2, constructed above.

Lemma 3.3. If u_j are the solutions (3.6), then

‖(u_j, ∂_ν u_j)‖_{(1/2,−1/2)}(∂Ω) ≤ C e^{R/√2}(R^{2} + ω^{2})^{1/2}. (3.7)

Proof. Recall that s > 3/2. Thus, by Sobolev embedding theorems,

‖ψ_j‖_∞(Ω) ≤ C‖ψ_j‖_{(2s)}(Ω) ≤ (CC_1/|ζ(j)|)‖iωσ_j + c_j‖_{(2s)}(Ω) ≤ CC_1/C_0 ≤ C

and

|u_j(x)| = |e^{iζ(j)·x}(1 + ψ_j(x))| ≤ e^{|Im ζ(j)|}(1 + |ψ_j(x)|) ≤ Ce^{R/√2}.

Moreover,

|∇u_j(x)| ≤ |iζ(j)e^{iζ(j)·x}(1 + ψ_j)| + |e^{iζ(j)·x}∇ψ_j| ≤ Ce^{R/√2}|ζ(j)|(1 + |∇ψ_j(x)|),

hence

‖u_j‖_{(1)}(Ω) ≤ Ce^{R/√2}(1 + |ζ(j)|)(1 + ‖ψ_j‖_{(1)}(Ω)) ≤ Ce^{R/√2}(R^{2} + ω^{2})^{1/2},

and thus by trace theorems

‖(u_j, ∂_ν u_j)‖_{(1/2,−1/2)}(∂Ω) ≤ Ce^{R/√2}|ζ(j)| = Ce^{R/√2}(R^{2} + ω^{2})^{1/2}.

We denote by ˆf the Fourier transform of the function f. Let ˜q = q_1 − q_2. Lemma 3.3 is used to bound the low frequency part of ˆ˜q, as the following lemma shows.

Lemma 3.4. Let s > 3/2 and ξ = re with r ≥ 0, |e| = 1. Assume that (3.5) is satisfied. Then there exists a constant C such that

|ˆ˜q(re)| ≤ Cφ(R, ω)Q(ξ) + C e^{√2R}(R^{2} + ω^{2})ϵ, (3.8)

provided ω^{2} + R^{2}/2 > r^{2}/4. Here

Q(ξ)^{2} = ∫ ⟨x⟩^{−4s} ∫ ⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη dx

and

φ(R, ω) = (mω + M)/(R + ω).

Proof. Substituting the CGO solutions (3.6) into (3.1) yields

|∫_Ω ˜q(x)e^{−iξ·x}(1 + ψ_1(x))(1 + ψ_2(x)) dx| ≤ ϵ‖(u_1, ∂_ν u_1)‖_{(1/2,−1/2)}(∂Ω)‖(u_2, ∂_ν u_2)‖_{(1/2,−1/2)}(∂Ω),

which leads to

|∫_Ω ˜q e^{−iξ·x} dx| ≤ |∫ ˜q(x)e^{−iξ·x}χ(x)Ψ(x) dx| + ϵ‖(u_1, ∂_ν u_1)‖_{(1/2,−1/2)}(∂Ω)‖(u_2, ∂_ν u_2)‖_{(1/2,−1/2)}(∂Ω), (3.9)

where Ψ(x) = ψ_1(x) + ψ_2(x) + ψ_1(x)ψ_2(x), χ ∈ C_0^{∞}(Ω), and χ = 1 on supp(˜q).

Since ∫ fg dx = (2π)^{−3} ∫ ˆf ˆg dη and (fg)ˆ = (2π)^{−3} ˆf ∗ ˆg, one has

∫ ˜q(x)e^{−iξ·x}χ(x)Ψ(x) dx = (2π)^{−3} ∫ ˆ˜q(η)(e^{−iξ·x}χΨ)ˆ(−η) dη = (2π)^{−6} ∫ ˆ˜q(η)((e^{−iξ·x}χ)ˆ ∗ ˆΨ)(−η) dη.

Denote χ_ξ = e^{−iξ·x}χ and ⟨ξ⟩ = (1 + |ξ|^{2})^{1/2}. By Hölder's inequality, we obtain

|∫ ˆ˜q(η)(ˆχ_ξ ∗ ˆΨ)(−η) dη|
≤ ∫ |ˆ˜q(η)| |(ˆχ_ξ ∗ ˆΨ)(−η)| dη
≤ ∫ |ˆ˜q(η)| |∫ ˆχ_ξ(−η − x)ˆΨ(x) dx| dη
≤ ∫ (∫ |ˆ˜q(η)| |ˆχ_ξ(−η − x)| dη) |ˆΨ(x)| dx
≤ (∫ ⟨x⟩^{−4s} (∫ |ˆ˜q(η)| |ˆχ_ξ(−η − x)| dη)^{2} dx)^{1/2} ‖Ψ‖_{(2s)}
≤ (∫ ⟨x⟩^{−4s} (∫ ⟨−η + ξ − x⟩^{−s}|ˆ˜q(η)| ⟨−η + ξ − x⟩^{s}|ˆχ(−η + ξ − x)| dη)^{2} dx)^{1/2} ‖Ψ‖_{(2s)}
≤ (∫ ⟨x⟩^{−4s} ∫ ⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη dx)^{1/2} ‖χ‖_{(s)}‖Ψ‖_{(2s)}. (3.10)
From (3.6) and the a priori assumptions, we have

‖Ψ‖_{(2s)}(Ω) ≤ C(mω + M)/(R + ω). (3.11)

Therefore, the estimate (3.8) follows from (3.7), (3.9), (3.10) and (3.11).

The following lemma is an immediate consequence of Lemma 3.4.

Lemma 3.5. Let

R^{∗} > √2 C_0M and √2 C_0m ≤ 1, (3.12)

where C_0 is the constant given in Proposition 3.2. If 0 ≤ r ≤ ω + R^{∗}, then

|ˆ˜q(re)| ≤ Cφ(R^{∗}, ω)Q(ξ) + C(ω^{2} + R^{∗2})e^{√2R^{∗}}ϵ; (3.13)

if r ≥ ω + R^{∗}, then

|ˆ˜q(re)| ≤ Cφ(r, ω)Q(ξ) + C(ω^{2} + r^{2})e^{√2r}ϵ. (3.14)

Proof. It is enough to take R = R^{∗} when 0 ≤ r ≤ ω + R^{∗} and R = r when r ≥ ω + R^{∗} in Lemma 3.4.

With the help of Lemma 3.5 we can prove

Lemma 3.6. Let the conditions of Lemma 3.5 be satisfied. Then for any T ≥ ω + R^{∗} we have

‖˜q‖^{2}_{(−s)}(Ω) ≤ Cφ(R^{∗}, ω)^{2}‖˜q‖^{2}_{(−s)}(Ω) + C(ω^{2} + T^{2})^{2}(e^{2√2R^{∗}} + χ(T)e^{2√2T})ϵ^{2} + C(mω + M)^{2}T^{−(2s−3)}, (3.15)

where χ(T) ≤ 1 and χ(T) = 0 if T = ω + R^{∗}.

Proof. Using polar coordinates, we obtain

‖˜q‖^{2}_{(−s)}(Ω) = ∫_0^∞ ∫_{|e|=1} |ˆ˜q(re)|^{2}(1 + r^{2})^{−s}r^{2} de dr
= ∫_0^{ω+R^{∗}} ∫_{|e|=1} |ˆ˜q(re)|^{2}(1 + r^{2})^{−s}r^{2} de dr + ∫_{ω+R^{∗}}^{T} ∫_{|e|=1} |ˆ˜q(re)|^{2}(1 + r^{2})^{−s}r^{2} de dr + ∫_{T}^{∞} ∫_{|e|=1} |ˆ˜q(re)|^{2}(1 + r^{2})^{−s}r^{2} de dr
=: I_1 + I_2 + I_3. (3.16)

We begin with bounding I_3. Since supp ˜q ⊂ Ω, by Hölder's inequality |ˆ˜q(ξ)| ≤ C‖˜q‖_{(0)}(Ω), and so

I_3 ≤ C‖˜q‖^{2}_{(0)}(Ω) ∫_T^∞ ∫_{|e|=1} (1 + r^{2})^{−s}r^{2} de dr ≤ C‖˜q‖^{2}_{(0)}(Ω)T^{−(2s−3)} ≤ C(ωm + M)^{2}T^{−(2s−3)}. (3.17)
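The tail integral above admits an elementary bound (a small added step): for r ≥ T ≥ 1 one uses 1 + r² ≥ r², so that

```latex
\int_T^{\infty}(1+r^2)^{-s}r^2\,dr \;\le\; \int_T^{\infty} r^{2-2s}\,dr
  \;=\; \frac{T^{-(2s-3)}}{2s-3}, \qquad 2s > 3,
```

which explains the factor T^{−(2s−3)} in (3.17).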
Before evaluating the terms I_1 and I_2, we need the following estimate. Let A = {η : |−η + ξ − x| ≤ |η|/2}. By direct computation, we have

∫ ⟨ξ⟩^{−2s} ∫ ⟨x⟩^{−4s} (∫_{A^c} ⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη) dx dξ
≤ C ∫ ⟨ξ⟩^{−2s} ∫ ⟨x⟩^{−4s} (∫_{A^c} ⟨η⟩^{−2s}|ˆ˜q(η)|^{2} dη) dx dξ ≤ C‖˜q‖^{2}_{(−s)},

where C depends on s. On the other hand, using the facts that A ⊂ {η : (2/3)|ξ − x| ≤ |η| ≤ 2|ξ − x|} and ⟨x⟩^{−2s}⟨ξ⟩^{−2s} ≤ 2^{s}⟨x − ξ⟩^{−2s}, one has

∫ ⟨ξ⟩^{−2s} ∫ ⟨x⟩^{−4s} (∫_A ⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη) dx dξ
≤ ∫ ⟨ξ⟩^{−2s} ∫ ⟨x⟩^{−4s} ∫_{(2/3)|ξ−x|≤|η|≤2|ξ−x|} ⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη dx dξ
≤ C ∫ ⟨x⟩^{−2s} ∫∫_{(2/3)|ξ−x|≤|η|≤2|ξ−x|} ⟨x − ξ⟩^{−2s}⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη dξ dx
≤ C ∫ ⟨x⟩^{−2s} ∫∫_{(2/3)|ξ−x|≤|η|≤2|ξ−x|} ⟨η⟩^{−2s}⟨−η + ξ − x⟩^{−2s}|ˆ˜q(η)|^{2} dη dξ dx
≤ C ∫ (∫∫ ⟨x⟩^{−2s}⟨−η + ξ − x⟩^{−2s} dξ dx) |ˆ˜q(η)|^{2}⟨η⟩^{−2s} dη ≤ C‖˜q‖^{2}_{(−s)}.

Therefore, we can deduce that

∫ ⟨ξ⟩^{−2s}Q(ξ)^{2} dξ ≤ C‖˜q‖^{2}_{(−s)}. (3.18)
Next, using (3.13) and (3.18), we obtain

I_1 ≤ ∫_0^{ω+R^{∗}} ∫_{|e|=1} C(φ(R^{∗}, ω)Q(re) + (ω^{2} + R^{∗2})e^{√2R^{∗}}ϵ)^{2}(1 + r^{2})^{−s}r^{2} de dr
≤ Cφ(R^{∗}, ω)^{2} ∫ ⟨ξ⟩^{−2s}Q(ξ)^{2} dξ + C(ω^{2} + R^{∗2})^{2}e^{2√2R^{∗}}ϵ^{2} ∫_0^{ω+R^{∗}} (1 + r^{2})^{−s}r^{2} dr
≤ Cφ(R^{∗}, ω)^{2}‖˜q‖^{2}_{(−s)}(Ω) + C(ω^{2} + R^{∗2})^{2}e^{2√2R^{∗}}ϵ^{2}, (3.19)

since due to 2s > 3 we have

∫_0^{ω+R^{∗}} (1 + r^{2})^{−s}r^{2} dr ≤ ∫_0^∞ (1 + r^{2})^{−s}r^{2} dr = C.

In the same way, the estimate (3.14) implies that

I_2 ≤ C(φ(R^{∗}, ω)^{2}‖˜q‖^{2}_{(−s)}(Ω) + (ω^{2} + T^{2})^{2}e^{2√2T}ϵ^{2} ∫_{ω+R^{∗}}^{T} (1 + r^{2})^{−s}r^{2} dr)
≤ Cφ(R^{∗}, ω)^{2}‖˜q‖^{2}_{(−s)}(Ω) + C(ω^{2} + T^{2})^{2}e^{2√2T}ϵ^{2}, (3.20)

where we used that φ(r, ω) ≤ φ(R^{∗}, ω) and e^{√2r} ≤ e^{√2T} for ω + R^{∗} ≤ r ≤ T.

Combining (3.16)-(3.20) and using that I_2 = 0 when T = ω + R^{∗}, we complete the proof.

We are now ready to prove Theorem 2.1.

Proof of Theorem 2.1. We will use the estimate (3.15). Obviously,

φ(R^{∗}, ω) = (mω + M)/(R^{∗} + ω) ≤ m + M/R^{∗}.

Therefore, we can choose m < 1/(√2C_0) and R^{∗} > √2C_0M so that

Cφ(R^{∗}, ω)^{2} < 1/2.

The choice of m depends only on s, Ω, and the choice of R^{∗} depends only on s, Ω, M. Thus, the first term on the right hand side of (3.15) can be absorbed by the left side, and we obtain

‖˜q‖^{2}_{(−s)}(Ω) ≤ C(ω^{2} + T^{2})^{2}(e^{2√2R^{∗}} + χ(T)e^{2√2T})ϵ^{2} + Cω^{2}T^{−(2s−3)}. (3.21)

We consider the following two cases:

(i) ω + R^{∗} ≤ E/2   and   (ii) ω + R^{∗} ≥ E/2.

In case (i) we choose T = E/2. We want to show that there exists C_2 such that

(ω^{2} + T^{2})^{2}e^{2√2T}ϵ^{2} ≤ C_2ω^{2}(ω + E)^{−(2s−3)} (3.22)

and

ω^{2}T^{−(2s−3)} ≤ C_2ω^{2}(ω + E)^{−(2s−3)}. (3.23)

It is clear that (2.2) is a consequence of (3.21), (3.22), and (3.23).
It is clear that (2.2) is a consequence of (3.21), (3.22), and (3.23).

Due to our choice of T , (3.22) is satisfied if
(ω + E )^{4}^{2−}

√2 ≤ C(ω + E)^{−2s+3},
or

(ω + E )^{4+2s−3}^{2−}

√2 ≤ C,
which follows from the elementary inequality (E )^{4+2s−3}^{2−}

√ 2 ≤ C.

Note that (3.23) is equivalent to

C_2^{−1/(2s−3)}(ω + E) ≤ T = E/2. (3.24)

Obviously,

ω + E ≤ ω + R^{∗} + E ≤ 3E/2,

according to (i). Now we see that (3.24) holds whenever C_2^{−1/(2s−3)} ≤ 1/3.

Next we consider case (ii). We choose T = ω + R^{∗}. Then from (3.21) we have

‖˜q‖^{2}_{(−s)}(Ω) ≤ C(ω^{4} + R^{∗4})e^{2√2R^{∗}}ϵ^{2} + Cω^{2}(ω + R^{∗})^{−(2s−3)}.

Since R^{∗} < C and in case (ii) ω + R^{∗} ≥ ω/2 + E/4, the estimate (2.2) follows.

The proof now is complete.

4. Stability estimates for Maxwell equations

In this section we will prove Theorem 2.2. We first discuss a reduction that transforms the Maxwell equations to a vectorial Schrödinger equation, adapting Section 1 of [4] to the special case ε = µ = 1. We then use arguments similar to those of the previous section to derive a bound on the difference of the conductivities.

4.1. Reduction to the Schrödinger equation. Let Ω be a bounded domain in R^{3} with smooth boundary. We consider the time-harmonic Maxwell equations (2.3) with ε = µ = 1,

∇ × H + iωγE = 0,

∇ × E − iωH = 0, (4.1)

where γ = 1 + iσ/ω, σ ∈ C^{2}(Ω) and σ > 0 in Ω. From (4.1), one has the following
compatibility conditions for E and H

∇ · (γE) = 0,

∇ · H = 0. (4.2)
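For completeness, the conditions (4.2) follow by taking the divergence of the two equations in (4.1), since the divergence of a curl vanishes; for instance, for the first equation:

```latex
0 \;=\; \nabla\cdot(\nabla\times H) \;=\; \nabla\cdot(-i\omega\gamma E)
  \;=\; -i\omega\,\nabla\cdot(\gamma E)
  \quad\Longrightarrow\quad \nabla\cdot(\gamma E) = 0,
```

and similarly ∇ · H = 0 from the second equation.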

Let α = log γ (the principal branch). Combining (4.1) and (4.2) gives the following eight equations:

∇ · E + ∇α · E = 0,
∇ × E − iωH = 0,
∇ · H = 0,
∇ × H + iωγE = 0,

which lead to the 8 × 8 system

( [ ∗  0    ∗  D·  ]     [ ∗  0     ∗  Dα·   ] ) [ 0 ]
( [ ∗  0    ∗  −D× ]  +  [ ∗  ωI_3  ∗  0     ] ) [ H ]  = 0.
( [ ∗  D·   ∗  0   ]     [ ∗  0     ∗  0     ] ) [ 0 ]
( [ ∗  D×   ∗  0   ]     [ ∗  0     ∗  ωγI_3 ] ) [ E ]

Here I_j is the (j × j)-identity matrix,

D· = −i(∂_1  ∂_2  ∂_3),   D = −i(∂_1  ∂_2  ∂_3)^{t},

and

D× = −i [  0    −∂_3   ∂_2
           ∂_3   0    −∂_1
          −∂_2   ∂_1   0  ].

The symbol ∗ means that we obtain the same equations regardless of the values of ∗.

Define the operators

P_+(D) = [ 0  D·
           D  D× ],   P_−(D) = [ 0  D·
                                 D  −D× ],

acting on 4-vectors. It is easy to check that

P_+(D)P_−(D) = P_−(D)P_+(D) = −∆I_4

and

P_+(D)^{∗} = P_−(D),   P_−(D)^{∗} = P_+(D).

Also, we can see that the 8 × 8 operator

P = [ 0       P_−(D)
      P_+(D)  0      ]

is self-adjoint, i.e., P^{∗} = P. For ζ ∈ C^{3}, we set

P(ζ) = −i [ 0       P_−(ζ)
            P_+(ζ)  0      ].
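Since D = −i∇ has Fourier symbol ζ, the identity P₊(D)P₋(D) = −∆I₄ is equivalent to the matrix identity P₊(ζ)P₋(ζ) = |ζ|²I₄ for real ζ, which can be checked numerically (an illustrative verification, not part of the paper):

```python
import numpy as np

def cross_mat(z):
    # matrix of the map v -> z x v
    return np.array([[0.0, -z[2], z[1]],
                     [z[2], 0.0, -z[0]],
                     [-z[1], z[0], 0.0]])

def P_sym(z, sign):
    # symbol of P_+(D) (sign=+1) or P_-(D) (sign=-1): [[0, z^T], [z, sign*(z x)]]
    M = np.zeros((4, 4))
    M[0, 1:] = z
    M[1:, 0] = z
    M[1:, 1:] = sign * cross_mat(z)
    return M

zeta = np.array([1.0, -2.0, 3.0])
Pp, Pm = P_sym(zeta, +1), P_sym(zeta, -1)
```

The checks below also confirm that, for real ζ, the transpose of P₊(ζ) is P₋(ζ), mirroring P₊(D)^∗ = P₋(D).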

Let X = (Φ_1, H^{t}, Φ_2, E^{t})^{t} be an 8-vector, where Φ_1 and Φ_2 are additional scalar fields. The augmented Maxwell system is

(P + V)X = 0 in Ω,

where

V = [ ω  0     0   Dα·
      0  ωI_3  Dα  0
      0  0     ωγ  0
      0  0     0   ωγI_3 ].

Note that when Φ_j = 0 the solution X corresponds to a solution of the original Maxwell system in Ω. Now we want to rescale the augmented system. Let

Y = [ I_4  0
      0    γ^{1/2}I_4 ] X;

then

(P + V)X = [ γ^{−1/2}I_4  0
             0            I_4 ] (P + W)Y,

where

W = κI_8 + 1/2 [ 0  0  0   Dα·
                 0  0  Dα  D×α
                 0  0  0   0
                 0  0  0   0 ]

with κ = ωγ^{1/2}. Hence

(P + V)X = 0 if and only if (P + W)Y = 0.

The advantage of the rescaling is that no first order terms appear in (4.3)-(4.5). The proof of the next lemma can be found in [4].

Lemma 4.1.

(P + W)(P − W^{t}) = −∆I_8 + Q, (4.3)
(P − W^{t})(P + W) = −∆I_8 + Q(1), (4.4)
(P + W^{∗})(P − W) = −∆I_8 + Q(2), (4.5)

where, in the block structure corresponding to X = (Φ_1, H^{t}, Φ_2, E^{t})^{t}, the matrix potentials are given by

Q = 1/2 [ ∆α  0                0  0
          0   2∇^{2}α − ∆αI_3  0  0
          0   0                0  0
          0   0                0  0 ]
  − κ^{2}I_8 − [ (1/4)(Dα · Dα)  0                   0     2Dκ·
                 0               (1/4)(Dα · Dα)I_3   2Dκ   0
                 0               2Dκ·                0     0
                 2Dκ             0                   0     0 ], (4.6)

Q(1) = −1/2 [ 0  0  0   0
              0  0  0   0
              0  0  ∆α  0
              0  0  0   2∇^{2}α − ∆αI_3 ]
  − κ^{2}I_8 − [ 0  0      0                 0
                 0  0      0                 2Dκ×
                 0  0      (1/4)(Dα · Dα)    0
                 0  −2Dκ×  0                 (1/4)(Dα · Dα)I_3 ],

and

Q(2) = −1/2 [ 0  0  0   0
              0  0  0   0
              0  0  ∆α  0
              0  0  0   2∇^{2}α − ∆αI_3 ]
  − κ^{2}I_8 − [ 0  0     0                 0
                 0  0     0                 −2Dκ×
                 0  0     (1/4)(Dα · Dα)    0
                 0  2Dκ×  0                 (1/4)(Dα · Dα)I_3 ].

Here ∇^{2}f = (∂^{2}_{ij}f) is the Hessian of f. Moreover, W^{t} denotes the transpose of W and W^{∗} = W^{t}.

4.2. Construction of CGO solutions. We recall that Ω ⊂ B. We extend σ to a function in H^{2s+2}(R^{3}), still denoted by σ, so that supp σ ⊂ B. Therefore, ω^{2}I_8 + Q, ω^{2}I_8 + Q(1) and ω^{2}I_8 + Q(2) are compactly supported in B.

Let Y = (f^{1}, (u^{1})^{t}, f^{2}, (u^{2})^{t})^{t} ∈ H^{2s}(B). We recall the following construction of CGO solutions for the Maxwell equations (4.1) from [3].

Proposition 4.2. [3, Proposition 9] Let s > 3/2 be an integer. Assume that a, b, ζ ∈ C^{3} and ζ · ζ = ω^{2}. Then there exists a constant C_0 depending on s such that if

|ζ| > C_0 (Σ_{j=1,2} ‖ω^{2} + q_j‖_{(2s)}(B) + ‖ω^{2}I_8 + Q‖_{(2s)}(B)),

where

q_1 = −κ^{2},   q_2 = −(1/2)∆α − κ^{2} − (1/4)(Dα · Dα),

then there exists a solution

Z = e^{iζ·x}(A + Ψ)

of (−∆I_8 + Q)Z = 0 with

A = (1/|ζ|)(ζ · a, ωb^{t}, ζ · b, ωa^{t})^{t},

satisfying

‖Ψ‖_{(2s)}(Ω) ≤ (C_0/|ζ|)|A| ‖ω^{2}I_8 + Q‖_{(2s)}(B).

Moreover, Y = (P − W^{t})Z is a solution of (P + W)Y = 0 and

Y = (0, H^{t}, 0, γ^{1/2}E^{t})^{t},

where E, H solve (4.1).

Proof. The proposition can be proved following [3, Proposition 9]. The only difference is that we use the estimate from H^{2s}_{δ+1} to H^{2s}_{δ} with −1 < δ < 0, that is,

‖G_ζ f‖_{H^{2s}_δ} ≤ (C(δ)/|ζ|)‖f‖_{H^{2s}_{δ+1}}, (4.7)

instead of [3, (22)], where the estimate is from L^{2}_{δ+1} to L^{2}_{δ}; here G_ζ is the convolution operator with the fundamental solution of (−∆ − 2iζ · ∇). The estimate (4.7) holds due to the easy fact that (−∆ − 2iζ · ∇) and (I − ∆)^{2s} commute. Due to the fact that ω^{2}I_8 + Q is compactly supported in B, one can replace the H^{2s}_{δ+1} norm on the right hand side of (4.7) by the H^{2s}(B) norm. Recall that H^{2s}(B) is a Banach algebra for s > 3/4. The rest of the proof follows [3, Proposition 9].

We also need CGO solutions for the rescaled system. These can be constructed as in [3, Lemma 10, Proposition 11].

Proposition 4.3. [3, Lemma 10, Proposition 11] Let a^{∗}, b^{∗}, ζ ∈ C^{3} and ζ · ζ = ω^{2}. Then there exists a constant C_0 depending on s such that if

|ζ| > C_0‖ω^{2}I_8 + Q(2)‖_{(2s)}(B),

then there is a solution

Y^{∗} = e^{iζ·x}(A^{∗} + Ψ^{∗})

of (P + W^{∗})Y^{∗} = 0, where

A^{∗} = (1/|ζ|)(ζ · a^{∗}, −(ζ × a^{∗})^{t}, ζ · b^{∗}, (ζ × b^{∗})^{t})^{t}

and Ψ^{∗} = P Ψ_∗ + iP(ζ)Ψ_∗ − W A_∗ − W Ψ_∗ with A_∗ = (1/|ζ|)(0, b^{∗t}, 0, a^{∗t})^{t},

‖Ψ_∗‖_{(2s)}(Ω) ≤ (C_0/|ζ|)|A_∗| ‖ω^{2}I_8 + Q(2)‖_{(2s)}(B) (4.8)

and

‖P Ψ_∗‖_{(2s)}(Ω) ≤ C_0(|A_∗| + ‖Ψ_∗‖_{(2s)}(Ω)) ‖ω^{2}I_8 + Q(2)‖_{(2s)}(B). (4.9)

Moreover,

‖Ψ^{∗}‖_{(2s)}(Ω) ≤ C_0|A_∗| (1 + ‖σ‖_{(2s+2)}(B)ω/|ζ|) (‖ω^{2}I_8 + Q(2)‖_{(2s)}(B) + ‖W‖_{(2s)}(B)). (4.10)
4.3. Stability estimates. Suppose that ε = µ = 1, ‖σ_j‖_{(2s+2)}(B) ≤ m < 1 and supp(σ_1 − σ_2) ⊂ Ω. Hence we have supp(γ_1 − γ_2) ⊂ Ω. We first prove an inequality that connects the unknowns and the boundary measurements.

In the proofs, C denotes generic constants depending on s, Ω.

Theorem 4.4. Assume that σ_1 and σ_2 belong to H^{2s+2}(Ω) and supp(σ_1 − σ_2) ⊂ Ω. There exists a constant C depending on Ω, s such that, for any Z_1 ∈ H^{2s}(Ω) satisfying Y_1 = (P − W_1^{t})Z_1 = (0, H_1^{t}, 0, γ_1^{1/2}E_1^{t})^{t} with (E_1, H_1) a solution to (4.1) in Ω with coefficient σ_1, and any H^{2s}(Ω) solution Y_2 = (f^{1}, (u^{1})^{t}, f^{2}, (u^{2})^{t})^{t} of (P + W_2^{∗})Y_2 = 0, one has

|((Q_1 − Q_2)Z_1, Y_2)_Ω| ≤ C dist(C_1, C_2)‖Y_1‖_{(1)}(Ω)‖Y_2‖_{(1)}(Ω). (4.11)
Note that Q_j is defined in (4.6) corresponding to σ_j for j = 1, 2. The matrix-valued functions W_1, W_2 are defined similarly.

Proof. Using supp(σ_1 − σ_2) ⊂ Ω, from the first identity in the proof of [3, Proposition 7] one has

((Q_1 − Q_2)Z_1, Y_2)_Ω = (Y_1, P Y_2)_Ω − (P Y_1, Y_2)_Ω. (4.12)

Moreover, using the estimate of the right hand side of (4.12) derived in [3] (see the inequality before [3, Proposition 7]) and the support assumption on σ_1 − σ_2, we obtain

|(Y_1, P Y_2)_Ω − (P Y_1, Y_2)_Ω|
≤ C(‖ν × E_1 − ν × E_2‖_{T H(∂Ω)} + ‖ν × H_1 − ν × H_2‖_{T H(∂Ω)})‖Y_2‖_{(1)}(Ω)
≤ C dist(C_1, C_2)‖ν × E_1‖_{T H(∂Ω)}‖Y_2‖_{(1)}(Ω)
≤ C dist(C_1, C_2)‖Y_1‖_{(1)}(Ω)‖Y_2‖_{(1)}(Ω), (4.13)

where (ν × E_2, ν × H_2) ∈ C_2. The required estimate follows from (4.12) and (4.13).

Let ξ ∈ R^{3}. We select e(j) ∈ R^{3} satisfying |e(1)| = |e(2)| = 1 and e(j) · ξ = e(1) · e(2) = 0 for j = 1, 2. Let R > 0. We choose

ζ(1) = −(1/2)ξ + i(R^{2}/2)^{1/2}e(1) + (ω^{2} + R^{2}/2 − |ξ|^{2}/4)^{1/2}e(2),
ζ(2) = (1/2)ξ − i(R^{2}/2)^{1/2}e(1) + (ω^{2} + R^{2}/2 − |ξ|^{2}/4)^{1/2}e(2),

where we assume ω^{2} + R^{2}/2 ≥ |ξ|^{2}/4. Then

ζ(1) − ζ̄(2) = −ξ,   ζ(j) · ζ(j) = ω^{2},   |ζ(j)|^{2} = R^{2} + ω^{2}   for j = 1, 2.

To construct the CGO solutions of Proposition 4.2 with Q = Q_1 corresponding to σ_1, we choose a(1) = 0 and

b(1) = −i e(1)/√2 + e(2)/√2.

By direct calculation, we can see that

‖ω^{2} + q_j‖_{(2s)}(B) + ‖ω^{2}I_8 + Q_1‖_{(2s)}(B) ≤ C_2‖σ_1‖_{(2s+2)}(B)ω + C_3,
where C_2, C_3 are independent of ω. If we take

R + ω > 2^{1/2}(C_0C_2‖σ_1‖_{(2s+2)}(B)ω + C_0C_3), (4.14)

then

|ζ(1)| ≥ 2^{−1/2}(R + ω) > C_0 (‖ω^{2} + q_j‖_{(2s)}(B) + ‖ω^{2}I_8 + Q_1‖_{(2s)}(B)).
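The linear-in-ω behaviour of these norms can be seen directly for the zeroth-order part (a small added computation): with κ₁² = ω²γ₁ = ω² + iωσ₁ and q₁ = −κ₁²,

```latex
\omega^2 + q_1 \;=\; \omega^2 - \kappa_1^2 \;=\; -\,i\omega\sigma_1,
\qquad\text{so}\qquad
\|\omega^2 + q_1\|_{(2s)}(B) \;\le\; \omega\,\|\sigma_1\|_{(2s)}(B)
  \;\le\; \omega\,\|\sigma_1\|_{(2s+2)}(B),
```

while, loosely speaking, the derivative terms in q₂ and Q₁ involve α₁ = log γ₁ = log(1 + iσ₁/ω) and are bounded uniformly for ω > 1, contributing the ω-independent constant C₃.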
By Proposition 4.2, there exist solutions

Z_1 = e^{iζ(1)·x}(A(1) + Ψ_1)

of (−∆I_8 + Q_1)Z_1 = 0 with

A(1) = (1/|ζ(1)|)(0, ωb(1)^{t}, ζ(1) · b(1), 0)^{t}

and

‖Ψ_1‖_{(2s)}(Ω) ≤ (C_0/|ζ(1)|)|A(1)| ‖ω^{2}I_8 + Q_1‖_{(2s)}(B).

Likewise, if

R + ω > 2^{1/2}(C_0C_2‖σ_2‖_{(2s+2)}(B)ω + C_0C_3), (4.15)

then one has

|ζ(2)| > C_0‖ω^{2}I_8 + Q(2)_2‖_{(2s)}(B),

where Q(2)_2 is the matrix-valued function Q(2) corresponding to σ = σ_2. By choosing a^{∗} = 0 and

b^{∗} = b(2) = i e(1)/√2 + e(2)/√2

in Proposition 4.3 with Q(2) corresponding to σ_2, there exist solutions

Y_2 = e^{iζ(2)·x}(A^{∗}(2) + Ψ^{∗}_2)

of (P + W_2^{∗})Y_2 = 0 with

A^{∗}(2) = (1/|ζ(2)|)(0, 0, ζ(2) · b(2), (ζ(2) × b(2))^{t})^{t}

and Ψ^{∗}_2 = P Ψ_{2∗} + iP(ζ(2))Ψ_{2∗} − W_2A_{2∗} − W_2Ψ_{2∗}, where A_{2∗} = |ζ(2)|^{−1}(0, b(2)^{t}, 0, 0)^{t} and Ψ_{2∗} satisfies (4.8), (4.9). Moreover,

‖Ψ^{∗}_2‖_{(2s)}(Ω) ≤ C_0 (1/|ζ(2)| + ‖σ_2‖_{(2s+2)}(B)ω/|ζ(2)|^{2}) ‖ω^{2}I_8 + Q(2)_2‖_{(2s)}(B)
+ C_0 (1/|ζ(2)| + ‖σ_2‖_{(2s+2)}(B)ω/|ζ(2)|^{2}) ‖W_2‖_{(2s)}(B) (4.16)

(see (4.10)).

For Y_2, we can obtain the following estimate.

Lemma 4.5. Let Y_2 be the CGO solution of (P + W_2^{∗})Y_2 = 0 constructed above. Then for ω ≥ 1 there exists a constant C depending on s, Ω such that

‖Y_2‖_{(1)}(Ω) ≤ C(1 + |ζ(2)|)e^{R/√2}. (4.17)
Proof. Since ∇(e^{iζ(2)·x}A^{∗}(2)) = iζ(2)e^{iζ(2)·x}A^{∗}(2) and |A^{∗}(2)| ≤ 2^{1/2}, one gets

‖e^{iζ(2)·x}A^{∗}(2)‖_{(1)}(Ω) ≤ C(e^{R/√2}|A^{∗}(2)| + |ζ(2)|e^{R/√2}|A^{∗}(2)|) ≤ C(1 + |ζ(2)|)e^{R/√2}.

By direct calculation,

‖ω^{2}I_8 + Q(2)_2‖_{(2s)}(B) ≤ Cω,   ‖W_2‖_{(2s)}(B) ≤ Cω,

where we used the assumption ‖σ_2‖_{(2s+2)}(Ω) ≤ m < 1. Thus, from (4.16) we have ‖Ψ^{∗}_2‖_{(1)}(Ω) ≤ Cω/|ζ(2)|, which implies that

‖e^{iζ(2)·x}Ψ^{∗}_2‖_{(1)}(Ω) ≤ ‖e^{iζ(2)·x}Ψ^{∗}_2‖_{(0)}(Ω) + ‖iζ(2)e^{iζ(2)·x}Ψ^{∗}_2‖_{(0)}(Ω) + ‖e^{iζ(2)·x}∇Ψ^{∗}_2‖_{(0)}(Ω)
≤ C(1 + |ζ(2)|)(ω/|ζ(2)|)e^{R/√2} ≤ C(1 + |ζ(2)|)e^{R/√2}.

The proof is complete.

Similarly, we can prove

Lemma 4.6. Let Z_1 be the CGO solution of (−∆I_8 + Q_1)Z_1 = 0 and Y_1 = (P − W_1^{t})Z_1 as above. Then for ω ≥ 1 there exists a constant C depending on s, Ω such that

‖Y_1‖_{(1)}(Ω) ≤ C(1 + |ζ(1)|)^{2}e^{R/√2}. (4.18)
Proof. Substituting Z_1 = e^{iζ(1)·x}(A(1) + Ψ_1) gives

Y_1 = (P − W_1^{t})Z_1 = (P − W_1^{t})(e^{iζ(1)·x}(A(1) + Ψ_1)).

We evaluate

‖(P − W_1^{t})(e^{iζ(1)·x}A(1))‖_{(1)}(Ω) ≤ C|ζ(1)|‖e^{iζ(1)·x}A(1)‖_{(1)}(Ω) + C‖W_1^{t}e^{iζ(1)·x}A(1)‖_{(1)}(Ω) ≤ C(1 + |ζ(1)|)^{2}e^{R/√2}

and

‖(P − W_1^{t})(e^{iζ(1)·x}Ψ_1)‖_{(1)}(Ω) ≤ C|ζ(1)|‖e^{iζ(1)·x}Ψ_1‖_{(1)}(Ω) + C‖W_1^{t}e^{iζ(1)·x}Ψ_1‖_{(1)}(Ω) + C‖e^{iζ(1)·x}PΨ_1‖_{(1)}(Ω) ≤ C(1 + |ζ(1)|)^{2}e^{R/√2},

which completes the proof.

We now choose m < 1 satisfying 2^{1/2}C_0C_2m < 1 and R > 2^{1/2}C_0C_3, i.e., (4.14) and (4.15) hold. Combining (4.11), (4.17) and (4.18) yields

|((Q_1 − Q_2)Z_1, Y_2)_Ω| ≤ C dist(C_1, C_2)‖Y_1‖_{(1)}(Ω)‖Y_2‖_{(1)}(Ω) ≤ C(1 + |ζ(2)|)^{3}e^{√2R} dist(C_1, C_2). (4.19)
Lemma 4.7. Let s > 3/2 and ξ = re with r ≥ 0, |e| = 1. Under the assumptions of Theorem 4.4, there exist a constant C_Ω depending on Ω only and a constant C depending on s, Ω such that

|(ˆσ_1 − ˆσ_2)(re)| ≤ ((C_Ω + Cm)/β(ξ)^{2}) G(∇^{2}˜σ)(ξ) + ((C_Ωβ(ξ) + Cm(ω + β(ξ)))/β(ξ)^{2}) G(∇˜σ)(ξ)
+ ((C_Ωω^{−1}β(ξ) + Cm(ω^{2} + ωβ(ξ)))/β(ξ)^{2}) G(˜σ)(ξ) + C((ω^{2} + R^{2})^{7/2}/(ωβ(ξ)^{2})) e^{√2R} dist(C_1, C_2),
(4.20)

where we denote

β(ξ) = R + ((2ω^{2} + R^{2} − |ξ|^{2})/2)^{1/2},

˜σ = σ_1 − σ_2, and

G(f)(ξ)^{2} = ∫ ⟨x⟩^{−4s} ∫ ⟨−η + ξ − x⟩^{−2s}|ˆf(η)|^{2} dη dx.

Proof. Using Z_1 = e^{iζ(1)·x}(A(1) + Ψ_1), Y_2 = e^{iζ(2)·x}(A^{∗}(2) + Ψ^{∗}_2) and the definitions of Q_1, Q_2, we have

((Q_1 − Q_2)Z_1, Y_2)_Ω = ∫_Ω e^{−iξ·x}(A^{∗}(2) + Ψ^{∗}_2)^{t}(Q_1 − Q_2)(A(1) + Ψ_1) dx
= ∫_Ω e^{−iξ·x}A^{∗}(2)^{t}(Q_1 − Q_2)A(1) dx + ∫_Ω e^{−iξ·x}(Ψ^{∗}_2)^{t}(Q_1 − Q_2)A(1) dx + ∫_Ω e^{−iξ·x}(A^{∗}(2) + Ψ^{∗}_2)^{t}(Q_1 − Q_2)Ψ_1 dx
=: I_1 + I_2 + I_3. (4.21)

We will evaluate each term in (4.21). We begin with I_1. Since A(1) = |ζ(1)|^{−1}(0, ωb(1)^{t}, ζ(1) · b(1), 0)^{t} and A^{∗}(2) = |ζ(2)|^{−1}(0, 0, ζ(2) · b(2), (ζ(2) × b(2))^{t})^{t}, we deduce that

I_1 = ((ζ(1) · b(1))(ζ(2) · b(2))/(|ζ(1)||ζ(2)|)) ∫_Ω e^{−iξ·x}(κ_2^{2} − κ_1^{2}) dx
+ (2ω(ζ(2) · b(2))/(|ζ(1)||ζ(2)|)) ∫_Ω e^{−iξ·x}D(κ_2 − κ_1) · b(1) dx. (4.22)

To bound I_2, we recall that Ψ^{∗}_2 = P Ψ_{2∗} + iP(ζ(2))Ψ_{2∗} − W_2A_{2∗} − W_2Ψ_{2∗}; then

I_2 = ∫_Ω e^{−iξ·x}(P Ψ_{2∗})^{t}(Q_1 − Q_2)A(1) dx + ∫_Ω e^{−iξ·x}(iP(ζ(2))Ψ_{2∗})^{t}(Q_1 − Q_2)A(1) dx
+ ∫_Ω e^{−iξ·x}(−W_2A_{2∗})^{t}(Q_1 − Q_2)A(1) dx + ∫_Ω e^{−iξ·x}(−W_2Ψ_{2∗})^{t}(Q_1 − Q_2)A(1) dx
=: J_1 + J_2 + J_3 + J_4.

We now bound J_k, k = 1, 2, 3, 4. Applying (4.8), (4.9) and using that m < 1 and ω ≤ |ζ(j)|, we can deduce that

‖Ψ_{2∗}‖_{(2s)}(Ω) ≤ Cmω/|ζ(2)|^{2},   ‖P Ψ_{2∗}‖_{(2s)}(Ω) ≤ Cmω/|ζ(2)|. (4.23)

By an argument similar to Lemma 3.4, we have

|J_1| ≤ CQ(ξ)‖P Ψ_{2∗}‖_{(2s)}(Ω) ≤ C(mω/|ζ(2)|)Q(ξ),

where

Q(ξ) = (1/|ζ(1)|)G(∇^{2}˜σ)(ξ) + ((ω + ζ(1) · b(1))/|ζ(1)|)G(∇˜σ)(ξ) + ((ω^{2} + ω(ζ(1) · b(1)))/|ζ(1)|)G(˜σ)(ξ).
Using the bound ‖P(ζ(2))Ψ_{2∗}‖_{(2s)}(Ω) ≤ Cmω|ζ(2)|^{−1} and following an argument similar to that of Lemma 3.4, we can derive

|J_2| ≤ C(mω/|ζ(2)|)Q(ξ).

For J_4, it follows from the definition of W_2 that ‖W_2‖_{(2s)}(Ω) ≤ Cω; then, applying (4.23) and the proof of Lemma 3.4, we obtain

|J_4| ≤ C(mω/|ζ(2)|)Q(ξ).

Finally, we estimate J_3. Note that

W_2A_{2∗} = (1/|ζ(2)|) ( κ_2I_8 + 1/2 [ 0  0  0     Dα_2·
                                        0  0  Dα_2  D×α_2
                                        0  0  0     0
                                        0  0  0     0 ] ) (0, b(2)^{t}, 0, 0)^{t}
= (ωγ_2^{1/2}/|ζ(2)|)(0, b(2)^{t}, 0, 0)^{t}. (4.24)
In view of the definition of Q_j, direct calculations show that

(Q_1 − Q_2)A(1) = (ω/(2|ζ(1)|)) (0, (2(∇^{2}(α_1 − α_2))b(1) − (∆(α_1 − α_2))b(1))^{t}, 0, 0)^{t}
− (iω(σ_1 − σ_2)/|ζ(1)|) (0, ωb(1)^{t}, ζ(1) · b(1), 0)^{t}
− (1/|ζ(1)|) (0, ((1/4)ω(Dα_1 · Dα_1 − Dα_2 · Dα_2)b(1) + 2(ζ(1) · b(1))D(κ_1 − κ_2))^{t}, 2ωb(1) · D(κ_1 − κ_2), 0)^{t}. (4.25)