## Uniqueness and increasing stability in electromagnetic inverse source problems

### Victor Isakov^{∗} and Jenn-Nan Wang^{†}

Abstract

In this paper we study uniqueness and increasing stability in the inverse source problem for electromagnetic waves in homogeneous and inhomogeneous media from boundary data at multiple wave numbers. For the unique determination of sources, we consider inhomogeneous media and use tangential components of the electric and magnetic fields at the boundary of the reference domain. The proof relies on the Fourier transform with respect to the wave numbers and on unique continuation theorems. To study increasing stability in the source identification, we consider homogeneous media and measure the absorbing data or the tangential component of the electric field at the boundary of the reference domain as additional data. By using the Fourier transform with respect to the wave numbers, explicit bounds for analytic continuation, Huygens' principle, and bounds for initial boundary value problems, an increasing (with larger wave number intervals) stability estimate is obtained.

Keywords: Electromagnetic waves, Inverse source problem, Increasing stability.

### 1 Introduction

The main theme of this paper is to investigate the inverse source problem for the Maxwell equations when the source is supported inside a bounded domain Ω. We consider the scattering solution of the Maxwell equations due to the presence of the source, and we measure suitable tangential components of the electric and magnetic fields on ∂Ω, or on a part of ∂Ω, to retrieve information about the source. Inverse source problems have enormous applications in practice. For example, detection of submarines and of anomalies in various industrial objects such as material defects [14], [18] can be regarded as recovery of acoustic sources from boundary measurements of the pressure. Other applications include antenna synthesis [5], biomedical imaging (magnetoencephalography and ultrasound tomography) [4], fluorescent microscopy, and geophysics, in particular locating sources of earthquakes.

Inverse source problems are linearisations of inverse problems of determining coefficients of partial differential equations. From the boundary data for one single linear differential equation or system (that is, at a single wave number), it is not possible to find the source uniquely [21, Ch.4]. This non-uniqueness phenomenon also appears in the Maxwell equations due to the existence of non-radiating sources [1], [3]. However, if we use the data collected for various wave numbers in (0, K), uniqueness can be restored, at least for divergence-free sources. For applications, the important issue is the stability of the source recovery. It is widely known that most inverse problems for elliptic equations are ill-posed, exhibiting logarithmic-type stability estimates, which permit robust recovery of only a few parameters describing the source and yield very low resolution

∗Department of Mathematics, Statistics, Physics, Wichita State University, Wichita, KS 67260-0033, USA. Email: victor.isakov@wichita.edu

†Institute of Applied Mathematical Sciences, NCTS, National Taiwan University, Taipei 106, Taiwan. Email: jnwang@math.ntu.edu.tw

numerically. In this work, we show that for the Maxwell equations the stability of identifying divergence-free sources from the absorbing boundary data on the whole ∂Ω with wave numbers in (0, K) increases (becoming nearly Lipschitz) as K grows.

To describe the main results, we use mostly standard notation. Let ‖·‖_{(l)} denote the H^l Sobolev norm of a scalar or vector-valued function, let Ω be a bounded domain in R³ with connected R³ \ Ω̄ and boundary ∂Ω ∈ C², and let C denote a generic constant depending only on Ω, ε₀, µ₀, whose value may vary from line to line. Consider the time-harmonic Maxwell equations in an inhomogeneous medium:

$$
\begin{cases}
\mathrm{curl}\, E - i\omega\mu H = J_\mu & \text{in } \mathbb{R}^3,\\
\mathrm{curl}\, H + i\omega\varepsilon E = J_\varepsilon & \text{in } \mathbb{R}^3,
\end{cases} \tag{1.1}
$$

where E, H are the electric and magnetic fields, ω > 0 is the wave number, and ε, µ are 3 × 3 real positive-definite matrices with time-independent entries which are positive constants outside Ω, i.e., for some ε₀ > 0, µ₀ > 0,

$$
\varepsilon(x) = \varepsilon_0 I_3 \quad\text{and}\quad \mu(x) = \mu_0 I_3, \qquad x \in \mathbb{R}^3 \setminus \bar\Omega, \tag{1.2}
$$

and J_ε, J_µ are the (real vector-valued) electric and magnetic current densities, which are assumed to be supported in Ω:

$$
\operatorname{supp} J_\varepsilon,\ \operatorname{supp} J_\mu \subseteq \Omega. \tag{1.3}
$$

We are interested in the scattering solution of (1.1). In this case, E, H are required to satisfy the Silver-Müller radiation condition:

$$
\lim_{|x|\to\infty} |x|\left(\sqrt{\mu_0}\, H \times \sigma - \sqrt{\varepsilon_0}\, E\right)(x) = 0, \qquad \lim_{|x|\to\infty} |x|\left(\sqrt{\varepsilon_0}\, E \times \sigma + \sqrt{\mu_0}\, H\right)(x) = 0, \tag{1.4}
$$

where σ = x/|x|. One can show that for any J_ε, J_µ ∈ H(div, Ω) satisfying (1.3) there exists a unique (E, H) ∈ H(curl, R³) × H(curl, R³) satisfying (1.1) and (1.4), where for any open set D ⊆ R³ we define H(div, D) = {u ∈ [L²(D)]³ : div u ∈ L²(D)} and H(curl, D) = {u ∈ [L²(D)]³ : curl u ∈ [L²(D)]³}. The corresponding graph norm on H(curl, D) is

$$
\|u\|_{H(\mathrm{curl},D)} = \left(\|u\|^2_{[L^2(D)]^3} + \|\mathrm{curl}\, u\|^2_{[L^2(D)]^3}\right)^{1/2}, \tag{1.5}
$$

and H₀(curl, D) is the completion of [C₀^∞(D)]³ with respect to the norm (1.5).

The first main result is uniqueness from the minimal data

$$
E(\,\cdot\,,\omega) \times \nu,\quad H(\,\cdot\,,\omega) \times \nu \quad\text{on } \Gamma \subset \partial\Omega, \quad\text{for } K_* < \omega < K, \tag{1.6}
$$

where 0 ≤ K_* < K.

Theorem 1.1. Let J_µ, J_ε ∈ H(curl, Ω) satisfy (1.3). We further assume that ε, µ ∈ C²(Ω̄) and that there exists a scalar function λ(x) ∈ C²(Ω̄) such that

$$
\varepsilon(x) = \lambda(x)\mu(x), \qquad x \in \Omega. \tag{1.7}
$$

Moreover, let J_ε, J_µ be divergence-free, i.e.,

$$
\operatorname{div} J_\varepsilon = 0, \quad \operatorname{div} J_\mu = 0 \quad\text{in } \mathbb{R}^3. \tag{1.8}
$$

Then J_ε, J_µ in (1.1), (1.4) are uniquely determined by (1.6).

Observe that this result implies that E(·, ω) × ν on ∂Ω with K_* < ω < K, under the conditions of Theorem 1.1, uniquely determines J_ε, J_µ on Ω. Indeed, due to the uniqueness for the exterior boundary value problem for the Maxwell system, E(·, ω) × ν on ∂Ω uniquely determines (E, H) on R³ \ Ω and hence the data (1.6), which implies uniqueness of J_ε, J_µ.

The second main result of this paper is an increasing stability of recovery of divergence-free sources J_ε, J_µ from the absorbing boundary data (also called the Leontovich condition)

$$
E(\,\cdot\,,\omega) \times \nu - \alpha(\,\cdot\,) H_\tau(\,\cdot\,,\omega) \quad\text{on } \partial\Omega, \quad\text{for } 0 < \omega < K, \tag{1.9}
$$

or from the tangential component of the electric field

$$
E(\,\cdot\,,\omega) \times \nu \quad\text{on } \partial\Omega, \quad\text{for } 0 < \omega < K,
$$

where ν is the unit outer normal of ∂Ω and H_τ = H − (H · ν)ν is the tangential projection of H on ∂Ω. Here we assume that α(x) ∈ L^∞(∂Ω) and α(x) ≥ c > 0 on ∂Ω. The case α ≡ 1 corresponds to the Silver-Müller boundary condition [7]. In the next result we assume ε = ε₀, µ = µ₀.

Theorem 1.2. Assume that 1 < K, the sources J_µ, J_ε satisfy (1.3), (1.8), and

$$
\|J_\varepsilon\|^2_{(1)}(\Omega) + \|J_\mu\|^2_{(1)}(\Omega) \le M_1^2 \tag{1.10}
$$

or

$$
\|J_\varepsilon\|^2_{(2)}(\Omega) + \|J_\mu\|^2_{(2)}(\Omega) \le M_2^2 \tag{1.11}
$$

for some M₁, M₂ > 0. Then there exists C, depending on diam Ω, ε₀, µ₀, such that

$$
\|J_\varepsilon\|^2_{(0)}(\Omega) + \|J_\mu\|^2_{(0)}(\Omega) \le C\left(\varepsilon_0^2 + \frac{M_1^2}{1 + K^{4/3} E_0^{2/3}}\right), \tag{1.12}
$$

or

$$
\|J_\varepsilon\|^2_{(0)}(\Omega) + \|J_\mu\|^2_{(0)}(\Omega) \le C\left(\varepsilon_1^2 + \frac{M_2^2}{1 + K^{4/3} E_1^{2/3}}\right), \tag{1.13}
$$

for all (E, H) ∈ [H¹(Ω)]⁶ solving (1.1), (1.4), where

$$
\varepsilon_0^2 = \int_0^K \|E(\,\cdot\,,\omega) \times \nu - \alpha H_\tau(\,\cdot\,,\omega)\|^2_{(0)}(\partial\Omega)\, d\omega, \qquad E_0 = |\ln \varepsilon_0|,
$$

and

$$
\varepsilon_1^2 = \int_0^K \|E(\,\cdot\,,\omega) \times \nu\|^2_{(1)}(\partial\Omega)\, d\omega, \qquad E_1 = |\ln \varepsilon_1|.
$$
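To get a feel for how the right-hand side of (1.12) behaves, the following sketch evaluates the bound for purely illustrative values of ε₀, K, M₁, and C (the numbers are hypothetical, not taken from the paper); it simply shows that, for a fixed data error, the logarithmic term shrinks as K grows, so the bound approaches the Lipschitz part Cε₀².

```python
import math

def stability_bound(eps, K, M=1.0, C=1.0):
    """Right-hand side of (1.12): C*(eps^2 + M^2/(1 + K^(4/3)*E^(2/3))),
    with E = |ln eps|. Illustrative values only."""
    E = abs(math.log(eps))
    return C * (eps**2 + M**2 / (1.0 + K**(4.0 / 3.0) * E**(2.0 / 3.0)))

eps = 1e-6  # hypothetical data error
for K in (1.0, 10.0, 100.0, 1000.0):
    print(K, stability_bound(eps, K))
# The printed values decrease monotonically in K toward C*eps^2.
```

The computation only illustrates the qualitative statement made after the theorem: for larger wave number intervals the conditional logarithmic part is suppressed.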

Observe that the stability bounds (1.12), (1.13) contain a Lipschitz-stable part Cε₀² or Cε₁² and a conditional logarithmic-stable part. The logarithmic part is natural and necessary since we deal with elliptic systems. However, as K grows the logarithmic part decreases and the bound becomes dominated by the Lipschitz part. Before going further, we would like to point out that the divergence-free condition (1.8) in Theorems 1.1 and 1.2 is not merely technical: it is necessary for the uniqueness of our inverse problem. To see this, let ϕ, ψ ∈ C¹(R³) be supported in Ω and set E = ∇ϕ/(iω), H = −∇ψ/(iω); then (E, H) satisfies (1.1) with ε = µ = 1, J_ε = ∇ϕ, and J_µ = ∇ψ. Such examples yield non-uniqueness in the determination of the sources ∇ϕ, ∇ψ from E(·, ω), H(·, ω) given outside Ω.
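For completeness, here is a direct check (under the normalization ε = µ = 1 stated above) that this pair indeed produces a non-radiating source:

```latex
% With E = \nabla\varphi/(i\omega), H = -\nabla\psi/(i\omega), and \varepsilon = \mu = 1:
\mathrm{curl}\, E - i\omega H
  = \frac{1}{i\omega}\,\mathrm{curl}\,\nabla\varphi + \nabla\psi
  = \nabla\psi = J_\mu,
\qquad
\mathrm{curl}\, H + i\omega E
  = -\frac{1}{i\omega}\,\mathrm{curl}\,\nabla\psi + \nabla\varphi
  = \nabla\varphi = J_\varepsilon,
% since \mathrm{curl}\,\nabla \equiv 0. Both fields vanish outside \Omega, so the
% boundary data cannot distinguish (J_\varepsilon, J_\mu) = (\nabla\varphi, \nabla\psi) from zero.
```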

The determination of a source using multiple frequencies has received a lot of attention in recent years. For the Helmholtz equation, uniqueness and numerical results were obtained in [14]. First increasing stability results were presented in [5] for some particular cases. These results were proved by direct spatial Fourier analysis methods. In [9], using a different method involving a temporal Fourier transform, sharp bounds on the analytic continuation to higher wave numbers, and exact observability bounds for associated hyperbolic equations, increasing stability bounds were derived for the three-dimensional Helmholtz equation. Later, in [15], the methods and results of [9] were extended to the more complicated case of the two-dimensional Helmholtz equation. We would like to point out that in the works mentioned above one uses the complete Cauchy data on ∂Ω instead of Dirichlet-like data; the latter are much more realistic. For instance, the common acoustical measuring device (the microphone) registers only pressure, while in seismics one typically collects displacements. Such data register only the Dirichlet boundary value on ∂Ω. It should be mentioned that in [28] a spherical Ω was considered and a result on increasing stability from only Dirichlet data on ∂Ω was given, but the norm of the data used there was not the standard norm: it involved the solution operator of the exterior Dirichlet problem. In the recent preprint [6], some results similar to [28] are obtained for elastic and electromagnetic waves.

The idea in the proof of our increasing stability result, Theorem 1.2, is motivated by the recent paper of Entekhabi and the first author [16], where increasing stability bounds are obtained for acoustic and elastic waves using the most natural Sobolev norms of the Dirichlet-type data on an arbitrary domain Ω. As in [9] and [16], in this work we use the Fourier transform in time to reduce our inverse source problem to the identification of the initial data in the time-dependent Maxwell equations from data on the lateral boundary. We derive our increasing stability estimate by using sharp bounds on the analytic continuation of the data from (0, K) onto (0, +∞) given in [9] and subsequently utilized in [15], [28], [6]. A new idea introduced in [16] is to make use of Huygens' principle and known Sakamoto-type energy bounds for the corresponding hyperbolic initial boundary value problem (backward in time). These techniques make it possible to avoid the need for the complete Cauchy data on ∂Ω and for a direct use of exact boundary controllability results. For the time-dependent Maxwell equations in homogeneous media, Huygens' principle is valid. On the other hand, in our problem, in addition to Sakamoto-type energy bounds, we also need regularity estimates for the Maxwell equations with the absorbing boundary condition or with the tangential component of the electric field prescribed on the lateral boundary [10], [13].

The rest of this paper is organized as follows. In Section 2 we prove the uniqueness theorem, Theorem 1.1. We prove the increasing stability in Sections 3 and 4. In Section 3, we use the methods of [9], [16]: in particular, bounds on the analytic continuation of the needed norms of the boundary data from (0, K) onto a sector of the complex plane ω = ω₁ + iω₂, together with sharp bounds from [9] on the harmonic measure of (0, K) in this sector, to derive explicit bounds on the analytic continuation of these norms from (0, K) onto the real axis. In Section 4, we use the Fourier transform in time to transform the source problem for the time-harmonic Maxwell equations into the time-dependent homogeneous Maxwell equations with initial conditions. The derivation of increasing stability relies on the quantitative analytic continuation established in Section 3, Huygens' principle for the Maxwell equations in homogeneous media, and the regularity estimates using boundary conditions.

### 2 Proof of uniqueness

We first show solvability of the direct scattering problem and analyticity of its solution with respect to the wave number ω.

Theorem 2.1. Assume that (1.2), (1.3) are satisfied and J_ε, J_µ ∈ H(div, Ω). Then there is a unique solution (E(·, ω), H(·, ω)) ∈ [H_loc(curl, R³)]² to the scattering problem (1.1), (1.4). This solution has a continuation, (complex) analytic with respect to ω = ℜω + iℑω, onto a neighbourhood of the quarter plane {0 < ℜω, 0 ≤ ℑω}, which for 0 < ℑω satisfies the equation (1.1) and decays exponentially for large |x|:

$$
|E(x, \omega)| + |H(x, \omega)| \le C e^{-C^{-1}|x|}, \tag{2.14}
$$

with some constant C depending only on E, H, ω.

We first prove a uniqueness result from boundary data.

Lemma 2.2. Assume that ε and µ are C¹(R³) positive-definite matrix-valued functions. Let Ω̃ be a domain in R³. If ω ≠ 0, curl E − iωµH = curl H + iωεE = 0 in Ω̃, and E × ν = H × ν = 0 on Γ ⊂ ∂Ω̃, then E = H = 0 in Ω̃.

Before proving this lemma we recall that, as is widely known, the Maxwell equations are invariant under a change of coordinates. To be precise, let x → x′ be a coordinate transform and J = (J_{kl}) with J_{kl} = ∂x′_k/∂x_l the associated Jacobian matrix. Then in the new coordinates x′ we have

$$
\begin{cases}
\mathrm{curl}'\, H' = -i\omega \varepsilon' E',\\
\mathrm{curl}'\, E' = i\omega \mu' H',
\end{cases}
$$

where

$$
E' = (J^T)^{-1} E, \quad H' = (J^T)^{-1} H, \quad \varepsilon' = \frac{J \varepsilon J^T}{\det J}, \quad \mu' = \frac{J \mu J^T}{\det J}.
$$
We now prove Lemma 2.2.

Proof. First we observe that by elliptic regularity (E, H) ∈ C¹(Ω̃). Let P ∈ Γ. We claim that E(P) = H(P) = 0. Without loss of generality we assume that P is the origin, that Γ near P is the graph of a function x₃ = γ(x₁, x₂), and moreover that ∂₁γ(0) = ∂₂γ(0) = 0. Let the change of coordinates x → x′ be defined by x′₁ = x₁, x′₂ = x₂, x′₃ = x₃ − γ(x₁, x₂) near 0. Then we have

$$
J = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -\partial_1\gamma & -\partial_2\gamma & 1 \end{pmatrix}
$$

and det J = 1. In the new coordinates the unit outer normal is ν′ = (0, 0, −1), so E′ × ν′ = H′ × ν′ = 0 implies E′₁ = E′₂ = 0, H′₁ = H′₂ = 0 on {x′₃ = 0}. In particular ∂′₂H′₁(0) = ∂′₁H′₂(0) = 0, i.e., ∂′₁H′₂(0) − ∂′₂H′₁(0) = 0. Next, from the third component of the equation curl′ H′ = −iωε′E′, we see that

$$
-i\omega \varepsilon'_{33}(0) E'_3(0) = \partial'_1 H'_2(0) - \partial'_2 H'_1(0) = 0,
$$

and thus E′₃(0) = 0. Transforming back to the original coordinates immediately gives E(0) = 0. Likewise, we can show that H(0) = 0. In other words, E = H = 0 on Γ. We now apply the unique continuation result obtained in [29] to conclude that E = H = 0 in Ω̃.

We also need the well-posedness and regularity of the boundary value problem for the Maxwell equations

$$
\begin{cases}
\mathrm{curl}\, E^* - i\omega\mu H^* = J_\mu^* & \text{in } B,\\
\mathrm{curl}\, H^* + i\omega\varepsilon E^* = J_\varepsilon^* & \text{in } B,\\
E^* \times \nu = 0 & \text{on } \partial B,
\end{cases} \tag{2.15}
$$

where B is a ball and the source J* = (J_µ*, J_ε*) ∈ [L²(B)]⁶.
Lemma 2.3. There exists a discrete set

$$
T = \{\cdots, \omega_{-2}, \omega_{-1}, \omega_1, \omega_2, \cdots\}
$$

of nonzero real values, with −∞ ← ⋯ ≤ ω₋₂ ≤ ω₋₁ ≤ ω₁ ≤ ω₂ ≤ ⋯ → ∞, such that for any ω ∉ T ∪ {0} there is a unique solution (E*(ω; J*), H*(ω; J*)) to (2.15), and (E*(ω; ·), H*(ω; ·)) is a continuous linear operator from [L²(B)]⁶ into H(curl, B)² which is analytic in ω ∈ C \ (T ∪ {0}). Let {ω_k(B)}_{k=−∞}^∞ and {ω_k(B′)}_{k=−∞}^∞ denote the discrete sets described above corresponding to balls B and B′. If B ⊂ B′, then ω_k(B′) < ω_k(B) for k > 0 and ω_k(B′) > ω_k(B) for k < 0.

Proof. We first study the eigenvalue problem

$$
\mathrm{curl}\, u - i\omega\mu v = 0 \ \text{in } B, \quad \mathrm{curl}\, v + i\omega\varepsilon u = 0 \ \text{in } B, \quad u \times \nu = 0 \ \text{on } \partial B. \tag{2.16}
$$

The eigenvalue problem (2.16) is equivalent to the eigenvalue problem for u:

$$
\begin{cases}
\mathrm{curl}\,(\mu^{-1} \mathrm{curl}\, u) = \omega^2 \varepsilon u & \text{in } B,\\
u \times \nu = 0 & \text{on } \partial B.
\end{cases} \tag{2.17}
$$

Indeed, it is clear that if ω ≠ 0 is an eigenvalue of (2.16), then ω² is an eigenvalue of (2.17). Conversely, if ω² is an eigenvalue of (2.17) with eigenfunction u, then setting v = µ⁻¹curl u/(iω) gives curl u − iωµv = 0 and curl v + iωεu = 0.

The eigenvalue problem (2.17) was completely analyzed in [25]. Recall from [25, Theorem 4.34, page 193] that there exists an infinite number of positive eigenvalues ω_k² of (2.17), with corresponding eigenfunctions u_k ∈ V_{0,ε}, where

$$
V_{0,\varepsilon} = \{u \in H_0(\mathrm{curl}, B) : (\varepsilon u, \psi)_{L^2(B)} = 0 \text{ for all } \psi \in H_0(\mathrm{curl}, B) \text{ with } \mathrm{curl}\, \psi = 0 \text{ in } B\}.
$$

The eigenvalues {ω_k² > 0} have finite multiplicities and tend to infinity as k → ∞. Moreover, {u_k}_{k=1}^∞ form a complete orthonormal system of (V_{0,ε}, (·,·)_{µ,ε}), where the inner product is

$$
(u, v)_{\mu,\varepsilon} = \int_B \mu^{-1} \mathrm{curl}\, u \cdot \mathrm{curl}\, \bar v\, dx + \int_B \varepsilon u \cdot \bar v\, dx.
$$

Consequently, we have the formula

$$
\lambda_k := \frac{1}{1 + \omega_k^2} = (\varepsilon u_k, u_k)_{L^2(B)} = \frac{(\varepsilon u_k, u_k)_{L^2(B)}}{(u_k, u_k)_{\mu,\varepsilon}}.
$$
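The identity above follows from one integration by parts; assuming the normalization ‖u_k‖_{µ,ε} = 1, the computation runs:

```latex
% Multiply (2.17) by \bar u_k and integrate by parts (u_k \times \nu = 0 on \partial B):
\int_B \mu^{-1}\mathrm{curl}\, u_k \cdot \mathrm{curl}\, \bar u_k \, dx
  = \omega_k^2 \int_B \varepsilon u_k \cdot \bar u_k \, dx ,
% hence, adding (\varepsilon u_k, u_k)_{L^2(B)} to both sides,
(u_k, u_k)_{\mu,\varepsilon} = (1 + \omega_k^2)\, (\varepsilon u_k, u_k)_{L^2(B)} ,
% so that, with (u_k, u_k)_{\mu,\varepsilon} = 1,
\lambda_k = \frac{1}{1+\omega_k^2}
  = \frac{(\varepsilon u_k, u_k)_{L^2(B)}}{(u_k, u_k)_{\mu,\varepsilon}}
  = (\varepsilon u_k, u_k)_{L^2(B)} .
```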

Note that λ₁ ≥ λ₂ ≥ ⋯ → 0. It is not difficult to prove the following variational characterization of λ_k:

$$
\lambda_k = \max_{U \subset V_{0,\varepsilon},\, \dim U = k}\ \min_{u \in U,\, u \neq 0} \frac{(\varepsilon u, u)_{L^2(B)}}{(u, u)_{\mu,\varepsilon}}, \qquad k = 1, 2, \cdots. \tag{2.18}
$$
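The monotonicity consequence drawn below from (2.18) is a max-min argument of Courant-Fischer type: enlarging the admissible family of subspaces can only increase the max-min value. A finite-dimensional analogue of this mechanism (a sketch only; the random symmetric matrix is arbitrary test data, not related to (2.17)) is the Cauchy interlacing between a symmetric matrix and a principal submatrix, which numpy makes easy to check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                    # symmetric "large domain" operator
Asub = A[:m, :m]                     # principal submatrix: "small domain"

lam = np.sort(np.linalg.eigvalsh(A))[::-1]     # descending eigenvalues of A
mu = np.sort(np.linalg.eigvalsh(Asub))[::-1]   # descending eigenvalues of Asub

# The max-min over subspaces of the smaller space ranges over fewer
# competitors, so each mu_k is dominated by lam_k (Cauchy interlacing).
assert all(mu[k] <= lam[k] + 1e-10 for k in range(m))
```

The zero-extension argument used in the proof below plays exactly the role of embedding the smaller trial space into the larger one.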
An easy consequence of (2.18) is that if B ⊂ B′, then

$$
\lambda_k(B) \le \lambda_k(B') \quad\text{for each } k, \tag{2.19}
$$

where λ_k(B) = 1/(1 + ω_k²(B)), λ_k(B′) = 1/(1 + ω_k²(B′)), and ω_k²(B), ω_k²(B′) are the eigenvalues of (2.17) corresponding to B and B′, respectively. We actually want to show that strict monotonicity holds, i.e., for each k,

$$
\lambda_k(B) < \lambda_k(B') \quad\text{if } B \subset B', \tag{2.20}
$$

which is equivalent to ω_k²(B) > ω_k²(B′) if B ⊂ B′.

We adopt the argument from [33, Theorem 2.3] and prove (2.20) by contradiction. Assume that λ_k(B) = λ_k(B′). Since every λ_k(B′) has finite multiplicity and λ_k(B′) → 0, there exists λ_n(B′) < λ_k(B′) for some n. We now choose n nested balls satisfying

$$
B = B_1 \subset B_2 \subset \cdots \subset B_n = B'.
$$

Then (2.19) implies

$$
\lambda_k(B) = \lambda_k(B_1) \le \lambda_k(B_2) \le \cdots \le \lambda_k(B_n) = \lambda_k(B'),
$$

so all these values coincide. Denote by u_{k,j} the eigenfunction corresponding to λ_k(B_j) with ‖u_{k,j}‖_{µ,ε} = 1, j = 1, 2, ⋯, n. Abusing notation, we also write u_{k,j} for the zero extension to B′ of u_{k,j}, originally defined on B_j; we still have ‖u_{k,j}‖_{µ,ε} = 1, with the integrals now evaluated over B′.

Now we show that {u_{k,j}}_{j=1}^n are linearly independent. Assume that Σ_{j=1}^n a_j u_{k,j} = 0 in B′ with a_n ≠ 0; then u_{k,n} = 0 in B′ \ B_{n−1}. By the unique continuation property in Lemma 2.2, we then have u_{k,n} ≡ 0 in B′, which is a contradiction. The other coefficients are treated similarly. Considering the subspace spanned by {u_{k,j}}_{j=1}^n in the variational characterization (2.18) of λ_n(B′), we obtain λ_k(B′) ≤ λ_n(B′), which is a contradiction.

To show the unique solvability of (2.15) for ω ∉ T ∪ {0}, we consider the operator L : D(L) → X := [L²(B)]³ × [L²(B)]³ given by

$$
L = \begin{pmatrix} 0 & -i\mu^{-1}\mathrm{curl} \\ i\varepsilon^{-1}\mathrm{curl} & 0 \end{pmatrix},
$$

where D(L) = H₀(curl, B) × H(curl, B). It is not hard to check that L is self-adjoint in X with respect to the inner product

$$
\left\langle \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \right\rangle = \int_B (\mu u_1 \cdot \bar v_1 + \varepsilon u_2 \cdot \bar v_2),
$$

and that the range of L, Ran(L), is closed (see [27, Corollary 8.10]). Also, X admits the orthogonal decomposition

$$
X = \mathrm{Ker}(L) \oplus \mathrm{Ran}(L).
$$

Let P be the orthogonal projection of X onto Ran(L).

Let (J_µ*, J_ε*) ∈ [L²(B)]³ × [L²(B)]³, i.e., F := (−iµ⁻¹J_µ*, −iε⁻¹J_ε*) ∈ X. To solve (2.15), we consider

$$
(L - \omega)W = F,
$$

where W ∈ D(L). If ω ∉ T ∪ {0}, then L − ω is invertible on Ran(L), and the solution W is given by

$$
W = (L - \omega)^{-1} P F - \omega^{-1}(I - P)F,
$$

for, since L(I − P)F = 0,

$$
(L - \omega)W = (L - \omega)(L - \omega)^{-1} P F - (L - \omega)\omega^{-1}(I - P)F = P F + (I - P)F = F.
$$

Moreover, we can see that the solution W is analytic in ω ∈ C \ (T ∪ {0}).

Remark 2.4. When ε = ε₀I₃ and µ = µ₀I₃, we denote the corresponding spectrum of L by T₀.

We now prove Theorem 2.1.

Proof. Let 0 < ℜω and 0 ≤ ℑω. We first establish the uniqueness; in other words, we want to prove that if (E, H) satisfies (1.1) with J_ε ≡ J_µ ≡ 0 and (1.4), then E = H = 0 in R³. Let Ω₀ be an open set containing Ω̄ with closure contained in a ball B. By the Gauss divergence theorem and the Maxwell equations (1.1), we have

$$
\int_{\partial B} \nu \times E \cdot \bar H\, dS = \int_B (\mathrm{curl}\, E \cdot \bar H - E \cdot \mathrm{curl}\, \bar H) = \int_B (i\omega\mu H \cdot \bar H - i\bar\omega \varepsilon E \cdot \bar E),
$$

and hence

$$
\Re \int_{\partial B} \nu \times E \cdot \bar H\, dS = -\Im\omega \int_B (\mu H \cdot \bar H + \varepsilon E \cdot \bar E)\, dx \le 0. \tag{2.21}
$$

On the other hand, by (2.21) (and since µ = µ₀I₃ near ∂B), we see that

$$
\begin{aligned}
\Im\left(\omega \int_{\partial B} \nu \times E \cdot \mathrm{curl}\, \bar E\, dS\right)
&= \Im\left(\omega \int_{\partial B} \nu \times E \cdot (-i\bar\omega \mu_0 \bar H)\, dS\right)\\
&= \mu_0 |\omega|^2\, \Im\left(-i \int_{\partial B} \nu \times E \cdot \bar H\, dS\right)
= -\mu_0 |\omega|^2\, \Re \int_{\partial B} \nu \times E \cdot \bar H\, dS \ge 0.
\end{aligned} \tag{2.22}
$$

In view of (2.22), using [11, Theorem 4.17], we obtain that E = 0 in R³ \ B. Similarly, we can prove that H = 0 in R³ \ B. Combining this with Lemma 2.2 concludes that E = H = 0 in R³.

We prove the existence by the Lax-Phillips method. Let ω ∈ {0 < ℜω, 0 ≤ ℑω} and let Ω₀ be an open set containing Ω̄. In view of the strict monotonicity of eigenvalues with respect to the domain proved in Lemma 2.3, one can choose a ball B with Ω̄₀ ⊂ B so that ω ∉ T ∪ T₀. Let φ be a C^∞(R³) cut-off function with φ = 1 on Ω and φ = 0 outside of Ω₀. We look for a solution

$$
\begin{pmatrix} E \\ H \end{pmatrix} = \begin{pmatrix} \Phi \\ \Psi \end{pmatrix} - \phi\left(\begin{pmatrix} \Phi \\ \Psi \end{pmatrix} - \begin{pmatrix} E^* \\ H^* \end{pmatrix}\right) \tag{2.23}
$$

to the system (1.1), where (E*, H*) = (E*, H*)(·, J*), with J* = (J_µ*, J_ε*), is a solution to the boundary value problem

$$
\begin{cases}
\mathrm{curl}\, E^* - i\omega\mu H^* = J_\mu^* & \text{in } B,\\
\mathrm{curl}\, H^* + i\omega\varepsilon E^* = J_\varepsilon^* & \text{in } B,\\
E^* \times \nu = 0 & \text{on } \partial B,
\end{cases} \tag{2.24}
$$

and J* ∈ [H(div, B)]² with supp J* ⊂ B will be determined later. Moreover, (Φ, Ψ) is the solution to

$$
\begin{cases}
\mathrm{curl}\, \Phi - i\omega\mu_0 \Psi = J_\mu^* & \text{in } \mathbb{R}^3,\\
\mathrm{curl}\, \Psi + i\omega\varepsilon_0 \Phi = J_\varepsilon^* & \text{in } \mathbb{R}^3,
\end{cases} \tag{2.25}
$$

satisfying the radiation condition

$$
\lim_{|x|\to\infty} |x|\left(\sqrt{\varepsilon_0}\, \Phi \times \sigma + \sqrt{\mu_0}\, \Psi\right)(x) = 0, \qquad \lim_{|x|\to\infty} |x|\left(\sqrt{\mu_0}\, \Psi \times \sigma - \sqrt{\varepsilon_0}\, \Phi\right)(x) = 0. \tag{2.26}
$$

It is well known (see [8], p. 78, Theorem 2) that

$$
\begin{aligned}
\Phi(x, \omega) &= \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|} \left(i\omega\mu_0 J_\varepsilon^*(y) + \mathrm{curl}\, J_\mu^*(y)\right) dy,\\
\Psi(x, \omega) &= \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|} \left(-i\omega\varepsilon_0 J_\mu^*(y) + \mathrm{curl}\, J_\varepsilon^*(y)\right) dy, \qquad \kappa = \omega\sqrt{\varepsilon_0\mu_0}.
\end{aligned} \tag{2.27}
$$

Since φ = 1 in Ω, we have

$$
J_\mu^* = J_\mu \quad\text{and}\quad J_\varepsilon^* = J_\varepsilon \quad\text{in } \Omega.
$$

In R³ \ Ω̄, we have

$$
\begin{aligned}
\mathrm{curl}\, E - i\omega\mu H &= \mathrm{curl}\,(\Phi - \phi(\Phi - E^*)) - i\omega\mu_0(\Psi - \phi(\Psi - H^*))\\
&= \mathrm{curl}\, \Phi - \phi\, \mathrm{curl}\,(\Phi - E^*) - \nabla\phi \times (\Phi - E^*) - i\omega\mu_0(\Psi - \phi(\Psi - H^*))\\
&= J_\mu^* - \nabla\phi \times (\Phi - E^*) - \phi\left[\mathrm{curl}\,(\Phi - E^*) - i\omega\mu_0(\Psi - H^*)\right]\\
&= J_\mu^* - \nabla\phi \times (\Phi - E^*),
\end{aligned}
$$

and similarly

$$
\mathrm{curl}\, H + i\omega\varepsilon E = J_\varepsilon^* - \nabla\phi \times (\Psi - H^*).
$$

We introduce the operator

$$
A(\omega)J^* = \begin{pmatrix} -\nabla\phi \times (\Phi - E^*) \\ -\nabla\phi \times (\Psi - H^*) \end{pmatrix}.
$$

Hence (E, H) is a scattering solution of (1.1) if and only if

$$
J = J^* + A J^*. \tag{2.28}
$$

Note that supp AJ* ⊂ Ω₀ \ Ω̄.

To prove the existence for (2.28), we first show that I + A is Fredholm from [H(div, B)]² into itself. It follows from [2] that Φ(J*), Ψ(J*), E*(J*), H*(J*) are continuous linear operators from [H(div, B)]² into [H¹(B)]⁶. Moreover, by direct calculations,

$$
\mathrm{div}\,(\nabla\phi \times E^*) = -\nabla\phi \cdot \mathrm{curl}\, E^* = -\nabla\phi \cdot (i\omega\mu_0 H^* + J_\mu^*),
\qquad
\mathrm{div}\,(\nabla\phi \times \Phi) = -\nabla\phi \cdot \mathrm{curl}\, \Phi = -\nabla\phi \cdot (i\omega\mu_0 \Psi + J_\mu^*),
$$

due to (2.24), since µ = µ₀ outside Ω and ∇φ = 0 on Ω. Hence

$$
\mathrm{div}\,(\nabla\phi \times (\Phi - E^*)) = \nabla\phi \cdot (i\omega\mu_0 (H^* - \Psi)).
$$

Similarly,

$$
\mathrm{div}\,(\nabla\phi \times (\Psi - H^*)) = -\nabla\phi \cdot (i\omega\varepsilon_0 (E^* - \Phi)).
$$

Summing up, A is a continuous linear operator from [H(div, B)]² into [H¹(div, B)]², where H¹(div, B) = {u ∈ [H¹(B)]³ : div u ∈ H¹(B)} with the natural norm. Since H¹(B) is compactly embedded into L²(B), A is compact from [H(div, B)]² into itself.

Now, to establish the existence, it suffices to prove the injectivity of I + A. Let 0 = J* + AJ*. Since J = 0, by the uniqueness shown at the beginning of the proof we have E = H = 0 on B, and thus from (2.23)

$$
\begin{pmatrix} \Phi \\ \Psi \end{pmatrix} = \phi\left(\begin{pmatrix} \Phi \\ \Psi \end{pmatrix} - \begin{pmatrix} E^* \\ H^* \end{pmatrix}\right).
$$

Since φ = 0 on B \ Ω₀, we have Φ = 0 on ∂B. Now from (2.24), (2.25) we obtain

$$
\begin{cases}
\mathrm{curl}\,(\Phi - E^*) - i\omega\mu(\Psi - H^*) = 0 & \text{in } B,\\
\mathrm{curl}\,(\Psi - H^*) + i\omega\varepsilon(\Phi - E^*) = 0 & \text{in } B,\\
(\Phi - E^*) \times \nu = 0 & \text{on } \partial B.
\end{cases}
$$

By the choice of B, a solution to this boundary value problem is unique, so Φ − E* = Ψ − H* = 0 on B and hence Φ = Ψ = 0; thus AJ* = 0, and from (2.28) we conclude that J* = 0.

Summing up, the Fredholm operator I + A(ω) is injective and hence invertible. Since A(ω) is analytic with respect to ω, so is its inverse, and therefore so is J*. In view of the explicit representation formulas for Φ, Ψ in (2.27) (see for example [8, (47)]) and the analyticity of (E*, H*) in ω proved in Lemma 2.3, the analyticity of (E(·, ω), H(·, ω)) follows. The exponential decay (2.14) follows from (2.23), (2.27).

Now we prove Theorem 1.1.

Proof. Due to the linearity, it suffices to show that E × ν = H × ν = 0 on Γ for K_* < ω < K implies J_ε = J_µ = 0. Let (e, h) be a solution to the dynamical initial boundary value problem:

$$
\partial_t(\varepsilon e) - \mathrm{curl}\, h = 0, \quad \partial_t(\mu h) + \mathrm{curl}\, e = 0 \quad\text{in } \mathbb{R}^3 \times (0, \infty), \tag{2.29}
$$

$$
e = -\sqrt{2\pi}\, \varepsilon^{-1} J_\varepsilon, \quad h = \sqrt{2\pi}\, \mu^{-1} J_\mu \quad\text{on } \mathbb{R}^3 \times \{0\}. \tag{2.30}
$$

Thanks to (1.8) and (2.29), in addition to (2.30), we have the compatibility conditions

$$
\mathrm{div}(\varepsilon e) = 0, \quad \mathrm{div}(\mu h) = 0 \quad\text{in } \mathbb{R}^3 \times (0, \infty). \tag{2.31}
$$

As is known (see for example [17]), there is a unique solution (e, h) ∈ L^∞((0, T); [H(curl, R³)]²) of this problem for any T > 0, and moreover, by using the standard energy estimates, i.e., scalarly multiplying (2.29) by (e, h) exp(−γ₀t) and integrating by parts over R³ × (0, t), we have

$$
\|e(t, \cdot)\|_{(0)}(\mathbb{R}^3) + \|h(t, \cdot)\|_{(0)}(\mathbb{R}^3) \le C_0 \exp(\gamma_0 t), \tag{2.32}
$$

where the positive γ₀ and C₀ might depend on ε, µ, J. Then the following Fourier-Laplace transforms are well defined:

$$
E_*(x, \omega) = \frac{1}{\sqrt{2\pi}} \int_0^\infty e(t, x) \exp(i\omega t)\, dt, \qquad H_*(x, \omega) = \frac{1}{\sqrt{2\pi}} \int_0^\infty h(t, x) \exp(i\omega t)\, dt, \tag{2.33}
$$

with ω = ω₁ + iγ, γ₀ < γ.

Approximating J_ε, J_µ by smooth functions and integrating by parts, we have

$$
0 = \int_0^\infty (\partial_t(\varepsilon e) - \mathrm{curl}\, h)(t, \cdot) \exp(i\omega t)\, dt
= -\varepsilon e(0, \cdot) - i\omega \int_0^\infty \varepsilon e(t, \cdot) \exp(i\omega t)\, dt - \mathrm{curl} \int_0^\infty h(t, \cdot) \exp(i\omega t)\, dt
$$

and

$$
0 = \int_0^\infty (\partial_t(\mu h) + \mathrm{curl}\, e)(t, \cdot) \exp(i\omega t)\, dt
= -\mu h(0, \cdot) - i\omega \int_0^\infty \mu h(t, \cdot) \exp(i\omega t)\, dt + \mathrm{curl} \int_0^\infty e(t, \cdot) \exp(i\omega t)\, dt.
$$
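Spelling out how the initial conditions enter (using the transforms (2.33)), the first of these identities gives the electric Maxwell equation explicitly:

```latex
% Dividing the first identity by \sqrt{2\pi} and using (2.33):
0 = -\frac{\varepsilon e(0,\cdot)}{\sqrt{2\pi}} - i\omega\,\varepsilon E_* - \mathrm{curl}\, H_* ,
% and (2.30) gives \varepsilon e(0,\cdot) = -\sqrt{2\pi}\, J_\varepsilon, so
\mathrm{curl}\, H_* + i\omega\,\varepsilon E_* = J_\varepsilon ,
% which is the second equation of (1.1); the magnetic equation
% \mathrm{curl}\, E_* - i\omega\mu H_* = J_\mu follows in the same way
% from the second identity and the initial value of h in (2.30).
```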

Hence (E_*, H_*) solves (1.1) with ω = ω₁ + iγ, γ₀ < γ. In addition, (E_*, H_*) decays exponentially for large |x|. Indeed, due to the finite speed of propagation in hyperbolic problems, we have e(t, x) = 0 for x ∈ R³ \ B(R) if t < θR − R₀, for some θ = θ(ε, µ) > 0, where R₀ > 0 satisfies Ω̄ ⊂ B(R₀) and R > R₀. Hereafter, B(R) denotes the ball of radius R centered at 0. Hence from (2.33), for any δ > 0,

$$
\begin{aligned}
\int_{B(R+4)\setminus B(R)} |E_*|^2
&\le \int_{B(R+4)\setminus B(R)} \left| \int_{\theta R - R_0}^{+\infty} e(t, x) \exp(i\omega_1 t - (\gamma - \delta)t) \exp(-\delta t)\, dt \right|^2 dx\\
&\le \int_{B(R+4)\setminus B(R)} \left( \int_{\theta R - R_0}^{+\infty} |e(t, x)|^2 \exp(-2(\gamma - \delta)t)\, dt \right) \left( \int_{\theta R - R_0}^{+\infty} \exp(-2\delta t)\, dt \right) dx\\
&\le C(C_0, \delta, R_0) \exp(-2\delta\theta R) \int_{\theta R - R_0}^{+\infty} \|e(t, \cdot)\|^2_{(0)}(\mathbb{R}^3 \setminus B(R)) \exp(-2\gamma_0 t) \exp(-2(\gamma - \gamma_0 - \delta)t)\, dt\\
&\le C(C_0, \delta, R_0) \exp(-2\delta\theta R) \int_{\theta R - R_0}^{+\infty} \exp(-2(\gamma - \gamma_0 - \delta)t)\, dt\\
&\le C(C_0, \gamma, \gamma_0, R_0) \exp(-2\delta\theta R),
\end{aligned}
$$

where we have used (2.32) and chosen δ = (γ − γ₀)/2. The same bound holds for H_*, and we obtain

$$
\int_{B(R+4)\setminus B(R)} (|E_*|^2 + |H_*|^2) \le C(C_0, \gamma, \gamma_0, R_0) \exp(-2\delta\theta R). \tag{2.34}
$$

By Theorem 2.1, the vector function (E(·, ω), H(·, ω)) solving (1.1), (1.4) has a complex analytic extension from (0, ∞) into a neighbourhood in the first quarter plane {0 < ℜω, 0 ≤ ℑω}; by the uniqueness of the analytic continuation, this extension satisfies (1.1), (1.4) and decays exponentially as |x| → ∞ when 0 < ℑω. In particular, the bound (2.34) also holds for E, H. To show that E = E_*, H = H_*, we let E′ = E_* − E, H′ = H_* − H. Since (E, H) and (E_*, H_*) solve the Maxwell system (1.1), we see that curl E′ − iωµH′ = 0, curl H′ + iωεE′ = 0 in R³. Let ϕ be a C¹(R³) cut-off function with ϕ = 1 on B(R), ϕ = 0 on R³ \ B(R + 1), and 0 ≤ ϕ ≤ 1. Using the homogeneous Maxwell equations for (E′, H′), we obtain

$$
\mathrm{curl}\,(\varphi E') - i\omega\mu \varphi H' = \nabla\varphi \times E', \qquad \mathrm{curl}\,(\varphi H') + i\omega\varepsilon \varphi E' = \nabla\varphi \times H'. \tag{2.35}
$$

By (2.35), integrating by parts and using ϕ = 0 on ∂B(R + 4), we obtain

$$
\begin{aligned}
0 &= \int_{B(R+4)} \left(\mathrm{curl}\,(\varphi E') \cdot (\varphi \bar H') - (\varphi E') \cdot \mathrm{curl}\,(\varphi \bar H')\right)\\
&= \int_{B(R+4)} \left(i\omega (\varphi \mu H') \cdot (\varphi \bar H') - (\varphi E') \cdot (i\bar\omega \varepsilon \varphi \bar E')\right)
+ \int_{B(R+4)\setminus B(R)} \left((\nabla\varphi \times E') \cdot (\varphi \bar H') - (\varphi E') \cdot (\nabla\varphi \times \bar H')\right).
\end{aligned}
$$

Therefore, taking the real part of the above relation yields

$$
\Im\omega \int_{B(R+4)} \varphi^2 (\varepsilon E' \cdot \bar E' + \mu H' \cdot \bar H') = \Re \int_{B(R+4)\setminus B(R)} \left((\nabla\varphi \times E') \cdot (\varphi \bar H') - (\varphi E') \cdot (\nabla\varphi \times \bar H')\right),
$$

and hence from (2.34) and the exponential decay of E, H we derive

$$
\int_{B(R)} (E' \cdot \bar E' + H' \cdot \bar H') \le C(C_0, \gamma, \gamma_0, R_0) \exp(-\delta\theta R).
$$

Letting R → +∞, we conclude that E′ = H′ = 0 on R³, and so

$$
E(\,\cdot\,, \omega) = E_*(\,\cdot\,, \omega), \quad H(\,\cdot\,, \omega) = H_*(\,\cdot\,, \omega) \tag{2.36}
$$

when ω = ω₁ + iγ, 0 < ω₁, and γ₀ < γ.

By Lemma 2.2, E(·, ω) = H(·, ω) = 0 on R³ \ Ω when K_* < ω < K, and due to the uniqueness of the analytic continuation with respect to ω in {0 < ℜω, 0 ≤ ℑω}, it follows from (2.36) that E_*(·, ω) = H_*(·, ω) = 0 on ∂Ω for ω = ω₁ + iγ, 0 < ω₁, γ₀ < γ. Now, by the uniqueness of the Fourier-Laplace transform (2.33), we conclude that e = h = 0 on ∂Ω × (0, ∞). In view of the structure assumption (1.7), by the uniqueness of the Cauchy problem for the dynamical Maxwell system (2.29), (2.31), we conclude that e = h = 0 in Ω × {T₀} for some (large) T₀ [12]. Since e = h = 0 on ∂Ω × (0, T₀), by uniqueness in the (backward) hyperbolic initial boundary value problem with the initial data at t = T₀, we have e = h = 0 in Ω × (0, T₀). Hence e = h = 0 on Ω × {0}, and from (2.30) it follows that J_ε = J_µ = 0. The proof is complete.

### 3 Quantitative analytic continuation

We now start preparations for the proof of Theorem 1.2. Since ε = ε₀, µ = µ₀, (1.1) becomes

$$
\begin{cases}
\mathrm{curl}\, E - i\omega\mu_0 H = J_\mu & \text{in } \mathbb{R}^3,\\
\mathrm{curl}\, H + i\omega\varepsilon_0 E = J_\varepsilon & \text{in } \mathbb{R}^3,
\end{cases} \tag{3.37}
$$

with the radiation condition (1.4), provided ω > 0. As in (2.27), these equations and the radiation condition are equivalent to the integral representation

$$
\begin{aligned}
E(x, \omega) &= \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|} \left(i\omega\mu_0 J_\varepsilon(y) + \mathrm{curl}\, J_\mu(y)\right) dy,\\
H(x, \omega) &= \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|} \left(-i\omega\varepsilon_0 J_\mu(y) + \mathrm{curl}\, J_\varepsilon(y)\right) dy,
\end{aligned} \tag{3.38}
$$

where κ = ω√(ε₀µ₀). Moreover, as follows from the standard representation of radiating solutions of the Helmholtz equations, (3.38) is equivalent to the Helmholtz equations

$$
\Delta E + \kappa^2 E = -i\omega\mu_0 J_\varepsilon - \mathrm{curl}\, J_\mu, \qquad \Delta H + \kappa^2 H = i\omega\varepsilon_0 J_\mu - \mathrm{curl}\, J_\varepsilon \quad\text{in } \mathbb{R}^3, \tag{3.39}
$$

together with the Sommerfeld radiation condition.
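The kernel in (3.38) is the outgoing fundamental solution of the Helmholtz operator, which is the reason for the equivalence of (3.38) and (3.39). As a quick sanity check (a numerical sketch only, not part of the paper's argument), one can verify by finite differences that u(x) = exp(iκ|x|)/(4π|x|) satisfies Δu + κ²u = 0 away from the origin:

```python
import cmath
import math

def u(x, y, z, kappa):
    """Outgoing fundamental solution of Helmholtz: exp(i*kappa*r)/(4*pi*r)."""
    r = math.sqrt(x * x + y * y + z * z)
    return cmath.exp(1j * kappa * r) / (4 * math.pi * r)

def helmholtz_residual(x, y, z, kappa, h=1e-3):
    """Central-difference Laplacian of u plus kappa^2 * u at (x, y, z)."""
    lap = (u(x + h, y, z, kappa) + u(x - h, y, z, kappa)
           + u(x, y + h, z, kappa) + u(x, y - h, z, kappa)
           + u(x, y, z + h, kappa) + u(x, y, z - h, kappa)
           - 6 * u(x, y, z, kappa)) / h**2
    return lap + kappa**2 * u(x, y, z, kappa)

res = helmholtz_residual(0.7, 0.5, 0.3, kappa=2.0)
print(abs(res))  # small: u solves the Helmholtz equation away from r = 0
```

The residual is of the order of the O(h²) discretization error, consistent with u being an exact solution away from the singularity.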

In particular, we have

$$
E(x, \omega) \times \nu(x) = \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|} \left(i\omega\mu_0 J_\varepsilon(y) + \mathrm{curl}\, J_\mu(y)\right) \times \nu(x)\, dy, \qquad x \in \partial\Omega, \tag{3.40}
$$

and

$$
E(x, \omega) \times \nu(x) - \alpha(x) H_\tau(x, \omega) = \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|}\, G(x, y, \omega)\, dy, \qquad x \in \partial\Omega, \tag{3.41}
$$

where

$$
\begin{aligned}
G(x, y, \omega) = {}&\left(i\omega\mu_0 J_\varepsilon(y) + \mathrm{curl}\, J_\mu(y)\right) \times \nu(x)\\
&- \alpha(x)\left\{\left(-i\omega\varepsilon_0 J_\mu(y) + \mathrm{curl}\, J_\varepsilon(y)\right) - \left[\left(-i\omega\varepsilon_0 J_\mu(y) + \mathrm{curl}\, J_\varepsilon(y)\right) \cdot \nu(x)\right] \nu(x)\right\}.
\end{aligned}
$$

Observe that the formulae on the right-hand sides of (3.40) and (3.41) are defined even when ω < 0. Therefore, for ω < 0, we define E(x, ω) × ν(x) and E(x, ω) × ν(x) − α(x)H_τ(x, ω) by the formulae (3.40) and (3.41), respectively.

Now it follows from (3.41) that

$$
\int_{-\infty}^\infty \|E \times \nu(\,\cdot\,, \omega) - \alpha H_\tau(\,\cdot\,, \omega)\|^2_{(0)}(\partial\Omega)\, d\omega = I_0(k) + \int_{k < |\omega|} \|E \times \nu(\,\cdot\,, \omega) - \alpha H_\tau(\,\cdot\,, \omega)\|^2_{(0)}(\partial\Omega)\, d\omega, \tag{3.42}
$$

where I₀(k) is defined as

$$
I_0(k) = 2 \int_0^k \int_{\partial\Omega} \left( \int_\Omega \frac{\exp(i\kappa|x-y|)}{4\pi|x-y|}\, G(x, y, \omega)\, dy \right) \cdot \left( \int_\Omega \frac{\exp(-i\kappa|x-y|)}{4\pi|x-y|}\, G(x, y, -\omega)\, dy \right) d\Gamma(x)\, d\omega. \tag{3.43}
$$

As above, we write

$$
\int_{-\infty}^\infty \|E \times \nu(\,\cdot\,, \omega)\|^2_{(1)}(\partial\Omega)\, d\omega = I_1(k) + \int_{k < |\omega|} \|E \times \nu(\,\cdot\,, \omega)\|^2_{(1)}(\partial\Omega)\, d\omega, \tag{3.44}
$$

where

$$
I_1(k) = 2 \int_0^k \int_{\partial\Omega} \left( |(E \times \nu)(x, \omega)|^2 + |\nabla_{\partial\Omega}(E \times \nu)(x, \omega)|^2 \right) d\Gamma(x)\, d\omega.
$$
We regard ∇E as the vector with 9 components (∂_j E_k), and ∇_{∂Ω}E is the tangential projection of the gradient (the 9-dimensional vector formed by the tangential projections of the gradients of the 3 components of E). We have

$$
|(E \times \nu)(x, \omega)|^2 = (E \times \nu)(x, \omega) \cdot \overline{(E \times \nu)(x, \omega)} = (E \times \nu)(x, \omega) \cdot (E \times \nu)(x, -\omega),
$$

due to (3.40); recall that we assume J_µ and J_ε to be real-valued. Similarly,

$$
|\nabla_{\partial\Omega}(E \times \nu)(x, \omega)|^2 = \nabla_{\partial\Omega}(E \times \nu)(x, \omega) \cdot \nabla_{\partial\Omega}(E \times \nu)(x, -\omega),
$$

and hence, again using (3.40), the integrand in I₁(k) can be extended to an entire analytic function of ω. Hence

$$
I_1(k) = 2 \int_0^k \int_{\partial\Omega} \left( (E \times \nu)(x, \omega) \cdot (E \times \nu)(x, -\omega) + \nabla_{\partial\Omega}(E \times \nu)(x, \omega) \cdot \nabla_{\partial\Omega}(E \times \nu)(x, -\omega) \right) d\Gamma(x)\, d\omega. \tag{3.45}
$$
Since (due to (3.40), (3.41), (3.43), (3.45)) the integrands are entire analytic functions of ω = ω_{1} + iω_{2}, we can analytically extend I_{0}(k) and I_{1}(k) from k > 0 to k ∈ C and, moreover, the integrals in (3.43), (3.45) with respect to ω can be taken over any path joining the points 0 and k = k_{1} + ik_{2} of the complex plane. Thus I_{0}(k) and I_{1}(k) are entire analytic functions of k ∈ C.
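The path-independence just invoked can be seen numerically in a toy case: for an entire integrand, quadrature along two different paths joining 0 and a complex k returns the same value. The function g below is only an illustrative stand-in for the entire integrand of (3.43):

```python
import numpy as np

# Toy illustration: integrals of an entire function over any path joining
# 0 and k agree, which is what allows I_0(k), I_1(k) to be continued from
# k > 0 to complex k.
def g(w):
    return w**2 * np.exp(1j * w)   # an entire function (illustrative)

def path_integral(z0, z1, n=20000):
    # composite midpoint rule along the straight segment from z0 to z1
    s = (np.arange(n) + 0.5) / n
    w = z0 + s * (z1 - z0)
    return np.sum(g(w)) * (z1 - z0) / n

k = 1.0 + 2.0j
direct = path_integral(0.0, k)                              # 0 -> k directly
two_leg = path_integral(0.0, 1.0) + path_integral(1.0, k)   # 0 -> 1 -> k
assert abs(direct - two_leg) < 1e-6
```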

Due to the definitions of the norms of the boundary data,

ε^{2}_{0} = I_{0}(K)/2 and ε^{2}_{1} = I_{1}(K)/2. (3.46)

The truncation level k in (3.42) and (3.44) is important to keep the balance between the known data and the unknown information when k ∈ [K, ∞).

We will need the following elementary estimate for I_{0}(k).

Lemma 3.1. Let supp J_{ε}, supp J_{µ} ⊂ Ω. Then for k = k_{1} + ik_{2},

|I_{0}(k)| ≤ C(1 + |k|^{3})(‖J_{ε}‖^{2}_{(1)}(Ω) + ‖J_{µ}‖^{2}_{(1)}(Ω)) exp(2d√(ε_{0}µ_{0})|k_{2}|), (3.47)

where d = sup |x − y| over x, y ∈ Ω.

Proof. Using the parametrization ω = ks, s ∈ (0, 1), in the line integral and the elementary inequality

|exp(i√(ε_{0}µ_{0})ω|x − y|)| ≤ exp(√(ε_{0}µ_{0})|k_{2}|d),

it is easy to derive from (3.43), (3.41) that

|I_{0}(k)| ≤ C ∫_{0}^{1} |k| ∫_{∂Ω} ( ∫_{Ω} {|k|s(|J_{ε}(y)| + |J_{µ}(y)|) + |curl J_{ε}(y)| + |curl J_{µ}(y)|} exp(√(ε_{0}µ_{0})|k_{2}|d)/|x − y| dy )^{2} dΓ(x) ds
≤ C|k| ∫_{∂Ω} ( ∫_{Ω} {|k|^{2}(|J_{ε}(y)|^{2} + |J_{µ}(y)|^{2}) + |curl J_{ε}(y)|^{2} + |curl J_{µ}(y)|^{2}} dy ) ( ∫_{Ω} exp(2√(ε_{0}µ_{0})|k_{2}|d)/|x − y|^{2} dy ) dΓ(x),

where the Cauchy-Schwarz inequality is used for the integrals with respect to y. Since

∫_{Ω} |x − y|^{−2} dy ≤ C,

we obtain

|I_{0}(k)| ≤ C|k| ( ∫_{Ω} {|k|^{2}(|J_{ε}(y)|^{2} + |J_{µ}(y)|^{2}) + |∇J_{ε}(y)|^{2} + |∇J_{µ}(y)|^{2}} dy ) exp(2√(ε_{0}µ_{0})|k_{2}|d),

which completes the proof of (3.47).
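The weak-singularity bound ∫_{Ω} |x − y|^{−2} dy ≤ C used in the proof can be checked by hand in a model case. A sketch for Ω the unit ball with x on its boundary (an illustrative computation, not part of the proof):

```python
import numpy as np

# Sanity check of the weak-singularity bound.  For Omega the unit ball and
# x ON its boundary, spherical coordinates centered at x give a chord of
# length 2*cos(theta) in the direction at angle theta from the inward
# normal, so
#   \int_Omega |x-y|^{-2} dy
#     = 2*pi * \int_0^{pi/2} (\int_0^{2 cos(theta)} r^{-2} r^2 dr) sin(theta) dtheta
#     = 2*pi * \int_0^{pi/2} 2 cos(theta) sin(theta) dtheta = 2*pi,
# i.e. finite: the r^{-2} singularity is integrable in 3 dimensions.
n = 200000
theta = (np.arange(n) + 0.5) * (np.pi / 2) / n           # midpoint rule on (0, pi/2)
integrand = 2.0 * np.pi * 2.0 * np.cos(theta) * np.sin(theta)
value = np.sum(integrand) * (np.pi / 2) / n
assert abs(value - 2.0 * np.pi) < 1e-6
```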

Lemma 3.2. Let supp J_{ε}, supp J_{µ} ⊂ Ω. Then for k = k_{1} + ik_{2},

|I_{1}(k)| ≤ C(1 + |k|^{3})(‖J_{ε}‖^{2}_{(2)}(Ω) + ‖J_{µ}‖^{2}_{(2)}(Ω)) exp(2√(ε_{0}µ_{0})|k_{2}|d), (3.48)

where d = sup |x − y| over x, y ∈ Ω.

Proof. We extend the vector field ν from ∂Ω onto some neighbourhood V of ∂Ω as a C^{1} field and denote this extension again by ν. Using the parametrization ω = ks, 0 ≤ s ≤ 1, in the integral (3.45) we have

|I_{1}(k)| ≤ 2|k| ∫_{0}^{1} ∫_{∂Ω} ( |(E × ν)(x, ω)||(E × ν)(x, −ω)| + |∇_{∂Ω}(E × ν)(x, ω)||∇_{∂Ω}(E × ν)(x, −ω)| ) dΓ(x) ds
≤ 2|k| ∫_{0}^{1} ∫_{∂Ω} ( |(E × ν)(x, ω)||(E × ν)(x, −ω)| + |∇(E × ν)(x, ω)||∇(E × ν)(x, −ω)| ) dΓ(x) ds (3.49)

when k = k_{1} + ik_{2}.

From (3.40), by the Cauchy-Schwarz inequality, it follows that

|(E × ν)(x, ω)|^{2} ≤ C exp(2√(ε_{0}µ_{0})|k_{2}|d) ( ∫_{Ω} |x − y|^{−2} dy ) ( ∫_{Ω} (|ω|^{2}|J_{ε}|^{2} + |∇J_{µ}|^{2})(y) dy ).

Using

∂|x − y|/∂x_{j} = −∂|x − y|/∂y_{j}

and integrating by parts with respect to y, we obtain

∂/∂x_{j} ∫_{Ω} (exp(iκ|x − y|)/|x − y|) (iωµ_{0}J_{ε}(y) + curl J_{µ}(y)) × ν(x) dy
= ∫_{Ω} (exp(iκ|x − y|)/|x − y|) ∂/∂y_{j}(iωµ_{0}J_{ε}(y) + curl J_{µ}(y)) × ν(x) dy
+ ∫_{Ω} (exp(iκ|x − y|)/|x − y|) (iωµ_{0}J_{ε}(y) + curl J_{µ}(y)) × ∂ν/∂x_{j}(x) dy.

Hence, from (3.40) and using the Cauchy-Schwarz inequality for the integrals with respect to y, we derive that

|∇(E × ν)(x, ω)|^{2} ≤ C exp(2√(ε_{0}µ_{0})|k_{2}|d) ( ∫_{Ω} |x − y|^{−2} dy ) ∫_{Ω} ( (1 + |k|^{2})(|J_{ε}|^{2} + |J_{µ}|^{2} + |∇J_{ε}|^{2} + |∇J_{µ}|^{2}) + |∇^{2}J_{ε}|^{2} + |∇^{2}J_{µ}|^{2} ) dy
≤ C exp(2√(ε_{0}µ_{0})|k_{2}|d)(|k|^{2} + 1)(‖J_{ε}‖^{2}_{(2)}(Ω) + ‖J_{µ}‖^{2}_{(2)}(Ω)),

and combining this with (3.49) completes the proof of (3.48).

The following steps are needed to link the unknown values of I_{0}(k) for k ∈ [K, ∞) to the known values ε_{0}, ε_{1} in (3.46). Let S be the sector {k : −π/4 < arg k < π/4} and let µ(k) be the harmonic measure of the interval [0, K] in S \ [0, K]. Observe that |k_{2}| ≤ k_{1} (and hence |k| ≤ 2k_{1}) when k ∈ S, so from (3.47) we have

|I_{0}(k) exp(−2(d + 1)√(ε_{0}µ_{0})k)|
≤ C(1 + k_{1}^{3})(‖J_{ε}‖^{2}_{(1)}(Ω) + ‖J_{µ}‖^{2}_{(1)}(Ω)) exp(2√(ε_{0}µ_{0})dk_{1}) exp(−2(d + 1)√(ε_{0}µ_{0})k_{1})
≤ C(1 + k_{1}^{3})M_{1}^{2} exp(−2√(ε_{0}µ_{0})k_{1}) ≤ CM_{1}^{2}

with generic constants C. Due to (3.46),

|I_{0}(k) exp(−2(d + 1)√(ε_{0}µ_{0})k)| ≤ 2ε^{2}_{0} when k ∈ [0, K],

so as in [26, page 55, Theorem 2] and [21, page 67], we conclude that when K < k < +∞,

|I_{0}(k) exp(−2(d + 1)√(ε_{0}µ_{0})k)| ≤ Cε^{2µ(k)}_{0} M_{1}^{2}. (3.50)

Using (3.48) instead of (3.47) and carrying out the same computations as above, we obtain that for K < k < +∞,

|I_{1}(k) exp(−2(d + 1)√(ε_{0}µ_{0})k)| ≤ Cε^{2µ(k)}_{1} M_{2}^{2}. (3.51)
We need the following lower bound of the harmonic measure µ(k), given in [9, Lemma 2.2].

Lemma 3.3. If 0 < k < 2^{1/4}K, then

1/2 ≤ µ(k).

On the other hand, if 2^{1/4}K < k, then

(1/π)((k/K)^{4} − 1)^{−1/2} ≤ µ(k). (3.52)
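It may help to see how the lower bound of Lemma 3.3 behaves numerically. The snippet below (illustrative only) evaluates the bound and confirms its (1/π)(K/k)^{2} decay for k ≫ K, which is the rate that ultimately enters the stability estimate:

```python
import numpy as np

# Lower bound for the harmonic measure mu(k) from Lemma 3.3 and its
# large-k behaviour: for k >> K the bound behaves like (1/pi)*(K/k)^2.
def mu_lower(k, K):
    """Lower bound for mu(k) given by Lemma 3.3."""
    if k < 2**0.25 * K:
        return 0.5
    return (1.0 / np.pi) * ((k / K) ** 4 - 1.0) ** (-0.5)

K = 10.0
assert mu_lower(5.0, K) == 0.5                       # below the threshold 2^{1/4} K
b = mu_lower(100.0, K)                               # far above K
assert abs(b - (1.0 / np.pi) * (K / 100.0) ** 2) < 1e-4
# the bound is monotone decreasing past the threshold
ks = np.linspace(2**0.25 * K + 1e-6, 200.0, 1000)
vals = [mu_lower(k, K) for k in ks]
assert all(x >= y for x, y in zip(vals, vals[1:]))
```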

### 4 Time-dependent Maxwell and wave equations

Let e be a solution to the initial value problem for the wave equation:

ε_{0}µ_{0}∂_{t}^{2}e − ∆e = 0 in R^{3} × (0, ∞),
e(x, 0) = −(√(2π)/ε_{0})J_{ε}, ∂_{t}e(x, 0) = (√(2π)/(ε_{0}µ_{0})) curl J_{µ} on R^{3} × {0}, (4.53)

and h be a solution to the initial value problem

ε_{0}µ_{0}∂_{t}^{2}h − ∆h = 0 in R^{3} × (0, ∞),
h(x, 0) = (√(2π)/µ_{0})J_{µ}, ∂_{t}h(x, 0) = (√(2π)/(ε_{0}µ_{0})) curl J_{ε} on R^{3} × {0}. (4.54)

Observe that if div J_{ε} = 0, then div e = 0 for all t > 0.

As shown in [9], (3.39) implies that

E(x, ω) = (1/√(2π)) ∫_{0}^{+∞} e(x, t) exp(iωt) dt, H(x, ω) = (1/√(2π)) ∫_{0}^{+∞} h(x, t) exp(iωt) dt.

Setting e(x, t) = h(x, t) = 0 for t < 0, we can write

E(x, ω) = (1/√(2π)) ∫_{−∞}^{+∞} e(x, t) exp(iωt) dt, H(x, ω) = (1/√(2π)) ∫_{−∞}^{+∞} h(x, t) exp(iωt) dt. (4.55)

To proceed, we need to estimate the remainders in (3.42), (3.44). We first prove the next result, which is similar to [9, Lemma 4.1] and [16, Lemma 2.3].

Lemma 4.1. Let (E, H) be the electric and magnetic fields satisfying (3.37), (1.4) with supp J_{ε}, supp J_{µ} ⊂ Ω. If J_{ε}, J_{µ} ∈ H^{1}(Ω) satisfy (1.10), then

∫_{k<|ω|} ‖E × ν(·, ω) − αH_{τ}(·, ω)‖^{2}_{(0)}(∂Ω) dω ≤ Ck^{−2}(‖J_{ε}‖^{2}_{(1)}(Ω) + ‖J_{µ}‖^{2}_{(1)}(Ω)) ≤ Ck^{−2}M_{1}^{2}. (4.56)

On the other hand, if J_{ε}, J_{µ} ∈ H^{2}(Ω) and (1.11) holds, then

∫_{k<|ω|} ‖E × ν(·, ω)‖^{2}_{(1)}(∂Ω) dω ≤ Ck^{−2}(‖J_{ε}‖^{2}_{(2)}(Ω) + ‖J_{µ}‖^{2}_{(2)}(Ω)) ≤ Ck^{−2}M_{2}^{2}. (4.57)
Proof. We first prove (4.56). By Plancherel's formula, we have

∫_{k<|ω|} ‖E × ν(·, ω) − αH_{τ}(·, ω)‖^{2}_{(0)}(∂Ω) dω
≤ k^{−2} ∫_{k<|ω|} ω^{2}‖E × ν(·, ω) − αH_{τ}(·, ω)‖^{2}_{(0)}(∂Ω) dω
≤ k^{−2} ∫_{R} ω^{2}‖E × ν(·, ω) − αH_{τ}(·, ω)‖^{2}_{(0)}(∂Ω) dω
= k^{−2} ∫_{R} ‖∂_{t}e(·, t) × ν − ∂_{t}(αh_{τ}(·, t))‖^{2}_{(0)}(∂Ω) dt
≤ Ck^{−2} ∫_{R} (‖∂_{t}e(·, t)‖^{2}_{(0)}(∂Ω) + ‖∂_{t}h(·, t)‖^{2}_{(0)}(∂Ω)) dt. (4.58)

Combining Huygens' principle,

e(·, t) = h(·, t) = 0 on Ω when √(ε_{0}µ_{0})d < t, (4.59)

with the estimate

‖e‖^{2}_{(1)}(∂Ω × (0, √(ε_{0}µ_{0})d)) + ‖h‖^{2}_{(1)}(∂Ω × (0, √(ε_{0}µ_{0})d)) ≤ C(‖J_{ε}‖^{2}_{(1)}(Ω) + ‖J_{µ}‖^{2}_{(1)}(Ω)),

which follows from the generalization [19] of the Sakamoto energy estimates [30] to transmission problems (see also [16, (2.31)]), we see that (4.56) is an easy consequence of (4.58).
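The Plancherel step used in the proof of Lemma 4.1, namely that multiplication by ω in frequency corresponds to a time derivative, so that ∫_{R} ω^{2}|f̂(ω)|^{2} dω = ∫_{R} |f′(t)|^{2} dt for the unitary Fourier transform, can be illustrated in a scalar toy case (a Gaussian signal; the discretization below is illustrative, not part of the proof):

```python
import numpy as np

# Toy check of the Plancherel identity behind (4.58): with the unitary
# Fourier transform, \int omega^2 |fhat(omega)|^2 domega = \int |f'(t)|^2 dt.
# For f(t) = exp(-t^2/2) both sides equal sqrt(pi)/2.
t = np.linspace(-20.0, 20.0, 4096, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)

# discrete approximation of the unitary continuous Fourier transform
fhat = np.fft.fft(np.fft.ifftshift(f)) * dt / np.sqrt(2.0 * np.pi)
omega = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)

lhs = np.sum(omega**2 * np.abs(fhat) ** 2) * (omega[1] - omega[0])
rhs = np.sum(np.gradient(f, dt) ** 2) * dt
exact = np.sqrt(np.pi) / 2.0
assert abs(lhs - exact) < 1e-3 and abs(rhs - exact) < 1e-3
```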

To prove (4.57), a similar argument yields

∫_{k<|ω|} ‖E × ν(·, ω)‖^{2}_{(1)}(∂Ω) dω ≤ Ck^{−2} ∫_{R} (‖∂_{t}e(·, t)‖^{2}_{(0)}(∂Ω) + ‖∂_{t}∇e(·, t)‖^{2}_{(0)}(∂Ω)) dt.

By Huygens' principle and the above generalization of the Sakamoto energy estimates applied to ∂_{j}e, we have

‖∂_{t}∇e‖^{2}_{(0)}(∂Ω × (0, √(ε_{0}µ_{0})d)) ≤ C(‖J_{ε}‖^{2}_{(2)}(Ω) + ‖J_{µ}‖^{2}_{(2)}(Ω))

(see [9, (4.20)]), and (4.57) follows easily. The proof is complete.
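Huygens' principle (4.59) can be made concrete in a radial toy model: for the 3D wave equation with speed c = 1/√(ε_{0}µ_{0}), the substitution v = ru reduces radial solutions to the 1D wave equation, and d'Alembert's formula shows the solution at radius r_{0} vanishes identically once t > (r_{0} + a)/c when the data are supported in {r < a}. A sketch under these assumptions (zero initial velocity; all parameters illustrative):

```python
import numpy as np

# Strong Huygens' principle in 3D, in a radial toy case.  For
# eps0*mu0*d_t^2 u = Delta u with radial data f supported in {r < a} and
# zero initial velocity, v = r*u solves the 1D wave equation on r > 0 with
# v(0, t) = 0, so d'Alembert with odd extension gives
#   u(r, t) = (F(r + c*t) + F(r - c*t)) / (2*r),  F(s) = s*f(s) (odd).
# Hence u(r0, t) = 0 once t > (r0 + a)/c, mirroring (4.59).
c = 1.0                      # stands in for 1/sqrt(eps0*mu0)
a = 1.0                      # radius of the support of the data

def f(r):
    r = np.abs(r)
    return np.where(r < a, (1.0 - r**2 / a**2) ** 2, 0.0)   # compactly supported bump

def F(s):
    return s * f(s)          # odd extension of r*f(r)

def u(r0, t):
    return (F(r0 + c * t) + F(r0 - c * t)) / (2.0 * r0)

r0 = 3.0
assert abs(u(r0, 0.0)) < 1e-12                 # the data vanish at r0 > a
assert abs(u(r0, 2.5 / c)) > 1e-3              # the wave front passes r0 ...
assert u(r0, (r0 + a) / c + 0.01) == 0.0       # ... and leaves nothing behind
```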

Now we consider the initial value problem for the time-dependent Maxwell equations

µ_{0}∂_{t}h^{∗} + curl e^{∗} = 0 in R^{3} × (0, ∞),
ε_{0}∂_{t}e^{∗} − curl h^{∗} = 0 in R^{3} × (0, ∞),
h^{∗}(·, 0) = (√(2π)/µ_{0})J_{µ}, e^{∗}(·, 0) = −(√(2π)/ε_{0})J_{ε} on R^{3} × {0}. (4.60)

Since div J_{ε} = 0 = div J_{µ}, by the uniqueness in the initial value problem, we have div e^{∗}(·, t) = 0 = div h^{∗}(·, t). So from (4.60) we obtain

ε_{0}µ_{0}∂^{2}_{t}e^{∗} = µ_{0} curl ∂_{t}h^{∗} = −curl curl e^{∗} = ∆e^{∗}

and similarly

ε_{0}µ_{0}∂_{t}^{2}h^{∗} = ∆h^{∗}.

Also it is easy to see that e^{∗}, h^{∗} satisfy the same initial conditions (4.53), (4.54). Using the uniqueness in the initial value problem for the wave equations, we derive that

e = e^{∗}, h = h^{∗}. (4.61)

From (4.61), the Maxwell system (4.60), and (4.59), we have

µ_{0}∂_{t}h + curl e = 0 in Ω × (0, T),
ε_{0}∂_{t}e − curl h = 0 in Ω × (0, T),
h(·, T) = 0, e(·, T) = 0 on Ω,

where T = √(ε_{0}µ_{0})d. For this backward initial value problem, using the estimates of Proposition 1.1 in [10] (for e × ν − αh_{τ}) or Corollary 1.4 in [13] (for e × ν), we can obtain the following key energy bounds.

Lemma 4.2. Let (e, h) be a solution to (4.60) with J_{ε}, J_{µ} ∈ [L^{2}(Ω)]^{3} and supp J_{ε}, supp J_{µ} ⊂ Ω. Then

‖J_{ε}‖^{2}_{(0)}(Ω) + ‖J_{µ}‖^{2}_{(0)}(Ω) ≤ C‖e × ν − αh_{τ}‖^{2}_{(0)}(∂Ω × (0, T)) (4.62)

and

‖J_{ε}‖^{2}_{(0)}(Ω) + ‖J_{µ}‖^{2}_{(0)}(Ω) ≤ C‖e × ν‖^{2}_{(1)}(∂Ω × (0, T)). (4.63)

Now we are ready to prove the increasing stability results (1.12), (1.13) of Theorem 1.2.