A New Operator Splitting Method for the Euler Elastica Model for Image Smoothing^{\ast }

Liang-Jian Deng^{\dagger }, Roland Glowinski^{\ddagger }, and Xue-Cheng Tai^{\S }

Abstract. Euler's elastica model has a wide range of applications in image processing and computer vision. However, the nonconvexity, the nonsmoothness, and the nonlinearity of the associated energy functional make its minimization a challenging task, further complicated by the presence of high order derivatives in the model. In this article we propose a new operator-splitting algorithm to minimize the Euler elastica functional. This algorithm is obtained by applying an operator-splitting based time discretization scheme to an initial value problem (dynamical flow) associated with the optimality system (a system of multivalued equations). The subproblems associated with the three fractional steps of the splitting scheme have either closed form solutions or can be handled by fast dedicated solvers. Compared with earlier approaches relying on ADMM (alternating direction method of multipliers), the new method has, essentially, only the time discretization step as a free parameter to choose, resulting in a very robust and stable algorithm. The simplicity of the subproblems and the modularity of the method make the algorithm quite efficient. Applications to the numerical solution of smoothing test problems demonstrate the efficiency and robustness of the proposed methodology.

Key words. Euler's elastica energy, operator splitting, image smoothing, space projection

AMS subject classifications. 68U10, 94A08

DOI. 10.1137/18M1226361

1. Introduction. In imaging applications, the generalized Euler elastica energy is defined by

(1.1) E(v) = \int_{\Omega} \left( a + b \left| \nabla \cdot \frac{\nabla v}{|\nabla v|} \right|^{2} \right) |\nabla v| \, dx,

where in (1.1), \Omega is a bounded domain of R^{2} (a rectangle, typically), a and b are two positive parameters, v is a function of two variables belonging to an appropriate functional space containing (in principle) the underlying image, and dx = dx_{1} dx_{2}.
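To make (1.1) concrete on a discrete grid, here is a small NumPy sketch (our own illustration, not part of the paper) evaluating a finite-difference approximation of E(v); the small constant eps guarding the points where \nabla v vanishes is our assumption, the paper instead handles this degeneracy through the field \bfitmu introduced in section 2.

```python
import numpy as np

def elastica_energy(v, a=1.0, b=1.0, eps=1e-8):
    """Finite-difference approximation of the energy E(v) in (1.1).

    eps regularizes |grad v| where it vanishes (an assumption of this
    sketch; the paper replaces grad v / |grad v| by a field mu instead)."""
    vx = np.gradient(v, axis=1)            # central/one-sided differences
    vy = np.gradient(v, axis=0)
    norm = np.sqrt(vx**2 + vy**2 + eps**2)  # regularized |grad v|
    nx, ny = vx / norm, vy / norm           # unit normal field
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)  # div(n)
    return np.sum((a + b * kappa**2) * norm)
```

For a linear ramp (constant gradient, zero curvature) the energy reduces to a times the total variation, as expected from (1.1).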

\ast Received by the editors November 13, 2018; accepted for publication (in revised form) April 16, 2019; published electronically June 25, 2019.

http://www.siam.org/journals/siims/12-2/M122636.html

Funding: The work of the first author was supported by National Natural Science Foundation of China grants 61702083, 61772003, and 61876203. The work of the second author was supported by the Hong Kong Baptist University and by the Kennedy Wong Foundation. The work of the third author was supported by Hong Kong Baptist University startup grants RG(R)-RC/17-18/02-MATH and FRG2/17-18/033.

\dagger School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China (liangjian.deng@uestc.edu.cn), and Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong.

\ddagger Department of Mathematics, University of Houston, Houston, TX 77204 (roland@math.uh.edu), and Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong.

\S Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (xuechengtai@hkbu.edu.hk).

Downloaded 06/26/19 to 113.54.193.205. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

The Euler elastica energy defined by (1.1) has found applications in image processing, such as denoising [48, 57, 36], segmentation [63, 20, 58, 2], inpainting [45, 6, 48, 55], zooming [48], illusory contour [41, 37, 47], and segmentation with depth [42, 24, 62]. In [45], Shen, Kang, and Chan discussed the mathematical foundation of the Euler elastica model and its mathematical properties, motivated by applications to image inpainting. In addition, the authors of [45] also discussed a numerical PDE method for solving the associated nonlinear problem. In [48], Tai, Hahn, and Chung proposed an augmented Lagrangian method (ALM) to handle the Euler elastica energy and applied the resulting algorithm to the solution of imaging problems in denoising, inpainting, and zooming (we refer to this method as the THC method). In [21], Duan et al. proposed a fast augmented Lagrangian method (FALM) to solve the Euler elastica problem for image denoising, inpainting, and zooming based on the framework of the THC method. More recently, in [57] Zhang et al. proposed a linearized augmented Lagrangian method (LALM) to simplify the THC method and applied it to the solution of image denoising problems. In [55], two numerical algorithms were proposed to solve inpainting related problems involving the Euler elastica energy (1.1): The first algorithm is an improved variant of the ALM based algorithm discussed in [48]. The second algorithm is obtained by applying a split-Bregman method to a linearized elastica model proposed in [1].

Following an idea from [40], Masnou and Morel proposed in [41] a novel method to handle the elastica energy functional and applied it to the solution of illusory contour problems. In [37], Kang, Zhu, and Shen used the Euler elastica energy as an effective tool to fuse the scattered corner bases. In [47], Tai and Duan combined level set and binary representations of interfaces to solve, via the Euler elastica model, inpainting, segmentation, and illusory contour problems.

In [5], Bredies, Pock, and Wirth suggested using as smoothing functional a convex, lower semicontinuous approximation of the Euler elastica energy and applied this approximation to the solution of some imaging problems; combined with a tailored discretization of measures, the functional introduced in [5] has produced promising results. Taking image restoration as an illustration: to restore an image via Euler's elastica energy, one needs to solve the following minimization problem:

(1.2) \min_{v} \left[ \int_{\Omega} \left( a + b \left| \nabla \cdot \frac{\nabla v}{|\nabla v|} \right|^{2} \right) |\nabla v| \, dx + \frac{1}{2} \int_{\Omega} |f - v|^{2} dx \right],

where v is as in (1.1), and f is the image we are trying to denoise. The first term in the functional in (1.2) is a regularizing one; it captures the geometrical features of the image. The second term is the fidelity one; it forces the recovered image to stay close to f.

The main goal of this article is to develop a robust, stable, and ``almost'' parameter-free method to solve problem (1.2) and close variants of it.

The nonconvexity, the nonsmoothness, and the high order of the derivatives it contains make the fast and robust solution of problem (1.2) a very challenging task. So far, only a few methods exist to solve problems such as (1.2); let us mention among them two graph-cut based methods [23, 1], an integer linear programming (ILP) method [44], a method based on a convex approximation of the Euler elastica energy [5], and the THC method [48]. The THC method in [48] is a particular realization of the alternating direction method of multipliers (ADMM), a well-known method from mathematical programming (see, e.g., [17] and the references


therein for more details). ADMM is a primal-dual method, closely related to the Douglas--Rachford alternating direction method (a well-known operator-splitting method). Following the THC method [48], several extensions were proposed for solving, via the Euler elastica energy functional, a large variety of imaging problems (see [63, 21, 54, 2]). Readers can find further curvature-based methods in [7, 35, 25, 3, 14]. Primal-dual methods have also been applied to derive fast algorithms for the total variation (TV) imaging model, introduced in [43] by Rudin, Osher, and Fatemi (ROF). For instance, Droske and Bertozzi in [19] combined regularization techniques with active contour models to segment polygonal objects in aerial images. This method avoids losing features by using TV-based inverse scale-space techniques on the input data. See [15, 51, 10, 60, 56, 26, 27, 49, 52, 46, 53, 4, 59, 12, 9, 11, 38] and the references therein for more details.

In this article, we propose a novel and (relatively) simple operator-splitting method for the solution of problem (1.2). The principle of the method is very simple: (i) We introduce the vector-valued functions q (= \nabla v) and \bfitmu (= q/|q|). (ii) Using appropriate indicator functionals, we reformulate problem (1.2) as an unconstrained minimization problem with respect to the triple (v, q, \bfitmu). (iii) We derive an optimality system and associate with it an initial-value problem (gradient flow). (iv) We use the Lie scheme to time-discretize the above initial value problem and capture its steady state solutions. The subproblems associated with the Lie scheme fractional steps have either closed form solutions or can be solved by fast dedicated algorithms (such as FFT). Numerous applications to image smoothing show the efficiency of the proposed method.

When compared to the THC method in [48], the method introduced in this article has the following advantages:

\bullet The time-discretization step is, essentially, the only parameter one has to choose, while the THC method requires the balancing of three augmentation parameters.

\bullet The results produced by the new method are less sensitive to parameter choice than those obtained by the THC method.

\bullet For the same stopping criterion tolerance, the new method needs fewer iterations than its THC counterpart. Moreover, the new method has a lower cost per iteration than the THC method.

This article is structured as follows: The novel method is described in sections 2 and 3, while its finite difference implementation is discussed in section 4. Section 5 is dedicated to image smoothing, with some experiments designed to show the superiority of the new method. Some conclusions are drawn in section 6. Finally, an appendix dedicated to the Lie and Marchuk--Yanenko operator-splitting schemes is added. Indeed, we feel justified adding this appendix since these two schemes are highly popular in computational fluid dynamics but much less so in imaging science.

To conclude this section, we would like to mention that various derivations in the following sections are largely mathematically formal (that is, sometimes lacking rigorous mathematical foundations). This follows, in particular, from the fact that, to the best of our knowledge, the proper functional framework in which to formulate problem (1.2) has not yet been identified. Accordingly, existence of minimizers and convergence of the proposed schemes are tasks for further studies.


2. A reformulation of problem (1.2). From section 1, Euler's elastica problem reads as

(2.1) \min_{v \in \scrV} \left[ \int_{\Omega} \left( a + b \left| \nabla \cdot \frac{\nabla v}{|\nabla v|} \right|^{2} \right) |\nabla v| \, dx + \frac{1}{2} \int_{\Omega} |f - v|^{2} dx \right],

where \scrV is a functional space that needs to be chosen properly. As already mentioned in section 1, formulation (2.1) is largely formal since we do not know much about the space \scrV, which has to be, obviously, a subspace of L^{2}(\Omega). At any rate, the discrete problems largely ignore these functional analysis considerations, and we will say no more about the proper choice of \scrV. An important issue with formulation (2.1) is that it makes no sense on those parts of \Omega where \nabla v vanishes (implying that (2.1) is a typical formal mathematical formulation). An obvious (and once popular) way to overcome this difficulty is to replace |\nabla v| by \sqrt{\epsilon^{2} + |\nabla v|^{2}}, \epsilon being a small parameter. A more sophisticated way, which we borrow from viscoplasticity (see, e.g., [22, 17, 34]), is to replace \frac{\nabla v}{|\nabla v|} by a vector-valued function \bfitmu verifying

(2.2) \bfitmu \cdot \nabla v = |\nabla v|, \quad |\bfitmu| \leq 1, \quad \text{with } |\bfitmu| = \sqrt{\mu_{1}^{2} + \mu_{2}^{2}} \ \forall \bfitmu = (\mu_{1}, \mu_{2})

(a device already used in some imaging works; see, e.g., [3]), and then to replace problem (2.1) by

(2.3) \min_{(v, \bfitmu) \in \scrW} \left[ \int_{\Omega} \left( a + b |\nabla \cdot \bfitmu|^{2} \right) |\nabla v| \, dx + \frac{1}{2} \int_{\Omega} |f - v|^{2} dx \right],

where (formally)

\scrW = \{ (v, \bfitmu) \in \scrH^{1}(\Omega) \times \scrH(\Omega, \mathrm{div}), \ \bfitmu \cdot \nabla v = |\nabla v|, \ |\bfitmu| \leq 1 \},

with

\scrH(\Omega, \mathrm{div}) = \{ \bfitmu \in (\scrL^{2}(\Omega))^{2}, \ \nabla \cdot \bfitmu \in \scrL^{2}(\Omega) \}.

A simple, but computationally important, result is provided by the following (semiformal) proposition.

Proposition 1. Suppose that (u, \bfitlambda) is a solution of problem (2.3). Then u and f have the same average grey value, that is,

(2.4) \int_{\Omega} u \, dx = \int_{\Omega} f \, dx.

Proof. Consider the pair (u + c, \bfitlambda), where c \in R. Since \nabla(u + c) = \nabla u, the pair (u + c, \bfitlambda) also belongs to \scrW. Let us denote by J_{1} (resp., J_{2}) the left (resp., right) integral in (2.3). Since \nabla(u + c) = \nabla u, we have J_{1}(u + c, \bfitlambda) = J_{1}(u, \bfitlambda). On the other hand,

(2.5) J_{2}(u + c, \bfitlambda) = \frac{1}{2} \int_{\Omega} |u + c - f|^{2} dx = J_{2}(u, \bfitlambda) + c \int_{\Omega} (u - f) \, dx + |\Omega| \frac{c^{2}}{2},

with |\Omega| = measure of \Omega. The function u being fixed, the quadratic function of c in the right-hand side of (2.5) takes its minimal value for c = c_{m} = |\Omega|^{-1} \int_{\Omega} (f - u) \, dx. Suppose that \int_{\Omega} (f - u) \, dx \neq 0; then

J_{2}(u + c_{m}, \bfitlambda) < J_{2}(u, \bfitlambda),

implying that (u, \bfitlambda) is not a minimizer of J_{1} + J_{2}. We have, thus, necessarily, \int_{\Omega} u \, dx = \int_{\Omega} f \, dx.
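The argument can be checked numerically in the discrete setting: shifting any candidate u by the constant c_{m} (the mean of f - u) can only decrease the fidelity term, after which u and f share the same mean, as in (2.4). A small NumPy illustration (our own, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16))   # synthetic "image"
u = rng.standard_normal((16, 16))   # arbitrary candidate

def J2(w):
    # discrete analogue of the fidelity term (1/2) * integral |w - f|^2
    return 0.5 * np.sum((w - f) ** 2)

c_m = np.mean(f - u)                 # optimal constant shift c_m
assert J2(u + c_m) <= J2(u)          # shifting can only decrease J2
# after the shift, u + c_m and f have the same mean, i.e., (2.4) holds
assert abs(np.mean(u + c_m) - np.mean(f)) < 1e-12
```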

Remark 2.1. Since existence of a minimizer of problem (2.3) has not been established, the proof above is largely formal from a mathematical point of view. However, if one considers the discretized problems in finite dimensions, existence is not an issue and the proof is rigorous. This remark also applies to similar existence issues encountered later.

Remark 2.2. It is a common practice to assume periodicity when working with image processing problems. Proposition 1 still holds if one considers the minimization of the elastica functional in a space of sufficiently smooth periodic functions (functions defined over a two-dimensional (2D) torus). In section 3.8 we will return to the case where \scrV is a space of smooth functions periodic in the Ox_{1} and Ox_{2} directions.

Let us define the sets \Sigma_{f} and S by

\Sigma_{f} = \left\{ q \in (\scrL^{2}(\Omega))^{2}, \ \exists v \in \scrH^{1}(\Omega) \text{ such that } q = \nabla v \text{ and } \int_{\Omega} (v - f) \, dx = 0 \right\}

and

S = \{ (q, \bfitmu) \in (\scrL^{2}(\Omega))^{2} \times (\scrL^{2}(\Omega))^{2}, \ q \cdot \bfitmu = |q|, \ |\bfitmu| \leq 1 \}.
There is then (formal) equivalence between problem (2.3) and

(2.6) \min_{(q, \bfitmu) \in (\scrL^{2}(\Omega))^{2} \times \scrH(\Omega, \mathrm{div})} \left[ \int_{\Omega} \left( a + b |\nabla \cdot \bfitmu|^{2} \right) |q| \, dx + \frac{1}{2} \int_{\Omega} |v_{q} - f|^{2} dx + I_{\Sigma_{f}}(q) + I_{S}(q, \bfitmu) \right],

where I_{\Sigma_{f}} and I_{S} are the indicator functionals defined by

(2.7) I_{\Sigma_{f}}(q) = \begin{cases} 0 & \text{if } q \in \Sigma_{f}, \\ +\infty & \text{if } q \in (\scrL^{2}(\Omega))^{2} \setminus \Sigma_{f}, \end{cases}

and

(2.8) I_{S}(q, \bfitmu) = \begin{cases} 0 & \text{if } (q, \bfitmu) \in S, \\ +\infty & \text{if } (q, \bfitmu) \in (\scrL^{2}(\Omega))^{2} \times (\scrL^{2}(\Omega))^{2} \setminus S, \end{cases}
v_{q} being the unique solution of the following problem:

(2.9) \begin{cases} \nabla^{2} v_{q} = \nabla \cdot q & \text{in } \Omega, \\ (\nabla v_{q} - q) \cdot n = 0 & \text{on } \partial\Omega, \\ \int_{\Omega} v_{q} \, dx = \int_{\Omega} f \, dx, \end{cases}

where in (the Neumann) problem (2.9), n denotes the outward unit normal vector on the boundary \partial\Omega of \Omega. If q \in (\scrL^{2}(\Omega))^{2}, then problem (2.9) has a unique solution in \scrH^{1}(\Omega).
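Under the periodic boundary conditions adopted later (sections 4 and 5), (2.9) becomes a Poisson problem on the torus and can be solved spectrally: divide the Fourier transform of \nabla \cdot q by the Laplacian symbol and fix the zero mode so that v_{q} has the mean of f. A minimal NumPy sketch (our own illustration; the fully spectral discretization, where divergence and Laplacian use matching Fourier symbols, is an implementation choice, not the paper's scheme):

```python
import numpy as np

def solve_vq(q1, q2, f):
    """Solve  lap v = div q  with periodic BCs and mean(v) = mean(f),
    a periodic analogue of problem (2.9), entirely in Fourier space."""
    n1, n2 = q1.shape
    k1 = 2j * np.pi * np.fft.fftfreq(n1)[:, None]   # symbol along axis 0
    k2 = 2j * np.pi * np.fft.fftfreq(n2)[None, :]   # symbol along axis 1
    div_hat = k1 * np.fft.fft2(q1) + k2 * np.fft.fft2(q2)
    lap = k1**2 + k2**2
    lap[0, 0] = 1.0                    # avoid dividing the zero mode by 0
    v_hat = div_hat / lap
    v_hat[0, 0] = np.fft.fft2(f)[0, 0]  # impose the mean value of f
    return np.fft.ifft2(v_hat).real
```

If q is itself the (spectral) gradient of some periodic v, the solver recovers v up to the imposed mean, and the cost is O(N log N) per solve.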

3. An operator-splitting method for the solution of problem (2.6).

3.1. Optimality conditions and associated dynamical flow problem. Let us denote by J_{1} and J_{2} the functionals defined by

(3.1) J_{1}(q, \bfitmu) = \int_{\Omega} \left( a + b |\nabla \cdot \bfitmu|^{2} \right) |q| \, dx, \quad J_{2}(q) = \frac{1}{2} \int_{\Omega} |v_{q} - f|^{2} dx,

and suppose that (p, \bfitlambda) is a minimizer of the functional in (2.6). We have that u = v_{p} (see the definition of v_{q} given in (2.9)) is a solution to problem (2.1), and the following system of (necessary) optimality conditions holds (formally, at least):

(3.2) \begin{cases} \partial_{q} J_{1}(p, \bfitlambda) + DJ_{2}(p) + \partial I_{\Sigma_{f}}(p) + \partial_{q} I_{S}(p, \bfitlambda) \ni 0, \\ D_{\bfitmu} J_{1}(p, \bfitlambda) + \partial_{\bfitmu} I_{S}(p, \bfitlambda) \ni 0, \end{cases}

where the Ds (resp., the \partial s) denote classical differentials (resp., generalized differentials (subdifferentials in the case of nonsmooth convex functionals, I_{\Sigma_{f}} being a typical one)). We associate with (3.2) the following initial value problem (dynamical flow):

(3.3) \begin{cases} \dfrac{\partial p}{\partial t} + \partial_{q} J_{1}(p, \bfitlambda) + DJ_{2}(p) + \partial I_{\Sigma_{f}}(p) + \partial_{q} I_{S}(p, \bfitlambda) \ni 0 & \text{in } \Omega \times (0, +\infty), \\ \gamma \dfrac{\partial \bfitlambda}{\partial t} + D_{\bfitmu} J_{1}(p, \bfitlambda) + \partial_{\bfitmu} I_{S}(p, \bfitlambda) \ni 0 & \text{in } \Omega \times (0, +\infty), \\ (p(0), \bfitlambda(0)) = (p_{0}, \bfitlambda_{0}), \end{cases}

with \gamma > 0 (the choice of \gamma will be discussed in section 3.5).

Let us denote the pair (p, \bfitlambda) by \bfitX. Problem (3.3) is clearly of the following form:

\begin{cases} \dfrac{\partial \bfitX}{\partial t} + \displaystyle\sum_{j=1}^{4} A_{j}(\bfitX) \ni 0 \ \text{in } (0, +\infty), \\ \bfitX(0) = \bfitX_{0} \ (= (p_{0}, \bfitlambda_{0})), \end{cases}

implying (see Appendix A) that the initial value problem (3.3) is a natural candidate for a solution method of the operator-splitting type, the Lie and Marchuk--Yanenko schemes in particular. The idea is to capture the steady state solutions of (3.3) (necessarily solutions of (3.2)) by integrating (approximately) (3.3) over the time interval (0, +\infty). This approach will be discussed in section 3.2.
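The principle of capturing a steady state by Lie splitting can be illustrated on a toy problem (our own example, unrelated to elastica): for the gradient flow of F(x) = (x - 1)^{2} + (x - 3)^{2}, whose minimizer is x = 2, each Lie fractional step integrates one operator's flow exactly over the time step. The iteration converges to the steady state up to an O(\tau) splitting bias:

```python
import math

# Toy gradient flow: dx/dt + 2(x-1) + 2(x-3) = 0, steady state x = 2.
# Lie scheme: over each step of length tau, integrate each operator's
# flow exactly, one after the other.

def lie_step(x, tau):
    r = math.exp(-2.0 * tau)      # exact flow of dx/dt + 2(x - c) = 0
    x = 1.0 + (x - 1.0) * r       # fractional step for the (x-1)^2 term
    x = 3.0 + (x - 3.0) * r       # fractional step for the (x-3)^2 term
    return x

def run(tau, n_steps=2000, x0=10.0):
    x = x0
    for _ in range(n_steps):
        x = lie_step(x, tau)
    return x

# The captured steady state equals 2 up to an O(tau) splitting error:
for tau in (0.1, 0.01):
    assert abs(run(tau) - 2.0) < 2 * tau
```

Here the fixed point of one Lie sweep can be computed in closed form as (3 + e^{-2\tau})/(1 + e^{-2\tau}) = 2 + O(\tau), which makes the first-order bias of the scheme explicit.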

Remark 3.1. For the initial value in (3.3), we advocate taking (p_{0}, \bfitlambda_{0}) \in S. Concerning subdifferentials, it is known that the subdifferential of the sum of two functionals may not equal the sum of the subdifferentials of each functional. Due to the complexity of the problem, we will not dwell upon this issue here.

3.2. An operator-splitting method for the solution of the dynamical flow problem (3.3). Following [32] (and Appendix A; see also [33] for applications of operator splitting to imaging), we will use a Lie scheme to time-discretize problem (3.3). Let \tau (> 0) be a time discretization step; we denote (n + \alpha)\tau by t^{n+\alpha}. Among the many possible splitting schemes of the Lie type one can employ to solve problem (3.3), we advocate the following:

(3.4) (p^{0}, \bfitlambda^{0}) = (p_{0}, \bfitlambda_{0}).

1st fractional step. Solve

(3.5) \begin{cases} \dfrac{\partial p}{\partial t} + \partial_{q} J_{1}(p, \bfitlambda) \ni 0, \\ \gamma \dfrac{\partial \bfitlambda}{\partial t} + D_{\bfitmu} J_{1}(p, \bfitlambda) = 0 \end{cases} \text{ in } \Omega \times (t^{n}, t^{n+1}), \quad (p(t^{n}), \bfitlambda(t^{n})) = (p^{n}, \bfitlambda^{n}),

and set

(3.6) (p^{n+1/3}, \bfitlambda^{n+1/3}) = (p(t^{n+1}), \bfitlambda(t^{n+1})).

2nd fractional step. Solve

(3.7) \begin{cases} \dfrac{\partial p}{\partial t} + \partial_{q} I_{S}(p, \bfitlambda) \ni 0, \\ \gamma \dfrac{\partial \bfitlambda}{\partial t} + \partial_{\bfitmu} I_{S}(p, \bfitlambda) \ni 0 \end{cases} \text{ in } \Omega \times (t^{n}, t^{n+1}), \quad (p(t^{n}), \bfitlambda(t^{n})) = (p^{n+1/3}, \bfitlambda^{n+1/3}),

and set

(3.8) (p^{n+2/3}, \bfitlambda^{n+2/3}) = (p(t^{n+1}), \bfitlambda(t^{n+1})).

3rd fractional step. Solve

(3.9) \begin{cases} \dfrac{\partial p}{\partial t} + DJ_{2}(p) + \partial I_{\Sigma_{f}}(p) \ni 0, \\ \gamma \dfrac{\partial \bfitlambda}{\partial t} = 0 \end{cases} \text{ in } \Omega \times (t^{n}, t^{n+1}), \quad (p(t^{n}), \bfitlambda(t^{n})) = (p^{n+2/3}, \bfitlambda^{n+2/3}),

and set

(3.10) (p^{n+1}, \bfitlambda^{n+1}) = (p(t^{n+1}), \bfitlambda^{n+2/3}).

The Lie scheme (3.5)--(3.10) is only semidiscrete since we have not yet specified how to time-discretize the initial value problems (3.5), (3.7), and (3.9). In order to do so, we suggest using the following time discretization scheme (of the Marchuk--Yanenko type):

(3.11) (p^{0}, \bfitlambda^{0}) = (p_{0}, \bfitlambda_{0}).

Then, for n \geq 0, (p^{n}, \bfitlambda^{n}) \rightarrow (p^{n+1/3}, \bfitlambda^{n+1/3}) \rightarrow (p^{n+2/3}, \bfitlambda^{n+2/3}) \rightarrow (p^{n+1}, \bfitlambda^{n+1}) as follows:

(3.12) \begin{cases} \dfrac{p^{n+1/3} - p^{n}}{\tau} + \partial_{q} J_{1}(p^{n+1/3}, \bfitlambda^{n}) \ni 0, \\ \gamma \dfrac{\bfitlambda^{n+1/3} - \bfitlambda^{n}}{\tau} + D_{\bfitmu} J_{1}(p^{n+1/3}, \bfitlambda^{n+1/3}) = 0 \end{cases} \text{ in } \Omega \Rightarrow (p^{n+1/3}, \bfitlambda^{n+1/3}),

(3.13) \begin{cases} \dfrac{p^{n+2/3} - p^{n+1/3}}{\tau} + \partial_{q} I_{S}(p^{n+2/3}, \bfitlambda^{n+2/3}) \ni 0, \\ \gamma \dfrac{\bfitlambda^{n+2/3} - \bfitlambda^{n+1/3}}{\tau} + \partial_{\bfitmu} I_{S}(p^{n+2/3}, \bfitlambda^{n+2/3}) \ni 0 \end{cases} \text{ in } \Omega \Rightarrow (p^{n+2/3}, \bfitlambda^{n+2/3}),

(3.14) \begin{cases} \dfrac{p^{n+1} - p^{n+2/3}}{\tau} + DJ_{2}(p^{n+1}) + \partial I_{\Sigma_{f}}(p^{n+1}) \ni 0, \\ \gamma \dfrac{\bfitlambda^{n+1} - \bfitlambda^{n+2/3}}{\tau} = 0 \end{cases} \text{ in } \Omega \Rightarrow (p^{n+1}, \bfitlambda^{n+1}).

In the following subsections we are going to discuss the solution of the various subproblems encountered when applying scheme (3.11)--(3.14) to the solution of problem (2.6).
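As a toy illustration of the Marchuk--Yanenko structure (our own example, unrelated to the elastica problem): for the gradient flow of F(x) = (x - 1)^{2} + (x - 3)^{2}, whose minimizer is x = 2, each fractional step performs one backward Euler step for one operator, exactly as in (3.12)--(3.14); the scheme again reaches the steady state up to an O(\tau) bias:

```python
# Marchuk-Yanenko scheme on dx/dt + 2(x-1) + 2(x-3) = 0 (steady state 2):
# one implicit (backward Euler) step per operator, per time step.

def my_step(x, tau):
    # solve (x1 - x)/tau + 2(x1 - 1) = 0 for x1 (operator 2(x-1))
    x = (x + 2.0 * tau * 1.0) / (1.0 + 2.0 * tau)
    # solve (x2 - x1)/tau + 2(x2 - 3) = 0 for x2 (operator 2(x-3))
    x = (x + 2.0 * tau * 3.0) / (1.0 + 2.0 * tau)
    return x

x, tau = 10.0, 0.01
for _ in range(2000):
    x = my_step(x, tau)
assert abs(x - 2.0) < 2 * tau   # steady state 2 up to an O(tau) bias
```

The fixed point of one sweep is (2 + 3\tau)/(1 + \tau) = 2 + \tau/(1 + \tau), so the bias vanishes linearly as \tau \rightarrow 0, which is the behavior one expects from this first-order scheme.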

Remark 3.2. The nonconvexity of problem (2.1) implies the nonmonotonicity of some of the operators encountered in the (formal) necessary conditions (3.2) and in the associated initial value problem (3.3) (nondifferentiability further complicates the situation). Due to these difficulties, the convergence of algorithms (3.4)--(3.10) and (3.11)--(3.14) (and of their finite dimensional analogues) as n \rightarrow +\infty is a question that cannot be answered at this moment. These difficult mathematical issues are beyond the scope of this article.

3.3. Computing p^{n+1/3} from (3.12). The multivalued equation verified by p^{n+1/3} in (3.12) is nothing but the (formal) Euler--Lagrange equation of the following minimization problem:

(3.15) p^{n+1/3} = \arg\min_{q \in (\scrL^{2}(\Omega))^{2}} \left[ \frac{1}{2} \int_{\Omega} |q - p^{n}|^{2} dx + \tau \int_{\Omega} \left( a + b |\nabla \cdot \bfitlambda^{n}|^{2} \right) |q| \, dx \right].

Problems such as (3.15) are very common in image processing and viscoplasticity. The closed form solution of problem (3.15) is given by (see [28, 31, 18, 50, 48])

(3.16) p^{n+1/3} = \max \left\{ 0, \ 1 - \frac{c}{|p^{n}|} \right\} p^{n},

where c = \tau a + \tau b |\nabla \cdot \bfitlambda^{n}|^{2}.
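The shrinkage formula (3.16) acts pointwise and vectorizes directly. A NumPy sketch (our own; the eps guard against division by |p^{n}| = 0 is an implementation detail, harmless since the max() then returns 0 anyway):

```python
import numpy as np

def shrink_p(p, div_lam, a, b, tau, eps=1e-12):
    """Pointwise closed-form solution (3.16) of subproblem (3.15).

    p       : array of shape (..., 2), the field p^n
    div_lam : array of shape (...), the divergence of lambda^n"""
    c = tau * a + tau * b * div_lam**2          # threshold in (3.16)
    norm = np.sqrt(np.sum(p**2, axis=-1))       # |p^n| at each point
    scale = np.maximum(0.0, 1.0 - c / np.maximum(norm, eps))
    return scale[..., None] * p                 # shrink toward 0
```

This is the classical vectorial shrinkage (soft-thresholding) operator; vectors with |p^{n}| \leq c are set to zero, the others are shortened by c.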

3.4. Computing \bfitlambda^{n+1/3} from (3.12). The equation verified by \bfitlambda^{n+1/3} in (3.12) is the (formal) Euler--Lagrange equation of the following minimization problem:

(3.17) \bfitlambda^{n+1/3} = \arg\min_{\bfitmu \in \scrH(\Omega, \mathrm{div})} \left[ \gamma \int_{\Omega} \frac{|\bfitmu - \bfitlambda^{n}|^{2}}{2\tau} dx + J_{1}(p^{n+1/3}, \bfitmu) \right],

where \bfitlambda^{n} and p^{n+1/3} are known.

From the Euler--Lagrange equation of (3.17), we get that \bfitlambda^{n+1/3} is the solution of the following linear elliptic system with variable coefficients:

(3.18) \begin{cases} \gamma \dfrac{\bfitlambda^{n+1/3} - \bfitlambda^{n}}{\tau} - 2b \nabla \left( |p^{n+1/3}| \nabla \cdot \bfitlambda^{n+1/3} \right) = 0 & \text{in } \Omega, \\ b |p^{n+1/3}| \nabla \cdot \bfitlambda^{n+1/3} = 0 & \text{on } \partial\Omega. \end{cases}
Problem (3.18) is (formally) well posed. Properly approximated by either finite difference or finite element methods, problem (3.18) leads to linear systems associated with symmetric positive definite matrices, making these systems solvable by a large variety of efficient linear solvers. For those cases where \Omega is a rectangle (the most common situation), an alternative to the above approximation methods is provided by cosine-expansion based spectral methods.

In section 3.8, we will encounter the variant of system (3.18) associated with periodic boundary conditions in the Ox_{1} and Ox_{2} directions. Indeed, it is common to use periodic boundary conditions for image processing problems. One can justify this approach by assuming that the image is defined on a 2D torus, for example. As shown in [48, section 3.2.5] and [50], periodic boundary conditions allow the efficient solution of the periodic variant of problem (3.18) by fast Fourier transform (FFT). We will assume periodic boundary conditions in sections 4 and 5.

3.5. Computing (p^{n+2/3}, \bfitlambda^{n+2/3}) from (3.13).

3.5.1. Decomposition of problem (3.13). One can view system (3.13) as the Euler--Lagrange equation of the following minimization problem:

(3.19) \min_{(q, \bfitmu) \in S} \left[ \int_{\Omega} |q - p^{n+1/3}|^{2} dx + \gamma \int_{\Omega} |\bfitmu - \bfitlambda^{n+1/3}|^{2} dx \right].

Problem (3.19) can be solved pointwise, reducing, a.e. on \Omega, to the following finite dimensional constrained minimization problem:

(3.20) (p^{n+2/3}(x), \bfitlambda^{n+2/3}(x)) = \arg\min_{(q, \bfitmu) \in \sigma} j_{n+1/3}(q, \bfitmu; x),

where \sigma = \{ (q, \bfitmu) \in R^{2} \times R^{2}, \ q \cdot \bfitmu = |q|, \ |\bfitmu| \leq 1 \} and

j_{n+1/3}(q, \bfitmu; x) = \left| q - p^{n+1/3}(x) \right|^{2} + \gamma \left| \bfitmu - \bfitlambda^{n+1/3}(x) \right|^{2} \quad \forall (q, \bfitmu) \in R^{2} \times R^{2}.

Let us define \sigma_{0} and \sigma_{1} by

\sigma_{0} = \{ (q, \bfitmu) \in R^{2} \times R^{2}, \ q = 0, \ |\bfitmu| \leq 1 \}, \quad \sigma_{1} = \{ (q, \bfitmu) \in R^{2} \times R^{2}, \ q \neq 0, \ q \cdot \bfitmu = |q|, \ |\bfitmu| = 1 \}.
We clearly have \sigma = \sigma_{0} \cup \sigma_{1}, implying that to compute (p^{n+2/3}(x), \bfitlambda^{n+2/3}(x)), we may proceed as follows:

(i) Solve the following two uncoupled minimization problems:

(3.21) \left( p_{0}^{n+2/3}(x), \bfitlambda_{0}^{n+2/3}(x) \right) = \arg\min_{(q, \bfitmu) \in \sigma_{0}} j_{n+1/3}(q, \bfitmu; x),

(3.22) \left( p_{1}^{n+2/3}(x), \bfitlambda_{1}^{n+2/3}(x) \right) = \arg\min_{(q, \bfitmu) \in \sigma_{1}} j_{n+1/3}(q, \bfitmu; x).

(ii) Choose the one that gives the smallest value of j_{n+1/3} as the minimizer of (3.20), i.e.,

(3.23) \left( p^{n+2/3}(x), \bfitlambda^{n+2/3}(x) \right) = \arg\min \left[ j_{n+1/3}\left(p_{0}^{n+2/3}(x), \bfitlambda_{0}^{n+2/3}(x); x\right), \ j_{n+1/3}\left(p_{1}^{n+2/3}(x), \bfitlambda_{1}^{n+2/3}(x); x\right) \right], \quad \text{a.e. on } \Omega.

In what follows, we first introduce a strategy for adaptively choosing \gamma in section 3.5.2. After that, in sections 3.5.3 and 3.5.4, we will discuss the minimization of the functional in (3.20) over \sigma_{0} and \sigma_{1}, respectively.

3.5.2. Selection of the parameter \bfitgamma. We intend to select the parameter \gamma so that the two terms in j_{n+1/3} are balanced. We note that

\bfitlambda(t) = \frac{p(t)}{|p(t)|}.

Thus

(3.24) \frac{\partial \bfitlambda}{\partial t} = \lim_{\tau \rightarrow 0} \frac{1}{\tau} \left( \frac{p(t+\tau)}{|p(t+\tau)|} - \frac{p(t)}{|p(t)|} \right).

Due to the relation

\left| \frac{p}{|p|} - \frac{q}{|q|} \right|^{2} = \frac{|p|^{2}}{|p|^{2}} + \frac{|q|^{2}}{|q|^{2}} - \frac{2 \, p \cdot q}{|p| |q|} = 2 \left( 1 - \frac{p \cdot q}{|p| |q|} \right) \leq \frac{|p|^{2} + |q|^{2} - 2 \, p \cdot q}{|p| |q|} = \frac{|p - q|^{2}}{|p| |q|},

one has

\left| \frac{p(t+\tau)}{|p(t+\tau)|} - \frac{p(t)}{|p(t)|} \right| \leq \frac{|p(t+\tau) - p(t)|}{\sqrt{|p(t+\tau)| \, |p(t)|}}.

Letting \tau \rightarrow 0, we get from (3.24) that

\left| \frac{\partial \bfitlambda}{\partial t} \right| \leq \frac{1}{|p|} \left| \frac{\partial p}{\partial t} \right|.

For small \tau, the minimizer of (3.19) verifies

(3.25) \frac{|p^{n+2/3} - p^{n+1/3}|^{2}}{2\tau} + \gamma \frac{|\bfitlambda^{n+2/3} - \bfitlambda^{n+1/3}|^{2}}{2\tau} \approx \frac{\tau}{2} \left( \left| \frac{\partial p}{\partial t}(t^{n+1/3}) \right|^{2} + \gamma \left| \frac{\partial \bfitlambda}{\partial t}(t^{n+1/3}) \right|^{2} \right).

According to the above estimate, to balance these two terms, we just need to choose

\gamma = |p^{n+1/3}|^{2}.

In order to avoid the case |p^{n+1/3}| \approx 0, we choose, in practice,

(3.26) \gamma = \max(|p^{n+1/3}|^{2}, \hat{\alpha}),

where \hat{\alpha} is a given small positive number. In this work, we empirically choose \hat{\alpha} = \sqrt{\tau}.
3.5.3. Minimizing the functional in (3.20) over \bfitsigma_{0}. Over \sigma_{0}, the minimization problem (3.21) reduces to

(3.27) \min_{\bfitmu \in R^{2}, |\bfitmu| \leq 1} \left| \bfitmu - \bfitlambda^{n+1/3}(x) \right|.

Clearly, the solution of problem (3.27) is given by

(3.28) \bfitlambda_{0}^{n+2/3}(x) = \frac{\bfitlambda^{n+1/3}(x)}{\max(1, |\bfitlambda^{n+1/3}(x)|)}.

Concerning p_{0}^{n+2/3}(x), we have, obviously,

(3.29) p_{0}^{n+2/3}(x) = 0.
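The \sigma_{0}-solution (3.28)--(3.29) is just the projection of \bfitlambda^{n+1/3}(x) onto the closed unit disk, with a zero q-component. A vectorized NumPy sketch (our own illustration):

```python
import numpy as np

def project_sigma0(lam):
    """Pointwise sigma_0 solution: lam is projected onto the closed
    unit disk, as in (3.28); the q-component is 0, as in (3.29).

    lam : array of shape (..., 2), the field lambda^{n+1/3}."""
    norm = np.sqrt(np.sum(lam**2, axis=-1, keepdims=True))
    lam0 = lam / np.maximum(1.0, norm)   # unchanged if |lam| <= 1
    p0 = np.zeros_like(lam)
    return p0, lam0
```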

3.5.4. Minimizing the functional in (3.20) over \bfitsigma_{1}. Over \sigma_{1}, the minimization problem (3.22) reduces to

(3.30) \inf_{(q, \bfitmu) \in R^{2} \times R^{2}, \, q \neq 0, \, q \cdot \bfitmu = |q|, \, |\bfitmu| = 1} \left[ |q - p^{n+1/3}(x)|^{2} + \gamma |\bfitmu - \bfitlambda^{n+1/3}(x)|^{2} \right].

For notational simplicity, we introduce \bfx and \bfy defined by \bfx = p^{n+1/3}(x) and \bfy = \bfitlambda^{n+1/3}(x), respectively. Using this notation and taking the relation |\bfitmu| = 1 into account, problem (3.30) takes the following simplified formulation:

(3.31) \inf_{(q, \bfitmu) \in R^{2} \times R^{2}, \, q \neq 0, \, q \cdot \bfitmu = |q|, \, |\bfitmu| = 1} \left[ \frac{1}{2} |q|^{2} - q \cdot \bfx - \gamma \bfitmu \cdot \bfy \right].

Let us denote |q| by \theta; since \bfitmu = q/|q|, the above relations imply that

(3.32) q = \theta \bfitmu, \quad \theta > 0.

Relation (3.32) allows us to replace problem (3.31) by the following constrained minimization problem in R^{3}:

(3.33) \inf_{(\theta, \bfitmu) \in R \times R^{2}, \, \theta > 0, \, |\bfitmu| = 1} \left[ \frac{1}{2} \theta^{2} - \theta \bfitmu \cdot \bfx - \gamma \bfitmu \cdot \bfy \right].

In order to solve problem (3.33), we observe that the above problem is equivalent to

(3.34) \inf_{\theta > 0} \ \min_{\bfitmu \in R^{2}, |\bfitmu| = 1} \left[ \frac{1}{2} \theta^{2} - \theta \bfitmu \cdot \bfx - \gamma \bfitmu \cdot \bfy \right].

In order to minimize over a closed set of R^{3}, the problem that we finally consider is the following variant of problem (3.34):

(3.35) \min_{\theta \geq 0} \ \min_{\bfitmu \in R^{2}, |\bfitmu| = 1} \left[ \frac{1}{2} \theta^{2} - \theta \bfitmu \cdot \bfx - \gamma \bfitmu \cdot \bfy \right].
With the parameter \theta being fixed, the solution \bfitmu^{\ast}(\theta) of the problem

\min_{\bfitmu \in R^{2}, |\bfitmu| = 1} \left[ \frac{1}{2} \theta^{2} - \theta \bfitmu \cdot \bfx - \gamma \bfitmu \cdot \bfy \right]

is given by \bfitmu^{\ast}(\theta) = \frac{\theta \bfx + \gamma \bfy}{|\theta \bfx + \gamma \bfy|}, implying that problem (3.35) reduces to

(3.36) \min_{\theta \geq 0} \left[ \frac{1}{2} \theta^{2} - |\theta \bfx + \gamma \bfy| \right].

There are many ways to solve problem (3.36), such as Newton's method, bisection or golden section methods, and a variety of fixed point methods ([8]). The method we have chosen is a fixed point one, and it has shown fast convergence properties. Let us denote by $E$ the function defined by
\[
E(\theta)=\frac{1}{2}\theta^2-|\theta\mathbf{x}+\gamma\mathbf{y}|.
\]
We clearly have
\[
\frac{dE}{d\theta}(\theta)=\theta-\frac{\mathbf{x}\cdot(\theta\mathbf{x}+\gamma\mathbf{y})}{|\theta\mathbf{x}+\gamma\mathbf{y}|}.
\]
In order to solve the equation $\frac{dE}{d\theta}(\theta)=0$, we advocate the following simple fixed point method:

\[
(3.37)\qquad
\begin{cases}
\theta^{(0)}=|\mathbf{x}|,\\[4pt]
\text{for } k\geq 0,\ \theta^{(k)}\rightarrow\theta^{(k+1)}:\quad
\theta^{(k+1)}=\max\left(0,\ \dfrac{\mathbf{x}\cdot(\theta^{(k)}\mathbf{x}+\gamma\mathbf{y})}{|\theta^{(k)}\mathbf{x}+\gamma\mathbf{y}|}\right).
\end{cases}
\]

A more detailed presentation of our fixed point method reads as follows.

Algorithm 1. Fixed point solution of problem (3.36)
Input: $\mathbf{x}$, $\mathbf{y}$, $\gamma$
Output: $\theta^{*}$
Initialization: $\theta^{(0)}=|\mathbf{x}|$, $k=0$
While $|\theta^{(k+1)}-\theta^{(k)}| > tol$ and $k < M_{it}$:
  (1) compute $\theta^{(k+1)}=\max\left(0,\ \dfrac{\mathbf{x}\cdot(\theta^{(k)}\mathbf{x}+\gamma\mathbf{y})}{|\theta^{(k)}\mathbf{x}+\gamma\mathbf{y}|}\right)$;
  (2) $k=k+1$.
End While.
(3) Take the final $\theta^{*}$ when the iterations stop.

In Algorithm 1, $tol$ and $M_{it}$ denote a positive tolerance and the maximum number of iterations, respectively. In practice, Algorithm 1 is not sensitive to these values. For all of the experiments reported in this article we took $tol=10^{-3}$ and $M_{it}=100$.
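For concreteness, the fixed point iteration (3.37) can be transcribed in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the function name `solve_theta` and the guard for a vanishing denominator are ours.

```python
import numpy as np

def solve_theta(x, y, gamma, tol=1e-3, max_iter=100):
    """Fixed point iteration (3.37) for min over theta >= 0 of
    0.5*theta^2 - |theta*x + gamma*y|."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    theta = np.linalg.norm(x)                 # initialization theta^(0) = |x|
    for _ in range(max_iter):
        v = theta * x + gamma * y
        nv = np.linalg.norm(v)
        if nv == 0.0:                         # degenerate case (assumption: return 0)
            return 0.0
        theta_new = max(0.0, float(np.dot(x, v)) / nv)
        if abs(theta_new - theta) <= tol:
            return theta_new
        theta = theta_new
    return theta
```

For instance, with $\mathbf{x}=(2,0)$, $\mathbf{y}=(1,0)$, $\gamma=0.5$, the objective is $\frac{1}{2}\theta^2-(2\theta+0.5)$, whose minimizer over $\theta\geq 0$ is $\theta^{*}=2$, and the iteration reaches it at once.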

Once $\theta^{*}$ is known, we obtain the vectors $\boldsymbol{\lambda}^{n+2/3}_{1}(x)$ and $\mathbf{p}^{n+2/3}_{1}(x)$ (we defined them in section 3.5.1) via the following relations:
\[
(3.38)\qquad \boldsymbol{\lambda}^{n+2/3}_{1}(x)=\frac{\theta^{*}\mathbf{p}^{n+1/3}(x)+\gamma\boldsymbol{\lambda}^{n+1/3}(x)}{|\theta^{*}\mathbf{p}^{n+1/3}(x)+\gamma\boldsymbol{\lambda}^{n+1/3}(x)|}
\]
and
\[
(3.39)\qquad \mathbf{p}^{n+2/3}_{1}(x)=\theta^{*}\boldsymbol{\lambda}^{n+2/3}_{1}(x),
\]
respectively. A more rigorous notation would have been $\theta^{*}_{n}(x)$ instead of $\theta^{*}$, since the solution of problem (3.36) varies with $x$ and $n$ (we recall that, in (3.36), $\mathbf{x}=\mathbf{p}^{n+1/3}(x)$ and $\mathbf{y}=\boldsymbol{\lambda}^{n+1/3}(x)$).
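Once $\theta^{*}$ is available, (3.38)--(3.39) amount to one normalization and one rescaling. A minimal sketch (the name `recover_pair` is ours; `x` and `y` stand for $\mathbf{p}^{n+1/3}(x)$ and $\boldsymbol{\lambda}^{n+1/3}(x)$, and we assume $\theta^{*}\mathbf{x}+\gamma\mathbf{y}\neq\mathbf{0}$):

```python
import numpy as np

def recover_pair(theta, x, y, gamma):
    """(3.38)-(3.39): lam = (theta*x + gamma*y)/|theta*x + gamma*y|, p = theta*lam."""
    v = theta * np.asarray(x, dtype=float) + gamma * np.asarray(y, dtype=float)
    lam = v / np.linalg.norm(v)   # assumes the numerator is nonzero
    return theta * lam, lam
```

By construction $|\boldsymbol{\lambda}|=1$ and $\mathbf{p}\cdot\boldsymbol{\lambda}=|\mathbf{p}|$, so the constraint set of (3.31) is respected.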

Once we have computed $\bigl(\mathbf{p}^{n+2/3}_{2}(x),\boldsymbol{\lambda}^{n+2/3}_{2}(x)\bigr)$ from (3.28)--(3.29) and $\bigl(\mathbf{p}^{n+2/3}_{1}(x),\boldsymbol{\lambda}^{n+2/3}_{1}(x)\bigr)$ from (3.38)--(3.39), we obtain the minimizer of (3.20) through (3.23).

3.6. Computing $\mathbf{p}^{n+1}$ and $\boldsymbol{\lambda}^{n+1}$ from (3.14). We clearly have
\[
(3.40)\qquad \boldsymbol{\lambda}^{n+1}=\boldsymbol{\lambda}^{n+2/3}.
\]

On the other hand, the multivalued equation verified by $\mathbf{p}^{n+1}$ in (3.14) is the Euler--Lagrange equation of the following minimization problem:
\[
(3.41)\qquad \mathbf{p}^{n+1}=\arg\min_{\mathbf{q}\in\Sigma_f}\left[\frac{1}{2}\int_{\Omega}\bigl|\mathbf{q}-\mathbf{p}^{n+2/3}\bigr|^2\,dx+\frac{\tau}{2}\int_{\Omega}|v_{\mathbf{q}}-f|^2\,dx\right],
\]
the function $v_{\mathbf{q}}$ being defined by (2.9).

From the definition of $\Sigma_f$ (see section 2), problem (3.41) is equivalent to
\[
(3.42)\qquad
\begin{cases}
\mathbf{p}^{n+1}=\nabla u^{n+1} \text{ with}\\[4pt]
u^{n+1}=\arg\min_{v\in \mathcal{H}^1(\Omega)}\left[\dfrac{1}{2}\displaystyle\int_{\Omega}|\nabla v|^2\,dx+\dfrac{\tau}{2}\int_{\Omega}|v-f|^2\,dx-\int_{\Omega}\mathbf{p}^{n+2/3}\cdot\nabla v\,dx\right].
\end{cases}
\]

The function $u^{n+1}$ is the unique solution of the following well-posed linear variational problem in $\mathcal{H}^1(\Omega)$:
\[
(3.43)\qquad
\begin{cases}
u^{n+1}\in \mathcal{H}^1(\Omega),\\[4pt]
\displaystyle\int_{\Omega}\nabla u^{n+1}\cdot\nabla v\,dx+\tau\int_{\Omega}u^{n+1}v\,dx=\int_{\Omega}\mathbf{p}^{n+2/3}\cdot\nabla v\,dx+\tau\int_{\Omega}f v\,dx\quad\forall v\in \mathcal{H}^1(\Omega).
\end{cases}
\]

Problems (3.42) and (3.43) have a unique solution, which is the weak solution of the following problem:
\[
(3.44)\qquad
\begin{cases}
-\nabla^2 u^{n+1}+\tau u^{n+1}=-\nabla\cdot\mathbf{p}^{n+2/3}+\tau f &\text{in } \Omega,\\
(\nabla u^{n+1}-\mathbf{p}^{n+2/3})\cdot\mathbf{n}=0 &\text{on } \partial\Omega.
\end{cases}
\]

The linear elliptic problem (3.44) is of the Neumann type with constant coefficients. The numerical solution of this type of problem has motivated a very large number of methods and associated software. In the particular case of rectangular domains, many efficient solvers are available for the discrete analogues of problem (3.44) obtained by symmetry-preserving finite difference discretization (sparse Cholesky, conjugate gradient, cyclic reduction, etc.). In section 3.8, we will encounter the variant of (3.44) associated with periodic boundary conditions; its discrete analogues are particularly well suited to FFT-based solvers.
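The text does not prescribe a specific solver. As one concrete possibility on a rectangle, a cell-centered finite difference analogue of (3.44) with homogeneous Neumann conditions is diagonalized by the type-II discrete cosine transform. A sketch of this route (our own, assuming SciPy, unit grid spacing by default, and the right-hand side `g` already assembled):

```python
import numpy as np
from scipy.fft import dctn, idctn

def solve_neumann(g, tau, h=1.0):
    """Solve the discrete analogue of -lap(u) + tau*u = g with homogeneous
    Neumann conditions (cell-centered grid, reflection at the boundary) by DCT-II."""
    M, N = g.shape
    # eigenvalues of the 1D Neumann second-difference operator, divided by h^2
    lam1 = (2.0 - 2.0 * np.cos(np.pi * np.arange(M) / M)) / h**2
    lam2 = (2.0 - 2.0 * np.cos(np.pi * np.arange(N) / N)) / h**2
    sym = lam1[:, None] + lam2[None, :]
    return idctn(dctn(g, norm='ortho') / (sym + tau), norm='ortho')
```

Since $\tau>0$, the discrete problem is uniquely solvable: applying the Neumann-reflected 5-point Laplacian to the returned `u` reproduces `g` up to round-off.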

3.7. Summary. The subproblems (3.12), (3.13), and (3.14) encountered in our splitting method aim at minimizing consecutively the various components of the elastica cost functional.

Our proposed algorithm is summarized in Algorithm 2.

Algorithm 2. A schematic description of the algorithm solving problem (2.1)
Input: the input image $f$ and the parameters $a$, $b$, and $\tau$.
Output: the computed image $u^{*}$.
Initialization: $n=0$, $u^{0}=f$, $\mathbf{p}^{0}=\nabla f$, and
\[
\boldsymbol{\lambda}^{0}(x)=
\begin{cases}
\mathbf{p}^{0}(x)/|\mathbf{p}^{0}(x)| &\text{if } \mathbf{p}^{0}(x)\neq \mathbf{0},\\
\mathbf{0} &\text{otherwise},
\end{cases}
\qquad x\in\Omega.
\]
While $\|u^{n+1}-u^{n}\|/\|u^{n+1}\| > tol$ and $n < M_{iter}$:
1. Using the methods discussed in sections 3.3 and 3.4, solve system (3.12) to obtain $\bigl(\mathbf{p}^{n+1/3},\boldsymbol{\lambda}^{n+1/3}\bigr)$.
2. Use the method discussed in section 3.5 to obtain $\bigl(\mathbf{p}^{n+2/3},\boldsymbol{\lambda}^{n+2/3}\bigr)$ from (3.13).
3. Use the method discussed in section 3.6 to obtain $\bigl(u^{n+1},\mathbf{p}^{n+1},\boldsymbol{\lambda}^{n+1}\bigr)$ from (3.14).
4. Check convergence and go to the next iteration or stop.
End While.
When the iterations stop, take $u^{*}=u^{n+1}$.

In Algorithm 2, $tol$ is the stopping criterion tolerance, $M_{iter}$ is the maximum number of iterations, and the norm $\|\cdot\|$ is the $L^2$ norm. All of the subproblems encountered when using Algorithm 2 have either closed form solutions or can be solved by dedicated fast solvers. Owing to the semi-implicit nature of the operator-splitting scheme, we can use (relatively) large values of $\tau$, and our numerical experiments show that the overall number of iterations is (relatively) low. We have, however, to keep $\tau$ small enough so that the resulting splitting error is small as well (see Appendix A). The model parameters $a$ and $b$ have to be given. Finally, the time-discretization step $\tau$ also needs to be provided; in practice, $\tau$ is easy to tune. The selection of $\gamma$ was addressed in section 3.5.2; further information about the choice of all these parameters will be provided in section 5.

3.8. On the handling of periodic boundary conditions. In the preceding sections (section 3.4, in particular) we mentioned several times the possibility of using periodic boundary conditions when $\Omega$ is a rectangular domain (a very common situation). The changes that this choice requires are minimal and are discussed below (we assume that $\Omega=(0,L)\times(0,H)$).

The first modification is to replace the space $\mathcal{W}$ in (2.3) by $\mathcal{W}_P$ defined by
\[
\mathcal{W}_P=\{(v,\boldsymbol{\mu})\in \mathcal{H}^1_P(\Omega)\times \mathcal{H}_P(\Omega,\mathrm{div}),\ \boldsymbol{\mu}\cdot\nabla v=|\nabla v|,\ |\boldsymbol{\mu}|\leq 1\},
\]
where (with obvious notation)
\[
\mathcal{H}^1_P(\Omega)=\{v\in \mathcal{H}^1(\Omega),\ v(0,\cdot)=v(L,\cdot),\ v(\cdot,0)=v(\cdot,H)\}
\]
and
\[
\mathcal{H}_P(\Omega,\mathrm{div})=\{\boldsymbol{\mu}=(\mu_1,\mu_2)\in \mathcal{H}(\Omega,\mathrm{div}),\ \mu_1(0,\cdot)=\mu_1(L,\cdot),\ \mu_2(\cdot,0)=\mu_2(\cdot,H)\}.
\]
The second modification is to define $\Sigma_f$ by
\[
\Sigma_f=\left\{\mathbf{q}\in (\mathcal{L}^2(\Omega))^2,\ \exists\, v\in \mathcal{H}^1_P(\Omega) \text{ such that } \mathbf{q}=\nabla v \text{ and } \int_{\Omega}(v-f)\,dx=0\right\},
\]

and we replace (2.6) and (2.9) by
\[
(3.45)\qquad \min_{(\mathbf{q},\boldsymbol{\mu})\in (\mathcal{L}^2(\Omega))^2\times \mathcal{H}_P(\Omega,\mathrm{div})}\left[\int_{\Omega}\bigl(a+b|\nabla\cdot\boldsymbol{\mu}|^2\bigr)|\mathbf{q}|\,dx+\frac{1}{2}\int_{\Omega}|v_{\mathbf{q}}-f|^2\,dx+I_{\Sigma_f}(\mathbf{q})+I_S(\mathbf{q},\boldsymbol{\mu})\right]
\]
and

\[
(3.46)\qquad
\begin{cases}
\nabla^2 v_{\mathbf{q}}=\nabla\cdot\mathbf{q} &\text{in } \Omega,\\
v_{\mathbf{q}}(0,\cdot)=v_{\mathbf{q}}(L,\cdot),\quad v_{\mathbf{q}}(\cdot,0)=v_{\mathbf{q}}(\cdot,H),\\[4pt]
\left(\dfrac{\partial v_{\mathbf{q}}}{\partial x_1}-q_1\right)(0,\cdot)=\left(\dfrac{\partial v_{\mathbf{q}}}{\partial x_1}-q_1\right)(L,\cdot),\\[8pt]
\left(\dfrac{\partial v_{\mathbf{q}}}{\partial x_2}-q_2\right)(\cdot,0)=\left(\dfrac{\partial v_{\mathbf{q}}}{\partial x_2}-q_2\right)(\cdot,H),\\[8pt]
\displaystyle\int_{\Omega}v_{\mathbf{q}}\,dx=\int_{\Omega}f\,dx,
\end{cases}
\]
respectively (above, $(q_1,q_2)=\mathbf{q}$).

The third modification is to replace (3.17) and (3.18) by
\[
(3.47)\qquad \boldsymbol{\lambda}^{n+1/3}=\arg\min_{\boldsymbol{\mu}\in \mathcal{H}_P(\Omega,\mathrm{div})}\left[\gamma\int_{\Omega}\frac{|\boldsymbol{\mu}-\boldsymbol{\lambda}^{n}|^2}{2\tau}\,dx+J_1(\boldsymbol{\mu},\mathbf{p}^{n+1/3})\right]
\]
and

\[
(3.48)\qquad
\begin{cases}
\gamma\dfrac{\boldsymbol{\lambda}^{n+1/3}-\boldsymbol{\lambda}^{n}}{\tau}-2b\,\nabla\bigl(|\mathbf{p}^{n+1/3}|\,\nabla\cdot\boldsymbol{\lambda}^{n+1/3}\bigr)=\mathbf{0} &\text{in } \Omega,\\[6pt]
\lambda_1(0,\cdot)=\lambda_1(L,\cdot),\quad \lambda_2(\cdot,0)=\lambda_2(\cdot,H),\\[4pt]
\bigl(|\mathbf{p}^{n+1/3}|\,\nabla\cdot\boldsymbol{\lambda}^{n+1/3}\bigr)(0,\cdot)=\bigl(|\mathbf{p}^{n+1/3}|\,\nabla\cdot\boldsymbol{\lambda}^{n+1/3}\bigr)(L,\cdot),\\[4pt]
\bigl(|\mathbf{p}^{n+1/3}|\,\nabla\cdot\boldsymbol{\lambda}^{n+1/3}\bigr)(\cdot,0)=\bigl(|\mathbf{p}^{n+1/3}|\,\nabla\cdot\boldsymbol{\lambda}^{n+1/3}\bigr)(\cdot,H),
\end{cases}
\]
respectively. The periodic boundary conditions in (3.48) make the above linear elliptic problem well suited to FFT-based solution methods, after appropriate finite difference discretization (see section 4).

Finally, replace (3.42), (3.43), and (3.44) by
\[
(3.49)\qquad
\begin{cases}
\mathbf{p}^{n+1}=\nabla u^{n+1} \text{ with}\\[4pt]
u^{n+1}=\arg\min_{v\in \mathcal{H}^1_P(\Omega)}\left[\dfrac{1}{2}\displaystyle\int_{\Omega}|\nabla v|^2\,dx+\dfrac{\tau}{2}\int_{\Omega}|v-f|^2\,dx-\int_{\Omega}\mathbf{p}^{n+2/3}\cdot\nabla v\,dx\right],
\end{cases}
\]

\[
(3.50)\qquad
\begin{cases}
u^{n+1}\in \mathcal{H}^1_P(\Omega),\\[4pt]
\displaystyle\int_{\Omega}\nabla u^{n+1}\cdot\nabla v\,dx+\tau\int_{\Omega}u^{n+1}v\,dx=\int_{\Omega}\mathbf{p}^{n+2/3}\cdot\nabla v\,dx+\tau\int_{\Omega}f v\,dx\quad\forall v\in \mathcal{H}^1_P(\Omega),
\end{cases}
\]
and

\[
(3.51)\qquad
\begin{cases}
-\nabla^2 u^{n+1}+\tau u^{n+1}=-\nabla\cdot\mathbf{p}^{n+2/3}+\tau f &\text{in } \Omega,\\
u^{n+1}(0,\cdot)=u^{n+1}(L,\cdot),\quad u^{n+1}(\cdot,0)=u^{n+1}(\cdot,H),\\[4pt]
\left(\dfrac{\partial u^{n+1}}{\partial x_1}-p^{n+2/3}_1\right)(0,\cdot)=\left(\dfrac{\partial u^{n+1}}{\partial x_1}-p^{n+2/3}_1\right)(L,\cdot),\\[8pt]
\left(\dfrac{\partial u^{n+1}}{\partial x_2}-p^{n+2/3}_2\right)(\cdot,0)=\left(\dfrac{\partial u^{n+1}}{\partial x_2}-p^{n+2/3}_2\right)(\cdot,H),
\end{cases}
\]
respectively.

System (3.51) is an elliptic problem with constant coefficients and periodic boundary conditions, posed on a rectangle. In section 4, we will show how to solve its finite difference discrete analogues by FFT.
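To make the FFT route concrete: with a standard 5-point periodic Laplacian, the discrete analogue of (3.51) diagonalizes under the 2D FFT, so the solve reduces to one forward and one inverse transform. A sketch under these assumptions (names are ours; `g` stands for the assembled right-hand side $-\nabla\cdot\mathbf{p}^{n+2/3}+\tau f$):

```python
import numpy as np

def solve_periodic(g, tau, h=1.0):
    """Solve the periodic finite difference analogue of -lap(u) + tau*u = g
    (5-point Laplacian) by FFT diagonalization."""
    M, N = g.shape
    sym1 = (2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(M) / M)) / h**2
    sym2 = (2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(N) / N)) / h**2
    sym = sym1[:, None] + sym2[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(g) / (sym + tau)))
```

Since $\tau>0$, the symbol $\mathrm{sym}+\tau$ never vanishes, so no zero-mode treatment is needed here (unlike the pure Poisson problem (3.46)).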

4. Numerical discretization.

4.1. Synopsis. As with the THC method in [48], we will assume that $\Omega$ is a rectangle. We assume also that all functions are periodic in both the $x_1$ and $x_2$ directions. To discretize the Euler elastica variational problem, we will use staggered grids as visualized in Figure 1. In Figure 1, the unknown function $v$ is discretized at the $\bullet$-nodes, while the first (resp., second) components of $\mathbf{q}$ and $\boldsymbol{\mu}$ are discretized at the $\circ$-nodes (resp., $\square$-nodes). Useful notation will be introduced in section 4.2. The solution of the discrete subproblems will be discussed in sections 4.3--4.6.

Figure 1. Indexation of the discrete analogues of the unknown functionsv(at the\bullet -nodes) and of the first (at the\circ -nodes) and second (at the \square -nodes) components of the vector-valued functions\bfq and\bfitmu .

4.2. Some useful discrete operators. After discretization, we denote by $\Omega_h$ the discrete image domain $\Omega_h=[1,M_1]h\times[1,N_1]h$, where $h=L/M_1=H/N_1$; the image size is thus $M_1\times N_1$. Note that $\Omega_h$ is a set of $M_1N_1$ points in $\mathbb{R}^2$. Taking periodicity into account, we define the backward ($-$) and forward ($+$) discrete analogues of $\partial v/\partial x_1$ and $\partial v/\partial x_2$ by
\[
\partial_1^{-}v(i,j)=
\begin{cases}
(v(i,j)-v(i-1,j))/h, & 1<i\leq M_1,\\
(v(1,j)-v(M_1,j))/h, & i=1,
\end{cases}
\qquad
\partial_2^{-}v(i,j)=
\begin{cases}
(v(i,j)-v(i,j-1))/h, & 1<j\leq N_1,\\
(v(i,1)-v(i,N_1))/h, & j=1,
\end{cases}
\]
\[
\partial_1^{+}v(i,j)=
\begin{cases}
(v(i+1,j)-v(i,j))/h, & 1\leq i<M_1,\\
(v(1,j)-v(M_1,j))/h, & i=M_1,
\end{cases}
\qquad
\partial_2^{+}v(i,j)=
\begin{cases}
(v(i,j+1)-v(i,j))/h, & 1\leq j<N_1,\\
(v(i,1)-v(i,N_1))/h, & j=N_1.
\end{cases}
\]

With obvious notation, the discrete forward ($+$) and backward ($-$) gradient operators $\nabla^{+}$ and $\nabla^{-}$ are defined by
\[
\nabla^{\pm}v(i,j)=(\partial_1^{\pm}v(i,j),\ \partial_2^{\pm}v(i,j)).
\]
The associated discrete forward ($+$) and backward ($-$) divergence operators $\mathrm{div}^{+}$ and $\mathrm{div}^{-}$ are defined (again with obvious notation) by
\[
\mathrm{div}^{\pm}\mathbf{q}(i,j)=\partial_1^{\pm}q_1(i,j)+\partial_2^{\pm}q_2(i,j).
\]
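With periodic wrap-around, these operators are one-liners via circular shifts. A NumPy sketch (our own, with unit spacing $h=1$ by default), which also exhibits the discrete integration-by-parts identity $\langle\nabla^{+}v,\mathbf{q}\rangle=-\langle v,\mathrm{div}^{-}\mathbf{q}\rangle$ produced by pairing the forward gradient with the backward divergence:

```python
import numpy as np

def grad_plus(v, h=1.0):
    """Forward periodic gradient (d1+ v, d2+ v), stacked along axis 0."""
    return np.stack(((np.roll(v, -1, axis=0) - v) / h,
                     (np.roll(v, -1, axis=1) - v) / h))

def div_minus(q, h=1.0):
    """Backward periodic divergence d1- q1 + d2- q2."""
    return ((q[0] - np.roll(q[0], 1, axis=0)) / h
            + (q[1] - np.roll(q[1], 1, axis=1)) / h)
```

For any grid function `v` and vector field `q`, `np.sum(grad_plus(v) * q)` equals `-np.sum(v * div_minus(q))` up to round-off, the discrete duality used throughout section 4.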

If, in particular, a variable defined at the $\circ$-nodes (resp., $\square$-nodes) needs to be evaluated at the $\square$-node (resp., $\circ$-node) $(i,j)$, this is done using the following averaging operators:
\[
(4.1)\qquad \mathcal{A}^{\square}_{i,j}(\mu_1)=\frac{\mu_1(i,j+1)+\mu_1(i-1,j+1)+\mu_1(i,j)+\mu_1(i-1,j)}{4},
\]
\[
(4.2)\qquad \mathcal{A}^{\circ}_{i,j}(\mu_2)=\frac{\mu_2(i+1,j)+\mu_2(i,j)+\mu_2(i+1,j-1)+\mu_2(i,j-1)}{4},
\]

where $\mu_1$ (resp., $\mu_2$) is defined at the $\circ$-nodes (resp., $\square$-nodes). In order to evaluate the magnitude of $\mathbf{q}=(q_1,q_2)$ at the $\bullet$-node $(i,j)$ we will use an additional averaging operator, namely
\[
(4.3)\qquad |\mathcal{A}|^{\bullet}_{i,j}(\mathbf{q})=\sqrt{\left(\frac{q_1(i,j)+q_1(i-1,j)}{2}\right)^2+\left(\frac{q_2(i,j)+q_2(i,j-1)}{2}\right)^2},
\]

where $q_1$ and $q_2$ are defined at the $\circ$-nodes and $\square$-nodes, respectively. Similarly, the discrete divergence $\mathrm{div}^{\bullet}_{i,j}(\boldsymbol{\mu})$ of $\boldsymbol{\mu}=(\mu_1,\mu_2)$ at the $\bullet$-node $(i,j)$ is defined by
\[
(4.4)\qquad \mathrm{div}^{\bullet}_{i,j}(\boldsymbol{\mu})=[\mu_1(i,j)-\mu_1(i-1,j)+\mu_2(i,j)-\mu_2(i,j-1)]/h,
\]
where $\mu_1$ (resp., $\mu_2$) is defined at the $\circ$-nodes (resp., $\square$-nodes). Finally, we define shifting and identity operators by
\[
(4.5)\qquad \mathcal{S}_1^{\pm}\varphi(i,j)=\varphi(i\pm 1,j),\quad \mathcal{S}_2^{\pm}\varphi(i,j)=\varphi(i,j\pm 1),\quad \text{and}\quad \mathcal{I}\varphi(i,j)=\varphi(i,j).
\]

4.3. Computation of the discrete analogue of $\mathbf{p}^{n+1/3}$ in (3.16). Let us recall that from (3.16) one has
\[
(4.6)\qquad \mathbf{p}^{n+1/3}=\max\left\{0,\ 1-\frac{c}{|\mathbf{p}^{n}|}\right\}\mathbf{p}^{n},
\]
where $c=\tau a+\tau b|\nabla\cdot\boldsymbol{\lambda}^{n}|^2$. In the discrete setting, the first (resp., second) component of $\mathbf{p}^{n}$ and $\boldsymbol{\lambda}^{n}$ is defined at the $\circ$-nodes (resp., $\square$-nodes); we therefore need to discuss the two situations we will encounter when discretizing (4.6) (for simplicity, we will denote $\boldsymbol{\lambda}^{n}$ by $\boldsymbol{\lambda}$ and $\mathbf{p}^{n}$ by $\mathbf{p}$).

(1) If $(i,j)$ is a $\circ$-node, the corresponding discretization of $\mathbf{p}$ and $c$ is given as follows:
\[
(4.7)\qquad p^{(1)}_1(i,j)=p_1(i,j);\qquad p^{(1)}_2(i,j)=\mathcal{A}^{\circ}_{i,j}(p_2),
\]
\[
(4.8)\qquad
c^{(1)}(i,j)=\tau\bigl[a+b|\partial_1\lambda_1(i,j)+\partial_2\lambda_2(i,j)|^2\bigr]
=\tau\left[a+b\left|\frac{\lambda_1(i+1,j)-\lambda_1(i-1,j)}{2h}+\frac{\lambda_2(i+1,j)+\lambda_2(i,j)}{2h}-\frac{\lambda_2(i,j-1)+\lambda_2(i+1,j-1)}{2h}\right|^2\right].
\]

(2) If $(i,j)$ is a $\square$-node, the corresponding discretization of $\mathbf{p}$ and $c$ is given as follows:
\[
(4.9)\qquad p^{(2)}_1(i,j)=\mathcal{A}^{\square}_{i,j}(p_1);\qquad p^{(2)}_2(i,j)=p_2(i,j),
\]
\[
(4.10)\qquad
c^{(2)}(i,j)=\tau\bigl[a+b|\partial_1\lambda_1(i,j)+\partial_2\lambda_2(i,j)|^2\bigr]
=\tau\left[a+b\left|\frac{\lambda_1(i,j)+\lambda_1(i,j+1)}{2h}-\frac{\lambda_1(i-1,j)+\lambda_1(i-1,j+1)}{2h}+\frac{\lambda_2(i,j+1)-\lambda_2(i,j-1)}{2h}\right|^2\right].
\]

Finally,
\[
(4.11)\qquad p^{n+1/3}_{\alpha}(i,j)=\max\left\{0,\ 1-\frac{c^{(\alpha)}(i,j)}{\sqrt{|p^{(\alpha)}_1(i,j)|^2+|p^{(\alpha)}_2(i,j)|^2}}\right\}p^{(\alpha)}_{\alpha}(i,j),\qquad \alpha\in\{1,2\}.
\]