
Applying smoothing technique and semi-proximal ADMM for image deblurring

Caiying Wu¹
School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China

Xiaojuan Chen²
School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China

Qiyu Jin³
School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China

Jein-Shan Chen⁴
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Abstract. We present a new approach that combines a smoothing technique with the semi-proximal alternating direction method of multipliers for image deblurring. More specifically, in light of a nondifferentiable model, which is a hybrid of the total variation and Tikhonov regularization models, we consider a smoothing approximation to overcome the disadvantage of nonsmoothness. We employ four smoothing functions to approximate the hybrid model and accordingly build a new model, which is then solved by the semi-proximal alternating direction method of multipliers. The algorithm is shown to be globally convergent. Numerical experiments and comparisons affirm that our method is an efficient approach for image deblurring.

¹E-mail: wucaiyingyun@163.com. The research is supported by the Natural Science Foundation of Inner Mongolia Autonomous Region (2018MS01016).

²E-mail: chenxiaojuan05@163.com. The research is supported by the Natural Science Foundation of Inner Mongolia Autonomous Region (2018MS01016).

³E-mail: qyjin2015@aliyun.com. The research is supported by the National Natural Science Foundation of China (12061052) and the Natural Science Fund of Inner Mongolia Autonomous Region (2020MS01002).

⁴Corresponding author. E-mail: jschen@math.ntnu.edu.tw. The research is supported by the Ministry of Science and Technology, Taiwan.


Keywords. TV regularization, Image restoration, SP-ADMM, Smoothing function

1 Introduction

In this paper, the target problem is image deblurring, which has wide applications in image processing tasks such as image segmentation, edge detection, and pattern recognition. A tremendous number of articles on this topic can be found in the literature, so we do not repeat its importance and practical applications here. Instead, we look into its mathematical model directly and convey our new idea for tackling this problem. The image deblurring model is described as follows:

$$f = A \otimes u + \varepsilon, \tag{1}$$

where u ∈ ℝ^{m×n} is the unknown clear image, f ∈ ℝ^{m×n} is the observed degraded image, ε ∈ ℝ^{m×n} is Gaussian white noise, A is the blurring operator, and A ⊗ u is the convolution of A with u, meaning

$$[A \otimes u](i,j) = \sum_{(s,t)\in\Omega} A\big[(i,j)-(s,t)\big]\, u(s,t), \qquad \Omega = \{1,2,\dots,m\}\times\{1,2,\dots,n\}.$$

For each 2D image u, u(i,j) denotes the image value at pixel (i,j) ∈ Ω.
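To make the degradation model concrete, here is a minimal numpy sketch of (1). It assumes periodic boundary conditions so that the convolution can be evaluated with the FFT; gaussian_kernel and blur are illustrative helper names, not code from the paper.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=5.0):
    """Gaussian blur kernel, e.g. GB(9, 5) in the paper's notation."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(u, kernel):
    """Compute A (x) u via the FFT, assuming periodic boundaries."""
    m, n = u.shape
    K = np.zeros((m, n))
    s = kernel.shape[0]
    K[:s, :s] = kernel
    # center the kernel at the origin before transforming
    K = np.roll(K, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(K) * np.fft.fft2(u)))

# degraded image f = A (x) u + eps with Gaussian noise of std sigma_eps = 5
rng = np.random.default_rng(0)
u_true = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in for a test image
f = blur(u_true, gaussian_kernel()) + 5.0 * rng.standard_normal((64, 64))
```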

Restoring the unknown image u from the degraded image f is an ill-posed problem. In order to deal with the image deblurring problem (1), many researchers consider the associated regularized optimization problem [20]:

$$\min_u \ \frac{\mu}{2}\|A \otimes u - f\|_2^2 + \|\nabla u\|_1, \tag{2}$$

where ∇ denotes the gradient operator, ∇u = [∇ₓu, ∇ᵧu] ∈ ℝ^{2m×n} is the gradient in the discrete setting with ∇u(i,j) = (∇ₓu(i,j), ∇ᵧu(i,j)), and ∇ₓu, ∇ᵧu denote the horizontal and vertical first-order differences with Dirichlet boundary condition, respectively, defined as

$$\nabla_x u(i,j) = \begin{cases} u(i+1,j) - u(i,j) & \text{if } i < m, \\ 0 & \text{if } i = m, \end{cases} \qquad \nabla_y u(i,j) = \begin{cases} u(i,j+1) - u(i,j) & \text{if } j < n, \\ 0 & \text{if } j = n. \end{cases}$$

Here ‖∇u‖₁ denotes the ℓ₁-norm of ∇u, that is,

$$\|\nabla u\|_1 = \sum_{(i,j)\in\Omega} |\nabla u(i,j)|, \qquad \text{where } |\nabla u(i,j)| = \sqrt{(\nabla_x u(i,j))^2 + (\nabla_y u(i,j))^2}. \tag{3}$$
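As an illustration, the forward differences and the isotropic TV value can be computed as follows. This is a minimal sketch under the zero boundary convention above, with grad and tv_norm as hypothetical helper names.

```python
import numpy as np

def grad(u):
    """Forward differences nabla_x, nabla_y with the zero convention
    on the last row/column, as defined above."""
    gx = np.zeros_like(u, dtype=float)
    gy = np.zeros_like(u, dtype=float)
    gx[:-1, :] = u[1:, :] - u[:-1, :]    # nabla_x u(i,j), zero at i = m
    gy[:, :-1] = u[:, 1:] - u[:, :-1]    # nabla_y u(i,j), zero at j = n
    return gx, gy

def tv_norm(u):
    """Isotropic TV value ||nabla u||_1 from (3)."""
    gx, gy = grad(u)
    return float(np.sum(np.sqrt(gx**2 + gy**2)))
```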

The above regularized minimization model (2) is called the total variation (TV) model for image recovery. This model is powerful for preserving sharp edges, and plenty of studies and variants based on it have been investigated; see [2–4, 8, 14, 16, 19] and the references therein.

However, image deblurring with the TV model often leads to staircase effects. To avoid this phenomenon, one popular approach is to combine the TV regularization term with more effective models. In [9, 21], by mixing TV and Tikhonov regularization terms, the authors presented a hybrid model, and their experiments demonstrated that it can effectively reduce noise. In [10, 15, 18], other hybrid regularization models were investigated that combine the TV model with the LLT model (originally proposed by Lysaker, Lundervold and Tai in [15]). In [1, 12], the authors employed the TV model together with the harmonic model for image recovery. Among all these hybrid models, we focus on the combination of TV and Tikhonov regularization, described by

$$\min_u \ \frac{\mu}{2}\|A \otimes u - f\|_2^2 + \|\nabla u\|_1 + \frac{1}{2}\|\nabla u\|_2^2, \tag{4}$$

where µ ∈ ℝ₊ is a parameter.

Although the aforementioned hybrid models based on total variation and Tikhonov regularization are widely used in image recovery, they share one drawback: the objective function is nondifferentiable. There are many ways to overcome nonsmoothness; one of them is smoothing approximation.

In this paper, we employ four smoothing functions, studied in [17, 25, 26], to approximate the TV part of model (4). Accordingly, we reformulate the image deblurring problem as a smooth convex optimization problem, and then apply the semi-proximal alternating direction method of multipliers (SP-ADMM) to solve the smoothing model.

In view of the smoothing feature, we obtain analytical solutions of the subproblems in SP-ADMM without using the soft-thresholding operator; in contrast, in the image processing literature, solving a model with a TV part usually requires it. Our proposed algorithm is shown to be globally convergent. Moreover, we compare the proposed algorithm with several denoising and deblurring algorithms, namely TV [20], DCA [13], TRL2 [24], SOCF [11], and BM3D [6]. Numerical simulations demonstrate that our model effectively eliminates noise and blur at the same time, and that it achieves higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) than the other methods. In summary, our new approach provides a promising direction in this field.


2 The smoothing hybrid regularization model

Six smoothing functions approximating the absolute value function were recently studied in [17]. In particular, they were employed for signal reconstruction in [25, 26], where they showed promising performance and effectiveness in real applications. Inspired by this, we adopt these smoothing functions to approximate the TV part in model (4). Since our algorithm needs to exploit and calculate analytic solutions of the subproblems, only four of them can be used. We present them below:

$$\varphi_1(\mu_1, t) = \begin{cases} t & \text{if } t \ge \frac{\mu_1}{2}, \\[2pt] \frac{t^2}{\mu_1} + \frac{\mu_1}{4} & \text{if } -\frac{\mu_1}{2} < t < \frac{\mu_1}{2}, \\[2pt] -t & \text{if } t \le -\frac{\mu_1}{2}, \end{cases} \qquad \varphi_2(\mu_1, t) = \sqrt{4\mu_1^2 + t^2},$$

$$\varphi_3(\mu_1, t) = \begin{cases} \frac{t^2}{2\mu_1} & \text{if } |t| \le \mu_1, \\[2pt] |t| - \frac{\mu_1}{2} & \text{if } |t| > \mu_1, \end{cases} \qquad \varphi_4(\mu_1, t) = \begin{cases} t & \text{if } t > \mu_1, \\[2pt] -\frac{t^4}{8\mu_1^3} + \frac{3t^2}{4\mu_1} + \frac{3\mu_1}{8} & \text{if } -\mu_1 \le t \le \mu_1, \\[2pt] -t & \text{if } t < -\mu_1, \end{cases}$$

Figure 1: Approximation behavior of the four smoothing functions when µ₁ = 1.0e−5.

where µ₁ > 0 is a smoothing parameter. With these smoothing functions, the hybrid model (4) can be replaced by

$$\min_u \ \frac{\mu}{2}\|A \otimes u - f\|_2^2 + \sum_{(i,j)\in\Omega} \varphi_l(\mu_1, |\nabla u(i,j)|) + \frac{1}{2}\|\nabla u\|_2^2, \tag{5}$$

where l = 1, 2, 3, 4. More specifically, each ϕ_l is a smoothing approximation of |∇u(i,j)|, i.e.,

ϕ_l(µ₁, |∇u(i,j)|) ≈ |∇u(i,j)|.

Figure 1 depicts the graphs of ϕ_l for l = 1, 2, 3, 4. With this, we apply a version of the semi-proximal alternating direction method of multipliers (SP-ADMM), presented in the next section, to solve the smoothing model (5).
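To illustrate, the following is a minimal numpy sketch of the four smoothing functions as written above; the final check confirms that each ϕ_l(µ₁, t) tends to |t| as µ₁ → 0.

```python
import numpy as np

def phi1(mu1, t):
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) >= mu1 / 2, np.abs(t), t**2 / mu1 + mu1 / 4)

def phi2(mu1, t):
    return np.sqrt(4 * mu1**2 + np.asarray(t, dtype=float)**2)

def phi3(mu1, t):
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= mu1, t**2 / (2 * mu1), np.abs(t) - mu1 / 2)

def phi4(mu1, t):
    t = np.asarray(t, dtype=float)
    inner = -t**4 / (8 * mu1**3) + 3 * t**2 / (4 * mu1) + 3 * mu1 / 8
    return np.where(np.abs(t) <= mu1, inner, np.abs(t))

# each phi_l(mu1, t) -> |t| as mu1 -> 0
t = np.linspace(-1.0, 1.0, 5)
for phi in (phi1, phi2, phi3, phi4):
    assert np.allclose(phi(1e-8, t), np.abs(t), atol=1e-6)
```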

3 Algorithm and Convergence Analysis

This section is devoted to the detailed description and implementation of our algorithmic idea and its global convergence. In order to apply the semi-proximal alternating direction method of multipliers (SP-ADMM), we further recast the smoothing model (5) in an equivalent form:

$$\min_{u,k}\ \frac{\mu}{2}\|A \otimes u - f\|_2^2 + \sum_{(i,j)\in\Omega} \varphi_l(\mu_1, |k(i,j)|) + \frac{1}{2}\|k\|_2^2 \quad \text{s.t.}\quad \nabla u - k = 0. \tag{6}$$

The augmented Lagrangian function of problem (6) is expressed as

$$H(u,k,\lambda) = \frac{\mu}{2}\|A \otimes u - f\|_2^2 + \sum_{(i,j)\in\Omega} \varphi_l(\mu_1, |k(i,j)|) + \frac{1}{2}\|k\|_2^2 + \langle \lambda, \nabla u - k \rangle + \frac{\beta}{2}\|\nabla u - k\|_2^2, \tag{7}$$

where λ = (λ₁, λ₂) ∈ ℝ^{2m×n} is the Lagrange multiplier matrix with λ(i,j) = (λ₁(i,j), λ₂(i,j)), and β > 0 is a parameter.
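For concreteness, a sketch of evaluating H in (7) follows. Here A_op and phi are passed in as callables (phi can be one of the phi_l sketches above), and the (2, m, n) stacking of k and λ is an assumption of this sketch, not the paper's data layout.

```python
import numpy as np

def aug_lagrangian(u, k, lam, f, A_op, mu, mu1, beta, phi):
    """Evaluate the augmented Lagrangian H(u, k, lambda) in (7).
    A_op: callable implementing the blur A (x) u;
    k, lam: arrays of shape (2, m, n) stacking x/y components."""
    gx = np.zeros_like(u, dtype=float)
    gy = np.zeros_like(u, dtype=float)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    r = np.stack([gx, gy]) - k                     # grad u - k
    data = 0.5 * mu * np.sum((A_op(u) - f)**2)     # (mu/2)||A(x)u - f||^2
    smooth = np.sum(phi(mu1, np.sqrt(np.sum(k**2, axis=0))))
    return (data + smooth + 0.5 * np.sum(k**2)
            + np.sum(lam * r) + 0.5 * beta * np.sum(r**2))
```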

Accordingly, the SP-ADMM iterative scheme for solving (7) reads

$$\begin{aligned} u^n &= \arg\min_u\ H(u, k^{n-1}, \lambda^{n-1}) + \frac{1}{2}\|u - u^{n-1}\|_{S_1}^2, \\ k^n &= \arg\min_k\ H(u^n, k, \lambda^{n-1}) + \frac{1}{2}\|k - k^{n-1}\|_{S_2}^2, \\ \lambda^n &= \lambda^{n-1} + \eta\beta(\nabla u^n - k^n), \end{aligned}$$

where η ∈ (0, (1+√5)/2). The matrix norm is defined by ‖x‖_S := √⟨x, Sx⟩ for any x ∈ ℝ^{m×n} and S ∈ ℝ^{m×m}. In our implementation, we set S₁ = ρ₁I and S₂ = ρ₂I, where ρ₁ > 0, ρ₂ > 0 and I is the identity matrix.

Algorithm 1

Step 0. Input a blurred noisy image f, a blurring operator A, the standard deviation σ_ε and the Gaussian kernel. Set parameters µ, µ₁, β, η, S₁, S₂, tol, MaxIter.

Step 1. Use the BM3D technique as in [5] to preprocess the blurred image f.

Step 2. Initialize u⁰ = 0, k⁰ = 0, λ⁰ = 0.

Step 3. For n = 1 : MaxIter, iterate

$$u^n = \arg\min_u\ H(u, k^{n-1}, \lambda^{n-1}) + \frac{1}{2}\|u - u^{n-1}\|_{S_1}^2, \tag{8}$$

$$k^n = \arg\min_k\ H(u^n, k, \lambda^{n-1}) + \frac{1}{2}\|k - k^{n-1}\|_{S_2}^2, \tag{9}$$

$$\lambda^n = \lambda^{n-1} + \eta\beta(\nabla u^n - k^n). \tag{10}$$

Step 4. If max{‖uⁿ − uⁿ⁻¹‖₂/‖f‖₂, ‖kⁿ − ∇uⁿ‖₂/‖∇f‖₂} ≤ tol, stop.

Step 5. Output uⁿ.

To proceed, we elaborate on how to solve the subproblems (8) and (9) in Algorithm 1. From subproblem (8), we know

$$\mu A^T(A \otimes u - f) + \nabla^T\lambda^{n-1} - \beta\nabla^T k^{n-1} + \beta\Delta u + \rho_1(u - u^{n-1}) = 0,$$

where ∆ = ∇ᵀ∇. In light of the convolution theorem of Fourier transforms [22], the solution of subproblem (8) can be written as

$$u = \mathcal{F}^{-1}\!\left(\frac{\mu\,\mathcal{F}(A^T)\odot\mathcal{F}(f) + \mathcal{F}\big(\nabla^T(\beta k^{n-1} - \lambda^{n-1})\big) + \mathcal{F}(\rho_1 u^{n-1})}{\mu\,\mathcal{F}(A^T)\odot\mathcal{F}(A) + \mathcal{F}(\beta\Delta + \rho_1 I)}\right), \tag{11}$$

where ⊙ denotes componentwise multiplication and the division is componentwise as well.
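As a sketch of how (11) can be evaluated, the code below assumes periodic boundary conditions, under which the blur and the difference operators are all diagonalized by the 2-D FFT; update_u, wx and wy are illustrative names, and the paper's implementation may treat boundaries differently.

```python
import numpy as np

def update_u(f, fK, u_prev, k_prev, lam_prev, mu, beta, rho1):
    """One u-update in the spirit of (11), assuming periodic boundaries.
    fK: 2-D FFT of the (zero-padded, centered) blur kernel, so that
    F(A^T) = conj(fK) for a real kernel; k_prev, lam_prev: (x, y) pairs."""
    m, n = f.shape
    # Fourier symbols of the forward differences along each axis
    wx = np.exp(2j * np.pi * np.fft.fftfreq(m))[:, None] - 1.0
    wy = np.exp(2j * np.pi * np.fft.fftfreq(n))[None, :] - 1.0
    kx, ky = k_prev
    lx, ly = lam_prev
    # numerator: mu F(A^T) F(f) + F(grad^T(beta k - lambda)) + F(rho1 u_prev)
    num = (mu * np.conj(fK) * np.fft.fft2(f)
           + np.conj(wx) * np.fft.fft2(beta * kx - lx)
           + np.conj(wy) * np.fft.fft2(beta * ky - ly)
           + rho1 * np.fft.fft2(u_prev))
    # denominator: mu |F(A)|^2 + beta * symbol(Delta) + rho1
    den = mu * np.abs(fK)**2 + beta * (np.abs(wx)**2 + np.abs(wy)**2) + rho1
    return np.real(np.fft.ifft2(num / den))
```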

As for subproblem (9), we notice that it is equivalent to

$$k^n = \arg\min_k\ \sum_{(i,j)\in\Omega}\varphi_l(\mu_1, |k(i,j)|) + \frac{1}{2}\|k\|_2^2 + \langle\lambda^{n-1}, \nabla u^n - k\rangle + \frac{\beta}{2}\|\nabla u^n - k\|_2^2 + \frac{1}{2}\|k - k^{n-1}\|_{S_2}^2 \tag{12}$$

for each l = 1, 2, 3, 4. It is easy to verify that (12) is indeed the same as

$$k^n(i,j) = \arg\min_{k(i,j)}\ \varphi_l(\mu_1, |k(i,j)|) + \frac{1}{2}|k(i,j)|^2 + \langle\lambda^{n-1}(i,j), \nabla u^n(i,j) - k(i,j)\rangle + \frac{\beta}{2}|\nabla u^n(i,j) - k(i,j)|^2 + \frac{\rho_2}{2}|k(i,j) - k^{n-1}(i,j)|^2. \tag{13}$$

Since each ϕ_l for l = 1, 2, 3, 4 is differentiable, the optimality condition of (13) is characterized by

$$\nabla\varphi_l(\mu_1, |k^n(i,j)|) - \lambda^{n-1}(i,j) - \beta\nabla u^n(i,j) - \rho_2 k^{n-1}(i,j) + (1+\beta+\rho_2)k^n(i,j) = 0. \tag{14}$$

(i) For l = 1, when |kⁿ(i,j)| ≥ µ₁/2, we have

$$k^n(i,j) = \frac{|k^n(i,j)|}{1 + (1+\beta+\rho_2)|k^n(i,j)|}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big).$$

This relation is nonlinear; using the information from the previous step of the algorithm, we approximate it as

$$k^n(i,j) = \frac{|k^{n-1}(i,j)|}{1 + (1+\beta+\rho_2)|k^{n-1}(i,j)|}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big). \tag{15}$$

Likewise, when |kⁿ(i,j)| < µ₁/2, we obtain

$$k^n(i,j) = \frac{\mu_1}{2 + (1+\beta+\rho_2)\mu_1}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big). \tag{16}$$

(ii) For l = 2, in light of (14), we have

$$\left(\frac{1}{\sqrt{|k^n(i,j)|^2 + 4\mu_1^2}} + 1 + \beta + \rho_2\right)k^n(i,j) - \lambda^{n-1}(i,j) - \beta\nabla u^n(i,j) - \rho_2 k^{n-1}(i,j) = 0. \tag{17}$$

As in (15), problem (17) is approximately solved by

$$k^n(i,j) = \frac{P}{1 + (1+\beta+\rho_2)P}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big), \qquad P = \sqrt{|k^{n-1}(i,j)|^2 + 4\mu_1^2}. \tag{18}$$

(iii) For l = 3, when |kⁿ(i,j)| > µ₁, applying the same approximation technique yields

$$k^n(i,j) = \frac{|k^{n-1}(i,j)|}{1 + (1+\beta+\rho_2)|k^{n-1}(i,j)|}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big). \tag{19}$$

In addition, when |kⁿ(i,j)| ≤ µ₁, it leads to

$$k^n(i,j) = \frac{\mu_1}{1 + (1+\beta+\rho_2)\mu_1}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big). \tag{20}$$

(iv) For l = 4, when |kⁿ(i,j)| > µ₁, the approximate solution is

$$k^n(i,j) = \frac{|k^{n-1}(i,j)|}{1 + (1+\beta+\rho_2)|k^{n-1}(i,j)|}\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big), \tag{21}$$

while, when |kⁿ(i,j)| ≤ µ₁, the approximate solution is

$$k^n(i,j) = \frac{2\mu_1^3\Big(\rho_2 k^{n-1}(i,j) + \beta\nabla u^n(i,j) + \lambda^{n-1}(i,j)\Big)}{\big(3 + 2(1+\beta+\rho_2)\mu_1\big)\mu_1^2 - |k^{n-1}(i,j)|^2}. \tag{22}$$

From all the above discussions, we obtain an explicit expression for each kⁿ(i,j), and hence for kⁿ, which solves subproblem (9).
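For instance, the update (18) for ϕ₂, which needs no case distinction, can be sketched as follows; update_k_phi2 is a hypothetical helper acting on stacked x/y components.

```python
import numpy as np

def update_k_phi2(grad_u, k_prev, lam_prev, mu1, beta, rho2):
    """Approximate k-update (18) for phi_2; grad_u, k_prev, lam_prev are
    arrays of shape (2, m, n) stacking the x- and y-components."""
    rhs = rho2 * k_prev + beta * grad_u + lam_prev
    mag2 = np.sum(k_prev**2, axis=0)           # |k^{n-1}(i,j)|^2
    P = np.sqrt(mag2 + 4.0 * mu1**2)           # P in (18)
    return (P / (1.0 + (1.0 + beta + rho2) * P)) * rhs
```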

With the four smoothing functions and the above results, we are ready to present the global convergence of our algorithm. The proof follows ideas and analysis techniques similar to those used in [7, 11], albeit somewhat tedious.

Theorem 3.1. For any µ₁ > 0, consider Algorithm 1 and let {(uⁿ, kⁿ, λⁿ)} be the sequence it generates. Then we have

$$\lim_{n\to\infty}\nabla H(u^n, k^n, \lambda^n) = 0.$$

Proof. For convenience, we denote

$$F(u) := \frac{\mu}{2}\|A \otimes u - f\|_2^2 \quad\text{and}\quad G(k) := \sum_{(i,j)\in\Omega}\varphi_l(\mu_1, |k(i,j)|) + \frac{1}{2}\|k\|_2^2 \quad\text{for } l = 1, 2, 3, 4.$$

We see that (ū, k̄) is an optimal solution of problem (6) if and only if there exists a Lagrange multiplier λ̄ such that

$$\begin{cases} 0 = \nabla F(\bar{u}) + \nabla^T\bar{\lambda}, \\ 0 = \nabla G(\bar{k}) - \bar{\lambda}, \\ 0 = \nabla\bar{u} - \bar{k}, \end{cases}$$

where ∇F(u) = µAᵀ(A ⊗ u − f). From the KKT conditions, we know that if (u, k, λ) is a KKT point of problem (6), then

$$\begin{cases} 0 = \nabla F(u) + \nabla^T\lambda, \\ 0 = \nabla G(k) - \lambda, \\ 0 = \nabla u - k. \end{cases} \tag{23}$$

From the definition of ϕ_l for l = 1, 2, 3, 4, we know that each ϕ_l is convex. Since ‖k‖₂² is strongly convex, the function G is strongly convex. It is also clear that the function F is strongly convex. Thus, the gradients ∇F and ∇G are strongly monotone.

These indicate that there exist positive constants σ₁ and σ₂ such that for all u, û ∈ ℝ^{m×n}, with ω = ∇F(u) and ω̂ = ∇F(û),

$$\langle\omega - \hat{\omega}, u - \hat{u}\rangle \ge \sigma_1\|u - \hat{u}\|_2^2 = \|u - \hat{u}\|_{\Sigma_F}^2, \qquad \Sigma_F = \sigma_1 I; \tag{24}$$

as well as for all k, k̂ ∈ ℝ^{2m×n}, with x = ∇G(k) and x̂ = ∇G(k̂),

$$\langle x - \hat{x}, k - \hat{k}\rangle \ge \sigma_2\|k - \hat{k}\|_2^2 = \|k - \hat{k}\|_{\Sigma_G}^2, \qquad \Sigma_G = \sigma_2 I. \tag{25}$$

On the other hand, from Algorithm 1 we have

$$\begin{cases} 0 = \nabla F(u^{n+1}) + \nabla^T\big(\beta(\nabla u^{n+1} - k^n) + \lambda^n\big) + S_1(u^{n+1} - u^n), \\ 0 = \nabla G(k^{n+1}) - \big(\beta(\nabla u^{n+1} - k^{n+1}) + \lambda^n\big) + S_2(k^{n+1} - k^n), \\ 0 = (\nabla u^{n+1} - k^{n+1}) - (\beta\eta)^{-1}(\lambda^{n+1} - \lambda^n). \end{cases} \tag{26}$$

For notational simplicity, let 𝓡(u, k) := ∇u − k, and denote uₑⁿ := uⁿ − ū, with analogous notation for kₑⁿ and λₑⁿ. Then we have

$$w^{n+1} = \lambda^{n+1} + (1-\eta)\beta\,\mathcal{R}(u^{n+1}, k^{n+1}) + \beta(k^{n+1} - k^n), \qquad x^{n+1} = \lambda^{n+1} + (1-\eta)\beta\,\mathcal{R}(u^{n+1}, k^{n+1}),$$

which imply

$$\mathcal{R}(u_e^{n+1}, k_e^{n+1}) = \nabla u_e^{n+1} - k_e^{n+1} = \mathcal{R}(u^{n+1}, k^{n+1}) = (\beta\eta)^{-1}(\lambda_e^{n+1} - \lambda_e^n) = (\beta\eta)^{-1}(\lambda^{n+1} - \lambda^n).$$

Using (23), (24), (25) and (26) all together yields

$$\|u_e^{n+1}\|_{\Sigma_F}^2 = \|u^{n+1} - \bar{u}\|_{\Sigma_F}^2 \le \langle\omega^{n+1} - \bar{\omega}, u^{n+1} - \bar{u}\rangle = \big\langle\nabla^T(\bar{\lambda} - \lambda^n) + \beta(\nabla^T k^n - \Delta u^{n+1}) - S_1(u^{n+1} - u^n),\ u_e^{n+1}\big\rangle, \tag{27}$$

$$\|k_e^{n+1}\|_{\Sigma_G}^2 = \|k^{n+1} - \bar{k}\|_{\Sigma_G}^2 \le \langle x^{n+1} - \bar{x}, k^{n+1} - \bar{k}\rangle = \big\langle(\lambda^n - \bar{\lambda}) + \beta(\nabla u^{n+1} - k^{n+1}) - S_2(k^{n+1} - k^n),\ k_e^{n+1}\big\rangle. \tag{28}$$

By calculation, it follows that

$$-\nabla^T w^{n+1} = -\nabla^T\big(\lambda^{n+1} + (1-\eta)\beta\,\mathcal{R}(u^{n+1},k^{n+1}) + \beta(k^{n+1}-k^n)\big) = -\nabla^T\lambda^n - \beta\Delta u^{n+1} + \beta\nabla^T k^n,$$

$$x^{n+1} = \lambda^{n+1} + (1-\eta)\beta\,\mathcal{R}(u^{n+1},k^{n+1}) = \lambda^n + \beta(\nabla u^{n+1} - k^{n+1}).$$

Hence, inequality (27) is converted to (29) and inequality (28) becomes (30), that is,

$$\|u_e^{n+1}\|_{\Sigma_F}^2 \le \big\langle -\nabla^T w^{n+1} - S_1(u^{n+1}-u^n) + \nabla^T\bar{\lambda},\ u_e^{n+1}\big\rangle, \tag{29}$$

$$\|k_e^{n+1}\|_{\Sigma_G}^2 \le \big\langle x^{n+1} - S_2(k^{n+1}-k^n) - \bar{\lambda},\ k_e^{n+1}\big\rangle. \tag{30}$$

Then, according to (29) and (30), we achieve

$$\begin{aligned} \|u_e^{n+1}\|_{\Sigma_F}^2 + \|k_e^{n+1}\|_{\Sigma_G}^2 \le\ & (\beta\eta)^{-1}\langle\lambda_e^{n+1}, \lambda_e^n - \lambda_e^{n+1}\rangle - \beta\langle k^{n+1}-k^n, \mathcal{R}(u^{n+1},k^{n+1})\rangle \\ & - \beta\langle k^{n+1}-k^n, k_e^{n+1}\rangle - \beta(1-\eta)\|\mathcal{R}(u^{n+1},k^{n+1})\|_2^2 \\ & - \langle S_1(u^{n+1}-u^n), u_e^{n+1}\rangle - \langle S_2(k^{n+1}-k^n), k_e^{n+1}\rangle. \end{aligned} \tag{31}$$

Next, we estimate the term β⟨kⁿ⁺¹ − kⁿ, 𝓡(uⁿ⁺¹, kⁿ⁺¹)⟩. First, it follows from equation (26) that

$$x^{n+1} - S_2(k^{n+1}-k^n) = \nabla G(k^{n+1}), \qquad x^n - S_2(k^n - k^{n-1}) = \nabla G(k^n).$$

In addition, by the strong monotonicity of ∇G(·), we have

$$\begin{aligned} \langle k^{n+1}-k^n, x^{n+1}-x^n\rangle &= \big\langle k^{n+1}-k^n,\ x^{n+1} - S_2(k^{n+1}-k^n) - \big(x^n - S_2(k^n-k^{n-1})\big)\big\rangle \\ &\quad + \langle k^{n+1}-k^n, S_2(k^{n+1}-k^n)\rangle - \langle k^{n+1}-k^n, S_2(k^n-k^{n-1})\rangle \\ &\ge \|k^{n+1}-k^n\|_{\Sigma_G}^2 + \|k^{n+1}-k^n\|_{S_2}^2 - \langle k^{n+1}-k^n, S_2(k^n-k^{n-1})\rangle \\ &\ge \|k^{n+1}-k^n\|_{S_2}^2 - \langle k^{n+1}-k^n, S_2(k^n-k^{n-1})\rangle. \end{aligned}$$

Moreover, by denoting αⁿ⁺¹ := −(1−η)β⟨kⁿ⁺¹ − kⁿ, 𝓡(uⁿ, kⁿ)⟩, it further implies

$$\begin{aligned} -\beta\langle k^{n+1}-k^n, \mathcal{R}(u^{n+1},k^{n+1})\rangle &= -(1-\eta)\beta\langle k^{n+1}-k^n, \mathcal{R}(u^{n+1},k^{n+1})\rangle \\ &\quad - \big\langle k^{n+1}-k^n,\ x^{n+1}-x^n - (1-\eta)\beta\,\mathcal{R}(u^{n+1},k^{n+1}) + (1-\eta)\beta\,\mathcal{R}(u^n,k^n)\big\rangle \\ &= \alpha^{n+1} + \langle k^{n+1}-k^n, x^n - x^{n+1}\rangle \\ &\le \alpha^{n+1} - \|k^{n+1}-k^n\|_{S_2}^2 + \langle k^{n+1}-k^n, S_2(k^n-k^{n-1})\rangle \\ &\le \alpha^{n+1} - \|k^{n+1}-k^n\|_{S_2}^2 + \frac{1}{2}\|k^{n+1}-k^n\|_{S_2}^2 + \frac{1}{2}\|k^n-k^{n-1}\|_{S_2}^2 \\ &= \alpha^{n+1} - \frac{1}{2}\|k^{n+1}-k^n\|_{S_2}^2 + \frac{1}{2}\|k^n-k^{n-1}\|_{S_2}^2. \end{aligned} \tag{32}$$

Since λⁿ⁺¹ = λⁿ + βη𝓡(uⁿ⁺¹, kⁿ⁺¹), applying (31) and (32) indicates

$$\begin{aligned} 2\|u_e^{n+1}\|_{\Sigma_F}^2 + 2\|k_e^{n+1}\|_{\Sigma_G}^2 &\le 2(\beta\eta)^{-1}\langle\lambda_e^{n+1}, \lambda_e^n-\lambda_e^{n+1}\rangle - 2\beta\langle k^{n+1}-k^n, \mathcal{R}(u^{n+1},k^{n+1})\rangle \\ &\quad - 2\beta\langle k^{n+1}-k^n, k_e^{n+1}\rangle - 2\beta(1-\eta)\|\mathcal{R}(u^{n+1},k^{n+1})\|_2^2 \\ &\quad - 2\langle S_1(u^{n+1}-u^n), u_e^{n+1}\rangle - 2\langle S_2(k^{n+1}-k^n), k_e^{n+1}\rangle \\ &\le (\beta\eta)^{-1}\big(\|\lambda_e^n\|^2 - \|\lambda_e^{n+1}\|^2\big) - (2-\eta)\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2 \\ &\quad + 2\alpha^{n+1} - \|k^{n+1}-k^n\|_{S_2}^2 + \|k^n-k^{n-1}\|_{S_2}^2 \\ &\quad - \beta\|k^{n+1}-k^n\|^2 - \beta\|k_e^{n+1}\|^2 + \beta\|k_e^n\|^2 \\ &\quad - \|u^{n+1}-u^n\|_{S_1}^2 - \|u_e^{n+1}\|_{S_1}^2 + \|u_e^n\|_{S_1}^2 \\ &\quad - \|k^{n+1}-k^n\|_{S_2}^2 - \|k_e^{n+1}\|_{S_2}^2 + \|k_e^n\|_{S_2}^2. \end{aligned} \tag{33}$$

To proceed, we further define

$$\begin{cases} \delta^{n+1} = \min\{\eta,\ 1+\eta-\eta^2\}\,\beta\|k^{n+1}-k^n\|^2 + \|k^{n+1}-k^n\|_{S_2}^2, \\ t^{n+1} = \delta^{n+1} + \|u^{n+1}-u^n\|_{S_1}^2 + 2\|u^{n+1}-\bar{u}\|_{\Sigma_F}^2 + 2\|k^{n+1}-\bar{k}\|_{\Sigma_G}^2, \\ \psi^{n+1} = \theta(u^{n+1}, k^{n+1}, \lambda^{n+1}) + \|k^{n+1}-k^n\|_{S_2}^2, \\ \theta(u,k,\lambda) = (\beta\eta)^{-1}\|\lambda-\bar{\lambda}\|^2 + \|u-\bar{u}\|_{S_1}^2 + \|k-\bar{k}\|_{S_2}^2 + \beta\|k-\bar{k}\|^2, \end{cases} \tag{34}$$

and discuss two cases as below.

Case (1): η ∈ (0, 1]. It is obvious that 2⟨kⁿ⁺¹ − kⁿ, 𝓡(uⁿ, kⁿ)⟩ ≤ ‖kⁿ⁺¹ − kⁿ‖² + ‖𝓡(uⁿ, kⁿ)‖². From the definition of αⁿ⁺¹ and (33), we see

$$\psi^{n+1} + (1-\eta)\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2 - \big(\psi^n + (1-\eta)\beta\|\mathcal{R}(u^n,k^n)\|^2\big) + t^{n+1} + \beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2 \le 0. \tag{35}$$

Case (2): η ∈ (1, (1+√5)/2). Similarly, we obtain

$$\psi^{n+1} + (1-\eta^{-1})\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2 - \big(\psi^n + (1-\eta^{-1})\beta\|\mathcal{R}(u^n,k^n)\|^2\big) + t^{n+1} + \eta^{-1}(1+\eta-\eta^2)\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2 \le 0. \tag{36}$$

In other words, introducing the notation

$$g^{n+1} = \begin{cases} \psi^{n+1} + (1-\eta)\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2, & \eta \in (0, 1], \\ \psi^{n+1} + (1-\eta^{-1})\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2, & \eta \in \big(1, \tfrac{1+\sqrt{5}}{2}\big), \end{cases}$$

we conclude from (35) and (36) that the sequence {gⁿ} is bounded and monotonically decreasing, which guarantees a limit. Since ψⁿ > 0, the sequence {ψⁿ} is bounded. Let γ = 1 or γ = η⁻¹(1+η−η²). Again, applying (35) and (36) implies

$$0 \le t^{n+1} + \gamma\beta\|\mathcal{R}(u^{n+1},k^{n+1})\|^2 \le g^n - g^{n+1}.$$

This says that we must have

$$t^{n+1} \to 0, \quad n \to \infty, \tag{37}$$

$$\|\mathcal{R}(u^{n+1},k^{n+1})\| \to 0, \quad n \to \infty. \tag{38}$$

Considering the relationship λⁿ⁺¹ − λⁿ = ηβ𝓡(uⁿ⁺¹, kⁿ⁺¹), we have ‖λⁿ⁺¹ − λⁿ‖ → 0. In view of (37) and the boundedness of {ψⁿ}, one concludes that the sequences {‖λₑⁿ⁺¹‖}, {‖uₑⁿ⁺¹‖²_{S₁}}, {‖uₑⁿ⁺¹‖²_{Σ_F}}, {‖kₑⁿ⁺¹‖²_{Σ_G}}, {‖kₑⁿ⁺¹‖²_{S₂}} and {‖kₑⁿ⁺¹‖²} are all bounded. Then, by using the inequality

$$\|\nabla u_e^{n+1}\| \le \|\nabla u_e^{n+1} - k_e^{n+1}\| + \|k_e^{n+1}\| = \|\mathcal{R}(u^{n+1},k^{n+1})\| + \|k_e^{n+1}\|,$$

we deduce that {‖∇uₑⁿ⁺¹‖} is also bounded, and hence {‖uₑⁿ⁺¹‖_{∇ᵀ∇}} is bounded. Since

$$\|u_e^{n+1}\|^2 = \|u_e^{n+1}\|_{S_1+\Sigma_F+\nabla^T\nabla+I}^2 - \|u_e^{n+1}\|_{S_1+\Sigma_F+\nabla^T\nabla}^2,$$

and S₁ + Σ_F + ∇ᵀ∇ + I is positive definite, the sequence {‖uₑⁿ⁺¹‖} is bounded, and hence {‖uⁿ⁺¹‖} is bounded. In summary, the sequence {(uⁿ, kⁿ, λⁿ)} is bounded, which gives the existence of a subsequence converging to a cluster point:

$$\lim_{i\to\infty}(u^{n_i}, k^{n_i}, \lambda^{n_i}) = (u^*, k^*, \lambda^*).$$

Consequently, it follows from (34) and (37) that

$$\lim_{n\to\infty}\|k^{n+1}-k^n\| = 0, \qquad \lim_{n\to\infty}\|k^{n+1}-k^n\|_{S_2} = 0, \qquad \lim_{n\to\infty}\|u^{n+1}-u^n\|_{S_1} = 0. \tag{39}$$

Therefore, from ‖∇uⁿ⁺¹ − kⁿ‖ ≤ ‖∇uⁿ⁺¹ − kⁿ⁺¹‖ + ‖kⁿ⁺¹ − kⁿ‖, we have lim_{n→∞}‖∇uⁿ⁺¹ − kⁿ‖ = 0. Taking limits on both sides of equation (26) along the subsequence {(u^{n_i}, k^{n_i}, λ^{n_i})} and using the closedness of the subdifferential, there hold

$$\begin{cases} -\nabla^T\lambda^* = \nabla F(u^*), \\ \lambda^* = \nabla G(k^*), \\ 0 = \nabla u^* - k^*. \end{cases}$$

In other words, (u*, k*) is an optimal solution of (6) and λ* is the corresponding Lagrange multiplier.

Now, it remains to verify that lim_{n→∞}(uⁿ, kⁿ, λⁿ) = (u*, k*, λ*). Since (u*, k*, λ*) satisfies the above KKT system, we may replace (ū, k̄, λ̄) by (u*, k*, λ*) in the preceding analysis. From (35), (36), (37) and (38), we find g^{n_i} → 0 as i → ∞. In particular, (35) and (36) indicate that the sequence {gⁿ} has a limit, so gⁿ → 0 as n → ∞. Accordingly, ψⁿ → 0 as n → ∞. Furthermore, we know from the definition of ψⁿ that, as n → ∞,

$$\begin{aligned} \|\lambda^n - \lambda^*\| \to 0 &\implies \lambda^n \to \lambda^*, \\ \|k^n - k^*\| \to 0 &\implies k^n \to k^*, \\ \|u_e^{n+1}\|_{S_1}^2 &\to 0. \end{aligned} \tag{40}$$

Taking (37) and (38) into account, we conclude that

$$\|u_e^{n+1}\|_{\Sigma_F}^2 \to 0, \quad n \to \infty. \tag{41}$$

From (41), it is also guaranteed that lim_{n→∞}‖uₑⁿ‖ = 0, namely lim_{n→∞} uⁿ = u*. To sum up, when η ∈ (0, (1+√5)/2), we have shown that

$$\lim_{n\to\infty}(u^n, k^n, \lambda^n) = (u^*, k^*, \lambda^*).$$

The proof is complete. □

4 Numerical Experiments

In this section, we report the results of our simulations, demonstrating the applicability, efficiency and merits of our algorithm. All the experiments are performed under Windows 10 and MATLAB R2018a, running on a desktop with an Intel(R) Core(TM) i5-8250 CPU @ 1.60GHz.

Our tests consist of 16 images, including 11 gray images and 5 color images, as shown in Figure 2. When running Algorithm 1, we use three types of blurring functions, Gaussian blur (GB), motion blur (MB) and average blur (AB), with different levels of Gaussian noise. The notation GB(9, 5)/σ_ε = 5 denotes a Gaussian kernel of size 9 × 9 with free parameter σ_B = 5, together with Gaussian noise of standard deviation σ_ε = 5. Other blur and noise symbols have similar meanings.

We compare our algorithm with well-known algorithms in the literature: the classic TV [20], DCA with L₁−0.5L₂ [13], TRL2 [24], SOCF [11], and BM3D [6]. We adopt the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) to evaluate the quality of image restoration. The PSNR is defined by

$$\mathrm{PSNR}(\hat{u}) = 10\log_{10}\frac{255^2}{\mathrm{MSE}(\hat{u})}, \qquad \mathrm{MSE}(\hat{u}) = \frac{1}{m\times n}\sum_{(i,j)\in\Omega}\big(u(i,j) - \hat{u}(i,j)\big)^2,$$

where u is the original image and û is the restored image; SSIM is defined in [23].
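A minimal sketch of this PSNR computation for images with peak value 255:

```python
import numpy as np

def psnr(u, u_hat):
    """PSNR in dB following the definition above."""
    u = np.asarray(u, dtype=float)
    u_hat = np.asarray(u_hat, dtype=float)
    mse = np.mean((u - u_hat)**2)
    return 10.0 * np.log10(255.0**2 / mse)
```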


The parameters of our algorithm are summarized as follows:

µ₁ = 1e−10, µ ∈ (2000, 7000), β ∈ (0.2, 0.7), η = 1.1.

In order to clearly express the influence of the parameter µ on our algorithm, we plot the PSNR and SSIM values versus µ in Figure 3. In particular, we stop Algorithm 1 when

$$\sqrt{\frac{1}{m\times n}\sum_{(i,j)\in\Omega}\big(u(i,j) - \hat{u}(i,j)\big)^2} < 5\mathrm{e}{-5}$$

or the iteration number exceeds 500.
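A sketch of this stopping test follows, with should_stop as a hypothetical helper; here u and u_hat stand for the two images compared by the criterion above (for instance successive iterates), which is an assumption of this sketch.

```python
import numpy as np

def should_stop(u, u_hat, n_iter, tol=5e-5, max_iter=500):
    """Stop when the RMS difference falls below tol or the cap is hit."""
    diff = np.asarray(u, dtype=float) - np.asarray(u_hat, dtype=float)
    rms = np.sqrt(np.mean(diff**2))
    return rms < tol or n_iter >= max_iter
```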

The parameter settings for the other compared algorithms are given below.

• For TV [20], we set µ_TV ∈ {120, 150, 160, 170, 180, 190, 200} and λ_TV ∈ {10, 11, 12, 13, 14, 15, 16}.

• For DCA [13], we set λ_DCA = 1 and µ_DCA ∈ {160, 170, 180, 190, 200, 220}.

• For TRL2 [24], we set τ_TRL2 ∈ {0.04, 0.05, 0.06}, α_TRL2 ∈ {60, 80, 100, 120, 150, 180} and β_TRL2 ∈ {80, 100, 120, 150}.

• For SOCF [11], we set λ_SOCF ∈ {5, 7, 10, 11, 13, 15, 16, 18}, µ_SOCF ∈ {1e−8, 1e−10}, µ₁_SOCF = 0.2, µ₂_SOCF = 1e−5 and η_SOCF = 1.5.

• For BM3D [6], we set RI ∈ {4e−4, 4e−3, 1e−3} and RWI ∈ {5e−3, 3e−2}.

Experiment 1

We compare the performance of our Algorithm 1 with the other algorithms under different blur kernels and different Gaussian noise levels. A summary of the numerical results is presented in Tables 1, 2 and 3, respectively. From these, we see that Algorithm 1 achieves higher PSNR and SSIM when ϕ_l for l = 1, 2, 3, 4 is used, while among the four the performance with the function ϕ₂ is relatively poor.

Experiment 2

In this experiment, for a fixed blur kernel, we report the average PSNR and SSIM values of the different methods over all test images, under different Gaussian noise levels. The results are shown in Tables 4, 5 and 6. From these tables, we find that the average PSNR and SSIM values decrease as the noise level increases. We observe the same pattern: Algorithm 1 provides higher average values with ϕ_l for l = 1, 2, 3, 4, and the lowest of these is obtained with ϕ₂.

Experiment 3

In this experiment, we compare the algorithms in terms of CPU time. In our test, we take 5 gray images of size 512 × 512 and one color image of size 768 × 512. For each algorithm, we then report two numbers: the average time over the 5 gray images, and the time for the given color image; see Table 7. From Table 7, we observe that our algorithm takes more CPU time; this is caused by the fact that Algorithm 1 needs to evaluate the function values ϕ_l for l = 1, 2, 3, 4 at each step. Acceleration will be considered in future work.

Experiment 4

In this test, we present detailed information of the restored images with zoomed areas for better illustration. Figures 4 and 5 show that our method yields better image quality in terms of deblurring and noise removal.

Experiment 5

In this test, we observe and discuss the overall convergence behavior of our algorithm. First, we run experiments on noise level versus PSNR, comparing our algorithm with five other algorithms: TV, DCA, TRL2, SOCF and BM3D; we select the gray image Cameraman and the color image Duck under GB(9, 5), see Figure 6. Second, we run experiments on iteration number versus PSNR for our four functions ϕ_l, l = 1, 2, 3, 4, choosing the gray image Cameraman under AB(9, 9)/σ_ε = 3, see Figure 7.

According to the simulations and the numerical results in Tables 1–7, we summarize our numerical findings:

1. SP-ADMM with the functions ϕ₁, ϕ₃ and ϕ₄ provides higher PSNR and SSIM. On the whole, Algorithm 1 with the smoothing function ϕ₄ provides the best results, the functions ϕ₁ and ϕ₃ have similar effects, and the performance with the function ϕ₂ is relatively poor. Taking into account both PSNR and SSIM, two important indicators of image quality, we believe that our algorithm performs best among all algorithms considered, as it restores more images at higher PSNR and SSIM. In this respect, our algorithm has a major advantage.

2. Even though Algorithm 1 takes more CPU time to converge, our method still reconstructs images well as the noise increases.

3. While BM3D [6] is often the best of the other solvers considered in this paper, our algorithm has the highest average PSNR/SSIM values across different Gaussian noise levels. In this sense, our algorithm has good generalization ability.

4. The visual results of image deblurring show that the proposed method yields better image quality in terms of deblurring and noise removal.

5. Analyzing all the numerical results, we can be confident that the smoothing method handles the image reconstruction problem well. Usually, in the field of image reconstruction, solving the TV part requires the soft-thresholding operator; this paper thus provides new technical support for image reconstruction from a mathematical point of view.

Image | Degraded | TV [20] | DCA [13] | TRL2 [24] | SOCF [11] | BM3D [6] | ϕ1 | ϕ2 | ϕ3 | ϕ4
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Shepp-Logan | 18.96/0.581 | 22.75/0.927 | 23.34/0.895 | 23.85/0.662 | 24.52/0.792 | 24.56/0.846 | 26.39/0.927 | 23.57/0.672 | 26.50/0.937 | 26.10/0.940
Shape150 | 18.46/0.543 | 25.08/0.854 | 25.45/0.866 | 24.29/0.753 | 26.19/0.829 | 26.44/0.873 | 27.97/0.899 | 23.76/0.611 | 28.28/0.912 | 28.16/0.910
House | 24.11/0.558 | 27.93/0.784 | 28.09/0.797 | 28.24/0.726 | 29.09/0.759 | 29.90/0.783 | 30.20/0.795 | 28.75/0.712 | 30.32/0.816 | 30.28/0.815
Boat | 23.34/0.488 | 25.06/0.632 | 25.37/0.652 | 25.97/0.658 | 26.66/0.682 | 26.76/0.700 | 26.92/0.702 | 26.28/0.649 | 26.82/0.699 | 26.88/0.706
Pepper | 21.43/0.575 | 23.44/0.735 | 23.49/0.730 | 24.69/0.737 | 26.25/0.759 | 26.42/0.763 | 25.98/0.739 | 24.90/0.664 | 25.85/0.765 | 25.90/0.776
Cameraman | 20.87/0.500 | 23.08/0.716 | 23.44/0.731 | 24.16/0.682 | 24.35/0.710 | 24.74/0.737 | 24.83/0.752 | 24.05/0.660 | 24.88/0.758 | 24.87/0.760
Hill | 24.85/0.497 | 26.87/0.631 | 27.10/0.650 | 27.34/0.657 | 27.67/0.670 | 28.09/0.688 | 28.01/0.691 | 27.27/0.634 | 27.88/0.686 | 28.00/0.692
Montage | 20.19/0.584 | 22.36/0.812 | 23.14/0.817 | 24.35/0.760 | 24.91/0.796 | 25.24/0.828 | 25.48/0.807 | 24.02/0.695 | 25.30/0.841 | 25.52/0.849
Man | 24.37/0.524 | 25.79/0.645 | 26.32/0.680 | 26.78/0.681 | 27.02/0.688 | 27.37/0.718 | 27.37/0.713 | 26.68/0.651 | 27.24/0.710 | 27.42/0.718
Barbara | 22.36/0.499 | 23.08/0.608 | 23.11/0.618 | 23.31/0.603 | 23.65/0.626 | 23.98/0.665 | 23.76/0.635 | 23.67/0.602 | 23.79/0.649 | 23.73/0.651
Couple | 23.22/0.461 | 24.90/0.606 | 25.39/0.643 | 25.84/0.658 | 26.56/0.690 | 26.78/0.711 | 26.62/0.692 | 26.25/0.658 | 26.61/0.703 | 26.77/0.709
Plate | 17.39/0.778 | 21.65/0.911 | 22.44/0.923 | 23.14/0.929 | 24.35/0.941 | 24.57/0.942 | 24.95/0.945 | 23.04/0.922 | 25.14/0.948 | 25.10/0.948
Duck | 24.17/0.659 | 27.29/0.795 | 27.41/0.810 | 27.91/0.782 | 28.40/0.797 | 28.60/0.830 | 29.17/0.828 | 27.77/0.767 | 29.17/0.830 | 29.24/0.834
Building | 19.54/0.637 | 20.34/0.701 | 20.57/0.728 | 20.89/0.734 | 21.17/0.744 | 21.33/0.638 | 21.35/0.753 | 21.13/0.724 | 21.33/0.758 | 21.30/0.761
Hats | 27.04/0.859 | 29.29/0.940 | 29.29/0.939 | 28.90/0.900 | 29.66/0.918 | 30.10/0.944 | 30.49/0.944 | 29.64/0.920 | 30.43/0.945 | 30.41/0.946
Redcar | 23.92/0.756 | 25.65/0.835 | 25.80/0.839 | 26.09/0.813 | 26.31/0.824 | 26.74/0.847 | 26.93/0.848 | 26.27/0.820 | 26.83/0.849 | 26.92/0.851
Average | 22.13/0.593 | 24.66/0.764 | 24.99/0.770 | 25.35/0.734 | 26.05/0.764 | 26.35/0.782 | 26.65/0.792 | 25.44/0.710 | 26.65/0.800 | 26.66/0.804

Table 1: PSNR/SSIM values for the different algorithms with GB(9, 5)/σ_ε = 5.

Image | Degraded | TV [20] | DCA [13] | TRL2 [24] | SOCF [11] | BM3D [6] | ϕ1 | ϕ2 | ϕ3 | ϕ4
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Shepp-Logan | 17.47/0.548 | 21.35/0.911 | 24.35/0.910 | 24.95/0.499 | 25.59/0.825 | 25.56/0.806 | 27.46/0.895 | 23.89/0.533 | 28.44/0.913 | 28.26/0.896
Shape150 | 16.32/0.905 | 26.74/0.910 | 30.95/0.948 | 23.85/0.624 | 25.50/0.825 | 26.62/0.865 | 27.22/0.859 | 22.94/0.621 | 27.80/0.874 | 27.79/0.869
House | 21.46/0.505 | 25.34/0.736 | 27.25/0.783 | 25.80/0.602 | 28.63/0.752 | 29.24/0.786 | 29.51/0.790 | 27.22/0.684 | 29.78/0.807 | 29.66/0.800
Boat | 22.00/0.457 | 24.03/0.598 | 24.93/0.636 | 24.83/0.601 | 26.70/0.689 | 26.63/0.698 | 26.72/0.694 | 26.06/0.652 | 26.82/0.700 | 26.86/0.701
Pepper | 20.05/0.532 | 22.57/0.708 | 23.19/0.723 | 23.36/0.652 | 24.92/0.756 | 25.39/0.732 | 24.66/0.736 | 23.92/0.666 | 24.81/0.754 | 24.83/0.756
Cameraman | 19.79/0.478 | 22.41/0.689 | 23.37/0.722 | 23.95/0.607 | 24.96/0.722 | 25.25/0.745 | 25.37/0.743 | 24.26/0.648 | 25.44/0.757 | 25.40/0.754
Hill | 23.21/0.450 | 25.49/0.578 | 26.29/0.617 | 25.57/0.584 | 27.24/0.658 | 27.42/0.670 | 27.39/0.668 | 26.72/0.629 | 27.51/0.674 | 27.54/0.675
Montage | 19.31/0.548 | 22.10/0.788 | 23.23/0.814 | 24.61/0.656 | 25.70/0.802 | 26.02/0.818 | 26.45/0.803 | 24.19/0.651 | 26.80/0.825 | 26.95/0.831
Man | 23.05/0.501 | 25.41/0.649 | 26.00/0.678 | 25.28/0.616 | 27.30/0.715 | 27.32/0.728 | 27.22/0.717 | 26.25/0.666 | 27.42/0.727 | 27.50/0.728
Barbara | 21.76/0.495 | 23.06/0.616 | 23.48/0.643 | 23.04/0.586 | 24.78/0.691 | 25.53/0.732 | 24.88/0.697 | 24.79/0.663 | 24.85/0.701 | 25.01/0.707
Couple | 21.83/0.421 | 23.57/0.553 | 24.51/0.604 | 24.34/0.591 | 26.17/0.673 | 26.16/0.690 | 26.34/0.688 | 25.73/0.649 | 26.39/0.690 | 26.51/0.693
Plate | 15.93/0.731 | 20.01/0.880 | 21.53/0.911 | 22.33/0.920 | 23.43/0.934 | 24.11/0.941 | 24.48/0.943 | 21.78/0.909 | 24.57/0.944 | 24.57/0.945
Duck | 22.48/0.622 | 26.54/0.773 | 26.81/0.783 | 26.53/0.681 | 27.77/0.756 | 28.03/0.808 | 28.38/0.789 | 27.15/0.738 | 28.39/0.792 | 28.51/0.796
Building | 18.94/0.628 | 20.38/0.715 | 20.97/0.749 | 21.57/0.750 | 22.41/0.788 | 22.86/0.816 | 22.86/0.802 | 22.46/0.772 | 22.80/0.805 | 22.81/0.806
Hats | 25.44/0.839 | 28.29/0.931 | 28.78/0.935 | 27.43/0.836 | 28.81/0.898 | 29.22/0.937 | 29.63/0.931 | 28.80/0.906 | 29.71/0.935 | 29.70/0.937
Redcar | 22.04/0.737 | 24.36/0.821 | 25.14/0.833 | 25.04/0.760 | 26.10/0.816 | 26.49/0.847 | 26.77/0.840 | 25.91/0.808 | 26.86/0.846 | 26.89/0.845
Average | 20.70/0.587 | 23.86/0.741 | 25.05/0.768 | 24.53/0.661 | 26.00/0.769 | 26.37/0.789 | 26.58/0.787 | 25.15/0.700 | 26.78/0.796 | 26.77/0.796

Table 2: PSNR/SSIM values for the different algorithms with MB(20, 60)/σ_ε = 5.

Image | Degraded | TV [20] | DCA [13] | TRL2 [24] | SOCF [11] | BM3D [6] | ϕ1 | ϕ2 | ϕ3 | ϕ4
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Shepp-Logan | 18.66/0.706 | 21.74/0.916 | 22.40/0.914 | 24.88/0.733 | 24.90/0.937 | 25.71/0.884 | 27.39/0.945 | 24.06/0.671 | 27.96/0.951 | 27.79/0.956
Shape150 | 18.16/0.613 | 27.92/0.940 | 30.09/0.960 | 26.05/0.812 | 27.74/0.942 | 28.01/0.928 | 29.41/0.933 | 24.79/0.662 | 29.27/0.926 | 29.33/0.933
House | 23.97/0.622 | 26.78/0.763 | 25.14/0.640 | 29.72/0.774 | 30.73/0.830 | 31.47/0.836 | 31.62/0.840 | 29.93/0.763 | 31.67/0.840 | 31.57/0.839
Boat | 23.24/0.521 | 24.50/0.608 | 23.23/0.718 | 27.12/0.710 | 27.48/0.731 | 28.06/0.751 | 28.15/0.755 | 27.50/0.711 | 28.15/0.753 | 28.18/0.753
Pepper | 21.25/0.613 | 22.94/0.709 | 23.08/0.728 | 26.52/0.786 | 27.35/0.829 | 27.75/0.806 | 28.11/0.831 | 26.36/0.732 | 27.93/0.819 | 26.43/0.737
Cameraman | 20.71/0.561 | 22.58/0.698 | 23.13/0.727 | 25.04/0.733 | 24.88/0.771 | 25.81/0.781 | 25.96/0.796 | 24.95/0.713 | 26.05/0.797 | 26.08/0.796
Hill | 24.85/0.527 | 26.45/0.610 | 26.90/0.640 | 28.34/0.710 | 28.62/0.717 | 29.03/0.731 | 29.12/0.741 | 28.44/0.700 | 28.91/0.733 | 28.91/0.733
Montage | 20.01/0.648 | 22.05/0.801 | 23.15/0.818 | 26.16/0.819 | 25.87/0.882 | 27.18/0.869 | 27.19/0.886 | 25.42/0.752 | 27.41/0.879 | 27.42/0.877
Man | 24.35/0.564 | 25.90/0.649 | 26.26/0.671 | 27.74/0.728 | 27.89/0.744 | 28.23/0.755 | 28.43/0.762 | 27.71/0.710 | 28.29/0.754 | 28.21/0.749
Barbara | 22.39/0.542 | 23.12/0.610 | 23.11/0.616 | 23.75/0.642 | 24.00/0.676 | 24.44/0.695 | 24.20/0.686 | 24.16/0.651 | 24.17/0.681 | 24.17/0.681
Couple | 23.12/0.484 | 24.31/0.574 | 25.14/0.627 | 26.92/0.716 | 27.34/0.735 | 28.15/0.770 | 28.09/0.768 | 27.45/0.728 | 28.03/0.767 | 28.05/0.766
Plate | 17.08/0.766 | 20.93/0.897 | 22.24/0.920 | 25.23/0.951 | 25.70/0.956 | 26.19/0.958 | 26.86/0.963 | 24.39/0.941 | 26.95/0.964 | 26.91/0.963
Duck | 24.12/0.708 | 27.12/0.791 | 27.42/0.809 | 29.25/0.831 | 29.80/0.860 | 29.91/0.863 | 30.65/0.868 | 29.02/0.816 | 30.33/0.859 | 30.51/0.863
Building | 19.48/0.651 | 20.17/0.702 | 20.40/0.720 | 21.26/0.761 | 21.31/0.771 | 22.06/0.794 | 21.76/0.788 | 21.65/0.761 | 21.92/0.791 | 21.74/0.778
Hats | 27.38/0.896 | 29.20/0.939 | 29.32/0.941 | 29.70/0.918 | 30.99/0.954 | 31.24/0.954 | 31.48/0.956 | 30.62/0.937 | 31.55/0.956 | 31.46/0.956
Redcar | 23.93/0.785 | 25.46/0.833 | 25.60/0.838 | 26.72/0.837 | 26.99/0.857 | 27.41/0.861 | 27.59/0.863 | 26.91/0.841 | 27.61/0.863 | 27.60/0.863
Average | 22.04/0.638 | 24.45/0.753 | 24.79/0.768 | 26.53/0.779 | 26.97/0.824 | 27.54/0.826 | 27.88/0.836 | 26.46/0.756 | 27.89/0.833 | 27.77/0.828

Table 3: PSNR/SSIM values for the different algorithms with AB(9, 9)/σ_ε = 3.
