A ROBUST ALGORITHM FOR TOTAL VARIATION DEBLURRING AND DENOISING∗

QIANSHUN CHANG†, LUNG-SHENG CHIEN‡, WEI-CHENG WANG§, AND JING XU¶

Abstract. We propose a new deblurring and denoising algorithm for the total variation based image restoration. The algorithm consists of an efficient solver for the linearized system and an acceleration strategy for the outer iteration. For the linear system, we have devised an Algebraic Multigrid (AMG) method tailored for elliptic problems with non-smooth diffusion coefficient. For the outer iteration, we have conducted formal convergence analysis to determine an auxiliary linear term that significantly stabilizes and accelerates the outer iteration. The overall algorithm is efficient and robust. In addition, it is applicable over a wide range of parameters, from images with large noise and strong blur to purely blurred images without noise. In the latter case, the proposed AMG method can also be used to invert mild blur operators directly without the outer iteration.

Key words. Image restoration, total variation, nonlinear iteration, algebraic multigrid method, Krylov acceleration

AMS subject classifications. 68U10, 65M55

1. Introduction. Image restoration is a fundamental subject with many applications in both image processing and computer vision. The blurring of images often results from the motion of objects, calibration errors with imaging devices, or unfocused cameras. The purpose of image restoration is to recover the original image u from the observed data z (the noisy and blurred image) through the relation

z = Ku + n, (1.1)

where K is a linear blur operator and n is a Gaussian white noise. Typically, the blur operator K and partial information on the noise n are known.
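As a concrete illustration of (1.1), the following Python sketch generates an observed image from a clean one. The mask, the noise level, and the reflecting-boundary convolution (standing in for a generic K) are our own illustrative choices, not part of the original presentation.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(u, mask, sigma, seed=0):
    """Observed image z = K u + n: symmetric convolution blur plus Gaussian white noise."""
    rng = np.random.default_rng(seed)
    Ku = convolve(u, mask, mode="reflect")      # reflecting boundaries, cf. (2.3) below
    return Ku + sigma * rng.standard_normal(u.shape)

# e.g. the weak blur mask (1/64)(1,1,4,1,1)^T (1,1,4,1,1) mentioned in section 2:
w = np.array([1.0, 1.0, 4.0, 1.0, 1.0]) / 8.0
mask = np.outer(w, w)                           # rows and columns sum to 1
```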

A well known model for noise removal and image deblurring is the total variation based restoration method proposed by L. Rudin, S. Osher and E. Fatemi [21]. In this approach, the total variation of u is used as a penalty functional. The corresponding image restoration can then be formulated as the following minimization problem:

\[
\min_u \left\{ \alpha \int_\Omega \sqrt{|\nabla u|^2 + \beta}\, dx\, dy + \frac{1}{2} \|Ku - z\|_{L^2}^2 \right\}. \tag{1.2}
\]

Here α > 0 is the penalty parameter, β > 0 is the diffusion regularization parameter and is typically small. The functional in (1.2) is strictly convex with a unique global minimizer. The well-posedness of problem (1.2) as β → 0+ has been studied in [1].

The corresponding Euler-Lagrange equation for (1.2) is given by
\[
-\alpha \nabla \cdot \left( \frac{\nabla u}{\sqrt{|\nabla u|^2 + \beta}} \right) + K^*(Ku - z) = 0 \quad \text{in } \Omega, \tag{1.3}
\]

∗An earlier version of this manuscript appeared as a conference proceeding in 'Image Processing Based on Partial Differential Equations', Proceedings of the International Conference on PDE-Based Image Processing and Related Inverse Problems, Springer-Verlag (2006), pp. 95–108.

†Institute of Applied Mathematics, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, Beijing, China (qschang@amss.ac.cn).

‡Department of Mathematics, National Tsing-Hua University, HsinChu, Taiwan (d947207@oz.nthu.edu.tw).

§Department of Mathematics, National Tsing-Hua University, HsinChu, Taiwan (wangwc@duet.am.nthu.edu.tw).

¶Institute of Applied Mathematics, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, Beijing, China (jingxu@amss.ac.cn).


where K∗ is the adjoint operator of K with respect to the standard L² inner product.

Several numerical methods have been proposed to solve the Euler-Lagrange equation (1.3). The majority of the numerical examples in the literature are focused on two types of applications. The first one is the pure denoising problem, where K = I. Earlier works include the time-marching scheme proposed in [21], the affine scaling algorithm [15], and Newton's method with a continuation procedure on β [8]. Later on, the authors of [25] proposed a fixed point iteration for (1.3). They also proposed multigrid methods to solve the linearized equation of (1.3) in [18] and [24], respectively. In [3], the authors proposed a variant of (1.2) based on an anisotropic diffusion regularization, and the system is solved by explicit time-marching as in [21]. In [9], the authors developed a nonlinear primal-dual method for (1.3). The corresponding linear system is twice as large, but is shown to be better conditioned. In [10], the algebraic multigrid method, Krylov subspace acceleration, and extrapolation of initial data are successfully combined to give an efficient algorithm for the pure denoising problem.

Another important application is to recover noisy, blurred images from a generic blur operator K. For example, the combination of a fixed point iteration and a preconditioned conjugate gradient method is proposed in [26] to handle problems of this type. In [17], the authors proposed a fast algorithm based on the discrete cosine transform to recover blurred images with 50dB Gaussian white noise. The discrete cosine transform is also proposed as a preconditioner for problems with strong blur operators and noise [7]. More recently, a new modular solver was proposed in [5] to restore images resulting from a Gaussian blur supplemented by an additive noise.

In this paper, we propose an efficient numerical scheme for the Euler-Lagrange equation (1.3). Our scheme consists of an efficient linear solver and an acceleration strategy for the nonlinear iteration. In the inner iteration, we adopt an Algebraic Multigrid (AMG) method previously developed by one of the authors for other applications [13, 12]. This AMG method is specialized in solving elliptic equations with non-smooth diffusion coefficients. One V-cycle per inner iteration is sufficient in all our simulations. For the outer iteration, we have developed a stabilizing technique adapted to the blur operator K. This is done by adding a linear term to both sides of the equation, together with Krylov subspace extrapolation. This linear stabilizing term is obtained via formal convergence analysis and is expressed explicitly in terms of the mask of K. The inclusion of the linear stabilizing term plays a crucial role in our scheme. The outer iteration may indeed diverge without a proper linear stabilizing term (Example 2, section 4).

In addition to providing a competitive alternative, our scheme also proves to work efficiently over a wide range of parameters. We have conducted extensive numerical experiments. The results show that our algorithm is efficient and robust for α ranging from O(1) to the pure blurring limit α → 0, and for various strong blur operators such as the truncated Gaussian. It is worth mentioning that the pure deblurring problem
\[
Ku = z \tag{1.4}
\]
is ill-posed if K is not diagonally dominant, and direct inversion of (1.4) may be difficult. In the case of mild blur operators, our AMG method can also be applied to invert (1.4) directly (that is, without the outer iteration). See section 4 for details.

The rest of the paper is organized as follows. In section 2, we present the formal convergence analysis and the framework of our scheme. In section 3, we briefly describe the AMG method used to solve the linearized system and pure deblurring problems. In sections 4.1-4.3, we give explicit formulae for the linear stabilizing term, together with other implementation details. Numerical results are given in section 4.4. We end this paper with a brief summary in section 5.

2. Formal Convergence Analysis and the Main Algorithm. We first introduce the discretization of (1.3). We partition the domain Ω = (0, 1) × (0, 1) into L × L uniform cells with mesh size h = 1/L. The cell centers are
\[
(x_k, y_l) = \left( \left(k - \tfrac{1}{2}\right)h, \; \left(l - \tfrac{1}{2}\right)h \right), \quad k, l = 1, \cdots, L.
\]

Following [25, 10], we discretize (1.3) by the standard five-point finite difference scheme to get
\[
-\alpha \nabla_h \cdot \left( \frac{\nabla_h u}{\sqrt{|\nabla_h u|^2 + \beta}} \right) + K^*(Ku - z) = 0, \tag{2.1}
\]
or more precisely,
\[
-\frac{\alpha}{h} \left( c_{k+\frac12,l} \frac{u_{k+1,l} - u_{k,l}}{h} - c_{k-\frac12,l} \frac{u_{k,l} - u_{k-1,l}}{h} \right)
-\frac{\alpha}{h} \left( c_{k,l+\frac12} \frac{u_{k,l+1} - u_{k,l}}{h} - c_{k,l-\frac12} \frac{u_{k,l} - u_{k,l-1}}{h} \right)
+ \left( K^*(Ku - z) \right)_{k,l} = 0, \quad k, l = 1, \cdots, L, \tag{2.2}
\]
with the homogeneous Neumann boundary condition
\[
u_{0,l} = u_{1,l}, \quad u_{L+1,l} = u_{L,l}, \quad u_{k,0} = u_{k,1}, \quad u_{k,L+1} = u_{k,L}, \tag{2.3}
\]
where
\[
c_{k+\frac12,l} = \frac{1}{\sqrt{ \left( (D_x u)_{k+\frac12,l} \right)^2 + \left( (D_y u)_{k+\frac12,l} \right)^2 + \beta }},
\]
with
\[
(D_x u)_{k+\frac12,l} = \frac{u_{k+1,l} - u_{k,l}}{h}, \qquad
(D_y u)_{k+\frac12,l} = \frac{1}{2} \left( \frac{u_{k+1,l+1} - u_{k+1,l-1}}{2h} + \frac{u_{k,l+1} - u_{k,l-1}}{2h} \right),
\]
etc.

To simplify the notation, we abbreviate (2.1) as
\[
\alpha L(u)u + K^*(Ku - z) = 0, \tag{2.4}
\]
where
\[
L(v)w = -\nabla_h \cdot \left( \frac{\nabla_h w}{\sqrt{|\nabla_h v|^2 + \beta}} \right). \tag{2.5}
\]
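For concreteness, the following is a minimal matrix-free sketch of the discrete operator (2.5) with the face-centered coefficients of (2.2). The edge-padding realization of the Neumann condition (2.3) and all function names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def face_coeff_x(v, h, beta):
    """c_{k+1/2,l} of (2.2): 1/sqrt((D_x v)^2 + (D_y v)^2 + beta) at x-faces."""
    vp = np.pad(v, 1, mode="edge")              # edge padding realizes the Neumann BC (2.3)
    dx = (vp[2:, 1:-1] - vp[1:-1, 1:-1]) / h    # (D_x v)_{k+1/2,l}
    dy = 0.5 * ((vp[2:, 2:] - vp[2:, :-2]) +
                (vp[1:-1, 2:] - vp[1:-1, :-2])) / (2 * h)   # averaged (D_y v)_{k+1/2,l}
    return 1.0 / np.sqrt(dx**2 + dy**2 + beta)

def apply_L(v, w, h, beta):
    """Matrix-free action of L(v) on w, cf. (2.5): minus the discrete divergence
    of the flux c * grad_h(w), in conservation (five-point) form."""
    cx = face_coeff_x(v, h, beta)
    cy = face_coeff_x(v.T, h, beta).T           # y-face coefficients by symmetry
    wp = np.pad(w, 1, mode="edge")
    fx = cx * (wp[2:, 1:-1] - wp[1:-1, 1:-1]) / h   # flux through x-faces
    fy = cy * (wp[1:-1, 2:] - wp[1:-1, 1:-1]) / h   # flux through y-faces
    div = fx.copy()
    div[1:, :] -= fx[:-1, :]                    # boundary face fluxes vanish (Neumann)
    div += fy
    div[:, 1:] -= fy[:, :-1]
    return -div / h
```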

In (2.4), L(u) is fully nonlinear with wildly varying coefficients. Moreover, the matrix K∗K is wide-banded, and the spectra of the matrices L(u) and K∗K are distributed quite differently. As a result, it is difficult to solve (2.4) efficiently by Newton's method. Several alternatives using various preconditioners have been proposed in the literature. In [26], Vogel and Oman adopted the combination of a fixed point iteration and a product PCG to handle the nonlinear term and the linear system, respectively. In [7], R. Chan, T. Chan and C. Wong proposed another efficient preconditioner based on the fast cosine transform.

In this paper, we take an iterative approach to (2.4) based on a proper decomposition of the operator L(u) + K∗K. To motivate our approach, we first notice that an efficient algorithm for (2.4) should avoid inverting the matrix K∗K, since it is much denser than L(u). A natural approach would then be
\[
\alpha L(u^{(s)}) u^{(s+1)} = -K^*(Ku^{(s)} - z). \tag{2.6}
\]
It turns out that the iteration (2.6) is not robust and may diverge even for the weak blur operator corresponding to the mask $\frac{1}{64}(1, 1, 4, 1, 1)^T(1, 1, 4, 1, 1)$ with α = 10. More examples can be found in section 4.4.

To overcome this instability, we propose to add a linear term Su to both sides of (2.6):
\[
(\alpha L(u^{(s)}) + S)\, u^{(s+1)} = -(K^*K - S)\, u^{(s)} + K^*z, \tag{2.7}
\]
where S is a symmetric matrix to be determined through the formal analysis below.
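The outer iteration (2.7) can be sketched as follows. Here scipy's conjugate gradient is a hypothetical stand-in for the AMG V-cycle of section 3, and the matrix-free helpers apply_K, apply_Kt, apply_L_at, as well as a diagonal S (see section 4.2), are assumed to be supplied by the caller; none of these names come from the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def outer_iteration(z, apply_K, apply_Kt, apply_L_at, S_diag, alpha, n_outer=50):
    """Fixed point iteration (2.7):
    (alpha*L(u^s) + S) u^{s+1} = -(K*K - S) u^s + K*z, with S diagonal."""
    shape = z.shape
    u = z.copy()                                    # initial guess u^(0) = z
    Ktz = apply_Kt(z)
    for s in range(n_outer):
        rhs = -(apply_Kt(apply_K(u)) - S_diag * u) + Ktz
        Lu = apply_L_at(u)                          # w -> L(u^s) w, coefficients frozen
        A = LinearOperator(
            (z.size, z.size),
            matvec=lambda w: (alpha * Lu(w.reshape(shape))
                              + S_diag * w.reshape(shape)).ravel(),
        )
        w_new, _ = cg(A, rhs.ravel(), x0=u.ravel())  # SPD system; CG as AMG stand-in
        u = w_new.reshape(shape)
    return u
```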

Denote by $u^{(\infty)}$ the solution to (2.4), that is,
\[
(\alpha L(u^{(\infty)}) + S)\, u^{(\infty)} = -(K^*K - S)\, u^{(\infty)} + K^*z. \tag{2.8}
\]
We then define $e^{(s)} = u^{(s)} - u^{(\infty)}$ and abbreviate $L(u^{(s)})$ by $L^{(s)}$. Since $L^{(s)} = L^{(\infty)} + O(e^{(s)})$, it follows that
\[
(\alpha L^{(\infty)} + S)\, e^{(s+1)} = -(K^*K - S)\, e^{(s)} + O(|e^{(s)}|^2 + |e^{(s+1)}|^2). \tag{2.9}
\]
When $|e^{(s)}|$ and $|e^{(s+1)}|$ are small enough, the leading behavior of (2.9) can be approximated by
\[
(\alpha L^{(\infty)} + S)\, e^{(s+1)} \approx -(K^*K - S)\, e^{(s)}, \tag{2.10}
\]
or
\[
\nu (e^{(s+1)} - e^{(s)}) \approx -\left( (\alpha L^{(\infty)} + S - \nu I)\, e^{(s+1)} - ((S - \nu I) - K^*K)\, e^{(s)} \right) \tag{2.11}
\]
with arbitrary ν > 0. To simplify the notation, we define two symmetric matrices
\[
P_\nu = \alpha L^{(\infty)} + (S - \nu I), \qquad Q_\nu = (S - \nu I) - K^*K, \tag{2.12}
\]
and write (2.11) as
\[
\nu (e^{(s+1)} - e^{(s)}) \approx -\left( P_\nu e^{(s+1)} - Q_\nu e^{(s)} \right). \tag{2.13}
\]

Next we take the inner product of $(P_\nu e^{(s+1)} - Q_\nu e^{(s)})$ with (2.13) to get
\[
\nu \left( (e^{(s+1)} - e^{(s)})^T \frac{P_\nu - Q_\nu}{2} (e^{(s+1)} + e^{(s)}) + (e^{(s+1)} - e^{(s)})^T \frac{P_\nu + Q_\nu}{2} (e^{(s+1)} - e^{(s)}) \right) \lesssim 0. \tag{2.14}
\]


For a symmetric matrix A, we further introduce the notation
\[
\|v\|_A^2 = v^T A v,
\]
which defines a norm when A is positive definite. It follows from (2.14) that
\[
\nu \left( \|e^{(s+1)}\|_{P_\nu - Q_\nu}^2 - \|e^{(s)}\|_{P_\nu - Q_\nu}^2 + \|e^{(s+1)} - e^{(s)}\|_{P_\nu + Q_\nu}^2 \right) \lesssim 0. \tag{2.15}
\]
In other words, the iteration (2.7) is locally convergent if both
\[
P_\nu - Q_\nu = \alpha L^{(\infty)} + K^*K, \qquad P_\nu + Q_\nu = \alpha L^{(\infty)} + 2(S - \nu I) - K^*K \tag{2.16}
\]
are positive definite for some ν > 0. This is the case provided
\[
P - Q = \alpha L^{(\infty)} + K^*K > 0, \qquad P + Q = \alpha L^{(\infty)} + 2S - K^*K > 0 \tag{2.17}
\]
in the sense of bilinear forms.

Since $\alpha L^{(\infty)}$ is positive definite, a sufficient condition for (2.17) is given by
\[
2S - K^*K \geq 0 \quad \text{in the sense of bilinear forms.} \tag{2.18}
\]
Although the analysis given above is only formal, we find that (2.18) is indeed a good guideline for devising the outer iteration. We will elaborate on choices of S better suited for numerical purposes in section 4.2. Indeed, the iteration (2.7) may fail to converge when (2.18) is not satisfied; see Example 2 in section 4.4 for more details. In addition, the preprocessing of initial data proposed in [10] is no longer required when (2.18) is satisfied. In all our numerical examples, the initial data is the observed image z.

3. AMG Algorithm for Systems with Rough Diffusion Coefficient. The linear system (2.7) results in an L² × L² system of equations
\[
AU = F. \tag{3.1}
\]
Here the matrix A corresponds to the operator $\alpha L(u^{(s)}) + S$ in (2.7). In general, A varies wildly near areas of high contrast in the image and need not be diagonally dominant. Nevertheless, A is symmetric and positive definite.

Under such circumstances, it is natural to consider an algebraic multigrid method as the linear solver. Here we adopt a special version of AMG developed in [13, 11, 10] and incorporate Krylov subspace extrapolation to accelerate and stabilize the outer iteration. We have conducted extensive numerical tests over a wide range of parameters and blur operators and found that the combination results in an efficient and robust scheme. Here we briefly review the construction of our AMG method.

In a standard multigrid process, one needs to construct a hierarchy of coarse grids $\Omega_m$, the corresponding coarse grid operators $A_m$, the interpolation operators $I_{m+1}^m$ and the restriction operators $I_m^{m+1}$.

In an AMG setting, the construction of the coarse grids $\Omega_m$ is based solely on the algebraic information provided by the matrix A. The details can be found, for example, in [22, 13]. For the restriction and coarse grid operators, a simple and popular approach is the Galerkin type construction. Namely, we take $I_m^{m+1} = (I_{m+1}^m)^*$ and $A_{m+1} = I_m^{m+1} A_m I_{m+1}^m$. It remains to construct the interpolation operators $I_{m+1}^m$. In many practical applications, this is the major factor that determines the efficiency and stability of the overall AMG algorithm. It has been shown in [13, 12]

that our AMG method works successfully for a class of linear systems resulting from the discretization of anisotropic/nearly degenerate diffusion problems. In section 4, we will further demonstrate that it is capable of handling rough diffusion coefficients over a wide range of α and K and is therefore suitable for total variation based image restoration.

Here is a brief description of our interpolation procedure. We first denote by $N_i^m$ the neighbors of i,
\[
N_i^m = \{ j \in \Omega_m \mid a_{i,j}^m \neq 0, \; j \neq i \},
\]
and classify the points in $N_i^m$ into two classes. A point $j \in N_i^m$ is said to be strongly connected to i if
\[
|a_{i,j}^m| \geq \theta \cdot \max_{k \neq i} |a_{i,k}^m|
\]
for some fixed 0 < θ ≤ 1, and weakly connected otherwise. We denote the collections of these neighboring points by $S_i^m$ (strong) and $W_i^m$ (weak), respectively. We also denote $\Omega_{m+1} \cap S_i^m$ by $C_i^m$.
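A minimal sketch of this classification for a sparse matrix in CSR format is given below; the routine and the default θ are illustrative choices, not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp

def classify_neighbors(A, theta=0.25):
    """Return lists (strong, weak): for each row i, the column indices of S_i^m and W_i^m."""
    A = sp.csr_matrix(A)
    strong, weak = [], []
    for i in range(A.shape[0]):
        cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
        vals = A.data[A.indptr[i]:A.indptr[i + 1]]
        off = cols != i                            # N_i^m: off-diagonal nonzeros
        cols, vals = cols[off], vals[off]
        if cols.size == 0:
            strong.append(cols)
            weak.append(cols)
            continue
        cut = theta * np.abs(vals).max()           # theta * max_{k != i} |a_{i,k}|
        strong.append(cols[np.abs(vals) >= cut])
        weak.append(cols[np.abs(vals) < cut])
    return strong, weak
```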

Given an algebraically smooth error $e_j^{m+1}$ on the coarse grid $\Omega_{m+1}$, we would like to derive an interpolation formula
\[
e_i^m = \sum_{j \in C_i^m} \omega_{i,j}^m e_j^m, \quad \text{for } i \in \Omega_m \setminus \Omega_{m+1}, \tag{3.2}
\]
so that
\[
a_{i,i}^m e_i^m + \sum_{j \in N_i^m} a_{i,j}^m e_j^m = r_i^m \approx 0, \tag{3.3}
\]
where it is understood that $e_j^m = e_j^{m+1}$ for $j \in \Omega_{m+1}$. The proposed interpolation formula will be derived based on the following geometrical assumptions:

(G1) In the neighborhood $N_i^m$ of a point i, the larger the quantity $|a_{i,j}^m|$ is, the closer the point j is to the point i.

(G2) With the normalization $a_{i,i} > 0$, an algebraically smooth error is geometrically smooth between i and j if $a_{i,j}^m < 0$ or $|a_{i,j}^m|$ is small, and it is geometrically oscillating otherwise.

With the two geometric assumptions (G1) and (G2), we can now determine the interpolation/extrapolation formula for points $j \in N_i^m \setminus \Omega_{m+1}$ by examining the relative locations of i, j and the 'center of mass' of $C_i^m \cap N_j^m$.

More precisely, let us define
\[
\zeta_{i,j}^m = \frac{-\sum_{k \in C_i^m \cap N_j^m} a_{j,k}^m}{\sum_{k \in C_i^m \cap N_j^m} |a_{j,k}^m|},
\qquad
\eta_{i,j}^m = \frac{|a_{i,j}^m|}{\frac{1}{|C_i^m \cap N_j^m|} \sum_{k \in C_i^m \cap N_j^m} |a_{j,k}^m|}.
\]
The quantity $\zeta_{i,j}^m$ indicates whether there is a large negative entry $a_{j,k}^m$ for $k \in C_i^m \cap N_j^m$. From Assumption (G2), the larger $\zeta_{i,j}^m$ is, the smoother the error is between i and j, geometrically.

On the other hand, the quantity $\eta_{i,j}^m$ reveals the ratio of the distance between i and j to the average distance between the points in $C_i^m \cap N_j^m$ and j. When $\eta_{i,j}^m$ is close to 1, the points in $C_i^m \cap N_j^m$ are approximately equally distant from j, so j is likely to be located close to the 'center of mass' of $C_i^m \cap N_j^m$. Therefore $e_j^m$ is well approximated by the local average $\sum_{k \in C_i^m \cap N_j^m} g_{j,k}^m e_k^m$, where
\[
g_{j,k}^m = \frac{|a_{j,k}^m|}{\sum_{l \in C_i^m \cap N_j^m} |a_{j,l}^m|}.
\]
When $\eta_{i,j}^m$ deviates noticeably from 1, the point j is no longer equally distant from the points in $C_i^m \cap N_j^m$. Therefore the 'center of mass' of $C_i^m \cap N_j^m$ is likely to be biased towards (away from) i, and an extrapolation (interpolation) from $e_i^m$ and the local average $\sum_{k \in C_i^m \cap N_j^m} g_{j,k}^m e_k^m$ gives a better approximation of $e_j^m$.

In summary, we have the following "geometric" interpolation formulae:

(1) For $j \in S_i^m \cap (\Omega_m \setminus \Omega_{m+1})$, we take
\[
e_j^m =
\begin{cases}
2 \sum_{k \in C_i^m} g_{j,k}^m e_k^m - e_i^m, & \text{if } \eta_{i,j}^m < \frac{3}{4}, \; \zeta_{i,j}^m \geq \frac{1}{2} \text{ and } a_{i,j}^m < 0, \\
\frac{1}{2} \left( \sum_{k \in C_i^m} g_{j,k}^m e_k^m + e_i^m \right), & \text{if } \eta_{i,j}^m > 2, \; \zeta_{i,j}^m \geq \frac{1}{2} \text{ and } a_{i,j}^m < 0, \\
\sum_{k \in C_i^m} g_{j,k}^m e_k^m, & \text{otherwise.}
\end{cases}
\]

(2) For $j \in W_i^m \cap (\Omega_m \setminus \Omega_{m+1})$, we take
\[
e_j^m =
\begin{cases}
e_i^m, & \text{if } C_i^m \cap S_j^m = \emptyset, \; a_{i,j}^m < 0, \\
-e_i^m, & \text{if } C_i^m \cap S_j^m = \emptyset, \; a_{i,j}^m > 0, \\
2 \sum_{k \in C_i^m} g_{j,k}^m e_k^m - e_i^m, & \text{if } C_i^m \cap S_j^m \neq \emptyset, \; \zeta_{i,j}^m \geq \frac{1}{2} \text{ and } a_{i,j}^m < 0, \\
\sum_{k \in C_i^m} g_{j,k}^m e_k^m, & \text{otherwise.}
\end{cases}
\tag{3.4}
\]

The convergence proof for this AMG method is given in [11] when $A_m$ is symmetric and positive definite. The robustness of these interpolation formulae is supported by ample numerical evidence [13, 11].
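The following sketch illustrates the branch selection in case (1) of (3.4) for a single strongly connected fine point j. The dictionary-based row format and function names are our own simplification, and the routine assumes $C_i^m \cap N_j^m$ is nonempty.

```python
import numpy as np

def interp_value(e, i, j, row_i, row_j, C_i):
    """Approximate e_j per case (1) of (3.4). row_i, row_j: dicts {col: a_value}
    for rows i and j of A_m; C_i: coarse strong neighbors of i; e: dict of
    known error values (coarse points and e_i)."""
    C_ij = [k for k in C_i if k in row_j and k != j]
    avals = np.array([row_j[k] for k in C_ij])
    g = np.abs(avals) / np.abs(avals).sum()        # averaging weights g_{j,k}
    local_avg = float(g @ np.array([e[k] for k in C_ij]))
    zeta = -avals.sum() / np.abs(avals).sum()      # zeta_{i,j}: smoothness indicator
    eta = abs(row_i[j]) / np.abs(avals).mean()     # eta_{i,j}: relative distance
    if eta < 0.75 and zeta >= 0.5 and row_i[j] < 0:
        return 2.0 * local_avg - e[i]              # extrapolation branch
    if eta > 2.0 and zeta >= 0.5 and row_i[j] < 0:
        return 0.5 * (local_avg + e[i])            # interpolation branch
    return local_avg                               # default: pure local average
```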

Remark: For pure deblurring problems
\[
Ku = z, \tag{3.5}
\]
the matrix K is in general not diagonally dominant. Most AMG methods can effectively reduce high frequency components of the error in the smoothing step using classical iterative methods. However, naive interpolation may amplify the high frequency error in an unstable manner and eventually lead to divergence [20]. The interpolation procedure described above is the main feature that distinguishes our algorithm from other AMG methods.

In addition to the general case of rough diffusion coefficients, we will show that our AMG method is stable enough to invert (3.5) directly for mild blur operators which are not diagonally dominant. Thus it can be considered as a new approach for solving (3.5), in contrast to the iterative method (2.7) with α = 0. See section 4.4 for details.

4. Numerical Experiments and Discussion.

4.1. Benchmark Models and Blurring Operators. The numerical examples given in this paper are based on the two images shown in Fig. 4.1. The first one is a satellite image (Model I) and the second is a benchmark problem (Model II) used in the literature (for example, [7]). For both models, the original image contains 256 × 256 pixels. Each pixel is assigned an integer value in [0, 255].

We experiment on restoring the two model images blurred by the following three different blur operators:

(1) A blur operator given by the mask
\[
\frac{1.5}{784}(1, 2, 3, 16, 3, 2, 1)^T (1, 2, 3, 16, 3, 2, 1) = \frac{1.5}{784}
\begin{pmatrix}
1 & 2 & 3 & 16 & 3 & 2 & 1 \\
2 & 4 & 6 & 32 & 6 & 4 & 2 \\
3 & 6 & 9 & 48 & 9 & 6 & 3 \\
16 & 32 & 48 & 256 & 48 & 32 & 16 \\
3 & 6 & 9 & 48 & 9 & 6 & 3 \\
2 & 4 & 6 & 32 & 6 & 4 & 2 \\
1 & 2 & 3 & 16 & 3 & 2 & 1
\end{pmatrix}.
\]

Fig. 4.1. Original images of Model I (left) and Model II (right).

In other words, Ku = 1.5 A_y A_x u, where
\[
(A_x u)_{k,l} = \frac{1}{28} (u_{k-3,l} + 2u_{k-2,l} + 3u_{k-1,l} + 16u_{k,l} + 3u_{k+1,l} + 2u_{k+2,l} + u_{k+3,l}),
\]
\[
(A_y u)_{k,l} = \frac{1}{28} (u_{k,l-3} + 2u_{k,l-2} + 3u_{k,l-1} + 16u_{k,l} + 3u_{k,l+1} + 2u_{k,l+2} + u_{k,l+3}).
\]

(2) An out-of-focus blur with the convolution kernel given by

\[
v_{k,l} =
\begin{cases}
\frac{c}{13}, & \text{if } k^2 + l^2 \leq 4, \\
0, & \text{otherwise,}
\end{cases}
\]
or equivalently by the mask
\[
\frac{c}{13}
\begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 0
\end{pmatrix}.
\]
Here we take c = 1.2188.

(3) A truncated Gaussian blur given by
\[
v_{k,l} =
\begin{cases}
c\, e^{-\tau (k^2 + l^2)}, & \text{if } |k|, |l| \leq 5, \\
0, & \text{otherwise,}
\end{cases}
\]
where c = 0.176 and τ = 0.38.
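For reference, here is a sketch constructing the three masks as numpy arrays; note that the 13 points with k² + l² ≤ 4 match the c/13 normalization of blur (2). The function names are our own.

```python
import numpy as np

def mask_1():                                   # separable mask, (1.5/784) w^T w
    w = np.array([1., 2., 3., 16., 3., 2., 1.])
    return 1.5 * np.outer(w, w) / 784.0

def mask_2(c=1.2188):                           # out-of-focus blur: disk k^2 + l^2 <= 4
    k = np.arange(-2, 3)
    K2, L2 = np.meshgrid(k, k, indexing="ij")
    return np.where(K2**2 + L2**2 <= 4, c / 13.0, 0.0)

def mask_3(c=0.176, tau=0.38):                  # truncated Gaussian, |k|, |l| <= 5
    k = np.arange(-5, 6)
    K2, L2 = np.meshgrid(k, k, indexing="ij")
    return c * np.exp(-tau * (K2**2 + L2**2))
```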

4.2. The Linear Stabilizing Term. The condition (2.18) serves as a general guideline for the choice of the matrix S. In order to solve (2.7) efficiently, we only consider matrices S with the same or smaller support compared to $L(u^{(s)})$. An obvious candidate is a diagonal matrix of the form
\[
S = \varepsilon I, \quad \varepsilon > 0. \tag{4.1}
\]
The condition (2.18) then reads
\[
\varepsilon \geq \varepsilon^* = \frac{1}{2} \lambda_{\max}(K^*K) = \frac{1}{2} \lambda_{\max}(K)^2. \tag{4.2}
\]

To proceed with the estimate of $\lambda_{\max}(K)$, we notice that our sample blur operators (1)-(3) can be represented by a mask of the form
\[
V =
\begin{pmatrix}
v_{-r,-r} & v_{-r,-r+1} & \cdots & v_{-r,r} \\
v_{-r+1,-r} & v_{-r+1,-r+1} & \cdots & v_{-r+1,r} \\
\vdots & \vdots & \ddots & \vdots \\
v_{r,-r} & v_{r,-r+1} & \cdots & v_{r,r}
\end{pmatrix}
\]
with $v_{k,l} = v_{l,k} = v_{-k,l} = v_{k,-l}$. It is easy to see that, under the Neumann BC (2.3), the largest eigenvalue of such a K is given by
\[
\lambda_{\max}(K) = \sum_{i=1}^{L^2} K_{i,j} = \sum_{k,l=-r}^{r} v_{k,l}, \tag{4.3}
\]
where the second term of (4.3) is independent of j and the corresponding eigenfunction is a constant. In summary, we have the following sufficient condition for (2.18):
\[
S = \varepsilon I, \quad \varepsilon \geq \varepsilon^* = \frac{1}{2} \Big( \sum_{k,l=-r}^{r} v_{k,l} \Big)^2. \tag{4.4}
\]

Similar conditions can be derived for variants of (4.4), such as
\[
S = \frac{1}{2}\,\mathrm{diag}(K^*K) + \frac{1}{2}\delta I, \quad
\delta \geq \delta^* = \max_i \sum_{j \neq i} (K^*K)_{i,j} = \Big( \sum_{k,l=-r}^{r} v_{k,l} \Big)^2 - \sum_{k,l=-r}^{r} v_{k,l}^2, \tag{4.5}
\]
or
\[
S = \mathrm{diag}(K^*K) + \gamma I, \quad
\gamma \geq \gamma^* = \frac{1}{2} \max_i \Big( \sum_{j \neq i} (K^*K)_{i,j} - (K^*K)_{i,i} \Big)
= \frac{1}{2} \Big( \sum_{k,l=-r}^{r} v_{k,l} \Big)^2 - \sum_{k,l=-r}^{r} v_{k,l}^2. \tag{4.6}
\]

Table 4.1
Critical values of the parameters ε∗, δ∗, γ∗ and the solution to (4.9)

Blur operator    ε∗      δ∗      γ∗      (a, b)∗
1                1.13    1.95    0.83    (0.72, 0.17)
2                0.74    1.37    0.63    (0.49, 0.09)
3                1.06    1.98    0.93    (0.65, 0.15)

In our examples, the diagonal entries of K∗K are constant-valued except for the near-boundary pixels. We expect the performances of (4.4), (4.5) and (4.6) to be comparable to each other. In this paper, most of our numerical experiments are conducted using (4.6) with γ = 1. See section 4.4 for more details.
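Since the three critical values depend only on $\sum v_{k,l}$ and $\sum v_{k,l}^2$, they can be computed directly from any of the masks above. The following sketch (with our own function name) reproduces the ε∗, δ∗, γ∗ columns of Table 4.1 up to rounding.

```python
import numpy as np

def critical_values(mask):
    """epsilon*, delta*, gamma* of (4.4)-(4.6), computed from the mask entries."""
    s = mask.sum()                 # sum of v_{k,l} = lambda_max(K), cf. (4.3)
    q = (mask**2).sum()            # sum of v_{k,l}^2 (diagonal entry of K*K)
    return 0.5 * s**2, s**2 - q, 0.5 * s**2 - q

# e.g. critical_values(mask_3()) ~ (1.06, 1.99, 0.93), consistent with Table 4.1.
```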


In addition to diagonal matrices, we have also explored penta-diagonal matrices $S = S_{a,b}$ given by the mask
\[
\begin{pmatrix}
0 & b & 0 \\
b & a & b \\
0 & b & 0
\end{pmatrix}. \tag{4.7}
\]
$S_{a,b}$ has the same support as L, so the computational cost in the AMG step is comparable to that of a diagonal S.

In view of (2.10) and (2.18), it is clear that the optimal $S_{(a,b)^*}$ is the one that solves
\[
\min_{a,b} \; \rho\left( (\alpha L^{(\infty)} + S_{a,b})^{-1} (S_{a,b} - K^*K) \right)
\quad \text{subject to} \quad 2S_{a,b} - K^*K \geq 0, \tag{4.8}
\]

where ρ(A) is the spectral radius of A. The actual minimizer of (4.8) depends on the true image $u^{(\infty)}$ and there is no simple way of finding it. An accessible approximation is given by the following modified minimization problem:
\[
\min_{a,b} \; \rho(S_{a,b} - K^*K)
\quad \text{subject to} \quad 2S_{a,b} - K^*K \geq 0. \tag{4.9}
\]
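A brute-force sketch of (4.9) is given below. It relies on the common cosine eigenbasis (4.10) introduced next, under which (we assume) a centered symmetric mask V has eigenvalues $\sum_{k,l} v_{k,l} \cos(km\pi h)\cos(ln\pi h)$; the search grids and the reduced resolution L = 64 are our illustrative choices.

```python
import numpy as np

def symbol(mask, m, n, h):
    """Eigenvalue of a centered symmetric mask on cos(m*pi*x)cos(n*pi*y)."""
    r = mask.shape[0] // 2
    k = np.arange(-r, r + 1)
    return (mask * np.outer(np.cos(k * m * np.pi * h),
                            np.cos(k * n * np.pi * h))).sum()

def solve_4_9(mask, L=64):
    """Grid search for (a, b)* of (4.9); returns (objective, a, b)."""
    h = 1.0 / L
    mn = np.arange(1, L + 1)
    lamK = np.array([[symbol(mask, m, n, h) for n in mn] for m in mn])
    cm = np.cos(mn * np.pi * h)
    C = cm[:, None] + cm[None, :]              # cos(m*pi*h) + cos(n*pi*h)
    best = None
    for a in np.linspace(0.2, 1.2, 51):
        for b in np.linspace(0.0, 0.4, 41):
            lamS = a + 2.0 * b * C             # eigenvalues of the mask (4.7)
            if (2.0 * lamS - lamK**2).min() < 0:
                continue                       # violates 2S - K*K >= 0
            rho = np.abs(lamS - lamK**2).max() # spectral radius of S - K*K
            if best is None or rho < best[0]:
                best = (rho, a, b)
    return best
```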

In our examples, the common eigenbasis of $S_{a,b}$ and K is given by
\[
e_{m,n}(x_k, y_l) = \cos(m \pi x_k) \cos(n \pi y_l), \quad m, n = 1, \cdots, L. \tag{4.10}
\]
It is straightforward to compute the corresponding eigenvalues of $(S_{a,b} - K^*K)$ and the numerical solution of (4.9) by varying a and b over a suitable range. The optimal $(a, b)^*$, together with the critical values of ε∗, δ∗ and γ∗, are given in Table 4.1. It is not clear a priori why (4.9) should result in better performance. Nevertheless, we find from our experiments that $S_{(a,b)^*}$ indeed performs no worse than diagonal S and can be significantly faster in some cases. See Example 2 in section 4.4 for more details.

4.3. The Stopping Criterion and Acceleration Technique. The outer iteration is halted upon relative decrease of the (normalized) residual by a factor of $10^{-4}$, namely, when
\[
\frac{\|r^{(N)}\|_{L^2}}{\|r^{(1)}\|_{L^2}} \leq 10^{-4}
\]
is reached. Here the normalized residual is given by

\[
r^{(s+1)} = \left( D^{(s+1)} \right)^{-1} \left[ (\alpha L(v^{(s+1)}) + S)\, v^{(s+1)} + (K^*K - S)\, v^{(s+1)} - K^*z \right], \tag{4.11}
\]
where $v^{(s+1)}$ is an approximate solution of (2.7) obtained through one V-cycle iteration (see also Example 1 in section 4.4) and $D^{(s+1)} = \mathrm{diag}(\alpha L(v^{(s+1)}) + S)$. Note that the larger entries of the normalizing factor $(D^{(s+1)})^{-1}$ correspond to regions where $v^{(s+1)}$ is less smooth. It amplifies the (unnormalized) residual in regions where $v^{(s+1)}$ either has a jump or requires further denoising, and therefore does a better job of measuring the image quality. Numerical experiments confirm that the normalized residual is indeed a good indicator of the quality of denoising and deblurring.
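A sketch of (4.11) follows; note that the S-terms cancel, leaving the residual of (2.4) evaluated at $v^{(s+1)}$. The helper names are the hypothetical matrix-free actions used in the earlier sketches, and D_diag is assumed to be extracted from the assembled matrix of the last inner solve.

```python
import numpy as np

def normalized_residual(v, z, apply_K, apply_Kt, apply_L_at, D_diag, alpha):
    """Normalized residual (4.11); S cancels between the two v-terms."""
    raw = alpha * apply_L_at(v)(v) + apply_Kt(apply_K(v) - z)
    return raw / D_diag            # larger weight where v is less smooth
```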

To accelerate and stabilize the outer iteration, we have incorporated the Krylov subspace method [6, 10, 19] in our implementation. The approximate solution is optimized every p steps on a subspace of dimension M ≤ p. To be more precise, we take two fixed integers 0 < M ≤ p and, for any integer n > 0, let
\[
\tilde{u}(c_1, \cdots, c_M) = u^{(pn)} + \sum_{m=1}^{M} c_m \left( u^{(pn+1-m)} - u^{(pn-m)} \right). \tag{4.12}
\]
The residual of $\tilde{u}(c_1, \cdots, c_M)$ can be approximated by
\[
\tilde{r}(c_1, \cdots, c_M) \stackrel{\text{def}}{=} r^{(pn)} + \sum_{m=1}^{M} c_m \left( r^{(pn+1-m)} - r^{(pn-m)} \right). \tag{4.13}
\]
One then minimizes $\|\tilde{r}(c_1, \cdots, c_M)\|_{L^2}$ with respect to $(c_1, \cdots, c_M)$ to get
\[
\min_{c_1, \cdots, c_M} \|\tilde{r}(c_1, \cdots, c_M)\|_{L^2} = \|\tilde{r}(c_1^*, \cdots, c_M^*)\|_{L^2}, \tag{4.14}
\]
and reset $u^{(pn)}$ to $\tilde{u}(c_1^*, \cdots, c_M^*)$ before going to the (pn+1)-th outer iteration. We will show that the Krylov subspace acceleration indeed gives rise to significant speedup and stabilization. See section 4.4, Example 2 for details.
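The minimization (4.14) is a small linear least squares problem in $(c_1, \cdots, c_M)$. A sketch of the extrapolation step, with illustrative history lists of flattened iterates and residuals (the function name is ours):

```python
import numpy as np

def krylov_update(u_hist, r_hist, M=2):
    """Extrapolated iterate (4.12): u_hist/r_hist hold the last M+1 iterates and
    residuals as flattened arrays, oldest first; c* solves the least squares (4.14)."""
    du = np.stack([u_hist[-m] - u_hist[-m - 1] for m in range(1, M + 1)], axis=1)
    dr = np.stack([r_hist[-m] - r_hist[-m - 1] for m in range(1, M + 1)], axis=1)
    c, *_ = np.linalg.lstsq(dr, -r_hist[-1], rcond=None)   # minimize ||r + dr @ c||
    return u_hist[-1] + du @ c
```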

4.4. Numerical Results. To generate an observed image z, Gaussian white noise with mean 0 and variance σ is added to the blurred image. The values of σ, together with the signal to noise ratio (SNR)
\[
SNR = \frac{\|Ku - z\|_{L^2}}{\|u\|_{L^2}} \tag{4.15}
\]
and the signal to blur ratio (SBR)
\[
SBR = \frac{\|Ku - u\|_{L^2}}{\|u\|_{L^2}}, \tag{4.16}
\]
are given in the captions of each example for the readers' convenience.

In all our examples, we take β = 1.0 × 10⁻¹² and start the iteration with the observed image u⁽⁰⁾ = z. We choose S according to (4.6) with γ = 1.0, except in Example 2, where we demonstrate the necessity of the linear stabilizing term and the overall efficiency of our scheme by varying γ.

Example 1 (Convergence Factor for the V-cycle). In all our simulations, it is enough to apply a single V-cycle in the AMG step with the Gauss-Seidel iteration as the smoother. Table 4.2 shows the convergence factors in the outer iterations of a typical simulation. It is clear that our AMG algorithm is efficient and suitable for elliptic equations with non-smooth diffusion coefficients.

Example 2 (The Linear Stabilizing Term and Krylov Acceleration). The rate of convergence of the outer iteration can be significantly improved by the Krylov subspace technique. In our experiments, we apply the Krylov acceleration procedure every 4 steps with the Krylov dimension M = 1 or 2. The result is given in Table 4.3. In general, the Krylov method with M = 2 is slightly better than M = 1. The total number of iterations is reduced to about 50% for Model I and 33% for Model II with M = 2. The overhead for the Krylov extrapolation step is negligible, as only simple algebraic operations are involved. The results demonstrate that the Krylov acceleration method is an efficient way to accelerate the outer iteration. Unless


Table 4.2
ρ_A, the convergence factor of the AMG method in each outer iteration (average value = 0.051). This result is for Model I and blur (2) with α = 1.2, σ = 19.81, SNR = 25.62% and SBR = 27.01%.

iteration step   1      2      3      4      5      6      7      8
ρ_A              0.020  0.064  0.059  0.055  0.066  0.053  0.052  0.050
iteration step   9      10     11     12     13     14     15     16
ρ_A              0.085  0.060  0.055  0.053  0.127  0.043  0.038  0.035
iteration step   17     18     19     20     21     22     23     24
ρ_A              0.067  0.036  0.034  0.036  0.084  0.049  0.044  0.044

Table 4.3
Number of outer nonlinear iterations N needed with Krylov acceleration. M = 0 means the Krylov acceleration is not activated. Here α = 0.05, σ = 29.72 for Model I and α = 2.5, σ = 89.16 for Model II. The CPU time is measured on an Intel Celeron-M 370 (1.5GHz, 1MB cache) processor.

Blur   Model   M   N     CPU time
1      I       0   40    32.09s
               1   27    24.12s
               2   21    19.45s
       II      0   74    54.90s
               1   28    26.01s
               2   24    21.05s
2      I       0   62    43.50s
               1   28    22.06s
               2   25    19.95s
       II      0   111   71.83s
               1   37    26.96s
               2   33    24.47s
3      I       0   55    53.13s
               1   40    43.29s
               2   27    30.51s
       II      0   110   105.68s
               1   37    39.50s
               2   33    35.84s

otherwise specified, we have used the Krylov acceleration method with p = 4 and M = 2 in our examples.

We continue our numerical experiments by varying the parameter γ and comparing the performance. The number of iterations for a typical example is shown in Table 4.4.

We make a few remarks on the result. First of all, in the second part of (2.17), we have neglected the αL⁽∞⁾ term to obtain the sufficient condition (2.18), from which we further derived the critical value γ∗ in (4.6). We therefore expect γ∗ to be slightly overestimated. In other words, the outer iteration should converge for γ ≥ γ∗, but not necessarily for γ < γ∗. This is in accordance with our numerical results. See the M = 0 rows of Table 4.4.

Secondly, the Krylov subspace technique helps to stabilize the outer iteration for γ < γ∗, but eventually fails if γ becomes too small. This result demonstrates the necessity of the linear stabilizing term.

Table 4.4
Number of outer iterations needed for different values of γ and for S_{(a,b)∗}. This result is for Blur (3) with corresponding γ∗ = 0.93. The parameters α and σ are the same as in Table 4.3. The numbers in the last column are those for S = S_{(a,b)∗} as described in section 4.2. Here (a, b)∗ = (0.65, 0.15), and ∞ indicates divergence.

                                    γ
Model   M    0.2   0.4   0.6   0.8   0.9   1.2   2.0   4.0   10.0   S_{(a,b)∗}
I       0    ∞     ∞     ∞     ∞     53    60    81    136   297    46
        1    ∞     ∞     148   50    48    28    32    48    93     23
        2    ∞     ∞     37    28    28    27    29    41    78     22
II      0    ∞     ∞     109   109   109   113   119   131   162    109
        1    ∞     39    37    36    36    37    40    44    65     36
        2    ∞     34    32    32    33    33    35    42    57     32

The combination of γ ≈ γ∗ and M = 2 gives the optimal result among the possible choices of γ and M in general. The number of iterations is insensitive to the variation of γ near γ∗, and grows gradually as γ increases. For simplicity of presentation, we take γ = 1 and M = 2 in all other numerical examples.

We have also included the results for S = S_{(a,b)∗} in Table 4.4 for comparison. In general, S = S_{(a,b)∗} performs no worse than (4.6) with γ = γ∗. In some cases, S = S_{(a,b)∗} can be up to 25% faster.

Example 3 (Recovering Images over Various Parameter Regimes). We continue our numerical experiments by varying the noise level and the parameter α. In Table 4.5, we have experimented to find the α∗ that results in approximately the optimal image quality at the prescribed noise level σ. Since α∗ is not known a priori, we further tested our algorithm by taking α different from α∗; we denote the corresponding numbers of iterations by N(2α∗), N(4α∗), etc. The results show that our scheme remains robust and efficient over a wide range of parameters.

The optimal value α∗ is problem dependent. Empirically, α∗ is smaller if the noise is weak or there are small structures present in the image. In Fig. 4.2 and Fig. 4.3, we display the recovered images from the strong blur operator (3) at different noise levels with α suitably chosen. In Fig. 4.4, we show the recovered images with different α for comparison.

Fig. 4.2. Noisy and blurred (left), and restored (right) images of Model I and blur operator (3), α = 0.005, σ = 9.91, SNR = 12.81% and SBR = 45.32%.

Example 4 (Pure Deblurring: Iterative Method vs. Direct Method). From our numerical experiments, we find that the algorithm (2.7) still converges for pure deblurring problems, even with strong blur operators. One only needs to take α small


Fig. 4.3. Noisy and blurred (left), and restored (right) images of Model II and blur operator (3), α = 2.0, σ = 94.52, SNR = 56.51% and SBR = 45.41%.

Fig. 4.4. Noisy and blurred image (left), restored image with α = 0.1 (middle), and restored image with α = 1.0 (right) for Model II and blur operator (2), σ = 39.62, SNR = 23.79% and SBR = 22.21%.

enough to preserve the detailed information in the image. Fig. 4.5 shows that a blurred satellite image (Model I) is successfully recovered by taking α = 1.0 × 10⁻¹⁰ and applying our algorithm (2.7).

In addition, if the blur operator is mild, our AMG method can be used to solve Ku = z directly. We show in Fig. 4.6 the blurred image and the images restored by both methods. It is clear that both methods produce satisfactory numerical results for the pure deblurring problem.

5. Conclusion. In this paper, we propose a new algorithm for total variation based image deblurring and denoising. Through formal convergence analysis, we derived an explicit formula for a linear stabilizing term. Together with Krylov subspace extrapolation and a special version of the Algebraic Multigrid method, the combined numerical scheme is efficient and robust over a wide range of noise levels and various blur operators. Moreover, the AMG method we use is shown to be stable enough to invert some pure blurring problems directly.

Acknowledgments. The authors would like to thank Professor Tony Chan for some valuable suggestions. This work was initiated when Chang visited Taiwan with support from NCTS of Taiwan. In addition, Chang and Xu are supported in part by NSFC of China. Chien and Wang are sponsored in part by NSC of Taiwan.


Fig. 4.5. Blurred (left) and restored (right) images of Model I and blur operator (2), α = 1.0 × 10⁻¹⁰, SNR = 0 and SBR = 27.01%.

Table 4.5
Number of iterations needed for Model II, blur (2) with various α and σ.

σ          2.0     8.0    16.0   32.0   64.0   128.0
α∗         0.0062  0.07   0.25   0.64   1.36   3.0
N(10α∗)    62      84     69     53     12     7
N(4α∗)     36      64     68     54     40     9
N(2α∗)     30      46     56     53     41     17
N(α∗)      31      38     40     44     37     25
N(α∗/2)    36      32     33     32     29     24
N(α∗/4)    44      28     28     28     25     21
N(α∗/10)   56      29     26     25     25     22

REFERENCES

[1] R. Acar and C. R. Vogel, Analysis of total variation penalty methods for ill-posed problems, Inverse Problems, 10 (1994), pp. 1217–1229.

[2] L. Alvarez, P.-L. Lions, and J.-M. Morel, Image selective smoothing and edge detection by nonlinear diffusion II, SIAM J. Numer. Anal., 29 (1992), pp. 845–866.

[3] C. A. Z. Barcelos and Y. Chen, Heat flow and related minimization problem in image restoration, Computers and Mathematics with Applications, 39 (2000), No. 5-6, pp. 81–97.

[4] G. Barles and P. E. Souganidis, Convergence of approximation schemes for fully nonlinear second order equations, Asymptotic Analysis, 4 (1991), pp. 271–283.

[5] P. Blomgren and T. Chan, Modular solvers for image restoration problems using the discrepancy principle, Numer. Linear Algebra Appl., 9 (2002), pp. 347–358.

[6] A. Brandt, and V. Mikulinsky, On recombining iterants in multigrid algorithms and prob-lems with small islands, SIAM J. Sci. Comput., 16(1995), pp. 20–28.

[7] R. Chan, T. Chan and C. Wong, Cosine transform based preconditioners for total variation deblurring, IEEE Trans. Image Processing, 8 (1999), pp. 1472–1478.

[8] R. Chan, T. Chan, and H. Zhou, Advanced signal processing algorithms, In proceedings of the International Society of Photo-Optical Instrumentation Engineers, F. Luk, ed., SPIE, 1995, pp. 314–325.

[9] T. F. Chan, G. H. Golub, and P. Mulet, A nonlinear primal-dual method for total variation-based image restoration, SIAM J. Sci. Comput., 20(1999), pp. 1964–1977.

[10] Q. Chang and I. Chern, Acceleration methods for total variation-based image denoising, SIAM J. Sci. Comput. 25 (2003) pp. 983–994.

[11] Q. Chang and Z. Huang, Efficient algebraic multigrid algorithms and their convergence, SIAM J. Sci. Comput. 24 (2002) pp. 597–618.

[12] Q. Chang, S. Ma and G. Lei, Algebraic multigrid method for queuing networks, Int. J. of Computer Math., 70(1999), pp. 539–552.

[13] Q. Chang, Y. S. Wong and H. Fu, On the algebraic multigrid method, J. Comput. Phys., 125 (1996), pp. 279–292.

[14] V. E. Henson and P. S. Vassilevski, Element-free AMGe: General algorithms for computing interpolation weights, SIAM J. Sci. Comput., 23 (2001), pp. 629–650.

Fig. 4.6. Blurred image (top), restored image with α = 1.0 × 10⁻¹⁰ (bottom left), and restored image with AMG (bottom right) for Model I and blur operator (1), SNR = 0 and SBR = 47.73%.

[15] Y. Li and F. Santosa, An affine scaling algorithm for minimizing total variation in image enhancement, Tech. Report 12/94, Center for Theory and Simulation in Science and Engineering, Cornell University, 1994.

[16] S. McCormick, Multigrid methods, SIAM, Philadelphia, 1987.

[17] M. Ng, R. Chan, and W. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput., 21 (1999), pp. 851–866.

[18] M. E. Oman, Fast multigrid techniques in total variation-based image reconstruction, in Proceedings of the 1995 Copper Mountain Conference on Multigrid Methods, 1995.

[19] C. W. Oosterlee and T. Washio, Krylov subspace acceleration of nonlinear multigrid with application to recirculating flows, SIAM J. Sci. Comput., 21 (2000), pp. 1670–1690.

[20] S. Osher and L. Rudin, Feature-oriented image enhancement using shock filters, SIAM J. Numer. Anal., 27 (1990), pp. 919–940.

[21] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60(1992), pp. 259–268.

[22] J. Ruge and K. Stüben, Algebraic multigrid, in Multigrid Methods (S. F. McCormick, ed.), 4, SIAM, Philadelphia, 1987, pp. 73–130.

[23] P. S. Vassilevski and J. G. Wade, A comparison of multilevel methods for total variation regularization, Electronic Transactions on Numerical Analysis, 6 (1997), pp. 255–280.

[24] C. R. Vogel, A multigrid method for total variation-based image denoising, In Computation and Control IV, Progr. Systems Control Theory 20, Birkhauser, Boston, MA, 1995, pp. 323–331.

[25] C. R. Vogel, and M. E. Oman, Iterative methods for total variation denoising, SIAM J. Sci. Comput., 17(1996), pp. 227–238.

[26] C. R. Vogel and M. E. Oman, Fast, robust total variation-based reconstruction of noisy, blurred images, IEEE Trans. Image Processing, 7 (1998), pp. 813–824.

[27] Y. Zhou, Applications of discrete functional analysis to the finite difference method, Interna-tional Academic Publishers, Beijing, 1991.

