
Department of Applied Mathematics
Institute of Mathematical Modeling and Scientific Computing

Master's Thesis

A Numerical Study of A.D.I. Methods for Anisotropic Diffusion Problems

Student: Yu-Chen Chen


A Numerical Study of A.D.I. Methods for Anisotropic Diffusion Problems

Student: Yu-Chen Chen
Advisor: Ming-Chih Lai

National Chiao Tung University
Institute of Mathematical Modeling and Scientific Computing, Department of Applied Mathematics

A Thesis Submitted to the Department of Applied Mathematics, College of Science, Institute of Mathematical Modeling and Scientific Computing, National Chiao Tung University, in Partial Fulfillment of the Requirements for the Degree of Master in Applied Mathematics

September 2009


Acknowledgments

From my undergraduate years through graduate school, it has been a long road of study, with its share of difficulties, bottlenecks, and even confusion. Now that I have obtained my degree as I hoped, I must first thank my parents for their support and encouragement: for patiently waiting for me to reach the achievements I expected of myself, for offering advice and reminders at the right moments, and for accompanying me toward my goals. I also thank my younger brother and sister for cheering me on, and I wish them a journey through school as smooth as mine.

I further thank Professor Ming-Chih Lai for tirelessly pointing the way along the road of research. Whether in discussion or in class, he expected his students to cultivate insight into problems, urging us to spend more thought on a problem, to understand it thoroughly, and then to solve it actively. He always asked us to explain clearly whatever we did not understand, behind which lies much training we must pursue on our own. I keep his words in mind and will strive to put them into practice.

I am fortunate to have had a warm research team. I thank my classmates 哲維, 義閔, 永潔, and 露結 for looking after me in class and studying alongside me; I will remember the moments of brainstorming over homework together. I thank my office mates 昆霖, 裕昇, 州哥, and 秉恆: with you the office was cheerful and relaxed, giving me a buffer in the tight life of a graduate student. I will remember our talks about the financial crisis, watching soccer, and our own special topic of growing plants. I also thank 阿華田, 油油, 天兵, 松育, 士慶, 慧棻, 彥寧, 新同學, 文昱, and 瑞毅 of the combinatorics and analysis groups for their everyday care. I thank my senior labmates 小豪, 仲尹, 鈺傑, 先皓, and 麻將 for their great help during the writing of the thesis and the oral defense, which gave me much confidence. I especially thank 慧棻 and 仲尹 for their help and support during the writing, including the tedious proofreading and revision; thank you for your hard work.

During the preparation for the oral defense, I thank Professor Shu-Ming Chang, Professor Tsung-Ming Huang, and Professor Ming-Chih Lai for their guidance amid busy schedules and for reviewing the thesis, making it more complete and rigorous. I also thank them for their corrections, criticism, and affirmation during the defense.

I also thank Professors Hung-Lin Fu and Chiuyuan Chen of the department for their concern, which filled my studies with encouragement. Finally, I thank coach 黃杉盈 of the baseball team for seven years of instruction: besides baseball, he taught me many ways of seeing things and of dealing with people. The ball field was where I released pressure and another source of achievement. I thank 邱毛, 浩呆, 小吉, Coach Chen, Coach Li, and Coach Cheng, who taught me to play, for letting me learn more; the hard-working juniors who ran errands for me, picking up balls, carrying water, moving equipment, and tidying the field; and my teammates 范銘隆 and 拉紐, who fought alongside me like madmen to carry a stretch of NCTU Division-B baseball history.


A Numerical Study of A.D.I. Methods for Anisotropic Diffusion Problems

Student: Yu-Chen Chen    Advisor: Prof. Ming-Chih Lai

Institute of Mathematical Modeling and Scientific Computing, Department of Applied Mathematics, National Chiao Tung University

Abstract (in Chinese)

Anisotropic diffusion is a microscopic phenomenon that plays an important role in many fields of scientific application. This thesis applies the A.D.I. method to anisotropic diffusion equations. Besides discussing the order of accuracy and the unconditional stability of the A.D.I. method, we compare its numerical results with those obtained by iterative methods. Since the diffusion coefficients of the anisotropic diffusion equation may be of constant or variable type, the linear systems obtained after discretization are likewise of constant-coefficient or variable-coefficient type; we treat them with the C.G. method and the B.I.C.G. method, two commonly used iterative methods, respectively. Owing to the special structure of its linear systems, the A.D.I. method is far faster in computation than the iterative methods.


A Numerical Study of A.D.I. Methods for Anisotropic Diffusion Problems

Student : Yu-Chen Chen

Advisor : Ming-Chih Lai

Institute of Mathematical Modeling and Scientific Computing,

National Chiao Tung University,

1001 Ta Hsueh Road, Hsinchu 30050,

Taiwan.

Abstract

Anisotropic diffusion is a kind of microscopic phenomenon. It plays an important role in many scientific applications. We use the A.D.I. method, which is efficient, to solve anisotropic diffusion problems. We study the order of accuracy and the unconditional stability of the method, and compare it with preconditioned iterative methods. Since the diffusivity of anisotropic diffusion equations can be of constant or variable type, we choose the conjugate gradient method to deal with the constant-type equation and the biconjugate gradient method to solve the general type. Because of the special structure of the linear systems, the A.D.I. method outperforms the iterative methods in CPU time.

Keywords: A.D.I. method; conjugate gradient method; biconjugate gradient method; Dirichlet boundary condition; anisotropic diffusion problems; preconditioning; iterative methods

Contents

Abstract (in Chinese)
Abstract (in English)
Contents
List of Figures
1 Introduction
2 The Alternating Direction Implicit method (A.D.I.)
  2.1 The anisotropic diffusion problems
  2.2 Discretization of A.D.I. method
  2.3 Stability for A.D.I. method
    2.3.1 von Neumann analysis for the anisotropic diffusion problems
    2.3.2 von Neumann analysis for general anisotropic diffusion problems
  2.4 Apply A.D.I. method to anisotropic diffusion problems
3 Conjugate gradient method for anisotropic diffusion problems
  3.1 Steepest Descent method
  3.2 Introduction of conjugate gradient method (C.G. method)
  3.3 Implementation of the conjugate gradient method for anisotropic diffusion problems
    3.3.1 Discretization by Crank-Nicolson scheme
    3.3.2 The biconjugate gradient method
    3.3.3 The preconditioned conjugate gradient method
    3.3.4 The preconditioned biconjugate gradient method
4 Numerical results
  4.1 Numerical results for heat equations
  4.2 Comparison of iterative methods and the A.D.I. method
  4.3 Conclusion

List of Figures

1 The quadratic form with a positive definite matrix has a minimal extreme value.
2 We use a regular uniform domain and collect the columns into a new column vector.
3 Comparison of order of accuracy for cases 1, 2, 3, respectively. (The y-axis is order of accuracy; the x-axis is number of grid points.)
4 CPU time of cases 1, 2, 3, respectively.
5 Order of accuracy of cases 4, 5, respectively. (The y-axis represents order of accuracy.)
6 CPU time of cases 4, 5, respectively.
7 A case satisfying both the heat and anisotropic diffusion settings; we fit curves to the data solved by the A.D.I. and iterative methods and compare the CPU time.

1 Introduction

As is known, anisotropic diffusion [5] is a kind of microscopic phenomenon arising from random thermal motion on a microscopic scale. There are many applications of anisotropic diffusion in physics, chemistry, biotechnology, and engineering. It is a net transport of particles from higher concentration to lower concentration; because of anisotropic diffusion, materials mix gradually. The mixing process, which can be described by Fick's law, tends toward the equilibrium state. Solving anisotropic diffusion problems can help us simulate the whole behavior of diffusion.

In section 2, we introduce the alternating direction implicit (A.D.I.) method [9], a finite-difference implicit-type algorithm. The A.D.I. method has the advantage of a more efficient formulation and calculation than other implicit methods in the case of multidimensional problems. We discretize the anisotropic diffusion equation by the Crank-Nicolson scheme and split the finite-difference time-domain problem by the Douglas-Rachford method, which is a kind of A.D.I. method. Using Taylor series, we discuss the order of accuracy of the A.D.I. method, which is first order in time and second order in space. Next, we illustrate that the two-level finite-difference method is unconditionally stable.

Next, we consider direct methods for solving large sparse linear systems of equations. Direct methods take too much time and memory in the computing process. In section 3, the conjugate gradient method [7] is regarded as an iterative method to solve the anisotropic diffusion problems. When the diffusivity is of constant type and the matrix of the linear system is symmetric positive definite, the problem is dealt with by the conjugate gradient (C.G.) method. If the diffusivity is of variable type, we employ the biconjugate gradient (B.I.C.G.) method [12]. Both B.I.C.G. and C.G. are iterative methods, but the first does not take advantage of the symmetry property and requires the transpose of the original matrix; in other words, B.I.C.G. is more general than the C.G. method. Generally speaking, one combines iterative methods with preconditioning if the matrix is ill-conditioned, since C.G. and B.I.C.G. are highly susceptible to rounding errors. We use incomplete LU factorization [15], which is often used as a preconditioner, especially for large sparse matrices.

The last section contains numerical results from implementing the A.D.I., C.G., and biconjugate gradient methods. First, we consider a heat equation, which is a simpler problem than the anisotropic diffusion problem. The numerical results show that the A.D.I., C.G., and B.I.C.G. methods converge with second order accuracy. In addition, we record the CPU time for these methods and notice that A.D.I. is more efficient than the iterative methods. Finally, we apply our numerical methods to the anisotropic diffusion problems and draw a conclusion by comparing the numerical results.

2 The Alternating Direction Implicit method (A.D.I.)

In section 2, we first describe a standard anisotropic diffusion problem. Second, we introduce an efficient and simple numerical method for solving parabolic equations, especially on regular domains. Using Taylor series, we show that the scheme is consistent with the partial differential equation; the accuracy is first order in time and second order in space. Third, we illustrate the implementation of the A.D.I. method. We demonstrate some numerical results in the last section and give a concise conclusion.

2.1 The anisotropic diffusion problems

Anisotropic diffusion problems arise in a widespread range of scientific fields such as oil-reservoir simulation, plasma physics, image processing, semiconductor modeling, biology, and hydro-geology. Numerical simulation plays an important role in a wide range of applications. When implementing various numerical methods on this type of problem, one needs to find an approximation of u, the solution of the following two-dimensional anisotropic diffusion equation [9]:

$$\frac{\partial u}{\partial t} = \nabla \cdot (\beta \nabla u) \quad \text{in } \Omega \times (0, T] \tag{2.1}$$

with the initial condition

$$u(x, y, 0) = u_0(x, y), \quad (x, y) \in \Omega,$$

and the Dirichlet boundary condition

$$u(x, y, t) = g(x, y, t), \quad (x, y) \in \partial\Omega,\ t \in (0, T],$$

where

$$\beta = \begin{pmatrix} \tilde a(x,y) & \tilde b(x,y) \\ \tilde b(x,y) & \tilde c(x,y) \end{pmatrix}$$

is a $2 \times 2$ symmetric coefficient matrix, subject to $\tilde a > 0$, $\tilde c > 0$, and $\tilde b^2 - \tilde a \tilde c < 0$.

2.2 Discretization of A.D.I. method

As noted before, the anisotropic diffusion problems are difficult and expensive when solved by all kinds of direct or iterative methods; they may cost much time and memory at each time step. If we want to solve the problems efficiently, expensive methods are not suitable. The A.D.I. method is an approach which reduces a two-dimensional problem to a succession of systems of one-dimensional problems. Now we start by expanding Eqn. (2.1) as follows:

$$\frac{\partial u}{\partial t} = \nabla \cdot (\beta \nabla u) = \nabla \cdot \left( \begin{pmatrix} \tilde a & \tilde b \\ \tilde b & \tilde c \end{pmatrix} \begin{pmatrix} u_x \\ u_y \end{pmatrix} \right) = \nabla \cdot \begin{pmatrix} \tilde a u_x + \tilde b u_y \\ \tilde b u_x + \tilde c u_y \end{pmatrix}.$$

Expanding the above equation yields four terms:

$$\frac{\partial u}{\partial t} = a u_{xx} + b u_{xy} + b u_{yx} + c u_{yy}.$$

Notice that the right-hand side includes the pure second derivatives and the two mixed terms $u_{xy}$ and $u_{yx}$. To simplify the equation, we assume $u \in C^2$, so that $u_{xy} = u_{yx}$. Thus we obtain

$$u_t = a u_{xx} + 2b u_{xy} + c u_{yy}. \tag{2.2}$$

At the beginning of the discretization, we omit the mixed derivative term to reduce the difficulty of designing the scheme; we therefore consider the heat equation first and look for a higher-order approximation of the time derivative. The Crank-Nicolson scheme is a traditional scheme with the high order of accuracy we want. We start with the formula for $u_t$ evaluated at $t^{n+1/2}$. Using Taylor series, we obtain the two equations (2.3) and (2.4):

$$u^{n+1} = u^{n+1/2} + \frac{k}{2} u_t + \frac{k^2}{8} u_{tt} + O(k^3), \tag{2.3}$$

$$u^{n} = u^{n+1/2} - \frac{k}{2} u_t + \frac{k^2}{8} u_{tt} + O(k^3). \tag{2.4}$$

We combine equations (2.3) and (2.4) so that the second time derivative vanishes. Therefore we have the second-order accurate formula (2.5), which we use to approximate the time derivative:

$$u_t = \frac{u^{n+1} - u^n}{k} + O(k^2). \tag{2.5}$$

We proceed to approximate the derivatives with respect to space. Using Taylor series, we have

$$u_{i+1} = u_i + h u_x + \frac{h^2}{2} u_{xx} + \frac{h^3}{6} u_{xxx} + O(h^4), \tag{2.6}$$

$$u_{i-1} = u_i - h u_x + \frac{h^2}{2} u_{xx} - \frac{h^3}{6} u_{xxx} + O(h^4). \tag{2.7}$$

Summing eqns. (2.6) and (2.7) eliminates the first spatial derivative; dividing the sum by $h^2$ produces eqn. (2.8), which converges with second order accuracy. Based on the same technique, we derive eqn. (2.9):

$$u_{xx} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} + O(h^2), \tag{2.8}$$

$$u_{yy} = \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2} + O(h^2). \tag{2.9}$$

Next, we derive the finite difference scheme for the first spatial derivatives in the x and y directions. We subtract (2.7) from (2.6) to keep the term $u_x$ and divide by $2h$, obtaining eqn. (2.10); similarly, we produce the approximation (2.11) for the first derivative in the y direction:

$$u_x = \frac{u_{i+1,j} - u_{i-1,j}}{2h} + O(h^2), \tag{2.10}$$

$$u_y = \frac{u_{i,j+1} - u_{i,j-1}}{2h} + O(h^2). \tag{2.11}$$

To simplify, we introduce some notation. Let $\delta_x^2$, $\delta_y^2$, $H_x$, and $H_y$ be linear operators [10]; for convenience, we define

$$\delta_x^2 u = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}, \qquad \delta_y^2 u = \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2},$$

$$H_x u = \frac{u_{i+1,j} - u_{i-1,j}}{2h}, \qquad H_y u = \frac{u_{i,j+1} - u_{i,j-1}}{2h}.$$
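To fix ideas, the following minimal Python sketch (ours; the thesis contains no code, and the function names are hypothetical) applies these operators to the interior of a uniform grid:

```python
import numpy as np

# Difference operators on the interior of a uniform (N+1)x(N+1) grid,
# where u[i, j] ~ u(ih, jh); a sketch, not code from the thesis.
def d2x(u, h):   # delta_x^2 u
    return (u[2:, 1:-1] - 2*u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2

def d2y(u, h):   # delta_y^2 u
    return (u[1:-1, 2:] - 2*u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2

def HxHy(u, h):  # H_x H_y u, the centered mixed difference
    return (u[2:, 2:] - u[:-2, 2:] - u[2:, :-2] + u[:-2, :-2]) / (4*h**2)
```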

For the approximation in space, we employ the same idea used in the Crank-Nicolson scheme at $t^{n+1/2}$. Thus we obtain

$$u^{n+1} - u^n = \frac{ka}{2}\left(\delta_x^2 u^{n+1} + \delta_x^2 u^n\right) + \frac{kc}{2}\left(\delta_y^2 u^{n+1} + \delta_y^2 u^n\right) + O(k^3 + kh^2). \tag{2.12}$$

It is quite difficult to solve the resulting block tri-diagonal linear system by direct methods. However, based on formula (2.12) and the identity $(1+a)(1+b) = 1+a+b+ab$, we add $\frac{k^2ac}{4}\delta_x^2\delta_y^2 u^{n+1}$ to both sides of the above equation and rewrite it as

$$\left(1 - \frac{ka}{2}\delta_x^2\right)\left(1 - \frac{kc}{2}\delta_y^2\right)u^{n+1} = \left(1 + \frac{ka}{2}\delta_x^2\right)\left(1 + \frac{kc}{2}\delta_y^2\right)u^n + \frac{k^2ac}{4}\delta_x^2\delta_y^2\left(u^{n+1} - u^n\right) + O(k^3 + kh^2). \tag{2.13}$$

Since $u^{n+1} - u^n = k u_t + O(k^3)$, the factor $k^2$ makes the second term on the right of (2.13) a higher-order term. We can omit it without affecting the order of accuracy of the original numerical scheme. This completes the discretization of the heat equation:

$$\left(1 - \frac{ka}{2}\delta_x^2\right)\left(1 - \frac{kc}{2}\delta_y^2\right)u^{n+1} = \left(1 + \frac{ka}{2}\delta_x^2\right)\left(1 + \frac{kc}{2}\delta_y^2\right)u^n + O(k^3 + kh^2). \tag{2.14}$$

Now we deal with the mixed derivative. According to the Douglas-Rachford method [9], we put the mixed term on the right-hand side of eqn. (2.14); in other words, we treat it as information that we already know:

$$u_{xy} = \frac{1}{2h}\left[\frac{u_{i+1,j+1} - u_{i+1,j-1}}{2h} - \frac{u_{i-1,j+1} - u_{i-1,j-1}}{2h}\right] + O(h) = \frac{1}{4h^2}\left[u_{i+1,j+1} - u_{i+1,j-1} - u_{i-1,j+1} + u_{i-1,j-1}\right] + O(h) = H_x H_y u + O(h).$$

Inserting the mixed derivative term into eqn. (2.14), we have

$$\left(1 - \frac{ka}{2}\delta_x^2\right)\left(1 - \frac{kc}{2}\delta_y^2\right)u^{n+1} = \left(1 + \frac{ka}{2}\delta_x^2\right)\left(1 + \frac{kc}{2}\delta_y^2\right)u^n + 2bk\,H_xH_y u^n + O(k^3 + kh^2 + kh). \tag{2.15}$$

The scheme is thus consistent with the anisotropic diffusion equation; the order of accuracy is first order in time and second order in space.

We split the above equation into a two-level time-domain scheme by the Douglas-Rachford method:

$$\left(1 - \frac{1}{2}ka\,\delta_x^2\right)u^*_{i,j} = \left(1 + \frac{1}{2}ka\,\delta_x^2 + kc\,\delta_y^2 + 2kb\,H_xH_y\right)u^n_{i,j}, \tag{2.16}$$

$$\left(1 - \frac{1}{2}kc\,\delta_y^2\right)u^{n+1}_{i,j} = u^*_{i,j} - \frac{1}{2}kc\,\delta_y^2 u^n_{i,j}. \tag{2.17}$$

Notice that each step requires the value $u^*_{i,j}$, which we should regard as an intermediate or temporary value. The A.D.I. method handles intermediate values through the boundary condition [10]; the treatment of the temporary values is illustrated in the implementation of the A.D.I. method below.

2.3 Stability for A.D.I. method

The Douglas-Rachford method is unconditionally stable. This can easily be checked by von Neumann analysis for two-dimensional problems. Since von Neumann analysis cannot directly handle the variable-coefficient case, we treat the variable type as well as the constant type, referring to [13].

2.3.1 von Neumann analysis for the anisotropic diffusion problems

Before we study the stability of the A.D.I. method, we recall von Neumann analysis, which is used to check the stability of finite difference schemes. By von Neumann analysis, we can give necessary and sufficient conditions for the stability of the A.D.I. method.

We first consider the simple case whose coefficient matrix is of constant type. Applying von Neumann analysis to the two-dimensional equation [9], we replace $u^n_{i,j}$ by $\lambda^n e^{\mathrm{i}(i\theta + j\phi)}$, where $\theta = \eta h$, $\phi = \xi h$, $\eta$ and $\xi$ are arbitrary real numbers, $0 \le \theta, \phi \le \pi$, and $\lambda$ is the amplification factor. The method is stable if and only if $|\lambda| \le 1$. The difference operators act on such a mode by

$$\delta_x^2 u^n = \frac{e^{\mathrm{i}\theta} - 2 + e^{-\mathrm{i}\theta}}{h^2}\,u^n = -\frac{4}{h^2}\sin^2\frac{\theta}{2}\,u^n, \qquad \delta_y^2 u^n = \frac{e^{\mathrm{i}\phi} - 2 + e^{-\mathrm{i}\phi}}{h^2}\,u^n = -\frac{4}{h^2}\sin^2\frac{\phi}{2}\,u^n,$$

$$H_xH_y u^n = \frac{e^{\mathrm{i}\theta}e^{\mathrm{i}\phi} - e^{\mathrm{i}\theta}e^{-\mathrm{i}\phi} - e^{-\mathrm{i}\theta}e^{\mathrm{i}\phi} + e^{-\mathrm{i}\theta}e^{-\mathrm{i}\phi}}{4h^2}\,u^n = -\frac{1}{h^2}\sin\theta\sin\phi\,u^n.$$

Using the equations above, eqn. (2.15) can be rewritten as

$$\left(1 + \frac{2ak}{h^2}\sin^2\frac{\theta}{2}\right)\left(1 + \frac{2ck}{h^2}\sin^2\frac{\phi}{2}\right)\lambda = \left(1 - \frac{2ak}{h^2}\sin^2\frac{\theta}{2}\right)\left(1 - \frac{2ck}{h^2}\sin^2\frac{\phi}{2}\right) - \frac{2bk}{h^2}\sin\theta\sin\phi.$$

Then we have the absolute value of $\lambda$; replacing $k/h^2$ by $r$ to simplify,

$$|\lambda| = \frac{\left|\left(1 - 2ar\sin^2\frac{\theta}{2}\right)\left(1 - 2cr\sin^2\frac{\phi}{2}\right) - 2br\sin\theta\sin\phi\right|}{\left|\left(1 + 2ar\sin^2\frac{\theta}{2}\right)\left(1 + 2cr\sin^2\frac{\phi}{2}\right)\right|}.$$

We next discuss the relations among $a, b, c, \theta, \phi, r$ which make $|\lambda| \le 1$. We multiply by the denominator on both sides and take squares:

$$\left|\left(1 - 2ar\sin^2\tfrac{\theta}{2}\right)\left(1 - 2cr\sin^2\tfrac{\phi}{2}\right) - 2br\sin\theta\sin\phi\right|^2 \le \left|\left(1 + 2ar\sin^2\tfrac{\theta}{2}\right)\left(1 + 2cr\sin^2\tfrac{\phi}{2}\right)\right|^2.$$

Subtracting the right-hand side from the left-hand side and applying the difference of two squares, we finally obtain the inequality

$$\left[1 + 4acr^2\sin^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2} - br\sin\theta\sin\phi\right]\left[a\sin^2\tfrac{\theta}{2} + c\sin^2\tfrac{\phi}{2} + \tfrac{b}{2}\sin\theta\sin\phi\right] \ge 0.$$

For simplicity, we let

$$\lambda_1 = 1 + 4acr^2\sin^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2} - br\sin\theta\sin\phi, \qquad \lambda_2 = a\sin^2\tfrac{\theta}{2} + c\sin^2\tfrac{\phi}{2} + \tfrac{b}{2}\sin\theta\sin\phi.$$

Obviously, $|\lambda| \le 1$ if and only if $\lambda_1\lambda_2 \ge 0$. Using the double-angle formula and completing the square,

$$\lambda_1 = 1 + 4acr^2\sin^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2} - 4br\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2}\sin\tfrac{\phi}{2}\cos\tfrac{\phi}{2} = \left[2r\sqrt{ac}\sin\tfrac{\theta}{2}\sin\tfrac{\phi}{2} - \tfrac{b}{\sqrt{ac}}\cos\tfrac{\theta}{2}\cos\tfrac{\phi}{2}\right]^2 + \frac{ac - b^2}{ac}\cos^2\tfrac{\theta}{2}\cos^2\tfrac{\phi}{2} + \sin^2\tfrac{\theta}{2} + \cos^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2},$$

where we used $1 - \cos^2\tfrac{\theta}{2}\cos^2\tfrac{\phi}{2} = \sin^2\tfrac{\theta}{2} + \cos^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2}$. By the constraint of the anisotropic diffusion equation, $b^2 - ac < 0$, so the second term is positive, and the third and fourth terms are non-negative; therefore $\lambda_1$ must be positive. Similarly, we deal with $\lambda_2$:

$$\lambda_2 = a\sin^2\tfrac{\theta}{2} + c\sin^2\tfrac{\phi}{2} + 2b\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2}\sin\tfrac{\phi}{2}\cos\tfrac{\phi}{2} = \left[\sqrt{a}\sin\tfrac{\theta}{2}\cos\tfrac{\phi}{2} + \tfrac{b}{\sqrt{a}}\cos\tfrac{\theta}{2}\sin\tfrac{\phi}{2}\right]^2 + \frac{ac - b^2}{a}\cos^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2} + (a + c)\sin^2\tfrac{\theta}{2}\sin^2\tfrac{\phi}{2} \ge 0.$$

Notice that the constraint $b^2 - ac < 0$ plays an important role in the calculations of both $\lambda_1$ and $\lambda_2$. No matter what $a, b, c, \theta, \phi, r$ are, $\lambda_1 \ge 0$ and $\lambda_2 \ge 0$, which implies $\lambda_1\lambda_2 \ge 0$. In other words, $|\lambda| \le 1$ holds unconditionally; thus the A.D.I. method is unconditionally stable when the anisotropic diffusion equation has a constant coefficient matrix.
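The bound $|\lambda| \le 1$ can also be spot-checked numerically. The following is a minimal sketch (ours, not from the thesis) that sweeps $\theta, \phi$ and several values of $r = k/h^2$ for one admissible choice of coefficients with $b^2 - ac < 0$:

```python
import numpy as np

a, b, c = 1.0, -1.0, 3.0                   # b*b - a*c = -2 < 0
theta = np.linspace(1e-3, np.pi, 400)
th, ph = np.meshgrid(theta, theta)
st, sp = np.sin(th / 2)**2, np.sin(ph / 2)**2
for r in (0.1, 1.0, 10.0, 1000.0):         # r = k/h^2, arbitrarily large
    num = (1 - 2*a*r*st) * (1 - 2*c*r*sp) - 2*b*r*np.sin(th)*np.sin(ph)
    den = (1 + 2*a*r*st) * (1 + 2*c*r*sp)
    print(f"r = {r:7.1f}   max |lambda| = {np.abs(num / den).max():.6f}")
# every printed maximum stays <= 1, consistent with unconditional stability
```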

2.3.2 von Neumann analysis for general anisotropic diffusion problems

Next, we treat the general situation for the anisotropic diffusion problems. Von Neumann analysis by itself fails for the variable-coefficient type; fortunately, we can rely on an important result of Widlund [13], so the stability of the A.D.I. method in the general situation can be settled. We rewrite equation (2.15) as

$$\left(1 - \frac{ka}{2}\delta_x^2 - \frac{kc}{2}\delta_y^2 + \frac{k^2ac}{4}\delta_x^2\delta_y^2\right)u^{n+1} = \left(1 + \frac{ka}{2}\delta_x^2 + \frac{kc}{2}\delta_y^2 + \frac{k^2ac}{4}\delta_x^2\delta_y^2 + 2bk\,H_xH_y\right)u^n.$$

For convenience, we define the operators

$$P = 1 - \left(1 - \tfrac{1}{2}ka\,\delta_x^2\right)\left(1 - \tfrac{1}{2}kc\,\delta_y^2\right) = \tfrac{1}{2}ka\,\delta_x^2 + \tfrac{1}{2}kc\,\delta_y^2 - \tfrac{1}{4}ack^2\delta_x^2\delta_y^2,$$

$$Q = \left(1 + \tfrac{1}{2}ka\,\delta_x^2\right)\left(1 + \tfrac{1}{2}kc\,\delta_y^2\right) + 2kb\,H_xH_y - 1 = \tfrac{1}{2}ka\,\delta_x^2 + \tfrac{1}{2}kc\,\delta_y^2 + \tfrac{1}{4}ack^2\delta_x^2\delta_y^2 + 2kb\,H_xH_y,$$

so that the scheme reads $u^{n+1}_{i,j} - u^n_{i,j} = P u^{n+1}_{i,j} + Q u^n_{i,j}$. Following Widlund's notation, we substitute the Fourier mode and use

$$u_{i\pm1,j} - u_{i,j} = \pm 2\mathrm{i}\sin\tfrac{\theta}{2}\,e^{\pm\mathrm{i}\theta/2}\,u_{i,j}, \qquad u_{i,j\pm1} - u_{i,j} = \pm 2\mathrm{i}\sin\tfrac{\phi}{2}\,e^{\pm\mathrm{i}\phi/2}\,u_{i,j},$$

$$u_{i+1,j} - u_{i-1,j} = 2\mathrm{i}\sin\theta\,u_{i,j}, \qquad u_{i,j+1} - u_{i,j-1} = 2\mathrm{i}\sin\phi\,u_{i,j}.$$

The scheme becomes $u^{n+1}_{i,j} - u^n_{i,j} = \bar P u^{n+1}_{i,j} + \bar Q u^n_{i,j}$ with the symbols

$$\bar P = -\frac{2ka}{h^2}\sin^2\frac{\theta}{2} - \frac{2kc}{h^2}\sin^2\frac{\phi}{2} - \frac{4ack^2}{h^4}\sin^2\frac{\theta}{2}\sin^2\frac{\phi}{2},$$

$$\bar Q = -\frac{2ka}{h^2}\sin^2\frac{\theta}{2} - \frac{2kc}{h^2}\sin^2\frac{\phi}{2} + \frac{4ack^2}{h^4}\sin^2\frac{\theta}{2}\sin^2\frac{\phi}{2} - \frac{2kb}{h^2}\sin\theta\sin\phi.$$

Hence $(1 - \bar P)\,u^{n+1}_{i,j} = (1 + \bar Q)\,u^n_{i,j}$, i.e. $u^{n+1}_{i,j} = \frac{1 + \bar Q}{1 - \bar P}\,u^n_{i,j}$, and it is only necessary to show that $\frac{|1 + \bar Q|}{|1 - \bar P|} \le 1$. We find

$$\frac{|1 + \bar Q|}{|1 - \bar P|} = |\lambda| = \frac{\left|\left(1 - \frac{2ak}{h^2}\sin^2\frac{\theta}{2}\right)\left(1 - \frac{2ck}{h^2}\sin^2\frac{\phi}{2}\right) - \frac{2bk}{h^2}\sin\theta\sin\phi\right|}{\left|\left(1 + \frac{2ak}{h^2}\sin^2\frac{\theta}{2}\right)\left(1 + \frac{2ck}{h^2}\sin^2\frac{\phi}{2}\right)\right|}.$$

Since the absolute value of the amplification factor for the variable type (with coefficients frozen at a point) is the same as for the constant type, we can repeat the same work as in the constant-coefficient case. Therefore the A.D.I. method is unconditionally stable with a variable coefficient matrix as well.

2.4 Apply A.D.I. method to anisotropic diffusion problems

To implement the A.D.I. method on the square domain $\Omega = \{(x, y) \mid 0 \le x, y \le 1\}$, we begin with a grid consisting of points $(x_i, y_j)$, given by $x_i = ih$ and $y_j = jh$ for $i, j = 0, 1, \ldots, N$. In the time direction we have $t = nk$, $1 \le n \le N - 1$, where $k = 1/N$.

First, we solve for the temporary value $u^*$ by the implicit method. We write the left-hand side of eqn. (2.16) as follows; obviously, the matrix corresponding to this linear system is tri-diagonal:

$$-\frac{ka_{i,j}}{2h^2}u^*_{i-1,j} + \left(1 + \frac{ka_{i,j}}{h^2}\right)u^*_{i,j} - \frac{ka_{i,j}}{2h^2}u^*_{i+1,j}.$$

The right-hand side, which contains known information, is

$$\left(1 + \frac{1}{2}ka\,\delta_x^2 + kc\,\delta_y^2 + 2kb\,H_xH_y\right)u^n_{i,j}.$$

Setting $r = k/h^2$ reduces the equation: we have the implicit part

$$-0.5\,ra_{i,j}\,u^*_{i-1,j} + (1 + ra_{i,j})\,u^*_{i,j} - 0.5\,ra_{i,j}\,u^*_{i+1,j}$$

and the information part

$$(1 - ra_{i,j} - 2rc_{i,j})u^n_{i,j} + 0.5\,ra_{i,j}\left(u^n_{i+1,j} + u^n_{i-1,j}\right) + rc_{i,j}\left(u^n_{i,j+1} + u^n_{i,j-1}\right) + 0.5\,rb_{i,j}\left(u^n_{i+1,j+1} - u^n_{i-1,j+1} - u^n_{i+1,j-1} + u^n_{i-1,j-1}\right).$$

For each fixed $j$ this is a linear system $A_j x_j = r_j$, where $A_j$ is a tri-diagonal square matrix of order $N - 1$:

$$A_j = \begin{pmatrix} 1 + ra_{1,j} & -0.5\,ra_{1,j} & & \\ -0.5\,ra_{2,j} & 1 + ra_{2,j} & -0.5\,ra_{2,j} & \\ & \ddots & \ddots & \ddots \\ & & -0.5\,ra_{N-1,j} & 1 + ra_{N-1,j} \end{pmatrix}, \qquad x_j = \begin{pmatrix} u^*_{1,j} \\ \vdots \\ u^*_{i,j} \\ \vdots \\ u^*_{N-1,j} \end{pmatrix},$$

$$r_j = \begin{pmatrix} \left(1 + \tfrac{1}{2}ka_{1,j}\delta_x^2 + kc_{1,j}\delta_y^2 + 2kb_{1,j}H_xH_y\right)u^n_{1,j} \\ \vdots \\ \left(1 + \tfrac{1}{2}ka_{i,j}\delta_x^2 + kc_{i,j}\delta_y^2 + 2kb_{i,j}H_xH_y\right)u^n_{i,j} \\ \vdots \\ \left(1 + \tfrac{1}{2}ka_{N-1,j}\delta_x^2 + kc_{N-1,j}\delta_y^2 + 2kb_{N-1,j}H_xH_y\right)u^n_{N-1,j} \end{pmatrix} + \begin{pmatrix} 0.5\,ra_{1,j}\,u^*_{0,j} \\ 0 \\ \vdots \\ 0 \\ 0.5\,ra_{N-1,j}\,u^*_{N,j} \end{pmatrix}.$$

Notice that $r_j$ is composed of two terms; the second consists of boundary values. Unfortunately, $u^*$ is undefined on the boundary, so a problem arises: how do we deal with the boundary values of $u^*$? Observe that the term $u^*$ is contained in both eqn. (2.16) and eqn. (2.17). Generally, one combines the two equations to eliminate the derivative terms in $u^*$, so that $u^*$ is represented by $u^{n+1}$ and $u^n$; we follow the same tactic here. Consider eqn. (2.17):

$$\left(1 - \frac{1}{2}kc\,\delta_y^2\right)u^{n+1}_{i,j} = u^*_{i,j} - \frac{1}{2}kc\,\delta_y^2 u^n_{i,j}.$$

The left-hand side is associated with $u^{n+1}$ only, while the right-hand side contains $u^*$ and terms associated with $u^n$. Hence $u^*$ can be expressed by the combination of $u^n_{i,j}$ and $u^{n+1}_{i,j}$ below, so the boundary condition for $u^*_{i,j}$ becomes easy to implement:

$$u^*_{i,j} = \left(1 - \frac{1}{2}kc_{i,j}\delta_y^2\right)u^{n+1}_{i,j} + \frac{1}{2}kc_{i,j}\delta_y^2 u^n_{i,j}.$$

Similarly, we solve for $u^{n+1}$. By eqn. (2.17),

$$-\frac{kc_{i,j}}{2h^2}u^{n+1}_{i,j-1} + \left(1 + \frac{kc_{i,j}}{h^2}\right)u^{n+1}_{i,j} - \frac{kc_{i,j}}{2h^2}u^{n+1}_{i,j+1} = u^*_{i,j} - \frac{kc_{i,j}}{2h^2}u^n_{i,j-1} + \frac{kc_{i,j}}{h^2}u^n_{i,j} - \frac{kc_{i,j}}{2h^2}u^n_{i,j+1}.$$

For $1 \le i \le N - 1$, we have $\bar A_i \bar x_i = \bar r_i$, where

$$\bar A_i = \begin{pmatrix} 1 + rc_{i,1} & -0.5\,rc_{i,1} & & \\ -0.5\,rc_{i,2} & 1 + rc_{i,2} & -0.5\,rc_{i,2} & \\ & \ddots & \ddots & \ddots \\ & & -0.5\,rc_{i,N-1} & 1 + rc_{i,N-1} \end{pmatrix}, \qquad \bar x_i = \begin{pmatrix} u^{n+1}_{i,1} \\ \vdots \\ u^{n+1}_{i,j} \\ \vdots \\ u^{n+1}_{i,N-1} \end{pmatrix},$$

$$\bar r_i = \begin{pmatrix} u^*_{i,1} - \tfrac{1}{2}kc_{i,1}\delta_y^2 u^n_{i,1} \\ \vdots \\ u^*_{i,j} - \tfrac{1}{2}kc_{i,j}\delta_y^2 u^n_{i,j} \\ \vdots \\ u^*_{i,N-1} - \tfrac{1}{2}kc_{i,N-1}\delta_y^2 u^n_{i,N-1} \end{pmatrix} + \begin{pmatrix} 0.5\,rc_{i,1}\,u^{n+1}_{i,0} \\ 0 \\ \vdots \\ 0 \\ 0.5\,rc_{i,N-1}\,u^{n+1}_{i,N} \end{pmatrix}.$$

3 Conjugate gradient method for anisotropic diffusion problems

In section 3, we introduce some iterative methods and preconditioners, then apply the iterative methods to anisotropic diffusion problems. Finally, we show the numerical results.

3.1 Steepest Descent method

The steepest descent method [7] is a kind of iterative method which generally converges to the solution and is global in nature: nearly any starting initial value gives convergence. Because the C.G. method originates from the steepest descent method, we introduce the steepest descent method before studying the C.G. method. Consider the linear system of equations

$$Ax = b, \tag{3.1}$$

where $A \in \mathbb{R}^{n\times n}$ is a large sparse matrix which is symmetric positive definite, $b \in \mathbb{R}^{n\times 1}$ is a known vector, and $x \in \mathbb{R}^{n\times 1}$ is the solution of the linear system. If we solve eqn. (3.1) by a fully implicit scheme, we face heavy CPU time and limited memory when inverting the matrix. To avoid this expensive process, one should by no means invert the fully implicit matrix by direct methods; thus we change tactics and take advantage of iterative methods.

At the beginning of our study of the steepest descent method, we consider the quadratic form, simply defined by

$$g(x) = \frac{1}{2}\langle x, Ax\rangle - \langle x, b\rangle, \tag{3.2}$$

where $x \in \mathbb{R}^{n\times 1}$ is an arbitrary vector, $A \in \mathbb{R}^{n\times n}$ is as in eqn. (3.1), and $\langle\cdot,\cdot\rangle$ is the inner product. Solving the linear system (3.1) is equivalent to solving the minimization problem for (3.2), as the following detail shows. Let $v \ne 0 \in \mathbb{R}^{n\times 1}$ be a fixed vector and $t$ a real number; then

$$g(x + tv) = \frac{1}{2}\langle x + tv, Ax + tAv\rangle - \langle x + tv, b\rangle = g(x) + \frac{t^2}{2}\langle v, Av\rangle + t\langle v, Ax - b\rangle. \tag{3.3}$$

Figure 1: The quadratic form with a positive definite matrix has a minimal extreme value.

The expansion (3.3) can be regarded as a function of $t$; we suppose

$$h(t) = g(x) + \frac{t^2}{2}\langle v, Av\rangle + t\langle v, Ax - b\rangle.$$

Since $A$ is positive definite, the leading coefficient $\frac{1}{2}\langle v, Av\rangle$ is always positive when $v \ne 0$, so the extreme value occurs at the critical point. Setting the first derivative with respect to $t$ to zero gives

$$t = -\frac{\langle v, Ax - b\rangle}{\langle v, Av\rangle}.$$

Substituting this $t$ into eqn. (3.3), we obtain

$$g(x + tv) = g(x) - \frac{\langle v, Ax - b\rangle^2}{2\langle v, Av\rangle} \le g(x), \tag{3.4}$$

with equality exactly when $\langle v, Ax - b\rangle = 0$. Now we show the equivalence of the linear system (3.1) and the minimization problem (3.2). If a certain vector $\bar x$ satisfies the linear system, $A\bar x = b$, then the optimal step $t = 0$ for every direction $v$; equality holds in (3.4), no step can decrease $g$, and in other words $g(\bar x)$ is minimal. Conversely, if $g(\bar x)$ is minimal, the optimal step must vanish for every $v$, which forces $\langle v, A\bar x - b\rangle = 0$ for all $v$, hence $b - A\bar x = 0$: we have found a solution of the linear system.

In the method of steepest descent, we take an arbitrary vector $x$ and then approach the minimum step by step, taking a series of iterates $x^1, x^2, \ldots$ until the numerical solution is close enough to the exact solution. By a standard theorem, the direction of greatest increase of $g(x)$ is the direction given by $\nabla g(x)$; in other words, the rate of decrease is greatest along the opposite direction of the gradient. Since

$$g(x) = \frac{1}{2}\langle x, Ax\rangle - \langle x, b\rangle = \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n a_{ij}x_ix_j - \sum_{i=1}^n x_ib_i,$$

we calculate the gradient:

$$\frac{\partial g(x)}{\partial x_j} = \sum_{i=1}^n a_{ji}x_i - b_j, \qquad \nabla g(x) = \left[\frac{\partial g(x)}{\partial x_1}, \frac{\partial g(x)}{\partial x_2}, \ldots, \frac{\partial g(x)}{\partial x_n}\right]^T = Ax - b = -(b - Ax) = -r.$$

Based on this greatest decreasing rate, we solve eqn. (3.2) rather than eqn. (3.1). The method of steepest descent starts from an initial vector $x^0$ and the initial search direction $r^0 = -\nabla g(x^0) = b - Ax^0$. The following steps are given recursively by

$$x^{k+1} = x^k + \alpha_k r^k, \quad k = 0, 1, 2, \ldots,$$

where the parameters $\alpha_k$ are chosen so that $g(x)$ is minimized. Multiplying both sides by $-A$ and then adding $b$,

$$b - Ax^{k+1} = b - Ax^k - \alpha_k Ar^k.$$

Since $-\nabla g(x^k) = b - Ax^k$, the search directions are also given recursively:

$$r^{k+1} = r^k - \alpha_k Ar^k, \quad k = 0, 1, 2, \ldots.$$

A line search is a procedure that minimizes $g(x)$ by the choice of $\alpha_k$; we initiate the calculation from

$$g(x^{k+1}) = g(x^k + \alpha_k r^k) = g(x^k) - \alpha_k\langle r^k, r^k\rangle + \frac{1}{2}\alpha_k^2\langle r^k, Ar^k\rangle,$$

which can be regarded as a function of $\alpha_k$ whose leading coefficient $\frac{1}{2}\langle r^k, Ar^k\rangle$ is positive, so the minimal value occurs at the critical point. Taking the partial derivative with respect to $\alpha_k$,

$$\partial g(x^{k+1})/\partial\alpha_k = -\langle r^k, r^k\rangle + \alpha_k\langle r^k, Ar^k\rangle,$$

and setting $\partial g(x^{k+1})/\partial\alpha_k = 0$, we find that $\alpha_k$ is given by

$$\alpha_k = \frac{\langle r^k, r^k\rangle}{\langle r^k, Ar^k\rangle}.$$

We check that consecutive residuals are orthogonal for this choice of $\alpha_k$:

$$\langle r^{k+1}, r^k\rangle = \langle r^k - \alpha_k Ar^k, r^k\rangle = \langle r^k, r^k\rangle - \alpha_k\langle r^k, Ar^k\rangle = \langle r^k, r^k\rangle - \frac{\langle r^k, r^k\rangle}{\langle r^k, Ar^k\rangle}\langle r^k, Ar^k\rangle = 0.$$

We collect the steps of the steepest descent method as follows (a code sketch is given after the list).

1. Choose an arbitrary initial vector $x^0$, set $r^0 = b - Ax^0$, and fix a tolerance.
2. For $k = 0, 1, 2, \ldots$: $\alpha_k = \dfrac{\langle r^k, r^k\rangle}{\langle r^k, Ar^k\rangle}$, $\quad x^{k+1} = x^k + \alpha_k r^k$, $\quad r^{k+1} = r^k - \alpha_k Ar^k$.
3. If $|x^{k+1} - x^k| <$ tolerance, the solution is approximated by $x^{k+1}$; else return to step 2.
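Below is a minimal sketch of these steps, assuming a symmetric positive definite A; it is our illustration rather than code from the thesis.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    """Steepest descent for SPD A: follow r = b - Ax with the exact
    line search alpha = <r, r> / <r, Ar> derived above."""
    x = x0.astype(float).copy()
    r = b - A @ x
    for _ in range(max_iter):
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)
        x_new = x + alpha * r
        if np.linalg.norm(x_new - x) < tol:   # stopping rule from step 3
            return x_new
        x, r = x_new, r - alpha * Ar
    return x
```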

3.2 Introduction of conjugate gradient method (C.G. method)

The conjugate gradient method [7] of Hestenes and Stiefel was originally developed as a direct method for linear systems with positive definite matrices. In this section, we employ the C.G. method as an iterative method. The C.G. method is one of the most prominent and efficient algorithms for the numerical solution of particular linear systems, namely systems whose matrix is symmetric positive definite. Since the C.G. method is a kind of iterative method, it can be applied to large sparse systems that cannot be dealt with by methods such as Cholesky decomposition, Gauss-Seidel, and Jacobi. The idea of the C.G. method comes from the minimization of quadratic forms; moreover, preconditioning is a technique for further acceleration. The C.G. method is the most popular and efficient iterative method for solving large sparse systems of this form.

The conjugate gradient method can be regarded as a modification of the steepest descent method; the main difference is that the C.G. method modifies the search direction, and it thereby becomes a more efficient method. We start with an arbitrary $x^0$; then $p^0 = r^0 = b - Ax^0$, and the iterative steps are defined as

$$x^{k+1} = x^k + \alpha_k p^k, \tag{3.5}$$

$$p^k = r^k + \gamma_k(x^k - x^{k-1}). \tag{3.6}$$

The vector $p^k$ is called the new search direction at the $k$-th iteration; as the modification of the steepest descent method, $p^k$ is a linear combination of the originally defined search direction $r^k$ and the difference between consecutive iterates $x^k - x^{k-1}$. We rewrite eqn. (3.6) as

$$p^k = r^k + \gamma_k(x^k - x^{k-1}) = r^k + \gamma_k\alpha_{k-1}p^{k-1}.$$

Letting $\gamma_k\alpha_{k-1} = \beta_{k-1}$, the above equation becomes $p^{k+1} = r^{k+1} + \beta_k p^k$. Collecting the formulas, we have

$$x^{k+1} = x^k + \alpha_k p^k, \qquad r^{k+1} = r^k - \alpha_k Ap^k, \qquad p^{k+1} = r^{k+1} + \beta_k p^k.$$

We wish these three recurrences to converge quickly once $\alpha_k$ and $\beta_k$ are determined. We choose $\alpha_k$ by the same idea as in the method of steepest descent:

$$g(x^{k+1}) = g(x^k + \alpha_k p^k) = \frac{1}{2}\langle x^k + \alpha_k p^k, Ax^k + \alpha_k Ap^k\rangle - \langle x^k + \alpha_k p^k, b\rangle = g(x^k) + \frac{1}{2}\alpha_k^2\langle p^k, Ap^k\rangle - \alpha_k\langle p^k, r^k\rangle. \tag{3.7}$$

Then we calculate the partial derivative with respect to $\alpha_k$:

$$\partial g(x^{k+1})/\partial\alpha_k = \alpha_k\langle p^k, Ap^k\rangle - \langle p^k, r^k\rangle.$$

Setting $\partial g(x^{k+1})/\partial\alpha_k = 0$, we find that $\alpha_k$ is given by

$$\alpha_k = \frac{\langle p^k, r^k\rangle}{\langle p^k, Ap^k\rangle}.$$

We rewrite eqn. (3.7) as

$$g(x^{k+1}) = g(x^k) - \frac{\langle p^k, r^k\rangle^2}{2\langle p^k, Ap^k\rangle}. \tag{3.8}$$

Eqn. (3.8) also illustrates why we let the initial search direction be $p^0 = r^0$: with this choice, $g(x^1)$ decreases from $g(x^0)$ as

$$g(x^1) = g(x^0) - \frac{\langle p^0, r^0\rangle^2}{2\langle p^0, Ap^0\rangle}.$$

Next, we determine $\beta_k$ by studying eqn. (3.8). Since $g(x^{k+1}) = g(x^k) - \frac{\langle p^k, r^k\rangle^2}{2\langle p^k, Ap^k\rangle}$, in order to minimize the value at each step we examine the subtracted term $\frac{\langle p^k, r^k\rangle^2}{2\langle p^k, Ap^k\rangle}$.

The numerator part:

$$\langle p^k, r^k\rangle = \langle r^k + \beta_{k-1}p^{k-1}, r^k\rangle = \langle r^k, r^k\rangle + \beta_{k-1}\langle p^{k-1}, r^k\rangle, \tag{3.9}$$

$$\langle p^{k-1}, r^k\rangle = \langle p^{k-1}, r^{k-1} - \alpha_{k-1}Ap^{k-1}\rangle = \langle p^{k-1}, r^{k-1}\rangle - \alpha_{k-1}\langle p^{k-1}, Ap^{k-1}\rangle = \langle p^{k-1}, r^{k-1}\rangle - \frac{\langle p^{k-1}, r^{k-1}\rangle}{\langle p^{k-1}, Ap^{k-1}\rangle}\langle p^{k-1}, Ap^{k-1}\rangle = 0. \tag{3.10}$$

Substituting (3.10) into (3.9), we have $\langle p^k, r^k\rangle = \langle r^k, r^k\rangle$, so eqn. (3.8) becomes $g(x^{k+1}) = g(x^k) - \frac{\langle r^k, r^k\rangle^2}{2\langle p^k, Ap^k\rangle}$.

The denominator part: since $p^k = r^k + \beta_{k-1}p^{k-1}$,

$$\langle p^k, Ap^k\rangle = \langle r^k + \beta_{k-1}p^{k-1}, A(r^k + \beta_{k-1}p^{k-1})\rangle = \langle r^k, Ar^k\rangle + 2\beta_{k-1}\langle r^k, Ap^{k-1}\rangle + \beta_{k-1}^2\langle p^{k-1}, Ap^{k-1}\rangle.$$

Setting $\frac{\partial}{\partial\beta_{k-1}}\left[\langle r^k, Ar^k\rangle + 2\beta_{k-1}\langle r^k, Ap^{k-1}\rangle + \beta_{k-1}^2\langle p^{k-1}, Ap^{k-1}\rangle\right] = 0$ gives

$$\beta_{k-1} = -\frac{\langle r^k, Ap^{k-1}\rangle}{\langle p^{k-1}, Ap^{k-1}\rangle}, \quad\text{equivalently}\quad \beta_k = -\frac{\langle r^{k+1}, Ap^k\rangle}{\langle p^k, Ap^k\rangle}. \tag{3.11}$$

By minimizing the denominator, the C.G. method converges more quickly. This choice also makes consecutive search directions conjugate: by eqn. (3.11),

$$\langle r^{k+1}, Ap^k\rangle + \beta_k\langle p^k, Ap^k\rangle = 0 \;\Rightarrow\; \langle r^{k+1} + \beta_k p^k, Ap^k\rangle = 0 \;\Rightarrow\; \langle p^{k+1}, Ap^k\rangle = 0.$$
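Collecting these formulas, a minimal C.G. sketch (ours, using exactly the α of section 3.2 and the β of (3.11); not code from the thesis) is:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """C.G. iteration with alpha_k = <p, r>/<p, Ap> and
    beta_k = -<r_new, Ap>/<p, Ap> as derived above."""
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()                      # initial search direction p0 = r0
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = -(r_new @ Ap) / (p @ Ap)
        p = r_new + beta * p
        r = r_new
    return x
```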

3.3 Implementation of the conjugate gradient method for anisotropic diffusion problems

We consider the anisotropic diffusion equation with a constant coefficient matrix:

$$\frac{\partial u}{\partial t} = \nabla\cdot(\beta\nabla u), \qquad \beta = \begin{pmatrix}\tilde a & \tilde b \\ \tilde b & \tilde c\end{pmatrix},\ \tilde a > 0,\ \tilde c > 0,\ \tilde b^2 - \tilde a\tilde c < 0,$$

$$u(x, y, 0) = u_0(x, y),\quad (x, y)\in\Omega, \qquad u(x, y, t) = g(x, y, t),\quad (x, y)\in\partial\Omega,\ t\in(0, T].$$

Because of the constant coefficient matrix, we can choose a suitable discretization such that the linear system $Ax = b$ has a symmetric positive definite matrix.

3.3.1 Discretization by Crank-Nicolson scheme

We expand the anisotropic diffusion equation into the simple form

$$\frac{\partial u}{\partial t} = \nabla\cdot(\beta\nabla u) \;\Rightarrow\; u_t = au_{xx} + 2bu_{xy} + cu_{yy}.$$

Beginning with the formula $u_t = \frac{u^{n+1} - u^n}{k} + O(k^2)$ for $u_t$ evaluated at $t^{n+1/2}$, and using the same idea to deal with $u_{xx}$, $u_{yy}$, and $u_{xy}$, we have

$$\frac{u^{n+1}_{i,j} - u^n_{i,j}}{k} = \frac{a}{2}\delta_x^2\left(u^{n+1}_{i,j} + u^n_{i,j}\right) + \frac{c}{2}\delta_y^2\left(u^{n+1}_{i,j} + u^n_{i,j}\right) + b\,H_xH_y\left(u^{n+1}_{i,j} + u^n_{i,j}\right).$$

We separate $u^{n+1}$ from $u^n$, giving

$$\left(1 - \frac{ka}{2}\delta_x^2 - \frac{kc}{2}\delta_y^2 - kb\,H_xH_y\right)u^{n+1}_{i,j} = \left(1 + \frac{ka}{2}\delta_x^2 + \frac{kc}{2}\delta_y^2 + kb\,H_xH_y\right)u^n_{i,j},$$

and, multiplying both sides by two,

$$\left(2 - ka\,\delta_x^2 - kc\,\delta_y^2 - 2kb\,H_xH_y\right)u^{n+1}_{i,j} = \left(2 + ka\,\delta_x^2 + kc\,\delta_y^2 + 2kb\,H_xH_y\right)u^n_{i,j}. \tag{3.12}$$

We expand eqn. (3.12) into its two sides. To simplify, with $r = k/h^2$ we define the notations

$$\tilde d = 2 + 2ra + 2rc, \quad \tilde e = 2 - 2ra - 2rc, \quad \tilde a = ra, \quad \tilde b = \tfrac{1}{2}rb, \quad \tilde c = rc.$$

The left-hand side of eqn. (3.12) becomes

$$\tilde d\,u^{n+1}_{i,j} - \tilde a\left(u^{n+1}_{i+1,j} + u^{n+1}_{i-1,j}\right) - \tilde c\left(u^{n+1}_{i,j-1} + u^{n+1}_{i,j+1}\right) - \tilde b\left(u^{n+1}_{i+1,j+1} - u^{n+1}_{i-1,j+1} - u^{n+1}_{i+1,j-1} + u^{n+1}_{i-1,j-1}\right),$$

and the right-hand side turns into

$$\tilde e\,u^n_{i,j} + \tilde a\left(u^n_{i+1,j} + u^n_{i-1,j}\right) + \tilde c\left(u^n_{i,j-1} + u^n_{i,j+1}\right) + \tilde b\left(u^n_{i+1,j+1} - u^n_{i-1,j+1} - u^n_{i+1,j-1} + u^n_{i-1,j-1}\right).$$

Obviously, the fully implicit scheme needs to solve for nine points at the same time. We solve the equation by the fully implicit method, rewriting eqn. (3.12) in matrix form.

Figure 2: We use a regular uniform domain and collect the columns into a new column vector.

The vector $Au$ corresponds to the left-hand side of eqn. (3.12), where $u \in \mathbb{R}^{(N-1)^2\times 1}$ is the unknown vector representing the values at the next time step. We take each column of the domain and compose them into a new vector denoted by

$$u = [u_1\ u_2\ \ldots\ u_{N-1}]^T, \quad\text{where } u_i = \left[u^{n+1}_{i,1}, u^{n+1}_{i,2}, \ldots, u^{n+1}_{i,N-1}\right]^T.$$

Therefore we have the matrix

$$A = \begin{pmatrix} D & U & & \\ L & \ddots & \ddots & \\ & \ddots & \ddots & U \\ & & L & D \end{pmatrix},$$

where $A \in \mathbb{R}^{(N-1)^2\times(N-1)^2}$ is a block tri-diagonal, positive definite matrix. The three block matrices are themselves tri-diagonal:

$$D = \begin{pmatrix} \tilde d & -\tilde c & & \\ -\tilde c & \ddots & \ddots & \\ & \ddots & \ddots & -\tilde c \\ & & -\tilde c & \tilde d \end{pmatrix}, \quad U = \begin{pmatrix} -\tilde a & -\tilde b & & \\ \tilde b & \ddots & \ddots & \\ & \ddots & \ddots & -\tilde b \\ & & \tilde b & -\tilde a \end{pmatrix}, \quad L = \begin{pmatrix} -\tilde a & \tilde b & & \\ -\tilde b & \ddots & \ddots & \\ & \ddots & \ddots & \tilde b \\ & & -\tilde b & -\tilde a \end{pmatrix}.$$

Notice that $U^T = L$, so the matrix $A$ is symmetric.

The right-hand side of eqn. (3.12) is represented by the vector $Bx$, where $x \in \mathbb{R}^{(N-1)^2\times 1}$ is the known vector carrying the information for solving the next time step, and $B$ is also a block tri-diagonal matrix:

$$B = \begin{pmatrix} D' & U' & & \\ L' & \ddots & \ddots & \\ & \ddots & \ddots & U' \\ & & L' & D' \end{pmatrix}, \quad D' = \begin{pmatrix} \tilde e & \tilde c & & \\ \tilde c & \ddots & \ddots & \\ & \ddots & \ddots & \tilde c \\ & & \tilde c & \tilde e \end{pmatrix}, \quad U' = \begin{pmatrix} \tilde a & \tilde b & & \\ -\tilde b & \ddots & \ddots & \\ & \ddots & \ddots & \tilde b \\ & & -\tilde b & \tilde a \end{pmatrix}, \quad L' = \begin{pmatrix} \tilde a & -\tilde b & & \\ \tilde b & \ddots & \ddots & \\ & \ddots & \ddots & -\tilde b \\ & & \tilde b & \tilde a \end{pmatrix}.$$

The complete linear system at each time step is $Au = Bx + R$, where the column vector $R \in \mathbb{R}^{(N-1)^2\times 1}$ contains the boundary contributions, $R = [R_1, R_2, \ldots, R_{N-1}]^T$ (the boundary values of $u^n$ on the right-hand side are known and enter analogously):

$$R_1 = \begin{pmatrix} \tilde a u^{n+1}_{0,1} + \tilde c u^{n+1}_{1,0} + \tilde b u^{n+1}_{0,0} - \tilde b u^{n+1}_{2,0} - \tilde b u^{n+1}_{0,2} \\ \tilde a u^{n+1}_{0,2} + \tilde b u^{n+1}_{0,1} - \tilde b u^{n+1}_{0,3} \\ \vdots \\ \tilde a u^{n+1}_{0,j} + \tilde b u^{n+1}_{0,j-1} - \tilde b u^{n+1}_{0,j+1} \\ \vdots \\ \tilde a u^{n+1}_{0,N-2} + \tilde b u^{n+1}_{0,N-3} - \tilde b u^{n+1}_{0,N-1} \\ \tilde a u^{n+1}_{0,N-1} + \tilde c u^{n+1}_{1,N} - \tilde b u^{n+1}_{0,N} + \tilde b u^{n+1}_{2,N} + \tilde b u^{n+1}_{0,N-2} \end{pmatrix},$$

$$R_{N-1} = \begin{pmatrix} \tilde a u^{n+1}_{N,1} + \tilde c u^{n+1}_{N-1,0} + \tilde b u^{n+1}_{N-2,0} + \tilde b u^{n+1}_{N,2} - \tilde b u^{n+1}_{N,0} \\ \tilde a u^{n+1}_{N,2} - \tilde b u^{n+1}_{N,1} + \tilde b u^{n+1}_{N,3} \\ \vdots \\ \tilde a u^{n+1}_{N,j} - \tilde b u^{n+1}_{N,j-1} + \tilde b u^{n+1}_{N,j+1} \\ \vdots \\ \tilde a u^{n+1}_{N,N-2} - \tilde b u^{n+1}_{N,N-3} + \tilde b u^{n+1}_{N,N-1} \\ \tilde a u^{n+1}_{N,N-1} + \tilde c u^{n+1}_{N-1,N} + \tilde b u^{n+1}_{N,N} - \tilde b u^{n+1}_{N,N-2} - \tilde b u^{n+1}_{N-2,N} \end{pmatrix},$$

$$R_i = \begin{pmatrix} \tilde c u^{n+1}_{i,0} + \tilde b u^{n+1}_{i-1,0} - \tilde b u^{n+1}_{i+1,0} \\ 0 \\ \vdots \\ 0 \\ \tilde c u^{n+1}_{i,N} + \tilde b u^{n+1}_{i+1,N} - \tilde b u^{n+1}_{i-1,N} \end{pmatrix}, \quad 2 \le i \le N - 2.$$
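As an illustration of this block structure, the following minimal sketch (ours; the function name and the use of scipy.sparse are assumptions, and the stencil coefficients are those defined above) assembles A and B:

```python
import numpy as np
import scipy.sparse as sp

def assemble_cn_matrices(N, a, b, c, k, h):
    """Assemble the (N-1)^2 x (N-1)^2 block tri-diagonal matrices A and B
    of eqn. (3.12), with r = k/h^2, a~ = ra, b~ = rb/2, c~ = rc."""
    r = k / h**2
    at, bt, ct = r * a, 0.5 * r * b, r * c
    dt, et = 2 + 2 * r * (a + c), 2 - 2 * r * (a + c)
    m = N - 1
    I = sp.identity(m, format="csr")
    T = sp.diags([np.ones(m - 1), np.ones(m - 1)], [-1, 1])    # |j - j'| = 1
    S = sp.diags([-np.ones(m - 1), np.ones(m - 1)], [-1, 1])   # antisymmetric shift
    Eu = sp.diags([np.ones(m - 1)], [1])                       # block column i+1
    Ed = Eu.T                                                  # block column i-1
    D, U = dt * I - ct * T, -at * I - bt * S                   # blocks of A; L = U^T
    Dp, Up = et * I + ct * T, at * I + bt * S                  # blocks of B; L' = Up^T
    A = sp.kron(I, D) + sp.kron(Eu, U) + sp.kron(Ed, U.T)
    B = sp.kron(I, Dp) + sp.kron(Eu, Up) + sp.kron(Ed, Up.T)
    return A.tocsr(), B.tocsr()

# symmetry check: A - A^T should be identically zero
A, B = assemble_cn_matrices(8, 1.0, -1.0, 3.0, 1e-3, 1.0 / 8)
print(abs(A - A.T).max())  # 0.0
```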

We have thus applied the conjugate gradient method to the anisotropic diffusion problem with a constant coefficient matrix. Next, we consider matrices whose entries are of variable type, and we use the biconjugate gradient method to deal with that problem.

3.3.2 The biconjugate gradient method

Unlike the conjugate gradient method, the biconjugate gradient method does not require a symmetric matrix, but it does require the transpose $A^T$. The algorithm is as follows.

1. Choose an initial vector $x^0$, set $r^0 = b - Ax^0$, and select $\tilde r^0$ such that $\langle r^0, \tilde r^0\rangle \ne 0$.
2. Set $p^0 = r^0$ and $\tilde p^0 = \tilde r^0$.
3. For $k = 0, 1, 2, \ldots$:
   $\alpha_k = \dfrac{\langle \tilde r^k, r^k\rangle}{\langle \tilde p^k, Ap^k\rangle}$,
   $x^{k+1} = x^k + \alpha_k p^k$,
   $r^{k+1} = r^k - \alpha_k Ap^k$, $\quad \tilde r^{k+1} = \tilde r^k - \alpha_k A^T\tilde p^k$,
   $\beta_k = \dfrac{\langle \tilde r^{k+1}, r^{k+1}\rangle}{\langle \tilde r^k, r^k\rangle}$,
   $p^{k+1} = r^{k+1} + \beta_k p^k$, $\quad \tilde p^{k+1} = \tilde r^{k+1} + \beta_k \tilde p^k$;
   stop when the iteration converges.
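A minimal sketch of this algorithm (ours, not the thesis's code), seeding the shadow residual with $\tilde r^0 = r^0$ so that $\langle r^0, \tilde r^0\rangle \ne 0$:

```python
import numpy as np

def bicg(A, b, x0, tol=1e-10, max_iter=None):
    """Biconjugate gradient following section 3.3.2; requires A.T."""
    x = x0.astype(float).copy()
    r = b - A @ x
    rt = r.copy()                    # shadow residual
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)     # shadow recurrence uses the transpose
        if np.linalg.norm(r) < tol:
            break
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x
```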

3.3.3 The preconditioned conjugate gradient method

A technique resulting in further acceleration of the conjugate gradient method is the preconditioned conjugate gradient method [3]. The basic idea of the preconditioned conjugate gradient method is to replace the system

$$Ax = b$$

by

$$C^{-1}AC^{-1}(Cx) = C^{-1}b.$$

Since $A$ is a large sparse matrix, we apply incomplete LU factorization [15], a kind of preconditioner often used for large sparse square matrices. Incomplete LU factorization produces a unit lower triangular matrix $L$, an upper triangular matrix $U$, and a residual $R$:

$$A = LU - R.$$

We let $C = L$; then $C^{-1}AC^{-1}$ is a matrix for which the conjugate gradient method converges faster than it does with $A$ itself. We define

$$\tilde A = C^{-1}AC^{-1}, \qquad \tilde x = Cx, \qquad \tilde b = C^{-1}b.$$

Since $\tilde A$ is symmetric positive definite, we apply the conjugate gradient method to the linear system $\tilde A\tilde x = \tilde b$.

1. Start with an initial vector $\tilde x^0$; set the initial residual and search direction $\tilde p^0 = \tilde r^0 = \tilde b - \tilde A\tilde x^0$.
2. For $k = 0, 1, 2, \ldots$:
   $\tilde\alpha_k = \dfrac{\langle \tilde p^k, \tilde r^k\rangle}{\langle \tilde p^k, \tilde A\tilde p^k\rangle}$,
   $\tilde x^{k+1} = \tilde x^k + \tilde\alpha_k\tilde p^k$,
   $\tilde r^{k+1} = \tilde r^k - \tilde\alpha_k\tilde A\tilde p^k$,
   $\tilde\beta_k = -\dfrac{\langle \tilde r^{k+1}, \tilde A\tilde p^k\rangle}{\langle \tilde p^k, \tilde A\tilde p^k\rangle}$,
   $\tilde p^{k+1} = \tilde r^{k+1} + \tilde\beta_k\tilde p^k$;
   stop when the iteration converges.
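In practice one applies the preconditioner without forming $\tilde A$ explicitly. Below is a minimal sketch in the mathematically equivalent untransformed form, where $M \approx LU$ from scipy's incomplete LU plays the role of the preconditioner; the function name pcg_ilu and the use of spilu are our assumptions (the thesis does not specify an implementation):

```python
import numpy as np
import scipy.sparse.linalg as spla

def pcg_ilu(A, b, x0, tol=1e-10, max_iter=None):
    """Preconditioned C.G. with M ~ LU from incomplete LU factorization;
    A is assumed sparse, symmetric positive definite."""
    ilu = spla.spilu(A.tocsc())      # incomplete LU preconditioner
    x = x0.astype(float).copy()
    r = b - A @ x
    z = ilu.solve(r)                 # z = M^{-1} r
    p = z.copy()
    rz = r @ z
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = ilu.solve(r)
        rz_new = r @ z
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x
```

Note that an incomplete LU of an SPD matrix does not guarantee an SPD preconditioner; this sketch simply follows the thesis's choice of ILU [15].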

3.3.4 The preconditioned biconjugate gradient method

We apply the same idea to precondition the biconjugate gradient method. The algorithm:

1. Choose an initial vector $x^0$, two vectors $\tilde x^0$, $\tilde b$, and a preconditioner $M$ ($M$ can be $I$).
2. $r^0 = b - Ax^0$, $\tilde r^0 = \tilde b - A^T\tilde x^0$.
3. $p^0 = M^{-1}r^0$, $\tilde p^0 = M^{-T}\tilde r^0$.
4. For $k = 0, 1, 2, \ldots$:
   $\alpha_k = \dfrac{\langle \tilde r^k, M^{-1}r^k\rangle}{\langle \tilde p^k, Ap^k\rangle}$,
   $x^{k+1} = x^k + \alpha_k p^k$, $\quad \tilde x^{k+1} = \tilde x^k + \alpha_k\tilde p^k$,
   $r^{k+1} = r^k - \alpha_k Ap^k$, $\quad \tilde r^{k+1} = \tilde r^k - \alpha_k A^T\tilde p^k$,
   $\beta_k = \dfrac{\langle \tilde r^{k+1}, M^{-1}r^{k+1}\rangle}{\langle \tilde r^k, M^{-1}r^k\rangle}$,
   $p^{k+1} = M^{-1}r^{k+1} + \beta_k p^k$, $\quad \tilde p^{k+1} = M^{-T}\tilde r^{k+1} + \beta_k\tilde p^k$.
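A minimal sketch (ours) of this preconditioned B.I.C.G. iteration, taking M from scipy's incomplete LU factorization; the shadow iterate $\tilde x$ is omitted since it does not enter the solution:

```python
import numpy as np
import scipy.sparse.linalg as spla

def pbicg_ilu(A, b, x0, tol=1e-10, max_iter=None):
    """Preconditioned BiCG following section 3.3.4, with M ~ LU ~ A."""
    ilu = spla.spilu(A.tocsc())
    Minv = lambda v: ilu.solve(v)                 # M^{-1} v
    MinvT = lambda v: ilu.solve(v, trans="T")     # M^{-T} v
    x = x0.astype(float).copy()
    r = b - A @ x
    rt = r.copy()                                 # shadow residual (rt0 = r0)
    p, pt = Minv(r), MinvT(rt)
    rho = rt @ Minv(r)
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)
        if np.linalg.norm(r) < tol:
            break
        rho_new = rt @ Minv(r)
        beta = rho_new / rho
        p = Minv(r) + beta * p
        pt = MinvT(rt) + beta * pt
        rho = rho_new
    return x
```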


Notice that the terms $r^k$ and $\tilde r^k$ satisfy $r^k = b - Ax^k$ and $\tilde r^k = \tilde b - A^T\tilde x^k$, and $x^k$ is the approximation to the solution of $Ax = b$.

4 Numerical results

In this section, we discuss some numerical simulations performed with the A.D.I. and iterative methods to characterize the advantages and disadvantages of these two different kinds of approach. To analyze the order of accuracy of the previous algorithms, we first consider two-dimensional test cases with no mixed-derivative term, i.e. heat equations, with both constant- and variable-coefficient diffusivities. We also record the CPU time to compare the efficiency of each method.

4.1 Numerical results for heat equations

In order to compare the two numerical methods, we consider the two-dimensional heat equation

$$u_t = au_{xx} + cu_{yy}, \qquad u(x, y, 0) = u_0, \qquad u|_{\partial\Omega} = g(t),$$

with $u \in C^2$, $\Omega = \{(x, y) \mid 0 \le x, y \le 1\}$, and $0 \le t \le T$.

The first example is a two-dimensional parabolic differential equation with initial and Dirichlet boundary conditions:

$$u_t = u_{xx} + u_{yy}, \quad 0 < x, y < 1,\ t > 0,$$

$$u(x, y, 0) = e^{x+y}, \quad 0 \le x, y \le 1,$$

$$u(0, y, t) = e^{2t+y}, \quad u(1, y, t) = e^{2t+y+1}, \quad u(x, 0, t) = e^{2t+x}, \quad u(x, 1, t) = e^{2t+x+1}.$$

The exact solution is $u(x, y, t) = e^{x+y+2t}$.

No matter what the coefficient matrix is, the A.D.I. method works. The discretized matrix of the above equation is constant and symmetric positive definite, so we employ the C.G. method, and we list a table of the order of accuracy and the CPU time. Using the exact solution, we compute 2-norm relative errors; the calculation is run up to time T = 1.

          Conjugate Gradient method                A.D.I. method
Δt = h    relative error   time (s)   order       relative error   time (s)   order
1/20      3.3598e-005      0.17       --          1.3355e-005      0.06       --
1/40      8.0106e-006      0.29       2.0684      3.1992e-006      0.18       2.0616
1/80      1.9545e-006      2.43       2.0351      7.8148e-007      0.86       2.0334
1/160     4.8263e-007      35.90      2.0178      1.9303e-007      5.56       2.0174

Next, we consider another test equation with a variable coefficient matrix, which is not symmetric but positive definite, and apply the B.I.C.G. method:

$$u_t = x^2u_{xx} + y^2u_{yy}, \quad 0 < x, y < 1,\ t > 0,$$

$$u(x, y, 0) = x^2y + xy^2, \quad 0 \le x, y \le 1,$$

$$u(0, y, t) = 0, \quad u(1, y, t) = e^{2t}(y^2 + y), \quad u(x, 0, t) = 0, \quad u(x, 1, t) = e^{2t}(x^2 + x).$$

The exact solution is $u(x, y, t) = e^{2t}(x^2y + xy^2)$. The numerical results are as follows.

          Biconjugate Gradient method              A.D.I. method
Δt = h    relative error   time (s)   order       relative error   time (s)   order
1/20      2.1829e-005      0.20       --          2.1785e-005      0.06       --
1/40      5.0507e-006      0.71       2.1117      5.0486e-006      0.16       2.1094
1/80      1.2156e-006      7.85       2.0548      1.2155e-006      0.88       2.0544
1/160     2.9814e-007      126.53     2.0276      2.9821e-007      5.44       2.0271

Finally, we test the third example and analyze the result:

$$u_t = \frac{2}{3}(1 + x + y)^2u_{xx} + \frac{2}{3}(1 + x + y)^2u_{yy}, \quad 0 < x, y < 1,\ t > 0,$$

$$u(x, y, 0) = (1 + x + y)^{1.5}, \quad 0 \le x, y \le 1,$$

$$u(0, y, t) = e^t(1 + y)^{1.5}, \quad u(1, y, t) = e^t(2 + y)^{1.5}, \quad u(x, 0, t) = e^t(1 + x)^{1.5}, \quad u(x, 1, t) = e^t(2 + x)^{1.5}.$$

The exact solution is $u(x, y, t) = e^t(1 + x + y)^{1.5}$.

          Biconjugate Gradient method              A.D.I. method
Δt = h    relative error   time (s)   order       relative error   time (s)   order
1/20      2.2056e-006      0.15       --          8.3317e-007      0.06       --
1/40      5.3968e-007      0.50       2.0310      2.0462e-007      0.18       2.0257
1/80      1.3306e-007      5.21       2.0200      5.0611e-008      0.85       2.0154
1/160     3.3048e-008      41.25      2.0095      1.2580e-008      5.55       2.0084

We analyze the previous examples with figures; the discussion covers order of accuracy and CPU time. Since the heat equation contains only pure second derivatives with respect to space, when we evaluate it at $t^{k+1/2}$ by the Crank-Nicolson scheme, the A.D.I. and iterative methods converge with second order accuracy. Figure 3 shows that the A.D.I., C.G., and B.I.C.G. methods meet the expected results. Next, we want to figure out which method is more efficient. For each method, the grid sizes are 1/20, 1/40, 1/80, and 1/160; with these four grid sizes, we construct polynomials of degree 3 to fit the data. We notice that the iterative methods need more and more time in the computing process, whereas the growth of the CPU time of A.D.I. is quite small; it appears nearly linear.

4.2 Comparison of iterative methods and the A.D.I. method

The mixed-derivative term is the only difference between the heat equation and the anisotropic diffusion equation. In the following examples, we illustrate how the mixed-derivative term affects these methods.

$$u_t = au_{xx} + 2bu_{xy} + cu_{yy}, \quad 0 < x, y < 1,\ t > 0, \quad\text{where } \beta = \begin{pmatrix}1 & -1\\ -1 & 3\end{pmatrix},$$

$$u(x, y, 0) = e^{x+y}, \quad 0 \le x, y \le 1, \qquad u(0, y, t) = e^{y+2t}, \quad u(1, y, t) = e^{y+1+2t},$$

Figure 3: Comparison of order of accuracy for cases 1, 2, 3, respectively. (The y-axis is order of accuracy; the x-axis is number of grid points.)

$$u(x, 0, t) = e^{x+2t}, \quad u(x, 1, t) = e^{x+1+2t}.$$

The exact solution is $u(x, y, t) = e^{x+y+2t}$.

          Biconjugate Gradient method              A.D.I. method
Δt = h    relative error   time (s)   order       relative error   time (s)   order
1/20      2.2459e-004      0.25       --          8.3664e-004      0.06       --
1/40      5.7107e-005      0.89       1.9755      4.0900e-004      0.16       1.0325
1/80      1.4421e-005      7.32       1.9855      2.0468e-004      0.90       1.0200
1/160     3.6279e-006      53.37      1.9910      1.0007e-004      5.63       1.0110

The second case is as follows.


Figure 4: CPU time of cases 1, 2, 3, respectively.

$$a = c = \frac{12}{\pi^2}\cos^2\left[\frac{\pi(x + y)}{6}\right], \qquad b = -\frac{3}{\pi^2}\cos^2\left[\frac{\pi(x + y)}{6}\right],$$

$$u(x, y, 0) = \tan\left[\frac{\pi(x + y)}{6}\right], \quad 0 < x, y < 1,$$

$$u(0, y, t) = e^t\tan\left[\frac{\pi y}{6}\right], \quad u(1, y, t) = e^t\tan\left[\frac{\pi(1 + y)}{6}\right], \quad u(x, 0, t) = e^t\tan\left[\frac{\pi x}{6}\right], \quad u(x, 1, t) = e^t\tan\left[\frac{\pi(1 + x)}{6}\right].$$

The exact solution is $u(x, y, t) = e^t\tan\left[\frac{\pi(x + y)}{6}\right]$.

          Biconjugate Gradient method              A.D.I. method
Δt = h    relative error   time (s)   order       relative error   time (s)   order
1/20      1.0265e-004      0.19       --          1.1431e-004      0.07       --
1/40      2.4869e-005      0.55       2.0453      6.3867e-005      0.14       0.8398
1/80      6.1116e-006      3.52       2.0247      3.3412e-005      0.89       0.9347

Figure 5: Order of accuracy of cases 4, 5, respectively. (The y-axis represents order of accuracy.)

For the A.D.I. method, we see that the order of accuracy is only first order, since we treat the mixed derivative term as explicit. The iterative methods still maintain second order accuracy, even though the mixed-derivative part makes the block tri-diagonal matrix more complex. The mixed-derivative term does not affect the CPU time of the A.D.I. method, because the structure of the tri-diagonal linear systems does not change at all. By fitting the data, we can see the tendency of the CPU time to increase; the result is given in figure 6.

Figure 6: CPU time of cases 4, 5, respectively.

4.3 Conclusion

Using the A.D.I. and iterative methods to solve heat equations without a mixed-derivative term, second order accuracy is shown in figure 3. In the calculating process, the iterative methods cost more time than the A.D.I. method, and this defect affects the efficiency of the simulation. Since the A.D.I. method solves two tri-diagonal systems during a single time step, its time consumption for the anisotropic diffusion problem is the same as for the heat equation. When we employ iterative methods on the same problem, the mixed-derivative term makes the coefficients of the matrix more complex; therefore the iterative methods take more time to deal with the large sparse matrix. If we need to refine the grid to search for more precise simulations, the A.D.I. method can outperform the iterative methods with speedy calculation.

Figure 7: A case satisfying both the heat and anisotropic diffusion settings; we fit curves to the CPU-time data of the A.D.I. and iterative methods and compare them.

References

[1] M. Benzi, C. D. Meyer and M. Tůma, "A sparse approximate inverse preconditioner for the conjugate gradient method", SIAM J. Sci. Comput., Vol. 17, No. 5, September (1996), pp. 1135-1149.

[2] R. M. Beam and R. F. Warming, "Alternating direction implicit methods for parabolic equations with a mixed derivative", SIAM J. Sci. Stat. Comput., Vol. 1, No. 1, March (1980).

[3] P. Concus, G. H. Golub and G. Meurant, "Block preconditioning for the conjugate gradient method", SIAM J. Sci. Stat. Comput., Vol. 6, No. 1, January (1985).

[4] J. Douglas, Jr. and J. E. Gunn, "A general formulation of alternating direction methods. Part I. Parabolic and hyperbolic problems", Numer. Math., Vol. 6 (1964), pp. 428-453.

[5] U. Diewald, T. Preußer and M. Rumpf, "Anisotropic diffusion in vector field visualization on Euclidean domains and surfaces", IEEE Transactions on Visualization and Computer Graphics, Vol. 6, No. 2, April-June (2000).

[6] J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization", SIAM J. Optimization, Vol. 2, No. 1, February (1992), pp. 21-42.

[7] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems", Journal of Research of the National Bureau of Standards, Vol. 49, No. 6, December (1952), Research Paper 2379.

[8] A. R. Mitchell and G. Fairweather, "Improved forms of the alternating direction methods of Douglas, Peaceman and Rachford for solving parabolic and elliptic equations", Numer. Math., Vol. 6 (1964), pp. 285-292.

[9] S. McKee and A. R. Mitchell, "Alternating direction methods for parabolic equations in two space dimensions with a mixed derivative", The Computer Journal, Vol. 13, No. 1, March (1970).

[10] S. McKee and A. R. Mitchell, "Alternating direction methods for parabolic equations in three space dimensions with a mixed derivative", The Computer Journal, Vol. 14, No. 3, February (1970).

[11] D. W. Peaceman and H. H. Rachford, Jr., "The numerical solution of parabolic and elliptic differential equations", J. Soc. Indust. Appl. Math., Vol. 3, No. 1, March (1955), pp. 28-41.

[12] T. K. Sarkar, "On the application of the generalized biconjugate gradient method", Journal of Electromagnetic Waves and Applications, Vol. 1, No. 3 (1987), pp. 223-242.

[13] O. B. Widlund, "On the stability of certain difference schemes", Math. Comp., Vol. 19 (1965), pp. 201-210.

[14] E. L. Wachspress and G. J. Habetler, "An alternating direction implicit iteration technique", J. Soc. Indust. Appl. Math., Vol. 8, No. 2, June (1960).

[15] L. Wang and J. Zhang, "A new stabilization strategy for incomplete LU preconditioning of indefinite matrices", Applied Mathematics and Computation, Vol. 144 (2003), pp. 75-87.
