
Discretized Form for Flame Sheet Model

We use finite difference approximations to discretize the governing equations of the flame sheet model at the grid points of the computational domain shown in Figure 1, which covers r = 0 to 7.5 cm in the radial direction and z = 0 to 30 cm in the axial direction. The diffusion and source terms are approximated with standard centered differences, while the convective terms in the axial direction are discretized with a monotonicity-preserving upwind scheme.
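To make the differencing concrete, the following is a minimal sketch of the two building blocks on a (possibly nonuniform) grid: a centered difference for diffusion-type terms and a first-order upwind difference for the axial convective term. The array and function names (phi, vz, centered_dr, upwind_dz) are illustrative only and are not taken from the thesis code.

```python
import numpy as np

def centered_dr(phi, r, i, j):
    """Centered difference for d(phi)/dr at an interior point (i, j)
    on a possibly nonuniform radial grid (used for diffusion and source terms)."""
    return (phi[i + 1, j] - phi[i - 1, j]) / (r[i + 1] - r[i - 1])

def upwind_dz(phi, z, vz, i, j):
    """First-order upwind approximation of the axial convective term vz * d(phi)/dz.
    The donor node is chosen from the sign of the axial velocity, which is what
    makes the scheme monotonicity preserving."""
    if vz[i, j] >= 0.0:
        dphi_dz = (phi[i, j] - phi[i, j - 1]) / (z[j] - z[j - 1])
    else:
        dphi_dz = (phi[i, j + 1] - phi[i, j]) / (z[j + 1] - z[j])
    return vz[i, j] * dphi_dz

# tiny demonstration on the domain used above: r in [0, 7.5 cm], z in [0, 30 cm]
r = np.linspace(0.0, 0.075, 31)
z = np.linspace(0.0, 0.30, 61)
phi = np.outer(r, z)                     # a smooth test field
vz = np.ones((r.size, z.size))           # uniform upward velocity
print(centered_dr(phi, r, 5, 10), upwind_dz(phi, z, vz, 5, 10))
```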

Next we write down the discretized form for each equation.

Radial Velocity :

(iv) For i = 2 ∼ (n − 1) and j = 2 ∼ (m − 1)
(ii) For i = 2 ∼ (n − 1) and j = 2 ∼ (m − 1)
(i) For i = 3 ∼ (n − 1) and j = 2 ∼ (m − 1)
(v) For i = 2 ∼ (n − 1) and j = 2 ∼ (m − 1)
(viii) For i = 2 ∼ (n − 1) and j = 2 ∼ (m − 1)
(ii) For i = 3 ∼ (n − 1) and j = 2 ∼ (m − 1)

Figure 1: Physical configuration for the diffusion flame model (not to scale). A central fuel inlet of radius RI is surrounded by a coflowing air annulus of radius RO; the computational domain extends to r = Rmax in the radial direction and z = L in the axial direction.

Outer Boundary (r = Rmax) :

$$\frac{\partial V_r}{\partial r} = 0 \;(\mathrm{i}), \qquad \frac{\partial V_z}{\partial r} = 0 \;(\mathrm{ii}), \qquad \omega = \frac{\partial V_r}{\partial z} \;(\mathrm{iii}), \qquad S = 0 \;(\mathrm{iv}).$$

(i) For i = n and j = 1 ∼ m

$$\frac{\partial V_r}{\partial r} \approx \frac{(V_r)_{n,j} - (V_r)_{n-1,j}}{r_n - r_{n-1}}.$$

(ii) For i = n and j = 1 ∼ m

$$\frac{\partial V_z}{\partial r} \approx \frac{(V_z)_{n,j} - (V_z)_{n-1,j}}{r_n - r_{n-1}}.$$

(iii) For i = n and j = 1 or m

$$\frac{\partial \omega}{\partial r} \approx \frac{\omega_{n,j} - \omega_{n-1,j}}{r_n - r_{n-1}}.$$

For i = n and j = 2 ∼ (m − 1)

$$\omega - \frac{\partial V_r}{\partial z} \approx \omega_{n,j} - \frac{(V_r)_{n,j+1} - (V_r)_{n,j-1}}{z_{j+1} - z_{j-1}}.$$

(iv) For i = n and j = 1 ∼ m

$$S \approx S_{n,j}.$$
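One natural way to use these one-sided boundary formulas is to evaluate them as residuals that the solver drives to zero; the sketch below does this for the outer boundary i = n, and the same pattern carries over to the axis and outlet boundaries treated next. The array layout (fields stored as n × m NumPy arrays) and the function name are assumptions for illustration, not the thesis code.

```python
import numpy as np

def outer_boundary_residuals(Vr, Vz, omega, S, r, z):
    """Discrete residuals of the outer-boundary conditions (i)-(iv) at i = n
    (last radial index); driving them to zero enforces the boundary conditions."""
    dr = r[-1] - r[-2]
    res_vr = (Vr[-1, :] - Vr[-2, :]) / dr          # (i)   dVr/dr = 0
    res_vz = (Vz[-1, :] - Vz[-2, :]) / dr          # (ii)  dVz/dr = 0
    res_om = np.empty(omega.shape[1])
    # (iii) corner points j = 1, m: one-sided radial difference of omega
    res_om[[0, -1]] = (omega[-1, [0, -1]] - omega[-2, [0, -1]]) / dr
    # (iii) interior j: omega - dVr/dz with a centered axial difference
    res_om[1:-1] = omega[-1, 1:-1] - (Vr[-1, 2:] - Vr[-1, :-2]) / (z[2:] - z[:-2])
    res_S = S[-1, :]                               # (iv)  S = 0
    return res_vr, res_vz, res_om, res_S
```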

Axis of Symmetry (r = 0) :

$$V_r = 0 \;(\mathrm{i}), \qquad \frac{\partial V_z}{\partial r} = 0 \;(\mathrm{ii}), \qquad \omega = 0 \;(\mathrm{iii}), \qquad \frac{\partial S}{\partial r} = 0 \;(\mathrm{iv}).$$

(i) For i = 1 and j = 2 ∼ (m − 1)

$$V_r \approx (V_r)_{1,j}.$$

(ii) For i = 1 and j = 2 ∼ (m − 1)

We use condition (ii) directly in the treatment of the axial velocity equation on the axis.

(iv) For i = 1 ∼ (n − 1) and j = 1

$$S - S_0(r) \approx S_{i,1} - S_0(r_i).$$

Outlet Boundary (z = L) :

$$V_r = 0 \;(\mathrm{i}), \qquad \frac{\partial V_z}{\partial z} = 0 \;(\mathrm{ii}), \qquad \frac{\partial \omega}{\partial z} = 0 \;(\mathrm{iii}), \qquad \frac{\partial S}{\partial z} = 0 \;(\mathrm{iv}).$$

(i) For i = 1 ∼ (n − 1) and j = m

$$V_r \approx (V_r)_{i,m}.$$

(ii) For i = 1 ∼ (n − 1) and j = m

$$\frac{\partial V_z}{\partial z} \approx \frac{(V_z)_{i,m} - (V_z)_{i,m-1}}{z_m - z_{m-1}}.$$

(iii) For i = 1 and j = m

$$\omega \approx \omega_{i,m}.$$

For i = 2 ∼ (n − 1) and j = m

$$\frac{\partial \omega}{\partial z} \approx \frac{\omega_{i,m} - \omega_{i,m-1}}{z_m - z_{m-1}}.$$

(iv) For i = 1 ∼ (n − 1) and j = m

$$\frac{\partial S}{\partial z} \approx \frac{S_{i,m} - S_{i,m-1}}{z_m - z_{m-1}}.$$

2 Krylov Subspace Methods

A Krylov subspace method is a method for which the subspace $K_m(A, r_0)$ of $\mathbb{R}^n$ is the Krylov subspace of the form

$$K_m(A, r_0) = \operatorname{span}\{r_0, Ar_0, A^2 r_0, \ldots, A^{m-1} r_0\},$$

where $x_0$ is an initial guess and $r_0 = b - Ax_0$ is the corresponding initial residual. The Krylov subspace has the useful property that a basis is cheap to extend by matrix-vector multiplication: given a basis for $K_m$, we can cheaply compute a basis for $K_{m+1}$. In this chapter we first introduce general projection methods, namely orthogonal and oblique projections, and establish two optimality results: an orthogonal projection minimizes the A-norm of the error when A is symmetric positive definite, while an oblique projection minimizes the 2-norm of the residual when A is an arbitrary square matrix. In the second section, we introduce Arnoldi's method for constructing the Krylov subspace; it is an orthogonal projection method onto K for general non-Hermitian matrices. In the last section, we introduce the full orthogonalization method and its variant, the incomplete orthogonalization method, and apply them to linear systems.
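To illustrate the cheap-extension property, the following minimal sketch builds an orthonormal basis of $K_m(A, r_0)$ one matrix-vector product at a time, orthogonalizing each new vector against the previous ones (essentially the Arnoldi / modified Gram-Schmidt process discussed later). It assumes no breakdown occurs, and all names are illustrative.

```python
import numpy as np

def krylov_basis(A, r0, m):
    """Orthonormal basis V = [v1, ..., vm] of K_m(A, r0).
    Extending K_k to K_{k+1} costs one matrix-vector product plus
    orthogonalization against the existing basis vectors."""
    V = np.zeros((r0.size, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for k in range(1, m):
        w = A @ V[:, k - 1]                  # one matrix-vector product
        for i in range(k):                   # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, k] = w / np.linalg.norm(w)      # assumes w != 0 (no breakdown)
    return V

# usage: the computed basis is orthonormal and spans K_5(A, r0)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
r0 = rng.standard_normal(50)
V = krylov_basis(A, r0, 5)
print(np.allclose(V.T @ V, np.eye(5)))       # True
```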

2.1 Projection Methods

Consider the linear system

$$Ax = b \qquad (2.1)$$

where A is an n × n real matrix (usually sparse). We will use projection methods to extract an approximation to the solution of the linear system. The idea is to restrict each step of an iterative method to a small subspace, but to pick the "best" step within that subspace. To reach this goal, we seek methods that compute the next iterate as cheaply as possible while still minimizing some measure of the error or residual at each step.

Let K and L be two m-dimensional subspaces of $\mathbb{R}^n$. A projection technique onto the subspace K and orthogonal to L finds an approximate solution $\tilde{x}$ to (2.1) by imposing two conditions: $\tilde{x}$ must lie in $x_0 + K$, and the residual vector must be orthogonal to L. Suppose we have a current guess $x_0$ and let the initial residual vector be $r_0 = b - Ax_0$. The projection method is expressed as

Find $\tilde{x} \in x_0 + K$, such that $\tilde{r} = b - A\tilde{x} \perp L$,

or equivalently

Find $\delta \in K$, such that $\tilde{r} = b - A(x_0 + \delta) = r_0 - A\delta \perp L$. $\qquad$ (2.2)

Let $V = [v_1 \cdots v_m]$ and $W = [w_1 \cdots w_m]$ be $n \times m$ matrices whose column vectors form bases for K and L, respectively. Then $\tilde{x} = x_0 + \delta = x_0 + Vy$ for some $y \in \mathbb{R}^m$. Thus, we can transform (2.2) into the matrix representation

$$W^T A V y = W^T r_0 \qquad (2.3)$$

If $W^T A V$ is nonsingular, then (2.3) has a unique solution. For this purpose, the following proposition describes two ideal cases.

Proposition 1. Let A, L, and K satisfy either one of the two following conditions:

i. A is positive definite and L = K, or

ii. A is nonsingular and L = AK.

Then the matrix $B = W^T A V$ is nonsingular for any bases V and W of K and L, respectively.

Proof. Consider the first case, and let V and W be arbitrary bases of K and L, respectively. Since L and K are the same, we may write W = V G, where G is an m × m nonsingular matrix. Then

$$B = W^T A V = G^T V^T A V.$$

Because $V^T A V$ is positive definite, it follows that B is nonsingular.

Consider the second case. Since L = AK, we may write W = AV G, where G is an m × m nonsingular matrix. Then

$$B = W^T A V = G^T (AV)^T A V.$$

Since A is nonsingular, the n × m matrix AV has full rank, so $(AV)^T AV$ is nonsingular, and hence B is nonsingular. □

When Proposition 1 holds, the approximate solution $\tilde{x}$ can be expressed as

$$\tilde{x} = x_0 + Vy = x_0 + V (W^T A V)^{-1} W^T r_0.$$
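The formula above translates directly into a generic projection step. The sketch below is a minimal implementation of (2.3), assuming the bases of K and L are supplied as the columns of V and W; all names and the random test data are illustrative.

```python
import numpy as np

def projection_step(A, b, x0, V, W):
    """One generic projection step: find x~ in x0 + range(V) whose residual
    r~ = b - A x~ is orthogonal to range(W), i.e. solve the small m x m
    system (W^T A V) y = W^T r0 and set x~ = x0 + V y (equation (2.3))."""
    r0 = b - A @ x0
    y = np.linalg.solve(W.T @ A @ V, W.T @ r0)   # assumes W^T A V nonsingular (Prop. 1)
    return x0 + V @ y

# quick check: the new residual is orthogonal to the columns of W
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30)); b = rng.standard_normal(30)
V = rng.standard_normal((30, 4)); W = rng.standard_normal((30, 4))
x_t = projection_step(A, b, np.zeros(30), V, W)
print(np.allclose(W.T @ (b - A @ x_t), 0))       # True
```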

A natural question is how good the approximate solution obtained from a general projection method is. To answer this question, two optimality results will be established.

Proposition 2. Assume that A is symmetric positive definite and L = K. Then a vector $\tilde{x}$ is the result of an (orthogonal) projection method onto K with the starting vector $x_0$ if and only if it minimizes the A-norm of the error over $x_0 + K$, that is,

$$E(\tilde{x}) = \min_{x \in x_0 + K} E(x),$$

where

$$E(x) \equiv \bigl(A(x^* - x),\, x^* - x\bigr)^{1/2} \equiv \|x^* - x\|_A,$$

and $x^*$ denotes the exact solution of (2.1).

Proof. Let $\tilde{x} = x_0 + \tilde{\delta}$, where $\tilde{\delta} \in K$. Then

$$\min_{x \in x_0 + K} \|x^* - x\|_A = \min_{\delta \in K} \|x^* - (x_0 + \delta)\|_A = \min_{\delta \in K} \|d_0 - \delta\|_A \qquad (\text{where } d_0 = x^* - x_0)$$

$$= \|d_0 - \tilde{\delta}\|_A.$$

Therefore, by the best-approximation property in the A-inner product, we have

$$d_0 - \tilde{\delta} \perp_A K.$$

In other words, $\tilde{\delta}$ is the A-orthogonal projection of $d_0$ onto K. □

Proposition 3. Let A be an arbitrary square matrix and assume that L = AK. Then a vector $\tilde{x}$ is the result of an (oblique) projection method onto K orthogonally to L with the starting vector $x_0$ if and only if it minimizes the 2-norm of the residual vector $b - Ax$ over $x \in x_0 + K$, that is,

$$R(\tilde{x}) = \min_{x \in x_0 + K} R(x),$$

where

$$R(x) \equiv \|b - Ax\|_2.$$

Proof. Let $\tilde{x} = x_0 + \tilde{\delta}$, where $\tilde{\delta} \in K$. Then

$$\min_{x \in x_0 + K} \|b - Ax\|_2 = \min_{\delta \in K} \|b - A(x_0 + \delta)\|_2 = \min_{A\delta \in AK} \|r_0 - A\delta\|_2 \qquad (\text{where } r_0 = b - Ax_0)$$

$$= \|r_0 - A\tilde{\delta}\|_2.$$

Therefore, we have

$$r_0 - A\tilde{\delta} \perp AK = L.$$

In other words, $A\tilde{\delta}$ is the orthogonal projection of $r_0$ onto AK. □

By Proposition 2, the result of an orthogonal projection process can be interpreted as the action of a projector on the initial error. Similarly, by Proposition 3, the result of the oblique projection process can be interpreted as the action of a projector on the initial residual. The following propositions state these conclusions precisely.

Proposition 4. Let $\tilde{x}$ be the approximate solution obtained from an orthogonal projection process onto K, and let $\tilde{d} = x^* - \tilde{x}$ be the associated error vector. Then,

$$\tilde{d} = (I - P_A) d_0,$$

where $P_A$ denotes the projector onto the subspace K, which is orthogonal with respect to the A-inner product.

Proof. This follows from Proposition 2. □

Proposition 5. Let $\tilde{x}$ be the approximate solution obtained from a projection process onto K orthogonally to L = AK, and let $\tilde{r} = b - A\tilde{x}$ be the associated residual. Then,

$$\tilde{r} = (I - P) r_0,$$

where P denotes the orthogonal projector onto the subspace AK.

Proof. This follows from Proposition 3. □
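As a small numerical check of Propositions 2 and 3 (and hence of Propositions 4 and 5), the sketch below builds a random symmetric positive definite system, computes the projection solutions for L = K and L = AK over the same subspace K, and verifies that no other trial point of $x_0 + K$ has a smaller A-norm error or residual 2-norm, respectively. The matrix sizes and names are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                        # symmetric positive definite
b = rng.standard_normal(n)
x0 = np.zeros(n)
x_star = np.linalg.solve(A, b)                     # exact solution
V = np.linalg.qr(rng.standard_normal((n, m)))[0]   # orthonormal basis of K

def project(W):
    """Projection solution x0 + V y with (W^T A V) y = W^T r0."""
    r0 = b - A @ x0
    return x0 + V @ np.linalg.solve(W.T @ A @ V, W.T @ r0)

x_galerkin = project(V)        # L = K   (Proposition 2)
x_minres = project(A @ V)      # L = AK  (Proposition 3)

a_norm = lambda v: np.sqrt(v @ (A @ v))
trials = [x0 + V @ rng.standard_normal(m) for _ in range(1000)]
print(all(a_norm(x_star - x_galerkin) <= a_norm(x_star - t) + 1e-10 for t in trials))
print(all(np.linalg.norm(b - A @ x_minres) <= np.linalg.norm(b - A @ t) + 1e-10
          for t in trials))
```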
