
Convergence Rates of Iterative Solutions of Algebraic Matrix Riccati Equations

Jonq Juang
Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan

and

Paul Nelson
Department of Computer Science, Texas A&M University, College Station, TX 77843

Transmitted by Melvin Scott

ABSTRACT

We consider the iterative solutions of a certain class of algebraic matrix Riccati equations with two parameters, $c$ ($0 < c \le 1$) and $\alpha$ ($0 \le \alpha < 1$). Here $c$ denotes the fraction of scattering per collision and $\alpha$ is an angular shift. Equations of this class are induced via invariant imbedding and the shifted Gauss-Legendre quadrature formula from a "simple transport model."

The purpose of this paper is to describe the effects of the parameters $c$, $\alpha$, and $N$ (the dimension of the matrix) on the convergence rates of the iterative solutions. We also compare the convergence rates of those iterative methods.

1. INTRODUCTION

Consider the algebraic matrix Riccati equation of the form

$$B - AS - SD + SCS = 0. \tag{1}$$

APPLIED MATHEMATICS AND COMPUTATION 72:125-142 (1995)
© Elsevier Science Inc., 1995    0096-3003/95/$9.50

Here $A$, $B$, $C$, and $D$ are matrices of appropriate dimensions having the following structure:

$$A_{N^- \times N^-} = \operatorname{diag}\left[\frac{1}{c(w_1^- + \alpha)}, \frac{1}{c(w_2^- + \alpha)}, \ldots, \frac{1}{c(w_{N^-}^- + \alpha)}\right] - e\,(q^-)^T =: D_A - e\,(q^-)^T,$$

$$D_{N^+ \times N^+} = \operatorname{diag}\left[\frac{1}{c(w_1^+ - \alpha)}, \frac{1}{c(w_2^+ - \alpha)}, \ldots, \frac{1}{c(w_{N^+}^+ - \alpha)}\right] - q^+ e^T =: D_D - q^+ e^T,$$

$$B = e\,e^T, \qquad C = q^+ (q^-)^T,$$

where $e = (1, 1, \ldots, 1)^T$ and

$$q^- = \left[\frac{c_1^-}{2(w_1^- + \alpha)}, \ldots, \frac{c_{N^-}^-}{2(w_{N^-}^- + \alpha)}\right]^T, \qquad q^+ = \left[\frac{c_1^+}{2(w_1^+ - \alpha)}, \ldots, \frac{c_{N^+}^+}{2(w_{N^+}^+ - \alpha)}\right]^T.$$

Equation (1) contains two parameters, $c$ and $\alpha$. Here $c$ denotes the average total number of particles emerging from a collision, which is assumed to be conservative, i.e., $0 < c \le 1$, and $\alpha$ ($0 \le \alpha < 1$) is an angular shift. The dimensionally dependent quantities $w_i^-$ and $w_j^+$ denote the Gauss-Legendre sets on $[-\alpha, 1]$ and $[\alpha, 1]$, respectively, and $c_i^-$ and $c_j^+$ are their corresponding weights. Such an equation is induced via invariant imbedding (see, e.g., [1, 2]) and the shifted Gauss-Legendre quadrature formula (see, e.g., [3]) from a "simple transport model" [4, 5].

For $\alpha = 0$, two iterative procedures, one corresponding to a nonlinear version of the Gauss-Jacobi method (GJ) and the other associated with a nonlinear version of the Gauss-Seidel method (GS), were proposed, respectively, by Shimizu and Aoki [6] and by Juang and Lin [7]. Sufficient conditions for the convergence of the GJ and GS methods were given in [8] and [7], respectively. It was noted (see [9, Table 2]) that those sufficient conditions fail if $c$ is not far away from 1. It was also observed (see [9, Theorem 1]) that both iterations converge as long as (1) with $\alpha = 0$ has a nonnegative solution (in the componentwise sense). This observation extends easily to the case $\alpha \neq 0$. Physically, one would expect that (1) has a nonnegative solution for all $0 < c \le 1$ and $0 \le \alpha < 1$; this was recently proved in [10]. Therefore, we need not worry about the question of convergence in this article.

The purpose of this work is to analyze the behavior of the convergence rates of GJ and GS as the parameters $c$, $\alpha$, and $N^\pm$ vary. In particular, we show that for fixed $\alpha$ and $N^\pm$ the two methods GJ and GS converge more slowly as $c$ increases, and that for fixed $c$ and $N^\pm$ they converge faster as $\alpha$ increases on $[\alpha^*, 1]$, for some $0 < \alpha^* < 1$. Some estimates for the convergence rates of both methods are obtained. Furthermore, we show that the GS method indeed converges no more slowly than the GJ method. Finally, some numerical results and concluding remarks are presented.

Since for $\alpha \approx 0$ and $c \approx 1$ these iterative procedures are extremely slow, relaxation methods such as the Jacobi overrelaxation (JOR) and successive overrelaxation (SOR) methods are more desirable. However, the convergence of the relaxation methods is difficult to prove, and the search for the optimal $\omega$ could be too expensive to be practical. Our analysis is a step toward understanding the phenomenon of slow convergence as $c \to 1$ and $\alpha \to 0$, and can, we hope, be put to use in developing better algorithms such as multilevel methods.

2. FORMULATION

We first rewrite (1) as

$$S_{ij} = c\, r_{ij}\, \bigl(1 + ((q^-)^T S)_j\bigr)\bigl(1 + (S q^+)_i\bigr), \qquad r_{ij} := \frac{(w_i^- + \alpha)(w_j^+ - \alpha)}{w_i^- + w_j^+}. \tag{2}$$

In component form, (2) is

$$S_{ij} = c\, r_{ij} \left[1 + \frac{1}{2} \sum_{k=1}^{N^-} \frac{c_k^- S_{kj}}{w_k^- + \alpha}\right] \left[1 + \frac{1}{2} \sum_{k=1}^{N^+} \frac{c_k^+ S_{ik}}{w_k^+ - \alpha}\right] =: W_J(i, j, S). \tag{3}$$

The Gauss-Jacobi iteration is then defined as follows:

$$S_{ij}^{(p+1)} = W_J(i, j, S^{(p)}). \tag{4}$$

The Gauss-Seidel iteration can be formulated as

$$S_{ij}^{(p+1)} = c\, r_{ij} \left[1 + \frac{1}{2} \sum_{k=1}^{i-1} \frac{c_k^- S_{kj}^{(p+1)}}{w_k^- + \alpha} + \frac{1}{2} \sum_{k=i}^{N^-} \frac{c_k^- S_{kj}^{(p)}}{w_k^- + \alpha}\right] \left[1 + \frac{1}{2} \sum_{k=1}^{j-1} \frac{c_k^+ S_{ik}^{(p+1)}}{w_k^+ - \alpha} + \frac{1}{2} \sum_{k=j}^{N^+} \frac{c_k^+ S_{ik}^{(p)}}{w_k^+ - \alpha}\right] =: W_S(i, j, p). \tag{5}$$

Consequently, the JOR and SOR iterations are, respectively,

$$T_{ij}^{(p+1)} = W_J(i, j, S^{(p)}), \qquad S_{ij}^{(p+1)} = \omega T_{ij}^{(p+1)} + (1 - \omega) S_{ij}^{(p)}, \tag{6a, 6b}$$

and

$$T_{ij}^{(p+1)} = W_S(i, j, p), \qquad S_{ij}^{(p+1)} = \omega T_{ij}^{(p+1)} + (1 - \omega) S_{ij}^{(p)}. \tag{7a, 7b}$$

Since we are only interested in positive solutions, the initial iterate for the procedures described above is defined to be

$$S_{ij}^{(0)} = 0 \quad \text{for all } i, j. \tag{8}$$

To effectively analyze the convergence rates of both methods, the following transformation of $S_{ij}$ into $R_{ij}$ is essential, for reasons we shall detail later. In the case of the Gauss-Jacobi method, consider the following iteration $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$:

$$R_{ij}^{(p+1)} = \left[1 + \frac{c}{2} \sum_{k=1}^{N^-} \frac{c_k^- (w_j^+ - \alpha)\, R_{kj}^{(p)}}{w_j^+ + w_k^-}\right] \left[1 + \frac{c}{2} \sum_{k=1}^{N^+} \frac{c_k^+ (w_i^- + \alpha)\, R_{ik}^{(p)}}{w_i^- + w_k^+}\right] =: U_J(i, j, p), \tag{9a}$$

$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j. \tag{9b}$$

The JOR version of the iteration $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ is then defined as

$$T_{ij}^{(p+1)} = U_J(i, j, p), \tag{10a}$$
$$R_{ij}^{(p+1)} = \omega T_{ij}^{(p+1)} + (1 - \omega) R_{ij}^{(p)}, \tag{10b}$$
$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j. \tag{10c}$$

Similarly, the Gauss-Seidel and the SOR forms of the iteration $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ can be formulated, respectively, as follows:

$$R_{ij}^{(p+1)} = \left[1 + \frac{c}{2} \sum_{k=1}^{i-1} \frac{c_k^- (w_j^+ - \alpha)\, R_{kj}^{(p+1)}}{w_j^+ + w_k^-} + \frac{c}{2} \sum_{k=i}^{N^-} \frac{c_k^- (w_j^+ - \alpha)\, R_{kj}^{(p)}}{w_j^+ + w_k^-}\right] \times \left[1 + \frac{c}{2} \sum_{k=1}^{j-1} \frac{c_k^+ (w_i^- + \alpha)\, R_{ik}^{(p+1)}}{w_i^- + w_k^+} + \frac{c}{2} \sum_{k=j}^{N^+} \frac{c_k^+ (w_i^- + \alpha)\, R_{ik}^{(p)}}{w_i^- + w_k^+}\right] =: U_S(i, j, p), \tag{11a}$$

$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j; \tag{11b}$$

and

$$T_{ij}^{(p+1)} = U_S(i, j, p), \tag{12a}$$
$$R_{ij}^{(p+1)} = \omega T_{ij}^{(p+1)} + (1 - \omega) R_{ij}^{(p)}, \tag{12b}$$
$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j. \tag{12c}$$

REMARKS

(1) The existence and multiplicity of the positive solutions of (1) have been addressed in [10]. Since the iteration procedures defined by (4), (5), and (8) are monotonically increasing, any positive solution of (1) is an upper bound for both iterations. Therefore, the convergence of such iterations, as well as of the iterations defined by (9) and (11), is assured. We shall denote the limits of the iterations $\{S_{ij}^{(p)}\}_{p=0}^{\infty}$ and $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ by $S_{ij}^{(\infty)}$ and $R_{ij}^{(\infty)}$, respectively. Here, the matrix $R^{(\infty)} = (R_{ij}^{(\infty)})$ is a positive solution of the following equation:

$$R_{ij} = \left[1 + \frac{c}{2} \sum_{k=1}^{N^-} \frac{c_k^- (w_j^+ - \alpha)\, R_{kj}}{w_j^+ + w_k^-}\right] \left[1 + \frac{c}{2} \sum_{k=1}^{N^+} \frac{c_k^+ (w_i^- + \alpha)\, R_{ik}}{w_i^- + w_k^+}\right]. \tag{13}$$

(2) Noting, via induction, that $S_{ij}^{(p)} \le S_{ij}$ for all $i$, $j$, and $p$, we conclude that $S^{(\infty)} = (S_{ij}^{(\infty)})$ is the minimal positive solution of (1) in the sense that if $S$ is any positive solution of (1), then $S_{ij} \ge S_{ij}^{(\infty)}$ for all $i, j$. Similarly, $R^{(\infty)}$ is the minimal positive solution of (13).

The relationship between the iterations $\{S_{ij}^{(p)}\}_{p=0}^{\infty}$ and $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ is provided by the following lemma.

LEMMA 1. The Gauss-Jacobi, Gauss-Seidel, JOR, and SOR versions of $\{S_{ij}^{(p)}\}_{p=0}^{\infty}$ are, respectively, related to those of $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ by the following formula:

$$S_{ij}^{(p+1)} = c\, r_{ij}\, R_{ij}^{(p)} \quad \text{for all } i, j \text{ and all } p = 0, 1, 2, \ldots, \infty. \tag{14}$$

Consequently, for each $i$, $j$ the iteration $\{S_{ij}^{(p)}\}_{p=0}^{\infty}$ enjoys the same convergence rate as its corresponding iteration $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$.

PROOF. Since the proof leading to the assertion of the lemma is the same for each iterative procedure, we shall only illustrate the SOR method. For $p = 0$ and all $i, j$, clearly

$$S_{ij}^{(1)} = c\, r_{ij} = c\, r_{ij}\, R_{ij}^{(0)}.$$

Suppose (14) is true up to $p = n + 1$ and, for $p = n + 1$, for all entries preceding $(k, \ell)$ in the sweep order. Then

$$S_{k\ell}^{(n+1)} = \omega W_S(k, \ell, n) + (1 - \omega) S_{k\ell}^{(n)} = c\, r_{k\ell}\, \omega U_S(k, \ell, n-1) + (1 - \omega)\, c\, r_{k\ell}\, R_{k\ell}^{(n-1)} = c\, r_{k\ell} \left(\omega T_{k\ell}^{(n)} + (1 - \omega) R_{k\ell}^{(n-1)}\right) = c\, r_{k\ell}\, R_{k\ell}^{(n)}.$$

The second equality above is justified by the induction hypothesis. This proves the first assertion of the lemma; the last assertion is then immediate. ∎
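As a concrete illustration of Lemma 1, the sketch below runs the Gauss-Jacobi sweep (4) on $S$ and the transformed sweep (9) on $R$ side by side and checks the relation (14) at every step. The 2-point Gauss-Legendre rule (nodes $\pm 1/\sqrt{3}$, weights 1) and the parameter values $c = 0.5$, $\alpha = 0.1$ are illustrative assumptions, not the experiments of Section 4.

```python
# Sketch (not the authors' code): Gauss-Jacobi sweeps (4) on S and (9) on R,
# checked against the relation (14), S_ij^(p+1) = c * r_ij * R_ij^(p).
import math

c, alpha = 0.5, 0.1
x = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]   # 2-point Gauss nodes on [-1, 1]
d = [1.0, 1.0]                                      # their weights
N = 2

# Shifted sets and weights, per (15a)-(15b).
wm = [(1 - alpha + xi * (1 + alpha)) / 2 for xi in x]   # w_k^- on [-alpha, 1]
wp = [(1 + alpha + xi * (1 - alpha)) / 2 for xi in x]   # w_k^+ on [alpha, 1]
cm = [(1 + alpha) * di / 2 for di in d]
cp = [(1 - alpha) * di / 2 for di in d]

def r(i, j):
    return (wm[i] + alpha) * (wp[j] - alpha) / (wm[i] + wp[j])

def gj_step_S(S):
    # one Gauss-Jacobi sweep (4) applied to the component form (3)
    return [[c * r(i, j)
             * (1 + 0.5 * sum(cm[k] * S[k][j] / (wm[k] + alpha) for k in range(N)))
             * (1 + 0.5 * sum(cp[k] * S[i][k] / (wp[k] - alpha) for k in range(N)))
             for j in range(N)] for i in range(N)]

def gj_step_R(R):
    # one Gauss-Jacobi sweep (9a) on the transformed unknowns
    return [[(1 + 0.5 * sum(c * cm[k] * (wp[j] - alpha) / (wp[j] + wm[k]) * R[k][j]
                            for k in range(N)))
             * (1 + 0.5 * sum(c * cp[k] * (wm[i] + alpha) / (wm[i] + wp[k]) * R[i][k]
                              for k in range(N)))
             for j in range(N)] for i in range(N)]

S = [[0.0] * N for _ in range(N)]   # initial iterate (8)
R = [[1.0] * N for _ in range(N)]   # initial iterate (9b)
for p in range(30):
    S = gj_step_S(S)                # S^(p+1)
    # (14): S^(p+1) should equal c * r_ij * R^(p)
    err = max(abs(S[i][j] - c * r(i, j) * R[i][j]) for i in range(N) for j in range(N))
    assert err < 1e-10
    R = gj_step_R(R)                # R^(p+1)
```

For this modest value of $c$ both sequences settle quickly; the point of the sketch is only that the two iterations track each other exactly through the factor $c\,r_{ij}$.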

To further simplify the notation, the quadrature sets $\{w_i^-\}_{i=1}^{N^-}$ and $\{w_i^+\}_{i=1}^{N^+}$ and their corresponding weights $\{c_i^-\}_{i=1}^{N^-}$ and $\{c_i^+\}_{i=1}^{N^+}$ over the intervals $[-\alpha, 1]$ and $[\alpha, 1]$, respectively, are to be normalized to the standard interval $[-1, 1]$. In particular, suppose $\{x_i^-\}_{i=1}^{N^-}$ and $\{x_i^+\}_{i=1}^{N^+}$ are the quadrature sets and $\{d_i^-\}_{i=1}^{N^-}$ and $\{d_i^+\}_{i=1}^{N^+}$ are their associated quadrature weights over the interval $[-1, 1]$. Then, for all $i$,

$$w_i^- = \frac{1 - \alpha + x_i^-(1 + \alpha)}{2}, \qquad w_i^+ = \frac{1 + \alpha + x_i^+(1 - \alpha)}{2}, \tag{15a}$$

$$c_i^- = \frac{(1 + \alpha)\, d_i^-}{2}, \qquad c_i^+ = \frac{(1 - \alpha)\, d_i^+}{2}. \tag{15b}$$

Without loss of generality, we shall assume henceforth that

$$-1 < x_1^+ < x_2^+ < \cdots < x_{N^+}^+ < 1 \quad \text{and} \quad -1 < x_1^- < x_2^- < \cdots < x_{N^-}^- < 1. \tag{16}$$
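The mapping (15) can be exercised directly. The sketch below (an illustrative check, with the 2-point Gauss-Legendre rule hardcoded) shifts the standard nodes and weights onto $[-\alpha, 1]$ and $[\alpha, 1]$ and verifies that the shifted weights sum to the interval lengths and that the shifted rule still integrates low-degree polynomials exactly.

```python
# A small check of the normalization (15a)-(15b); the 2-point Gauss-Legendre
# rule on [-1, 1] (nodes +/- 1/sqrt(3), weights 1) is hardcoded for brevity.
import math

alpha = 0.3
x = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]
d = [1.0, 1.0]

wm = [(1 - alpha + xi * (1 + alpha)) / 2 for xi in x]   # (15a): nodes on [-alpha, 1]
wp = [(1 + alpha + xi * (1 - alpha)) / 2 for xi in x]   # (15a): nodes on [alpha, 1]
cm = [(1 + alpha) * di / 2 for di in d]                 # (15b)
cp = [(1 - alpha) * di / 2 for di in d]                 # (15b)

# The shifted nodes stay inside their intervals and keep the ordering (16).
assert all(-alpha < w < 1 for w in wm) and wm[0] < wm[1]
assert all(alpha < w < 1 for w in wp) and wp[0] < wp[1]

# The shifted weights sum to the interval lengths, 1 + alpha and 1 - alpha.
assert math.isclose(sum(cm), 1 + alpha)
assert math.isclose(sum(cp), 1 - alpha)

# The shifted rule remains exact for low-degree polynomials, e.g. x^2 on [-alpha, 1]:
exact = (1 + alpha ** 3) / 3        # integral of x^2 over [-alpha, 1]
assert math.isclose(sum(cm[k] * wm[k] ** 2 for k in range(2)), exact)
```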

Substituting (15) into (9), (10), (11), and (12), we obtain, respectively,

$$R_{ij}^{(p+1)} = \left[1 + \frac{c}{4} \sum_{k=1}^{N^-} d_k^- \frac{(1 - \alpha^2)(1 + x_j^+)}{2 + x_j^+(1 - \alpha) + x_k^-(1 + \alpha)}\, R_{kj}^{(p)}\right] \left[1 + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+ \frac{(1 - \alpha^2)(1 + x_i^-)}{2 + x_i^-(1 + \alpha) + x_k^+(1 - \alpha)}\, R_{ik}^{(p)}\right]$$
$$=: \left[1 + \frac{c}{4} \sum_{k=1}^{N^-} d_k^-\, g_j(x_k^-)\, R_{kj}^{(p)}\right] \left[1 + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+\, f_i(x_k^+)\, R_{ik}^{(p)}\right] =: \bar{U}_J(i, j, p), \tag{17a}$$

$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j; \tag{17b}$$

and

$$T_{ij}^{(p+1)} = \bar{U}_J(i, j, p), \tag{18a}$$
$$R_{ij}^{(p+1)} = \omega T_{ij}^{(p+1)} + (1 - \omega) R_{ij}^{(p)}, \tag{18b}$$
$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j; \tag{18c}$$

$$R_{ij}^{(p+1)} = \left[1 + \frac{c}{4} \sum_{k=1}^{i-1} d_k^- g_j(x_k^-) R_{kj}^{(p+1)} + \frac{c}{4} \sum_{k=i}^{N^-} d_k^- g_j(x_k^-) R_{kj}^{(p)}\right] \times \left[1 + \frac{c}{4} \sum_{k=1}^{j-1} d_k^+ f_i(x_k^+) R_{ik}^{(p+1)} + \frac{c}{4} \sum_{k=j}^{N^+} d_k^+ f_i(x_k^+) R_{ik}^{(p)}\right] =: \bar{U}_S(i, j, p), \tag{19a}$$

$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j; \tag{19b}$$

and

$$T_{ij}^{(p+1)} = \bar{U}_S(i, j, p), \tag{20a}$$
$$R_{ij}^{(p+1)} = \omega T_{ij}^{(p+1)} + (1 - \omega) R_{ij}^{(p)}, \tag{20b}$$
$$R_{ij}^{(0)} = 1 \quad \text{for all } i, j. \tag{20c}$$
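For illustration, the normalized sweeps (17) and (19) can be sketched as follows. The parameter values $c = 0.9$, $\alpha = 0.1$, $N = 2$ and the hardcoded 2-point Gauss-Legendre rule are assumptions for the sketch; the only structural difference between the two sweeps is whether already-updated entries are reused within a pass.

```python
# Sketch comparing the Gauss-Jacobi sweep (17) with the Gauss-Seidel sweep (19)
# on the normalized unknowns R_ij.
import math

c, alpha, N = 0.9, 0.1, 2
x = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]   # 2-point Gauss nodes
d = [1.0, 1.0]                                      # their weights

def g(j, xv):   # g_j(x), with x_j^+ identified with x[j]
    return (1 - alpha ** 2) * (1 + x[j]) / (2 + x[j] * (1 - alpha) + xv * (1 + alpha))

def f(i, xv):   # f_i(x), with x_i^- identified with x[i]
    return (1 - alpha ** 2) * (1 + x[i]) / (2 + x[i] * (1 + alpha) + xv * (1 - alpha))

def sweep(R, gauss_seidel):
    Rn = [row[:] for row in R]
    for i in range(N):
        for j in range(N):
            src = Rn if gauss_seidel else R   # (19) reuses updated entries, (17) does not
            a = 1 + (c / 4) * sum(d[k] * g(j, x[k]) * src[k][j] for k in range(N))
            b = 1 + (c / 4) * sum(d[k] * f(i, x[k]) * src[i][k] for k in range(N))
            Rn[i][j] = a * b
    return Rn

def solve(gauss_seidel, tol=1e-11):
    R = [[1.0] * N for _ in range(N)]
    for p in range(10000):
        Rn = sweep(R, gauss_seidel)
        if max(abs(Rn[i][j] - R[i][j]) for i in range(N) for j in range(N)) < tol:
            return Rn, p + 1
        R = Rn
    raise RuntimeError("no convergence")

R_gj, n_gj = solve(False)
R_gs, n_gs = solve(True)
assert n_gs <= n_gj      # Gauss-Seidel is never slower here
assert max(abs(R_gj[i][j] - R_gs[i][j]) for i in range(N) for j in range(N)) < 1e-8
```

Both sweeps converge monotonically upward from the initial iterate of all ones to the same minimal positive solution; the Gauss-Seidel sweep does so in fewer passes, in line with Theorem 1 below.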

3. THE MAIN RESULTS

Our objective in this section is to study the effects of the parameters $c$ and $\alpha$, and of the dimension, on the iterative procedures defined by (17)-(20).

LEMMA 2. Let $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ be the sequence defined by (17), and let $R_{\max}^{(p)} := \max_{i,j} R_{ij}^{(p)}$ and $R_{\min}^{(p)} := \min_{i,j} R_{ij}^{(p)}$. Then the following hold:

(i) For fixed $i$, $j$, $\alpha$, and $N^\pm$, $R_{ij}^{(p)}$ is increasing with respect to $c$ for $p = 1, 2, \ldots, \infty$.

(ii) For fixed $c$, $\alpha$, and $j$, $R_{ij}^{(p)}$ is increasing with respect to $i$ for all $p = 1, 2, \ldots, \infty$. Likewise, for fixed $c$, $\alpha$, and $i$, $R_{ij}^{(p)}$ is increasing with respect to $j$.

(iii) For fixed $c$ and $N^\pm$, there exists an $\alpha^*$, $0 < \alpha^* < 1$, such that $R_{ij}^{(p)}$ is decreasing with respect to $\alpha$ on $[\alpha^*, 1]$ for all $i$, $j$ and $p = 1, 2, \ldots, \infty$.

(iv) For $\alpha = 0$ and $N^- = N^+$, $R^{(\infty)}$ is a symmetric matrix.

PROOF. The assertions (i) and (ii) follow from (17a) and an induction on $p$. To see (iii), differentiating $f_i(x_k^+)$ with respect to $\alpha$, we obtain

$$\frac{\partial f_i(x_k^+)}{\partial \alpha} = \frac{(1 + x_i^-)\left[(x_k^+ - x_i^-)\alpha^2 - 2\alpha(2 + x_i^- + x_k^+) + (x_k^+ - x_i^-)\right]}{\left(2 + x_i^-(1 + \alpha) + x_k^+(1 - \alpha)\right)^2},$$

so that

$$\left.\frac{\partial f_i(x_k^+)}{\partial \alpha}\right|_{\alpha = 1} = \left.\frac{-4(1 + x_i^-)^2}{\left(2 + x_i^-(1 + \alpha) + x_k^+(1 - \alpha)\right)^2}\right|_{\alpha = 1} < 0.$$

Similarly, $\partial g_j(x_k^-)/\partial \alpha|_{\alpha = 1} < 0$. An induction on $p$ from (17a) then gives (iii) as asserted. To prove (iv), we see, again, that an induction gives $R_{ij}^{(p)} = R_{ji}^{(p)}$ for all $i$, $j$, and $p$, and hence $R^{(\infty)}$ is symmetric. ∎

REMARKS. (1) From computational data, it is expected that for $i \le j$, $R_{ij}^{(\infty)}$ is decreasing in $\alpha$ on $[0, 1]$, and that for $i > j$, $R_{ij}^{(\infty)}$ is increasing in $\alpha$ on $[0, \bar{\alpha}]$ and decreasing in $\alpha$ on $[\bar{\alpha}, 1]$. Here $\bar{\alpha}$ depends on $i$ and $j$ and lies between 0 and 1. (2) Assertions similar to (i)-(iii) of Lemma 2 hold for the iteration $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ defined by (19).

Let $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ be the sequence defined by (17), and recall that $\{R_{ij}^{(p)}\}_{p=0}^{\infty}$ converges monotonically upward to the minimal positive solution $R^{(\infty)}$ of (13). Set

$$e_{ij}^{(p)} := R_{ij}^{(\infty)} - R_{ij}^{(p)}, \tag{21}$$

and write $\bar{f}_i := 1 + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+ f_i(x_k^+) R_{ik}^{(\infty)}$ and $\bar{g}_j := 1 + \frac{c}{4} \sum_{k=1}^{N^-} d_k^- g_j(x_k^-) R_{kj}^{(\infty)}$. Subtracting (17a) from the corresponding equation for $R^{(\infty)}$ and using the monotonicity $R^{(p)} \le R^{(\infty)}$, we obtain

$$e_{ij}^{(p+1)} \le \frac{c}{4} \sum_{k=1}^{N^-} d_k^-\, g_j(x_k^-)\, \bar{f}_i\, e_{kj}^{(p)} + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+\, f_i(x_k^+)\, \bar{g}_j\, e_{ik}^{(p)}. \tag{22}$$

Let $E = (e_{11}, e_{12}, \ldots, e_{1N^+}, e_{21}, \ldots, e_{2N^+}, \ldots, e_{N^-1}, \ldots, e_{N^-N^+})^T$. The inequality (22) can be written as

$$E^{(p+1)} \le G_J E^{(p)}.$$

Here $G_J$ is a large and sparse nonnegative matrix with the following block structure:

$$G_J = \frac{c}{4} \begin{bmatrix} D_{11} + B_1 & D_{12} & \cdots & D_{1N^-} \\ D_{21} & D_{22} + B_2 & \cdots & D_{2N^-} \\ \vdots & & \ddots & \vdots \\ D_{N^-1} & \cdots & & D_{N^-N^-} + B_{N^-} \end{bmatrix},$$

where $D_{ik} = d_k^-\, \bar{f}_i\, \operatorname{diag}\left[g_1(x_k^-), g_2(x_k^-), \ldots, g_{N^+}(x_k^-)\right]$ and $B_i = \left(\bar{g}_j\, d_l^+\, f_i(x_l^+)\right)_{N^+ \times N^+}$. In the case of the Gauss-Seidel iteration, one can similarly obtain

$$e_{ij}^{(p+1)} \le \frac{c}{4} \sum_{k=1}^{i-1} d_k^- g_j(x_k^-)\, \bar{f}_i\, e_{kj}^{(p+1)} + \frac{c}{4} \sum_{k=1}^{j-1} d_k^+ f_i(x_k^+)\, \bar{g}_j\, e_{ik}^{(p+1)} + \frac{c}{4} \sum_{k=i}^{N^-} d_k^- g_j(x_k^-)\, \bar{f}_i\, e_{kj}^{(p)} + \frac{c}{4} \sum_{k=j}^{N^+} d_k^+ f_i(x_k^+)\, \bar{g}_j\, e_{ik}^{(p)}. \tag{23}$$

Let $G_J = D + L + U$, where $L$ and $U$ are the strictly lower and strictly upper triangular parts of $G_J$. Then (23) in matrix form is

$$E^{(p+1)} \le G_S E^{(p)}, \qquad \text{where } G_S = (I - L)^{-1}(D + U).$$
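The splitting $G = D + L + U$ with companion $(I - L)^{-1}(D + U)$ can be illustrated on a generic nonnegative matrix (the $2 \times 2$ entries below are made up for the sketch; the spectral radii are computed by plain power iteration, which converges for small nonnegative matrices).

```python
# Illustrative sketch (not from the paper): for a nonnegative matrix G_J split as
# G_J = D + L + U, the Gauss-Seidel companion G_S = (I - L)^{-1}(D + U) has
# spectral radius no larger than that of G_J.

def mat_vec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def spectral_radius(A, iters=2000):
    # power iteration with max-norm scaling; adequate for small nonnegative matrices
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(abs(t) for t in w)
        v = [t / lam for t in w]
    return lam

GJ = [[0.2, 0.3],
      [0.4, 0.1]]
DU = [[0.2, 0.3],          # D + U: diagonal and strictly upper part of GJ
      [0.0, 0.1]]
I_minus_L_inv = [[1.0, 0.0],   # (I - L)^{-1} = I + L for this 2x2 strictly lower L
                 [0.4, 1.0]]
GS = [[sum(I_minus_L_inv[i][k] * DU[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

rho_j = spectral_radius(GJ)
rho_s = spectral_radius(GS)
assert rho_s <= rho_j + 1e-9   # Gauss-Seidel companion is no worse
assert abs(rho_j - 0.5) < 1e-6  # eigenvalues of GJ are 0.5 and -0.2
```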

DEFINITION 1. The convergence rates of the Gauss-Jacobi and the Gauss-Seidel methods are defined to be the spectral radii $\rho(G_J)$ and $\rho(G_S)$ of $G_J$ and $G_S$, respectively. To emphasize the dependence of $\rho(G_J)$ on the parameters, we shall denote $\rho(G_J)$ by $CGJ(c, \alpha, N^\pm)$; likewise, $\rho(G_S)$ by $CGS(c, \alpha, N^\pm)$.

REMARK. As noted earlier, the iterations defined by the Gauss-Jacobi and the Gauss-Seidel methods converge to the minimal positive solution of (13). Therefore, it is reasonable to assume from here on that $\rho(G_J)$ and $\rho(G_S)$ are no greater than 1.

We are now ready to state our first main result.

THEOREM 1.

(i) For fixed $c$, $\alpha$, and $N^\pm$, the Gauss-Seidel iteration always converges no slower than the Gauss-Jacobi method, i.e., $CGJ(c, \alpha, N^\pm) \ge CGS(c, \alpha, N^\pm)$.

(ii) For fixed $\alpha$ and $N^\pm$, $CGJ$ and $CGS$ are increasing in $c$.

(iii) For fixed $c$ and $N^\pm$, $CGJ$ and $CGS$ are decreasing in $\alpha$ for $\alpha \in [\alpha^*, 1]$, where $\alpha^*$ is as in Lemma 2(iii).

PROOF. We first note that $G_J$ and $G_S$ are nonnegative matrices (in the componentwise sense). It follows from the well-known Perron-Frobenius theorem that $\rho(G_S)$ is an eigenvalue of $G_S$. Let $\rho(G_S) = \lambda \le 1$; then there exists a vector $E \ge 0$, $E \ne 0$, such that $G_S E = \lambda E$. Hence

$$(\lambda L + D + U) E = \lambda E.$$

Since $G_J \ge \lambda L + D + U$, we conclude that

$$\rho(G_J) \ge \rho(\lambda L + D + U) \ge \lambda.$$

The second and the last assertions of the theorem are direct consequences of Lemma 2 and the Perron-Frobenius theorem. ∎

REMARK. The numerical results suggest that $\alpha^* = 0$.

LEMMA 3.

(i) $\max_{1 \le j \le N^+} g_j(x) = g_{N^+}(x)$ and $\max_{1 \le i \le N^-} f_i(x) = f_{N^-}(x)$.

(ii) $\sum_{k=1}^{N^-} d_k^- g_j(x_k^-)$ and $\sum_{k=1}^{N^+} d_k^+ f_i(x_k^+)$ converge up to $\int_{-1}^{1} g_j(x)\, dx$ and $\int_{-1}^{1} f_i(x)\, dx$, respectively, as $N^\pm \to \infty$.

(iii) $\sup_{1 \le N^+ < \infty} \max_{1 \le j \le N^+} \int_{-1}^{1} g_j(x)\, dx = \lim_{j \to N^+,\, N^+ \to \infty} \int_{-1}^{1} g_j(x)\, dx = 2(1 - \alpha) \ln \frac{2}{1 - \alpha} =: G(\alpha)$.

(iv) $\sup_{1 \le N^- < \infty} \max_{1 \le i \le N^-} \int_{-1}^{1} f_i(x)\, dx = \lim_{i \to N^-,\, N^- \to \infty} \int_{-1}^{1} f_i(x)\, dx = 2(1 + \alpha) \ln \frac{2}{1 + \alpha} =: F(\alpha)$.

PROOF. A direct calculation gives the assertion of Lemma 3(i). To see (ii), we note (see, e.g., [3, (5.3.29)]) that

$$E_N(h) := \int_{-1}^{1} h(x)\, dx - \sum_{k=1}^{N} d_k h(x_k) = \frac{2^{2N+1}(N!)^4}{(2N+1)[(2N)!]^2} \cdot \frac{h^{(2N)}(\eta)}{(2N)!}, \qquad \eta \in (-1, 1).$$

The assertion of Lemma 3(ii) now follows from the above error formula. The proofs of (iii) and (iv) are similar; we illustrate only (iv). Let $y_i = (1 + \alpha)(1 + x_i^-)$. Then

$$\int_{-1}^{1} f_i(x)\, dx = (1 + \alpha)(1 + x_i^-) \ln \frac{(1 + \alpha)x_i^- + (3 - \alpha)}{(1 + \alpha)(1 + x_i^-)} = y_i \ln \frac{y_i + 2 - 2\alpha}{y_i} =: H(y_i).$$

Now,

$$\frac{dH}{dy_i} = \ln(y_i + 2 - 2\alpha) - \ln y_i - \frac{2 - 2\alpha}{y_i + 2 - 2\alpha}$$

and

$$\frac{d^2 H}{dy_i^2} = -\frac{(2 - 2\alpha)^2}{(y_i + 2 - 2\alpha)^2\, y_i} \le 0.$$

Since $-1 < x_i^- < 1$, we have $0 < y_i < 2(1 + \alpha)$. Therefore, the minimum of $dH/dy_i$ occurs at $y_i = 2(1 + \alpha)$. Hence $dH/dy_i \ge dH/dy_i|_{y_i = 2(1+\alpha)} = \ln 2 - \ln(1 + \alpha) - (1 - \alpha)/2 \ge 0$. Therefore, $H$ is increasing in $x_i^-$. Moreover, the $x_i^-$ are zeros of a Legendre polynomial, so that $x_{N^-}^- \to 1$ as $N^- \to \infty$. Consequently,

$$\sup_{1 \le N^- < \infty} \max_{1 \le i \le N^-} \int_{-1}^{1} f_i(x)\, dx = \sup_{1 \le N^- < \infty} H\bigl((1 + \alpha)(1 + x_{N^-}^-)\bigr) = H\bigl(2(1 + \alpha)\bigr) = 2(1 + \alpha) \ln \frac{2}{1 + \alpha}. \quad \blacksquare$$
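The closed forms $G(\alpha)$ and $F(\alpha)$ of Lemma 3(iii)-(iv) can be sanity-checked numerically. The sketch below (an illustrative check, not from the paper) integrates the limiting integrands, $g(x) = 2(1-\alpha^2)/(3 - \alpha + x(1+\alpha))$ and $f(x) = 2(1-\alpha^2)/(3 + \alpha + x(1-\alpha))$, obtained from $g_j$ and $f_i$ by letting $x_j^+ \to 1$ and $x_i^- \to 1$, with a midpoint rule.

```python
# Midpoint-rule check of the limits G(alpha) and F(alpha) in Lemma 3.
import math

def G(a):
    return 2 * (1 - a) * math.log(2 / (1 - a))

def F(a):
    return 2 * (1 + a) * math.log(2 / (1 + a))

def midpoint(h, n=100000):
    # composite midpoint rule over [-1, 1] with n subintervals
    s = 0.0
    for k in range(n):
        t = -1 + (2 * k + 1) / n      # midpoint of the k-th subinterval
        s += h(t) * (2 / n)
    return s

for a in (0.0, 0.3, 0.7):
    g_lim = lambda t, a=a: 2 * (1 - a * a) / (3 - a + t * (1 + a))
    f_lim = lambda t, a=a: 2 * (1 - a * a) / (3 + a + t * (1 - a))
    assert abs(midpoint(g_lim) - G(a)) < 1e-8
    assert abs(midpoint(f_lim) - F(a)) < 1e-8
```

At $\alpha = 0$ both limits collapse to $2 \ln 2$, consistent with $F(0) = G(0)$.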

In the following, we obtain upper and lower bounds for $CGJ(c, \alpha, N^\pm)$. Using (22), Lemma 2(ii), and Lemma 3(i), we see immediately that

$$e_{ij}^{(p+1)} \le \frac{c}{4} \sum_{k=1}^{N^-} d_k^-\, g_{N^+}(x_k^-)\, \bar{f}_{N^-}\, e_{kj}^{(p)} + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+\, f_{N^-}(x_k^+)\, \bar{g}_{N^+}\, e_{ik}^{(p)}. \tag{24}$$

Let the rank-one matrices $M_2$ and $N_2$ be defined by $M_2 = \frac{c}{4}\left(\bar{f}_{N^-}\, d_l^-\, g_{N^+}(x_l^-)\right)_{N^- \times N^-}$ and $N_2 = \frac{c}{4}\left(\bar{g}_{N^+}\, d_l^+\, f_{N^-}(x_l^+)\right)_{N^+ \times N^+}$; then (24) takes the following form:

$$E^{(p+1)} \le \bar{G} E^{(p)},$$

where $\bar{G} = M_2 \otimes I + I \otimes N_2$. The notation $\otimes$ denotes the Kronecker product (see, e.g., [1, p. 235]). Noting that $M_2$ and $N_2$ are matrices of rank one, we conclude that

$$\rho(M_2) = \frac{c}{4} \left[\sum_{k=1}^{N^-} d_k^- g_{N^+}(x_k^-)\right] \left[1 + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+ f_{N^-}(x_k^+)\, R_{N^- k}^{(\infty)}\right] \tag{25a}$$

and

$$\rho(N_2) = \frac{c}{4} \left[\sum_{k=1}^{N^+} d_k^+ f_{N^-}(x_k^+)\right] \left[1 + \frac{c}{4} \sum_{k=1}^{N^-} d_k^- g_{N^+}(x_k^-)\, R_{k N^+}^{(\infty)}\right]. \tag{25b}$$

It follows from the fact $0 \le G_J \le \bar{G}$ that $CGJ(c, \alpha, N^\pm) \le \rho(M_2) + \rho(N_2)$. As $N^\pm \to \infty$, $\rho(M_2) + \rho(N_2)$ approaches

$$U(c, \alpha, \infty) := \frac{c}{4}\left[F(\alpha) + G(\alpha)\right] + \frac{c^2(1 - \alpha^2)}{8} \left[G(\alpha) \int_{-1}^{1} \frac{R(1, y')}{3 + \alpha + y'(1 - \alpha)}\, dy' + F(\alpha) \int_{-1}^{1} \frac{R(x', 1)}{3 - \alpha + x'(1 + \alpha)}\, dx'\right]. \tag{26}$$

Here $F(\alpha)$ and $G(\alpha)$ are defined as in Lemma 3, and $R(x, y)$ satisfies the following nonlinear integral equation:

$$R(x, y) = \left(1 + \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{(1 + y)\, R(x', y)}{2 + y(1 - \alpha) + x'(1 + \alpha)}\, dx'\right) \left(1 + \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{(1 + x)\, R(x, y')}{2 + x(1 + \alpha) + y'(1 - \alpha)}\, dy'\right). \tag{27}$$

Noting that $\max_{-1 \le x, y \le 1} R(x, y) = R(1, 1)$, we see that (26) is bounded above by

$$\frac{c}{4}\left[G(\alpha) + F(\alpha) + \frac{c\, R(1, 1)\, G(\alpha) F(\alpha)}{2}\right]. \tag{28}$$

Similarly, replacing $M_2$ and $N_2$ by $M_1 = \frac{c}{4}\left(\bar{f}_1\, d_l^-\, g_1(x_l^-)\right)$ and $N_1 = \frac{c}{4}\left(\bar{g}_1\, d_l^+\, f_1(x_l^+)\right)$, respectively, one concludes that

$$\rho(M_1) + \rho(N_1) \le CGJ(c, \alpha, N^\pm).$$

Here

$$\rho(M_1) = \frac{c}{4} \left[\sum_{k=1}^{N^-} d_k^- g_1(x_k^-)\right] \left[1 + \frac{c}{4} \sum_{k=1}^{N^+} d_k^+ f_1(x_k^+)\, R_{1k}^{(\infty)}\right] \tag{29a}$$

and

$$\rho(N_1) = \frac{c}{4} \left[\sum_{k=1}^{N^+} d_k^+ f_1(x_k^+)\right] \left[1 + \frac{c}{4} \sum_{k=1}^{N^-} d_k^- g_1(x_k^-)\, R_{k1}^{(\infty)}\right]. \tag{29b}$$

We summarize the above results in the following theorem.

THEOREM 2.

(i) $\rho(M_1) + \rho(N_1) \le CGJ(c, \alpha, N^\pm) \le \rho(M_2) + \rho(N_2)$, where $\rho(M_2)$, $\rho(N_2)$ and $\rho(M_1)$, $\rho(N_1)$ are defined in (25a), (25b) and (29a), (29b), respectively.

(ii) $CGJ(c, \alpha, \infty) \le U(c, \alpha, \infty) \le \frac{c}{4}\left[G(\alpha) + F(\alpha) + c\, R(1, 1)\, F(\alpha) G(\alpha)/2\right]$, where $U(c, \alpha, \infty)$ is defined as in (26).

REMARKS

(1) For $N^\pm = 1$, $c = 1$, and $\alpha = 0$, (17) reduces to

$$R^{(p+1)} = \left(1 + \tfrac{1}{4} R^{(p)}\right)^2, \tag{30a}$$
$$R^{(0)} = 0. \tag{30b}$$

A simple calculation shows that $\{R^{(p)}\}_{p=0}^{\infty}$ converges extremely slowly to the exact solution $R = 4$. In this special case $\rho(M_1) = \rho(N_1) = \rho(M_2) = \rho(N_2) = 1/2$, and hence $CGJ(1, 0, 1) = 1$; this, indeed, suggests that the entire scheme almost stalls.

(2) When $c \approx 1$ and $\alpha \approx 0$, the upper bound for $CGJ(c, \alpha, N^\pm)$, $N^\pm = 1, 2, \ldots, \infty$, is not as good as when $c \approx 0$ and $\alpha \approx 1$.

(3) If the convergence rate analysis were to proceed without using the transformation (14), we would accordingly obtain $f_i(w_k^+) = 1/(w_k^+ - \alpha)$ and $g_j(w_k^-) = 1/(w_k^- + \alpha)$, neither of which is even integrable over $[\alpha, 1]$ and $[-\alpha, 1]$, respectively.
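Remark (1) can be seen directly. The sketch below iterates (30) and watches the error against the exact solution $R = 4$: the ratio of successive errors creeps up toward 1, which is exactly the stalling behavior that $CGJ(1, 0, 1) = 1$ predicts.

```python
# The special case (30): N = 1, c = 1, alpha = 0. The iteration
# R^(p+1) = (1 + R^(p)/4)^2 converges to R = 4, but sublinearly.
R = 0.0                       # (30b)
prev_err = 4.0                # initial error |4 - R^(0)|
ratios = []
for p in range(2000):
    R = (1 + R / 4) ** 2      # (30a)
    err = 4.0 - R
    ratios.append(err / prev_err)
    prev_err = err

assert 0 < R < 4              # monotone approach from below, never reaching 4
assert ratios[-1] > 0.99      # successive error ratios approach 1: the scheme stalls
```

Writing $R^{(p)} = 4 - e_p$ gives $e_{p+1} = e_p(1 - e_p/16)$, so the error ratio $1 - e_p/16 \to 1$ and $e_p$ decays only like $O(1/p)$.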

We shall next consider the asymptotic rate of convergence of the GJ method as $N^+ = N^- \to \infty$. Consider the following iteration:

$$R^{(p+1)}(x, y) = \left[1 + \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{(1 + y)\, R^{(p)}(x', y)}{2 + y(1 - \alpha) + x'(1 + \alpha)}\, dx'\right] \left[1 + \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{(1 + x)\, R^{(p)}(x, y')}{2 + x(1 + \alpha) + y'(1 - \alpha)}\, dy'\right] =: \bar{U}_J(x, y, p),$$

$$R^{(0)}(x, y) = 1.$$

Suppose (27) has a positive solution $R(x, y)$. Then the monotonically increasing sequence $\{R^{(p)}(x, y)\}_{p=0}^{\infty}$ has the upper bound $R(x, y)$. Therefore, it converges.

Let $e^{(p)}(x, y) := R^{(\infty)}(x, y) - R^{(p)}(x, y)$, where $R^{(\infty)}(x, y)$ denotes the limit of $\{R^{(p)}(x, y)\}_{p=0}^{\infty}$. Then a procedure similar to that leading to (22) gives

$$e^{(p+1)}(x, y) \le \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{1 + y}{2 + y(1 - \alpha) + x'(1 + \alpha)} \left[1 + \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{(1 + x)\, R^{(\infty)}(x, y')}{2 + x(1 + \alpha) + y'(1 - \alpha)}\, dy'\right] e^{(p)}(x', y)\, dx'$$
$$+ \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{1 + x}{2 + x(1 + \alpha) + y'(1 - \alpha)} \left[1 + \frac{c(1 - \alpha^2)}{4} \int_{-1}^{1} \frac{(1 + y)\, R^{(\infty)}(x', y)}{2 + y(1 - \alpha) + x'(1 + \alpha)}\, dx'\right] e^{(p)}(x, y')\, dy' =: (\hat{G}_J e^{(p)})(x, y).$$

Following the standard technique (see, e.g., [11, Theorem 8.7-5]), one can show that $\hat{G}_J: X \to X$, where $X = C([-1, 1] \times [-1, 1])$, is a linear compact integral operator. Moreover, calculations similar to those in Lemma 3 give

$$\|\hat{G}_J\| \le U(c, \alpha, \infty).$$

REMARK. We would expect that the spectral radius of $G_J$, as $N$ gets larger, would be a good approximation of that of $\hat{G}_J$.

We shall conclude this section with the following example. Let $N^- = N^+ = 2$ and $\alpha = 0$, so that $d_1^\pm = d_2^\pm = 1$, $f_i = g_i$, and, by the symmetry of Lemma 2(iv), $\bar{f}_i = \bar{g}_i$ for $i = 1, 2$. Then

$$G_J = \frac{c}{4} \begin{bmatrix} D_{11} + B_1 & D_{12} \\ D_{21} & D_{22} + B_2 \end{bmatrix},$$

with $D_{ik}$ and $B_i$ as above. By noting that, for $\alpha = 0$,

$$g_j(x_i) + g_i(x_j) = \frac{(1 + x_j) + (1 + x_i)}{2 + x_i + x_j} = 1 \quad \text{for all } i, j,$$

we conclude that each of the column sums of $G_J$ is equal to $\frac{c}{4}(\bar{f}_1 + \bar{f}_2)$, and hence $\rho(G_J) = \frac{c}{4}(\bar{f}_1 + \bar{f}_2)$.
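The equal-column-sum property of this $N = 2$ example can be verified numerically. The sketch below (an illustrative check against our reconstruction of the example, with $c = 0.8$ an assumed value) computes $R^{(\infty)}$ by the sweep (17), forms $\bar{f}_i$ and $\bar{g}_j$, assembles the $4 \times 4$ error-propagation matrix, and checks that all four column sums agree.

```python
# Column sums of the error-propagation matrix for N^- = N^+ = 2, alpha = 0.
import math

c, N = 0.8, 2
x = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]   # 2-point Gauss nodes
d = [1.0, 1.0]

def g(j, xv):
    return (1 + x[j]) / (2 + x[j] + xv)   # g_j(x) with alpha = 0
f = g                                      # f_i and g_i coincide when alpha = 0

# minimal positive solution R^(inf), via the monotone sweep (17)
R = [[1.0] * N for _ in range(N)]
for _ in range(500):
    R = [[(1 + c / 4 * sum(d[k] * g(j, x[k]) * R[k][j] for k in range(N)))
          * (1 + c / 4 * sum(d[k] * f(i, x[k]) * R[i][k] for k in range(N)))
          for j in range(N)] for i in range(N)]

fbar = [1 + c / 4 * sum(d[k] * f(i, x[k]) * R[i][k] for k in range(N)) for i in range(N)]
gbar = [1 + c / 4 * sum(d[k] * g(j, x[k]) * R[k][j] for k in range(N)) for j in range(N)]
assert all(abs(fbar[i] - gbar[i]) < 1e-9 for i in range(N))   # fbar_i = gbar_i

# assemble the 4x4 matrix; rows/columns are indexed by (i, j) -> 2*i + j
G = [[0.0] * 4 for _ in range(4)]
for i in range(N):
    for j in range(N):
        for k in range(N):   # coefficient of e_kj
            G[2 * i + j][2 * k + j] += c / 4 * d[k] * g(j, x[k]) * fbar[i]
        for l in range(N):   # coefficient of e_il
            G[2 * i + j][2 * i + l] += c / 4 * d[l] * f(i, x[l]) * gbar[j]

col_sums = [sum(G[r][s] for r in range(4)) for s in range(4)]
target = c / 4 * (fbar[0] + fbar[1])
assert all(abs(s - target) < 1e-9 for s in col_sums)   # equal column sums = rho(G_J)
```

Since the matrix is nonnegative with all column sums equal, that common column sum is its spectral radius.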

4. NUMERICAL EXAMPLES AND CONCLUDING REMARKS

We provide the following tables:

TABLE 1
α = 0.1 AND N⁻ = N⁺ = 3

c        GJ    GS   JOR/ω      SOR/ω     UB     LB     CGJ    CGS
c = 0.1   10     9    8/1.03    7/1.02   0.069  0.025  0.051  0.028
c = 0.2   13    11   10/1.07    9/1.03   0.143  0.051  0.105  0.059
c = 0.3   16    13   11/1.11   10/1.07   0.223  0.078  0.162  0.094
c = 0.4   19    15   13/1.15   11/1.09   0.312  0.106  0.224  0.138
c = 0.5   22    18   15/1.20   12/1.13   0.411  0.134  0.290  0.189
c = 0.6   27    21   18/1.24   14/1.17   0.525  0.164  0.364  0.251
c = 0.7   34    26   22/1.29   16/1.25   0.659  0.196  0.448  0.329
c = 0.8   44    33   27/1.40   20/1.32   0.826  0.229  0.546  0.428
c = 0.9   66    48   38/1.53   26/1.50   1.060  0.266  0.673  0.570
c = 1.0  263   186  140/1.84   88/1.99   1.584  0.312  0.913  0.877

TABLE 2
c = 0.9 AND N = 3

α        GJ    GS   JOR/ω      SOR/ω     UB     LB     CGJ    CGS
α = 0.0   68    50   40/1.52   28/1.49   1.081  0.269  0.684  0.582
α = 0.1   66    48   38/1.53   26/1.50   1.060  0.266  0.673  0.570
α = 0.2   59    44   36/1.47   24/1.46   1.001  0.258  0.644  0.536
α = 0.3   51    38   31/1.45   22/1.40   0.915  0.244  0.600  0.486
α = 0.4   44    33   27/1.39   20/1.32   0.810  0.225  0.544  0.427
α = 0.5   37    28   24/1.30   17/1.28   0.695  0.202  0.480  0.363
α = 0.6   30    24   20/1.26   15/1.22   0.573  0.174  0.409  0.296
α = 0.7   25    20   17/1.20   13/1.17   0.446  0.141  0.331  0.230
α = 0.8   20    16   14/1.14   12/1.10   0.313  0.102  0.242  0.164
α = 0.9   14    13   11/1.08   10/1.05   0.169  0.056  0.138  0.092

TABLE 3
c = 0.9 AND α = 0.1

Dim.     GJ    GS   JOR/ω      SOR/ω     UB     LB     CGJ    CGS
N = 2    65    52   38/1.53   26/1.65   0.991  0.409  0.673  0.599
N = 3    66    48   39/1.53   26/1.50   1.060  0.266  0.673  0.570
N = 4    66    46   39/1.53   26/1.46   1.088  0.187  0.673  0.552
N = 5    66    44   39/1.53   26/1.42   1.102  0.140  0.673  0.539
N = 6    66    43   39/1.53   26/1.40   1.110  0.109  0.673  0.530

Here the GJ column denotes the number of iterates necessary to solve (13) to within the prescribed error by the Gauss-Jacobi method; likewise for GS, JOR, and SOR. The stopping criterion for all the iterative processes is

$$\max_{i,j} \left|R_{ij}^{(m+1)} - R_{ij}^{(m)}\right| < 10^{-11}.$$

The optimal $\omega$ in the fourth and fifth columns was obtained by trial and error. The UB and LB columns give the upper and lower bounds for the convergence rate of the GJ method provided by Theorem 2(i). The eighth and ninth columns give the numerical convergence rates of the GJ and GS methods, estimated by the quantity $\max_{i,j} |R_{ij}^{(m+1)} - R_{ij}^{(m)}| / |R_{ij}^{(m)} - R_{ij}^{(m-1)}|$.

Based upon the results presented here, the following related matters would appear to warrant further investigation:

(1) It appears numerically that the smaller $\alpha$ is, the slower the convergence. However, we are unable to prove this at this point.

(2) From Table 3, it appears that the choice of the optimal $\omega$ is very insensitive to the dimension of the matrix. The advantage of such an observation, if confirmed, is apparent.

(3) Although our examples deal only with small dimensions, (1) or (13) is indeed large, dense, and nonlinear, and the convergence rates of the GJ, GS, JOR, and SOR methods are not satisfactory for $c \approx 1$ and $\alpha \approx 0$. Therefore, how to accelerate the iteration, and/or how multigrid methods can be brought in, is of great interest.

(4) It would also be interesting to analyze the errors produced by the discretization of the continuous model, such as (27), together with those created by the iterative procedures.

The first author (J.J.) wishes to thank Mr. A. D. Lin for providing the numerical data.

REFERENCES

1. R. Bellman and G. M. Wing, An Introduction to Invariant Imbedding, Wiley, New York, 1975.
2. G. M. Wing, An Introduction to Transport Theory, Wiley, New York, 1962.
3. K. E. Atkinson, An Introduction to Numerical Analysis, 2nd ed., Wiley, New York, 1987.
4. F. Coron, Computation of the asymptotic states for linear half space kinetic problem, Trans. Theoret. Statist. Phys. 19(2):89-114 (1990).
5. B. D. Ganapol, An investigation of a simple transport model, Trans. Theoret. Statist. Phys. 21(12):1-37 (1992).
6. A. Shimizu and K. Aoki, Application of Invariant Embedding to Reactor Physics, Academic Press, New York, 1972.
7. J. Juang and Z. T. Lin, Convergence of an iterative technique for algebraic matrix Riccati equations and applications to transport theory, Trans. Theoret. Statist. Phys. 21(12):87-100 (1992).
8. P. Nelson, Convergence of a certain monotone iteration in the reflection matrix for a nonmultiplying half-space, Trans. Theoret. Statist. Phys. 13:97-106 (1984).
9. J. Juang and I. D. Chen, Iterative solution for a certain class of algebraic matrix Riccati equations arising in transport theory, Trans. Theoret. Statist. Phys. 22(1):65-80 (1993).
10. J. Juang, Existence of algebraic matrix Riccati equations arising in transport theory, Linear Algebra Appl., in press.
11. P. R. Halmos, A Hilbert Space Problem Book, 2nd ed., Springer-Verlag, New York, 1980.
