Spectral Representations of the Transition Probability Matrices for Continuous Time Finite Markov Chains

Author(s): Nan Fu Peng

Source: Journal of Applied Probability, Vol. 33, No. 1 (Mar., 1996), pp. 28-33. Published by: Applied Probability Trust.

Stable URL: http://www.jstor.org/stable/3215261



J. Appl. Prob. 33, 28-33 (1996). Printed in Israel. © Applied Probability Trust 1996

SPECTRAL REPRESENTATIONS OF THE TRANSITION PROBABILITY MATRICES FOR CONTINUOUS TIME FINITE MARKOV CHAINS

NAN FU PENG,* National Chiao Tung University

Abstract

Using an easy linear-algebraic method, we obtain spectral representations, without the need for eigenvector determination, of the transition probability matrices for completely general continuous time Markov chains with finite state space. Comparing the proof presented here with that of Brown (1991), who provided a similar result for a special class of finite Markov chains, we observe that ours is more concise.

MARKOV CHAINS; TRANSITION PROBABILITY MATRICES; SPECTRAL REPRESENTATIONS

AMS 1991 SUBJECT CLASSIFICATION: PRIMARY 60J35; SECONDARY 60J27

1. Introduction

It is undoubtedly important to calculate numerically the time-dependent transition probabilities of continuous time Markov chains. We focus our attention on those with a finite state space. Keilson developed in his book [5] the methods of spectral decomposition and the uniformization technique. Ross [10] found the external uniformization; this was followed by related work such as [7] and [12]. Some results on finite queues can be found in [1], [8], [9] and [11]. Brown [2] gave spectral representations, without eigenvectors, of the transition probability matrices of finite continuous time Markov chains with diagonalizable infinitesimal matrices (see also Theorem 5 of [3]). Here we present an easy linear-algebraic technique which enables us to extend the result of [2] to completely general continuous time Markov chains with finite state space. The method used in this paper is also more concise and efficient than that of [2].

2. A simple linear-algebraic method

Consider a Markov chain (X(t)) defined on a finite state space {0, 1, 2, ..., N}. Denote by \lambda_0 = 0, \lambda_1, \ldots, \lambda_N (maybe complex) the eigenvalues of its infinitesimal matrix Q. It is well known [5] that the transition probability matrix P(t) of X(t) is

Received 15 July 1994; revision received 19 December 1994.

* Postal address: Institute of Statistics, National Chiao Tung University, 1001 TA Hsueh Road, Hsinchu, Taiwan.


(1) P(t) = e^{Qt} = \sum_{n=0}^{\infty} \frac{(Qt)^n}{n!}.

Obviously, (1) implies the following:

(2) P(0) = I \quad \text{and} \quad \frac{d^n P(t)}{dt^n}\Big|_{t=0} = Q^n, \quad \forall n \ge 1.

If P(t) is a transition function or, more generally, sufficiently smooth, then (2) implies (1); hence we obtain the equivalence of (1) and (2). The linear algebra used below can be found in many textbooks, e.g. [4].
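As a quick illustration (not part of the original paper), the truncated power series in (1) is easy to evaluate numerically; the 2-state generator below is a hypothetical example chosen only for the demo.

```python
# Illustration only: approximate P(t) = exp(Qt) by truncating the power
# series in (1). The 2-state generator Q below is a hypothetical example.
import numpy as np

def expm_series(Q, t, terms=60):
    """Sum (Qt)^n / n! for n = 0, ..., terms-1."""
    A = Q * t
    result = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        result = result + term
    return result

# Two-state chain: leaves state 0 at rate 2, leaves state 1 at rate 1.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
P = expm_series(Q, 0.5)
print(P.sum(axis=1))  # rows of a transition probability matrix sum to 1
```

For this two-state chain the entry P_{01}(t) has the well-known closed form (2/3)(1 - e^{-3t}), which the truncated series reproduces to machine precision.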

Lemma 1. Let A and B be two complex n \times n matrices and \{\alpha_1, \ldots, \alpha_n\} be any basis of \mathbb{C}^n. Then A\alpha_i = B\alpha_i for all i implies A = B.

Although Theorem 1 is a special case of Theorem 3 below, it is worth listing the proof here for comparison with that of Theorem 3 and that of [2].

Theorem 1. If the \lambda_i are all distinct, then

(3) P(t) = \prod_{i=1}^{N} (I - Q/\lambda_i) + \sum_{m=1}^{N} (Q/\lambda_m) \prod_{i \ne m,\, i \ne 0} \frac{I - Q/\lambda_i}{1 - \lambda_m/\lambda_i} \, e^{\lambda_m t}.

Proof. Call the right-hand side of (3) \tilde{P}(t). It is easy to see that, for m = 0, 1, \ldots, N,

\frac{d^n \tilde{P}(t)}{dt^n}\Big|_{t=0} x_m = \lambda_m^n x_m = Q^n x_m, \quad n = 0, 1, 2, \ldots,

where x_m is an eigenvector associated with the eigenvalue \lambda_m. Since the \lambda_m are all distinct, the x_m form a basis of \mathbb{C}^{N+1}. \tilde{P}(t) is obviously smooth; hence we obtain (3) from the fact that (2) implies (1) and Lemma 1.
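The representation (3) can be evaluated directly. The sketch below (an illustration, not from the paper) does so for a hypothetical cyclic 4-state generator with distinct eigenvalues and compares the result with the exponential series; eigenvalues come from np.linalg.eigvals, but no eigenvectors are computed, in the spirit of the theorem.

```python
# Illustration (not from the paper): evaluate the right-hand side of (3)
# for a generator with distinct eigenvalues and compare with exp(Qt).
# Eigenvalues come from np.linalg.eigvals; no eigenvectors are computed.
import numpy as np

def expm_series(Q, t, terms=80):
    out, term = np.eye(Q.shape[0]), np.eye(Q.shape[0])
    for n in range(1, terms):
        term = term @ (Q * t) / n
        out = out + term
    return out

def spectral_P(Q, t):
    lam = np.linalg.eigvals(Q.astype(complex))
    lam = lam[np.argsort(np.abs(lam))]    # lambda_0 = 0 comes first
    n = Q.shape[0]
    I = np.eye(n, dtype=complex)
    P = I.copy()
    for i in range(1, n):                 # first product in (3)
        P = P @ (I - Q / lam[i])
    for m in range(1, n):                 # sum over nonzero eigenvalues
        term = Q / lam[m]
        for i in range(1, n):
            if i != m:
                term = term @ (I - Q / lam[i]) / (1 - lam[m] / lam[i])
        P = P + term * np.exp(lam[m] * t)
    return P.real

# Hypothetical cyclic 4-state generator; eigenvalues 0, -1+1j, -2, -1-1j.
Q = np.roll(np.eye(4), 1, axis=1) - np.eye(4)
print(np.max(np.abs(spectral_P(Q, 0.7) - expm_series(Q, 0.7))))
```

The maximum entrywise discrepancy is at the level of rounding error, even though the eigenvalues here are complex.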

The above proof gives us a natural extension of Theorem 1 to Theorem 2 below. We allow repeated eigenvalues here, and relabel the distinct values \lambda_0 = 0, \lambda_1, \ldots, \lambda_M.

Theorem 2. If the minimal polynomial of Q is of the form

g(x) = x \prod_{i=1}^{M} (x - \lambda_i), \quad M < N,

with distinct \lambda_0 = 0, \lambda_1, \ldots, \lambda_M, then P(t) is of the form (3) with N replaced by M.

The next corollary also appeared in [2].

Corollary 1. If (X(t)) is a finite birth and death process, then P(t) is of the form (3).


Proof. The infinitesimal matrix Q of (X(t)) is tridiagonal, and it is shown in [2] that its eigenvalues are real and distinct.

The following example makes Theorem 1 more plausible.

Example 1. Consider a continuous time Markov chain having state space {0, 1, 2, 3} and starting from state 0 with infinitesimal matrix

Q = \begin{pmatrix} -\lambda & \lambda & 0 & 0 \\ 0 & -\lambda & \lambda & 0 \\ 0 & 0 & -\lambda & \lambda \\ \lambda & 0 & 0 & -\lambda \end{pmatrix},

with rows and columns indexed by the states 0, 1, 2, 3.

A simple argument shows that

(4) P_{03}(t) = \sum_{n=1}^{\infty} P(T = 4n-1) = e^{-\lambda t} \sum_{n=1}^{\infty} \frac{(\lambda t)^{4n-1}}{(4n-1)!},

where T is a random variable distributed as Poisson(\lambda t). In a similar fashion, we have

(5) P_{12}(t) = e^{-\lambda t} \sum_{n=1}^{\infty} \frac{(\lambda t)^{4n-3}}{(4n-3)!}.

Alternatively, observing that 0, -2\lambda, -\lambda + i\lambda and -\lambda - i\lambda are the eigenvalues of Q, we obtain from (3) that

(6) P_{03}(t) = \frac{e^{-\lambda t}}{4} \left[ e^{\lambda t} - e^{-\lambda t} - 2\sin(\lambda t) \right]

and

(7) P_{12}(t) = \frac{e^{-\lambda t}}{4} \left[ e^{\lambda t} - e^{-\lambda t} + 2\sin(\lambda t) \right].
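The agreement between the series form and the closed form e^{-\lambda t}[e^{\lambda t} - e^{-\lambda t} - 2\sin(\lambda t)]/4 is easy to confirm numerically; the sketch below is an illustration only, with the arbitrary choices \lambda = 1 and t = 0.8.

```python
# Illustration: the Poisson series for P_03(t) and the closed form agree.
# lambda = 1 and t = 0.8 are arbitrary choices for the demo.
import math

x = 1.0 * 0.8   # lambda * t
series = math.exp(-x) * sum(x**(4*n - 1) / math.factorial(4*n - 1)
                            for n in range(1, 30))
closed = (math.exp(-x) / 4) * (math.exp(x) - math.exp(-x) - 2 * math.sin(x))
print(abs(series - closed))  # negligible difference
```

The sum of x^{4n-1}/(4n-1)! over n is (\sinh x - \sin x)/2, which is exactly what the bracketed closed form encodes.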

By introducing the Taylor expansions of the terms in the brackets on the right-hand sides of (6) and (7), we obtain the equivalence of (6) and (7) to (4) and (5) respectively.

3. The general result

A matrix Q is defined to be lower semitriangular if Q_{ij} = 0 for j > i + 1. It was claimed in Theorem 1.2 of [6] that, if Q is lower semitriangular with Q_{i,i+1} \ne 0 for all i, then its eigenvalues are distinct but may be complex. This statement is incorrect, as the next simple counterexample shows.

Example 2. Let the matrix Q be

Q = \begin{pmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 1 & 0 & -1 \end{pmatrix},

with rows and columns indexed by the states 0, 1, 2. The eigenvalues of Q are 0, -2 and -2. Neither Theorem 1 nor Theorem 2 can be applied in this case.


Fortunately, Theorem 3 below deals with general Q and provides us with a way to settle the problem. Several lemmas are needed in order to prove that theorem.

Lemma 2.

\frac{d^n (t^k e^{\lambda t})}{dt^n}\Big|_{t=0} = \begin{cases} 0 & n < k, \\ k! & n = k, \\ \binom{n}{k} k! \, \lambda^{n-k} & n > k. \end{cases}

Proof. By the product rule of derivatives, it is easy to see that if f(t) and g(t) are continuously differentiable functions,

(8) (fg)^{(n)} = \sum_{i=0}^{n} \binom{n}{i} f^{(i)} g^{(n-i)}.

We immediately obtain the lemma by letting f(t) = t^k and g(t) = e^{\lambda t}.
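Lemma 2 can also be sanity-checked in exact arithmetic (an illustration, not part of the paper): the n-th derivative of t^k e^{\lambda t} at 0 is n! times the t^n coefficient of the Taylor series \sum_j \lambda^j t^{j+k}/j!, and only the j = n - k term contributes.

```python
# Exact-arithmetic check of Lemma 2 (illustration only): the n-th
# derivative of t^k e^{lam t} at 0 equals n! times the t^n Taylor
# coefficient of sum_j lam^j t^{j+k} / j!.
from fractions import Fraction
from math import comb, factorial

def deriv_at_zero(k, n, lam):
    if n < k:
        return Fraction(0)           # no t^n term in the series
    j = n - k                        # only the j = n - k term contributes
    return Fraction(lam)**j * Fraction(factorial(n), factorial(j))

# The three cases of Lemma 2, with k = 3 or k = 2 and lam = 5 or 3.
print(deriv_at_zero(3, 2, 5))        # n < k: 0
print(deriv_at_zero(3, 3, 5))        # n = k: k! = 6
print(deriv_at_zero(2, 5, 3))        # n > k: C(5,2) * 2! * 3^3 = 540
```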

Lemma 3. For given M \ge 1, let

f(t) = \Big( 1 + \sum_{n=1}^{K} c_n t^n \Big) \prod_{m=1}^{M-1} (1 + t/a_m)^{d_m},

where the d_m are non-negative integers and a_m \ne 0 for m = 1, \ldots, M-1. Then f^{(n)}(0) = 0, n = 1, 2, \ldots, K, if and only if the c_n satisfy

(9) c_n = -\sum_{0 < i_1 + \cdots + i_{M-1} \le n} \prod_{m=1}^{M-1} \binom{d_m}{i_m} \Big( \frac{1}{a_m} \Big)^{i_m} c_{n - i_1 - \cdots - i_{M-1}}, \quad n = 1, 2, \ldots, K,

with the conventions that c_0 = 1 and the right-hand side of (9) is zero when M = 1.

Proof. A quick application of (8) shows, for M = 2, 3, \ldots and any f_1, \ldots, f_M,

\Big( \prod_{m=1}^{M} f_m \Big)^{(n)} = \sum_{0 \le i_1 + \cdots + i_{M-1} \le n} \frac{n!}{i_1! i_2! \cdots i_M!} \prod_{m=1}^{M} f_m^{(i_m)},

with i_M = n - i_1 - \cdots - i_{M-1}.


Applying this to f(t) above gives

\frac{d^n f(t)}{dt^n}\Big|_{t=0} = n! \Big( c_n + \sum_{0 < i_1 + \cdots + i_{M-1} \le n} \prod_{m=1}^{M-1} \binom{d_m}{i_m} \Big( \frac{1}{a_m} \Big)^{i_m} c_{n - i_1 - \cdots - i_{M-1}} \Big),

which vanishes for n = 1, 2, \ldots, K if and only if (9) holds.

Theorem 3. Let the minimal polynomial of Q be of the form f(x) = \prod_{i=0}^{M} (x - \lambda_i)^{d_i}, where the \lambda_i are distinct and d_i \ge 1. Then

(10) P(t) = \sum_{i=0}^{M} \sum_{j=0}^{d_i - 1} R(i,j) (Q - \lambda_i I)^j \frac{t^j}{j!} e^{\lambda_i t},

where

(11) R(i,j) = \prod_{m \ne i} \Big( \frac{Q - \lambda_m I}{\lambda_i - \lambda_m} \Big)^{d_m} \Big( I + \sum_{n=1}^{d_i - 1 - j} c_{i,n} (Q - \lambda_i I)^n \Big)

and the c_{i,n} are given by Lemma 3 with a_m = \lambda_i - \lambda_m:

c_{i,n} = -\sum_{0 < \sum_{m \ne i} k_m \le n} \prod_{m \ne i} \binom{d_m}{k_m} \Big( \frac{1}{\lambda_i - \lambda_m} \Big)^{k_m} c_{i,\, n - \sum_{m \ne i} k_m},

with c_{i,0} = 1.

Remark. It is easy to check that Theorem 3 reduces to Theorem 2 when d_i = 1 for i = 0, 1, \ldots, M.

Proof. Call the right-hand side of (10) \tilde{P}(t). Due to the fact that (Q - \lambda_m I) = (Q - \lambda_i I) + (\lambda_i - \lambda_m) I and Lemma 3, R(i,j)(Q - \lambda_i I)^j for 0 \le j < d_i can be written as

(12) R(i,j)(Q - \lambda_i I)^j = w_{d_i} (Q - \lambda_i I)^{d_i} + \cdots + w_D (Q - \lambda_i I)^D + (Q - \lambda_i I)^j,

where the w_p are complex scalars depending on i and j, and D = \sum_{m=0}^{M} d_m - 1.

With some algebra, Lemma 2 together with (10), (11) and (12) yield the following: \tilde{P}(0) x_i = x_i and

\frac{d^n \tilde{P}(t)}{dt^n}\Big|_{t=0} x_i = \sum_{m=0}^{n} \binom{n}{m} \lambda_i^{n-m} (Q - \lambda_i I)^m x_i = Q^n x_i, \quad n \ge 1,


where x_i belongs to the null space of (Q - \lambda_i I)^{d_i}. Note that (Q - \lambda_i I)^m x_i = 0 if m \ge d_i. Since these x_i form a basis for \mathbb{C}^{N+1} and \tilde{P}(t) is sufficiently smooth, Lemma 1 and the implication of (2) to (1) yield the desired result.

Remark. If the minimal polynomial is difficult to obtain, Theorem 3 still holds with the minimal polynomial replaced by the characteristic polynomial.

Corollary 2. If (X(t)) is ergodic, then

\pi' = \frac{1}{N+1} \, \mathbf{1}' \prod_{i=1}^{M} (I - Q/\lambda_i)^{d_i}

is the unique stationary vector of P(t), where \mathbf{1} is the vector with all entries equal to 1.

Proof. Since 0 \le P_{ii}(t) \le 1, the real part of each \lambda_k (k \ne 0) is strictly negative and d_0 = 1. Hence P(t) \to \prod_{i=1}^{M} (I - Q/\lambda_i)^{d_i} as t \to \infty. Since (X(t)) is ergodic, each row of \prod_{i=1}^{M} (I - Q/\lambda_i)^{d_i} is the unique stationary vector \pi'.

Note that irreducibility of (X(t)) implies ergodicity of (X(t)) [5].
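Corollary 2 can be illustrated numerically (a sketch, not from the paper) with a 3-state generator whose only nonzero eigenvalue -2 has multiplicity d = 2, matching the eigenvalue structure of Example 2.

```python
# Illustration of Corollary 2: pi' = (1/(N+1)) 1' prod (I - Q/lam_i)^{d_i}
# for a 3-state generator with eigenvalues 0, -2, -2 (so d_1 = 2).
import numpy as np

Q = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [1.0, 0.0, -1.0]])
n = Q.shape[0]                                   # N + 1 states
M = np.linalg.matrix_power(np.eye(n) - Q / (-2.0), 2)
pi = np.ones(n) @ M / n                          # average the rows of M
print(pi)                                        # the stationary vector
print(pi @ Q)                                    # ~ 0, confirming stationarity
```

Every row of the matrix product already equals the stationary vector; premultiplying by \mathbf{1}'/(N+1) simply averages the identical rows.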

Example 2 (Continued). The probability transition matrix P(t) corresponding to the infinitesimal matrix Q is

P(t) = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 1/4 & 1/4 \\ 1/2 & 1/4 & 1/4 \end{pmatrix} + \begin{pmatrix} 1/2 & -1/4 & -1/4 \\ -1/2 & 3/4 & -1/4 \\ -1/2 & -1/4 & 3/4 \end{pmatrix} e^{-2t} + \begin{pmatrix} 0 & 1/2 & -1/2 \\ 0 & -1/2 & 1/2 \\ 0 & -1/2 & 1/2 \end{pmatrix} t e^{-2t}.
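As a check (an illustration only, using the same 3-state generator with eigenvalues 0, -2, -2), the closed form A + B e^{-2t} + C t e^{-2t} above can be compared entry by entry with a truncated power series for e^{Qt}.

```python
# Illustration: the closed form A + B e^{-2t} + C t e^{-2t} matches exp(Qt)
# computed by truncated power series, for the 3-state Q of Example 2.
import numpy as np

Q = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [1.0, 0.0, -1.0]])
A = np.array([[0.5, 0.25, 0.25]] * 3)
B = np.array([[0.5, -0.25, -0.25],
              [-0.5, 0.75, -0.25],
              [-0.5, -0.25, 0.75]])
C = np.array([[0.0, 0.5, -0.5],
              [0.0, -0.5, 0.5],
              [0.0, -0.5, 0.5]])

def expm_series(M, terms=80):
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 0.9
closed = A + B * np.exp(-2*t) + C * t * np.exp(-2*t)
print(np.max(np.abs(closed - expm_series(Q * t))))  # ~ machine precision
```

Note the consistency checks A + B = I (so P(0) = I) and -2B + C = Q (so P'(0) = Q), which pin down the three coefficient matrices.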

Acknowledgment

I am very thankful to the referee for many helpful comments.

References

[1] ABATE, J., KIJIMA, M. AND WHITT, W. (1991) Decompositions of the M/M/1 transition function. Queueing Systems 9, 323-336.

[2] BROWN, M. (1991) Spectral analysis, without eigenvectors, for Markov chains. Prob. Eng. Inf. Sci. 5, 131-144.

[3] FILL, J. A. (1992) Strong stationary duality for continuous-time Markov chains. Part I: Theory. J. Theor. Prob. 5, 45-70.

[4] HOFFMAN, K. AND KUNZE, R. (1971) Linear Algebra. Prentice-Hall, Englewood Cliffs, NJ.

[5] KEILSON, J. (1979) Markov Chain Models - Rarity and Exponentiality. Springer, Berlin.

[6] KIJIMA, M. (1987) Spectral structure of the first-passage-time densities for classes of Markov chains. J. Appl. Prob. 24, 631-643.

[7] KIJIMA, M. (1992) A note on external uniformization for finite Markov chains in continuous time. Prob. Eng. Inf. Sci. 6, 127-131.

[8] KIJIMA, M. (1992) The transient solution to a class of Markovian queues. Comput. Math. Appl. 24, 17-24.

[9] PARTHASARATHY, P. R. AND SHARAFALI, M. (1989) Transient solution to the many server Poisson queue: a simple approach. J. Appl. Prob. 26, 584-594.

[10] ROSS, S. M. (1987) Approximating transition probabilities and mean occupation times in continuous-time Markov chains. Prob. Eng. Inf. Sci. 1, 251-264.

[11] SHARMA, O. P. AND DASS, S. (1988) Multiserver Markovian queues with finite waiting space. Sankhyā B 50, 428-431.

[12] YOON, B. S. AND SHANTHIKUMAR, J. G. (1989) Bounds and approximations for the transient behavior of continuous-time Markov chains. Prob. Eng. Inf. Sci. 3, 175-198.
