
AND THEIR EXPONENTIALS

MUNEYA MATSUI AND NARN-RUEIH SHIEH

Abstract. We present results on the second order behavior and the expected maximal increments of Lamperti transforms of self-similar Gaussian processes and their exponentials. The Ornstein-Uhlenbeck process driven by fractional Brownian motion (fBM) and its exponential have recently been studied in [20] and [21], where essential use is made of particular properties of fBM, e.g., the stationarity of its increments. Here the treated processes are fBM, bi-fBM and sub-fBM; the latter two do not have stationary increments. We utilize decompositions of self-similar Gaussian processes and effectively evaluate the maxima and correlations of each decomposed process. We also discuss the use of the exponential stationary processes in stochastic modelling.

1. Introduction

In this paper we consider stationary processes constructed from self-similar Gaussian processes, among which we especially focus on the fractional Brownian motion and its variants, namely the bi- and the sub- versions. For H ∈ (0, 1), a fractional Brownian motion (fBM) B_H := {B_H(t)}_{t∈R} is a centered Gaussian process with B_H(0) = 0 and

Cov(B_H(s), B_H(t)) = \frac{1}{2}\left(|t|^{2H} + |s|^{2H} − |t − s|^{2H}\right), (t, s) ∈ R^2.

It is well known that fBM has both stationary increments and self-similarity with index H, i.e., for any c > 0, {B_H(ct)}_{t∈R} \overset{d}{=} {c^H B_H(t)}_{t∈R}, where \overset{d}{=} denotes equality of all finite dimensional distributions. The fBM is the only self-similar Gaussian process with stationary increments, and a considerable number of theoretical studies have been devoted to it (see e.g. [11, 22]). These studies include the p-variation of its paths with p < H, and the long memory property of the increments for H ∈ (1/2, 1), which is often observed in real-life data.

For Brownian motion (H = 1/2), we may generate stationary processes in two ways: one is the stochastic integration of an exponential function with respect to Brownian motion, which yields the famous Ornstein-Uhlenbeck process, and the other is the Lamperti transform, introduced in the seminal paper [13]. These two transforms are well known to be law equivalent; that is, both resulting processes have the same finite dimensional distributions, deduced from the corresponding strictly stationary Gaussian process. However, when we replace BM by fBM in the construction, the authors of [10] proved that these two transforms produce different stationary Gaussian processes.

In general, these two transforms yield different stationary processes, which reflects the different focus of the constructions: the stationarity obtained by the Ornstein-Uhlenbeck (OU) transform is based on the stationary increments property, while the stationarity obtained by the Lamperti transform is based on the self-similarity. The OU processes driven by fBM have recently been studied in [20], and the research

Key words and phrases. Self-similarity; Fractional Brownian motion; Bi-fractional Brownian motion; Sub-fractional Brownian motion; Lamperti transforms; Exponential processes.

This work was initialized while M. Matsui visited National Taiwan University in August 2011, and was continued while N.-R. Shieh visited York University, Canada, in Fall 2011 and The Chinese University of Hong Kong in the springs of 2012 and 2013. The hospitality of these institutes is acknowledged. M. Matsui's research is partly supported by JSPS KAKENHI Grant Number 23800065 (Grant-in-Aid for Research Activity Start-up) and JSPS KAKENHI Grant Number 25870879 (Grant-in-Aid for Young Scientists (B)).



is continued in [21], where the continuous time autoregressive moving average processes driven by fBM are intensively studied.

The purpose of this article is to study the Lamperti transform of fBM. As we have mentioned, for the Lamperti transform to be stationary it is sufficient that the underlying process be self-similar; thus we may carefully study the Lamperti transform of some more general self-similar Gaussian processes. We remark that a reason for the focus on stationary processes is that such a property is indispensable for statistical applications; for example, the statistical treatment of non-stationary processes requires non-standard asymptotic theory, which entails a considerable amount of technical complexity in practice.

We pay attention to the following two processes, which are variants of fBM.

Definition 1.1. Let H ∈ (0, 1) and K ∈ (0, 2) be such that HK ∈ (0, 1). A bifractional Brownian motion (bfBM) is a centered Gaussian process B^{H,K} := {B^{H,K}(t)}_{t∈R_+} with B^{H,K}(0) = 0 and

Cov(B^{H,K}(s), B^{H,K}(t)) = \frac{1}{2^K}\left\{(t^{2H} + s^{2H})^K − |t − s|^{2HK}\right\}, (t, s) ∈ R_+^2.

A sub-fractional Brownian motion (sfBM) is a centered Gaussian process S^H := {S^H(t)}_{t∈R_+} with S^H(0) = 0 and

Cov(S^H(s), S^H(t)) = \frac{1}{2 − 2^{2H−1}}\left(s^{2H} + t^{2H} − \frac{1}{2}\left\{(s + t)^{2H} + |s − t|^{2H}\right\}\right), (t, s) ∈ R_+^2.

Note that we multiply the original process by 1/\sqrt{2 − 2^{2H−1}} so that all variances are equalized: Var(S^{HK}(s)) = s^{2HK} = Var(B^{HK}(s)) = Var(B^{H,K}(s)).
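Setting s = t in the two covariance functions above confirms this normalization (a quick check on our part, not displayed in the original text):

Var(B^{H,K}(t)) = \frac{1}{2^K}(2t^{2H})^K = t^{2HK}, \qquad Var(S^H(t)) = \frac{1}{2 − 2^{2H−1}}\left(2t^{2H} − \frac{1}{2}(2t)^{2H}\right) = t^{2H}.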

The process B^{H,K} was introduced in [12], aiming to broaden the modelling possibilities related to fBM; in particular it discards the stationarity of increments, since, as the authors remarked, fBM is inadequate for modelling the large increments observed in turbulence. The process is known to be HK-self-similar and Hölder continuous of order δ for any δ < HK, and B^{H,1} corresponds to fBM. Notably, B^{H,K} for K ≠ 1 does not have stationary increments; like fBM, B^{H,K} is not a semimartingale except for B^{1/2,1}. Other interesting properties have been investigated, e.g., the variational property [28], the path properties [31] and the relation to the solution of some stochastic partial differential equations [17], to name just a few. The process S^H is derived from certain particle systems in [6]; it serves as an intermediate process between standard BM and fBM in the sense of the correlation decay of the increments. The process S^H is H-self-similar, Hölder continuous and may have long memory increments, and S^{1/2} corresponds to BM. For H ≠ 1/2 it is neither a semimartingale nor of stationary increments. As for B^{H,K}, various generic properties have been intensively studied, e.g., in [32]. The reason for our study of these two processes is that, besides the interesting properties stated above, their structures, visible in the covariance functions, are simple and useful for applications; meanwhile, they retain several important properties of fBM, e.g., both processes are quasi-helices in the sense of J.P. Kahane [15, 16]. We also mention that our methodology in this article may work for other extensions of fBM.

Each self-similar process corresponds to a stationary process via the following well-known transform (see [13]): for H > 0, a stochastic process {X(t)}_{t≥0} is H-self-similar if and only if, for all λ > 0, the process

(1.1) \hat X(t) = e^{−λt} X(e^{\frac{λ}{H}t})

is stationary. See the book [11] devoted to self-similar processes, in which the significance of the Lamperti transform is well illustrated.
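To see why (1.1) yields stationarity in the Gaussian case, a one-line covariance computation suffices (a sketch on our part; the original text does not display it here). Write R(a, b) := Cov(X(a), X(b)); H-self-similarity gives R(ca, cb) = c^{2H}R(a, b) for c > 0, hence for s ≥ 0

Cov(\hat X(t), \hat X(t + s)) = e^{−λ(2t+s)} R(e^{\frac{λt}{H}}, e^{\frac{λ(t+s)}{H}}) = e^{−λ(2t+s)} e^{2λt} R(1, e^{\frac{λs}{H}}) = e^{−λs} R(1, e^{\frac{λs}{H}}),

which depends on the lag s only.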

In this article, we discuss the Lamperti transforms of self-similar Gaussian processes; we denote the Lamperti transforms of B_H, B^{H,K} and S^H by

\hat B_H(t) := e^{−λt} B_H(e^{\frac{λ}{H}t}), \qquad \hat B_{H,K}(t) := e^{−λt} B^{H,K}(e^{\frac{λ}{HK}t}), \qquad \hat S_H(t) := e^{−λt} S^H(e^{\frac{λ}{H}t}),

respectively. We remark that the Ornstein-Uhlenbeck transform is another prominent way to produce a stationary process from fBM; it is given by an exponential integration with respect to fBM,

(1.2) Y_H(t) := \int_{−∞}^{t} e^{−λ(t−u)} dB_H(u),

and is known as a fractional Ornstein-Uhlenbeck process. Although for H = 1/2 (the Brownian motion case) \hat B_H \overset{d}{=} Y_H holds, for H ≠ 1/2 the finite dimensional distributions of \hat B_H and Y_H are different [10]. In this article, we study the correlation decay and the expected maximal increments of \hat B_H and of the two related processes \hat B_{H,K} and \hat S_H, which exhibit quite different behavior from those of Y_H.

This article is organized as follows. In Section 2, we list some preliminaries, including important decompositions for the bfBM and the sfBM. We present our main results in Section 3. A discussion on the role of the exponential stationary processes is given in Section 4. All the proofs are given in the final Section 5.

2. Some preliminaries

In this section we present some tools for our purpose. Firstly we describe known results on the decompositions of B^{H,K} and S^H, which are the key to analyzing the expected maximal increments of the processes. For the decompositions we introduce another centered Gaussian process X_K (defined in [17]),

(2.1) X_K(t) = \int_0^{∞} (1 − e^{−ut}) u^{−\frac{1+K}{2}} dB(u), K ∈ (0, 1) ∪ (1, 2),

whose covariance function satisfies

Cov(X_K(t), X_K(s)) = \frac{Γ(1−K)}{K}\left[t^K + s^K − (t + s)^K\right] if K ∈ (0, 1),
Cov(X_K(t), X_K(s)) = \frac{Γ(2−K)}{K(K−1)}\left[(t + s)^K − t^K − s^K\right] if K ∈ (1, 2).

By definition the process X_K is self-similar with index K/2, and its paths are shown in [17] to be absolutely continuous on [0, ∞) and infinitely differentiable on (0, ∞) for K ∈ (0, 1); this is extended to the case K ∈ (1, 2) in [3] and [27]. We prepare some normalizing constants c_i, i = 1, 2, . . . , 5, as

c_1 = \sqrt{\frac{2^{−K}K}{Γ(1−K)}}, \quad c_2 = 2^{\frac{1−K}{2}}, \quad c_3 = \sqrt{\frac{K(K−1)}{2^K Γ(2−K)}}, \quad c_4 = \sqrt{\frac{H}{Γ(1−2H)}}, \quad c_5 = \sqrt{\frac{H(2H−1)}{Γ(2−2H)}}.

Now the decompositions are as follows.

♢ Decompositions of bfBM B^{H,K} by X_K and fBM B^{HK}:
(B1, by [17]) For H ∈ (0, 1) and K ∈ (0, 1) it follows that

(2.2) {c_1 X_K(t^{2H}) + B^{H,K}(t)} \overset{d}{=} {c_2 B^{HK}(t)},

where B^{H,K} and the integrator B in the definition of X_K are independent.
(B2, by [4]) For H ∈ (0, 1) and K ∈ (1, 2) with HK ∈ (0, 1), the bfBM B^{H,K} has the decomposition

(2.3) {B^{H,K}(t)} \overset{d}{=} {c_2 B^{HK}(t) + c_3 X_K(t^{2H})},

where B^{HK} and the integrator B in the definition of X_K are independent.


♢ Decompositions of sfBM S^H by X_{2H} and fBM B_H, by [27] (cf. [3]):
(S1) For H ∈ (0, 1/2), S^H has the decomposition

(2.4) {d_H S^H(t)} \overset{d}{=} {c_4 X_{2H}(t) + B_H(t)},

where d_H = \sqrt{2 − 2^{2H−1}} and B_H and the integrator B in X_{2H} are independent.
(S2) For H ∈ (1/2, 1), it follows that

(2.5) {c_5 X_{2H}(t) + d_H S^H(t)} \overset{d}{=} {B_H(t)},

where S^H and the integrator B in X_{2H} are independent.
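As a quick consistency check (ours, not part of the original text), the constants c_i indeed equalize the variances on both sides of these decompositions; for instance, for (2.2) and (2.4),

c_1^2 Var(X_K(t^{2H})) + Var(B^{H,K}(t)) = (2^{1−K} − 1)t^{2HK} + t^{2HK} = 2^{1−K} t^{2HK} = c_2^2 Var(B^{HK}(t)),
c_4^2 Var(X_{2H}(t)) + Var(B_H(t)) = (1 − 2^{2H−1})t^{2H} + t^{2H} = d_H^2 Var(S^H(t)).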

Now we characterize the relative sizes of the covariance functions of B_H, B^{H,K} and S^H, and bound the probability of the maximum of the processes B^{H,K} and S^H. As a reference process for the comparisons, we consider a Gaussian Markov process \bar B_H := {\bar B_H(t)}_{t∈[0,1]}, a centered Gaussian process with \bar B_H(0) = 0, independent increments and

Cov(\bar B_H(s), \bar B_H(t)) = s^{2H}, 0 < s < t ≤ 1.

This process is found in, e.g., [29, Lemma 5.7] or [24, Theorem 3.1], where it is intensively used to investigate properties of fBM. Moreover, we add two further self-similar Gaussian processes to our analysis. The first one, \bar S^H, is the original (non-normalized) sfBM, i.e., \bar S^H := d_H S^H, which is a centered Gaussian process with covariance

Cov(\bar S^H(s), \bar S^H(t)) = s^{2H} + t^{2H} − \frac{1}{2}\left\{(s + t)^{2H} + |s − t|^{2H}\right\}.

The other one is the process \bar X_H, H ∈ (0, 1/2) ∪ (1/2, 1), defined by

\bar X_H := \sqrt{\frac{2H}{Γ(1−2H)}\cdot\frac{1}{2 − 2^{2H}}}\, X_{2H} if H ∈ (0, 1/2), \qquad \bar X_H := \sqrt{\frac{2H(2H−1)}{Γ(2−2H)}\cdot\frac{1}{2^{2H} − 2}}\, X_{2H} if H ∈ (1/2, 1),

so that its covariance function is

Cov(\bar X_H(s), \bar X_H(t)) = \frac{1}{|2 − 2^{2H}|}\left|t^{2H} + s^{2H} − (t + s)^{2H}\right|.

The process \bar X_H is the standardized version of X_{2H}, namely Var(\bar X_H(s)) = s^{2H}.

Firstly, we present the following relations among the covariances, with \bar B_H as our reference, from which we derive bounds for the probabilities of the maxima of the self-similar Gaussian processes. All covariance functions are easily shown to be positive for s, t ∈ [0, 1], and all variances are equal except that of \bar S^H.

Lemma 2.1. Let H ∈ (0, 1), K ∈ (0, 1) ∪ (1, 2) and HK ∈ (0, 1), and write

Cov(B_H(s), B_H(t)) =: Υ_H(s, t), \quad Cov(B^{H,K}(s), B^{H,K}(t)) =: Υ_{H,K}(s, t), \quad Cov(\bar B_H(s), \bar B_H(t)) =: \bar{Υ}_H(s, t),
Cov(S^H(s), S^H(t)) =: \mathcal{S}_H(s, t), \quad Cov(\bar S^H(s), \bar S^H(t)) =: \bar{\mathcal{S}}_H(s, t), \quad Cov(\bar X_H(s), \bar X_H(t)) =: χ_H(s, t).

Then we have the following relations.
(1) bfBM case: for s, t ∈ [0, 1],

Υ_{H,K}(s, t) ≤ \bar{Υ}_{HK}(s, t) if K ∈ (0, 1) and HK ∈ (0, 1/2),
Υ_{H,K}(s, t) ≥ \bar{Υ}_{HK}(s, t) if K ∈ (1, 2) and HK ∈ (1/2, 1),

and, in the remaining range (i) K ∈ (0, 1) and HK ∈ (1/2, 1), if H ∈ \left(\frac{1}{2(2K−1)}, 1\right) then Υ_{H,K}(s, t) ≥ \bar{Υ}_{HK}(s, t). If HK ∈ (1/2, 1) ∩ K ∈ (1, 2), or H ∈ \left(\frac{1}{2(2K−1)}, 1\right) ∩ K ∈ (0, 1), then

P\left(\max_{0≤t≤1} B^{H,K}(t) ≥ a\right) ≤ 2 P\left(B^{HK}(1) ≥ a\right), a ≥ 0.

(2) sfBM case: for s, t ∈ [0, 1],

Υ_H(s, t) ⪌ \mathcal{S}_H(s, t) ⪌ \bar{Υ}_H(s, t) ⪌ \bar{\mathcal{S}}_H(s, t) if H ⪌ \frac{1}{2},

which yields, for H ∈ (1/2, 1),

P\left(\max_{0≤t≤1} S^H(t) ≥ a\right) ≤ 2 P\left(B_H(1) ≥ a\right), a ≥ 0.

(3) \bar X_H (equivalently X_{2H}) case: for s, t ∈ [0, 1],

χ_H(s, t) ≥ \bar{Υ}_H(s, t) ≥ Υ_H(s, t) if H ∈ (0, \frac{1}{2}),
χ_H(s, t) ≥ Υ_H(s, t) ≥ \bar{Υ}_H(s, t) if H ∈ (\frac{1}{2}, 1),

which yields

P\left(\max_{0≤t≤1} \bar X_H(t) ≥ a\right) ≤ 2 P\left(B_H(1) ≥ a\right), a ≥ 0.

We remark that at a = 0 the probability inequalities are trivially satisfied. However, since our goal is to bound the expected maxima of the processes, it is the bounds on the tail probabilities (for large a) that matter.

Since B_H(1) follows the standard normal distribution, we can calculate the upper bound explicitly as

P\left(B_H(1) ≥ a\right) = \frac{1}{\sqrt{2π}}\int_a^{∞} e^{−x^2/2} dx.

Note that if we extend the processes from the unit interval s, t ∈ [0, 1] to the whole real line, their covariance functions may become negative, e.g., Υ_H(−s, s) ≤ 0 for H ∈ (1/2, 1) and s > 0. The study of the covariance relations for these extended processes is therefore left as a future topic. Moreover, since the results of Lemma 2.1 do not cover all comparisons of the covariances, a complete characterization of their relative sizes would be interesting in itself; it would presumably depend on the values of both H and K.
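For instance (a one-line check on our part), Υ_H(−s, s) = \frac{1}{2}\left\{s^{2H} + s^{2H} − (2s)^{2H}\right\} = (1 − 2^{2H−1}) s^{2H}, which is indeed ≤ 0 for H ∈ (1/2, 1) and s > 0.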

Next we consider the maximal increments of X_K. We first recall Lemma 2.2 in [20].

Lemma 2.2. Let H ∈ [1/2, 1), a ≥ 0 and r ≥ 0. Then

P\left(\max_{0≤t≤1} B_H(t) ≥ a\right) ≤ \sqrt{\frac{2}{π}}\int_a^{∞} e^{−x^2/2} dx,

P\left(\max_{0≤t≤1} |B_H(t)| ≥ a\right) ≤ 2\sqrt{\frac{2}{π}}\int_a^{∞} e^{−x^2/2} dx,

and

E\left[\left(\max_{0≤t≤r} |B_H(t)|\right)^m\right] ≤ r^{Hm}\, 2\sqrt{\frac{2}{π}}\,(m−1)!! if m is odd, \qquad r^{Hm}\, 2\,(m−1)!! if m is even.

Finally we give maximal inequalities for the self-similar Gaussian process X_K, which are useful for the analysis of the expected maximal increments. Note that, since X_K does not have stationary increments, we cannot employ the tools used in the previous works.

Lemma 2.3. Let m = 1, 2, . . . and (s, t) ∈ R_+^2, and let X_K be the Gaussian process defined by (2.1). Then the m-th powers of the maximal increments satisfy

(2.6) E\left[\max_{s≤t≤s+r} |X_K(t) − X_K(s)|^m\right] ≤ r^m s^{\frac{m}{2}(K−1)−1} C_K^m (m−1)!! if K ∈ (0, 1),

(2.7) E\left[\max_{s≤t≤s+r} |X_K(t) − X_K(s)|^m\right] ≤ r^m s^{\frac{m}{2}(K−2)} C_K^m (m−1)!! if K ∈ (1, 2),

where C_K is a positive constant depending on K.

It is interesting to observe that, for X_K, the bound for the expected maximal increment over the interval [s, s + r], r > 0, is a decreasing function of s, in analogy with the expected squared increment E[(X_K(s + r) − X_K(s))^2].

3. Main results

3.1. Correlation decay. In this section, we rigorously investigate the autocovariance functions of our target processes. We begin with the following correlation decay of \hat B_H, which is cited from [10].

Proposition 3.1. Let H ∈ (0, 1] and t, s ∈ R. Then

Cov\left(\hat B_H(t), \hat B_H(t + s)\right) = \frac{1}{2} e^{λ|s|}\left\{1 + e^{−2λ|s|} − \left(1 − e^{−\frac{λ}{H}|s|}\right)^{2H}\right\}
= \frac{1}{2}\left\{e^{−λ|s|} + \sum_{n=1}^{∞} (−1)^{n−1}\binom{2H}{n} e^{−λ(\frac{n}{H}−1)|s|}\right\}.

Thus the leading term of the correlation decay of \hat B_H is, as |s| → ∞,

Cov(\hat B_H(t), \hat B_H(t + s)) = \frac{1}{2} e^{−λ|s|} + O\left(e^{−λ(\frac{1}{H}−1)|s|}\right) if H ∈ (0, 1/2),
Cov(\hat B_H(t), \hat B_H(t + s)) = H e^{−λ(\frac{1}{H}−1)|s|} + O\left(e^{−λ|s|}\right) if H ∈ [1/2, 1).
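The second expression and the leading terms follow from the binomial series (a brief sketch on our part): expanding

\left(1 − e^{−\frac{λ}{H}|s|}\right)^{2H} = \sum_{n=0}^{∞}\binom{2H}{n}(−1)^n e^{−\frac{λn}{H}|s|}

and multiplying by \frac{1}{2}e^{λ|s|} gives the series above; for H < 1/2 the exponent \frac{1}{H} − 1 exceeds 1, so \frac{1}{2}e^{−λ|s|} dominates, while for H ≥ 1/2 the n = 1 term H e^{−λ(\frac{1}{H}−1)|s|} dominates.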

In the following, we denote the Lamperti transform of fBM by LfBM, and those of bfBM and sfBM by LbfBM and LsfBM, respectively. From the lemma below, we see that the correlation decays of the LbfBM \hat B_{H,K} and the LsfBM \hat S_H are both different from that of the LfBM \hat B_H.

Lemma 3.2. Let H ∈ (0, 1) and K ∈ (0, 2) be such that HK ∈ (0, 1), and let t, s ≥ 0.
(1) The correlation of \hat B_{H,K} has the expansion

Cov(\hat B_{H,K}(t), \hat B_{H,K}(t + s)) = \frac{1}{2^K} e^{λs}\left\{\left(1 + e^{−\frac{2λ}{K}s}\right)^K − \left(1 − e^{−\frac{λ}{HK}s}\right)^{2HK}\right\}
= \frac{1}{2^K} e^{λs}\left\{\sum_{n=1}^{∞}\binom{K}{n} e^{−\frac{2λn}{K}s} − \sum_{n=1}^{∞}(−1)^n\binom{2HK}{n} e^{−\frac{λn}{HK}s}\right\}.

Hence, as s → ∞, the asymptotic behavior is

Cov(\hat B_{H,K}(t), \hat B_{H,K}(t + s)) = \frac{K}{2^K} e^{λs(1−\frac{2}{K})} + O\left(e^{λs(1−\frac{4}{K})} ∨ e^{λs(1−\frac{1}{HK})}\right) if H ∈ (0, 1/2),
Cov(\hat B_{H,K}(t), \hat B_{H,K}(t + s)) = \frac{2HK}{2^K} e^{λs(1−\frac{1}{HK})} + O\left(e^{λs(1−\frac{2}{K})}\right) if H ∈ (1/2, 1).


(2) The correlation of \hat S_H has the expansion

Cov(\hat S_H(t), \hat S_H(t + s)) = \frac{e^{λs}}{2 − 2^{2H−1}}\left(e^{−2λs} + 1 − \frac{1}{2}\left\{\left(1 + e^{−\frac{λ}{H}s}\right)^{2H} + \left(1 − e^{−\frac{λ}{H}s}\right)^{2H}\right\}\right)
= \frac{e^{−λs}}{2 − 2^{2H−1}}\left(1 − \sum_{n=1}^{∞}\binom{2H}{2n} e^{−2λs(\frac{n}{H}−1)}\right)
= \frac{e^{−λs}}{2 − 2^{2H−1}} + O\left(e^{λs(1−\frac{2}{H})}\right), as s → ∞.

Note that for \hat B_{H,K} with K = 1 the result reduces to that of Proposition 3.1 for \hat B_H. Similarly, for \hat S_H with H = 1/2 the result reduces to that of BM. However, for K ≠ 1 and H ≠ 1/2 this is not the case, as one sees in Remark 3.3. In [31, Proposition 2.1], the correlation decay of the Lamperti transform of B^{H,K} is also analyzed with a different parameterization; our result is consistent with theirs if we set λ = HK.

Remark 3.3. In view of Proposition 3.1 and Lemma 3.2, although all of the processes exhibit the short memory property, their autocorrelations decay in different ways. For λ > 0, the autocorrelation at lag s of \hat B_H decreases as e^{−λs} or more slowly as s → ∞, whereas \hat B_{H,K} has a more flexible asymptotic behavior: it decreases as e^{−λ c_{H,K} s}, where c_{H,K} can be any positive number, adjusted through H and K. Moreover, the autocorrelation at lag s of \hat S_H decreases as e^{−λs} only.

We mention that, for a stationary Gaussian process Y, the correlation decays of the power process Y^m, m = 1, 2, . . ., and of the exponential process Z := e^Y obey the following relations, which are straightforward generalizations of Proposition 2.2 in [20] (for (a) in Lemma 3.4) and Lemma 2.2 in [21] (for (b) in Lemma 3.4). In fact Proposition 2.2 in [20] is derived for fractional Ornstein-Uhlenbeck processes, but the result holds for stationary Gaussian processes in exactly the same way.

Lemma 3.4. Let m = 1, 2, . . . and let {Y(t)}_{t∈R} be a stationary Gaussian process with variance σ^2 := Var(Y(0)).
(a) Assume that Cov(Y(0), Y(s)) → 0 as s → ∞. Then, as s → ∞,

Cov((Y(t))^m, (Y(t + s))^m) = m^2 ((m−2)!!)^2 σ^{2(m−1)} Cov(Y(0), Y(s)) + O\left((Cov(Y(0), Y(s)))^2\right) if m is odd,
Cov((Y(t))^m, (Y(t + s))^m) = \frac{1}{2}\left(\frac{m!(m−3)!!}{(m−2)!}\right)^2 σ^{2(m−2)} (Cov(Y(0), Y(s)))^2 + O\left((Cov(Y(0), Y(s)))^4\right) if m is even.

(b) Let Z := e^Y be the exponential stationary process determined by Y(t). Then

Cov(Z(0), Z(s)) ⪌ 0 if and only if Cov(Y(0), Y(s)) ⪌ 0.

Moreover, assume that Cov(Y(0), Y(s)) → 0 as s → ∞. Then it follows that

(3.1) Cov(Z(0), Z(s)) = e^{σ^2}\left\{Cov(Y(0), Y(s)) + o(Cov(Y(0), Y(s)))\right\}.
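For completeness, here is the standard computation behind (b) (a sketch on our part; the original does not display it at this point). Since (Y(0), Y(s)) is jointly Gaussian with common variance σ^2 and covariance ρ(s) := Cov(Y(0), Y(s)),

E[e^{Y(0)+Y(s)}] = e^{\frac{1}{2}Var(Y(0)+Y(s))} = e^{σ^2 + ρ(s)}, \qquad E[e^{Y(0)}]E[e^{Y(s)}] = e^{σ^2},

so that Cov(Z(0), Z(s)) = e^{σ^2}\left(e^{ρ(s)} − 1\right), which has the same sign as ρ(s) and equals e^{σ^2}\{ρ(s) + o(ρ(s))\} as ρ(s) → 0, giving (3.1).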

Since the LfBM, LbfBM and LsfBM are all positively correlated, we thus have the following.

Proposition 3.5. Let H ∈ (0, 1]. Denote the LfBM by \hat B_H and the associated exponential process by \widetilde B_H := e^{\hat B_H}. Then, for fixed t ∈ R, m = 1, 3, 5, . . . and s → ∞,

Cov((\hat B_H(t))^m, (\hat B_H(t + s))^m) = m^2((m−2)!!)^2 \left\{\frac{1}{2} e^{−λ|s|} + O\left(e^{−λ(\frac{1}{H}−1)|s|}\right)\right\} if H ∈ (0, 1/2),
Cov((\hat B_H(t))^m, (\hat B_H(t + s))^m) = m^2((m−2)!!)^2 \left\{H e^{−λ(\frac{1}{H}−1)|s|} + O\left(e^{−λ|s|}\right)\right\} if H ∈ [1/2, 1),

and, for fixed t ∈ R, m = 2, 4, 6, . . . and s → ∞,

Cov((\hat B_H(t))^m, (\hat B_H(t + s))^m) = \frac{1}{2}\left(\frac{m!(m−3)!!}{(m−2)!}\right)^2 \left\{\frac{1}{4} e^{−2λ|s|} + O\left(e^{−\frac{λ}{H}|s|} ∨ e^{−4λ|s|}\right)\right\} if H ∈ (0, 1/2),
Cov((\hat B_H(t))^m, (\hat B_H(t + s))^m) = \frac{1}{2}\left(\frac{m!(m−3)!!}{(m−2)!}\right)^2 \left\{H^2 e^{−2λ(\frac{1}{H}−1)|s|} + O\left(e^{−\frac{λ}{H}|s|} ∨ e^{−4λ(\frac{1}{H}−1)|s|}\right)\right\} if H ∈ [1/2, 1).

Moreover, for fixed t ∈ R and s → ∞,

Cov(\widetilde B_H(t), \widetilde B_H(t + s)) = \frac{1}{2} e^{1−λ|s|} + O\left(e^{−λ(\frac{1}{H}−1)|s|}\right) if H ∈ (0, 1/2),
Cov(\widetilde B_H(t), \widetilde B_H(t + s)) = H e^{1−λ(\frac{1}{H}−1)|s|} + O\left(e^{−λ|s|}\right) if H ∈ [1/2, 1).

Proposition 3.6. Let H ∈ (0, 1) and K ∈ (0, 2) be such that HK ∈ (0, 1). Denote the LbfBM by \hat B_{H,K} and its exponential by \widetilde B_{H,K} := e^{\hat B_{H,K}}. Then, for m = 1, 3, 5, . . . and t ∈ R, the covariance decay as s → ∞ is given by

Cov((\hat B_{H,K}(t))^m, (\hat B_{H,K}(t + s))^m) = m^2((m−2)!!)^2 \left\{\frac{K}{2^K} e^{λs(1−\frac{2}{K})} + O\left(e^{2λs(1−\frac{2}{K})} ∨ e^{λs(1−\frac{1}{HK})}\right)\right\} if H ∈ (0, 1/2),
Cov((\hat B_{H,K}(t))^m, (\hat B_{H,K}(t + s))^m) = m^2((m−2)!!)^2 \left\{\frac{2HK}{2^K} e^{λs(1−\frac{1}{HK})} + O\left(e^{λs(1−\frac{2}{K})} ∨ e^{2λs(1−\frac{1}{HK})}\right)\right\} if H ∈ (1/2, 1),

and that for m = 2, 4, 6, . . . is

Cov((\hat B_{H,K}(t))^m, (\hat B_{H,K}(t + s))^m) = \frac{1}{2}\left(\frac{m!(m−3)!!}{(m−2)!}\right)^2 \left\{\frac{K^2}{2^{2K}} e^{2λs(1−\frac{2}{K})} + O\left(e^{3λs(1−\frac{2}{K})} ∨ e^{λs(2−\frac{2}{K}−\frac{1}{HK})}\right)\right\} if H ∈ (0, 1/2),
Cov((\hat B_{H,K}(t))^m, (\hat B_{H,K}(t + s))^m) = \frac{1}{2}\left(\frac{m!(m−3)!!}{(m−2)!}\right)^2 \left\{\frac{(2HK)^2}{2^{2K}} e^{2λs(1−\frac{1}{HK})} + O\left(e^{λs(2−\frac{2}{K}−\frac{1}{HK})} ∨ e^{3λs(1−\frac{1}{HK})}\right)\right\} if H ∈ (1/2, 1).

Moreover, for fixed t ∈ R and s → ∞,

Cov(\widetilde B_{H,K}(t), \widetilde B_{H,K}(t + s)) = \frac{Ke}{2^K} e^{λs(1−\frac{2}{K})} + o\left(e^{λs(1−\frac{2}{K})}\right) if H ∈ (0, 1/2),
Cov(\widetilde B_{H,K}(t), \widetilde B_{H,K}(t + s)) = \frac{2HKe}{2^K} e^{λs(1−\frac{1}{HK})} + o\left(e^{λs(1−\frac{1}{HK})}\right) if H ∈ (1/2, 1).

Proposition 3.7. Let H ∈ (0, 1), and denote the LsfBM by \hat S_H and its exponential by \widetilde S_H := e^{\hat S_H}. Then, for t ∈ R and m = 1, 3, 5, . . ., the correlation decay as s → ∞ is

Cov((\hat S_H(t))^m, (\hat S_H(t + s))^m) = m^2((m−2)!!)^2\left[\frac{e^{−λs}}{2 − 2^{2H−1}} + O\left(e^{λs(1−\frac{2}{H})}\right)\right],

and for m = 2, 4, 6, . . .,

Cov((\hat S_H(t))^m, (\hat S_H(t + s))^m) = \frac{1}{2}\left(\frac{m!(m−3)!!}{(m−2)!}\right)^2\left[\frac{e^{−2λs}}{(2 − 2^{2H−1})^2} + O\left(e^{−\frac{2λ}{H}s} ∨ e^{−4λs}\right)\right].

Moreover, for t ∈ R and s → ∞,

Cov(\widetilde S_H(t), \widetilde S_H(t + s)) = \frac{e^{1−λs}}{2 − 2^{2H−1}} + O\left(e^{λs(1−\frac{2}{H})}\right).


3.2. Expected maximal increments. We present maximal inequalities for the Lamperti processes LfBM \hat B_H, LbfBM \hat B_{H,K} and LsfBM \hat S_H treated in this article and for their exponentials \widetilde B_H, \widetilde B_{H,K} and \widetilde S_H. The idea is to make use of the stationarity of the Lamperti processes and of the decompositions of the self-similar processes. As far as exponentials of stationary processes generated by self-similar Gaussian processes are concerned, except for the one generated by BM, only a few results are known, e.g., for the fractional Ornstein-Uhlenbeck process or for CARMA processes; see [20] or [21].

We start with the LfBM \hat B_H and its exponential.

Proposition 3.8. Let H ∈ [1/2, 1) and m = 1, 2, . . ., and denote the LfBM by \hat B_H and its exponential by \widetilde B_H := e^{\hat B_H}. Then, for s ∈ R and r ∈ (0, 1),

(3.2) \frac{E\left[\max_{s≤t≤s+r} |\hat B_H(t) − \hat B_H(s)|^m\right]}{m!} ≤ \frac{C^m r^{Hm}}{\sqrt{m!}},

and

(3.3) E\left[\max_{s≤t≤s+r} |\widetilde B_H(t) − \widetilde B_H(s)|\right] ≤ C' r^H,

where C and C' are positive constants which can be taken uniformly in m.

Next we analyze the LbfBM \hat B_{H,K} and its exponential \widetilde B_{H,K}, using the decompositions (2.2) and (2.3).

Proposition 3.9. Let H ∈ (0, 1) and K ∈ (0, 2) be such that HK ∈ (1/2, 1), and let m = 1, 2, . . .. Denote the LbfBM by \hat B_{H,K} and its exponential by \widetilde B_{H,K} := e^{\hat B_{H,K}}. Then, for s ∈ R and r ∈ (0, 1),

(3.4) \frac{E\left[\max_{s≤t≤s+r} |\hat B_{H,K}(t) − \hat B_{H,K}(s)|^m\right]}{m!} ≤ \frac{C^m r^{HKm}}{\sqrt{m!}},

and

(3.5) E\left[\max_{s≤t≤s+r} |\widetilde B_{H,K}(t) − \widetilde B_{H,K}(s)|\right] ≤ C' r^{HK},

where C and C' are positive constants which can be taken uniformly in m.

Finally, we present results for the LsfBM \hat S_H and its exponential \widetilde S_H; as before, we utilize the decomposition (2.5).

Proposition 3.10. Let H ∈ (1/2, 1) and m = 1, 2, . . ., and denote the LsfBM by \hat S_H and its exponential by \widetilde S_H := e^{\hat S_H}. Then, for s ∈ R and r ∈ (0, 1),

(3.6) \frac{E\left[\max_{s≤t≤s+r} |\hat S_H(t) − \hat S_H(s)|^m\right]}{m!} ≤ \frac{C^m r^{Hm}}{\sqrt{m!}},

and

(3.7) E\left[\max_{s≤t≤s+r} |\widetilde S_H(t) − \widetilde S_H(s)|\right] ≤ C' r^H,

where C and C' are positive constants which can be taken uniformly in m.

Remark 3.11. 1. All the Lamperti transforms and their exponentials satisfy analogous bounds for the expected maxima of their small increments. The results are natural, since the processes are derived from self-similar Gaussian processes and each bound reflects the self-similarity parameter of the underlying process.

2. There is a literature discussing the maximum distribution and maximal inequalities for fBM; one can find such results in [23], in the monographs [11, 22], and in the recent overview [26]. Most of them either combine a Gaussian Markov process \bar B_H with Slepian's lemma, or combine martingale inequalities with a Gaussian martingale M(t) whose variance is ct^{2−2H} for some constant c > 0. All the derived bounds are related to the self-similarity parameter H. Our results on the maximal increments of fBM are comparable with this existing literature, since fBM has stationary increments with B_H(0) = 0. However, our results on the maximal increments of the other two processes, bfBM and sfBM, are different, since these processes do not have stationary increments and have not been studied so far. In view of Propositions 3.9 and 3.10, our bounds for the maximal increments exhibit the same relation to the self-similarity parameters as that for fBM; in this sense the results are nearly optimal.

3. As for the distribution of the maximum of stationary Gaussian processes, a large number of studies have been conducted: the tail probability of the maximum ([14], [19] and [7]), and inequalities between the distributions of the maxima of two different Gaussian processes ([30]). Variations of these are found in the monograph on general Gaussian processes [1], and also in [2]. In [8] and [9] the authors evaluate the supremum distribution of Gaussian processes with stationary increments based on extreme value theory and apply the results to queueing analysis. However, these results are not applicable to our purpose. Note that our target processes (Lamperti transforms and their exponentials) are based on self-similar processes, and every bound for the maximal increments is, as a whole, controlled by the self-similarity parameter.

4. Discussion

This section discusses the role of the exponential stationary processes in stochastic modelling. Consider the exponential processes \widetilde B_H, \widetilde B_{H,K} and \widetilde S_H, and denote any one of them by a common symbol \widetilde Z(t). Then, by the results presented in Section 3, \widetilde Z(t) has the following features: (1) it is strictly stationary in t; (2) it is positive valued; (3) it is positively correlated at any two time instants s, t; (4) the correlation decay over the time lag [t, t + s] is fast in s, indeed of exponential order; and (5) for an arbitrarily fixed s, the expected maximal increments form a summable sequence,

E\left[\max_{s≤t≤s+b^{−k}} |\widetilde Z(t) − \widetilde Z(s)|\right] ≤ C b^{−kH}, k = 1, 2, . . . ,

where we may choose any suitable b > 1, uniformly over all s.
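Indeed (our remark), these bounds form a convergent geometric series:

\sum_{k=1}^{∞} E\left[\max_{s≤t≤s+b^{−k}} |\widetilde Z(t) − \widetilde Z(s)|\right] ≤ C\sum_{k=1}^{∞} b^{−kH} = \frac{C b^{−H}}{1 − b^{−H}} < ∞ for any b > 1.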

Therefore, the mean-one process

\widetilde Z(t) / E[\widetilde Z(t)]

can be used as a mother process to generate a certain multifractal stochastic infinite-product process, which is related to the burst phenomenon of Internet communications; see Section 3 of [20] (which studied the exponential OU transform of fBM) and the earlier paper [18] (which studied general schemes for generating infinite-product processes).

Moreover, positive stationary processes are often required in applications, since many real-life data are non-negative. For instance, in continuous-time stochastic volatility models ([5]), the log-price P(t) of a risky asset is represented as

dP(t) = (µ + βσ(t)) dt + \sqrt{σ(t−)} dW(t),

where σ(t) is a positive stationary process and W is BM. The simplest choice for σ is the exponential of the ordinary OU process. More complex alternatives include OU processes driven by non-negative Lévy processes and their variations. Solutions of other SDEs involving W are also considered (e.g., the Hull-White model or the Vasicek model). In financial time series, both stationarity and positivity (sometimes also long memory or jumps) are essential for the volatility process σ(t). Then, noticing that the OU process is the Lamperti transform of BM, the exponentials \widetilde Z(t) of the other transformed processes could be good candidates: they are simply defined, they model the correlation decay more flexibly than the OU-based choice, and in addition we can theoretically characterize the sign of their autocorrelation functions.
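As an illustration of how such a candidate volatility process could be generated in practice, the following is a minimal numerical sketch (ours, not part of the original paper). It samples the exponential of the Lamperti transform of fBM on a grid by a Cholesky factorization of its stationary covariance, using only the covariance formula of Proposition 3.1; the parameter values and the simulation method are illustrative assumptions, not prescriptions from the text.

import numpy as np

def lamperti_fbm_cov(s, H, lam):
    # Stationary covariance of the Lamperti transform of fBM at lag s
    # (the formula of Proposition 3.1).
    s = np.abs(s)
    return 0.5 * np.exp(lam * s) * (
        1.0 + np.exp(-2.0 * lam * s) - (1.0 - np.exp(-lam * s / H)) ** (2.0 * H)
    )

def simulate_exp_lamperti_fbm(n=512, dt=0.05, H=0.7, lam=1.0, seed=0):
    # Sample exp(B_hat_H) on a regular grid via a Cholesky factorization
    # of the (Toeplitz) covariance matrix of the stationary Gaussian process.
    rng = np.random.default_rng(seed)
    grid = dt * np.arange(n)
    cov = lamperti_fbm_cov(grid[None, :] - grid[:, None], H, lam)
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    gaussian_path = chol @ rng.standard_normal(n)        # sample of B_hat_H
    return np.exp(gaussian_path)                          # positive stationary process

z = simulate_exp_lamperti_fbm()
print(z.mean(), z.var())  # roughly exp(1/2) and exp(1)*(exp(1)-1) for long samples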


5. Proofs

Proof of Lemma 2.1. (1) Without loss of generality let t ≥ s. We consider the function

f_1(t; s) := Υ_{H,K}(s, t) − \bar{Υ}_{HK}(s, t) = \frac{1}{2^K}\left\{(t^{2H} + s^{2H})^K − (2s^{2H})^K − (t − s)^{2HK}\right\}

and its partial derivative in t,

f_1'(t; s) = \frac{2HK}{2^K} t^{2HK−1}\left\{(1 + (s/t)^{2H})^{K−1} − (1 − s/t)^{2HK−1}\right\},

where f_1(t; s) is regarded as a function of t with parameter s. Then, noticing the sign of the brace in f_1'(t; s), for K ∈ (1, 2) ∩ HK ∈ (1/2, 1) we have f_1'(t; s) ≥ 0, which yields f_1(t; s) ≥ f_1(s; s) = 0. On the contrary, for K ∈ (0, 1) ∩ HK ∈ (0, 1/2) it follows that f_1'(t; s) ≤ 0, which gives f_1(t; s) ≤ f_1(s; s) = 0.

In order to obtain results for (i) K ∈ (0, 1) ∩ HK ∈ (1/2, 1) and (ii) K ∈ (1, 2) ∩ HK ∈ (0, 1/2), we further analyze the sign of f_1'(t; s); namely, we analyze

g(x) = (1 + x^{2H})^{K−1} − (1 − x)^{2HK−1}, x ∈ [0, 1],

with g(0) = 0 and g(1) = 2^{K−1}. (i) Noticing H ∈ (1/2, 1) and the derivative

g'(x) = 2H(K−1)(1 + x^{2H})^{K−2} x^{2H−1} + (2HK−1)(1 − x)^{2HK−2},

we have g'(x) ≥ 0 provided 2HK − 1 ≥ 2H(1 − K), which implies f_1(t; s) ≥ 0. Hence we obtain the result for (i). (ii) Noticing H ∈ (0, 1/2), we observe that

g''(x) = 2H(K−1)(K−2)(1 + x^{2H})^{K−3}\, 2H(x^{2H−1})^2 + 2H(K−1)(2H−1)(1 + x^{2H})^{K−2} x^{2H−2} − (2HK−1)(2HK−2)(1 − x)^{2HK−3} ≤ 0.

Now the concavity of g(x) implies g(x) ≥ 0.

Finally, the last inequality follows from Slepian's lemma,

P\left(\max_{0≤t≤1} B^{H,K}(t) ≥ a\right) ≤ P\left(\max_{0≤t≤1} \bar B_{HK}(t) ≥ a\right),

together with the symmetry and the reflection principle of a Gaussian Markov process, as in the proof of Lemma 2.3 in [20]. Notice that \bar B_{HK} is a deterministic time change of B.

(2) Without loss of generality let t ≥ s. The right-hand inequality is implied by

\bar{\mathcal{S}}_H(s, t) − \bar{Υ}_H(s, t) = t^{2H} − \frac{1}{2}\left\{(t + s)^{2H} + (t − s)^{2H}\right\} ⪋ 0 if H ⪌ \frac{1}{2}.

Regarding the middle inequality, we let

f_2(t; s) := \mathcal{S}_H(s, t) − \bar{Υ}_H(s, t) = \frac{1}{2 − 2^{2H−1}}\left[t^{2H} − s^{2H} − \frac{1}{2}\left\{(s + t)^{2H} − (2s)^{2H} + (t − s)^{2H}\right\}\right],

which we regard as a function of t given s. Since the derivative in t satisfies

f_2'(t; s) = \frac{2H}{2 − 2^{2H−1}}\left\{t^{2H−1} − \frac{(s + t)^{2H−1} + (t − s)^{2H−1}}{2}\right\} ⪌ 0 if H ⪌ \frac{1}{2},

and f_2(s; s) = 0, we conclude that

\mathcal{S}_H(s, t) ⪌ \bar{Υ}_H(s, t) if H ⪌ \frac{1}{2}.


In order to analyze

\mathcal{S}_H(s, t) − Υ_H(s, t) = \frac{2^{2H−1}}{2 − 2^{2H−1}}\left\{\frac{s^{2H} + t^{2H}}{2} − \left(\frac{t + s}{2}\right)^{2H} + \left(\frac{t − s}{2}\right)^{2H} − \frac{(t − s)^{2H}}{2}\right\},

we define a function of a with parameter b as

f_3(a; b) = \frac{a^{2H} + (a + b)^{2H}}{2} − \left(\frac{2a + b}{2}\right)^{2H}, a ≥ 0, b ≥ 0,

whose derivative satisfies

f_3'(a; b) = 2H\left(\frac{a^{2H−1} + (a + b)^{2H−1}}{2} − \left(\frac{2a + b}{2}\right)^{2H−1}\right) ⪋ 0 for H ⪌ \frac{1}{2},

from which we know that f_3 is non-increasing (resp. non-decreasing) in a for H ∈ (1/2, 1) (resp. H ∈ (0, 1/2)). Now, putting b = t − s, we observe that

\mathcal{S}_H(s, t) − Υ_H(s, t) = \frac{2^{2H−1}}{2 − 2^{2H−1}}\left(f_3(s; t − s) − f_3(0; t − s)\right) ⪋ 0 for H ⪌ \frac{1}{2}.

The probability of the maximum is bounded in the same manner as before.

(3) For H ∈ (0, 1/2), the result is implied by

χ_H(s, t) − \bar{Υ}_H(s, t) = \frac{1}{2 − 2^{2H}}\left[t^{2H} − s^{2H} − \left\{(t + s)^{2H} − (2s)^{2H}\right\}\right] ≥ 0.

For H ∈ (1/2, 1), it suffices to observe that

χ_H(s, t) − Υ_H(s, t) = \frac{2(2 − 2^{2H−1})}{2^{2H} − 2}\left(Υ_H(s, t) − \mathcal{S}_H(s, t)\right) ≥ 0,

by part (2). Hence the maximum of the process is bounded by that of \bar B_H for H ∈ (0, 1/2) ∪ (1/2, 1), similarly as in the proof of (1). □

Proof of Lemma 2.3. In the proof, c_i^K, i = 1, 2, . . ., denote positive constants depending on K ∈ (0, 2) whose exact values are irrelevant and may vary from line to line.

(1) The law of the iterated logarithm for B at 0 and at ∞ assures the existence of the pathwise integral and justifies the integration by parts for X_K, K ∈ (0, 1), which yields, for t ≥ s > 0,

X_K(t) − X_K(s) = \int_0^{∞} (e^{−us} − e^{−ut}) u^{−\frac{1+K}{2}} dB(u)
= \int_0^{∞} (s e^{−us} − t e^{−ut}) u^{−\frac{1+K}{2}} B(u)\, du + \frac{1+K}{2}\int_0^{∞} (e^{−us} − e^{−ut}) u^{−\frac{3+K}{2}} B(u)\, du.

By applying the inequality 1 − e^{−x} ≤ x, x ≥ 0, and the triangle inequality several times, we obtain

|X_K(t) − X_K(s)| ≤ \int_0^{∞} |s e^{−us} − t e^{−ut}| u^{−\frac{1+K}{2}} |B(u)|\, du + \frac{1+K}{2}\int_0^{∞} |e^{−ut} − e^{−us}| u^{−\frac{3+K}{2}} |B(u)|\, du
≤ (t − s)\int_0^{∞} e^{−ut} u^{−\frac{1+K}{2}} |B(u)|\, du + s\int_0^{∞} |e^{−u(t−s)} − 1| e^{−us} u^{−\frac{1+K}{2}} |B(u)|\, du + \frac{1+K}{2}\int_0^{∞} |e^{−u(t−s)} − 1| e^{−us} u^{−\frac{3+K}{2}} |B(u)|\, du
