
1.1 Diffusion Process

Diffusion processes have become the standard tool for modeling prices in financial markets for derivative pricing and risk management purposes. Consider an Ito stochastic process that satisfies a stochastic differential equation (SDE) of the form

dy(t) = a{y(t), t, θ} dt + b{y(t), t, θ} dW(t)    (1.1)

where a{y(t), t, θ} and b{y(t), t, θ} are the non-anticipative drift and volatility functions respectively, depending on y(t), time t, and an unknown parameter vector θ, and dW(t) is the increment of a standard Wiener process. Although such continuous-time processes offer analytic tractability, the parameters that govern their dynamics are often difficult to estimate from discrete-time data. In a nutshell, estimation is problematic because the model is formulated in continuous time, while sample data are naturally available only at discrete frequencies, so estimates obtained by naive discretizations of diffusion processes can be subject to discretization bias. Because of this bias, a number of methods have been proposed to estimate the parameters of diffusion processes from discrete observations of y at time points 0 = t0 < t1 < ··· < tn.
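To make the discretization issue concrete, the following is a minimal sketch (an illustration only, with a hypothetical mean-reverting drift and constant volatility standing in for a and b) of simulating (1.1) with the Euler scheme; the coarser the time step, the cruder the approximation to the continuous-time dynamics, which is the source of the discretization bias just described.

```python
import numpy as np

def euler_path(a, b, theta, y0, T, n_steps, rng):
    """Simulate one path of dy = a(y, t, theta) dt + b(y, t, theta) dW(t)
    on a grid of n_steps equal steps over [0, T] using the Euler scheme."""
    dt = T / n_steps
    y = np.empty(n_steps + 1)
    y[0] = y0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
        y[i + 1] = y[i] + a(y[i], i * dt, theta) * dt + b(y[i], i * dt, theta) * dW
    return y

# Hypothetical specification: mean-reverting drift, constant volatility.
a = lambda y, t, th: -th["kappa"] * (y - th["mu"])
b = lambda y, t, th: th["sigma"]

rng = np.random.default_rng(0)
theta = {"kappa": 1.0, "mu": 0.2, "sigma": 0.05}
fine = euler_path(a, b, theta, y0=0.4, T=10.0, n_steps=1000, rng=rng)   # "high frequency" grid
coarse = euler_path(a, b, theta, y0=0.4, T=10.0, n_steps=10, rng=rng)   # coarse sampling grid
```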

Among all kinds of continuous-time models, however, a simple analytic transition density of the process cannot be derived in every case. If the transition densities

p(y(ti) | y(ti−1), θ)

are known, we can construct the exact likelihood function of the discrete observations to estimate θ. The corresponding maximum likelihood estimator θ̂n is known to have the usual good properties. In the case of time-equidistant observations (ti = i∆, i = 0, 1, ..., n, for some fixed ∆ > 0), many papers have proved the consistency and asymptotic normality of θ̂n as n → ∞. It is only natural that the number of observations must be large for any estimator to be close to the true value, and from a practical point of view this asymptotic property is important because it ensures that the estimator is close to the true value. Unfortunately,

the transition densities of continuous-time diffusion processes are, with the exception of a few special cases, unknown or unavailable in closed form, so conventional likelihood-based inference is often inapplicable.
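When the transition densities are available in closed form, likelihood-based inference reduces to summing log transition densities over consecutive observations and maximizing over θ. The sketch below illustrates this general recipe; the transition_logpdf argument and the toy Gaussian model used in the example are assumptions made only for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

def log_likelihood(theta, y, dt, transition_logpdf):
    """Exact log-likelihood of discretely observed Markov data:
    the sum of log transition densities between consecutive observations."""
    return sum(transition_logpdf(y[i], y[i - 1], dt, theta) for i in range(1, len(y)))

def mle(y, dt, transition_logpdf, theta0):
    """Maximum likelihood estimate: numerically minimize the negative log-likelihood."""
    res = minimize(lambda th: -log_likelihood(th, y, dt, transition_logpdf),
                   theta0, method="Nelder-Mead")
    return res.x

# Toy model with a known Gaussian transition density (Brownian motion with drift):
# y_{t+dt} | y_t ~ N(y_t + theta[0] * dt, theta[1]**2 * dt).
def bm_logpdf(y_next, y_prev, dt, theta):
    mean, var = y_prev + theta[0] * dt, theta[1] ** 2 * dt
    return -0.5 * (np.log(2 * np.pi * var) + (y_next - mean) ** 2 / var)

rng = np.random.default_rng(1)
dt, n = 0.1, 500
y = np.cumsum(np.r_[0.0, 0.3 * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(n)])
print(mle(y, dt, bm_logpdf, theta0=np.array([0.0, 0.1])))    # estimates near (0.3, 0.2)
```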

Traditionally, to overcome the difficulty that arises when the transition densities of y are unknown, the usual alternative is to use an approximate log-likelihood function Ln(θ) based on the discrete observations of y. Nevertheless, unless max 1≤i≤n |ti − ti−1| is "small", such an approximation can perform poorly; in the case of time-equidistant observations, Florens-Zmirou (1989) showed that the estimator based on maximizing Ln(θ) is consistent only when the sampling interval is sufficiently small. Pedersen (1995) derived a sequence of approximations to the likelihood function and applied this simulated likelihood method to estimation problems for several diffusion processes that are widely used in finance.
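A standard construction of such an approximate log-likelihood Ln(θ), sketched below, replaces each unknown transition density by the Gaussian density implied by an Euler step over the sampling interval; this is one common choice and is assumed here purely for illustration.

```python
import numpy as np

def euler_log_likelihood(theta, y, t, a, b):
    """Approximate log-likelihood L_n(theta): treat each transition
    y_{t_i} | y_{t_{i-1}} as Gaussian with mean y_{i-1} + a(y_{i-1}, t_{i-1}, theta) * dt
    and variance b(y_{i-1}, t_{i-1}, theta)**2 * dt (Euler approximation)."""
    ll = 0.0
    for i in range(1, len(y)):
        dt = t[i] - t[i - 1]
        mean = y[i - 1] + a(y[i - 1], t[i - 1], theta) * dt
        var = b(y[i - 1], t[i - 1], theta) ** 2 * dt
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return ll
```

Consistent with the discussion above, this approximation is reliable only when max 1≤i≤n |ti − ti−1| is small.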

1.2 Mean Reversion

When using diffusion processes as a tool for financial modeling, many researchers add components to the non-anticipative drift term or the volatility function in order to capture specific phenomena observed in financial markets, for example the mean reversion effect or the volatility clustering effect. Mean reversion is the tendency of a stochastic process to remain near, or to return over time to, a long-run average value. The mean reversion effect has been observed in interest rate markets, especially in the short-term market.

In contrast, this behavior is not so obvious in stock markets. Vasicek (1977) proposed an interest rate model for treasury debt pricing through a mean-reverting stochastic differential equation:

dy_t = −κ(y_t − µ) dt + σ dW_t    (1.3)

where κ is the constant strength of mean reversion, µ is the equilibrium level, σ is the volatility, and W_t is a standard Brownian motion.
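The Vasicek model is one of the few diffusions whose transition density is available in closed form: conditional on y_s, the value y_t is Gaussian with mean µ + (y_s − µ)e^{−κ(t−s)} and variance σ²(1 − e^{−2κ(t−s)})/(2κ). The sketch below evaluates this exact log transition density (a standard result stated here for illustration, not a procedure taken from this thesis), which is what makes exact maximum likelihood feasible for this particular model.

```python
import numpy as np

def vasicek_transition_logpdf(y_next, y_prev, dt, kappa, mu, sigma):
    """Exact log transition density of the Vasicek model (1.3):
    y_{t+dt} | y_t is Gaussian with the mean and variance computed below."""
    mean = mu + (y_prev - mu) * np.exp(-kappa * dt)
    var = sigma ** 2 * (1.0 - np.exp(-2.0 * kappa * dt)) / (2.0 * kappa)
    return -0.5 * (np.log(2.0 * np.pi * var) + (y_next - mean) ** 2 / var)
```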

Figure 1: Simulated Data for Comparison of Different Mean Reversion Strengths (κ = 0.1 vs. κ = 1)
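A sketch of how the comparison in Figure 1 could be generated (the values of µ, σ, the initial value, and the time grid are illustrative assumptions; only the two values of κ come from the figure legend):

```python
import numpy as np

def simulate_vasicek(kappa, mu, sigma, y0, dt, n_steps, rng):
    """Simulate a Vasicek path using the exact Gaussian transition,
    so the mean reversion strength kappa is reproduced without Euler bias."""
    y = np.empty(n_steps + 1)
    y[0] = y0
    decay = np.exp(-kappa * dt)
    std = sigma * np.sqrt((1.0 - np.exp(-2.0 * kappa * dt)) / (2.0 * kappa))
    for i in range(n_steps):
        y[i + 1] = mu + (y[i] - mu) * decay + std * rng.normal()
    return y

rng = np.random.default_rng(42)
slow = simulate_vasicek(kappa=0.1, mu=0.2, sigma=0.05, y0=0.5, dt=1.0, n_steps=200, rng=rng)
fast = simulate_vasicek(kappa=1.0, mu=0.2, sigma=0.05, y0=0.5, dt=1.0, n_steps=200, rng=rng)
# The kappa = 1 path is pulled back toward mu = 0.2 much faster than the kappa = 0.1 path.
```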

Models that incorporate a mean reversion effect are also called mean-reverting processes. To be more precise, a process is mean reverting if its increments over disjoint intervals are negatively correlated. This is an important property of mean-reverting processes because it reflects an invisible force pulling prices back toward their equilibrium; asset prices therefore tend to fall (rise) after hitting a maximum (minimum).
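This characterization can be checked numerically: increments of a simulated mean-reverting path over disjoint intervals exhibit negative sample correlation. A small illustrative check, with all parameter values chosen only for the demonstration:

```python
import numpy as np

# Simulate a long Vasicek/OU path with the exact Gaussian transition.
kappa, mu, sigma, dt, n = 1.0, 0.2, 0.05, 1.0, 50_000
rng = np.random.default_rng(7)
decay = np.exp(-kappa * dt)
std = sigma * np.sqrt((1.0 - np.exp(-2.0 * kappa * dt)) / (2.0 * kappa))
y = np.empty(n + 1)
y[0] = mu
for i in range(n):
    y[i + 1] = mu + (y[i] - mu) * decay + std * rng.normal()

# Correlation of increments over consecutive (hence disjoint) intervals.
inc = np.diff(y)
print(np.corrcoef(inc[:-1], inc[1:])[0, 1])    # negative for a mean-reverting process
```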

1.3 Motivation

Once we have observed such a phenomenon in a certain market, a more crucial question follows immediately: how strong is the mean reversion? That is, even if we can expect prices to return to the mean, how long will the market need? Many financial practitioners are concerned with this issue. For instance, a mean-reverting model is typically used to argue that an un-hedged long equity position requires less capital than a non-mean-reverting model would imply. However, what if we use a mean-reverting model whose transition densities are unknown? Should we simply rely on discrete observations to make inferences?

Figure 2: Discretely Observed Data and High Frequency Data

In Figure 2, the dashed line and the solid line denote 11 discretely observed data points and 100 high frequency data points respectively, and the dash-dot line marks the mean of 0.2. If the only information available is the discretely observed data, the process appears to revert to the mean for the first time between the fourth and fifth discrete observations. In fact, the full process had already reverted to the mean several times before the discrete observations revealed it, and a similar situation occurs again later. With only discrete observations available, the mean reversion strength of this process is under-estimated. In this thesis, we show that simulating additional "augmented data" to stand in for high frequency data improves estimation relative to using the discretely observed data alone. This principle is referred to as "data augmentation", and the simulation procedure is based on the Markov chain Monte Carlo (MCMC) method. The simulated augmented data and the discretely observed data together form the complete data, from which we can formulate the complete-data likelihood function. The simulated likelihood function is then obtained by integrating the augmented data out of the complete-data likelihood function, and all estimation and inference can be conducted through it. The maximum likelihood estimation results show that as the degree of augmentation increases, the estimates improve to an extent that depends on the strength of the mean reversion. This supports our claim that discrete observations alone are inadequate for continuous-time diffusion processes. In fact, data augmentation is an implementation of the idea of approximate likelihood.
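A minimal sketch of the data augmentation idea described above, in the spirit of a simulated likelihood: each observation interval is subdivided, latent high frequency points are simulated, and the unknown transition density is approximated by averaging over the simulated augmented paths. The Euler sub-stepping, the Monte Carlo averaging, and the example drift/volatility functions below are implementation assumptions for illustration, not the exact procedure developed in this thesis.

```python
import numpy as np

def simulated_log_likelihood(theta, y, dt, a, b, m=10, n_paths=200, rng=None):
    """Simulated (augmented-data) log-likelihood for discretely observed data y.

    Each observation interval of length dt is split into m sub-steps.  The
    augmented points are simulated with an Euler scheme, and the unknown
    transition density is approximated by a Monte Carlo average of the Gaussian
    density of the final sub-step, i.e. the augmented data are integrated out
    by simulation."""
    if rng is None:
        rng = np.random.default_rng()
    delta = dt / m
    ll = 0.0
    for i in range(1, len(y)):
        # Simulate n_paths augmented sub-paths from y[i-1] up to time t_i - delta.
        x = np.full(n_paths, y[i - 1], dtype=float)
        for _ in range(m - 1):
            x = x + a(x, theta) * delta + b(x, theta) * np.sqrt(delta) * rng.standard_normal(n_paths)
        # Gaussian density of the final Euler sub-step landing on the observation y[i].
        mean = x + a(x, theta) * delta
        var = b(x, theta) ** 2 * delta
        dens = np.exp(-0.5 * (y[i] - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        ll += np.log(dens.mean())
    return ll

# Hypothetical mean-reverting drift and constant volatility, theta = (kappa, mu, sigma).
a = lambda x, th: -th[0] * (x - th[1])
b = lambda x, th: th[2] * np.ones_like(x)
```

As the number of sub-steps m (the degree of augmentation) and the number of simulated paths grow, the Monte Carlo average approaches the unknown transition density, which is consistent with the finding above that increasing the degree of augmentation can improve the estimates.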
