(1)

Brownian Motion^a

• Brownian motion is a stochastic process { X(t), t ≥ 0 } with the following properties.

1. X(0) = 0, unless stated otherwise.

2. for any 0 ≤ t0 < t1 < · · · < tn, the random variables X(tk) − X(tk−1) for 1 ≤ k ≤ n are independent.^b

3. for 0 ≤ s < t, X(t) − X(s) is normally distributed with mean μ(t − s) and variance σ²(t − s), where μ and σ ≠ 0 are real numbers.

^a Robert Brown (1773–1858).

^b So X(t) − X(s) is independent of X(r) for r ≤ s < t.

(2)

Brownian Motion (concluded)

• The existence and uniqueness of such a process are guaranteed by Wiener’s theorem.^a

• This process will be called a (μ, σ) Brownian motion with drift μ and variance σ².

• Although Brownian motion is a continuous function of t with probability one, it is almost surely nowhere differentiable.

• The (0, 1) Brownian motion is called the Wiener process.

• If condition 3 is replaced by “ X(t) − X(s) depends only on t − s,” we have the more general Lévy process.^b

^a Norbert Wiener (1894–1964). He received his Ph.D. from Harvard in 1912.

^b Paul Lévy (1886–1971).

(3)

Example

• If { X(t), t ≥ 0 } is the Wiener process, then X(t) − X(s) ∼ N(0, t − s).

• A (μ, σ) Brownian motion Y = { Y (t), t ≥ 0 } can be expressed in terms of the Wiener process:

Y (t) = μt + σX(t). (78)

• Note that

Y (t + s) − Y (t) ∼ N(μs, σ²s).

(4)

Brownian Motion as Limit of Random Walk

Claim 1 A (μ, σ) Brownian motion is the limiting case of random walk.

• A particle moves Δx to the right with probability p after Δt time.

• It moves Δx to the left with probability 1 − p.

• Define

Xi ≜ +1 if the ith move is to the right, and −1 if the ith move is to the left.

– Xi are independent with

Prob[ Xi = 1 ] = p = 1 − Prob[ Xi = −1 ].

(5)

Brownian Motion as Limit of Random Walk (continued)

• Recall

E[ Xi ] = 2p − 1,

Var[ Xi ] = 1 − (2p − 1)².

• Assume n ≜ t/Δt is an integer.

• Its position at time t is

Y (t) ≜ Δx (X1 + X2 + · · · + Xn).

(6)

Brownian Motion as Limit of Random Walk (continued)

• Therefore,

E[ Y (t) ] = n(Δx)(2p − 1),

Var[ Y (t) ] = n(Δx)² [ 1 − (2p − 1)² ].

• With Δx = σ√Δt and p = [ 1 + (μ/σ)√Δt ]/2,^a

E[ Y (t) ] = nσ√Δt (μ/σ)√Δt = μt,

Var[ Y (t) ] = nσ²Δt [ 1 − (μ/σ)²Δt ] → σ²t, as Δt → 0.

^a Identical to Eq. (42) on p. 296!

(7)

Brownian Motion as Limit of Random Walk (concluded)

• Thus, { Y (t), t ≥ 0 } converges to a (μ, σ) Brownian motion by the central limit theorem.

• Brownian motion with zero drift is the limiting case of symmetric random walk by choosing μ = 0.

• Similarity to the BOPM: The p is identical to the probability in Eq. (42) on p. 296 and Δx = ln u.

• Note that

Var[ Y (t + Δt) − Y (t) ] = Var[ Δx Xn+1 ] = (Δx)² × Var[ Xn+1 ] → σ²Δt, as the simulation sketch below also illustrates.
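The limiting behavior of Claim 1 can be checked numerically. Below is a minimal simulation sketch (assuming NumPy is available; the function name and parameter values are illustrative only), using the step size Δx = σ√Δt and up-move probability p = [ 1 + (μ/σ)√Δt ]/2 from above:

```python
import numpy as np

def terminal_positions(mu, sigma, t, dt, n_paths, rng):
    """Simulate Y(t) = dx*(X_1 + ... + X_n) for the random walk defined above."""
    n = int(round(t / dt))                      # number of steps, n = t/dt
    dx = sigma * np.sqrt(dt)                    # step size dx = sigma*sqrt(dt)
    p = (1 + (mu / sigma) * np.sqrt(dt)) / 2    # probability of a move to the right
    steps = np.where(rng.random((n_paths, n)) < p, 1.0, -1.0)
    return dx * steps.sum(axis=1)               # Y(t) for each simulated path

rng = np.random.default_rng(0)
mu, sigma, t = 0.1, 0.3, 2.0
y = terminal_positions(mu, sigma, t, dt=0.01, n_paths=100_000, rng=rng)
print(y.mean(), mu * t)          # sample mean vs. the limit mu*t
print(y.var(), sigma**2 * t)     # sample variance vs. the limit sigma^2*t
```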

(8)

Geometric Brownian Motion

• Let X ≜ { X(t), t ≥ 0 } be a Brownian motion process.

• The process { Y (t) ≜ e^{X(t)}, t ≥ 0 } is called geometric Brownian motion.

• Suppose further that X is a (μ, σ) Brownian motion.

• By assumption, Y (0) = e⁰ = 1.

(9)

Geometric Brownian Motion (concluded)

• X(t) ∼ N(μt, σ²t) with moment generating function

E[ e^{sX(t)} ] = E[ Y (t)^s ] = e^{μts + σ²ts²/2}

from Eq. (27) on p. 171.

• In particular,^a

E[ Y (t) ] = e^{μt + σ²t/2},

Var[ Y (t) ] = E[ Y (t)² ] − E[ Y (t) ]² = e^{2μt + σ²t} ( e^{σ²t} − 1 ).

^a Recall Eqs. (29) on p. 180.
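The two moment formulas can be sanity-checked by Monte Carlo. A sketch (NumPy assumed; the sample size and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, t, n = 0.05, 0.4, 1.5, 1_000_000

x = rng.normal(mu * t, sigma * np.sqrt(t), size=n)   # X(t) ~ N(mu*t, sigma^2*t)
y = np.exp(x)                                        # Y(t) = exp(X(t))

mean_exact = np.exp(mu * t + sigma**2 * t / 2)
var_exact = np.exp(2 * mu * t + sigma**2 * t) * (np.exp(sigma**2 * t) - 1)
print(y.mean(), mean_exact)   # Monte Carlo estimate vs. closed form
print(y.var(), var_exact)
```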

(10)

[Figure: a sample path of geometric Brownian motion, Y(t) plotted against time t.]

(11)

An Argument for Long-Term Investment^a

• Suppose the stock follows the geometric Brownian motion

S(t) = S(0) e^{N(μt, σ²t)} = S(0) e^{t N(μ, σ²/t)}, t ≥ 0,

where μ > 0.

• The annual rate of return has a normal distribution:

N( μ, σ²/t ).

• The larger the t, the likelier the return is positive.

• The smaller the t, the likelier the return is negative.

^a Contributed by Dr. King, Gow-Hsing on April 9, 2015. See http://www.cb.idv.tw/phpbb3/viewtopic.php?f=7&t=1025
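The monotonicity claims follow because the annual return is N(μ, σ²/t), so Prob[ return > 0 ] = Φ(μ√t/σ), which increases with t when μ > 0. A small sketch (standard library only; parameter values are illustrative):

```python
from math import erf, sqrt

def prob_positive_return(mu, sigma, t):
    """Prob[ N(mu, sigma^2/t) > 0 ] = Phi(mu*sqrt(t)/sigma)."""
    z = mu * sqrt(t) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 0.08, 0.25
for t in (1, 5, 10, 30):
    print(t, prob_positive_return(mu, sigma, t))   # increases toward 1 as t grows
```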

(12)

Continuous-Time Financial Mathematics

(13)

A proof is that which convinces a reasonable man;

a rigorous proof is that which convinces an unreasonable man.

— Mark Kac (1914–1984)

The pursuit of mathematics is a divine madness of the human spirit.

— Alfred North Whitehead (1861–1947), Science and the Modern World

(14)

Stochastic Integrals

• Use W ≜ { W (t), t ≥ 0 } to denote the Wiener process.

• The goal is to develop integrals of X from a class of stochastic processes,^a

I_t(X) ≜ ∫_0^t X dW, t ≥ 0.

• I_t(X) is a random variable called the stochastic integral of X with respect to W.

• The stochastic process { I_t(X), t ≥ 0 } will be denoted by ∫ X dW.

^a Kiyoshi Ito (1915–2008).

(15)

Stochastic Integrals (concluded)

• Typical requirements for X in financial applications are:

– Prob[ ∫_0^t X²(s) ds < ∞ ] = 1 for all t ≥ 0, or the stronger ∫_0^t E[ X²(s) ] ds < ∞.

– The information set at time t includes the history of X and W up to that point in time.

– But it contains nothing about the evolution of X or W after t (nonanticipating, so to speak).

– The future cannot influence the present.

(16)

Ito Integral

• A theory of stochastic integration.

• As with calculus, it starts with step functions.

• A stochastic process { X(t) } is simple if there exist 0 = t0 < t1 < · · · such that

X(t) = X(t_{k−1}) for t ∈ [ t_{k−1}, t_k ), k = 1, 2, . . .

for any realization (see figure on next page).^a

^a It is right-continuous.

(17)

[Figure: a sample path of a simple (piecewise-constant, right-continuous) stochastic process.]

(18)

Ito Integral (continued)

• The Ito integral of a simple process is defined as

I_t(X) ≜ Σ_{k=0}^{n−1} X(t_k)[ W (t_{k+1}) − W (t_k) ],   (79)

where t_n = t.

– The integrand X is evaluated at tk, not tk+1.

• Define the Ito integral of more general processes as a limiting random variable of the Ito integral of simple stochastic processes.

(19)

Ito Integral (continued)

• Let X = { X(t), t ≥ 0 } be a general stochastic process.

• Then there exists a random variable I_t(X), unique almost certainly, such that I_t(X_n) converges in probability to I_t(X) for each sequence of simple stochastic processes X_1, X_2, . . . such that X_n converges in probability to X.

• If X is continuous with probability one, then I_t(X_n) converges in probability to I_t(X) as

δ_n ≜ max_{1≤k≤n} ( t_k − t_{k−1} )

goes to zero.

(20)

Ito Integral (concluded)

• It is a fundamental fact that ∫ X dW is continuous almost surely.

• The following theorem says the Ito integral is a martingale.^a

Theorem 18 The Ito integral ∫ X dW is a martingale.

• A corollary is the mean value formula

E[ ∫_a^b X dW ] = 0.

^a Exercise 14.1.1 covers simple stochastic processes.

(21)

Discrete Approximation and Nonanticipation

• Recall Eq. (79) on p. 592.

• The following simple stochastic process { X̂(t) } can be used in place of X to approximate ∫_0^t X dW:

X̂(s) ≜ X(t_{k−1}) for s ∈ [ t_{k−1}, t_k ), k = 1, 2, . . . , n.

• Note the nonanticipating feature of X̂.

– The information up to time s,

{ X(t), W (t), 0 ≤ t ≤ s },

cannot determine the future evolution of X or W .

(22)

Discrete Approximation and Nonanticipation (concluded)

• Suppose, unlike Eq. (79) on p. 592, we defined the stochastic integral from

Σ_{k=0}^{n−1} X(t_{k+1})[ W (t_{k+1}) − W (t_k) ].

• Then we would be using the following different simple stochastic process in the approximation,

Y(s) ≜ X(t_k) for s ∈ [ t_{k−1}, t_k ), k = 1, 2, . . . , n.

• This clearly anticipates the future evolution of X.^a

^a See Exercise 14.1.2 for an example where it matters.
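The difference between the two evaluation conventions is easy to see numerically for X = W: the anticipating sum exceeds the nonanticipating one by roughly Σ (ΔW)² ≈ t. A sketch (NumPy assumed; the grid size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 200_000
dt = t / n
dw = rng.normal(0.0, np.sqrt(dt), size=n)        # Brownian increments W(t_{k+1}) - W(t_k)
w = np.concatenate(([0.0], np.cumsum(dw)))       # W(t_0), W(t_1), ..., W(t_n)

left = np.sum(w[:-1] * dw)    # nonanticipating: integrand at t_k, as in Eq. (79)
right = np.sum(w[1:] * dw)    # anticipating: integrand at t_{k+1}
print(right - left, t)        # equals sum(dw^2), which converges to t
```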

(23)


(24)

Ito Process

• The stochastic process X = { X_t, t ≥ 0 } that solves

X_t = X_0 + ∫_0^t a(X_s, s) ds + ∫_0^t b(X_s, s) dW_s, t ≥ 0,

is called an Ito process.

– X0 is a scalar starting point.

– { a(Xt, t) : t ≥ 0 } and { b(Xt, t) : t ≥ 0 } are stochastic processes satisfying certain regularity conditions.

– a(Xt, t): the drift.

– b(Xt, t): the diffusion.

(25)

Ito Process (continued)

• Typical regularity conditions are:^a

– For all T > 0, x ∈ R^n, and 0 ≤ t ≤ T,

| a(x, t) | + | b(x, t) | ≤ C(1 + | x |) for some constant C.^b

– (Lipschitz continuity) For all T > 0, x ∈ R^n, and 0 ≤ t ≤ T,

| a(x, t) − a(y, t) | + | b(x, t) − b(y, t) | ≤ D | x − y | for some constant D.

^a Øksendal (2007).

^b This condition is not needed in time-homogeneous cases, where a and b do not depend on t.

(26)

Ito Process (continued)

• A shorthand^a is the following stochastic differential equation^b (SDE) for the Ito differential dX_t,

dX_t = a(X_t, t) dt + b(X_t, t) dW_t.   (80)

– Or simply dX_t = a_t dt + b_t dW_t.

– This is Brownian motion with an instantaneous drift a_t and an instantaneous variance b_t².

• X is a martingale if a_t = 0.^c

^a Paul Langevin (1872–1946) in 1904.

^b Like any equation, an SDE contains an unknown, the process X_t.

^c Recall Theorem 18 (p. 594).

(27)

Ito Process (concluded)

• From calculus, we would expect ∫_0^t W dW = W (t)²/2.

• But W (t)²/2 is not a martingale, hence wrong!

• The correct answer is [ W (t)² − t ]/2.

• A popular representation of Eq. (80) is

dX_t = a_t dt + b_t √dt ξ,   (81)

where ξ ∼ N(0, 1).
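The claim about ∫_0^t W dW can be verified with the nonanticipating sums of Eq. (79). A sketch (NumPy assumed; path and grid counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n, n_paths = 1.0, 1_000, 5_000
dt = t / n

dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))     # Brownian increments per path
w = np.cumsum(dw, axis=1)                                # W(t_1), ..., W(t_n)
w_prev = np.hstack((np.zeros((n_paths, 1)), w[:, :-1]))  # W(t_0), ..., W(t_{n-1})

ito = np.sum(w_prev * dw, axis=1)    # Ito sums approximating the integral of W dW
print(np.mean(ito))                                   # ~ 0, as the Ito integral has mean 0
print(np.mean(np.abs(ito - (w[:, -1]**2 - t) / 2)))   # small, shrinking as n grows: matches [W(t)^2 - t]/2
```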

(28)

Euler Approximation

• Define t_n ≜ nΔt.

• The following approximation follows from Eq. (81),

X̂(t_{n+1}) = X̂(t_n) + a( X̂(t_n), t_n ) Δt + b( X̂(t_n), t_n ) ΔW (t_n).   (82)

• It is called the Euler or Euler-Maruyama method.

• Recall that ΔW (t_n) should be interpreted as W (t_{n+1}) − W (t_n), not W (t_n) − W (t_{n−1})!^a

^a Recall Eq. (79) on p. 592.

(29)

Euler Approximation (concluded)

• With the Euler method, one can obtain a sample path

X̂(t_1), X̂(t_2), X̂(t_3), . . .

from a sample path

W (t_0), W (t_1), W (t_2), . . . .

• Under mild conditions, X̂(t_n) converges to X(t_n).
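A minimal Euler-Maruyama sketch of Eq. (82) follows (NumPy assumed; the drift, diffusion, and parameters below are illustrative, not from the text):

```python
import numpy as np

def euler_maruyama(a, b, x0, t, dt, rng):
    """Euler scheme of Eq. (82): X_hat(t_{n+1}) = X_hat(t_n) + a(.) dt + b(.) dW(t_n)."""
    n = int(round(t / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))     # dW(t_k) = W(t_{k+1}) - W(t_k)
        x[k + 1] = x[k] + a(x[k], k * dt) * dt + b(x[k], k * dt) * dw
    return x

# Illustrative SDE: dX = -0.5 X dt + 0.2 dW with X(0) = 1.
rng = np.random.default_rng(4)
path = euler_maruyama(lambda x, s: -0.5 * x, lambda x, s: 0.2,
                      x0=1.0, t=1.0, dt=1e-3, rng=rng)
print(path[-1])
```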

(30)

More Discrete Approximations

• Under fairly loose regularity conditions, Eq. (82) on p. 602 can be replaced by

X̂(t_{n+1}) = X̂(t_n) + a( X̂(t_n), t_n ) Δt + b( X̂(t_n), t_n ) √Δt Y (t_n).

– Y (t0), Y (t1), . . . are independent and identically distributed with zero mean and unit variance.

(31)

More Discrete Approximations (concluded)

• An even simpler discrete approximation scheme:

X̂(t_{n+1}) = X̂(t_n) + a( X̂(t_n), t_n ) Δt + b( X̂(t_n), t_n ) √Δt ξ.

– Prob[ ξ = 1 ] = Prob[ ξ = −1 ] = 1/2.

– Note that E[ ξ ] = 0 and Var[ ξ ] = 1.

• This is a binomial model.

• As Δt goes to zero, X̂ converges to X.^a

^a He (1990).

(32)

Trading and the Ito Integral

• Consider an Ito process

dS_t = μ_t dt + σ_t dW_t.

– S_t is the vector of security prices at time t.

• Let φ_t be a trading strategy denoting the quantity of each type of security held at time t.

– Hence the stochastic process φ_t S_t is the value of the portfolio φ_t at time t.

• φ_t dS_t ≜ φ_t ( μ_t dt + σ_t dW_t ) represents the change in the value from security price changes occurring at time t.

(33)

Trading and the Ito Integral (concluded)

• The equivalent Ito integral,

G_T(φ) ≜ ∫_0^T φ_t dS_t = ∫_0^T φ_t μ_t dt + ∫_0^T φ_t σ_t dW_t,

measures the gains realized by the trading strategy over the period [ 0, T ].

(34)

Ito’s Lemma^a

A smooth function of an Ito process is itself an Ito process.

Theorem 19 Suppose f : R → R is twice continuously differentiable and dX = a_t dt + b_t dW. Then f(X) is the Ito process,

f(X_t) = f(X_0) + ∫_0^t f′(X_s) a_s ds + ∫_0^t f′(X_s) b_s dW + (1/2) ∫_0^t f″(X_s) b_s² ds

for t ≥ 0.

^a Ito (1944).

(35)

Ito’s Lemma (continued)

• In differential form, Ito’s lemma becomes

df(X) = f′(X) a dt + f′(X) b dW + (1/2) f″(X) b² dt   (83)

      = [ f′(X) a + (1/2) f″(X) b² ] dt + f′(X) b dW.

• Compared with calculus, the interesting part is the third term on the right-hand side of Eq. (83).

• A convenient formulation of Ito’s lemma is

df(X) = f′(X) dX + (1/2) f″(X) (dX)².   (84)
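Formulation (84) can be checked numerically in the simplest case f(X) = X² with constant a and b: then df = 2X dX + b² dt, and summing these increments along a path should reproduce X(t)² − X(0)². A sketch (NumPy assumed; parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 1.0, 100_000
dt = t / n
a, b = 0.3, 0.5                                  # constant drift and diffusion: dX = a dt + b dW

dw = rng.normal(0.0, np.sqrt(dt), size=n)
dx = a * dt + b * dw                             # increments of the Ito process
x = np.concatenate(([0.0], np.cumsum(dx)))       # X(t_0) = 0, X(t_1), ..., X(t_n)

# Ito's lemma for f(x) = x^2: df = 2x dX + b^2 dt (the b^2 dt term is the Ito correction).
lhs = x[-1]**2 - x[0]**2
rhs = np.sum(2 * x[:-1] * dx + b**2 * dt)
print(lhs, rhs)                                  # agree up to discretization error
```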

(36)

Ito’s Lemma (continued)

• We are supposed to multiply out (dX)² = (a dt + b dW)² symbolically according to the multiplication table

×    | dW | dt
dW   | dt | 0
dt   | 0  | 0

– The (dW)² = dt entry is justified by a known result.

• Hence (dX)² = (a dt + b dW)² = b² dt in Eq. (84).

• This form is easy to remember because of its similarity to the Taylor expansion.

(37)

Ito’s Lemma (continued)

Theorem 20 (Higher-Dimensional Ito’s Lemma) Let W_1, W_2, . . . , W_n be independent Wiener processes and X ≜ (X_1, X_2, . . . , X_m) be a vector process. Suppose f : R^m → R is twice continuously differentiable and X_i is an Ito process with dX_i = a_i dt + Σ_{j=1}^n b_{ij} dW_j. Then df(X) is an Ito process with the differential,

df(X) = Σ_{i=1}^m f_i(X) dX_i + (1/2) Σ_{i=1}^m Σ_{k=1}^m f_{ik}(X) dX_i dX_k,

where f_i ≜ ∂f/∂X_i and f_{ik} ≜ ∂²f/∂X_i ∂X_k.

(38)

Ito’s Lemma (continued)

• The multiplication table for Theorem 20 is

×     | dW_i      | dt
dW_k  | δ_{ik} dt | 0
dt    | 0         | 0

in which δ_{ik} = 1 if i = k, and 0 otherwise.

(39)

Ito’s Lemma (continued)

• In applying the higher-dimensional Ito’s lemma, usually one of the variables, say X1, is time t and dX1 = dt.

• In this case, b1j = 0 for all j and a1 = 1.

• As an example, let dX_t = a_t dt + b_t dW_t.

• Consider the process f(X_t, t).

(40)

Ito’s Lemma (continued)

• Then

df = (∂f/∂X_t) dX_t + (∂f/∂t) dt + (1/2) (∂²f/∂X_t²) (dX_t)²

   = (∂f/∂X_t) (a_t dt + b_t dW_t) + (∂f/∂t) dt + (1/2) (∂²f/∂X_t²) (a_t dt + b_t dW_t)²

   = [ (∂f/∂X_t) a_t + ∂f/∂t + (1/2) (∂²f/∂X_t²) b_t² ] dt + (∂f/∂X_t) b_t dW_t.   (85)

(41)

Ito’s Lemma (continued)

Theorem 21 (Alternative Ito’s Lemma) Let W_1, W_2, . . . , W_m be Wiener processes and X ≜ (X_1, X_2, . . . , X_m) be a vector process. Suppose f : R^m → R is twice continuously differentiable and X_i is an Ito process with dX_i = a_i dt + b_i dW_i. Then df(X) is the following Ito process,

df(X) = Σ_{i=1}^m f_i(X) dX_i + (1/2) Σ_{i=1}^m Σ_{k=1}^m f_{ik}(X) dX_i dX_k.

(42)

Ito’s Lemma (concluded)

• The multiplication table for Theorem 21 is

×     | dW_i      | dt
dW_k  | ρ_{ik} dt | 0
dt    | 0         | 0

• Above, ρ_{ik} denotes the correlation between dW_i and dW_k.

(43)

Geometric Brownian Motion

• Consider geometric Brownian motion Y (t) ≜ e^{X(t)}.

– X(t) is a (μ, σ) Brownian motion.

– By Eq. (78) on p. 577, dX = μ dt + σ dW.

• Note that

∂Y/∂X = Y, ∂²Y/∂X² = Y.

(44)

Geometric Brownian Motion (continued)

• Ito’s formula (83) on p. 609 implies

dY = Y dX + (1/2) Y (dX)²

   = Y (μ dt + σ dW) + (1/2) Y (μ dt + σ dW)²

   = Y (μ dt + σ dW) + (1/2) Y σ² dt.

• Hence

dY/Y = ( μ + σ²/2 ) dt + σ dW.   (86)

• The annualized instantaneous rate of return is μ + σ²/2 (not μ).^a

^a Consistent with Lemma 10 (p. 301).

(45)

Geometric Brownian Motion (continued)

• Alternatively, from Eq. (78) on p. 577, X_t = X_0 + μt + σ W_t is an explicit (strong) solution for X.

• Hence

Y_t = Y_0 e^{μt + σ W_t},   (87)

a strong solution to the SDE (86), where Y_0 = e^{X_0}.

(46)

Geometric Brownian Motion (concluded)

• On the other hand, suppose

dY/Y = μ dt + σ dW.

• Then X(t) ≜ ln Y (t) follows

dX = ( μ − σ²/2 ) dt + σ dW.

(47)

Exponential Martingale

• The Ito process

dX_t = b_t X_t dW_t

is a martingale.^a

• It is called an exponential martingale.

• By Ito’s formula (83) on p. 609,

X(t) = X(0) exp[ −(1/2) ∫_0^t b_s² ds + ∫_0^t b_s dW_s ].

^a Recall Theorem 18 (p. 594).

(48)

Product of Geometric Brownian Motion Processes

• Let

dY/Y = a dt + b dW_Y,

dZ/Z = f dt + g dW_Z.

• Assume dW_Y and dW_Z have correlation ρ.

• Consider the Ito process U ≜ Y Z.

(49)

Product of Geometric Brownian Motion Processes (continued)

• Apply Ito’s lemma (Theorem 21 on p. 615):

dU = Z dY + Y dZ + dY dZ

   = ZY (a dt + b dW_Y) + Y Z(f dt + g dW_Z) + Y Z(a dt + b dW_Y)(f dt + g dW_Z)

   = U(a + f + bgρ) dt + Ub dW_Y + Ug dW_Z.

• The product of correlated geometric Brownian motion processes thus remains geometric Brownian motion.

(50)

Product of Geometric Brownian Motion Processes (continued)

• Note that

Y = exp[ ( a − b²/2 ) dt + b dW_Y ],

Z = exp[ ( f − g²/2 ) dt + g dW_Z ],

U = exp[ ( a + f − ( b² + g² )/2 ) dt + b dW_Y + g dW_Z ].

• The strong solutions are:

Y (t) = exp[ ( a − b²/2 ) t + b W_Y(t) ],

Z(t) = exp[ ( f − g²/2 ) t + g W_Z(t) ],

U (t) = exp[ ( a + f − ( b² + g² )/2 ) t + b W_Y(t) + g W_Z(t) ].

(51)

Product of Geometric Brownian Motion Processes (concluded)

• ln U is Brownian motion with a mean equal to the sum of the means of ln Y and ln Z.

• This holds even if Y and Z are correlated.

• Finally, ln Y and ln Z have correlation ρ.
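One consequence of the drift a + f + bgρ is E[ U(t) ] = e^{(a + f + bgρ) t}, which is easy to check by simulating the two correlated drivers. A sketch (NumPy assumed; parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, f, g, rho, t, n = 0.05, 0.2, 0.03, 0.3, 0.6, 1.0, 1_000_000

# Correlated Brownian values W_Y(t), W_Z(t) with correlation rho.
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
wy, wz = np.sqrt(t) * z1, np.sqrt(t) * z2

y = np.exp((a - b**2 / 2) * t + b * wy)          # strong solution for Y(t), with Y(0) = 1
z = np.exp((f - g**2 / 2) * t + g * wz)          # strong solution for Z(t), with Z(0) = 1
u = y * z

print(u.mean(), np.exp((a + f + b * g * rho) * t))   # the drift of U is a + f + b*g*rho
```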

(52)

Quotients of Geometric Brownian Motion Processes

• Suppose Y and Z are drawn from p. 622.

• Let U ≜ Y/Z.

• We now show that^a

dU/U = ( a − f + g² − bgρ ) dt + b dW_Y − g dW_Z.   (88)

• Keep in mind that dW_Y and dW_Z have correlation ρ.

^a Exercise 14.3.6 of the textbook is erroneous.

(53)

Quotients of Geometric Brownian Motion Processes (concluded)

• The multidimensional Ito’s lemma (Theorem 21 on p. 615) can be employed to show that

dU = (1/Z) dY − (Y/Z²) dZ − (1/Z²) dY dZ + (Y/Z³) (dZ)²

   = (1/Z)(aY dt + bY dW_Y) − (Y/Z²)(f Z dt + gZ dW_Z) − (1/Z²)(bgY Zρ dt) + (Y/Z³)(g²Z² dt)

   = U(a dt + b dW_Y) − U(f dt + g dW_Z) − U(bgρ dt) + U(g² dt)

   = U(a − f + g² − bgρ) dt + Ub dW_Y − Ug dW_Z.

(54)

Forward Price

• Suppose S follows

dS/S = μ dt + σ dW.

• Consider the functional F (S, t) ≜ S e^{y(T−t)} for constants y and T.

• As F is a function of two variables, we need the various partial derivatives of F (S, t) with respect to S and t.

• Note that in partial differentiation with respect to one variable, other variables are held constant.a

^a Contributed by Mr. Sun, Ao (R05922147) on April 26, 2017.

(55)

Forward Prices (continued)

• Now,

∂F/∂S = e^{y(T−t)},

∂²F/∂S² = 0,

∂F/∂t = −yS e^{y(T−t)}.

• Then^a

dF = e^{y(T−t)} dS − yS e^{y(T−t)} dt

   = S e^{y(T−t)} (μ dt + σ dW) − yS e^{y(T−t)} dt

   = F (μ − y) dt + F σ dW.

^a One can also prove it by Eq. (85) on p. 614.

(56)

Forward Prices (concluded)

• Thus F follows

dF/F = (μ − y) dt + σ dW.

• This result has applications in forward and futures contracts.

• In Eq. (60) on p. 490, μ = r = y.

• So

dF/F = σ dW,

a martingale.^a

^a It is consistent with p. 566. Furthermore, it explains why Black’s formulas (68)–(69) on p. 518 use the same volatility σ as the stock’s.

(57)

Ornstein-Uhlenbeck (OU) Process

• The OU process:

dX = −κX dt + σ dW, where κ, σ ≥ 0.

• For t_0 ≤ s ≤ t and X(t_0) = x_0, it is known that

E[ X(t) ] = e^{−κ(t−t_0)} E[ x_0 ],

Var[ X(t) ] = (σ²/2κ) ( 1 − e^{−2κ(t−t_0)} ) + e^{−2κ(t−t_0)} Var[ x_0 ],

Cov[ X(s), X(t) ] = (σ²/2κ) e^{−κ(t−s)} ( 1 − e^{−2κ(s−t_0)} ) + e^{−κ(t+s−2t_0)} Var[ x_0 ].

(58)

Ornstein-Uhlenbeck Process (continued)

• X(t) is normally distributed if x0 is a constant or normally distributed.

– E[ x0 ] = x0 and Var[ x0 ] = 0 if x0 is a constant.

• X is said to be a normal process.

• The OU process has the following mean-reverting property if κ > 0.

– When X > 0, X is pulled toward zero.

– When X < 0, it is pulled toward zero again.

(59)

Ornstein-Uhlenbeck Process (continued)

• A generalized version:

dX = κ(μ − X) dt + σ dW, where κ, σ ≥ 0.

• Given X(t_0) = x_0, a constant, it is known that

E[ X(t) ] = μ + (x_0 − μ) e^{−κ(t−t_0)},   (89)

Var[ X(t) ] = (σ²/2κ) ( 1 − e^{−2κ(t−t_0)} ),

for t_0 ≤ t.

(60)

Ornstein-Uhlenbeck Process (concluded)

• The mean and standard deviation are roughly μ and σ/√(2κ), respectively.

• For large t, the event X < 0 is extremely unlikely in any finite time interval when μ > 0 is large relative to σ/√(2κ).

• The process is mean-reverting.

– X tends to move toward μ.

– Useful for modeling term structure, stock price volatility, and stock price return.^a

^a See Knutson, Wimmer, Kuhnen, & Winkielman (2008) for the biological basis for mean reversion in financial decision making.
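The moment formulas in Eq. (89) can be checked against a small-step Euler simulation of dX = κ(μ − X) dt + σ dW. A sketch (NumPy assumed; parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
kappa, mu, sigma, x0 = 2.0, 0.05, 0.1, 0.2
t, n_steps, n_paths = 1.0, 500, 50_000
dt = t / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):                         # Euler steps of dX = kappa(mu - X) dt + sigma dW
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += kappa * (mu - x) * dt + sigma * dw

print(x.mean(), mu + (x0 - mu) * np.exp(-kappa * t))                    # mean in Eq. (89)
print(x.var(), sigma**2 / (2 * kappa) * (1 - np.exp(-2 * kappa * t)))   # variance formula
```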

(61)

Square-Root Process

• Suppose X is an OU process.

• Consider V ≜ X².

• Ito’s lemma says V has the differential

dV = 2X dX + (dX)²

   = 2√V ( −κ√V dt + σ dW ) + σ² dt

   = ( −2κV + σ² ) dt + 2σ√V dW,

a square-root process.

(62)

Square-Root Process (continued)

• In general, the square-root process has the SDE

dX = κ( μ − X ) dt + σ√X dW,

where κ, σ > 0, μ ≥ 0, and X(0) ≥ 0 is a constant.

• Like the OU process, it possesses mean reversion: X tends to move toward μ, but the volatility is proportional to √X instead of a constant.

(63)

Square-Root Process (continued)

• When X hits zero and μ ≥ 0, the probability is one that it will not move below zero.

– Zero is a reflecting boundary.

• Hence, the square-root process is a good candidate for modeling interest rates.^a

• The OU process, in contrast, allows negative interest rates.^b

• The two processes are related.^c

^a Cox, Ingersoll, & Ross (1985).

^b Some rates did go negative in Europe in 2015.

^c Recall p. 635.

(64)

Square-Root Process (concluded)

• The random variable 2cX(t) follows the noncentral chi-square distribution,^a

χ²( 4κμ/σ², 2cX(0) e^{−κt} ),

where c ≜ (2κ/σ²)(1 − e^{−κt})^{−1} and μ > 0.

• Given X(0) = x_0, a constant,

E[ X(t) ] = x_0 e^{−κt} + μ ( 1 − e^{−κt} ),

Var[ X(t) ] = x_0 (σ²/κ) ( e^{−κt} − e^{−2κt} ) + μ (σ²/2κ) ( 1 − e^{−κt} )²,

for t ≥ 0.

^a William Feller (1906–1970) in 1951.
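The conditional mean E[ X(t) ] = x_0 e^{−κt} + μ(1 − e^{−κt}) can likewise be checked by simulation; flooring the argument of the square root at zero is one simple way to keep an Euler scheme well defined near the boundary. A sketch (NumPy assumed; parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
kappa, mu, sigma, x0 = 1.5, 0.04, 0.2, 0.02
t, n_steps, n_paths = 1.0, 500, 50_000
dt = t / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    # Euler step of dX = kappa(mu - X) dt + sigma*sqrt(X) dW, with the diffusion floored at 0.
    x = x + kappa * (mu - x) * dt + sigma * np.sqrt(np.maximum(x, 0.0)) * dw

print(x.mean(), x0 * np.exp(-kappa * t) + mu * (1 - np.exp(-kappa * t)))   # conditional mean
```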

(65)

Modeling Stock Prices

• The most popular stochastic model for stock prices has been the geometric Brownian motion,

dS/S = μ dt + σ dW.

• The logarithmic price X ≜ ln S follows

dX = ( μ − σ²/2 ) dt + σ dW

by Eq. (86) on p. 618.

(66)

Local-Volatility Models

• The deterministic-volatility model for the “smile” posits

dS/S = (r_t − q_t) dt + σ(S, t) dW,

where the instantaneous volatility σ(S, t) is called the local-volatility function.^a

– “The most popular model after Black-Scholes is a local volatility model as it is the only completely consistent volatility model.”

• A (weak) solution exists if Sσ(S, t) is continuous and grows at most linearly in S and t.^b

^a Derman & Kani (1994); Dupire (1994).

^b Skorokhod (1961); Achdou & Pironneau (2005).

(67)

Local-Volatility Models (continued)

• One needs to recover the local volatility surface σ(S, t) from the implied volatility surface.

• Theoretically,^a

σ(X, T)² = 2 [ ∂C/∂T + (r_T − q_T) X ∂C/∂X + q_T C ] / [ X² ∂²C/∂X² ].   (90)

– C is the call price at time t = 0 (today) with strike price X and time to maturity T.

– σ(X, T) is the local volatility that will prevail at future time T and stock price S_T = X.

^a Dupire (1994); Andersen & Brotherton-Ratcliffe (1998).
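Equation (90) can be exercised on a synthetic call-price surface: with a flat Black-Scholes surface, the recovered local volatility should be approximately the constant input volatility. A sketch (standard library only; the finite-difference steps and parameters are illustrative):

```python
from math import erf, exp, log, sqrt

def bs_call(s, k, r, q, sigma, t):
    """Black-Scholes call price, used only to build a synthetic price surface C(X, T)."""
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    d1 = (log(s / k) + (r - q + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * exp(-q * t) * Phi(d1) - k * exp(-r * t) * Phi(d2)

s0, r, q, vol = 100.0, 0.03, 0.01, 0.25
k, t = 95.0, 0.5                         # evaluate the local vol at strike 95, maturity 0.5
dk, dt = 0.5, 1e-3                       # finite-difference steps

c = lambda kk, tt: bs_call(s0, kk, r, q, vol, tt)
dC_dT = (c(k, t + dt) - c(k, t - dt)) / (2 * dt)
dC_dK = (c(k + dk, t) - c(k - dk, t)) / (2 * dk)
d2C_dK2 = (c(k + dk, t) - 2 * c(k, t) + c(k - dk, t)) / dk**2

# Dupire's formula (90): local variance from the call-price surface.
local_var = 2 * (dC_dT + (r - q) * k * dC_dK + q * c(k, t)) / (k**2 * d2C_dK2)
print(sqrt(local_var), vol)              # recovers approximately the flat input volatility
```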

(68)

Local-Volatility Models (continued)

• For more general models, this equation gives the expectation as seen from today, under the risk-neutral probability, of the instantaneous variance at time T given that S_T = X.^a

• In practice, the σ(S, t)2 derived by Dupire’s formula (90) may have spikes, vary wildly, or even be negative.

• The term ∂2C/∂X2 in the denominator often results in numerical instability.

^a Derman & Kani (1997); R. W. Lee (2001); Derman & M. B. Miller (2016).

(69)

Local-Volatility Models (continued)

• Denote the implied volatility surface by Σ(X, T ) and the local volatility surface by σ(S, t).

• The relation between Σ(X, T) and σ(X, T) is^a

σ(X, T)² = [ Σ² + 2Στ ( ∂Σ/∂T + (r_T − q_T) X ∂Σ/∂X ) ] / [ ( 1 − (Xy/Σ) ∂Σ/∂X )² + XΣτ ( ∂Σ/∂X − (XΣτ/4) (∂Σ/∂X)² + X ∂²Σ/∂X² ) ],

τ ≜ T − t,

y ≜ ln(X/S_t) + ∫_t^T (q_s − r_s) ds.

^a Andreasen (1996); Andersen & Brotherton-Ratcliffe (1998); Gatheral (2003); Wilmott (2006); Kamp (2009).

(70)

Local-Volatility Models (continued)

• Although this version may be more stable than Eq. (90) on p. 641, it is expected to suffer from the same problems.

• Small changes to the implied volatility surface may produce big changes to the local volatility surface.

(71)

Implied and Local Volatility Surfaces^a

[Figure: two surface plots. Left: the implied volatility surface, Implied Vol (%) against Strike ($) and Time to Maturity (yr). Right: the local volatility surface, Local Vol (%) against Stock ($) and Time (yr).]

^a Contributed by Mr. Lok, U Hou (D99922028) on April 5, 2014.

(72)

Local-Volatility Models (continued)

• In reality, option prices only exist for a finite set of maturities and strike prices.

• Hence interpolation and extrapolation may be needed to construct the volatility surface.^a

• But then some implied volatility surfaces generate option prices that allow arbitrage opportunities.^b

^a Doing it to the option prices produces worse results (Li, 2000/2001).

^b See Rebonato (2004) for an example.

(73)

Local-Volatility Models (concluded)

• There exist conditions for a set of option prices to be arbitrage-free.^a

• Some adopt parameterized implied volatility surfaces that guarantee freedom from certain arbitrages.^b

• For some vanilla equity options, the Black-Scholes model seems better than the local-volatility model in predictive power.^c

• The exact opposite is concluded for hedging in equity index markets!^d

^a Kahalé (2004); Davis & Hobson (2007).

^b Gatheral & Jacquier (2014).

^c Dumas, Fleming, & Whaley (1998).

^d Crépey (2004); Derman & M. B. Miller (2016).

(74)

Local-Volatility Models: Popularity

• Hirsa and Neftci (2014), “most traders and firms actively utilize this [local-volatility] model.”

• Bennett (2014), “Of all the four volatility regimes, [sticky local volatility] is arguably the most realistic and fairly prices skew.”

• Derman & M. B. Miller (2016), “Right or wrong, local volatility models have become popular and ubiquitous in modeling the smile.”

(75)

Implied Trees

• The trees for the local-volatility model are called implied trees.^a

• Their construction requires option prices at all strike prices and maturities.

– That is, an implied volatility surface.

• The local volatility model does not imply that the implied tree must combine.

• Exponential-sized implied trees exist.^b

^a Derman & Kani (1994); Dupire (1994); Rubinstein (1994).

^b Charalambous, Christofides, & Martzoukos (2007); Gong & Xu (2019).

(76)

Implied Trees (continued)

• How to construct a valid implied tree efficiently has been open for a long time.^a

– Reasons may include: noise and nonsynchrony in data, arbitrage opportunities in the smoothed and interpolated/extrapolated implied volatility surface, wrong model, wrong algorithms, nonlinearity, instability, etc.

• Inversion is an ill-posed numerical problem.^b

^a Rubinstein (1994); Derman & Kani (1994); Derman, Kani, & Chriss (1996); Jackwerth & Rubinstein (1996); Jackwerth (1997); Coleman, Kim, Li, & Verma (2000); Li (2000/2001); Rebonato (2004); Moriggia, Muzzioli, & Torricelli (2009).

^b Ayache, Henrotte, Nassar, & X. Wang (2004).

(77)

Implied Trees (continued)

• It is finally solved for separable local volatilities.^a

– The local-volatility function σ(S, t) is separable^b if σ(S, t) = σ1(S) σ2(t).

• A solution is also available for any upper- and lower-bounded σ.^c

^a Lok (D99922028) & Lyuu (2015, 2016, 2017).

^b Brace, Gątarek, & Musiela (1997); Rebonato (2004).

^c Lok (D99922028) & Lyuu (2016, 2017, 2020).

(78)

Implied Trees (concluded)^a

[Figure: an implied tree, with the root node marked.]

^a Plot supplied by Prof. Lok, U Hou (D99922028) on May 4, 2019.

(79)

Delta Hedge under the Local-Volatility Model

• Delta by the implied tree differs from delta by the Black-Scholes model’s implied volatility.

– The latter is by formula (46) or (47) (p. 343) after calculating the implied volatility from the same option price by the implied tree.

• Hence the profits and losses of their delta hedges will differ.

• The next plot shows the best 100 out of 100,000 random paths where the implied tree delta outperforms the Black-Scholes delta.^a

^a In terms of profits and losses. Plot supplied by Mr. Chiu, Tzu-Hsuan (R08723061) on November 20, 2021. We are hedging a long call.

(80)
(81)

Delta Hedge under the Local-Volatility Model (concluded)

• The next plot shows the best 100 out of 100,000 random paths where the Black-Scholes delta outperforms the implied tree delta.^a

^a Plot supplied by Mr. Chiu, Tzu-Hsuan (R08723061) on November 20, 2021. We are again hedging a long call.

(82)
