### Multivariate Contingent Claims

*• They depend on two or more underlying assets.*

*• The basket call on $m$ assets has the terminal payoff*
$$
\max\!\left( \sum_{i=1}^{m} \alpha_i S_i(\tau) - X,\; 0 \right),
$$
*where $\alpha_i$ is the percentage of asset $i$.*

*• Basket options are essentially options on a portfolio of*
stocks (or index options).^{a}

*• Option on the best of two risky assets and cash has a*
*terminal payoﬀ of max(S*_{1}*(τ ), S*_{2}*(τ ), X).*

aExcept that membership and weights do *not change for basket op-*
tions (Bennett, 2014).

### Multivariate Contingent Claims (concluded)

^{a}

| Name | Payoff |
| --- | --- |
| Exchange option | $\max(S_1(\tau) - S_2(\tau),\, 0)$ |
| Better-off option | $\max(S_1(\tau), \ldots, S_k(\tau),\, 0)$ |
| Worst-off option | $\min(S_1(\tau), \ldots, S_k(\tau),\, 0)$ |
| Binary maximum option | $I\{\, \max(S_1(\tau), \ldots, S_k(\tau)) > X \,\}$ |
| Maximum option | $\max(\max(S_1(\tau), \ldots, S_k(\tau)) - X,\, 0)$ |
| Minimum option | $\max(\min(S_1(\tau), \ldots, S_k(\tau)) - X,\, 0)$ |
| Spread option | $\max(S_1(\tau) - S_2(\tau) - X,\, 0)$ |
| Basket average option | $\max((S_1(\tau) + \cdots + S_k(\tau))/k - X,\, 0)$ |
| Multi-strike option | $\max(S_1(\tau) - X_1, \ldots, S_k(\tau) - X_k,\, 0)$ |
| Pyramid rainbow option | $\max(\lvert S_1(\tau) - X_1 \rvert + \cdots + \lvert S_k(\tau) - X_k \rvert - X,\, 0)$ |
| Madonna option | $\max\!\bigl(\sqrt{(S_1(\tau) - X_1)^2 + \cdots + (S_k(\tau) - X_k)^2} - X,\, 0\bigr)$ |

aLyuu & Teng (R91723054) (2011).

### Correlated Trinomial Model

^{a}

*• Two risky assets $S_1$ and $S_2$ follow*
$$
\frac{dS_i}{S_i} = r\, dt + \sigma_i\, dW_i
$$
*in a risk-neutral economy, $i = 1, 2$.*

*• Let*
$$
M_i \stackrel{\Delta}{=} e^{r \Delta t}, \qquad
V_i \stackrel{\Delta}{=} M_i^2 \left( e^{\sigma_i^2 \Delta t} - 1 \right).
$$

**– $S_i M_i$ is the mean of $S_i$ at time $\Delta t$.**

**– $S_i^2 V_i$ is the variance of $S_i$ at time $\Delta t$.**

aBoyle, Evnine, & Gibbs (1989).

### Correlated Trinomial Model (continued)

*• The value of $S_1 S_2$ at time $\Delta t$ has a joint lognormal*
*distribution with mean $S_1 S_2 M_1 M_2\, e^{\rho \sigma_1 \sigma_2 \Delta t}$, where $\rho$ is*
*the correlation between $dW_1$ and $dW_2$.*

*• Next match the 1st and 2nd moments of the*

approximating discrete distribution to those of the continuous counterpart.

*• At time Δt from now, there are 5 distinct outcomes.*

### Correlated Trinomial Model (continued)

*• The ﬁve-point probability distribution of the asset prices*
is

Probability Asset 1 Asset 2
*p*_{1} *S*_{1}*u*_{1} *S*_{2}*u*_{2}
*p*_{2} *S*_{1}*u*_{1} *S*_{2}*d*_{2}
*p*_{3} *S*_{1}*d*_{1} *S*_{2}*d*_{2}
*p*_{4} *S*_{1}*d*_{1} *S*_{2}*u*_{2}

*p*_{5} *S*_{1} *S*_{2}

*• As usual, impose u*_{i}*d** _{i}* = 1.

### Correlated Trinomial Model (continued)

*• The probabilities must sum to one, and the means must*
be matched:

$$
\begin{aligned}
1 &= p_1 + p_2 + p_3 + p_4 + p_5, \\
S_1 M_1 &= (p_1 + p_2)\, S_1 u_1 + p_5 S_1 + (p_3 + p_4)\, S_1 d_1, \\
S_2 M_2 &= (p_1 + p_4)\, S_2 u_2 + p_5 S_2 + (p_2 + p_3)\, S_2 d_2.
\end{aligned}
$$

### Correlated Trinomial Model (concluded)

*• Let $R \stackrel{\Delta}{=} M_1 M_2\, e^{\rho \sigma_1 \sigma_2 \Delta t}$.*

*• Match the variances and covariance:*
$$
\begin{aligned}
S_1^2 V_1 &= (p_1 + p_2)\left[ (S_1 u_1)^2 - (S_1 M_1)^2 \right]
+ p_5\left[ S_1^2 - (S_1 M_1)^2 \right] \\
&\quad + (p_3 + p_4)\left[ (S_1 d_1)^2 - (S_1 M_1)^2 \right], \\
S_2^2 V_2 &= (p_1 + p_4)\left[ (S_2 u_2)^2 - (S_2 M_2)^2 \right]
+ p_5\left[ S_2^2 - (S_2 M_2)^2 \right] \\
&\quad + (p_2 + p_3)\left[ (S_2 d_2)^2 - (S_2 M_2)^2 \right], \\
S_1 S_2 R &= (p_1 u_1 u_2 + p_2 u_1 d_2 + p_3 d_1 d_2 + p_4 d_1 u_2 + p_5)\, S_1 S_2.
\end{aligned}
$$

*• The solutions appear on p. 246 of the textbook.*

### Correlated Trinomial Model Simpliﬁed

^{a}

*• Let $\mu_i' \stackrel{\Delta}{=} r - \sigma_i^2/2$ and $u_i \stackrel{\Delta}{=} e^{\lambda \sigma_i \sqrt{\Delta t}}$ for $i = 1, 2$.*

*• The following simpler scheme is often good enough:*
$$
\begin{aligned}
p_1 &= \frac{1}{4}\left[ \frac{1}{\lambda^2} + \frac{\sqrt{\Delta t}}{\lambda}\left( \frac{\mu_1'}{\sigma_1} + \frac{\mu_2'}{\sigma_2} \right) + \frac{\rho}{\lambda^2} \right], \\
p_2 &= \frac{1}{4}\left[ \frac{1}{\lambda^2} + \frac{\sqrt{\Delta t}}{\lambda}\left( \frac{\mu_1'}{\sigma_1} - \frac{\mu_2'}{\sigma_2} \right) - \frac{\rho}{\lambda^2} \right], \\
p_3 &= \frac{1}{4}\left[ \frac{1}{\lambda^2} + \frac{\sqrt{\Delta t}}{\lambda}\left( -\frac{\mu_1'}{\sigma_1} - \frac{\mu_2'}{\sigma_2} \right) + \frac{\rho}{\lambda^2} \right], \\
p_4 &= \frac{1}{4}\left[ \frac{1}{\lambda^2} + \frac{\sqrt{\Delta t}}{\lambda}\left( -\frac{\mu_1'}{\sigma_1} + \frac{\mu_2'}{\sigma_2} \right) - \frac{\rho}{\lambda^2} \right], \\
p_5 &= 1 - \frac{1}{\lambda^2}.
\end{aligned}
$$

aMadan, Milne, & Shefrin (1989).
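As a concrete sketch (not from the text; the function name, default $\lambda$, and the parameter values are illustrative), the five branch probabilities above can be computed and validated in a few lines of Python:

```python
import math

def correlated_trinomial_probs(r, sigma1, sigma2, rho, dt, lam=1.2):
    """Branch probabilities p1..p5 of the simplified correlated
    trinomial model; lam is the stretch parameter lambda >= 1."""
    mu1 = r - sigma1**2 / 2   # risk-neutral log drift of asset 1
    mu2 = r - sigma2**2 / 2   # risk-neutral log drift of asset 2
    a = math.sqrt(dt) / lam
    p1 = 0.25 * (1 / lam**2 + a * (mu1 / sigma1 + mu2 / sigma2) + rho / lam**2)
    p2 = 0.25 * (1 / lam**2 + a * (mu1 / sigma1 - mu2 / sigma2) - rho / lam**2)
    p3 = 0.25 * (1 / lam**2 + a * (-mu1 / sigma1 - mu2 / sigma2) + rho / lam**2)
    p4 = 0.25 * (1 / lam**2 + a * (-mu1 / sigma1 + mu2 / sigma2) - rho / lam**2)
    p5 = 1 - 1 / lam**2
    return p1, p2, p3, p4, p5

probs = correlated_trinomial_probs(0.05, 0.3, 0.2, 0.5, 1 / 252)
```

The four drift terms cancel pairwise and the $\rho$ terms cancel pairwise, so the five probabilities always sum to one; whether each lies in $[0, 1]$ is exactly the condition of the next page.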

### Correlated Trinomial Model Simpliﬁed (continued)

*• All of the probabilities lie between 0 and 1 if and only if*
$$
-1 + \lambda \sqrt{\Delta t}\, \left| \frac{\mu_1'}{\sigma_1} + \frac{\mu_2'}{\sigma_2} \right|
\;\le\; \rho \;\le\;
1 - \lambda \sqrt{\Delta t}\, \left| \frac{\mu_1'}{\sigma_1} - \frac{\mu_2'}{\sigma_2} \right|, \qquad (116)
$$
$$
1 \le \lambda. \qquad (117)
$$

*• We call a multivariate tree (correlation-)optimal if it*
guarantees valid probabilities as long as
$$
-1 + O(\sqrt{\Delta t}\,) < \rho < 1 - O(\sqrt{\Delta t}\,),
$$
such as the above one.^{a}

aW. Kao (R98922093) (2011); W. Kao (R98922093), Lyuu, & Wen (D94922003) (2014).

### Correlated Trinomial Model Simpliﬁed (continued)

*• But this model cannot price 2-asset 2-barrier options*
accurately.^{a}

*• Few multivariate trees are both optimal and able to*
handle multiple barriers.^{b}

*• An alternative is to use orthogonalization.*^{c}

aSee Y. Chang (B89704039, R93922034), Hsu (R7526001, D89922012),

& Lyuu (2006); W. Kao (R98922093), Lyuu, & Wen (D94922003) (2014) for solutions.

bSee W. Kao (R98922093), Lyuu, & Wen (D94922003) (2014) for an exception.

cHull & White (1990); Dai (B82506025, R86526008, D8852600), C.

Wang (F95922018), & Lyuu (2013).

### Correlated Trinomial Model Simpliﬁed (concluded)

*• Suppose we allow each asset’s volatility to be a function*
of time.^{a}

*• There are k assets.*

*• Can you build an optimal multivariate tree that can*
*handle two barriers on each asset in time $O(n^{k+1})$?*^{b}

aRecall p. 315.

bSee Y. Zhang (R05922052) (2019) for a complete solution.

### Extrapolation

*• It is a method to speed up numerical convergence.*

*• Say $f(n)$ converges to an unknown limit $f$ at rate of $1/n$:*
$$
f(n) = f + \frac{c}{n} + o\!\left( \frac{1}{n} \right). \qquad (118)
$$

*• Assume c is an unknown constant independent of n.*

**– Convergence is basically monotonic and smooth.**

### Extrapolation (concluded)

*• From two approximations $f(n_1)$ and $f(n_2)$, and*
ignoring the smaller terms,
$$
f(n_1) = f + \frac{c}{n_1}, \qquad
f(n_2) = f + \frac{c}{n_2}.
$$

*• A better approximation to the desired $f$ is*
$$
f = \frac{n_1 f(n_1) - n_2 f(n_2)}{n_1 - n_2}. \qquad (119)
$$

*• This estimate should converge faster than $1/n$.*^{a}

*• The Richardson extrapolation uses $n_2 = 2n_1$.*

aIt is identical to the forward rate formula (22) on p. 150!
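Eq. (119) is a one-liner in code. A minimal Python sketch (the function name is ours; the sanity check uses a made-up sequence with limit 1):

```python
def extrapolate(f1, n1, f2, n2):
    """Eq. (119): cancel the c/n term in f(n) = f + c/n + o(1/n),
    given the two approximations f1 = f(n1) and f2 = f(n2)."""
    return (n1 * f1 - n2 * f2) / (n1 - n2)

# Sanity check on f(n) = 1 + 3/n, whose limit is 1.
f = lambda n: 1 + 3 / n
limit = extrapolate(f(10), 10, f(20), 20)
```

For Richardson extrapolation, call it with `n2 = 2 * n1`.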

### Improving BOPM with Extrapolation

*• Consider standard European options.*

*• Denote the option value under BOPM using n time*
*periods by f (n).*

*• It is known that BOPM convergences at the rate of*
*1/n,*^{a} consistent with Eq. (118) on p. 830.

*• The plots on p. 306 (redrawn on next page) show that*
*convergence to the true option value oscillates with n.*

*• Extrapolation is inapplicable at this stage.*

aL. Chang & Palmer (2007); F. Diener & M. Diener (2004).

[Two plots of call value against the number of periods n (left: n up to 35, call values near 11.5–13; right: n up to 60, call values near 15.1–15.5), both showing convergence that oscillates with n.]

### Improving BOPM with Extrapolation (concluded)

*• Take the at-the-money option in the left plot on p. 833.*

*• The sequence with odd n turns out to be monotonic*
and smooth (see the left plot on p. 835).^{a}

*• Apply extrapolation (119) on p. 831 with n*2 *= n*_{1} + 2,
*where n*_{1} is odd.

*• Result is shown in the right plot on p. 835.*

*• The convergence rate is amazing.*

*• See Exercise 9.3.8 (p. 111) of the text for ideas in the*
general case.

aThis can be proved (L. Chang & Palmer, 2007; F. Diener & M.

Diener, 2004).

[Left plot: call value against odd n only, a monotonic and smooth sequence. Right plot: the extrapolated call values, converging rapidly to about 12.11–12.17.]

*Numerical Methods*

All science is dominated by the idea of approximation.

— Bertrand Russell

### Finite-Diﬀerence Methods

*• Place a grid of points on the space over which the*
desired function takes value.

*• Then approximate the function value at each of these*
points (p. 839).

*• Solve the equation numerically by introducing diﬀerence*
equations in place of derivatives.

[Figure: a grid of points over time (0 to 0.25) and the underlying price (80 to 115).]

### Example: Poisson’s Equation

*• It is ∂*^{2}*θ/∂x*^{2} *+ ∂*^{2}*θ/∂y*^{2} = *−ρ(x, y), which describes the*
electrostatic ﬁeld.

*• Replace second derivatives with ﬁnite diﬀerences*
through central diﬀerence.

*• Introduce evenly spaced grid points with distance of Δx*
*along the x axis and Δy along the y axis.*

*• The finite-difference form is*
$$
-\rho(x_i, y_j) =
\frac{\theta(x_{i+1}, y_j) - 2\theta(x_i, y_j) + \theta(x_{i-1}, y_j)}{(\Delta x)^2}
+ \frac{\theta(x_i, y_{j+1}) - 2\theta(x_i, y_j) + \theta(x_i, y_{j-1})}{(\Delta y)^2}.
$$

### Example: Poisson’s Equation (concluded)

*• In the above, $\Delta x \stackrel{\Delta}{=} x_i - x_{i-1}$ and $\Delta y \stackrel{\Delta}{=} y_j - y_{j-1}$ for*
*$i, j = 1, 2, \ldots.$*

*• When the grid points are evenly spaced in both axes so*
*that $\Delta x = \Delta y = h$, the difference equation becomes*
$$
-h^2 \rho(x_i, y_j) = \theta(x_{i+1}, y_j) + \theta(x_{i-1}, y_j)
+ \theta(x_i, y_{j+1}) + \theta(x_i, y_{j-1}) - 4\theta(x_i, y_j).
$$

*• Given boundary values, we can solve for the $\theta(x_i, y_j)$s*
*within the square $[\,\pm L, \pm L\,]$.*

*• From now on, $\theta_{i,j}$ will denote the finite-difference*
*approximation to the exact $\theta(x_i, y_j)$.*
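As an illustrative sketch (not from the text; the function name, the Jacobi scheme, and the zero boundary values are our assumptions), the evenly spaced difference equation can be solved by repeatedly setting each interior θ to the average of its four neighbors plus $h^2 \rho / 4$:

```python
def jacobi_poisson(rho, L, m, sweeps=500):
    """Jacobi iteration for theta_xx + theta_yy = -rho(x, y) on the
    square [-L, L]^2 with zero boundary values, on an (m+2)x(m+2) grid."""
    h = 2 * L / (m + 1)
    theta = [[0.0] * (m + 2) for _ in range(m + 2)]
    for _ in range(sweeps):
        new = [row[:] for row in theta]
        for i in range(1, m + 1):
            for j in range(1, m + 1):
                x, y = -L + i * h, -L + j * h
                # The rearranged difference equation:
                # theta(i,j) = (sum of 4 neighbors + h^2 rho) / 4.
                new[i][j] = (theta[i + 1][j] + theta[i - 1][j]
                             + theta[i][j + 1] + theta[i][j - 1]
                             + h * h * rho(x, y)) / 4
        theta = new
    return theta
```

Jacobi is the simplest (and slowest) way to solve the resulting linear system; Gauss-Seidel, SOR, or a direct sparse solve are standard alternatives.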

### Explicit Methods

*• Consider the diffusion equation*^{a}
*$D\,(\partial^2 \theta/\partial x^2) - (\partial \theta/\partial t) = 0$, $D > 0$.*

*• Use evenly spaced grid points $(x_i, t_j)$ with distances*
*$\Delta x$ and $\Delta t$, where $\Delta x \stackrel{\Delta}{=} x_{i+1} - x_i$ and $\Delta t \stackrel{\Delta}{=} t_{j+1} - t_j$.*

*• Employ central difference for the second derivative and*
forward difference for the time derivative to obtain
$$
\left. \frac{\partial \theta(x, t)}{\partial t} \right|_{t = t_j}
= \frac{\theta(x, t_{j+1}) - \theta(x, t_j)}{\Delta t} + \cdots, \qquad (120)
$$
$$
\left. \frac{\partial^2 \theta(x, t)}{\partial x^2} \right|_{x = x_i}
= \frac{\theta(x_{i+1}, t) - 2\theta(x_i, t) + \theta(x_{i-1}, t)}{(\Delta x)^2} + \cdots. \qquad (121)
$$

aIt is a parabolic partial diﬀerential equation.

### Explicit Methods (continued)

*• Next, assemble Eqs. (120) and (121) into a single*
*equation at (x*_{i}*, t** _{j}*).

*• But we need to decide how to evaluate x in the ﬁrst*
*equation and t in the second.*

*• Since central diﬀerence around x**i* is used in Eq. (121),
*we might as well use x*_{i}*for x in Eq. (120).*

*• Two choices are possible for t in Eq. (121).*

*• The ﬁrst choice uses t = t** _{j}* to yield the following
ﬁnite-diﬀerence equation,

$$
\frac{\theta_{i,j+1} - \theta_{i,j}}{\Delta t}
= D\, \frac{\theta_{i+1,j} - 2\theta_{i,j} + \theta_{i-1,j}}{(\Delta x)^2}. \qquad (122)
$$

### Explicit Methods (continued)

*• The stencil of grid points involves four values, $\theta_{i,j+1}$,*
*$\theta_{i,j}$, $\theta_{i+1,j}$, and $\theta_{i-1,j}$.*

*• Rearrange Eq. (122) on p. 843 as*
$$
\theta_{i,j+1} = \frac{D \Delta t}{(\Delta x)^2}\, \theta_{i+1,j}
+ \left( 1 - \frac{2 D \Delta t}{(\Delta x)^2} \right) \theta_{i,j}
+ \frac{D \Delta t}{(\Delta x)^2}\, \theta_{i-1,j}. \qquad (123)
$$

*• We can calculate $\theta_{i,j+1}$ from $\theta_{i,j}$, $\theta_{i+1,j}$, $\theta_{i-1,j}$, at the*
*previous time $t_j$ (see exhibit (a) on next page).*

### Stencils

[Figure: (a) the explicit stencil, linking the point $(x_i, t_{j+1})$ to the three points $x_{i-1}, x_i, x_{i+1}$ at time $t_j$; (b) the implicit stencil, linking $(x_i, t_j)$ to the three points at time $t_{j+1}$.]

### Explicit Methods (concluded)

*• Starting from the initial conditions at t*_{0}, that is,
*θ*_{i,0}*= θ(x*_{i}*, t*_{0}*), i = 1, 2, . . . , we calculate*

*θ*_{i,1}*,* *i = 1, 2, . . . .*

*• And then*

*θ*_{i,2}*,* *i = 1, 2, . . . .*

*• And so on.*

### Stability

*• The explicit method is numerically unstable unless*
*Δt* *≤ (Δx)*^{2}*/(2D).*

**– A numerical method is unstable if the solution is**
highly sensitive to changes in initial conditions.

*• The stability condition may lead to high running times*
and memory requirements.

*• For instance, halving Δx would imply quadrupling*
*(Δt)** ^{−1}*, resulting in a running time 8 times as much.

### Explicit Method and Trinomial Tree

*• Recall Eq. (123) on p. 844:*
$$
\theta_{i,j+1} = \frac{D \Delta t}{(\Delta x)^2}\, \theta_{i+1,j}
+ \left( 1 - \frac{2 D \Delta t}{(\Delta x)^2} \right) \theta_{i,j}
+ \frac{D \Delta t}{(\Delta x)^2}\, \theta_{i-1,j}.
$$

*• When the stability condition is satisﬁed, the three*
*coeﬃcients for θ*_{i+1,j}*, θ*_{i,j}*, and θ** _{i−1,j}* all lie between
zero and one and sum to one.

*• They can be interpreted as probabilities.*

*• So the ﬁnite-diﬀerence equation becomes identical to*
backward induction on trinomial trees!

### Explicit Method and Trinomial Tree (concluded)

*• The freedom in choosing Δx corresponds to similar*
freedom in the construction of trinomial trees.

*• The explicit ﬁnite-diﬀerence equation is also identical to*
backward induction on a binomial tree.^{a}

**– Let the binomial tree take 2 steps each of length**
*Δt/2.*

**– It is now a trinomial tree.**

aHilliard (2014).
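To make the correspondence concrete, here is a minimal Python sketch (ours, with illustrative parameters) of one explicit time step, Eq. (123); when the stability condition $\Delta t \le (\Delta x)^2/(2D)$ holds, the three coefficients behave exactly like trinomial branch probabilities:

```python
def explicit_step(theta, D, dx, dt):
    """One explicit time step, Eq. (123). The weights a, 1 - 2a, a are
    nonnegative and sum to one under the stability condition -- the
    'probabilities' of the trinomial-tree view. Boundary values fixed."""
    assert dt <= dx**2 / (2 * D), "stability condition violated"
    a = D * dt / dx**2
    return ([theta[0]]
            + [a * theta[i + 1] + (1 - 2 * a) * theta[i] + a * theta[i - 1]
               for i in range(1, len(theta) - 1)]
            + [theta[-1]])
```

Because the weights sum to one, a constant grid line is reproduced unchanged, just as backward induction on a trinomial tree preserves a constant payoff.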

### Implicit Methods

*• Suppose we use t = t** _{j+1}* in Eq. (121) on p. 842 instead.

*• The finite-difference equation becomes*
$$
\frac{\theta_{i,j+1} - \theta_{i,j}}{\Delta t}
= D\, \frac{\theta_{i+1,j+1} - 2\theta_{i,j+1} + \theta_{i-1,j+1}}{(\Delta x)^2}. \qquad (124)
$$

*• The stencil involves θ*_{i,j}*, θ*_{i,j+1}*, θ*_{i+1,j+1}*, and θ** _{i−1,j+1}*.

*• This method is now implicit:*

**– The value of any one of the three quantities at t*** _{j+1}*
cannot be calculated unless the other two are known.

**– See exhibit (b) on p. 845.**

### Implicit Methods (continued)

*• Equation (124) can be rearranged as*
$$
\theta_{i-1,j+1} - (2 + \gamma)\, \theta_{i,j+1} + \theta_{i+1,j+1} = -\gamma \theta_{i,j},
$$
*where $\gamma \stackrel{\Delta}{=} (\Delta x)^2 / (D \Delta t)$.*

*• This equation is unconditionally stable.*

*• Suppose the boundary conditions are given at x = x*0

*and x = x** _{N +1}*.

*• After θ*_{i,j}*has been calculated for i = 1, 2, . . . , N , the*
*values of θ*_{i,j+1}*at time t** _{j+1}* can be computed as the
solution to the following tridiagonal linear system,

### Implicit Methods (continued)

$$
\begin{bmatrix}
a & 1 & 0 & \cdots & \cdots & \cdots & 0 \\
1 & a & 1 & 0 & \cdots & \cdots & 0 \\
0 & 1 & a & 1 & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \cdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \cdots & 0 & 1 & a & 1 \\
0 & \cdots & \cdots & \cdots & 0 & 1 & a
\end{bmatrix}
\begin{bmatrix}
\theta_{1,j+1} \\ \theta_{2,j+1} \\ \theta_{3,j+1} \\ \vdots \\ \vdots \\ \theta_{N-1,j+1} \\ \theta_{N,j+1}
\end{bmatrix}
=
\begin{bmatrix}
-\gamma \theta_{1,j} - \theta_{0,j+1} \\
-\gamma \theta_{2,j} \\
-\gamma \theta_{3,j} \\
\vdots \\
\vdots \\
-\gamma \theta_{N-1,j} \\
-\gamma \theta_{N,j} - \theta_{N+1,j+1}
\end{bmatrix},
$$
*where $a \stackrel{\Delta}{=} -2 - \gamma$.*

### Implicit Methods (concluded)

*• Tridiagonal systems can be solved in O(N) time and*
*O(N ) space.*

**– Never invert a matrix to solve a tridiagonal system.**

*• The matrix above is nonsingular when γ ≥ 0.*

**– A square matrix is nonsingular if its inverse exists.**
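For concreteness, here is a sketch of the standard Thomas algorithm (the name and array conventions are ours; the text itself does not prescribe an implementation), which solves such a tridiagonal system in $O(N)$ time and space without ever forming a matrix inverse:

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm: forward elimination then back substitution.
    sub[i] sits below diag[i], sup[i] above it (sub[0], sup[-1] unused)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                     # forward sweep
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

For the implicit method above, every `diag[i]` equals $a = -2 - \gamma$ and every off-diagonal entry equals 1.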

### Crank-Nicolson Method

*• Take the average of explicit method (122) on p. 843 and*
implicit method (124) on p. 850:

$$
\frac{\theta_{i,j+1} - \theta_{i,j}}{\Delta t}
= \frac{1}{2} \left[
D\, \frac{\theta_{i+1,j} - 2\theta_{i,j} + \theta_{i-1,j}}{(\Delta x)^2}
+ D\, \frac{\theta_{i+1,j+1} - 2\theta_{i,j+1} + \theta_{i-1,j+1}}{(\Delta x)^2}
\right].
$$

*• After rearrangement,*
$$
\gamma \theta_{i,j+1} - \frac{\theta_{i+1,j+1} - 2\theta_{i,j+1} + \theta_{i-1,j+1}}{2}
= \gamma \theta_{i,j} + \frac{\theta_{i+1,j} - 2\theta_{i,j} + \theta_{i-1,j}}{2}.
$$

*• This is an unconditionally stable implicit method with*
excellent rates of convergence.

### Stencil

[Figure: the Crank-Nicolson stencil, involving the grid points $x_{i-1}$, $x_i$, $x_{i+1}$ at both times $t_j$ and $t_{j+1}$.]

### Numerically Solving the Black-Scholes PDE (94) on p. 685

*• See text.*

*• Brennan and Schwartz (1978) analyze the stability of*
the implicit method.

### Monte Carlo Simulation

^{a}

*• Monte Carlo simulation is a sampling scheme.*

*• In many important applications within ﬁnance and*
without, Monte Carlo is one of the few feasible tools.

*• When the time evolution of a stochastic process is not*
easy to describe analytically, Monte Carlo may very well
be the only strategy that succeeds consistently.

aA top 10 algorithm (Dongarra & Sullivan, 2000).

### The Big Idea

*• Assume X*1*, X*_{2}*, . . . , X** _{n}* have a joint distribution.

*• $\theta \stackrel{\Delta}{=} E[\, g(X_1, X_2, \ldots, X_n)\, ]$ for some function $g$ is*
desired.

*• We generate*
$$
\left( x_1^{(i)}, x_2^{(i)}, \ldots, x_n^{(i)} \right), \qquad 1 \le i \le N,
$$
independently with the same joint distribution as
*$(X_1, X_2, \ldots, X_n)$.*

*• Output $\overline{Y} \stackrel{\Delta}{=} (1/N) \sum_{i=1}^{N} Y_i$, where*
$$
Y_i \stackrel{\Delta}{=} g\left( x_1^{(i)}, x_2^{(i)}, \ldots, x_n^{(i)} \right).
$$

### The Big Idea (concluded)

*• Y*_{1}*, Y*_{2}*, . . . , Y** _{N}* are independent and identically
distributed random variables.

*• Each $Y_i$ has the same distribution as $Y \stackrel{\Delta}{=} g(X_1, X_2, \ldots, X_n)$.*

*• Since the average of these $N$ random variables, $\overline{Y}$,*
*satisfies $E[\, \overline{Y} \,] = \theta$, it can be used to estimate $\theta$.*

*• The strong law of large numbers says that this*
procedure converges almost surely.

*• The number of replications (or independent trials), N, is*
called the sample size.
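The scheme above fits in a few lines of Python (the helper names are ours; the example estimates $E[X^2] = 1$ for a standard normal $X$):

```python
import random

def monte_carlo(g, sampler, N):
    """Estimate theta = E[g(X1, ..., Xn)] as the average of N
    independent replications Y_i = g(x1, ..., xn)."""
    return sum(g(*sampler()) for _ in range(N)) / N

# Example: estimate E[X^2] = 1 for X ~ N(0, 1), with N = 100,000.
rng = random.Random(1)
est = monte_carlo(lambda x: x * x, lambda: (rng.gauss(0, 1),), 100_000)
```

`sampler` plays the role of the joint distribution of $(X_1, \ldots, X_n)$ and `g` the payoff-like functional.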

### Accuracy

*• The Monte Carlo estimate and true value may diﬀer*
owing to two reasons:

1. Sampling variation.

2. The discreteness of the sample paths.^{a}

*• The ﬁrst can be controlled by the number of replications.*

*• The second can be controlled by the number of*
observations along the sample path.

aThis may not be an issue if the ﬁnancial derivative only requires
discrete sampling along time, such as the *discrete barrier option.*

### Accuracy and Number of Replications

*• The statistical error of the sample mean $\overline{Y}$ of the*
*random variable $Y$ is of order $1/\sqrt{N}$.*

**– Because $\mathrm{Var}[\, \overline{Y} \,] = \mathrm{Var}[\, Y \,]/N$.**

*• In fact, this convergence rate is asymptotically optimal.*^{a}

*• So the variance of the estimator Y can be reduced by a*
*factor of 1/N by doing N times as much work.*

*• This is amazing because the same order of convergence*
*holds independently of the dimension n.*

aThe Berry-Esseen theorem.

### Accuracy and Number of Replications (concluded)

*• In contrast, classic numerical integration schemes have*
*an error bound of O(N*^{−c/n}*) for some constant c > 0.*

*• The required number of evaluations thus grows*

*exponentially in n to achieve a given level of accuracy.*

**– The curse of dimensionality.**

*• The Monte Carlo method is more eﬃcient than*

*alternative procedures for multivariate derivatives for n*
large.

### Monte Carlo Option Pricing

*• For the pricing of European options on a*

dividend-paying stock, we may proceed as follows.

*• Assume*
$$
\frac{dS}{S} = \mu\, dt + \sigma\, dW.
$$

*• Stock prices $S_1, S_2, S_3, \ldots$ at times $\Delta t, 2\Delta t, 3\Delta t, \ldots$*
can be generated via
$$
S_{i+1} = S_i\, e^{(\mu - \sigma^2/2)\, \Delta t + \sigma \sqrt{\Delta t}\, \xi},
\qquad \xi \sim N(0, 1), \qquad (125)
$$
by Eq. (87) on p. 619.

### Monte Carlo Option Pricing (continued)

*• If we discretize $dS/S = \mu\, dt + \sigma\, dW$ directly, we will*
obtain
$$
S_{i+1} = S_i + S_i \mu\, \Delta t + S_i \sigma \sqrt{\Delta t}\, \xi.
$$

*• But this is locally normally distributed, not lognormally,*
hence biased.^{a}

*• In practice, this is not expected to be a major problem*
*as long as Δt is suﬃciently small.*

aContributed by Mr. Tai, Hui-Chin (R97723028) on April 22, 2009.

### Monte Carlo Option Pricing (continued)

Non-dividend-paying stock prices in a risk-neutral economy
*can be generated by setting μ = r and Δt = T .*

1: *C := 0;* *{Accumulated terminal option value.}*

2: **for i = 1, 2, 3, . . . , N do**

3: *P := S × e^{(r−σ^2/2) T + σ√T ξ}, ξ ∼ N(0, 1);*

4: *C := C + max(P* *− X, 0);*

5: **end for**

6: *return Ce*^{−rT}*/N ;*
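The pseudocode above translates directly into Python; this sketch (function name ours, parameters illustrative) assumes the same inputs S, X, r, σ, and T:

```python
import math
import random

def mc_european_call(S, X, r, sigma, T, N, seed=0):
    """Monte Carlo price of a European call in a risk-neutral economy:
    one terminal price per path, by Eq. (125) with mu = r and dt = T."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        xi = rng.gauss(0, 1)
        P = S * math.exp((r - sigma**2 / 2) * T + sigma * math.sqrt(T) * xi)
        total += max(P - X, 0.0)   # accumulate the terminal payoff
    return math.exp(-r * T) * total / N

price = mc_european_call(100, 100, 0.05, 0.2, 1.0, 100_000)
```

With these parameters the estimate should fall close to the Black-Scholes value of roughly 10.45.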

### Monte Carlo Option Pricing (concluded)

Pricing Asian options is also easy.

1: *C := 0;*

2: **for i = 1, 2, 3, . . . , N do**

3: *P := S; M := S;*

4: **for j = 1, 2, 3, . . . , n do**

5: *P := P × e^{(r−σ^2/2)(T/n) + σ√(T/n) ξ};*

6: *M := M + P ;*

7: **end for**

8: *C := C + max(M/(n + 1)* *− X, 0);*

9: **end for**

10: *return Ce*^{−rT}*/N ;*
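Likewise for the Asian option; this sketch (function name ours, parameters illustrative) mirrors the pseudocode, with the running sum M including the initial price S, hence the division by n + 1:

```python
import math
import random

def mc_asian_call(S, X, r, sigma, T, n, N, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call with
    n sampling dates after time 0."""
    rng = random.Random(seed)
    dt = T / n
    total = 0.0
    for _ in range(N):
        P, M = S, S               # running price and running sum
        for _ in range(n):
            P *= math.exp((r - sigma**2 / 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0, 1))
            M += P
        total += max(M / (n + 1) - X, 0.0)
    return math.exp(-r * T) * total / N

price = mc_asian_call(100, 100, 0.05, 0.2, 1.0, 50, 20_000)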

### How about American Options?

*• Standard Monte Carlo simulation is inappropriate for*
American options because of early exercise.

**– Given a sample path $S_0, S_1, \ldots, S_n$, how to decide**
**which $S_i$ is an early-exercise point?**

**– What is the option price at each $S_i$ if the option is**
not exercised?

*• It is diﬃcult to determine the early-exercise point based*
on one single path.

*• But Monte Carlo simulation can be modiﬁed to price*
American options with small biases.^{a}

aLongstaﬀ & Schwartz (2001). See pp. 931ﬀ.

### Obtaining Proﬁt and Loss of Delta Hedge

^{a}

*• Proﬁt and loss of delta hedge should be calculated under*
the real-world probability measure.^{b}

*• So stock prices should be sampled from*
$$
\frac{dS}{S} = \mu\, dt + \sigma\, dW.
$$

*• Suppose backward induction on a tree under the*
risk-neutral measure is performed for the delta.^{c}

aContributed by Mr. Lu, Zheng-Liang (D00922011) on August 12, 2021.

bRecall p. 711.

cBecause, say, no closed-form formulas are available for the delta.

### Obtaining Proﬁt and Loss of Delta Hedge (concluded)

*• Note that one needs a delta per stock price.*

*• So Nn trees are needed for the distribution of the proﬁt*
*and loss from N paths with n + 1 stock prices per path.*

*• These are a lot of trees!*

*• How to do it eﬃciently?*^{a}

aHint: Eq. (43) on p. 299.

### Delta and Common Random Numbers

*• In estimating delta, it is natural to start with the*
finite-difference estimate
$$
e^{-r\tau}\, \frac{E[\, P(S + \epsilon)\, ] - E[\, P(S - \epsilon)\, ]}{2\epsilon}.
$$

**– $P(x)$ is the terminal payoff of the derivative security**
*when the underlying asset's initial price equals $x$.*

*• Use simulation to estimate $E[\, P(S + \epsilon)\, ]$ first.*

*• Use another simulation to estimate $E[\, P(S - \epsilon)\, ]$.*

*• Finally, apply the formula to approximate the delta.*

*• This is also called the bump-and-revalue method.*

### Delta and Common Random Numbers (concluded)

*• This method is not recommended because of its high*
variance.

*• A much better approach is to use common random*
numbers to lower the variance:
$$
e^{-r\tau}\, E\!\left[ \frac{P(S + \epsilon) - P(S - \epsilon)}{2\epsilon} \right].
$$

*• Here, the same random numbers are used for $P(S + \epsilon)$*
*and $P(S - \epsilon)$.*

*• This holds for gamma and cross gamma.*^{a}

aFor multivariate derivatives.
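A sketch (function name ours, parameters illustrative) of the common-random-numbers estimator for the delta of a European call; the same normal draw ξ drives both bumped initial prices:

```python
import math
import random

def mc_delta_crn(S, X, r, sigma, T, eps, N, seed=0):
    """Bump-and-revalue delta with common random numbers: one shared
    draw xi per replication prices both P(S + eps) and P(S - eps)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        xi = rng.gauss(0, 1)
        growth = math.exp((r - sigma**2 / 2) * T + sigma * math.sqrt(T) * xi)
        up = max((S + eps) * growth - X, 0.0)
        down = max((S - eps) * growth - X, 0.0)
        total += (up - down) / (2 * eps)
    return math.exp(-r * T) * total / N

delta = mc_delta_crn(100.0, 100.0, 0.05, 0.2, 1.0, 1.0, 50_000)
```

With these parameters the estimate should land near the Black-Scholes delta $N(d_1) \approx 0.64$; rerunning with independent draws for the two terms shows a far noisier estimate.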

### Problems with the Bump-and-Revalue Method

*• Consider the binary option with payoff*
$$
\begin{cases}
1, & \text{if } S(T) > X, \\
0, & \text{otherwise}.
\end{cases}
$$

*• Then*
$$
P(S + \epsilon) - P(S - \epsilon) =
\begin{cases}
1, & \text{if } S + \epsilon > X \text{ and } S - \epsilon < X, \\
0, & \text{otherwise}.
\end{cases}
$$

*• So the finite-difference estimate per run for the*
*(undiscounted) delta is 0 or $O(1/\epsilon)$.*

*• This means high variance.*

### Problems with the Bump-and-Revalue Method (concluded)

*• The price of the binary option equals $e^{-r\tau} N(x - \sigma\sqrt{\tau}\,)$.*

**– It equals minus the derivative of the European call**
*with respect to $X$.*

**– Multiplied by $X\tau$, it equals the rho of a European call (p. 362).**

*• Its delta is*
$$
\frac{e^{-r\tau}\, N'(x - \sigma\sqrt{\tau}\,)}{S \sigma \sqrt{\tau}}.
$$

### Gamma

*• The finite-difference formula for gamma is*
$$
e^{-r\tau}\, E\!\left[ \frac{P(S + \epsilon) - 2\, P(S) + P(S - \epsilon)}{\epsilon^2} \right].
$$

*• For a correlation option with multiple underlying assets,*
the finite-difference formula for the cross gamma
*$\partial^2 P(S_1, S_2, \ldots)/(\partial S_1\, \partial S_2)$ is*
$$
e^{-r\tau}\, E\!\left[
\frac{P(S_1 + \epsilon_1, S_2 + \epsilon_2) - P(S_1 - \epsilon_1, S_2 + \epsilon_2)
- P(S_1 + \epsilon_1, S_2 - \epsilon_2) + P(S_1 - \epsilon_1, S_2 - \epsilon_2)}{4 \epsilon_1 \epsilon_2}
\right].
$$

### Gamma (continued)

*• Choosing an $\epsilon$ of the right magnitude can be*
challenging.

**– If $\epsilon$ is too large, inaccurate Greeks result.**

**– If $\epsilon$ is too small, unstable Greeks result.**

*• This phenomenon is sometimes called the curse of*
diﬀerentiation.^{a}

aA¨ıt-Sahalia & Lo (1998); Bondarenko (2003).

### Gamma (continued)

*• In general, suppose (in some sense)*
$$
\frac{\partial^i}{\partial \theta^i}\, e^{-r\tau} E[\, P(S)\, ]
= e^{-r\tau}\, E\!\left[ \frac{\partial^i P(S)}{\partial \theta^i} \right]
$$
*holds for all $i > 0$, where $\theta$ is a parameter of interest.*^{a}

**– A common requirement is Lipschitz continuity.**^{b}

*• Then Greeks become integrals.*

*• As a result, we avoid $\epsilon$, finite differences, and*
resimulation.

aThe $\partial^i P(S)/\partial \theta^i$ within $E[\,\cdot\,]$ may not be partial differentiation in the classic sense.

bBroadie & Glasserman (1996).

### Gamma (continued)

*• This is indeed possible for a broad class of payoﬀ*
functions.^{a}

**– Roughly speaking, any payoﬀ function that is equal**
to a sum of products of diﬀerentiable functions and
indicator functions with the right kind of support.

**– For example, the payoﬀ of a call is**

*max(S(T )* *− X, 0) = (S(T ) − X)I**{ S(T )−X≥0 }**.*
**– The results are too technical to cover here (see next**

page).

aTeng (R91723054) (2004); Lyuu & Teng (R91723054) (2011).

### Gamma (continued)

*• Suppose $h(\theta, x) \in \mathcal{H}$ with pdf $f(x)$ for $x$ and $g_j(\theta, x) \in \mathcal{G}$*
*for $j \in B$, a finite set of natural numbers.*

*• Then*
$$
\begin{aligned}
\frac{\partial}{\partial \theta} \int h(\theta, x) \prod_{j \in B} 1_{\{ g_j(\theta, x) > 0 \}}(x)\, f(x)\, dx
&= \int h_\theta(\theta, x) \prod_{j \in B} 1_{\{ g_j(\theta, x) > 0 \}}(x)\, f(x)\, dx \\
&\quad + \sum_{l \in B} \left[ h(\theta, x)\, J_l(\theta, x) \prod_{j \in B \setminus l} 1_{\{ g_j(\theta, x) > 0 \}}(x)\, f(x) \right]_{x = \chi_l(\theta)},
\end{aligned}
$$
where
$$
J_l(\theta, x) = \operatorname{sign}\!\left( \frac{\partial g_l(\theta, x)}{\partial x_k} \right)
\frac{\partial g_l(\theta, x)/\partial \theta}{\partial g_l(\theta, x)/\partial x}
\qquad \text{for } l \in B.
$$

### Gamma (concluded)

*• Similar results have been derived for Lévy processes.*^{a}

*• Formulas are also recently obtained for credit*
derivatives.^{b}

*• In queueing networks, this is called inﬁnitesimal*
perturbation analysis (IPA).^{c}

aLyuu, Teng (R91723054), & S. Wang (2013).

bLyuu, Teng (R91723054), Tseng, & S. Wang (2014, 2019).

cCao (1985); Y. C. Ho & Cao (1985).

### Biases in Pricing Continuously Monitored Options with Monte Carlo

*• We are asked to price a continuously monitored*
*up-and-out call with barrier H.*

*• The Monte Carlo method samples the stock price at n*
*discrete time points t*_{1}*, t*_{2}*, . . . , t** _{n}*.

*• A sample path $S(t_0), S(t_1), \ldots, S(t_n)$ is produced.*

**– Here, t**_{0} *= 0 is the current time, and t*_{n}*= T is the*
expiration time of the option.

### Biases in Pricing Continuously Monitored Options with Monte Carlo (continued)

*• If all of the sampled prices are below the barrier, this*
*sample path pays $\max(S(t_n) - X,\, 0)$.*

*• Repeat these steps and average the payoﬀs for a Monte*
Carlo estimate.

1: *C := 0;*

2: **for i = 1, 2, 3, . . . , N do**

3: *P := S; hit := 0;*

4: **for j = 1, 2, 3, . . . , n do**

5: *P := P × e^{(r−σ^2/2)(T/n) + σ√(T/n) ξ};* *{By Eq. (125) on p. 863.}*

6: **if P ≥ H then**

7: hit := 1;

8: break;

9: **end if**

10: **end for**

11: **if hit = 0 then**

12: *C := C + max(P − X, 0);*

13: **end if**

14: **end for**

15: *return Ce*^{−rT}*/N ;*

### Biases in Pricing Continuously Monitored Options with Monte Carlo (continued)

*• This estimate is biased.*^{a}

**– Suppose none of the sampled prices on a sample path**
*equals or exceeds the barrier H.*

**– It remains possible for the continuous sample path**
*that passes through them to hit the barrier between*
sampled time points (see plot on next page).

**– Hence the knock-out probability is underestimated.**

aShevchenko (2003).

[Figure: a continuous price path crossing the barrier H between two sampled time points, even though all sampled prices lie below H.]

### Biases in Pricing Continuously Monitored Options with Monte Carlo (concluded)

*• The bias can be lowered by increasing the number of*
observations along the sample path.

* – For trees, the knock-out probability may decrease as*
the number of time steps is increased.

*• However, even daily sampling may not suﬃce.*

*• The computational cost also rises as a result.*

### Brownian Bridge Approach to Pricing Barrier Options

*• We desire an unbiased estimate which can be calculated*
eﬃciently.

*• The above-mentioned payoﬀ should be multiplied by the*
*probability p that a continuous sample path does not*
hit the barrier conditional on the sampled prices.

**– Formally,**
$$
p \stackrel{\Delta}{=} \operatorname{Prob}[\, S(t) < H,\ 0 \le t \le T \mid S(t_0), S(t_1), \ldots, S(t_n)\, ].
$$

*• This methodology is called the Brownian bridge*
approach.

### Brownian Bridge Approach to Pricing Barrier Options (continued)

*• As a barrier is not hit over a time interval if and only if*
*the maximum stock price over that period is at most $H$,*
$$
p = \operatorname{Prob}\!\left[ \max_{0 \le t \le T} S(t) < H \,\Big|\, S(t_0), S(t_1), \ldots, S(t_n) \right].
$$

*• Luckily, the conditional distribution of the maximum*
over a time interval given the beginning and ending
stock prices is known.

### Brownian Bridge Approach to Pricing Barrier Options (continued)

**Lemma 22** *Assume $S$ follows $dS/S = \mu\, dt + \sigma\, dW$ and define*^{a}
$$
\zeta(x) \stackrel{\Delta}{=} \exp\!\left[ -\frac{2 \ln(x/S(t)) \ln(x/S(t + \Delta t))}{\sigma^2 \Delta t} \right].
$$
*(1) If $H > \max(S(t), S(t + \Delta t))$, then*
$$
\operatorname{Prob}\!\left[ \max_{t \le u \le t + \Delta t} S(u) < H \,\Big|\, S(t), S(t + \Delta t) \right] = 1 - \zeta(H).
$$
*(2) If $h < \min(S(t), S(t + \Delta t))$, then*
$$
\operatorname{Prob}\!\left[ \min_{t \le u \le t + \Delta t} S(u) > h \,\Big|\, S(t), S(t + \Delta t) \right] = 1 - \zeta(h).
$$

aHere, Δ*t is an arbitrary positive real number.*

### Brownian Bridge Approach to Pricing Barrier Options (continued)

*• Lemma 22 gives the probability that the barrier is not*
hit in a time interval, given the starting and ending
stock prices.

*• For our up-and-out*^{a} *call, choose n = 1.*

*• As a result,*
$$
p =
\begin{cases}
1 - \exp\!\left[ -\dfrac{2 \ln(H/S(0)) \ln(H/S(T))}{\sigma^2 T} \right],
& \text{if } H > \max(S(0), S(T)), \\[1ex]
0, & \text{otherwise}.
\end{cases}
$$

aSo *S(0) < H by deﬁnition.*

### Brownian Bridge Approach to Pricing Barrier Options (continued)

*The following algorithm works for up-and-out and*
down-and-out calls.

1: *C := 0;*

2: **for i = 1, 2, 3, . . . , N do**

3: *P := S × e^{(r−q−σ^2/2) T + σ√T ξ};*

4: **if (S < H and P < H) or (S > H and P > H) then**

5: *C := C + max(P − X, 0) × (1 − exp[ −2 ln(H/S) ln(H/P) / (σ^2 T) ]);*

6: **end if**

7: **end for**

8: *return Ce*^{−rT}*/N ;*
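The up-and-out case of the algorithm above can be sketched in Python (function name ours, parameters illustrative); each surviving terminal payoff is weighted by the Brownian-bridge survival probability from Lemma 22 with n = 1:

```python
import math
import random

def mc_up_and_out_call(S, X, H, r, sigma, T, N, q=0.0, seed=0):
    """Unbiased Brownian bridge pricing of an up-and-out call (S < H):
    weight each payoff by the probability that the continuous path
    stays below H given its two endpoints."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        xi = rng.gauss(0, 1)
        P = S * math.exp((r - q - sigma**2 / 2) * T
                         + sigma * math.sqrt(T) * xi)
        if P < H:  # otherwise the path surely hit the barrier
            p = 1.0 - math.exp(-2.0 * math.log(H / S) * math.log(H / P)
                               / (sigma**2 * T))
            total += max(P - X, 0.0) * p
    return math.exp(-r * T) * total / N

price = mc_up_and_out_call(95.0, 100.0, 110.0, 0.05, 0.2, 1.0, 50_000)
```

Note that only one terminal draw per path is needed (n = 1), so the estimate is both unbiased and cheap.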

### Brownian Bridge Approach to Pricing Barrier Options (concluded)

*• The idea can be generalized.*

*• For example, we can handle more complex barrier*
options.

*• Consider an up-and-out call with barrier $H_i$ for the*
*time interval $(t_i, t_{i+1}\,]$, $0 \le i < m$.*

*• This option contains m barriers.*

*• Multiply the probabilities for the m time intervals to*
obtain the desired probability adjustment term.

### Pricing Barrier Options without Brownian Bridge

*• Let $T_h$ denote the amount of time for a process $X_t$ to hit*
*$h$ for the first time.*

*• It is called the first passage time or the first hitting time.*

*• Suppose $X_t$ is a $(\mu, \sigma)$ Brownian motion:*
$$
dX_t = \mu\, dt + \sigma\, dW_t, \qquad t \ge 0.
$$