
Efficient Importance Sampling Estimation for Joint Default Probability: the First Passage Time Problem

Chuan-Hsiang Han

August 2, 2010

Abstract

Motivated by credit risk modeling, this paper extends the two-dimensional first passage time problem studied by Zhou (2001) to any finite dimension by means of Monte Carlo simulation. We provide an importance sampling method to estimate the joint default probability, and apply the large deviation principle to prove that the proposed importance sampling is asymptotically optimal. Our result is an alternative to the interacting particle systems proposed by Carmona, Fouque, and Vestal (2009).

1 Introduction

Estimation of the joint default probability under structural-form models emerged early in the development of stochastic financial theory. In the models of Black and Scholes [3] and Merton [15], default can only happen at the expiration of debt, when the issuer's asset value is less than the debt value. Black and Scholes modeled the asset value process by a geometric Brownian motion, and Merton incorporated an additional compound Poisson jump term. Black and Cox [4] generalized these models by allowing default to occur at any time before the expiration of debt. They considered a first passage time problem for the geometric Brownian motion in one dimension.

Department of Quantitative Finance, National Tsing Hua University, Hsinchu, Taiwan, 30013, ROC, chhan@mx.nthu.edu.tw. Work supported by NSC 97-2115-M-007-002-MY2, Taiwan. We are also grateful to one anonymous referee and Professor Nicolas Privault. Other acknowledgments: NCTS, National Tsing-Hua University; TIMS, National Taiwan University; CMMSC, National Chiao-Tung University.


Zhou [16] extended this one-dimensional geometric Brownian motion to a jump-diffusion model, as Merton did. Later, Zhou [17] treated the joint default of two-dimensional geometric Brownian motions. A comprehensive technical review can be found in [5].

In this paper, we focus on generalizing the joint default from the two-dimensional first passage time problem studied in [17] to higher dimensions through a Monte Carlo study. A high-dimensional setup of the first passage time problem under correlated geometric Brownian motions is the following. We assume that each firm value process $S_t^i$, $1 \le i \le n$, has the dynamics

$$dS_t^i = \mu_i S_t^i\,dt + \sigma_i S_t^i\,dW_t^i, \qquad (1)$$
where $\sigma_i$ is a constant volatility and the Brownian motions $W^i$ are correlated by $d\langle W^i, W^j\rangle_t = \rho_{ij}\,dt$, with $\langle\cdot,\cdot\rangle_t$ denoting the quadratic covariation at time $t$. Each firm also has a constant default barrier $B_i$, $1 \le i \le n$, and its default happens at the first time the asset value $S_t^i$ falls below the barrier level. Therefore, the default time $\tau_i$ of the $i$-th firm is defined as

$$\tau_i = \inf\{t \ge 0 : S_t^i \le B_i\}. \qquad (2)$$
Let $\mathcal{F}$ be the filtration generated by all $S^i_\cdot$, $i = 1, \cdots, n$, under a probability measure $P$. At time 0, the joint default probability with a terminal time $T$ is defined by

$$DP = E\left\{\prod_{i=1}^n I(\tau_i \le T) \,\Big|\, \mathcal{F}_0\right\}. \qquad (3)$$
In general, there is no closed-form solution for the joint default probability (3) in high dimension, so one has to rely on numerical methods. Deterministic approaches such as numerical PDEs (partial differential equations) or binomial trees typically suffer from the curse of dimensionality. Because its convergence rate is independent of dimension, Monte Carlo simulation becomes a feasible approach; it is then crucial to reduce the variance of the Monte Carlo estimators to improve convergence. Recently, Carmona, Fouque, and Vestal [8] studied a first passage time problem and estimated the loss density function for a credit portfolio under a stochastic volatility model, using interacting particle systems for variance reduction. Alternatively, we propose an efficient Monte Carlo method which incorporates importance sampling for variance reduction in order to accurately estimate joint default probabilities under the classical Black-Cox model in high dimension. Han et al. [12] apply the same importance sampling to risk management applications, with empirical studies and backtesting under stochastic volatility models.

There are many ways to analyze the variance of an importance sampling estimator. In a strong sense, one can minimize the variance, say over a parametrized space, by solving an optimization problem. See for example [1, 2], in which the authors proposed an adaptive scheme, namely Robbins-Monro algorithms, to solve for a class of importance sampling estimators. They utilize a constant change of drift in high dimension and solve nontrivial optimization problems. See also [10] for the use of an asymptotic result to approximate the optimal change of measure. In a weak sense, one seeks to minimize asymptotically the rate of variance, instead of the variance itself, of an importance sampling estimator. To demonstrate the fundamental ideas, Section 2 provides an introduction that includes constructions of importance sampling schemes for the standard normal random variable and an asymptotic variance analysis of our proposed estimator by means of the large deviation principle. Details about the general definition of the rate of variance and its analysis can be found in Chapter 5 of [7]. Throughout this paper, we adopt variance analysis in the weak sense. By means of the large deviation principle, we show that the proposed importance sampling estimator for the one-dimensional first passage time problem has a zero variance rate, which implies that our proposed importance sampling scheme is asymptotically optimal.

The organization of this paper is as follows: Section 2 provides a fundamental understanding of importance sampling in a simple Gaussian random variable model with a spatial scale. An approximation to the variance rate of the efficient importance sampling estimator is obtained by an application of Cramer's theorem. Section 3 explores the high-dimensional first passage time model, known as the Black-Cox model. We show that the proposed importance sampling method is asymptotically optimal (also called efficient) in one dimension by an application of the Freidlin-Wentzell theorem. We conclude in Section 4.

2 Efficient Importance Sampling for a One-Dimensional Toy Model

We start from a simple and static model to estimate the probability of default defined by $P_1^c := E\{I(X > c)\}$, where the standard normal random variable $X \sim N(0,1)$ and $c > 0$ stand for the loss of a portfolio and its loss threshold, respectively. Of course, $P_1^c$ is simply a tail probability and admits the closed-form solution $N(-c)$, where $N(x)$ denotes the cumulative normal distribution function. The basic Monte Carlo method to estimate $P_1^c$ is not accurate enough, particularly when the loss threshold $c$ is large. This is because the variance of $I(X > c)$ is of the same order as the default probability $P_1^c$ when the event is rare: $E\{(I(X > c))^2\} - (P_1^c)^2 = P_1^c(1 - P_1^c) \approx P_1^c$ when $c$ is large enough. This implies that the relative error of the basic Monte Carlo estimator is rather large. A problem of rare event simulation arises when the spatial scale $c$ is sufficiently large.
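To make the relative-error issue concrete, the following is a minimal sketch (ours, in Python rather than the Matlab used for the paper's experiments; the function name and sample size are our choices) of the basic Monte Carlo estimator of $P_1^c$.

```python
import numpy as np

def basic_mc_tail_prob(c, n_samples=10_000, seed=0):
    """Plain Monte Carlo estimate of P_1^c = P(X > c) for X ~ N(0, 1).
    Returns the sample mean and its standard error."""
    rng = np.random.default_rng(seed)
    hits = (rng.standard_normal(n_samples) > c).astype(float)
    return hits.mean(), hits.std(ddof=1) / np.sqrt(n_samples)

# For c = 1 the estimate is reasonable; for c = 4 (true value about 3.17e-5)
# the 10,000 samples typically contain no exceedance at all, so the relative
# error of the plain estimator blows up, consistent with the dashes in the
# Basic MC column of Table 1.
print(basic_mc_tail_prob(1.0))
print(basic_mc_tail_prob(4.0))
```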

Applying importance sampling as a variance reduction method to treat rare event simulation has been extensively studied [7]. A general procedure, known as exponential change of measure, for constructing an importance sampling scheme to estimate $P_1^c$ is reviewed below.

Assume that the density function of a real-valued random variable $X$ is $f(x) > 0$ for each $x \in \mathbb{R}$. One can change the probability measure by incorporating a likelihood ratio $f(x)/f_\mu(x)$ so that the default probability $P_1^c$ can be evaluated under the new probability measure $P_\mu$:
$$E\{I(X > c)\} = E_\mu\left\{ I(X > c)\,\frac{f(X)}{f_\mu(X)} \right\},$$

where the new density function of $X$ is $f_\mu(x) > 0$ for each $x \in \mathbb{R}$. The twisted or tilted probability measure refers to the choice of a new density
$$f_\mu(x) = \frac{\exp(\mu x)\, f(x)}{M(\mu)}, \qquad (4)$$
where $M(\mu) = E[\exp(\mu X)]$ denotes the moment generating function of $X$. We define the leading term of the estimator variance as
$$P_2^c(\mu) := E_\mu\left\{ I(X > c)\,\frac{f^2(X)}{f_\mu^2(X)} \right\}.$$
By substituting $f_\mu(x)$ given above into $P_2^c(\mu)$, one obtains the following:

$$P_2^c(\mu) = E\left\{ I(X > c)\,\frac{f(X)}{f_\mu(X)} \right\} = M(\mu)\, E\{I(X > c)\exp(-\mu X)\} \le M(\mu)\exp(-\mu c), \qquad (5)$$
where $\mu$ and $c$ are assumed positive for this upper bound to hold. To minimize the logarithm of this upper bound, the first order condition is
$$\frac{d}{d\mu}\ln\big(M(\mu)\exp(-\mu c)\big) = \frac{M'(\mu)}{M(\mu)} - c = 0.$$


Let $\mu^\star$ solve $M'(\mu^\star)/M(\mu^\star) = c$. It follows that the expected value of $X$ under the new probability measure $P_{\mu^\star}$ is exactly the loss threshold $c$. This is confirmed by evaluating $E_{\mu^\star}(X) = \int x\, f_{\mu^\star}(x)\, dx$ and substituting $f_{\mu^\star}(x)$ defined in (4), so that
$$E_{\mu^\star}(X) = \frac{M'(\mu^\star)}{M(\mu^\star)} = c. \qquad (6)$$
Remark that this whole procedure of exponential change of measure can be extended to high dimension.

From the simulation point of view, the final result shown in (6) is appealing because the rare event of default under the original probability measure is no longer rare under the new measure $P_{\mu^\star}$. In addition, even when the moment generating function is difficult or impossible to find, one can still use other change of measure techniques to fulfill (6), i.e., "the expected value of a defaultable asset is equal to its debt value" in financial terms. We will see such an example in Section 3 for the high-dimensional Black-Cox model.

In the concrete case of $X$ being a standard normal random variable, the minimizer $\mu^\star$ is exactly equal to $c$. This result follows from a direct calculation of $c = M'(\mu^\star)/M(\mu^\star)$, given that the moment generating function of $X$ is $M(\mu) = \exp(\mu^2/2)$. The twisted or tilted density function $f_{\mu^\star}(x) = \exp(\mu^\star x)\, f(x)/M(\mu^\star)$ becomes $\exp(-(x-c)^2/2)/\sqrt{2\pi}$. Hence, random samples are generated from $X \sim N(c, 1)$ under this new density, instead of $X \sim N(0, 1)$ under the original measure. The default probability $P_1^c$ can be explicitly expressed as
$$P_1^c = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} I(x > c)\, \frac{e^{-x^2/2}}{e^{-(x-c)^2/2}}\, e^{-(x-c)^2/2}\, dx = E_c\left\{ I(X > c)\, e^{c^2/2 - cX} \right\}, \qquad (7)$$
and its second moment becomes
$$P_2^c(c) := E_c\left\{ I(X > c)\, e^{c^2 - 2cX} \right\}. \qquad (8)$$
Naturally, one can pose an optimization problem which minimizes the variance over all importance sampling estimators associated with $f_\mu(x) = \exp(-(x-\mu)^2/2)/\sqrt{2\pi}$ for each $\mu \in \mathbb{R}$. That is, given that
$$P_1^c = E_\mu\left\{ I(X > c)\, \frac{f(X)}{f_\mu(X)} \right\} = E_\mu\left\{ I(X > c)\, e^{\mu^2/2 - \mu X} \right\}, \qquad (9)$$


we seek to minimize its second moment
$$P_2^c(\mu) = E_\mu\left\{ I(X > c)\, e^{\mu^2 - 2\mu X} \right\}. \qquad (10)$$
The associated minimizer guarantees the minimal variance within the $\mu$-parametrized family of measures, but finding it via (10) requires numerical computation. In Section 2.2, we compare the variance reduction performance of the optimal estimator, namely (9) with $\mu$ minimizing (10), and the efficient estimator (7). We will see that in our numerical experiments these two estimators reach the same level of accuracy, but their computing times are very different: the efficient estimator performs more effectively than the optimal estimator.
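The following Python sketch (an illustration of ours, not the paper's Matlab implementation; the helper names and pilot sample size are assumptions) implements the $\mu$-parametrized importance sampling estimator (9) and contrasts the efficient choice $\mu = c$ with a $\mu^\star$ obtained by numerically minimizing a pilot estimate of the second moment (10), the step responsible for the extra computing time reported in Table 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def is_tail_prob(c, mu, n_samples=10_000, seed=0):
    """Importance sampling estimate of P(X > c), cf. (9): draw X ~ N(mu, 1)
    and weight each sample by the likelihood ratio exp(mu^2/2 - mu*X)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=mu, scale=1.0, size=n_samples)
    w = (x > c) * np.exp(mu**2 / 2.0 - mu * x)
    return w.mean(), w.std(ddof=1) / np.sqrt(n_samples)

def optimal_mu(c, n_pilot=5_000, seed=1):
    """Numerically minimize a pilot estimate of the second moment (10) over mu,
    using common random numbers so the objective is smooth in mu."""
    z = np.random.default_rng(seed).standard_normal(n_pilot)
    def second_moment(mu):
        x = mu + z                          # pilot samples from N(mu, 1)
        return np.mean((x > c) * np.exp(mu**2 - 2.0 * mu * x))
    return minimize_scalar(second_moment, bounds=(0.0, 2.0 * c),
                           method="bounded").x

c = 3.0
print("efficient, mu = c      :", is_tail_prob(c, mu=c))
print("optimized, mu = mu_star:", is_tail_prob(c, mu=optimal_mu(c)))
```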

2.1 Asymptotic Variance Analysis by Large Deviation Principle

It is known that the number of simulations needed to estimate a quantity to a given level of accuracy should be proportional to $P_2^c(\mu)/(P_1^c)^2 - 1$; see for example Section 4.5 in [7]. If the decay rates of $P_2^c(\mu)$ and $(P_1^c)^2$ are asymptotically the same, we say that the asymptotic variance rate is zero and the corresponding importance sampling estimator is asymptotically optimal or efficient. In this section, we aim to prove that when the density parameter $\mu$ is chosen as the default loss threshold $c$, the importance sampling scheme defined in (7) is asymptotically optimal, by an application of Cramer's theorem in large deviation theory. Recall that

Theorem 1. (Cramer's theorem [7]) Let $\{X_i\}$ be real-valued i.i.d. random variables under $P$ with $E X_1 < \infty$. For any $x \ge E\{X_1\}$, we have
$$\lim_{n\to\infty} \frac{1}{n} \ln P\left( \frac{S_n}{n} \ge x \right) = -\inf_{y \ge x} \Gamma^*(y), \qquad (11)$$
where $S_n = \sum_{i=1}^n X_i$ denotes the sample sum of size $n$, $\Gamma(\theta) = \ln E\, e^{\theta X_1}$ denotes the cumulant generating function, and $\Gamma^*(x) = \sup_{\theta \in \mathbb{R}} [\theta x - \Gamma(\theta)]$.

From this theorem and the moment generating function $E\{\exp(\theta X)\} = \exp(\theta^2/2)$ for $X$ a standard normal variable, we obtain the following asymptotic approximations.

Lemma 1. (Asymptotically Optimal Importance Sampling) As $c$ approaches infinity, the variance rate of the estimator defined in (7) approaches zero. That is,
$$\lim_{c\to\infty} \frac{1}{c^2} \ln P_2^c(c) = 2 \lim_{c\to\infty} \frac{1}{c^2} \ln P_1^c = -1.$$


Therefore, this importance sampling is asymptotically optimal or efficient.

Proof: Given that $X_i$, $i = 1, 2, \cdots$, are i.i.d. one-dimensional standard normal random variables, an application of Theorem 1 easily gives
$$\lim_{n\to\infty} \frac{1}{n}\ln P\left( \frac{\sum_{i=1}^n X_i}{n} \ge x \right) = -\frac{x^2}{2}, \quad \text{or equivalently} \quad P\left( \frac{\sum_{i=1}^n X_i}{n} \ge x \right) \approx \exp\left( -\frac{n x^2}{2} \right).$$
Introduce the rescaled default probability $P(X \ge \sqrt{n}\,x)$ for $n$ large; then
$$P\left( X \ge \sqrt{n}\,x \right) = P\left( \frac{\sum_{i=1}^n X_i}{\sqrt{n}} \ge \sqrt{n}\,x \right) = P\left( \frac{\sum_{i=1}^n X_i}{n} \ge x \right),$$
in which each random variable $X_i$ has the same distribution as $X$. Hence, letting $1 \ll c := \sqrt{n}\,x$, the approximation to the first moment of $I(X \ge c)$, i.e., the default probability $P_1^c$, is obtained:
$$\lim_{n\to\infty} \frac{1}{(\sqrt{n}\,x)^2}\ln P\left( X \ge \sqrt{n}\,x \right) = -\frac{1}{2}, \quad \text{or equivalently} \quad P(X \ge c) \approx \exp\left( -\frac{c^2}{2} \right). \qquad (12)$$

Given the second moment defined in (10), it is easy to see that
$$P_2^c(\mu) = E_{-\mu}\{I(X > c)\}\, e^{\mu^2} = E_0\{I(X > \mu + c)\}\, e^{\mu^2}.$$
The first equality is obtained by changing measure via $dP_\mu/dP_{-\mu}$, and the second shifts the mean of $X \sim N(-\mu, 1)$ to 0. With the choice $\mu = c$, we get $P_2^c(c) = E_0\{I(X > 2c)\}\, e^{c^2}$, and its approximation $P_2^c(c) \approx \exp(-c^2)$ follows from the same derivation as in (12). As a result, we verify that the decay rate of the second moment $P_2^c(c)$ is twice the decay rate of the probability $P_1^c$, i.e.,
$$\lim_{c\to\infty} \frac{1}{c^2} \ln P_2^c(c) = 2 \lim_{c\to\infty} \frac{1}{c^2} \ln P_1^c = -1. \qquad (13)$$


This completes the proof. We have shown that the proposed importance sampling scheme defined in (7) is efficient. This zero variance rate should be understood as optimal variance reduction in an asymptotic sense, because the variance rate cannot be less than zero.
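As a quick numerical sanity check of Lemma 1 (our addition, in Python), the two decay rates can be compared directly from the closed forms $P_1^c = N(-c)$ and $P_2^c(c) = e^{c^2} N(-2c)$, the latter following from the identity $P_2^c(c) = E_0\{I(X > 2c)\}\, e^{c^2}$ used in the proof.

```python
import numpy as np
from scipy.stats import norm

# Both ratios below approach -1 as c grows, as stated in (13).
for c in (2.0, 4.0, 8.0, 16.0):
    log_p1 = norm.logcdf(-c)                 # log P_1^c, computed stably in the tail
    log_p2 = c**2 + norm.logcdf(-2.0 * c)    # log P_2^c(c) = c^2 + log N(-2c)
    print(c, log_p2 / c**2, 2.0 * log_p1 / c**2)
```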

Remark: This lemma can be generalized to any finite dimension and possibly extended to other distributions such as Student's t. We refer to [14] for further details with applications in credit risk.

For our thematic topic of estimating joint default probabilities under the high-dimensional Black-Cox model, this technique unfortunately does not work, because the moment generating function of multivariate first passage times is unknown. We overcome this difficulty by considering a simplified problem, namely changing the measure for the joint distribution of the underlying processes at maturity rather than for their first passage times. Remarkably, we find that this measure change can still be proven asymptotically optimal for the original first passage time problem. Details can be found in Section 3.

2.2 Numerical Results

Table 1 demonstrates the performance of two importance sampling schemes, the optimal estimator and the efficient estimator, for estimating the default probability $P(X > c)$ for $X \sim N(0,1)$ with various loss threshold values $c$. In Column 2, exact solutions $N(-c)$ are reported. In each column of simulation results, Mean and SE stand for the sample mean and the sample standard error, respectively. IS($\mu = c$) represents the scheme in (7) using the pre-determined choice $\mu = c$ suggested by the asymptotic analysis in Lemma 1, while IS($\mu = \mu^\star$) represents the optimal scheme in (9) using $\mu = \mu^\star$, which minimizes $P_2^c(\mu)$ numerically. We observe that the standard errors obtained from these two importance sampling schemes are comparable, i.e., of the same order of accuracy, while the computing time is not. From the last row, the optimal importance sampling scheme IS($\mu = \mu^\star$) takes about 50 times longer than the efficient importance sampling scheme IS($\mu = c$). These numerical experiments are implemented in Matlab on a laptop PC with a 2.40GHz Intel Duo CPU T8300.

There have been extensive studies and applications of this approach, namely minimizing an upper bound of the second moment under a parametrized twisted probability and then constructing an importance sampling scheme accordingly; see Chapter 9 of [11] for various applications in risk management. However, it remains to check whether such a scheme is efficient (with zero variance rate) or not.


Table 1: Estimation of default probability P(X > c) with different loss thresholds c when X ∼ N(0, 1). The total number of simulations is 10,000.

      DP        Basic MC              IS(µ = c)              IS(µ = µ*)
 c    true      Mean      SE          Mean      SE           Mean      SE
 1    0.1587    0.1566    0.0036      0.1592    0.0019       0.1594    0.0018
 2    0.0228    0.0212    0.0014      0.0227    3.49E-04     0.0225    3.37E-04
 3    0.0013    1.00E-03  3.16E-04    0.0014    2.53E-05     0.0014    2.51E-05
 4    3.17E-05  -         -           3.13E-05  6.62E-07     3.11E-05  6.66E-07
 time           0.004659              0.020904               1.060617


3 Efficient Importance Sampling for the High-Dimensional First Passage Time Problem

In this section, we review an importance sampling scheme developed by Han and Vestal [13] for the first passage time problem (3) in order to improve the convergence of Monte Carlo simulation. In addition, we provide a variance analysis to justify that the importance sampling scheme is asymptotically optimal (or efficient) in one dimension.

The basic Monte Carlo simulation approximates the joint default probability defined in (3) by the estimator
$$DP \approx \frac{1}{N} \sum_{k=1}^{N} \prod_{i=1}^{n} I\left( \tau_i^{(k)} \le T \right), \qquad (14)$$
where $\tau_i^{(k)}$ denotes the $k$-th i.i.d. sample of the $i$-th default time defined in (2) and $N$ denotes the total number of simulations.
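As an illustration of the estimator (14), the following sketch (ours, in Python rather than the Matlab used for the tables; parameter values mimic Table 3) simulates the correlated dynamics (1) with an Euler scheme, monitors the barriers (2) at each time step, and averages the joint default indicator.

```python
import numpy as np

def joint_default_prob_mc(S0, mu, sigma, B, corr, T=1.0, n_steps=100,
                          n_paths=10_000, seed=0):
    """Basic Monte Carlo estimator (14) of the joint default probability (3):
    Euler scheme for the correlated GBM dynamics (1), with the barriers
    monitored at every time step."""
    rng = np.random.default_rng(seed)
    S0, mu, sigma, B = map(lambda a: np.asarray(a, dtype=float), (S0, mu, sigma, B))
    n, dt = len(S0), T / n_steps
    chol = np.linalg.cholesky(corr)          # correlates the Brownian increments
    S = np.tile(S0, (n_paths, 1))
    defaulted = np.zeros((n_paths, n), dtype=bool)
    for _ in range(n_steps):
        dW = rng.standard_normal((n_paths, n)) @ chol.T * np.sqrt(dt)
        S = S + mu * S * dt + sigma * S * dW
        defaulted |= (S <= B)                # first passage below the barrier
    joint = defaulted.all(axis=1).astype(float)
    return joint.mean(), joint.std(ddof=1) / np.sqrt(n_paths)

# Three-name example with the parameters of Table 3 and rho = 0.3.
corr = np.full((3, 3), 0.3); np.fill_diagonal(corr, 1.0)
print(joint_default_prob_mc([100.]*3, [0.05]*3, [0.4, 0.4, 0.3], [50., 50., 60.], corr))
```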

By the Girsanov theorem, one can construct an equivalent probability measure $\tilde{P}$ defined by the Radon-Nikodym derivative
$$\frac{dP}{d\tilde{P}} = Q_T(h_\cdot) = \exp\left( \int_0^T h(s, S_s)\cdot d\tilde{W}_s - \frac{1}{2}\int_0^T \|h(s, S_s)\|^2\, ds \right), \qquad (15)$$
where $S_s = (S_s^1, \cdots, S_s^n)$ and $\tilde{W}_s = (\tilde{W}_s^1, \cdots, \tilde{W}_s^n)$ denote the state variable (asset value process) vector and the vector of Brownian motions, respectively. The function $h(s, S_s)$ is assumed to satisfy Novikov's condition, so that $\tilde{W}_t = W_t + \int_0^t h(s, S_s)\, ds$ is a vector of Brownian motions under $\tilde{P}$.

The importance sampling scheme proposed in [13] selects a constant vector $h = (h_1, \cdots, h_n)$ which satisfies the following $n$ conditions:
$$\tilde{E}\left\{ S_T^i \,\middle|\, \mathcal{F}_0 \right\} = B_i, \quad i = 1, \cdots, n. \qquad (16)$$
These equations can be simplified by using the explicit log-normal density of $S_T^i$, so we deduce the following system of linear equations for the $h_i$'s:
$$\sum_{j=1}^{n} \rho_{ij} h_j = \frac{\mu_i}{\sigma_i} - \frac{\ln(B_i/S_0^i)}{\sigma_i T}, \quad i = 1, \cdots, n. \qquad (17)$$
If the correlation matrix $\Sigma = (\rho_{ij})_{1 \le i,j \le n}$ is non-singular, the vector $h$ exists uniquely, so that the equivalent probability measure $\tilde{P}$ is uniquely determined. The joint default probability defined in (3) becomes
$$DP = \tilde{E}\left\{ \prod_{i=1}^n I(\tau_i \le T)\, Q_T(h) \,\middle|\, \mathcal{F}_0 \right\}. \qquad (18)$$
Equation (16) requires that, under the new probability measure $\tilde{P}$, the expected asset value at time $T$ equals its debt level. When the debt level $B$ of a company is much smaller than its initial asset value $S_0$ (see examples in Table 2), or the returns of any two names are highly negatively correlated (see examples in Table 3), joint default events are rare. Under the proposed importance sampling scheme, random samples drawn under the new measure $\tilde{P}$ produce more defaults than samples drawn under $P$.
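A sketch of the scheme (16)-(18) follows; it reflects our reading of the construction and is not code from [13]. We solve the linear system (17) for $h$, simulate the firm values under $\tilde{P}$ (where, by (16)-(17), the drift of name $i$ becomes $\ln(B_i/S_0^i)/T$, so that the expected terminal value equals the barrier), and reweight the joint default indicator by the likelihood ratio $Q_T(h)$. The implementation expresses the Girsanov shift through independent driving Brownian motions via a Cholesky factor $L$ of the correlation matrix, with shift vector $\theta = L^\top h$; under this representation the weight reduces to (15) when the Brownian motions are independent.

```python
import numpy as np

def joint_default_prob_is(S0, mu, sigma, B, corr, T=1.0, n_steps=100,
                          n_paths=10_000, seed=0):
    """Importance sampling estimator (18): solve (17) for h, simulate the
    names under the tilted measure, and reweight by the likelihood ratio."""
    rng = np.random.default_rng(seed)
    S0, mu, sigma, B = map(lambda a: np.asarray(a, dtype=float), (S0, mu, sigma, B))
    n, dt = len(S0), T / n_steps
    L = np.linalg.cholesky(corr)
    # Linear system (17): corr @ h = mu/sigma - ln(B/S0)/(sigma*T).
    h = np.linalg.solve(corr, mu / sigma - np.log(B / S0) / (sigma * T))
    theta = L.T @ h                         # shift on the independent drivers
    drift_tilde = np.log(B / S0) / T        # drift per name under P_tilde, cf. (16)
    S = np.tile(S0, (n_paths, 1))
    Z_T = np.zeros((n_paths, n))            # accumulated independent increments
    defaulted = np.zeros((n_paths, n), dtype=bool)
    for _ in range(n_steps):
        dZ = rng.standard_normal((n_paths, n)) * np.sqrt(dt)
        Z_T += dZ
        S = S + drift_tilde * S * dt + sigma * S * (dZ @ L.T)
        defaulted |= (S <= B)               # barrier monitoring, as in (2)
    # Likelihood ratio Q_T(h), expressed via the independent drivers (our reading).
    weight = np.exp(Z_T @ theta - 0.5 * (theta @ theta) * T)
    est = defaulted.all(axis=1) * weight
    return est.mean(), est.std(ddof=1) / np.sqrt(n_paths)

# Five names with the parameters of Table 4 (S0=100, mu=0.05, sigma=0.3, rho=0.3, B=50).
n = 5
corr = np.full((n, n), 0.3); np.fill_diagonal(corr, 1.0)
print(joint_default_prob_is([100.]*n, [0.05]*n, [0.3]*n, [50.]*n, corr))
```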

Tables 2 and 3 illustrate numerical results for estimating the default probability in a single-name case and the joint default probability in a three-name case; Table 4 further reports importance sampling estimates of joint default probabilities for up to fifty names. The exact solution of the single-name default probability,
$$1 - N(d_2^+) + N(d_2^-)\left( \frac{S_0}{B} \right)^{1 - 2\mu/\sigma^2} \qquad (19)$$
with $d_2^\pm = \frac{\pm\ln(S_0/B) + (\mu - \sigma^2/2)T}{\sigma\sqrt{T}}$, can be found in [8]. This result is obtained from the distribution of the running minimum of Brownian motion. However, there is no closed-form solution for the joint default probability of the three names in Table 3, except for the case of zero correlation.
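For reference, the closed-form expression (19) is straightforward to evaluate numerically; the sketch below (ours) reproduces the "Exact Sol" column of Table 2 for the parameters of that table.

```python
import numpy as np
from scipy.stats import norm

def single_name_default_prob(S0, B, mu, sigma, T=1.0):
    """Closed-form first passage (default) probability (19) for one name,
    P( min_{0<=t<=T} S_t <= B ) under geometric Brownian motion."""
    d_plus = (np.log(S0 / B) + (mu - sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d_minus = (-np.log(S0 / B) + (mu - sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    return 1 - norm.cdf(d_plus) + norm.cdf(d_minus) * (S0 / B) ** (1 - 2 * mu / sigma**2)

# Parameters of Table 2: S0 = 100, mu = 0.05, sigma = 0.4, T = 1 year.
for B in (50.0, 20.0, 1.0):
    print(B, single_name_default_prob(100.0, B, 0.05, 0.4))
```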


Table 2: Comparison of single-name default probability by basic Monte Carlo (BMC), exact solution, and importance sampling (IS). The number of simulations is 10^4 and an Euler discretization of (1) is used with time step size T/400, where T is one year. Other parameters are S0 = 100, µ = 0.05 and σ = 0.4. Standard errors are shown in parentheses.

 B    BMC                Exact Sol         IS
 50   0.0886 (0.0028)    0.0945            0.0890 (0.0016)
 20   - (-)              7.7310 × 10^-5    7.1598 × 10^-5 (2.3183 × 10^-6)
 1    - (-)              1.3341 × 10^-30   1.8120 × 10^-30 (3.4414 × 10^-31)

Table 3: Comparison of three-name joint default probability by basic Monte Carlo (BMC) and importance sampling (IS). The number of simulations is 10^4 and an Euler discretization of (1) is used with time step size T/100, where T is one year. Other parameters are S10 = S20 = S30 = 100, µ1 = µ2 = µ3 = 0.05, σ1 = σ2 = 0.4, σ3 = 0.3, B1 = B2 = 50, and B3 = 60. Standard errors are shown in parentheses.

 ρ     BMC                                IS
 0.3   0.0049 (6.9832 × 10^-4)            0.0057 (1.9534 × 10^-4)
 0     3.0000 × 10^-4 (1.7319 × 10^-4)    6.4052 × 10^-4 (6.9935 × 10^-5)
 -0.3  - (-)                              2.2485 × 10^-5 (1.1259 × 10^-5)

3.1 Asymptotic Variance Analysis by Large Deviation Principle

We provide a theoretical verification that the importance sampling scheme developed above is asymptotically optimal for the one-dimensional first passage time problem under geometric Brownian motion. This problem has also been considered in Carmona et al. [8].

Our proof is based on the Freidlin-Wentzell theorem [6, 9] in large deviation theory, which we use to approximate the default probability and the second moment of the importance sampling estimator defined in (18). We consider the scale $\varepsilon = -(\ln(B/S_0))^{-1}$ being small, or equivalently $0 < B \ll S_0$, i.e., the current asset value $S_0$ is much larger than the debt value $B$. Our asymptotic results show that the second moment approximation is the square of the first moment (default probability) approximation. Therefore, we attain optimal variance reduction in an asymptotic sense, so that the proposed importance sampling scheme is efficient.

Theorem 2. (Efficient Importance Sampling) Let $S_t$ denote the asset value following the log-normal process $dS_t = \mu S_t\, dt + \sigma S_t\, dW_t$ with initial value $S_0$, and let $B$ denote the default boundary. We define the default probability and its importance sampling scheme by
$$P_1^\varepsilon = E\left\{ I\left( \min_{0\le t\le T} S_t \le B \right) \right\} = \tilde{E}\left\{ I\left( \min_{0\le t\le T} S_t \le B \right) Q_T(h) \right\},$$
where the Radon-Nikodym derivative $Q_T(h)$ is defined in (15). The second moment of this estimator is denoted by
$$P_2^\varepsilon(h) = \tilde{E}\left\{ I\left( \min_{0\le t\le T} S_t \le B \right) Q_T^2(h) \right\}.$$
By the choice $h = (\mu T + 1/\varepsilon)/(\sigma T)$, with the scale $\varepsilon$ defined through $-1/\varepsilon = \ln(B/S_0)$, the expected value of $S_T$ under $\tilde{P}$ is $B$, that is, $\tilde{E}\{S_T\} = B$. When $\varepsilon$ is small enough, or equivalently $B \ll S_0$, we obtain a zero variance rate, i.e., $\lim_{\varepsilon\to 0} \varepsilon^2 \ln\left( P_2^\varepsilon(h)/(P_1^\varepsilon)^2 \right) = 0$, so that the importance sampling scheme is efficient.

Table 4: Comparison of multiname joint default probabilities by basic Monte Carlo (BMC) and importance sampling (IS) under the high-dimensional Black-Cox model. Let n denote the dimension, i.e., the total number of firms. The number of simulations is 3 × 10^4 and an Euler discretization of (1) is used with time step size T/100, where T is one year. Other parameters are S0 = 100, µ = 0.05, σ = 0.3, ρ = 0.3, and B = 50.

        Basic MC              Importance Sampling
 n      Mean       SE         Mean        SE
 2      1.1E-03    3.31E-04   1.04E-03    2.83E-05
 5      -          -          6.36E-06    3.72E-07
 10     -          -          2.90E-07    2.66E-08
 15     -          -          9.45E-09    1.16E-09
 20     -          -          1.15E-09    1.98E-10
 25     -          -          2.06E-10    3.84E-11
 30     -          -          6.76E-11    2.36E-11
 35     -          -          1.35E-11    2.89E-12
 40     -          -          6.59E-12    1.58E-12
 45     -          -          3.25E-12    1.08E-12
 50     -          -          6.76E-13    2.26E-13

Proof: Recall that the one-dimensional default probability is defined by
$$P\left( \inf_{0\le t\le T} S_t = S_0\, e^{(\mu - \frac{\sigma^2}{2})t + \sigma W_t} \le B \right) = E\left\{ I\left( \inf_{0\le t\le T} \varepsilon\left(\mu - \frac{\sigma^2}{2}\right)t + \varepsilon\sigma W_t \le -1 \right) \right\}, \qquad (20)$$
where we have used the strict monotonicity of the logarithmic transformation and introduced the scaling $\ln(B/S_0) = -\frac{1}{\varepsilon}$. For a small parameter $\varepsilon$ the default probability will be small, in line with financial intuition, because the debt-to-asset ratio $B/S_0$ is small. By an application of the Freidlin-Wentzell theorem [6, 9], it is easy to prove that the rate function of (20) is $\frac{1}{2\sigma^2 T}$. That is, the rescaled default probability satisfies
$$P\left( \inf_{0\le t\le T} S_t = S_0\, e^{(\mu - \frac{\sigma^2}{2})t + \sigma W_t} \le B \right) \approx \exp\left( -\frac{1}{\varepsilon^2}\,\frac{1}{2\sigma^2 T} \right), \qquad (21)$$
when $\varepsilon$ is small. Recall that under the measure change defined in (18), the price dynamics becomes $S_t = S_0\, e^{(\mu - \frac{\sigma^2}{2} - \sigma h)t + \sigma \tilde{W}_t}$ with $h = \frac{\mu}{\sigma} - \frac{\ln(B/S_0)}{\sigma T}$. The second moment $P_2^\varepsilon(h)$ becomes
$$\begin{aligned}
\tilde{E}\left\{ I\left( \inf_{0\le t\le T} S_t \le B \right) e^{2h\tilde{W}_T - h^2 T} \right\}
&= \hat{E}\left\{ I\left( \inf_{0\le t\le T} S_0\, e^{(\mu - \frac{\sigma^2}{2} + \sigma h)t + \sigma \hat{W}_t} \le B \right) \right\} e^{h^2 T} \\
&= \hat{E}\left\{ I\left( \inf_{0\le t\le T} \left[ \varepsilon\left( 2\mu - \frac{\sigma^2}{2} \right) + \frac{1}{T} \right] t + \varepsilon\sigma \hat{W}_t \le -1 \right) \right\} \times e^{\left( \frac{\mu}{\sigma} + \frac{1}{\varepsilon\sigma T} \right)^2 T},
\end{aligned}$$


where the measure change $d\hat{P}/d\tilde{P}$ is defined through $Q_T(2h)$ for the second line, and the same scaling $\ln(B/S_0) = -\frac{1}{\varepsilon}$ is used to rescale the problem in the last line. By the Freidlin-Wentzell theorem, the rate function of this expectation is $\frac{2}{\sigma^2 T}$. Consequently, the approximation
$$\tilde{E}\left\{ I\left( \inf_{0\le t\le T} S_t \le B \right) e^{2h\tilde{W}_T - h^2 T} \right\} \approx \exp\left( -\frac{1}{\varepsilon^2\sigma^2 T} \right) \qquad (22)$$
is derived. Since $\lim_{\varepsilon\to 0} \varepsilon^2 \ln P_2^\varepsilon(h) = 2\lim_{\varepsilon\to 0} \varepsilon^2 \ln P_1^\varepsilon$, we confirm that the variance rate of this importance sampling is asymptotically zero, so that this scheme is efficient. This completes the proof.

Remark: The same result can be obtained from a PDE argument studied in [13].

4 Conclusion

Estimation of joint default probabilities under a first passage time problem in the structural-form model is tackled by importance sampling. Imposing "the expected asset value at debt maturity equals its debt value" as a condition, importance sampling schemes can be uniquely determined within a family of parametrized probability measures. This approach overcomes the hurdle of the exponential change of measure, which requires the existence of the moment generating function of first passage times. According to the large deviation principle, our proposed importance sampling scheme is asymptotically optimal or efficient for rare event simulation.

References

[1] B. Arouna, "Robbins Monro algorithms and variance reduction in finance," Journal of Computational Finance, 7(2), Winter 2003/04.

[2] B. Arouna, "Adaptive Monte Carlo method, a variance reduction technique," Monte Carlo Methods Appl., 10(1) (2004), 1-24.

[3] F. Black and M. Scholes, "The Pricing of Options and Corporate Liabilities," Journal of Political Economy, 81 (1973), 637-654.

[4] F. Black and J. Cox, "Valuing Corporate Securities: Some Effects of Bond Indenture Provisions," Journal of Finance, 31(2) (1976), 351-367.

[5] T.R. Bielecki and M. Rutkowski, Credit Risk: Modeling, Valuation and Hedging, Springer, 2002.

[6] J. A. Bucklew, Large Deviation Techniques in Decision, Simulation, and Estimation, Wiley-Interscience, Applied Probability and Statistics Series, New York, 1990.

[7] J. A. Bucklew, Introduction to Rare Event Simulation, Springer, 2003.

[8] R. Carmona, J.-P. Fouque, and D. Vestal, "Interacting Particle Systems for the Computation of Rare Credit Portfolio Losses," Finance and Stochastics, 13(4) (2009), 613-633.

[9] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, 2nd ed., Springer, 1998.

[10] J.-P. Fouque and C.-H. Han, "Variance Reduction for Monte Carlo Methods to Evaluate Option Prices under Multi-factor Stochastic Volatility Models," Quantitative Finance, 4(5) (2004), 1-10.

[11] P. Glasserman, Monte Carlo Methods for Financial Engineering, Springer-Verlag, New York, 2003.

[12] C.-H. Han, W.-H. Liu, and Z.-Y. Chen, "An Improved Procedure for VaR/CVaR Estimation under Stochastic Volatility Models," submitted, 2010.

[13] C.-H. Han and D. Vestal, "Estimating the probability of joint default in structural form by efficient importance sampling," working paper, National Tsing-Hua University, 2010.

[14] C.-H. Han and C.-T. Wu, "Efficient importance sampling in estimation of default probability under factor copula model," working paper, National Tsing-Hua University, 2010.

[15] R. Merton, "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates," The Journal of Finance, 29 (1974), 449-470.

[16] C. Zhou, "The Term Structure of Credit Spreads with Jump Risk," Journal of Banking & Finance, 25(11) (November 2001), 2015-2040.

[17] C. Zhou, "An Analysis of Default Correlations and Multiple Defaults," The Review of Financial Studies, 14(2) (2001), 555-576.
