(1)

One-day Workshop on Probability and Finance

July 10 (Thursday), 2008

Room 308, New Mathematics Building, National Taiwan University

About this Workshop

The meeting aims to address recent research in Probability Theory and Financial Mathematics. It consists of four one-hour lectures and a session of short communications. Informal walk-in talks are welcome for the SC session. No registration is required.

One-Hour Lectures, Morning Session

10:00 - 11:00 Horng-Tzer Yau (Harvard University)

11:15 - 12:15 Chuan-Hsiang Han (National Tsing Hua University)

Lunch at the venue

One-Hour Lectures, Afternoon Session

13:30 - 14:30 Ching-Tang Wu (National Chiao Tung University)

14:45 - 15:45 Guan-Yu Chen (National Chiao Tung University)

Short Communications

16:00 - 17:30. Each talk is less than 20 minutes.

Now scheduled: N.-R. Shieh (NTU)

Sponsors:

Taida Institute of Mathematical Science (http://www.tims.ntu.edu.tw)

Mathematics Division, National Center for Theoretical Sciences (Taipei Office)

(http://math.cts.ntu.edu.tw/)

(2)

Local semicircle law and complete delocalization for Wigner random matrices

Horng-Tzer Yau

Joint work with L. Erdős, B. Schlein (Munich)

(3)

WIGNER ENSEMBLE

$H = (h_{jk})$ is a Hermitian $N \times N$ matrix, $N \gg 1$,

$$h_{jk} = \frac{1}{\sqrt{N}} (x_{jk} + i y_{jk}) \quad (j < k), \qquad h_{jj} = \sqrt{\frac{2}{N}}\, x_{jj},$$

where $x_{jk}, y_{jk}$ ($j < k$) and $x_{jj}$ are independent with distribution $x_{jk}, y_{jk} \sim d\nu := e^{-g(x)}\, dx$.

Normalization: $\mathbb{E}\, x_{jk} = 0$, $\mathbb{E}\, x_{jk}^2 = \tfrac{1}{2}$.

Example: $g(x) = x^2$ gives the GUE.

The normalization ensures that $\mathrm{Spec}(H) = [-2, 2] + o(1)$.

Results hold for real symmetric matrices as well, e.g. for GOE.

(4)

[Figure: the semicircle density $\varrho_{sc}(x) = \frac{1}{2\pi}\sqrt{4 - x^2}$ supported on $[-2, 2]$.]

Eigenvalues: $E_1 \le E_2 \le \dots \le E_N$.

Typical eigenvalue spacing is $E_i - E_{i-1} \sim \frac{1}{N}$.

(5)

MAIN QUESTIONS

1) Density of states (DOS) — Wigner semicircle law.

2) Eigenvalue spacing distribution (Wigner-Dyson statistics and level repulsion);

3) (De)localization properties of eigenvectors.

RELATIONS:

• 2) is finer than 1) [bulk vs. individual ev.]

• Level repulsion ⇐⇒ Delocalization ??? [Big open conjecture]

Motivation in the background: random Schrödinger operators in the extended states regime.

(6)

DENSITY OF STATES

$\mathcal{N}(I) := \#\{\mu_n \in I\}$ = the number of eigenvalues $\mu_n$ of $H$ in $I \subset \mathbb{R}$.

Smoothed density of states around $E$ with window size $\eta$:

$$\varrho_\eta(E) = \frac{1}{N\pi}\, \mathrm{Im}\, \mathrm{Tr}\, \frac{1}{H - E - i\eta} = \frac{1}{N\pi} \sum_\alpha \frac{\eta}{(\mu_\alpha - E)^2 + \eta^2}$$

$\varrho_\eta(E)$ and $\mathcal{N}(I)$ with $I = [E - \tfrac{\eta}{2}, E + \tfrac{\eta}{2}]$ are closely related.

WIGNER SEMICIRCLE LAW

For any fixed $I \subset \mathbb{R}$,

$$\lim_{N \to \infty} \frac{\mathbb{E}\, \mathcal{N}(I)}{N} = \int_I \varrho_{sc}(x)\, dx, \qquad \varrho_{sc}(x) = \frac{1}{2\pi} \sqrt{4 - x^2}\; \mathbf{1}(|x| \le 2).$$

A similar statement holds for $\varrho_\eta(E)$ with window size $\eta = O(1)$ fixed.

Fluctuations and almost sure convergence are also known.
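As a quick sanity check (my own sketch, not part of the original slides, assuming numpy), one can sample a GUE matrix with the normalization above and compare the empirical eigenvalue histogram to $\varrho_{sc}$:

```python
import numpy as np

# Minimal numerical check of the semicircle law for the GUE normalization
# used above (E|h_jk|^2 = 1/N, so the spectrum concentrates on [-2, 2]).
N = 2000
rng = np.random.default_rng(0)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / (2 * np.sqrt(N))          # Hermitian, GUE-distributed

evals = np.linalg.eigvalsh(H)
hist, edges = np.histogram(evals, bins=60, range=(-2.1, 2.1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
rho_sc = np.sqrt(np.maximum(4 - centers**2, 0)) / (2 * np.pi)
print(np.max(np.abs(hist - rho_sc)))              # small for large N
```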

(7)

The Wigner-Dyson statistics (universal distribution of eigenvalue spacings) requires information on individual eigenvalues at scale $\eta \sim 1/N$. It is believed to hold for general Wigner matrices, but is proven only for Gaussian and related models, and the proofs use explicit formulas for the joint eigenvalue distribution. [Dyson, Deift, Johansson]

GOALS:

(i) Prove the semicircle law at any scale $\eta \gg N^{-1}$.

(ii) Prove that eigenvectors are delocalized.

(8)

Theorem 1: Fix $\kappa, \varepsilon > 0$ and let $\eta \gg \frac{(\log N)^8}{N}$. Then, as $N \to \infty$,

$$\mathbb{P}\left\{ \sup_{|E| \le 2 - \kappa} \left| \frac{\mathcal{N}[E - \frac{\eta}{2}, E + \frac{\eta}{2}]}{N \eta} - \varrho_{sc}(E) \right| \ge \varepsilon \right\} \le e^{-c (\log N)^2},$$

i.e. the semicircle law holds for energy windows $\sim 1/N$ (mod logs).

Theorem 2: Fix $\kappa > 0$. Then

$$\mathbb{P}\left\{ \exists\, v:\ \|v\|_2 = 1,\ H v = \mu v,\ |\mu| \le 2 - \kappa,\ \|v\|_\infty \ge \frac{(\log N)^{5}}{N^{1/2}} \right\} \le e^{-c (\log N)^2},$$

i.e. almost all eigenfunctions are fully delocalized.

(9)

ASSUMPTIONS on the single-site distribution $d\nu = e^{-g(x)}\, dx$:

(i) $\sup g'' < \infty$

(ii) There exists $\delta > 0$ such that $\int e^{\delta x^2}\, d\nu(x) < \infty$

(iii) $d\nu$ satisfies the logarithmic Sobolev inequality

$$\int u \log u \, d\nu \le C \int |\nabla \sqrt{u}|^2 \, d\nu$$

Item (i) was needed for a concentration lemma. J. Bourgain has informed us that this lemma also holds if (i) is replaced by decay stronger than Gaussian (e.g. bounded random variables).

(10)

Lemma [Upper bound]: Assume $g'' < \infty$ and let $|I| \ge \frac{\log N}{N}$. Then

$$\mathbb{P}\, \{\mathcal{N}(I) \ge K N |I|\} \le e^{-c K N |I|}$$

for large $K$. A similar result holds for $\mathbb{P}\, \{\varrho_\eta(E) \ge K\}$.

Proof: Decompose

$$H = \begin{pmatrix} h & a^* \\ a & B \end{pmatrix}, \qquad h \in \mathbb{C},\ a \in \mathbb{C}^{N-1},\ B \in \mathbb{C}^{(N-1) \times (N-1)}.$$

Let $\lambda_\alpha, u_\alpha$ be the eigenvalues and eigenvectors of $B$ and define

$$\xi_\alpha := N |a \cdot u_\alpha|^2, \qquad \mathbb{E}\, \xi_\alpha = 1.$$

For the (1,1) matrix element of $G_z = (H - z)^{-1}$, $z = E + i\eta$:

$$G_z(1,1) = \frac{1}{h - z - a \cdot (B - z)^{-1} a} = \left[ h - z - \frac{1}{N} \sum_{\alpha=1}^{N-1} \frac{\xi_\alpha}{\lambda_\alpha - z} \right]^{-1}$$

(11)

$$|G_z(1,1)| \le \frac{1}{\left| \mathrm{Im} \left[ h - z - \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{\lambda_\alpha - z} \right] \right|} \le \frac{1}{\eta \left[ 1 + \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{(\lambda_\alpha - E)^2 + \eta^2} \right]} \le \frac{N \eta}{\sum_{\alpha:\, \lambda_\alpha \in I} \xi_\alpha}$$

for any interval $I = [E - \eta, E + \eta]$.

$$\mathcal{N}_I \le C \eta\, \mathrm{Im}\, \mathrm{Tr}\, G_z \le C \eta \sum_{k=1}^N |G_z(k,k)|$$

Repeating the above construction for each $k$,

$$\mathcal{N}_I \le C N \eta^2 \sum_{k=1}^N \left[ \sum_{\alpha:\, \lambda_\alpha^{(k)} \in I} \xi_\alpha^{(k)} \right]^{-1},$$

so to get an upper bound on $\mathcal{N}_I$, we need a lower bound on $\sum_\alpha \xi_\alpha$.

(12)

Good news: For the decomposition $H = \begin{pmatrix} h & a^* \\ a & B \end{pmatrix}$, the eigenvalues $\mu_\alpha$ of $H$ and $\lambda_\alpha$ of $B$ are interlaced:

$$\mu_1 \le \lambda_1 \le \mu_2 \le \lambda_2 \le \dots,$$

so the number of $\lambda_\alpha \in I$ is $\mathcal{N}(I) \pm 1$.

$$\mathcal{N}_I \le C N \eta^2 \sum_{k=1}^N \left[ \sum_{\alpha:\, \lambda_\alpha^{(k)} \in I} \xi_\alpha^{(k)} \right]^{-1}$$

Suppose

$$\sum_{\alpha:\, \lambda_\alpha^{(k)} \in I} \xi_\alpha^{(k)} \ge c\, \#\{\lambda_\alpha^{(k)} \in I\} \ge c\, \mathcal{N}(I)$$

(recalling $\mathbb{E}\, \xi = 1$ and hoping for weak correlation); then we would have

$$\mathcal{N}(I) \lesssim \frac{N^2 \eta^2}{\mathcal{N}(I)} \implies \mathcal{N}(I) \lesssim N \eta.$$

(13)

Lower bound on $\sum_\alpha \xi_\alpha$:

Recall $\xi_\alpha = N |a \cdot u_\alpha|^2$, and note that $a$ is independent of $\lambda_\alpha, u_\alpha$. The $\xi_\alpha$'s are not independent, but almost, so their sum has a strong concentration property.

Lemma: Let $g'' < \infty$ or $\mathrm{supp}\, \nu$ be compact. Then

$$\mathbb{P} \left\{ \sum_{\alpha \in A} \xi_\alpha \le \delta |A| \right\} \le e^{-c |A|}.$$

Note

$$\sum_{\alpha \in A} \xi_\alpha = N \sum_{\alpha \in A} |a \cdot u_\alpha|^2 = N |P_A a|^2, \qquad P_A = \text{projection onto}\ \mathrm{span}\{u_\alpha : \alpha \in A\}.$$

Lemma: Let $z = (z_1, \dots, z_N)$, $z_j = x_j + i y_j$, $x_j, y_j \sim d\nu(x)$, and let $P$ be a projection of rank $m$ in $\mathbb{C}^N$. Then

$$\mathbb{E}\, e^{-c (P z, P z)} \le e^{-c' \mathbb{E} (P z, P z)} = e^{-c' m}.$$
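A quick Monte Carlo illustration (my own sketch, taking Gaussian entries for simplicity) of the concentration of $\sum_{\alpha \in A} \xi_\alpha = N |P_A a|^2$ around $|A|$:

```python
import numpy as np

# Monte Carlo look at sum_{alpha in A} xi_alpha = N |P_A a|^2 for a Gaussian
# vector a with E|a_j|^2 = 1/N. Taking P_A = projection onto the first m
# coordinates is enough by rotation invariance in the Gaussian case.
rng = np.random.default_rng(2)
N, m, reps = 400, 50, 20_000                      # m = |A| = rank of P_A
a = (rng.normal(size=(reps, N)) + 1j * rng.normal(size=(reps, N))) / np.sqrt(2 * N)
S = N * (np.abs(a[:, :m]) ** 2).sum(axis=1)       # sum of xi_alpha over A
print(S.mean())                                    # ~ m = |A|
print((S <= 0.5 * m).mean())                       # lower tail ~ e^{-c|A|}, tiny
```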

(14)

Proof of the local semicircle law: Consider the Stieltjes transform

$$m(z) = \int \frac{\varrho(x)\, dx}{x - z}.$$

The Stieltjes transform of the semicircle law satisfies

$$m_{sc}(z) + \frac{1}{m_{sc}(z) + z} = 0.$$

This fixed point equation is stable away from the spectral edge.

Let $m(z)$ be the Stieltjes transform of the empirical density of $H$, and $m^{(k)}(z)$ that of the minor $B^{(k)}$:

$$m(z) = \frac{1}{N}\, \mathrm{Tr}\, \frac{1}{H - z}, \qquad m^{(k)}(z) = \frac{1}{N-1}\, \mathrm{Tr}\, \frac{1}{B^{(k)} - z}.$$

(15)

Then from the expansion

$$m(z) = \frac{1}{N} \sum_{k=1}^N G_z(k,k) = \frac{1}{N} \sum_{k=1}^N \frac{1}{h_{kk} - z - a^{(k)} \cdot \frac{1}{B^{(k)} - z}\, a^{(k)}}$$

we obtain

$$m = \frac{1}{N} \sum_{k=1}^N \frac{1}{h_{kk} - z - \left(1 - \frac{1}{N}\right) m^{(k)} - X_k}$$

with

$$X_k = a^{(k)} \cdot \frac{1}{B^{(k)} - z}\, a^{(k)} - \underbrace{\mathbb{E}_k \left[ a^{(k)} \cdot \frac{1}{B^{(k)} - z}\, a^{(k)} \right]}_{= (1 - \frac{1}{N})\, m^{(k)}} = \frac{1}{N} \sum_{\alpha=1}^{N-1} \frac{\xi_\alpha^{(k)} - 1}{\lambda_\alpha^{(k)} - z}$$

(recall $\xi_\alpha^{(k)} = N |a^{(k)} \cdot u_\alpha^{(k)}|^2$, $\mathbb{E}_k\, \xi_\alpha^{(k)} = 1$).

(16)

$$m = \frac{1}{N} \sum_{k=1}^N \frac{1}{h_{kk} - z - \left(1 - \frac{1}{N}\right) m^{(k)} - X_k}$$

(i) $\mathbb{P}\, \{h_{kk} \ge \varepsilon\} \le e^{-\delta \varepsilon^2 N}$

(ii) By the interlacing property, $m - \left(1 - \frac{1}{N}\right) m^{(k)} = o(1)$

(iii) Lemma: $\mathbb{P}\, \{|X_k| \ge \varepsilon\} \le e^{-c \varepsilon (\log N)^2}$

Then, away from an event of tiny probability, we have

$$m = -\frac{1}{N} \sum_{k=1}^N \frac{1}{m + z + \delta_k},$$

where the random variables $\delta_k$ satisfy $|\delta_k| \le \varepsilon$. From the stability of the equation $m_{sc} = -\frac{1}{m_{sc} + z}$, we get $|m - m_{sc}| \le C \varepsilon$.

(17)

Proof of Lemma: Forget $k$.

$$X = \frac{1}{N} \sum_{\alpha=1}^{N-1} \frac{\xi_\alpha - 1}{\lambda_\alpha - z}, \qquad \xi_\alpha = |b \cdot v_\alpha|^2.$$

With high probability in the probability space of the minor, we have $\#\{\lambda_\alpha \in I\} \le N \eta (\log N)^2$. Fix such an event and play with $a$. Compute

$$\frac{d}{d\beta} \left[ e^{-\beta} \log \mathbb{E}\, e^{e^\beta X} \right] = e^{-\beta}\, \mathbb{E}\, u \log u \le C e^{-\beta}\, \mathbb{E}\, |\nabla \sqrt{u}|^2, \qquad u := \frac{e^{e^\beta X}}{\mathbb{E}\, e^{e^\beta X}},$$

and

$$e^{-\beta}\, \mathbb{E}\, |\nabla \sqrt{u}|^2 \le e^\beta\, \mathbb{E} \left[ u \sum_k \left| \frac{\partial X}{\partial b_k} \right|^2 \right] = \frac{e^\beta}{N^2}\, \mathbb{E} \left[ u \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|^2} \right] \le \frac{e^\beta}{N \eta}\, \mathbb{E}\, [u Y]$$

with $Y = \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|}$.

(18)

$$e^{-\beta}\, \mathbb{E}\, |\nabla \sqrt{u}|^2 \le \frac{e^\beta}{N \eta}\, \mathbb{E}\, [u Y], \qquad Y = \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|}$$

Use the entropy inequality

$$\mathbb{E}\, [u Y] \le \gamma^{-1}\, \mathbb{E}\, u \log u + \gamma^{-1} \log \mathbb{E}\, e^{\gamma Y}$$

(with the optimal $\gamma \sim e^\beta / N \eta$) and log-Sobolev once more to bound $\mathbb{E}\, u \log u$; this yields

$$\frac{d}{d\beta} \left[ e^{-\beta} \log \mathbb{E}\, e^{e^\beta X} \right] \lesssim e^{-\beta} \log \mathbb{E}\, e^{\gamma Y}.$$

Integrate this inequality from $-\infty$ to $\beta_0 \sim \frac{1}{2} \log(N \eta) - 2 \log \log N$.

The boundary term at $\beta = -\infty$ vanishes since $\mathbb{E}\, X = 0$; thus

(19)

$$\log \mathbb{E}\, e^{e^{\beta_0} X} \le \mathbb{E}\, e^{\delta Y}, \qquad Y = \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|}$$

with $\delta \sim 1/(\log N)^4 \ll 1$.

Since $\xi_\alpha = |b \cdot v_\alpha|^2$ has a finite exponential moment, if there are not too many $\lambda_\alpha$ near $E$, then $Y$ has a finite exponential moment for small $\delta$.

This controls the exponential moment of $X$.

(20)

EXTENDED STATES: EIGENVECTOR DELOCALIZATION

No concept of absolutely continuous spectrum.

$v \in \mathbb{C}^N$, $\|v\|_2 = 1$ is extended if $\|v\|_p \sim N^{\frac{1}{p} - \frac{1}{2}}$ for $p \ne 2$.

E.g. for GUE, all eigenvectors have $\|v\|_4 \sim N^{-1/4}$ (by symmetry). Does this hold in general for Wigner matrices? [T. Spencer]

Our Theorem 2 answers this in the strongest possible norm, with log corrections, for all eigenvectors (away from the edge).

Theorem 2: Fix $\kappa > 0$. Then

$$\mathbb{P}\left\{ \exists\, v:\ \|v\|_2 = 1,\ H v = \mu v,\ |\mu| \le 2 - \kappa,\ \|v\|_\infty \ge \frac{(\log N)^{5}}{N^{1/2}} \right\} \le e^{-c (\log N)^2}$$
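Again as an illustrative sketch (mine, GUE case, assuming numpy), the $N^{-1/2}$ sup-norm scaling of bulk eigenvectors can be checked numerically:

```python
import numpy as np

# Sup norms of bulk eigenvectors of a GUE matrix: Theorem 2 predicts
# ||v||_inf = O(N^{-1/2}) up to log factors away from the spectral edge.
N = 1000
rng = np.random.default_rng(1)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / (2 * np.sqrt(N))
vals, vecs = np.linalg.eigh(H)                    # columns are unit eigenvectors
bulk = np.abs(vals) <= 2 - 0.1                    # stay away from the edges
sup_norms = np.abs(vecs[:, bulk]).max(axis=0)
print(sup_norms.max() * np.sqrt(N))               # O(1) up to logs, far below sqrt(N)
```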

(21)

Proof: Decompose as before $H = \begin{pmatrix} h & a^* \\ a & B \end{pmatrix}$.

Let $H v = \mu v$ and $v = (v_1, w)$, $w \in \mathbb{C}^{N-1}$. Then

$$h v_1 + a \cdot w = \mu v_1, \qquad a v_1 + B w = \mu w \implies w = (\mu - B)^{-1} a\, v_1.$$

From the normalization $1 = \|w\|^2 + |v_1|^2$ we have

$$|v_1|^2 = \frac{1}{1 + \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{(\mu - \lambda_\alpha)^2}} \le \frac{1}{\frac{1}{N} \frac{1}{(q/N)^2} \sum_{\alpha \in A} \xi_\alpha}, \qquad (\xi_\alpha := N |a \cdot u_\alpha|^2),$$

where recall $\lambda_\alpha, u_\alpha$ are the eigenvalues and eigenvectors of $B$, and let

$$A = \left\{ \alpha : |\lambda_\alpha - \mu| \le \frac{q}{N} \right\}, \qquad q \sim (\log N)^8.$$

The concentration inequality and the lower bound on the local DOS imply

$$\sum_{\alpha \in A} \xi_\alpha \ge c |A| \ge c q$$

with very high probability; thus

$$|v_1|^2 \lesssim \frac{q}{N} \implies \|v\|_\infty \le N^{-1/2} \text{ modulo logs.}$$

(22)

SUMMARY

• All results for general Wigner matrices, no Gaussian formulas

• We established the semicircle law for the DOS on scale $\frac{(\log N)^8}{N}$ (optimal modulo logs)

• All eigenvectors are fully delocalized away from the spectral edges. Optimal estimate on the sup norm (modulo logs)

OPEN QUESTIONS:

• Are all conditions necessary (strong decay plus log-Sobolev)?

• Wigner-Dyson distribution of level spacing [DREAM...]

(23)

Large Deviations, Small Default Probabilities and

Importance Sampling

Chuan-Hsiang Han

Dept. of Quantitative Finance, NTHU

TIMS, July 10, 2008

(24)

Outline

• Credit Derivatives: market data and issues

• Approach I - reduced form: copula method

• Approach II - structural form: first passage time problem

• Modification: stochastic correlation

• Conclusions and future works

(25)

Introduction to Credit Derivatives

• A contract between two parties whose value is contingent on the creditworthiness of the underlying asset(s).

• Single-name: only one reference asset, like CDS (Credit Default Swaps).

• Multi-name: several assets in one basket, like CDO (Collateralized Debt Obligations) or BDS (Basket Default Swaps).

(26)

Credit Default Swap

[Chart: Credit Default Swap outstandings (USD), 2001-2007. Source: ISDA Market Survey.]

(27)

[Chart. Source: Securities Industry and Financial Markets Association.]

(28)

An Example: Credit Swap Evaluation

$$\text{premium} = \frac{\mathbb{E}\, \{(1 - R) \times B(0, \tau) \times I(\tau < T)\}}{\mathbb{E}\, \left\{ \sum_{j=1}^N \Delta_{j-1,j} \times B(0, t_j) \times I(\tau > t_j) \right\}}$$

Notation: $\tau$: default time, $R$: recovery rate, $B(0, t)$: discount factor, $\Delta_{j-1,j}$: time increment.

(29)

Some Mathematical Issues

• Modeling default times

• Modeling correlations between default times

• Estimating joint default probabilities: a rare event in high dimension

(30)

Approaches to Modeling Default Times

• Intensity-Based (Reduced Form)

View the firm's default as extraneous, modeling the hazard rate of the firm:

$$\mathbb{P}\, (\tau \le t) = F(t) = 1 - \exp\left( -\int_0^t h(s)\, ds \right).$$

• Asset Value-Based (Structural Form)

First passage time problem, in 2-d:

$$\begin{cases} dS_{1t} = \mu_1 S_{1t}\, dt + \sigma_1 S_{1t}\, dW_{1t} \\ dS_{2t} = \mu_2 S_{2t}\, dt + \sigma_2 S_{2t}\, d\left( \rho\, W_{1t} + \sqrt{1 - \rho^2}\, W_{2t} \right) \end{cases}$$

Joint default occurs if $S_{1t} < B_1$ and $S_{2t} < B_2$ for some $t \le T$.

(31)

Reduced Form Approach: Copula Method

Default times modeling: $\{\tau_i = F_i^{-1}(U_i)\}_{i=1}^n$, where the $U_i$'s are (standard) uniform random variables.

An $n$-dimensional copula is a distribution function on $[0,1]^n$ with uniform marginal distributions.

Through a copula function, one can build up correlations between default times.

References: Cherubini, Luciano, Vecchiato (2004); Nelsen.

(32)

Gaussian Copula

• Li (2000) introduced the Gaussian copula

$$C(u_1, u_2, \cdots, u_n; \Sigma) = \Phi_\Sigma\left( \Phi^{-1}(u_1), \Phi^{-1}(u_2), \cdots, \Phi^{-1}(u_n) \right),$$

where $\Sigma$ denotes the variance-covariance matrix.

• Laurent and Gregory (2003) introduced the Gaussian factor copula, reducing the number of parameters from $O(n^2)$ to $O(n)$.

• Easy to compute, but lacking economic sense.

(33)

Structural Form Approach: Review

• Merton (1974) applied Black-Scholes option theory (1973). Default can only happen at maturity.

• Black and Cox (1976) proposed the first passage time problem (1-dim) to model the default event.

• Zhou (2001) extended it to the 2-dim case.

(34)

Credit Risk Modeling: Structural Form Approach

Multi-name dynamics: for $1 \le i \le n$,

$$dS_{it} = \mu_i S_{it}\, dt + \sigma_i S_{it}\, dW_{it}, \qquad d\langle W_{it}, W_{jt} \rangle = \rho_{ij}\, dt.$$

Each default time $\tau_i$ for the $i$th name is defined as $\tau_i = \inf\{t \ge 0 : S_{it} \le B_i\}$, where $B_i$ denotes the $i$th debt level.

The $i$th default event is defined as $\{\tau_i \le T\}$.

(35)

Joint Default Probability: First Passage Time Problem

Q: How to compute, for any finite number $n$ of names,

$$DP = \mathbb{E}\, \left\{ \Pi_{i=1}^n\, I_{(\tau_i \le T)} \,\middle|\, \mathcal{F}_t \right\}?$$

Explicit formulas exist for the 1- and 2-name cases so far (and none under stochastic correlation/volatility).

(36)

Multi-Dimensional Girsanov Theorem

Given a Radon-Nikodym derivative

$$\frac{d\mathbb{P}}{d\tilde{\mathbb{P}}} = Q_T^h = e^{\int_0^T h(s, S_s) \cdot d\tilde{W}_s - \frac{1}{2} \int_0^T \|h(s, S_s)\|^2\, ds},$$

$\tilde{W}_t = W_t + \int_0^t h(s, S_s)\, ds$ is a vector of Brownian motions under $\tilde{\mathbb{P}}$. Thus

$$DP = \tilde{\mathbb{E}}\, \left\{ \Pi_{i=1}^n\, I_{(\tau_i \le T)}\, Q_T^h \right\}.$$

If $h = -\frac{1}{DP}\, \sigma^T\, \nabla DP$, the new estimator has zero variance.

(37)

Monte Carlo Simulations: Importance Sampling

An importance sampling method is to select a constant vector $h = (h_1, \cdots, h_n)$ satisfying the following $n$ conditions:

$$\tilde{\mathbb{E}}\, \{S_{iT} \,|\, \mathcal{F}_0\} = B_i, \qquad i = 1, \cdots, n.$$

Each $h_i$ can be uniquely determined from the linear system

$$\sum_{j=1}^{i} \rho_{ij}\, h_j = \frac{\mu_i}{\sigma_i} - \frac{\ln(B_i / S_{i0})}{\sigma_i\, T}, \qquad i = 1, \cdots, n.$$

(38)

Trajectories under Different Measures: Single Name Case

[Figure: sample trajectories under the original and the importance-sampling measures.]

(39)

Single Name Default Probability

B    | BMC             | Exact Sol | Importance Sampling
50   | 0.0886 (0.0028) | 0.0945    | 0.0890 (0.0016)
20   | 0 (0)           | 7.7e-5    | 7.2e-5 (2.3e-6)
1    | 0 (0)           | 1.3e-30   | 1.8e-30 (3.4e-31)

The number of simulations is $10^4$ and the Euler discretization takes time step size $T/400$, where $T$ is one year. Other parameters are $S_0 = 100$, $\mu = 0.05$ and $\sigma = 0.4$.
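To make the scheme concrete, here is a sketch (my reconstruction, assuming numpy/scipy) of the single-name experiment above: simulate under the shifted drift, reweight by the Radon-Nikodym derivative, and compare against the standard first-passage formula behind the "Exact Sol" column:

```python
import numpy as np
from scipy.stats import norm

def exact_default_prob(S0, B, mu, sigma, T):
    # Closed-form P(min_{t<=T} S_t <= B) for geometric Brownian motion
    nu, b = mu - 0.5 * sigma**2, np.log(B / S0)
    s = sigma * np.sqrt(T)
    return norm.cdf((b - nu * T) / s) + np.exp(2 * nu * b / sigma**2) * norm.cdf((b + nu * T) / s)

def is_default_prob(S0=100.0, B=50.0, mu=0.05, sigma=0.4, T=1.0,
                    n_paths=10_000, n_steps=400, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    h = mu / sigma - np.log(B / S0) / (sigma * T)   # drift shift from the slides
    logS = np.full(n_paths, np.log(S0))
    W = np.zeros(n_paths)                            # Brownian motion under the new measure
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        W += dW
        logS += (mu - 0.5 * sigma**2 - sigma * h) * dt + sigma * dW
        hit |= logS <= np.log(B)
    weight = np.exp(h * W - 0.5 * h**2 * T)          # restores the original measure
    est = hit * weight
    return est.mean(), est.std() / np.sqrt(n_paths)

print(exact_default_prob(100, 50, 0.05, 0.4, 1.0))   # ~0.0945, as in the table
print(is_default_prob())                              # ~0.089 with a small std error
```

With discrete monitoring at $T/400$, the estimate slightly undershoots the continuous-barrier value, consistent with the table.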

(40)

Three-Names Joint Default Probability

rho  | BMC                | Importance Sampling
0.3  | 0.0049 (6.98e-4)   | 0.0057 (1.95e-4)
0    | 3.00e-4 (1.73e-4)  | 6.40e-4 (6.99e-5)
-0.3 | 0 (0)              | 2.25e-5 (1.13e-5)

Parameters are $S_{10} = S_{20} = S_{30} = 100$, $\mu_1 = \mu_2 = \mu_3 = 0.05$, $\sigma_1 = \sigma_2 = 0.4$, $\sigma_3 = 0.3$, and $B_1 = B_2 = 50$, $B_3 = 60$. Standard errors are shown in parentheses.

Note the effect of correlation! The debt-to-asset ratios ($B_i / S_{i0}$) are not small.

(41)

We propose an algorithm to compute the joint default probability. In fact, the choice of our new measure is optimal in large deviations theory.

(42)

Large Deviations Theory:

Cramér's Theorem

Let $\{X_i\}$ be real-valued i.i.d. random variables under $\mathbb{P}$ with $\mathbb{E}\, X_1 < \infty$. For any $x \ge \mathbb{E}\, X_1$, we have

$$\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{P}\, \left\{ \frac{S_n}{n} \ge x \right\} = -\Gamma^*(x) = -\inf_{y \ge x} \Gamma^*(y).$$

1. $S_n = \sum_{i=1}^n X_i$: the sample sum

2. $\Gamma(\theta) = \ln \mathbb{E}\, [e^{\theta X_1}]$: the cumulant function

3. $\Gamma^*(x) = \sup_{\theta \in \mathbb{R}} [\theta x - \Gamma(\theta)]$: the Legendre transform of $\Gamma$ (also called the rate function).

(43)

Tie to Importance Sampling

Define an exponential change of measure $\mathbb{P}_\theta$ by

$$\frac{d\mathbb{P}_\theta}{d\mathbb{P}} = \exp\left( \theta S_n - n \Gamma(\theta) \right),$$

$$p_n := \mathbb{P}\, \left\{ \frac{S_n}{n} \ge x \right\} = \mathbb{E}_\theta \left[ I_{\frac{S_n}{n} \ge x} \exp\left( -\theta S_n + n \Gamma(\theta) \right) \right].$$

The optimal second moment $M_2^n(\theta, x)$ of the new estimator can be shown to satisfy

$$M_2^n(\theta_x, x) \approx p_n^2, \qquad \text{where } \Gamma^*(x) = \theta_x x - \Gamma(\theta_x).$$

Under the optimal measure, the event is not rare any more! (Note: $\mathbb{E}_{\theta_x} [S_n / n] = x$.)
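A toy version of this tilting (my sketch, standard normal increments so that $\Gamma(\theta) = \theta^2/2$ and $\theta_x = x$, assuming numpy/scipy):

```python
import numpy as np
from scipy.stats import norm

# Exponentially tilted estimator of p_n = P(S_n/n >= x) for i.i.d. N(0,1):
# sample under P_theta (mean shifted to theta_x = x), then unweight with
# exp(-theta*S_n + n*Gamma(theta)), Gamma(theta) = theta^2/2.
def tilted_tail_prob(x=0.5, n=100, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = x                                     # Gamma'(theta_x) = x
    S = rng.normal(theta, 1.0, (n_sim, n)).sum(axis=1)
    est = (S / n >= x) * np.exp(-theta * S + n * theta**2 / 2)
    return est.mean(), est.std() / np.sqrt(n_sim)

print(tilted_tail_prob())                         # ~2.9e-7 with a tiny std error
print(norm.cdf(-0.5 * np.sqrt(100)))              # exact value Phi(-x sqrt(n))
```

Plain Monte Carlo would essentially never see this event at $10^5$ samples; under the tilted measure it has probability about 1/2.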

(44)

Large Deviation Principle (LDP)

An $\mathcal{X}$-valued sequence $\{Z^\varepsilon\}_\varepsilon$ defined on $(\Omega, \mathcal{F}, \mathbb{P})$ satisfies an LDP with rate function $I$ if:

(1) Upper bound: for any closed subset $F$ of $\mathcal{X}$, $\limsup_{\varepsilon \to 0} \varepsilon \ln \mathbb{P}\, [Z^\varepsilon \in F] \le -\inf_{x \in F} I(x)$.

(2) Lower bound: for any open subset $G$ of $\mathcal{X}$, $\liminf_{\varepsilon \to 0} \varepsilon \ln \mathbb{P}\, [Z^\varepsilon \in G] \ge -\inf_{x \in G} I(x)$.

If $F \subseteq \mathcal{X}$ is such that $\inf_{x \in F^\circ} I(x) = \inf_{x \in \bar{F}} I(x) =: I_F$, then

$$\lim_{\varepsilon \to 0} \varepsilon \ln \mathbb{P}\, [Z^\varepsilon \in F] = -I_F.$$

(45)

Freidlin-Wentzell Theorem

The solution of

$$dX_t^\varepsilon = b(X_t^\varepsilon)\, dt + \sqrt{\varepsilon}\, \sigma(X_t^\varepsilon)\, dW_t, \qquad X_0^\varepsilon = x,$$

satisfies an LDP with rate function

$$I(f) = \frac{1}{2} \int_0^T \left\langle \dot{f}(t) - b(f(t)),\ a^{-1}(f(t)) \left( \dot{f}(t) - b(f(t)) \right) \right\rangle dt$$

for nice functions $f$, and $I(f) = \infty$ otherwise, where $a(x) = \sigma(x)\, \sigma(x)^T$.

(46)

Single-Name Default Prob. Approximation

$$\mathbb{P}\left\{ \inf_{0 \le t \le T} S_t = S_0\, e^{\left( \mu - \frac{\sigma^2}{2} \right) t + \sigma W_t} \le B \right\} = \mathbb{E}\left[ I\left( \inf_{0 \le t \le T} \varepsilon \left( \mu - \frac{\sigma^2}{2} \right) t + \varepsilon \sigma W_t \le -1 \right) \right] := P^\varepsilon$$

(scaling by $\ln(B/S_0) = -\frac{1}{\varepsilon}$)

$$P^\varepsilon \approx \exp\left( \frac{-1}{2 \sigma^2 T\, \varepsilon^2} \right) \quad (\text{by the F-W theorem}).$$

(47)

Importance Sampling: 2nd Moment Approximation

$$\tilde{\mathbb{E}}\left[ I\left( \inf_{0 \le t \le T} S_t \le B \right) e^{2 h \tilde{W}_T - h^2 T} \right], \qquad S_t = S_0\, e^{\left( \mu - \frac{\sigma^2}{2} - \sigma h \right) t + \sigma \tilde{W}_t}, \quad h = \frac{\mu}{\sigma} - \frac{\ln B/S_0}{\sigma T},$$

$$= \hat{\mathbb{E}}\left[ I\left( \inf_{0 \le t \le T} S_0\, e^{\left( \mu - \frac{\sigma^2}{2} + \sigma h \right) t + \sigma \hat{W}_t} \le B \right) \right] e^{h^2 T}$$

(48)

2nd Moment Approximation (Cont.)

$$= \hat{\mathbb{E}}\left[ I\left( \inf_{0 \le t \le T} \left( \varepsilon \left( 2\mu - \frac{\sigma^2}{2} \right) + \frac{1}{T} \right) t + \varepsilon \sigma \hat{W}_t \le -1 \right) \right] \times e^{\left( \frac{\mu}{\sigma} + \frac{1}{\varepsilon \sigma T} \right)^2 T} := M_2^\varepsilon$$

(scaling by $\ln(B/S_0) = -\frac{1}{\varepsilon}$)

$$\approx \exp\left( \frac{-1}{\varepsilon^2 \sigma^2 T} \right) \quad (\text{by the F-W theorem}).$$

Theorem: Since $M_2^\varepsilon \approx (P^\varepsilon)^2$, we observe the optimality of the chosen measure.

(49)

The Optimal Variance Reduction: Numerical Evidence

[Figure: variance-reduction comparison vs. $P_B(T)$, $B = 15$ (log scale).]

(50)

A Modification: Stochastic Correlation

$$\begin{cases} dS_t^1 = r S_t^1\, dt + \sigma_1 S_t^1\, dW_t^1 \\ dS_t^2 = r S_t^2\, dt + \sigma_2 S_t^2 \left( \rho(Y_t)\, dW_t^1 + \sqrt{1 - \rho^2(Y_t)}\, dW_t^2 \right) \\ dY_t = \frac{1}{\delta} (m - Y_t)\, dt + \frac{\sqrt{2}\, \beta}{\sqrt{\delta}}\, dZ_t \end{cases}$$

Joint default probability:

$$P^\delta(t, x_1, x_2, y) := \mathbb{E}_{x_1, x_2, y} \left[ \Pi_i\, I_{\{\min_{t \le u \le T} S_u^i \le B_i\}} \right]$$

In this case, the construction of our IS method fails!

(51)

Full Expansion of $P^\delta$

Theorem

$$P^\delta(t, x_1, x_2, y) = \sum_{i=0}^\infty \delta^i\, P_i(t, x_1, x_2, y),$$

where the $P_i$'s can be obtained recursively and the $y$ variable can be factored out (separates).

Proof: by means of singular perturbation techniques. Accuracy results are ensured given smoothness.

Leading Order Term

$P_0(t, x_1, x_2)$ solves the homogenized ($y$-independent) PDE

$$\left( \mathcal{L}_{1,0} + \bar{\rho}\, \mathcal{L}_{1,1} \right) P_0(t, x_1, x_2) = 0,$$

where $\bar{\rho} = \langle \rho(y) \rangle$, the average taken with respect to the invariant measure of $Y$. The differential operators are

$$\mathcal{L}_{1,0} = \frac{\partial}{\partial t} + \sum_{i=1}^2 \frac{\sigma_i^2 x_i^2}{2} \frac{\partial^2}{\partial x_i^2} + \sum_{i=1}^2 \mu_i x_i \frac{\partial}{\partial x_i}, \qquad \mathcal{L}_{1,1} = \sigma_1 \sigma_2 x_1 x_2 \frac{\partial^2}{\partial x_1 \partial x_2}.$$

(53)

Other Terms

$$P_{n+1}(t, x_1, x_2, y) = \sum_{\substack{i+j = n+1 \\ i \ge 0,\, j \ge 1}} \varphi_{i,j}^{(n+1)}(y)\, \mathcal{L}_{1,0}^i\, \mathcal{L}_{1,1}^j\, P_n,$$

where a sequence of Poisson equations must be solved:

$$\mathcal{L}_0\, \varphi_{i+1,j}^{(n+1)}(y) = \varphi_{i,j}^{(n)}(y) - \langle \varphi_{i,j}^{(n)}(y) \rangle,$$

$$\mathcal{L}_0\, \varphi_{i,j+1}^{(n+1)}(y) = \rho(y)\, \varphi_{i,j}^{(n)}(y) - \langle \rho\, \varphi_{i,j}^{(n)} \rangle,$$

where $\mathcal{L}_0 = \beta^2 \frac{\partial^2}{\partial y^2} + (m - y) \frac{\partial}{\partial y}$.

(54)

Numerical Result I: Stochastic Correlation

alpha = 1/delta | BMC            | Importance Sampling
0.1             | 0.0037 (6e-4)  | 0.0032 (1e-4)
1               | 0.0074 (9e-4)  | 0.0065 (2e-4)
10              | 0.0112 (1e-3)  | 0.0116 (4e-4)
50              | 0.0163 (1e-3)  | 0.0137 (5e-4)
100             | 0.016 (1e-3)   | 0.0132 (4e-4)

Parameters are $S_{10} = S_{20} = 100$, $B_1 = 50$, $B_2 = 40$, $m = \pi/4$, $\nu = 0.5$, $\rho(y) = |\sin(y)|$.

Using homogenization in IS; note the effect of correlation.

(55)

Numerical Result II: Stochastic Correlation

alpha = 1/delta | BMC           | Importance Sampling
0.1             | 0 (0)         | 9.1e-7 (7e-8)
1               | 0 (0)         | 7.5e-6 (6e-7)
10              | 0 (0)         | 2.4e-5 (2e-6)
50              | 1e-4 (1e-4)   | 2.9e-5 (3e-6)
100             | 1e-4 (1e-4)   | 2.7e-5 (2e-6)

Parameters are $S_{10} = S_{20} = 100$, $B_1 = 30$, $B_2 = 20$, $m = \pi/4$, $\nu = 0.5$.

Note the effect of correlation.

(56)

Conclusion

• Credit risk models were introduced.

• A simple yet efficient importance sampling method was proposed, justified by large deviations theory.

• Full expansion of the joint default probability under stochastic correlation, and its application to importance sampling.

(57)

Future Works

• Generalization to stochastic volatility models.

• Risk management of credit portfolios.

• Similar variance analysis for Gaussian copula models.

• Homogenization in large deviations.

(58)

Acknowledgment

• S.-J. Sheu, N.-R. Shieh, Doug Vestal (in name order)

• NCTS (Taipei Office)

• TIMS, NTU.

• NSC.

(59)

Thank You!

(60)

Weak Brownian Motion and its Applications

Wu, Ching-Tang

Department of Applied Mathematics, National Chiao Tung University

July 10, 2008, National Taiwan University

(61)

Outline

1 Motivation

2 Weak Brownian Motions

3 Martingale Marginal Property

4 Wiener Chaos

5 Future Works

6 References

(62)

Motivation

Pricing Formula

In a financial model with interest rate 0, stock price process $(S_t)$ and risk-neutral probability measure $P$, the price of a European call option at time 0 is given by

$$\pi(K, T) = E(S_T - K)^+, \qquad (1)$$

where $T$ is the maturity and $K$ is the strike price.

Two methods to discuss it: stochastic analysis and dynamic analysis.


(64)

Motivation

Relation between $\pi$ and $S_t$

Method 1: Stochastic Analysis

$u_t$: marginal law of $S_t$ under $P$. Then $\pi$ is determined by the distribution function and the partial moment, and

$$\pi_{xx}(\cdot, t) = u_t \text{ with density function } p(\cdot, t).$$

Method 2: Dynamic Analysis

Suppose $S_t$ satisfies $dS_t = S_t (b_t\, dt + \sigma(S_t, t)\, dW_t)$; then we have the Dupire equation

$$\pi_t = \frac{1}{2} x^2 \sigma^2(x, t)\, \pi_{xx} - x \sigma_t \pi_x. \qquad (2)$$

Solve it!


(66)

Motivation

Another Point of View

Consider

$$\pi(K, T) = E(S_T - K)^+,$$

where $(S_t)$ is a martingale with respect to $P$.

Breeden and Litzenberger (1978) and Dupire (1997) show that

$$P(S_T > K) = -\frac{\partial}{\partial K^+} \pi(K, T),$$

where $\frac{\partial}{\partial K^+} \pi(K, t)$ means the right-derivative of $\pi$ with respect to $K$.

Question

Does there exist a stochastic process whose marginal (or k-marginal) is identical to the marginal of $(S_t)$?


(69)

Motivation

Stoyanov's Conjecture

There exists a stochastic process $X$ with $X_0 = 0$ satisfying:

1. $X_t - X_s \sim N(0, t - s)$ for all $s < t$;

2. $X_{t_2} - X_{t_1}$ and $X_{t_4} - X_{t_3}$ are independent for $0 \le t_1 < t_2 \le t_3 < t_4$;

but $X$ is not a Brownian motion.

Thus, we aim to see whether there exists a stochastic process whose marginal (or k-marginal) is identical to the marginal of a Brownian motion, but which is not a Brownian motion.


(71)

Weak Brownian Motions

Definition

A stochastic process $X$ is called a weak Brownian motion of order $k$ if for all $(t_1, t_2, \dots, t_k)$,

$$(X_{t_1}, X_{t_2}, \dots, X_{t_k}) \stackrel{\text{(law)}}{=} (B_{t_1}, B_{t_2}, \dots, B_{t_k}),$$

where $B$ is a Brownian motion.

Another formulation:

$$E[f_1(X_{t_1}) \cdots f_k(X_{t_k})] = E[f_1(B_{t_1}) \cdots f_k(B_{t_k})]$$

for $f_1, \dots, f_k \in C_0^1(\mathbb{R})$.

Stoyanov's Conjecture

There exists a weak Brownian motion of order 4 which differs from Brownian motion.


(74)

Weak Brownian Motions

Main Results

Theorem (Föllmer-W.-Yor (2000))

Let $k \in \mathbb{N}$. There exists a process $(X_t)_{0 \le t \le 1}$ which is not a Brownian motion such that the $k$-dimensional marginals of $X$ are identical to those of Brownian motion.

Theorem

For every $\varepsilon > 0$, there exists a probability measure $Q \ne P$ on $C([0,1])$ with either

1. $Q \approx P$, or

2. $Q \perp P$,

which satisfies

$$Q = P \quad \text{on } \mathcal{F}_J = \sigma(X_t : t \in J)$$

for any $J \subseteq [0,1]$ such that $J^c$ contains some interval of length $\varepsilon$.


(76)

Weak Brownian Motions

Properties

Proposition

Let $X$ be a weak Brownian motion of order $k$.

1. If $k \ge 2$, then $X$ has a continuous version. Moreover, if $X$ is a Gaussian process, then $X$ is a Brownian motion.

2. If $k \ge 4$, then $\langle X \rangle_t = t$. Moreover, if $X$ is a martingale, then $X$ is a Brownian motion.

Remark

A weak Brownian motion need not be a martingale, e.g.

$$X_t = \begin{cases} W_t, & t \le 1/2, \\ W_{1/2} + (\sqrt{2} - 1)\, W_{t - 1/2}, & t > 1/2. \end{cases}$$

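A Monte Carlo check of this example (my sketch; with the formula as reconstructed above, $\mathrm{Var}(X_t) = t$ for all $t \le 1$, so the one-dimensional marginals are those of Brownian motion, while increments still correlate with the past):

```python
import numpy as np

# X_t = W_t for t <= 1/2 and X_t = W_{1/2} + (sqrt(2)-1) W_{t-1/2} for t > 1/2.
rng = np.random.default_rng(0)
n = 200_000
dW = rng.normal(0.0, 0.5, (n, 4))         # W on the grid k/4; each step has sd 1/2
W = dW.cumsum(axis=1)                     # W[:, k-1] = W_{k/4}
c = np.sqrt(2) - 1
X_half = W[:, 1]                          # X_{1/2} = W_{1/2}
X_34   = W[:, 1] + c * W[:, 0]            # X_{3/4} = W_{1/2} + c W_{1/4}
X_1    = W[:, 1] + c * W[:, 1]            # X_1     = W_{1/2} + c W_{1/2}
print(X_34.var(), X_1.var())              # ~0.75 and ~1.0: N(0, t) marginals
print(np.mean((X_1 - X_34) * X_half))     # ~(sqrt(2)-1)/4 > 0: not a Brownian motion
```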

(78)

Weak Brownian Motions

Itô Integral

If $X$ is a continuous weak Brownian motion of order $k \ge 1$ whose paths have quadratic variation $\langle X \rangle_t = t$, then the Itô integral

$$\int_0^t f(X_s)\, dX_s$$

exists as a pathwise limit of non-anticipating Riemann sums along dyadic partitions for any bounded $f \in C^1$, and satisfies Itô's formula even though $X$ may not be a semimartingale; see Föllmer (1981). Moreover,

$$E\left[ \int_0^t f(X_s)\, dX_s \right] = 0,$$

which may be viewed as a weak form of the martingale property.


(80)

Weak Brownian Motions

Characterization

1. $W$ is a Brownian motion if and only if there exist an orthonormal basis $(\varphi_n)$ of $L^2([0,1])$ and a sequence of i.i.d. $N(0,1)$-distributed random variables $(\xi_n)$ such that

$$W_t = \sum_{n=1}^\infty \left( \int_0^t \varphi_n(u)\, du \right) \xi_n.$$

2. $X$ is a weak Brownian motion of order $k$ if and only if there exist an orthonormal basis $(\varphi_n)$ of $L^2([0,1])$ and a sequence of uncorrelated $N(0,1)$-distributed random variables $(\eta_n)$ such that

$$\sum_{n=1}^\infty \left( \lambda_1 \int_0^{t_1} \varphi_n(u)\, du + \cdots + \lambda_k \int_0^{t_k} \varphi_n(u)\, du \right) \eta_n$$

is Gaussian for all $\lambda_1, \dots, \lambda_k \in \mathbb{R}$, $t_1 \le t_2 \le \cdots \le t_k$, and

$$X_t = \sum_{n=1}^\infty \left( \int_0^t \varphi_n(u)\, du \right) \eta_n.$$


(82)

Martingale Marginal Property

Martingale Marginal

Definition

The family of densities $Q = \{q(x, t) : t > 0\}$ has the martingale marginal property if there exists a probability space on which one may define a martingale $(M_t)$ such that for every $t$, the law of $M_t$ is given by the density $q(\cdot, t)$.

Theorem (Strassen (1965))

A family of probability measures $(\mu_n)_{n \ge 0}$ has the martingale marginal property if and only if for all $n \ge 0$,

$$\int |x|\, \mu_n(dx) < \infty,$$

and for any concave $\mu_n$-integrable function $\psi$, the sequence

$$\left( \int \psi(x)\, \mu_n(dx) \right)$$

is non-increasing (the values of the integrals may be $-\infty$).


(84)

Martingale Marginal Property

Generalizations

Remark

Doob (1968) proved the continuous version of the above result.

Theorem (Rothschild and Stiglitz (1970, 1971))

$\{q(x, t) : t > 0\}$ has the martingale marginal property if and only if for all $K$ and for all $T_1 \le T_2$,

$$\int_0^\infty S\, q(S, T_2)\, dS \le \int_0^\infty S\, q(S, T_1)\, dS,$$

$$\int_0^\infty (S - K)^+\, q(S, T_2)\, dS \ge \int_0^\infty (S - K)^+\, q(S, T_1)\, dS.$$

Remark

This concept is related to stochastic orders; see Föllmer and Schied (2004).

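As a small worked example (my own sketch, assuming numpy/scipy), the Gaussian family $N(0, t)$, i.e. the marginals of Brownian motion, passes the analogous convex-order check: the means are constant in $t$ and $E(X_t - K)^+$ is non-decreasing in $t$:

```python
import numpy as np
from scipy.stats import norm

# Convex-order check for the Brownian family N(0, t): E(X_t - K)+ is
# non-decreasing in t (and means are constant), so by the criteria above
# these marginals admit a martingale -- here, Brownian motion itself.
def call_moment(t, K):
    s = np.sqrt(t)
    return s * norm.pdf(K / s) - K * norm.cdf(-K / s)   # E(X - K)+, X ~ N(0, t)

for K in (-1.0, 0.0, 1.5):
    vals = [call_moment(t, K) for t in (0.5, 1.0, 2.0, 4.0)]
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
    print(K, np.round(vals, 4))
```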

(87)

Martingale Marginal Property

Markov Martingales

Question

Given a family of densities $\{q(x, t) : t > 0\}$, does there exist a probability space on which one can define a Markov martingale $(M_t)$ such that for every $t$, the law of $M_t$ is given by the density $q(\cdot, t)$?

Theorem (Kellerer (1972))

1. Let $\{q(x, t) : t > 0\}$ be a family of marginal densities, with finite first moment, such that for $s < t$,

$$\int f(x)\, q(x, t)\, dx \ge \int f(x)\, q(x, s)\, dx$$

for all convex non-decreasing functions $f$; then there exists a Markov submartingale $(M_t)$ with marginal densities $\{q(\cdot, t) : t > 0\}$.

2. Furthermore, if the means are independent of $t$, then $(M_t)$ is a Markov martingale.

(89)

Martingale Marginal Property

Constructions

Define the family of barycentre functions

$$\psi(x, t) = \frac{\int_x^\infty y\, q(y, t)\, dy}{\int_x^\infty q(y, t)\, dy}.$$

Suppose $\psi(x, t)$ is increasing in $t$ and $q(x, t)$ is a family of zero-mean densities.

Theorem (Madan and Yor (2002))

Let $(B_t)$ be a standard Brownian motion. Define a stopping time

$$\tau_t = \inf\left\{ s : \sup_{0 \le u \le s} B_u \ge \psi(B_s, t) \right\}.$$

Then $M_t := B_{\tau_t}$ is an inhomogeneous Markov martingale with density $q$.

(91)

Wiener Chaos

Consequence of the Main Results

Let $X$ represent the coordinate process and $L^2(P) = L^2(C([0,1]), P)$.

Notation

For every $k \in \mathbb{N}$, define

$$\Pi_k := \left\{ \prod_{i=1}^k f_i(X_{t_i}) : t_1 < \cdots < t_k \le 1,\ f_i \text{ bounded, Borel measurable} \right\}.$$

Corollary

For every $k$, $\Pi_k$ is not total in $L^2(P)$.

(93)

Wiener Chaos

Decomposition

Notation

For every $n \in \mathbb{N}$, define

$$\mathcal{K}_0 := \Pi_0 = \mathbb{R}, \qquad \mathcal{K}_{n+1} = \bar{\Pi}_{n+1} \cap \bar{\Pi}_n^\perp,$$

where $\perp$ denotes the orthogonality relation in $L^2(P)$.

Lemma

$$L^2(P) = \bigoplus_{n \ge 0} \mathcal{K}_n.$$

Remark

$\mathcal{K}_n$ is called the $n$th time-space Wiener chaos.
