Monte Carlo Integration I
Digital Image Synthesis, Yung-Yu Chuang
with slides by Pat Hanrahan and Torsten Möller
Introduction
$$L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{S^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, |\cos\theta_i|\, d\omega_i$$
• The integral equations generally don't have analytic solutions, so we must turn to numerical methods.
• Standard methods like trapezoidal integration or Gaussian quadrature are not effective for high-dimensional and discontinuous integrals.
Numerical quadrature
• Suppose we want to calculate $I = \int_a^b f(x)\,dx$, but can't solve it analytically. The approximations through quadrature rules have the form
$$\hat{I} = \sum_{i=1}^{n} w_i\, f(x_i)$$
which is essentially a weighted sum of samples of the function at various points.
Midpoint rule
• $O(n^{-2})$ convergence
Trapezoid rule
• $O(n^{-2})$ convergence
Simpson’s rule
• Similar to the trapezoid rule, but using a quadratic polynomial approximation
• $O(n^{-4})$ convergence, assuming f has a continuous fourth derivative.
Curse of dimensionality and discontinuity
• For an s-dimensional function f, if the 1-d rule has a convergence rate of $O(n^{-r})$, the s-d rule would require a much larger number ($n^s$) of samples to work as well as the 1-d one. Thus, the convergence rate is only $O(n^{-r/s})$.
• If f is discontinuous, convergence is $O(n^{-1/s})$ for s-d.
Randomized algorithms
• Las Vegas vs. Monte Carlo
• Las Vegas: always gives the right answer by using randomness.
• Monte Carlo: gives the right answer on the average. Results depend on the random numbers used, but are statistically likely to be close to the right answer.
Monte Carlo integration
• Monte Carlo integration: uses sampling to estimate the values of integrals. It only requires the ability to evaluate the integrand at arbitrary points, making it easy to implement and applicable to many problems.
• If n samples are used, it converges at the rate of $O(n^{-1/2})$. That is, to cut the error in half, it is necessary to evaluate four times as many samples.
• Images generated by Monte Carlo methods are often noisy. Most current methods try to reduce noise.
Monte Carlo methods
• Advantages
  – Easy to implement
  – Easy to think about (but be careful of statistical bias)
  – Robust when used with complex integrands and domains (shapes, lights, …)
  – Efficient for high-dimensional integrals
• Disadvantages
  – Noisy
  – Slow (many samples needed for convergence)
Basic concepts
• X is a random variable
• Applying a function to a random variable gives another random variable, Y = f(X).
• CDF (cumulative distribution function):
$$P(x) = \Pr\{X \le x\}$$
• PDF (probability density function): nonnegative, integrates to 1:
$$p(x) = \frac{dP(x)}{dx}$$
• canonical uniform random variable ξ (provided by the standard library and easy to transform to other distributions)
Discrete probability distributions
• Discrete events $X_i$ with probability $p_i$:
$$p_i \ge 0, \qquad \sum_{i=1}^{n} p_i = 1$$
• Cumulative PDF (distribution):
$$P_i = \sum_{j=1}^{i} p_j$$
• Construction of samples: to randomly select an event with a uniform random variable U, select $X_i$ if
$$P_{i-1} < U \le P_i$$
Continuous probability distributions
• PDF: $p(x) \ge 0$; e.g. the uniform distribution on [0,1] has $p(x) = 1$
• CDF: $P(x) = \int_0^x p(x')\,dx'$, with $P(0) = 0$ and $P(1) = 1$
$$\Pr(\alpha \le X \le \beta) = \int_\alpha^\beta p(x)\,dx = P(\beta) - P(\alpha)$$
Expected values
• Average value of a function f(x) over some distribution of values p(x) over its domain D:
$$E_p[f(x)] = \int_D f(x)\,p(x)\,dx$$
• Example: the cos function over [0, π], with p uniform:
$$E_p[\cos(x)] = \int_0^{\pi} \cos x \cdot \frac{1}{\pi}\,dx = 0$$
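As a sanity check on this example, the expectation can be estimated by averaging cos over uniform samples. This sketch is illustrative, not from the slides; the function name, seed, and sample count are our own choices.

```cpp
#include <cassert>
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;

// Estimate E_p[cos(X)] with X uniform on [0, pi]; the true value is 0.
double EstimateExpectedCos(int n, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, kPi);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += std::cos(u(rng));
    return sum / n;  // sample average of cos(X)
}
```

With 100,000 samples the estimate should sit well within a few standard errors of zero.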
Variance
• Expected deviation from the expected value
• Fundamental concept for quantifying the error in Monte Carlo methods:
$$V[f(x)] = E\big[(f(x) - E[f(x)])^2\big]$$
Properties
$$E[a f(x)] = a\,E[f(x)]$$
$$E\Big[\sum_i f(X_i)\Big] = \sum_i E[f(X_i)]$$
$$V[a f(x)] = a^2\,V[f(x)]$$
$$V\Big[\sum_i X_i\Big] = \sum_i V[X_i] \quad\text{(for independent random variables)}$$
$$V[f(x)] = E\big[f(x)^2\big] - \big(E[f(x)]\big)^2$$
Monte Carlo estimator
• Assume that we want to evaluate the integral $\int_a^b f(x)\,dx$ of f(x) over [a,b].
• Given uniform random variables $X_i$ over [a,b], the Monte Carlo estimator is
$$F_N = \frac{b-a}{N} \sum_{i=1}^{N} f(X_i)$$
• It can be shown that the expected value $E[F_N]$ of the estimator $F_N$ equals the integral.
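The estimator above can be sketched directly in code. This is an illustrative sketch, not from the slides; the function name and seed are our own.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// F_N = (b - a)/N * sum_i f(X_i), with X_i uniform on [a, b].
template <typename F>
double MonteCarloEstimate(F f, double a, double b, int n, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(a, b);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += f(u(rng));
    return (b - a) * sum / n;
}
```

For example, estimating $\int_0^1 x^2\,dx = 1/3$ with 100,000 samples lands close to the exact value.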
General Monte Carlo estimator
• Given a random variable X drawn from an arbitrary PDF p(x), the estimator is
$$F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}$$
• Although the convergence rate of the MC estimator, $O(N^{-1/2})$, is slower than that of other integration methods, it is independent of the dimension, making Monte Carlo the only practical method for high-dimensional integrals.
Convergence of Monte Carlo
• Chebyshev's inequality: let X be a random variable with expected value μ and variance σ². For any real number k > 0,
$$\Pr\{|X - \mu| \ge k\sigma\} \le \frac{1}{k^2}$$
• For example, for $k = \sqrt{2}$, it shows that at least half of the values lie in the interval $(\mu - \sqrt{2}\sigma,\ \mu + \sqrt{2}\sigma)$.
• Let $Y_i = f(X_i)/p(X_i)$; the MC estimate $F_N$ becomes
$$F_N = \frac{1}{N} \sum_{i=1}^{N} Y_i$$
Convergence of Monte Carlo
• According to Chebyshev's inequality,
$$\Pr\left\{|F_N - E[F_N]| \ge \left(\frac{V[F_N]}{\delta}\right)^{1/2}\right\} \le \delta$$
$$V[F_N] = V\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2}\,V\left[\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2} \sum_{i=1}^{N} V[Y_i] = \frac{1}{N}\,V[Y]$$
• Plugging into Chebyshev's inequality,
$$\Pr\left\{|F_N - I| \ge \frac{1}{\sqrt{N}}\left(\frac{V[Y]}{\delta}\right)^{1/2}\right\} \le \delta$$
So, for a fixed threshold, the error decreases at the rate $N^{-1/2}$.
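The $V[F_N] = V[Y]/N$ relation can be checked empirically: quadrupling the sample count should roughly quarter the variance of the estimator. This sketch is illustrative only; the integrand ($x^2$ on [0,1]), seeds, and trial counts are arbitrary choices of ours.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Empirical variance of F_N (for the integral of x^2 on [0,1]) over
// repeated independent trials.
double EstimatorVariance(int n, int trials, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<double> est(trials);
    double mean = 0.0;
    for (int t = 0; t < trials; ++t) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) { double x = u(rng); sum += x * x; }
        est[t] = sum / n;   // one realization of F_N
        mean += est[t];
    }
    mean /= trials;
    double var = 0.0;
    for (double e : est) var += (e - mean) * (e - mean);
    return var / (trials - 1);
}
```

Running this with N = 100 and N = 400 should show the variance dropping by roughly a factor of four.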
Properties of estimators
• An estimator $F_N$ is called unbiased if for all N
$$E[F_N] = Q$$
That is, the expected value is independent of N.
• Otherwise, the bias of the estimator is defined as
$$\beta = E[F_N] - Q$$
• If the bias goes to zero as N increases, the estimator is called consistent:
$$\lim_{N\to\infty} \beta = 0 \quad\Longleftrightarrow\quad \lim_{N\to\infty} E[F_N] = Q$$
Example of a biased consistent estimator
• Suppose we are doing antialiasing on a 1-d pixel; to determine the pixel value, we need to evaluate $I = \int_0^1 w(x) f(x)\,dx$, where $w(x)$ is the filter function with $\int_0^1 w(x)\,dx = 1$.
• A common way to evaluate this is
$$F_N = \frac{\sum_{i=1}^{N} w(X_i) f(X_i)}{\sum_{i=1}^{N} w(X_i)}$$
• When N = 1, we have
$$E[F_1] = E\left[\frac{w(X_1) f(X_1)}{w(X_1)}\right] = E[f(X_1)] = \int_0^1 f(x)\,dx \ne I$$
Example of a biased consistent estimator
• When N = 2, we have
$$E[F_2] = \int_0^1 \int_0^1 \frac{w(x_1) f(x_1) + w(x_2) f(x_2)}{w(x_1) + w(x_2)}\,dx_1\,dx_2 \ne I$$
• However, when N is very large, the bias approaches zero:
$$F_N = \frac{\frac{1}{N}\sum_{i=1}^{N} w(X_i) f(X_i)}{\frac{1}{N}\sum_{i=1}^{N} w(X_i)}$$
$$\lim_{N\to\infty} E[F_N] = \frac{\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} w(X_i) f(X_i)}{\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} w(X_i)} = \frac{\int_0^1 w(x) f(x)\,dx}{\int_0^1 w(x)\,dx} = I$$
Choosing samples
• $F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}$
• Carefully choosing the PDF from which samples are drawn is an important technique to reduce variance. We want f/p to have low variance. Hence, it is necessary to be able to draw samples from the chosen PDF.
• How do we sample an arbitrary distribution from a variable of uniform distribution?
  – Inversion
  – Rejection
  – Transformation
Inversion method
• Cumulative probability distribution function:
$$P(x) = \Pr(X \le x)$$
• Construction of samples: solve for $X = P^{-1}(U)$
• Must know:
  1. The integral of p(x)
  2. The inverse function $P^{-1}(x)$
Proof for the inversion method
• Let U be a uniform random variable with CDF $P_u(x) = x$. We will show that $Y = P^{-1}(U)$ has the CDF P(x).
$$\Pr\{Y \le x\} = \Pr\{P^{-1}(U) \le x\} = \Pr\{U \le P(x)\} = P_u(P(x)) = P(x)$$
because P is monotonic:
$$x_1 \le x_2 \Longleftrightarrow P(x_1) \le P(x_2)$$
Thus, Y's CDF is exactly P(x).
Inversion method
• Compute the CDF P(x)
• Compute $P^{-1}(x)$
• Obtain ξ
• Compute $X_i = P^{-1}(\xi)$
Example: power function
It is used in sampling Blinn's microfacet model.
• Assume $p(x) \propto x^n$. Since $\int_0^1 x^n\,dx = \frac{1}{n+1}$, the normalized PDF is
$$p(x) = (n+1)\,x^n$$
$$P(x) = x^{n+1}$$
$$X = P^{-1}(U) = U^{\frac{1}{n+1}}$$
• Trick (it only works for sampling the power distribution):
$$Y = \max(U_1, U_2, \ldots, U_n, U_{n+1})$$
$$\Pr(Y \le x) = \prod_{i=1}^{n+1} \Pr(U_i \le x) = x^{n+1}$$
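Both routes can be sketched in code. This is illustrative; the function names and the seed in the usage below are our own, not from the slides.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>

// Inversion: X = U^{1/(n+1)} samples p(x) = (n+1) x^n on [0,1].
double SamplePowerByInversion(int n, double u) {
    return std::pow(u, 1.0 / (n + 1));
}

// Max trick: the maximum of n+1 independent uniforms has CDF x^{n+1}.
double SamplePowerByMaxTrick(int n, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double y = 0.0;
    for (int i = 0; i <= n; ++i) y = std::max(y, u(rng));
    return y;
}
```

Both samplers should produce a mean near $E[X] = \frac{n+1}{n+2}$ (0.8 for n = 3).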
Example: exponential distribution
It is useful for rendering participating media. Assume $p(x) = c\,e^{-ax}$.
• Normalize: $\int_0^{\infty} c\,e^{-ax}\,dx = \frac{c}{a} = 1$, so $c = a$.
• Compute the CDF P(x):
$$P(x) = \int_0^x a\,e^{-as}\,ds = 1 - e^{-ax}$$
• Compute $P^{-1}(x)$:
$$P^{-1}(x) = -\frac{1}{a}\ln(1 - x)$$
• Obtain ξ and compute $X_i = P^{-1}(\xi)$:
$$X = -\frac{1}{a}\ln(1 - \xi) = -\frac{1}{a}\ln \xi$$
(since $1 - \xi$ is also uniformly distributed.)
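The resulting sampling routine is one line; a sketch (the function name is ours):

```cpp
#include <cassert>
#include <cmath>

// Inversion sampling for p(x) = a e^{-ax}: X = -ln(1 - u)/a.
double SampleExponential(double a, double u) {
    return -std::log(1.0 - u) / a;
}
```

Two spot checks: $u = 1 - e^{-1}$ with a = 1 must map back to x = 1, and $u = 0.5$ with a = 2 gives the median $\ln 2 / 2$.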
Rejection method
• Sometimes, we can't integrate p into a CDF or invert the CDF.
$$I = \int_0^1 f(x)\,dx = \iint_{y < f(x)} dx\,dy$$
• Algorithm:
  Pick $U_1$ and $U_2$
  Accept $U_1$ if $U_2 < f(U_1)$
• Wasteful? Efficiency = Area under f / Area of rectangle
Rejection method
• The rejection method is a dart-throwing method that works without performing integration and inversion.
  1. Find q(x) and M so that p(x) < M q(x)
  2. Dart throwing:
     a. Choose a pair (X, ξ), where X is sampled from q(x)
     b. If ξ < p(X)/(M q(X)), return X
• Equivalently, we pick a point (X, ξ M q(X)). If it lies beneath p(X), then we are fine.
Why it works
• For each iteration, we generate $X_i$ from q. The sample is returned if $\xi < p(X)/(M q(X))$, which happens with probability $p(X)/(M q(X))$.
• So, the probability density of returning x is
$$q(x)\,\frac{p(x)}{M q(x)} = \frac{p(x)}{M}$$
whose integral is 1/M.
• Thus, when a sample is returned (with probability 1/M), $X_i$ is distributed according to p(x).
Example: sampling a unit disk
void RejectionSampleDisk(float *x, float *y) {
    float sx, sy;
    do {
        sx = 1.f - 2.f * RandomFloat();
        sy = 1.f - 2.f * RandomFloat();
    } while (sx*sx + sy*sy > 1.f);
    *x = sx; *y = sy;
}
π/4 ≈ 78.5% good samples; it gets worse in higher dimensions. For example, for the sphere: π/6 ≈ 52.4%.
Transformation of variables
• Given a random variable X from distribution $p_x(x)$, transformed to a random variable $Y = y(X)$, where y is one-to-one, i.e. monotonic. We want to derive the distribution of Y, $p_y(y)$.
$$P_y(y(x)) = \Pr\{Y \le y(x)\} = \Pr\{X \le x\} = P_x(x)$$
• PDF:
$$\frac{dP_y(y(x))}{dx} = \frac{dP_x(x)}{dx}$$
$$p_y(y)\,\frac{dy}{dx} = p_x(x)$$
$$p_y(y) = \left(\frac{dy}{dx}\right)^{-1} p_x(x)$$
Example
$$p_x(x) = 2x, \qquad Y = \sin X$$
$$p_y(y) = (\cos x)^{-1}\,p_x(x) = \frac{2x}{\cos x} = \frac{2\sin^{-1} y}{\sqrt{1 - y^2}}$$
Transformation method
• A problem in applying the above method is that we usually have some PDF to sample from, not a given transformation.
• Given a source random variable X with $p_x(x)$ and a target distribution $p_y(y)$, try to transform X into another random variable Y so that Y has the distribution $p_y(y)$.
• We first have to find a transformation y(x) so that $P_x(x) = P_y(y(x))$. Thus,
$$y(x) = P_y^{-1}(P_x(x))$$
Transformation method
• Let's prove that the above transform works.
• We first prove that the random variable $Z = P_x(X)$ has a uniform distribution:
$$\Pr\{Z \le x\} = \Pr\{P_x(X) \le x\} = \Pr\{X \le P_x^{-1}(x)\} = P_x(P_x^{-1}(x)) = x$$
Thus, Z is uniform, and $P_y^{-1}(Z)$ has the distribution $P_y$ by the inversion method, so the transformation works.
• It is an obvious generalization of the inversion method, in which X is uniform and $P_x(x) = x$.
Example
$$p_x(x) = 2x, \qquad p_y(y) = e^{-y}$$
$$P_x(x) = x^2, \qquad P_y(y) = 1 - e^{-y}, \qquad P_y^{-1}(y) = -\ln(1 - y)$$
$$y(x) = P_y^{-1}(P_x(x)) = -\ln(1 - x^2)$$
Since $1 - P_x(X)$ is uniform whenever $P_x(X)$ is, this is equivalent to $Y = -\ln(X^2) = -2\ln X$.
Thus, if X has the distribution $p_x(x) = 2x$, then the random variable $Y = -2\ln X$ has the distribution $p_y(y) = e^{-y}$.
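This transformation is easy to test empirically. The sketch below (names and seed are ours) samples X from $p_x(x) = 2x$ by inversion ($X = \sqrt{U}$) and applies $Y = -2\ln X$; the result should be exponential with mean 1.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Draw X ~ p_x(x) = 2x via X = sqrt(1 - U) (1 - U is in (0, 1], avoiding
// log(0)), then Y = -2 ln X should follow p_y(y) = e^{-y}.
double SampleExpViaTransform(std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double x = std::sqrt(1.0 - u(rng));
    return -2.0 * std::log(x);
}
```

Averaging many samples should give a mean near 1, the mean of the Exp(1) distribution.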
Multiple dimensions
• For a transformation T in multiple dimensions, the densities are related by the absolute value of the determinant of T's Jacobian matrix:
$$p_y(T(x)) = \frac{p_x(x)}{|J_T(x)|}$$
• We often need the other way around:
$$p_x(x) = |J_T(x)|\;p_y(T(x))$$
Spherical coordinates
• The spherical coordinate representation of directions is
$$x = r \sin\theta \cos\phi$$
$$y = r \sin\theta \sin\phi$$
$$z = r \cos\theta$$
$$|J_T| = r^2 \sin\theta$$
$$p(r, \theta, \phi) = r^2 \sin\theta\; p(x, y, z)$$
Spherical coordinates
• Now, look at the relation between spherical directions and solid angles:
$$d\omega = \sin\theta\, d\theta\, d\phi$$
• Hence, the density in terms of (θ, φ):
$$p(\theta, \phi)\, d\theta\, d\phi = p(\omega)\, d\omega$$
$$p(\theta, \phi) = \sin\theta\; p(\omega)$$
Multidimensional sampling
• Separable case: independently sample X from $p_x$ and Y from $p_y$: $p(x, y) = p_x(x)\,p_y(y)$.
• Often, this is not possible. Compute the marginal density function p(x) first:
$$p(x) = \int p(x, y)\,dy$$
• Then, compute the conditional density function:
$$p(y|x) = \frac{p(x, y)}{p(x)}$$
• Use 1D sampling with p(x) and p(y|x).
Sampling a hemisphere
• Sample a hemisphere uniformly, i.e. $p(\omega) = c$:
$$1 = \int_{H^2} p(\omega)\,d\omega = c \cdot 2\pi \quad\Longrightarrow\quad c = \frac{1}{2\pi}$$
$$p(\theta, \phi) = \sin\theta\; p(\omega) = \frac{\sin\theta}{2\pi}$$
• Sample θ first:
$$p(\theta) = \int_0^{2\pi} p(\theta, \phi)\,d\phi = \sin\theta$$
• Now sample φ:
$$p(\phi|\theta) = \frac{p(\theta, \phi)}{p(\theta)} = \frac{1}{2\pi}$$
Sampling a hemisphere
• Now, we use the inversion technique to sample these PDFs:
$$P(\theta) = \int_0^{\theta} \sin\theta'\,d\theta' = 1 - \cos\theta$$
$$P(\phi|\theta) = \int_0^{\phi} \frac{1}{2\pi}\,d\phi' = \frac{\phi}{2\pi}$$
• Inverting these:
$$\theta = \cos^{-1}(1 - \xi_1) = \cos^{-1}\xi_1 \quad\text{(since } 1 - \xi_1 \text{ is also uniform)}$$
$$\phi = 2\pi\,\xi_2$$
Sampling a hemisphere
• Convert these to Cartesian coordinates:
$$\theta = \cos^{-1}\xi_1, \qquad \phi = 2\pi\,\xi_2$$
$$x = \sin\theta \cos\phi = \cos(2\pi\xi_2)\sqrt{1 - \xi_1^2}$$
$$y = \sin\theta \sin\phi = \sin(2\pi\xi_2)\sqrt{1 - \xi_1^2}$$
$$z = \cos\theta = \xi_1$$
• A similar derivation works for the full sphere.
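The formulas above translate directly into code; a self-contained sketch (the struct and function name are ours, not pbrt's):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Uniform hemisphere sample from two canonical uniform variables.
Vec3 UniformSampleHemisphere(double u1, double u2) {
    double z = u1;                                     // cos(theta) = xi_1
    double r = std::sqrt(std::max(0.0, 1.0 - z * z));  // sin(theta)
    double phi = 2.0 * kPi * u2;
    return {r * std::cos(phi), r * std::sin(phi), z};
}
```

Every returned direction lies on the upper unit hemisphere: unit length with z ≥ 0.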
Sampling a disk
• RIGHT: $r = \sqrt{U_1},\ \theta = 2\pi U_2$ (equi-areal)
• WRONG: $r = U_1,\ \theta = 2\pi U_2$ (not equi-areal; samples clump near the center)
Sampling a disk
• Uniform: $p(x, y) = \frac{1}{\pi}$, so $p(r, \theta) = r\,p(x, y) = \frac{r}{\pi}$
• Sample r first:
$$p(r) = \int_0^{2\pi} p(r, \theta)\,d\theta = 2r$$
• Then, sample θ:
$$p(\theta|r) = \frac{p(r, \theta)}{p(r)} = \frac{1}{2\pi}$$
• Invert the CDFs:
$$P(r) = r^2, \qquad P(\theta|r) = \frac{\theta}{2\pi}$$
$$r = \sqrt{\xi_1}, \qquad \theta = 2\pi\,\xi_2$$
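A sketch of the resulting sampler (illustrative; the function name is ours):

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

// r = sqrt(xi_1), theta = 2*pi*xi_2 gives uniform samples on the unit disk.
void UniformSampleDisk(double u1, double u2, double *x, double *y) {
    double r = std::sqrt(u1);
    double theta = 2.0 * kPi * u2;
    *x = r * std::cos(theta);
    *y = r * std::sin(theta);
}
```

For example, $(\xi_1, \xi_2) = (0.25, 0)$ maps to $(0.5, 0)$, and $(1, 0.25)$ maps to $(0, 1)$.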
Shirley's mapping
$$r = U_1, \qquad \theta = \frac{\pi}{4}\,\frac{U_2}{U_1}$$
(for the first wedge of the concentric map; the other wedges are handled symmetrically)
Sampling a triangle
• Consider the unit right triangle with vertices (0,0), (1,0), (0,1): $0 \le u \le 1$, $0 \le v \le 1 - u$.
$$A = \int_0^1 \int_0^{1-u} dv\,du = \int_0^1 (1 - u)\,du = \frac{1}{2}$$
• Uniform density over the triangle: $p(u, v) = \frac{1}{A} = 2$
Sampling a triangle
• Here u and v are not independent! $p(u, v) = 2$
• Marginal density:
$$p(u) = \int_0^{1-u} p(u, v)\,dv = \int_0^{1-u} 2\,dv = 2(1 - u)$$
$$P(u) = \int_0^u 2(1 - u')\,du' = 1 - (1 - u)^2$$
• Conditional probability:
$$p(v|u) = \frac{p(u, v)}{p(u)} = \frac{1}{1 - u}$$
$$P(v|u) = \int_0^v p(v'|u)\,dv' = \frac{v}{1 - u}$$
• Inverting:
$$u = 1 - \sqrt{U_1}, \qquad v = U_2\sqrt{U_1}$$
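The inversion result above is two lines of code; a sketch (the function name is ours):

```cpp
#include <cassert>
#include <cmath>

// Uniform barycentric sample of the unit right triangle:
// u = 1 - sqrt(xi_1), v = xi_2 * sqrt(xi_1).
void UniformSampleTriangle(double u1, double u2, double *u, double *v) {
    double su1 = std::sqrt(u1);
    *u = 1.0 - su1;
    *v = u2 * su1;
}
```

Every sample satisfies $u, v \ge 0$ and $u + v \le 1$; e.g. $(\xi_1, \xi_2) = (0.25, 0.5)$ gives $(u, v) = (0.5, 0.25)$.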
Cosine weighted hemisphere
• $p(\omega) \propto \cos\theta$
$$1 = \int_{H^2} p(\omega)\,d\omega = \int_0^{2\pi} \int_0^{\pi/2} c\,\cos\theta \sin\theta\,d\theta\,d\phi = c\,\pi \quad\Longrightarrow\quad c = \frac{1}{\pi}$$
$$p(\theta, \phi) = \frac{1}{\pi}\cos\theta \sin\theta$$
Cosine weighted hemisphere
$$p(\theta, \phi) = \frac{1}{\pi}\cos\theta \sin\theta$$
$$p(\theta) = \int_0^{2\pi} \frac{1}{\pi}\cos\theta \sin\theta\,d\phi = 2\cos\theta \sin\theta = \sin 2\theta$$
$$p(\phi|\theta) = \frac{p(\theta, \phi)}{p(\theta)} = \frac{1}{2\pi}$$
$$P(\theta) = \int_0^{\theta} \sin 2\theta'\,d\theta' = \frac{1 - \cos 2\theta}{2}$$
• Inverting: $\cos 2\theta = 1 - 2\xi_1$, i.e. $\theta = \frac{1}{2}\cos^{-1}(1 - 2\xi_1)$, and $\phi = 2\pi\,\xi_2$.
Cosine weighted hemisphere
• Malley's method: uniformly generate points on the unit disk, then generate directions by projecting them up to the hemisphere above it.
Vector CosineSampleHemisphere(float u1, float u2) {
    Vector ret;
    ConcentricSampleDisk(u1, u2, &ret.x, &ret.y);
    ret.z = sqrtf(max(0.f, 1.f - ret.x*ret.x - ret.y*ret.y));
    return ret;
}
Cosine weighted hemisphere
• Why does Malley's method work?
• Unit disk sampling gives $(r, \phi)$ with density $p(r, \phi) = \frac{r}{\pi}$.
• Projecting up to the hemisphere maps $(\theta, \phi) \to (r, \phi) = (\sin\theta, \phi)$, i.e. $Y = T(X)$ with $r = \sin\theta$.
$$p_y(T(x)) = |J_T(x)|^{-1}\,p_x(x)$$
$$J_T(x) = \begin{pmatrix} \cos\theta & 0 \\ 0 & 1 \end{pmatrix}, \qquad |J_T(x)| = \cos\theta$$
Cosine weighted hemisphere
$$(r, \phi) = T(\theta, \phi) = (\sin\theta, \phi)$$
$$p(\theta, \phi) = |J_T|\,p(r, \phi) = \cos\theta \cdot \frac{\sin\theta}{\pi} = \frac{\cos\theta \sin\theta}{\pi}$$
which is exactly the cosine-weighted density we wanted.
Sampling Phong lobe
$$p(\omega) = c\,\cos^n\theta$$
$$\int_0^{2\pi} \int_0^{\pi/2} c\,\cos^n\theta \sin\theta\,d\theta\,d\phi = 2\pi c \int_0^{\pi/2} \cos^n\theta \sin\theta\,d\theta = \frac{2\pi c}{n+1} = 1$$
$$c = \frac{n+1}{2\pi}$$
$$P(\theta) = 1 - \cos^{n+1}\theta \quad\Longrightarrow\quad \cos\theta = \xi_1^{\frac{1}{n+1}}, \qquad \phi = 2\pi\,\xi_2$$
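Putting the result into code; a sketch (struct and function name are ours) that returns the sampled direction about the z-axis:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

struct Dir { double x, y, z; };

// cos(theta) = xi_1^{1/(n+1)}, phi = 2*pi*xi_2 samples a density
// proportional to cos^n(theta) over the hemisphere.
Dir SamplePhongLobe(double n, double u1, double u2) {
    double cosTheta = std::pow(u1, 1.0 / (n + 1.0));
    double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
    double phi = 2.0 * kPi * u2;
    return {sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};
}
```

Each returned direction is unit length, and its z component equals $\xi_1^{1/(n+1)}$ by construction.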