
# Let random variables X and Y be the number of aces and kings that occur


(1)

- Ex. 9 Suppose that 2 cards are dealt from a deck of 52 without replacement. Let random variables X and Y be the number of aces and kings that occur. It can be shown that

  p_X(1) = p_Y(1) = 2(4/52)(48/51)

  p(1,1) = 2(4/52)(4/51)

- Since p(1,1) ≠ p_X(1) p_Y(1), X and Y are not independent.
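The slide's probabilities can be checked numerically. A quick sketch (Python with exact rational arithmetic; the language is our choice, the slides name none) using binomial-coefficient counting, which agrees with the sequential 2(4/52)(48/51) form:

```python
from fractions import Fraction
from math import comb

# P(X = 1): exactly one of the 4 aces among the 2 cards (hypergeometric count)
pX1 = Fraction(comb(4, 1) * comb(48, 1), comb(52, 2))   # equals 2(4/52)(48/51)
# P(X = 1, Y = 1): one ace and one king among the 2 cards
p11 = Fraction(comb(4, 1) * comb(4, 1), comb(52, 2))    # equals 2(4/52)(4/51)
print(pX1)               # 32/221
print(p11)               # 8/663
print(p11 == pX1 * pX1)  # False -> X and Y are not independent
```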

(2)

- Joint pdf of X and Y: f(x,y) (continuous)

  P(X ∈ A, Y ∈ B) = ∫_B ∫_A f(x,y) dx dy,

  where A and B are two real-number sets

- X and Y are independent if f(x,y) = f_X(x) f_Y(y) for all x, y, where

  f_X(x) = ∫ f(x,y) dy and f_Y(y) = ∫ f(x,y) dx

  are the marginal pdfs of X and Y, respectively

(3)

- Ex. 10 Suppose that X and Y are jointly continuous random variables with

  f(x,y) = 24xy for x ≥ 0, y ≥ 0, and x + y ≤ 1
  f(x,y) = 0 otherwise

  Then

  f_X(x) = ∫_0^{1−x} 24xy dy = 12x y² |_0^{1−x} = 12x(1 − x)² for 0 ≤ x ≤ 1

  f_Y(y) = ∫_0^{1−y} 24xy dx = 12y x² |_0^{1−y} = 12y(1 − y)² for 0 ≤ y ≤ 1

- Since f(x,y) ≠ f_X(x) f_Y(y), X and Y are not independent.
- If X and Y are not independent, we say that they are dependent.
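As a sanity check on the marginal, one can integrate the joint pdf over y numerically and compare against the closed form 12x(1 − x)². A minimal sketch (Python; the midpoint rule is our choice of method, not the slide's):

```python
def f(x, y):
    # joint pdf of Ex. 10: 24xy on the triangle x >= 0, y >= 0, x + y <= 1
    return 24 * x * y if x >= 0 and y >= 0 and x + y <= 1 else 0.0

def marginal_fX(x, n=10000):
    # numerically integrate f(x, y) over y in [0, 1 - x] (midpoint rule)
    h = (1 - x) / n
    return sum(f(x, (i + 0.5) * h) * h for i in range(n))

x = 0.3
closed_form = 12 * x * (1 - x) ** 2   # the marginal derived on the slide
print(abs(marginal_fX(x) - closed_form) < 1e-6)  # True
```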


(4)

- The mean or expected value of the random variable X_i (denoted μ_i or E[X_i]) is

  μ_i = Σ_j x_j p_{X_i}(x_j)   if X_i is discrete

  μ_i = ∫ x f_{X_i}(x) dx      if X_i is continuous

- The mean is one measure of central tendency.

(5)

- Ex. 11 For the demand-size random variable in Ex. 5, the mean is given by

  μ = 1×(1/6) + 2×(1/3) + 3×(1/3) + 4×(1/6) = 5/2

- Ex. 12 For the uniform random variable in Ex. 6, the mean is given by

  μ = ∫_0^1 x f(x) dx = ∫_0^1 x dx = 1/2
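Both means are easy to reproduce. A short sketch (Python; exact fractions for the discrete case, a midpoint-rule sum for the uniform integral):

```python
from fractions import Fraction

# Ex. 11: discrete demand-size distribution from Ex. 5
pmf = {1: Fraction(1, 6), 2: Fraction(1, 3), 3: Fraction(1, 3), 4: Fraction(1, 6)}
mu = sum(x * p for x, p in pmf.items())
print(mu)  # 5/2

# Ex. 12: uniform on [0, 1]; mean = integral of x dx (midpoint rule)
n = 100000
mu_unif = sum(((i + 0.5) / n) * (1 / n) for i in range(n))
print(round(mu_unif, 6))  # 0.5
```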

(6)

- Important properties of means:
  - E[cX] = cE[X]
  - E[Σ_{i=1}^n c_i X_i] = Σ_{i=1}^n c_i E[X_i], even if the X_i's are dependent

  where c or c_i denotes a constant (real number)

- The median x_{0.5} of the random variable X_i, which is an alternative measure of central tendency, is defined to be the smallest value of x such that F_{X_i}(x) ≥ 0.5 for a discrete random variable. For a continuous random variable, F_{X_i}(x_{0.5}) = 0.5, as shown in Fig. 4.8.

(7)

- The median may be a better measure of central tendency than the mean when X_i can take on very large or very small values.

[Figure: density f_{X_i}(x) with the median x_{0.5} marked; shaded area = 0.5]

(8)

- The variance of the random variable X_i, denoted by σ_i² or Var(X_i), is

  σ_i² = E[(X_i − μ_i)²] = E[X_i²] − μ_i²

- The variance is a measure of the dispersion of a random variable about its mean.

(9)

[Figure: two densities with the same mean μ, one with small σ² and one with large σ²]

(10)

- Ex. 13 For the demand-size random variable in Ex. 5, the variance is computed as follows:

  E[X²] = 1²×(1/6) + 2²×(1/3) + 3²×(1/3) + 4²×(1/6) = 43/6

  Var(X) = E[X²] − (E[X])² = 43/6 − (5/2)² = 11/12

- Ex. 14 For the uniform random variable in Ex. 6, the variance is computed as follows:

  E[X²] = ∫_0^1 x² f(x) dx = ∫_0^1 x² dx = 1/3

  Var(X) = E[X²] − (E[X])² = 1/3 − (1/2)² = 1/12
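The two variance computations can be verified with exact arithmetic. A sketch (Python `fractions`, our choice):

```python
from fractions import Fraction

# Ex. 13: demand-size distribution from Ex. 5
pmf = {1: Fraction(1, 6), 2: Fraction(1, 3), 3: Fraction(1, 3), 4: Fraction(1, 6)}
EX  = sum(x * p for x, p in pmf.items())        # E[X]   = 5/2
EX2 = sum(x**2 * p for x, p in pmf.items())     # E[X^2] = 43/6
var = EX2 - EX**2
print(var)  # 11/12

# Ex. 14: uniform on [0, 1]: E[X^2] = 1/3, so Var = 1/3 - (1/2)^2
print(Fraction(1, 3) - Fraction(1, 2) ** 2)  # 1/12
```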

(11)

- The variance has the following properties:
  - Var(X) ≥ 0
  - Var(cX) = c² Var(X)
  - Var(Σ_{i=1}^n X_i) = Σ_{i=1}^n Var(X_i) if the X_i's are independent
- The standard deviation of the random variable X_i is defined to be σ_i = √Var(X_i) = √(σ_i²).
- E.g., for a normal random variable, the probability that X_i is between μ_i − 1.96σ_i and μ_i + 1.96σ_i is 0.95.

(12)

- The covariance between the random variables X_i and X_j, which is a measure of their dependence, will be denoted by C_ij or Cov(X_i, X_j) and is defined by

  C_ij = E[(X_i − μ_i)(X_j − μ_j)] = E[X_i X_j] − μ_i μ_j

- Note that covariances are symmetric, i.e., C_ij = C_ji, and if i = j, then C_ij = σ_i².

(13)

- Ex. 15 For the jointly continuous random variables X and Y in Ex. 10, the covariance is computed as

  E(XY) = ∫_0^1 ∫_0^{1−x} xy f(x,y) dy dx = 2/15

  E(X) = ∫_0^1 x f_X(x) dx = 2/5

  E(Y) = ∫_0^1 y f_Y(y) dy = 2/5

  Cov(X,Y) = E(XY) − E(X)E(Y) = 2/15 − (2/5)(2/5) = −2/75
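These moments can be computed exactly. For f(x,y) = 24xy on the triangle, E[X^m Y^k] = 24/(k+2) · B(m+2, k+3), where B is the Beta function; this reduction is our own derivation, not on the slide. A sketch in Python:

```python
from fractions import Fraction
from math import factorial

def beta(a, b):
    # B(a, b) = (a-1)!(b-1)!/(a+b-1)! for positive integer arguments
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

def moment(m, k):
    # E[X^m Y^k] for f(x,y) = 24xy on the triangle x, y >= 0, x + y <= 1:
    # inner integral over y gives 24/(k+2) * x^{m+1} (1-x)^{k+2}, a Beta integral
    return Fraction(24, k + 2) * beta(m + 2, k + 3)

EXY, EX, EY = moment(1, 1), moment(1, 0), moment(0, 1)
cov = EXY - EX * EY
print(EXY, EX, EY, cov)  # 2/15 2/5 2/5 -2/75
```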


(14)

- If C_ij = 0, the random variables X_i and X_j are said to be uncorrelated.
- Fact: if X_i and X_j are independent, then they are uncorrelated.
- In general, the converse is not true, except for normal random variables.
- If C_ij > 0, then X_i and X_j are positively correlated: they tend to occur together.
- If C_ij < 0, then X_i and X_j are negatively correlated.
- The correlation ρ_ij is defined by

  ρ_ij = C_ij / √(σ_i² σ_j²), for i, j = 1, 2, ..., n.

(15)

- It can be shown that −1 ≤ ρ_ij ≤ 1.
- If ρ_ij is close to +1, then X_i and X_j are highly positively correlated.
- If ρ_ij is close to −1, then X_i and X_j are highly negatively correlated.
- Ex. 16 For the random variables in Ex. 10, it can be shown that Var(X) = Var(Y) = 1/25. Therefore,

  Cor(X,Y) = Cov(X,Y) / √(Var(X) Var(Y)) = (−2/75) / (1/25) = −2/3
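Ex. 16 can also be checked exactly using the same Beta-integral reduction for the moments of f(x,y) = 24xy (our derivation, not the slide's). A sketch in Python:

```python
from fractions import Fraction
from math import factorial

def beta(a, b):
    # B(a, b) = (a-1)!(b-1)!/(a+b-1)! for positive integer arguments
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

def moment(m, k):
    # E[X^m Y^k] for f(x,y) = 24xy on the triangle x, y >= 0, x + y <= 1
    return Fraction(24, k + 2) * beta(m + 2, k + 3)

var_x = moment(2, 0) - moment(1, 0) ** 2            # 1/25
var_y = moment(0, 2) - moment(0, 1) ** 2            # 1/25
cov   = moment(1, 1) - moment(1, 0) * moment(0, 1)  # -2/75
# rho = Cov / sqrt(Var(X) Var(Y)); sqrt(1/25 * 1/25) = 1/25 exactly
rho = cov / (var_x * var_y) ** 0 / Fraction(1, 25)
print(var_x, var_y, rho)  # 1/25 1/25 -2/3
```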

(16)

## 4.3 Simulation output data and stochastic processes

- Since most simulation models use random variables as input, the simulation output data are themselves random.
- A stochastic process is a collection of "similar" random variables ordered over time.
- If the collection consists of finitely or countably infinitely many random variables (i.e., X1, X2, ...), then we have a discrete-time stochastic process.
- If the collection is {X(t), t ≥ 0}, then we have a continuous-time stochastic process.

(17)

- Ex. 17 Consider a single-server queueing system, e.g., an M/M/1 queue, with IID interarrival times A1, A2, ..., IID service times S1, S2, ..., and customers served in a FIFO manner. The set of all possible values that these random variables can take on is called the state space. We can define the discrete-time stochastic process of delays in queue D1, D2, ... as follows:

  D1 = 0
  D_{i+1} = max{D_i + S_i − A_{i+1}, 0} for i = 1, 2, ...
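The delay recursion is straightforward to simulate. A sketch (Python; the rates λ = 1 and ω = 1.25 are hypothetical values of ours, chosen so that λ/ω = 0.8 < 1):

```python
import random

def mm1_delays(n, lam=1.0, omega=1.25, seed=42):
    """Delays in queue D_1, ..., D_n for an M/M/1 FIFO queue via the
    recursion D_1 = 0, D_{i+1} = max(D_i + S_i - A_{i+1}, 0)."""
    rng = random.Random(seed)
    d = [0.0]
    for _ in range(n - 1):
        s = rng.expovariate(omega)   # service time S_i
        a = rng.expovariate(lam)     # interarrival time A_{i+1}
        d.append(max(d[-1] + s - a, 0.0))
    return d

delays = mm1_delays(1000)
print(len(delays), delays[0])  # 1000 0.0
```

Every delay produced by the recursion is nonnegative, matching the state space described on the next slide.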

(18)

- Thus, the simulation maps the input random variables into the output stochastic process D1, D2, ... of interest.
- The state space is the set of nonnegative real numbers.
- D_i and D_{i+1} are positively correlated.

(19)

- A discrete-time stochastic process X1, X2, ... is said to be covariance-stationary if

  μ_i = μ for i = 1, 2, ... and −∞ < μ < ∞

  σ_i² = σ² for i = 1, 2, ... and σ² < ∞

  and C_{i,i+j} = Cov(X_i, X_{i+j}) is independent of i.

- For a covariance-stationary process, we denote the covariance and correlation between X_i and X_{i+j} by C_j and ρ_j, where

  ρ_j = C_{i,i+j} / √(σ_i² σ_{i+j}²) = C_j / σ² = C_j / C_0, for j = 0, 1, 2, ...

(20)

- Ex. 18 Consider the output process D1, D2, ... for a covariance-stationary M/M/1 queue with ρ = λ/ω < 1 (λ: arrival rate, ω: service rate). See Fig. 4.10, and note that the correlations ρ_j are positive and monotonically decrease to zero as j increases.
- In general, our experience indicates that output processes for queueing systems are positively correlated.

(21)
(22)

- Note that if X1, X2, ... is a stochastic process beginning at time 0 in a simulation, then it is quite likely not to be covariance-stationary. However, for some simulations X_{k+1}, X_{k+2}, ... will be approximately covariance-stationary if k is large enough, where k is the length of the warmup period.

(23)

## 4.4 Estimation of means, variances, and correlations

- Suppose that X1, X2, ..., Xn are IID random variables with finite population mean μ and finite population variance σ².
- Sample mean:

  X̄(n) = Σ_{i=1}^n X_i / n

  which is an unbiased estimator of μ, i.e., E[X̄(n)] = μ.
(24)

- Sample variance:

  S²(n) = Σ_{i=1}^n [X_i − X̄(n)]² / (n − 1)

  which is an unbiased estimator of σ², since E[S²(n)] = σ².

(25)

- Since

  Var[X̄(n)] = Var((1/n) Σ_{i=1}^n X_i) = (1/n²) Σ_{i=1}^n Var(X_i) = (1/n²)(nσ²) = σ²/n

  it is clear that the bigger the sample size n, the closer X̄(n) should be to μ. An unbiased estimator of Var[X̄(n)] is obtained by replacing σ² in the above equation by S²(n), resulting in

  V̂ar[X̄(n)] = S²(n)/n = Σ_{i=1}^n [X_i − X̄(n)]² / [n(n − 1)]

- If the X_i's are independent, then ρ_j = 0 for j = 1, 2, ..., n − 1.
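The three estimators above translate directly into code. A minimal sketch (Python; the data in the usage line are illustrative values of ours):

```python
def sample_mean(xs):
    # Xbar(n) = sum X_i / n
    return sum(xs) / len(xs)

def sample_variance(xs):
    # S^2(n) = sum (X_i - Xbar)^2 / (n - 1), unbiased for sigma^2 under IID
    n, xbar = len(xs), sample_mean(xs)
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

def var_of_mean_estimate(xs):
    # unbiased estimator of Var[Xbar(n)] under IID: S^2(n) / n
    return sample_variance(xs) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(sample_mean(xs), sample_variance(xs), var_of_mean_estimate(xs))
# 3.0 2.5 0.5
```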

(26)

- It has been our experience that simulation output data are almost always correlated.
- Now assume that the random variables X1, X2, ..., Xn are from a covariance-stationary stochastic process.
- Then the sample mean X̄(n) is still an unbiased estimator of μ.
- But the sample variance S²(n) is no longer an unbiased estimator of σ².
- It can be shown that

  E[S²(n)] = σ² × [1 − 2 Σ_{j=1}^{n−1} (1 − j/n) ρ_j / (n − 1)]

(27)

- If ρ_j > 0, then E[S²(n)] < σ², which will lead to serious errors in analysis.
- As for Var[X̄(n)], it can be shown (Prob. 4.17) that

  Var[X̄(n)] = (σ²/n) × [1 + 2 Σ_{j=1}^{n−1} (1 − j/n) ρ_j]

- One estimator of ρ_j is ρ̂_j = Ĉ_j / S²(n), where

  Ĉ_j = Σ_{i=1}^{n−j} [X_i − X̄(n)][X_{i+j} − X̄(n)] / (n − j)
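The estimator ρ̂_j = Ĉ_j / S²(n) can be sketched as follows (Python; the alternating test sequence is an illustrative input of ours, chosen because its lag-1 correlation is strongly negative):

```python
def autocorr_estimates(xs, max_lag):
    """rho_hat_j = C_hat_j / S^2(n), with
    C_hat_j = sum_{i=1}^{n-j} (X_i - Xbar)(X_{i+j} - Xbar) / (n - j)."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    rhos = []
    for j in range(1, max_lag + 1):
        cj = sum((xs[i] - xbar) * (xs[i + j] - xbar) for i in range(n - j)) / (n - j)
        rhos.append(cj / s2)
    return rhos

# an alternating sequence: strongly negative at lag 1, positive at lag 2
print(autocorr_estimates([1.0, -1.0] * 50, 2))
```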

(28)

- We shall see in Chap. 9 that it is often possible to group simulation output data into new "observations" to which the formulas for IID observations can be applied.

(29)

## 4.5 Confidence intervals and hypothesis tests for the mean

- Let X1, X2, ..., Xn be IID random variables with finite mean μ and finite variance σ².
- Let Z_n be the random variable [X̄(n) − μ] / √(σ²/n), and let F_n(z) be the distribution function of Z_n (i.e., F_n(z) = P(Z_n ≤ z)).
- Note:
  - Mean of X̄(n): μ
  - Variance of X̄(n): σ² / n

(30)

## Central limit theorem

- TH. 1 F_n(z) → Φ(z) as n → ∞, where Φ(z) is the distribution function of a normal random variable with μ = 0 and σ² = 1:

  Φ(z) = (1/√(2π)) ∫_{−∞}^z e^{−y²/2} dy for −∞ < z < ∞

- If n is "sufficiently" large, the random variable Z_n will be approximately standard normal, regardless of the underlying distribution of the X_i's.
- The difficulty is that σ² is generally unknown.
- However, since the sample variance S²(n) converges to σ² as n gets large, TH. 1 remains true if we replace σ² by S²(n).

(31)

- If the X_i's are normal random variables, then the random variable

  t_n = [X̄(n) − μ] / √(S²(n)/n)

  has a t distribution with n − 1 degrees of freedom, and an exact 100(1 − α) percent confidence interval for μ is given by

  X̄(n) ± t_{n−1, 1−α/2} √(S²(n)/n), 0 < α < 1

- In practice, the distribution of the X_i's will rarely be normal, and the above confidence interval will only be approximate.

(32)

[Figure: standard normal density f(x) with critical points −z_{1−α/2} and z_{1−α/2} marked]

(33)

- If n is sufficiently large, an approximate 100(1 − α) percent confidence interval for μ is given by

  X̄(n) ± z_{1−α/2} √(S²(n)/n)

- If one constructs a very large number of independent 100(1 − α) percent confidence intervals, the proportion of these confidence intervals that contain μ should be 1 − α.
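The coverage interpretation can be demonstrated by a small Monte Carlo experiment (our own illustration, not part of the slides): build many 90 percent z-intervals from normal samples with known mean 0 and count how often they contain it.

```python
import math
import random

rng = random.Random(0)
z = 1.645                      # z_{0.95}, for a 90 percent interval
n, trials, hits = 25, 2000, 0
for _ in range(trials):
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]   # true mu = 0
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    half = z * math.sqrt(s2 / n)
    hits += (xbar - half <= 0.0 <= xbar + half)
coverage = hits / trials
print(coverage)   # close to the nominal 0.90 (slightly below, since z replaces t)
```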

(34)

- Ex. 19 Suppose that the 10 observations 1.20, 1.50, 1.68, 1.89, 0.95, 1.49, 1.58, 1.55, 0.50, and 1.09 are from a normal distribution with unknown mean μ, and that our objective is to construct a 90 percent confidence interval for μ. Note that

  X̄(10) = (1.20 + 1.50 + ... + 1.09) / 10 = 1.34

  S²(10) = [(1.20 − 1.34)² + ... + (1.09 − 1.34)²] / 9 = 0.17

  X̄(10) ± t_{9,0.95} √(S²(10)/10) = 1.34 ± 1.83 √(0.17/10) = 1.34 ± 0.24

  Therefore, we claim with 90 percent confidence that μ is in the interval [1.10, 1.58].
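Ex. 19 can be reproduced directly (Python; the t quantile 1.833 is taken as given, as from a t table):

```python
import math

# the 10 observations of Ex. 19
xs = [1.20, 1.50, 1.68, 1.89, 0.95, 1.49, 1.58, 1.55, 0.50, 1.09]
n = len(xs)
xbar = sum(xs) / n                               # 1.34 to two decimals
s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)  # 0.17 to two decimals
t = 1.833                                        # t_{9,0.95} from a t table
half = t * math.sqrt(s2 / n)
print(round(xbar, 2), round(s2, 2), round(half, 2))  # 1.34 0.17 0.24
```

This matches the slide's 1.34 ± 0.24; rounding the endpoints gives the stated interval.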

(35)

- Assume that X1, X2, ..., Xn are normally distributed, and that we would like to test the null hypothesis H0 that μ = μ0.
- Let

  t_n = [X̄(n) − μ0] / √(S²(n)/n)

  Then the form of our hypothesis test (t test) for μ = μ0 is:

  If |t_n| > t_{n−1, 1−α/2}, reject H0; otherwise, "accept" H0.

(36)

- The set of all x such that |x| > t_{n−1, 1−α/2} is called the critical region.
- The probability that the statistic t_n falls in the critical region given that H0 is true is called the level of the test.
- Typical values of the level: 0.05 or 0.1.
- Two types of errors can be made:
  - Type I error: reject H0 when it is in fact true.
  - Type II error: accept H0 when it is false.

(37)

- Ex. 20 For the data of the previous example, suppose that we would like to test the null hypothesis H0 that μ = 1 at level α = 0.1. Since

  t_10 = [X̄(10) − 1] / √(S²(10)/10) = 0.34 / √(0.17/10) = 2.65 > 1.83 = t_{9,0.95}

  we reject H0.
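Ex. 20 follows from the same summary statistics; computed from the full data of Ex. 19 (Python sketch, with the table value 1.833 for t_{9,0.95}):

```python
import math

xs = [1.20, 1.50, 1.68, 1.89, 0.95, 1.49, 1.58, 1.55, 0.50, 1.09]
n = len(xs)
xbar = sum(xs) / n
s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
t_n = (xbar - 1.0) / math.sqrt(s2 / n)   # test statistic for H0: mu = 1
t_crit = 1.833                           # t_{9,0.95}, level alpha = 0.1
print(round(t_n, 2), abs(t_n) > t_crit)  # 2.65 True -> reject H0
```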

(38)

## 4.6 The strong law of large numbers

- TH. 2 X̄(n) → μ w.p. 1 as n → ∞.

(39)

## 4.7 The danger of replacing a probability distribution by its mean

- One should not use the mean to replace the input distribution for the sake of simplicity.
