
8.3 CONDITIONAL DISTRIBUTIONS

Technique 2: We have that
$$\frac{\varphi'\big(\sqrt{x^2+y^2}\,\big)}{\sqrt{x^2+y^2}\;\varphi\big(\sqrt{x^2+y^2}\,\big)}=\frac{f_X'(x)}{x f_X(x)}.$$
Now the right side of this equation is a function of $x$ while its left side is a function of $\sqrt{x^2+y^2}$. This implies that $\dfrac{f_X'(x)}{x f_X(x)}$ is constant. To prove this, we show that for any given $x_1$ and $x_2$,
$$\frac{f_X'(x_1)}{x_1 f_X(x_1)}=\frac{f_X'(x_2)}{x_2 f_X(x_2)}.$$
Let $y_1=x_2$ and $y_2=x_1$; then $x_1^2+y_1^2=x_2^2+y_2^2$ and we have

$$\frac{f_X'(x_1)}{x_1 f_X(x_1)}=\frac{\varphi'\big(\sqrt{x_1^2+y_1^2}\,\big)}{\sqrt{x_1^2+y_1^2}\;\varphi\big(\sqrt{x_1^2+y_1^2}\,\big)}=\frac{\varphi'\big(\sqrt{x_2^2+y_2^2}\,\big)}{\sqrt{x_2^2+y_2^2}\;\varphi\big(\sqrt{x_2^2+y_2^2}\,\big)}=\frac{f_X'(x_2)}{x_2 f_X(x_2)}.$$
We have shown that for some constant $k$,

$$\frac{f_X'(x)}{x f_X(x)}=k.$$
Therefore, $\dfrac{f_X'(x)}{f_X(x)}=kx$ and hence $\ln f_X(x)=\frac{1}{2}kx^2+c$, or $f_X(x)=e^{(1/2)kx^2+c}=\alpha e^{(1/2)kx^2}$, where $\alpha=e^c$. Now since
$$\int_{-\infty}^{\infty}\alpha e^{(1/2)kx^2}\,dx=1,$$
we have that $k<0$. Let $\sigma=\sqrt{-1/k}$; then $f_X(x)=\alpha e^{-x^2/(2\sigma^2)}$ and
$$\int_{-\infty}^{\infty}\alpha e^{-x^2/(2\sigma^2)}\,dx=1$$
implies that $\alpha=\dfrac{1}{\sigma\sqrt{2\pi}}$. So
$$f_X(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^2/(2\sigma^2)},$$
showing that $X\sim N(0,\sigma^2)$. The fact that $Y\sim N(0,\sigma^2)$ is proved similarly.
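As a quick numerical sanity check of this conclusion, the sketch below (with an arbitrary choice $\sigma = 1.7$) verifies that for $f_X(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-x^2/(2\sigma^2)}$ the ratio $f_X'(x)/(xf_X(x))$ is indeed the constant $k=-1/\sigma^2$ and that the density integrates to 1.

```python
# Sanity check of the normal conclusion; sigma = 1.7 is an arbitrary choice.
import numpy as np
from scipy.integrate import quad

sigma = 1.7
k = -1.0 / sigma**2

def f(x):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def f_prime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative

xs = np.array([0.3, 0.9, 1.5, 2.4])
print(f_prime(xs) / (xs * f(xs)))            # all approximately k = -0.346
print(quad(f, -np.inf, np.inf)[0])           # approximately 1.0
```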

2. Since
$$f_Y(y)=\int_0^{y}2\,dx=2y,\quad 0<y<1,$$
we have that
$$f_{X|Y}(x \mid y)=\frac{f(x,y)}{f_Y(y)}=\frac{2}{2y}=\frac{1}{y},\quad 0<x<y,\ 0<y<1.$$

3. Let X be the number of flips of the coin until the sixth head is obtained. Let Y be the number of flips of the coin until the third head is obtained. Let Z be the number of additional flips of the coin after the third head occurs until the sixth head occurs; Z is a negative binomial random variable with parameters 3 and 1/2. By the independence of the trials,

$$p_{X|Y}(x \mid 5)=P(Z=x-5)=\binom{x-6}{2}\Big(\frac{1}{2}\Big)^{3}\Big(\frac{1}{2}\Big)^{x-8}=\binom{x-6}{2}\Big(\frac{1}{2}\Big)^{x-5},\quad x=8,9,10,\ldots.$$
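This conditional pmf can be checked by simulation: flip a fair coin repeatedly, keep only the runs in which the third head occurs on flip 5, and compare the empirical distribution of the flip number of the sixth head with the formula.

```python
# Monte Carlo check of p_{X|Y}(x | 5); run counts and flip horizon are arbitrary.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n_runs, max_flips = 100_000, 60
hits, kept = {}, 0
for _ in range(n_runs):
    flips = rng.integers(0, 2, size=max_flips)       # 1 = head
    head_times = np.flatnonzero(flips == 1) + 1       # 1-based flip numbers
    if len(head_times) < 6 or head_times[2] != 5:     # condition on Y = 5
        continue
    kept += 1
    x = int(head_times[5])                            # flip number of the 6th head
    hits[x] = hits.get(x, 0) + 1

for x in range(8, 14):
    exact = comb(x - 6, 2) * 0.5 ** (x - 5)
    print(x, round(hits.get(x, 0) / kept, 4), round(exact, 4))
```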

4. Note that
$$f_{X|Y}\Big(x \,\Big|\, \frac{3}{4}\Big)=\frac{3\big(x^2+\frac{9}{16}\big)}{\frac{27}{16}+1}=\frac{1}{43}\,(48x^2+27),\quad 0<x<1.$$
Therefore,
$$P\Big(\frac{1}{4}<X<\frac{1}{2}\,\Big|\,Y=\frac{3}{4}\Big)=\int_{1/4}^{1/2}\frac{1}{43}\,(48x^2+27)\,dx=\frac{17}{86}.$$
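A one-line numerical check of the last integral (the integrand and limits are exactly those above):

```python
# Numerical confirmation that the integral equals 17/86.
from scipy.integrate import quad

value, _ = quad(lambda x: (48 * x**2 + 27) / 43, 0.25, 0.5)
print(value, 17 / 86)   # both approximately 0.19767
```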

5. In the discrete case, let p(x, y) be the joint probability mass function of X and Y , and let A be the set of possible values of X. Then

$$E(X \mid Y=y)=\sum_{x\in A}x\,\frac{p(x,y)}{p_Y(y)}=\sum_{x\in A}x\,\frac{p_X(x)p_Y(y)}{p_Y(y)}=\sum_{x\in A}x\,p_X(x)=E(X).$$

In the continuous case, letting f (x, y) be the joint probability density function of X and Y , we get

$$E(X \mid Y=y)=\int_{-\infty}^{\infty}x\,\frac{f(x,y)}{f_Y(y)}\,dx=\int_{-\infty}^{\infty}x\,\frac{f_X(x)f_Y(y)}{f_Y(y)}\,dx=\int_{-\infty}^{\infty}x f_X(x)\,dx=E(X).$$

6. Since
$$f_Y(y)=\int_{-\infty}^{\infty}f(x,y)\,dx=\int_0^1(x+y)\,dx=\frac{1}{2}+y,$$
the desired quantity is given by
$$f_{X|Y}(x \mid y)=\begin{cases}\dfrac{x+y}{(1/2)+y} & 0\le x\le 1,\ 0\le y\le 1\\[6pt] 0 & \text{elsewhere.}\end{cases}$$

7. Clearly,
$$f_Y(y)=\int_0^{\infty}e^{-x(y+1)}\,dx=\frac{1}{y+1},\quad 0\le y\le e-1.$$
Therefore,
$$E(X \mid Y=y)=\int_{-\infty}^{\infty}x f_{X|Y}(x \mid y)\,dx=\int_0^{\infty}x\,\frac{f(x,y)}{f_Y(y)}\,dx=\int_0^{\infty}\frac{x e^{-x(y+1)}}{1/(y+1)}\,dx=\frac{1}{y+1}.$$
Note that the last integral, $\int_0^{\infty}x(y+1)e^{-x(y+1)}\,dx$, is $1/(y+1)$ because it is the expected value of an exponential random variable with parameter $y+1$.
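A numerical spot check of that last integral, with an arbitrary choice $y = 0.6$:

```python
# The integral of x*(y+1)*exp(-x*(y+1)) over (0, inf) should equal 1/(y+1),
# the mean of an exponential random variable with rate y + 1; y = 0.6 is arbitrary.
import numpy as np
from scipy.integrate import quad

y = 0.6
value, _ = quad(lambda x: x * (y + 1) * np.exp(-x * (y + 1)), 0, np.inf)
print(value, 1 / (y + 1))   # both 0.625
```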

8. Let $f(x,y)$ be the joint probability density function of $X$ and $Y$. Clearly, $f(x,y)=f_{X|Y}(x \mid y)f_Y(y)$. Thus
$$f_X(x)=\int_{-\infty}^{\infty}f_{X|Y}(x \mid y)f_Y(y)\,dy.$$
Now
$$f_Y(y)=\begin{cases}1 & 0<y<1\\ 0 & \text{elsewhere,}\end{cases}\qquad\text{and}\qquad f_{X|Y}(x \mid y)=\begin{cases}\dfrac{1}{1-y} & 0<y<1,\ y<x<1\\[6pt] 0 & \text{elsewhere.}\end{cases}$$
Therefore, for $0<x<1$,
$$f_X(x)=\int_0^{x}\frac{1}{1-y}\,dy=-\ln(1-x),$$
and hence
$$f_X(x)=\begin{cases}-\ln(1-x) & 0<x<1\\ 0 & \text{elsewhere.}\end{cases}$$
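This marginal can be checked by simulation: draw $Y$ uniform on $(0,1)$, then $X$ uniform on $(Y,1)$, and compare a histogram of $X$ with $-\ln(1-x)$. A minimal sketch:

```python
# Monte Carlo check that the density of X is -ln(1 - x) on (0, 1).
import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(0, 1, size=500_000)
x = rng.uniform(y, 1)                      # X | Y = y  ~  Uniform(y, 1)

edges = np.linspace(0, 1, 11)
hist, _ = np.histogram(x, bins=edges, density=True)
mids = (edges[:-1] + edges[1:]) / 2
for m, h in zip(mids, hist):
    print(round(m, 2), round(h, 3), round(-np.log(1 - m), 3))
```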

9. $f(x,y)$, the joint probability density function of $X$ and $Y$, is given by
$$f(x,y)=\begin{cases}\dfrac{1}{\pi} & \text{if } x^2+y^2\le 1\\[6pt] 0 & \text{otherwise.}\end{cases}$$
Thus
$$f_Y\Big(\frac{4}{5}\Big)=\int_{-\sqrt{1-(16/25)}}^{\sqrt{1-(16/25)}}\frac{1}{\pi}\,dx=\frac{6}{5\pi}.$$
Now
$$f_{X|Y}\Big(x \,\Big|\, \frac{4}{5}\Big)=\frac{f\big(x,\frac{4}{5}\big)}{f_Y\big(\frac{4}{5}\big)}=\frac{5}{6},\quad -\frac{3}{5}\le x\le\frac{3}{5}.$$
Therefore,
$$P\Big(0\le X\le\frac{4}{11}\,\Big|\,Y=\frac{4}{5}\Big)=\int_0^{4/11}\frac{5}{6}\,dx=\frac{10}{33}.$$



10. (a)
$$\int_0^{\infty}\!\!\int_{-x}^{x}c\,e^{-x}\,dy\,dx=1$$
implies that $c=1/2$.

(b)
$$f_{X|Y}(x \mid y)=\frac{f(x,y)}{f_Y(y)}=\frac{(1/2)e^{-x}}{\displaystyle\int_{|y|}^{\infty}(1/2)e^{-x}\,dx}=e^{-x+|y|},\quad x>|y|,$$
$$f_{Y|X}(y \mid x)=\frac{(1/2)e^{-x}}{\displaystyle\int_{-x}^{x}(1/2)e^{-x}\,dy}=\frac{1}{2x},\quad -x<y<x.$$

(c) By part (b), given $X=x$, $Y$ is a uniform random variable over $(-x,x)$. Therefore, $E(Y \mid X=x)=0$ and
$$\operatorname{Var}(Y \mid X=x)=\frac{\big[x-(-x)\big]^2}{12}=\frac{x^2}{3}.$$
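A Monte Carlo sketch for part (c): since $f_X(x)=\int_{-x}^{x}\frac{1}{2}e^{-x}\,dy=xe^{-x}$, $X$ can be sampled as a Gamma$(2,1)$ variable and then $Y \mid X=x$ as a uniform on $(-x,x)$; conditioning on $X$ near an arbitrarily chosen point $x_0=1.5$, the sample mean and variance of $Y$ should be close to $0$ and $x_0^2/3$.

```python
# Monte Carlo check of E(Y | X = x) = 0 and Var(Y | X = x) = x^2/3.
# x0 = 1.5 and the conditioning band width are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)   # marginal density x*e^{-x}
y = rng.uniform(-x, x)                                # Y | X = x ~ Uniform(-x, x)

x0 = 1.5
band = np.abs(x - x0) < 0.02
print(y[band].mean(), y[band].var(), x0**2 / 3)       # ~0, ~0.75, 0.75
```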

11. Let $f(x,y)$ be the joint probability density function of $X$ and $Y$. Since
$$f_{X|Y}(x \mid y)=\begin{cases}\dfrac{1}{\big(20+\frac{2y}{3}\big)-20}=\dfrac{3}{2y} & 20<x<20+\dfrac{2y}{3}\\[8pt] 0 & \text{otherwise,}\end{cases}$$
and
$$f_Y(y)=\begin{cases}1/30 & 0<y<30\\ 0 & \text{elsewhere,}\end{cases}$$
we have that
$$f(x,y)=f_{X|Y}(x \mid y)f_Y(y)=\begin{cases}\dfrac{1}{20y} & 20<x<20+\dfrac{2y}{3},\ 0<y<30\\[8pt] 0 & \text{elsewhere.}\end{cases}$$

12. Let $X$ be the first arrival time. Clearly,
$$P\big(X\le x \mid N(t)=1\big)=\begin{cases}0 & \text{if } x<0\\ 1 & \text{if } x\ge t.\end{cases}$$
For $0\le x<t$,
$$P\big(X\le x \mid N(t)=1\big)=\frac{P\big(X\le x,\ N(t)=1\big)}{P\big(N(t)=1\big)}=\frac{P\big(N(x)=1,\ N(t-x)=0\big)}{P\big(N(t)=1\big)}$$
$$=\frac{P\big(N(x)=1\big)P\big(N(t-x)=0\big)}{P\big(N(t)=1\big)}=\frac{\dfrac{e^{-\lambda x}(\lambda x)^1}{1!}\cdot\dfrac{e^{-\lambda(t-x)}\big(\lambda(t-x)\big)^0}{0!}}{\dfrac{e^{-\lambda t}(\lambda t)^1}{1!}}=\frac{x}{t},$$
where the third equality follows from the independence of the random variables $N(x)$ and $N(t-x)$ (recall that Poisson processes possess independent increments). We have shown that
$$P\big(X\le x \mid N(t)=1\big)=\begin{cases}0 & \text{if } x<0\\ x/t & \text{if } 0\le x<t\\ 1 & \text{if } x\ge t.\end{cases}$$
This shows that the conditional distribution of $X$ given $N(t)=1$ is uniform on $(0,t)$.
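This can be illustrated by simulation: build a Poisson process from exponential inter-arrival times, keep only the runs with exactly one arrival in $(0,t]$, and check that the arrival time behaves like a uniform on $(0,t)$. The rate and $t$ below are arbitrary choices.

```python
# Monte Carlo check: given N(t) = 1, the arrival time is uniform on (0, t).
import numpy as np

rng = np.random.default_rng(3)
lam, t, n_runs = 0.8, 5.0, 100_000
kept = []
for _ in range(n_runs):
    s, arrivals = 0.0, []
    while True:                          # exponential inter-arrival construction
        s += rng.exponential(1 / lam)
        if s > t:
            break
        arrivals.append(s)
    if len(arrivals) == 1:               # condition on exactly one arrival
        kept.append(arrivals[0])

kept = np.array(kept)
print(kept.mean(), t / 2)                            # mean ~ t/2 = 2.5
print(np.quantile(kept, [0.25, 0.5, 0.75]))          # ~ [1.25, 2.5, 3.75]
```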

13. For x ≤ y, the fact that the conditional distribution of X given Y = y is hypergeometric follows from the following:

$$P(X=x \mid Y=y)=\frac{P(X=x,\ Y=y)}{P(Y=y)}=\frac{P(X=x)\,P(Y-X=y-x)}{P(Y=y)}$$
$$=\frac{\dbinom{m}{x}p^x(1-p)^{m-x}\cdot\dbinom{n-m}{y-x}p^{y-x}(1-p)^{(n-m)-(y-x)}}{\dbinom{n}{y}p^y(1-p)^{n-y}}=\frac{\dbinom{m}{x}\dbinom{n-m}{y-x}}{\dbinom{n}{y}}.$$
It must be clear that the conditional distribution of $Y$ given that $X=x$ is binomial with parameters $n-m$ and $p$. That is,
$$P(Y=y \mid X=x)=\binom{n-m}{y-x}p^{y-x}(1-p)^{n-m-y+x},\quad y=x,\ x+1,\ \ldots,\ n-m+x.$$

14. Let $f(x,y)$ be the joint probability density function of $X$ and $Y$. By the solution to Exercise 25, Section 8.1,
$$f(x,y)=\begin{cases}1/2 & |x|+|y|\le 1\\ 0 & \text{elsewhere,}\end{cases}\qquad\text{and}\qquad f_Y(y)=1-|y|,\quad -1\le y\le 1.$$
Hence
$$f_{X|Y}(x \mid y)=\frac{1/2}{1-|y|}=\frac{1}{2\big(1-|y|\big)},\quad -1+|y|\le x\le 1-|y|,\ -1\le y\le 1.$$

15. Let $\lambda$ be the parameter of $\{N(t):\ t\ge 0\}$. The fact that for $s<t$, the conditional distribution of $N(s)$ given $N(t)=n$ is binomial with parameters $n$ and $p=s/t$ follows from the following relations for $i\le n$:
$$P\big(N(s)=i \mid N(t)=n\big)=\frac{P\big(N(s)=i,\ N(t)=n\big)}{P\big(N(t)=n\big)}=\frac{P\big(N(s)=i,\ N(t)-N(s)=n-i\big)}{P\big(N(t)=n\big)}$$
$$=\frac{P\big(N(s)=i\big)P\big(N(t)-N(s)=n-i\big)}{P\big(N(t)=n\big)}=\frac{P\big(N(s)=i\big)P\big(N(t-s)=n-i\big)}{P\big(N(t)=n\big)}$$
$$=\frac{\dfrac{e^{-\lambda s}(\lambda s)^i}{i!}\cdot\dfrac{e^{-\lambda(t-s)}\big(\lambda(t-s)\big)^{n-i}}{(n-i)!}}{\dfrac{e^{-\lambda t}(\lambda t)^n}{n!}}=\binom{n}{i}\Big(\frac{s}{t}\Big)^{i}\Big(1-\frac{s}{t}\Big)^{n-i},$$
where the third equality follows since Poisson processes possess independent increments and the fourth equality follows since Poisson processes are stationary.

For $i\ge k$,
$$P\big(N(t)=i \mid N(s)=k\big)=P\big(N(t)-N(s)=i-k \mid N(s)=k\big)=P\big(N(t)-N(s)=i-k\big)$$
$$=P\big(N(t-s)=i-k\big)=\frac{e^{-\lambda(t-s)}\big(\lambda(t-s)\big)^{i-k}}{(i-k)!}$$
shows that, given $N(s)=k$, the conditional distribution of $N(t)-k$ is Poisson with parameter $\lambda(t-s)$.
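The binomial result can be checked by simulation, using independent increments to generate $N(s)$ and $N(t)$; the parameter values below are arbitrary.

```python
# Monte Carlo check that N(s) | N(t) = n is Binomial(n, s/t).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
lam, s, t, n = 2.0, 1.0, 3.0, 6
n_runs = 400_000

ns = rng.poisson(lam * s, size=n_runs)
nt = ns + rng.poisson(lam * (t - s), size=n_runs)   # independent increments
cond = ns[nt == n]

for i in range(n + 1):
    print(i, round(np.mean(cond == i), 4), round(binom.pmf(i, n, s / t), 4))
```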

16. Let $p(x,y)$ be the joint probability mass function of $X$ and $Y$. Clearly,
$$p_Y(5)=\Big(\frac{12}{13}\Big)^{4}\Big(\frac{1}{13}\Big),$$
and
$$p(x,5)=\begin{cases}\Big(\dfrac{11}{13}\Big)^{x-1}\Big(\dfrac{1}{13}\Big)\Big(\dfrac{12}{13}\Big)^{4-x}\Big(\dfrac{1}{13}\Big) & x<5\\[8pt] 0 & x=5\\[4pt] \Big(\dfrac{11}{13}\Big)^{4}\Big(\dfrac{1}{13}\Big)\Big(\dfrac{12}{13}\Big)^{x-6}\Big(\dfrac{1}{13}\Big) & x>5.\end{cases}$$
Using these, we have that
$$E(X \mid Y=5)=\sum_{x=1}^{\infty}x\,p_{X|Y}(x \mid 5)=\sum_{x=1}^{\infty}x\,\frac{p(x,5)}{p_Y(5)}$$
$$=\sum_{x=1}^{4}\frac{1}{11}\,x\Big(\frac{11}{12}\Big)^{x}+\sum_{x=6}^{\infty}x\Big(\frac{11}{12}\Big)^{4}\Big(\frac{1}{13}\Big)\Big(\frac{12}{13}\Big)^{x-6}$$
$$=0.702932+\Big(\frac{11}{12}\Big)^{4}\Big(\frac{1}{13}\Big)\sum_{y=0}^{\infty}(y+6)\Big(\frac{12}{13}\Big)^{y}$$
$$=0.702932+\Big(\frac{11}{12}\Big)^{4}\Big(\frac{1}{13}\Big)\Bigg[\sum_{y=0}^{\infty}y\Big(\frac{12}{13}\Big)^{y}+6\sum_{y=0}^{\infty}\Big(\frac{12}{13}\Big)^{y}\Bigg]$$
$$=0.702932+\Big(\frac{11}{12}\Big)^{4}\Big(\frac{1}{13}\Big)\Bigg[\frac{12/13}{(1/13)^2}+6\cdot\frac{1}{1-(12/13)}\Bigg]=13.412.$$

Remark: In successive draws of cards from an ordinary deck of 52 cards, one at a time, randomly, and with replacement, the expected value of the number of draws until the first ace is $1/(1/13)=13$. This exercise shows that knowing that the first king occurred on the fifth trial increases, on average, the expected number of trials until the first ace by 0.412 draws.
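The value 13.412 can be reproduced by summing the conditional pmf $p(x,5)/p_Y(5)$ directly, truncating the series at a large cutoff:

```python
# Direct numerical evaluation of E(X | Y = 5) from the conditional pmf.
p_y5 = (12 / 13) ** 4 * (1 / 13)

def p_x5(x):
    if x < 5:
        return (11 / 13) ** (x - 1) * (1 / 13) * (12 / 13) ** (4 - x) * (1 / 13)
    if x == 5:
        return 0.0
    return (11 / 13) ** 4 * (1 / 13) * (12 / 13) ** (x - 6) * (1 / 13)

expectation = sum(x * p_x5(x) / p_y5 for x in range(1, 2000))
print(round(expectation, 3))   # 13.412
```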

17. Let $X$ be the number of blue chips in the first 9 draws and $Y$ be the number of blue chips drawn altogether. We have that
$$E(X \mid Y=10)=\sum_{x=0}^{9}x\,\frac{p(x,10)}{p_Y(10)}=\sum_{x=1}^{9}x\,\frac{\dbinom{9}{x}\Big(\dfrac{12}{22}\Big)^{x}\Big(\dfrac{10}{22}\Big)^{9-x}\cdot\dbinom{9}{10-x}\Big(\dfrac{12}{22}\Big)^{10-x}\Big(\dfrac{10}{22}\Big)^{x-1}}{\dbinom{18}{10}\Big(\dfrac{12}{22}\Big)^{10}\Big(\dfrac{10}{22}\Big)^{8}}$$
$$=\sum_{x=1}^{9}x\,\frac{\dbinom{9}{x}\dbinom{9}{10-x}}{\dbinom{18}{10}}=\frac{9\times 10}{18}=5,$$
where the last sum is $(9\times 10)/18$ because it is the expected value of a hypergeometric random variable with $N=18$, $D=9$, and $n=10$.
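A quick exact check that the final hypergeometric sum equals 5:

```python
# Exact evaluation of sum_{x=1}^{9} x * C(9, x) * C(9, 10 - x) / C(18, 10).
from math import comb

total = sum(x * comb(9, x) * comb(9, 10 - x) / comb(18, 10) for x in range(1, 10))
print(total)   # 5.0
```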

18. Clearly,
$$f_X(x)=\int_x^1 n(n-1)(y-x)^{n-2}\,dy=n(1-x)^{n-1}.$$
Thus
$$f_{Y|X}(y \mid x)=\frac{f(x,y)}{f_X(x)}=\frac{n(n-1)(y-x)^{n-2}}{n(1-x)^{n-1}}=\frac{(n-1)(y-x)^{n-2}}{(1-x)^{n-1}}.$$
Therefore,
$$E(Y \mid X=x)=\int_x^1\frac{y(n-1)(y-x)^{n-2}}{(1-x)^{n-1}}\,dy=\frac{n-1}{(1-x)^{n-1}}\int_x^1 y(y-x)^{n-2}\,dy.$$
But
$$\int_x^1 y(y-x)^{n-2}\,dy=\int_x^1(y-x+x)(y-x)^{n-2}\,dy=\int_x^1(y-x)^{n-1}\,dy+\int_x^1 x(y-x)^{n-2}\,dy=\frac{(1-x)^n}{n}+\frac{x(1-x)^{n-1}}{n-1}.$$
Thus
$$E(Y \mid X=x)=\frac{n-1}{n}(1-x)+x=\frac{n-1}{n}+\frac{1}{n}\,x.$$

19. (a) The area of the triangle is 1/2. So
$$f(x,y)=\begin{cases}2 & \text{if } x\ge 0,\ y\ge 0,\ x+y\le 1\\ 0 & \text{elsewhere.}\end{cases}$$
(b) $f_Y(y)=\displaystyle\int_0^{1-y}2\,dx=2(1-y)$, $0<y<1$. Therefore,
$$f_{X|Y}(x \mid y)=\frac{2}{2(1-y)}=\frac{1}{1-y},\quad 0\le x\le 1-y,\ 0\le y<1.$$
(c) By part (b), given that $Y=y$, $X$ is a uniform random variable over $(0,\,1-y)$. Thus $E(X \mid Y=y)=(1-y)/2$, $0<y<1$.

20. Clearly,
$$p_X(x)=\sum_{y=0}^{x}\frac{1}{e^2\,y!\,(x-y)!}=\frac{1}{e^2\,x!}\sum_{y=0}^{x}\frac{x!}{y!\,(x-y)!}=\frac{e^{-2}}{x!}\sum_{y=0}^{x}\binom{x}{y}=\frac{e^{-2}\cdot 2^x}{x!},$$
where the last equality follows since $\sum_{y=0}^{x}\binom{x}{y}$ is the number of subsets of a set with $x$ elements and hence is equal to $2^x$. Therefore, $p_X(x)$ is Poisson with parameter 2 and so
$$p_{Y|X}(y \mid x)=\frac{p(x,y)}{p_X(x)}=\binom{x}{y}2^{-x}.$$
This yields
$$E(Y \mid X=x)=\sum_{y=0}^{x}y\binom{x}{y}2^{-x}=\sum_{y=0}^{x}y\binom{x}{y}\Big(\frac{1}{2}\Big)^{y}\Big(\frac{1}{2}\Big)^{x-y}=\frac{x}{2},$$
where the last equality follows because the last sum is the expected value of a binomial random variable with parameters $x$ and $1/2$.

21. Let $X$ be the lifetime of the dead battery. We want to calculate $E(X \mid X<s)$. Since $X$ is a continuous random variable, this is the same as $E(X \mid X\le s)$. To find this quantity, let
$$F_{X|X\le s}(t)=P(X\le t \mid X\le s),\qquad\text{and}\qquad f_{X|X\le s}(t)=F'_{X|X\le s}(t).$$
Then
$$E(X \mid X\le s)=\int_0^{\infty}t\,f_{X|X\le s}(t)\,dt.$$
Now
$$F_{X|X\le s}(t)=P(X\le t \mid X\le s)=\frac{P(X\le t,\ X\le s)}{P(X\le s)}=\begin{cases}\dfrac{P(X\le t)}{P(X\le s)} & \text{if } t<s\\[8pt] 1 & \text{if } t\ge s.\end{cases}$$
Differentiating $F_{X|X\le s}(t)$ with respect to $t$, we obtain
$$f_{X|X\le s}(t)=\begin{cases}\dfrac{f(t)}{F(s)} & \text{if } t<s\\[6pt] 0 & \text{otherwise.}\end{cases}$$
This yields
$$E(X \mid X\le s)=\frac{1}{F(s)}\int_0^{s}t f(t)\,dt.$$
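As a numerical illustration, assume, purely for the check, that the lifetime is exponential with rate 1 and that $s=2$; the formula then agrees with a Monte Carlo estimate of $E(X \mid X\le s)$.

```python
# Check of E(X | X <= s) = (1/F(s)) * integral_0^s t f(t) dt for an assumed
# exponential lifetime (rate 1) and s = 2; both values are arbitrary choices.
import numpy as np
from scipy.integrate import quad

rate, s = 1.0, 2.0
f = lambda t: rate * np.exp(-rate * t)
F = lambda t: 1 - np.exp(-rate * t)

formula = quad(lambda t: t * f(t), 0, s)[0] / F(s)

rng = np.random.default_rng(5)
x = rng.exponential(1 / rate, size=1_000_000)
print(formula, x[x <= s].mean())   # both approximately 0.687
```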

8.4 TRANSFORMATIONS OF TWO RANDOM VARIABLES

1. Let $f$ be the joint probability density function of $X$ and $Y$. Clearly,
$$f(x,y)=\begin{cases}1 & 0<x<1,\ 0<y<1\\ 0 & \text{elsewhere.}\end{cases}$$
The system of two equations in two unknowns
$$\begin{cases}-2\ln x=u\\ -2\ln y=v\end{cases}$$
defines a one-to-one transformation of
$$R=\big\{(x,y):\ 0<x<1,\ 0<y<1\big\}$$
onto the region
$$Q=\big\{(u,v):\ u>0,\ v>0\big\}.$$
It has the unique solution $x=e^{-u/2}$, $y=e^{-v/2}$. Hence
$$J=\begin{vmatrix}-\dfrac{1}{2}e^{-u/2} & 0\\[8pt] 0 & -\dfrac{1}{2}e^{-v/2}\end{vmatrix}=\frac{1}{4}e^{-(u+v)/2}\neq 0.$$
By Theorem 8.8, $g(u,v)$, the joint probability density function of $U$ and $V$, is
$$g(u,v)=f\big(e^{-u/2},e^{-v/2}\big)\,\frac{1}{4}e^{-(u+v)/2}=\frac{1}{4}e^{-(u+v)/2},\quad u>0,\ v>0.$$
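Equivalently, $g(u,v)$ factors as $\big[\frac{1}{2}e^{-u/2}\big]\big[\frac{1}{2}e^{-v/2}\big]$, so $U$ and $V$ are independent exponential random variables with mean 2. A quick simulation check:

```python
# Monte Carlo check: U = -2 ln X and V = -2 ln Y are independent exponentials
# with mean 2 when X, Y are independent Uniform(0, 1).
import numpy as np

rng = np.random.default_rng(6)
x, y = rng.uniform(size=(2, 500_000))
u, v = -2 * np.log(x), -2 * np.log(y)

print(u.mean(), v.mean())                    # both approximately 2
print(np.mean(u * v), u.mean() * v.mean())   # approximately equal (independence)
print(np.mean(u > 3), np.exp(-3 / 2))        # P(U > 3) vs exp(-3/2)
```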

2. Let $f(x,y)$ be the joint probability density function of $X$ and $Y$. Clearly, $f(x,y)=f_1(x)f_2(y)$, $x>0$, $y>0$. Let $V=X$ and let $g(u,v)$ be the joint probability density function of $U$ and $V$. The probability density function of $U$ is $g_U(u)$, its marginal density function. The system of two equations in two unknowns
$$\begin{cases}x/y=u\\ x=v\end{cases}$$
defines a one-to-one transformation of
$$R=\big\{(x,y):\ x>0,\ y>0\big\}$$
onto the region
$$Q=\big\{(u,v):\ u>0,\ v>0\big\}.$$
It has the unique solution $x=v$, $y=v/u$. Hence
$$J=\begin{vmatrix}0 & 1\\[4pt] -\dfrac{v}{u^2} & \dfrac{1}{u}\end{vmatrix}=\frac{v}{u^2}\neq 0.$$
By Theorem 8.8,
$$g(u,v)=f\Big(v,\frac{v}{u}\Big)\Big|\frac{v}{u^2}\Big|=\frac{v}{u^2}\,f\Big(v,\frac{v}{u}\Big)=\frac{v}{u^2}\,f_1(v)f_2\Big(\frac{v}{u}\Big),\quad u>0,\ v>0.$$
Therefore,
$$g_U(u)=\int_0^{\infty}\frac{v}{u^2}\,f_1(v)f_2\Big(\frac{v}{u}\Big)\,dv,\quad u>0.$$

3. Let $g(r,\theta)$ be the joint probability density function of $R$ and $\Theta$. We will show that $g(r,\theta)=g_R(r)g_\Theta(\theta)$. This proves the surprising result that $R$ and $\Theta$ are independent. Let $f(x,y)$ be the joint probability density function of $X$ and $Y$. Clearly,
$$f(x,y)=\frac{1}{2\pi}\,e^{-(x^2+y^2)/2},\quad -\infty<x<\infty,\ -\infty<y<\infty.$$
Let $R$ be the entire $xy$-plane excluding the set of points on the $x$-axis with $x\ge 0$. This causes no problems since
$$P(Y=0,\ X\ge 0)=P(Y=0)P(X\ge 0)=0.$$
The system of two equations in two unknowns
$$\begin{cases}\sqrt{x^2+y^2}=r\\[4pt] \arctan\dfrac{y}{x}=\theta\end{cases}$$
defines a one-to-one transformation of $R$ onto the region
$$Q=\big\{(r,\theta):\ r>0,\ 0<\theta<2\pi\big\}.$$
It has the unique solution
$$x=r\cos\theta,\qquad y=r\sin\theta.$$
Hence
$$J=\begin{vmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{vmatrix}=r\neq 0.$$
By Theorem 8.8, $g(r,\theta)$ is given by
$$g(r,\theta)=f(r\cos\theta,\ r\sin\theta)\,|r|=\frac{1}{2\pi}\,r e^{-r^2/2},\quad 0<\theta<2\pi,\ r>0.$$
Now
$$g_R(r)=\int_0^{2\pi}\frac{1}{2\pi}\,r e^{-r^2/2}\,d\theta=r e^{-r^2/2},\quad r>0,$$
and
$$g_\Theta(\theta)=\int_0^{\infty}\frac{1}{2\pi}\,r e^{-r^2/2}\,dr=\frac{1}{2\pi},\quad 0<\theta<2\pi.$$
Therefore, $g(r,\theta)=g_R(r)g_\Theta(\theta)$, showing that $R$ and $\Theta$ are independent random variables.

The formula for $g_\Theta(\theta)$ indicates that $\Theta$ is a uniform random variable over the interval $(0,2\pi)$.

The probability density function obtained for $R$ is called Rayleigh.
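A simulation check of the independence and of the two marginal densities:

```python
# Monte Carlo check: for independent standard normals X, Y, the radius follows
# the Rayleigh density r*exp(-r^2/2), the angle is uniform on (0, 2*pi), and the
# two are (approximately) uncorrelated.
import numpy as np

rng = np.random.default_rng(7)
x, y = rng.standard_normal((2, 500_000))
r = np.hypot(x, y)
theta = np.mod(np.arctan2(y, x), 2 * np.pi)   # angle placed in (0, 2*pi)

print(np.mean(r > 1.0), np.exp(-0.5))         # P(R > 1) = exp(-1/2) for Rayleigh
print(theta.mean(), np.pi)                    # Uniform(0, 2*pi) has mean pi
print(np.corrcoef(r, theta)[0, 1])            # approximately 0 (independence)
```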

4. Method 1: By the convolution theorem (Theorem 8.9), g, the probability density function of
