

4.4 EXPECTATIONS OF DISCRETE RANDOM VARIABLES

To find $P(A)$, note that for $A$ to occur, there are $k-1$ possibilities for one of the first $k-1$ teams to be a female-female team, two possibilities for the $k$th team (male-female and female-male), and one possibility for the remaining teams to be all male-male teams. Therefore,
$$P(A) = \frac{2(k-1)}{\binom{26}{3}}.$$
To find $P(B)$, note that for $B$ to occur, there is one possibility for the first $k-1$ teams to be all male-male, and two possibilities for the $k$th team: male-female and female-male. The number of possibilities for the remaining $13-k$ teams is equal to the number of distinguishable permutations of two F's and $(26-2k)-2$ M's, which, by Theorem 2.4, is
$$\frac{(26-2k)!}{2!\,(26-2k-2)!} = \binom{26-2k}{2}.$$
Therefore,
$$P(B) = \frac{2\binom{26-2k}{2}}{\binom{26}{3}}.$$
Hence, for $1 \le k \le 13$,
$$P(X=2k) = P(A) + P(B) = \frac{2(k-1) + 2\binom{26-2k}{2}}{\binom{26}{3}} = \frac{1}{650}k^2 - \frac{1}{26}k + \frac{81}{325}.$$
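As a quick numerical check (not part of the original solution; a sketch that assumes Python 3.8+ for math.comb), the following confirms that this mass function sums to 1 and that the combinatorial expression agrees with the simplified closed form:

from math import comb

def p_comb(k):
    # P(X = 2k) written with binomial coefficients, as in the solution above
    return (2 * (k - 1) + 2 * comb(26 - 2 * k, 2)) / comb(26, 3)

def p_closed(k):
    # equivalently (1/650)k^2 - (1/26)k + 81/325 = (k^2 - 25k + 162)/650
    return (k ** 2 - 25 * k + 162) / 650

assert all(abs(p_comb(k) - p_closed(k)) < 1e-12 for k in range(1, 14))
print(sum(p_comb(k) for k in range(1, 14)))  # 1.0, up to floating-point rounding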

3. The expected value of the winning amount is
$$30\cdot\frac{4000}{2{,}000{,}000} + 800\cdot\frac{500}{2{,}000{,}000} + 1{,}200{,}000\cdot\frac{1}{2{,}000{,}000} = 0.86.$$
Considering the cost of the ticket, the expected value of the player's gain in one game is
$$-1 + 0.86 = -0.14.$$

4. Let X be the amount that the player gains in one game; then
$$P(X = 4) = \frac{\binom{4}{3}\binom{6}{1}}{\binom{10}{4}} = 0.114, \qquad P(X = 9) = \frac{1}{\binom{10}{4}} = 0.005,$$
and $P(X = -1) = 1 - 0.114 - 0.005 = 0.881$. Thus
$$E(X) = -1(0.881) + 4(0.114) + 9(0.005) = -0.38.$$
Therefore, on the average, the player loses 38 cents per game.
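For readers who want to reproduce the arithmetic without the intermediate rounding, here is a short sketch (my own check, standard library only):

from math import comb

total = comb(10, 4)                    # equally likely selections
p4 = comb(4, 3) * comb(6, 1) / total   # P(X = 4)
p9 = 1 / total                         # P(X = 9)
pm1 = 1 - p4 - p9                      # P(X = -1)

ev = -1 * pm1 + 4 * p4 + 9 * p9
print(round(ev, 4))   # about -0.381, i.e., a 38-cent average loss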

5. Let X be the net gain in one play of the game. The set of possible values of X is $\{-8, -4, 0, 6, 10\}$. The probabilities associated with these values are
$$p(-8) = p(0) = \frac{1}{\binom{5}{2}} = \frac{1}{10}, \qquad p(-4) = \frac{\binom{2}{1}\binom{2}{1}}{\binom{5}{2}} = \frac{4}{10},$$
and
$$p(6) = p(10) = \frac{\binom{2}{1}}{\binom{5}{2}} = \frac{2}{10}.$$
Hence
$$E(X) = -8\cdot\frac{1}{10} - 4\cdot\frac{4}{10} + 0\cdot\frac{1}{10} + 6\cdot\frac{2}{10} + 10\cdot\frac{2}{10} = \frac{4}{5}.$$
Since $E(X) > 0$, the game is not fair.

6. The expected number of defective items is
$$\sum_{i=0}^{3} i\cdot\frac{\binom{5}{i}\binom{15}{3-i}}{\binom{20}{3}} = 0.75.$$
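The value 0.75 is just the hypergeometric mean $3\cdot(5/20)$; a minimal check (a sketch using math.comb, not part of the original solution):

from math import comb

ev = sum(i * comb(5, i) * comb(15, 3 - i) / comb(20, 3) for i in range(4))
print(ev)   # 0.75, matching 3 * 5 / 20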

7. For $i = 4, 5, 6, 7$, let $X_i$ be the profit if $i$ magazines are ordered. Then
$$E(X_4) = \frac{4a}{3},$$
$$E(X_5) = \frac{2a}{3}\cdot\frac{6}{18} + \frac{5a}{3}\cdot\frac{12}{18} = \frac{4a}{3},$$
$$E(X_6) = 0\cdot\frac{6}{18} + a\cdot\frac{5}{18} + \frac{6a}{3}\cdot\frac{7}{18} = \frac{19a}{18},$$
$$E(X_7) = -\frac{2a}{3}\cdot\frac{6}{18} + \frac{a}{3}\cdot\frac{5}{18} + \frac{4a}{3}\cdot\frac{4}{18} + \frac{7a}{3}\cdot\frac{3}{18} = \frac{10a}{18}.$$
Since $4a/3 > 19a/18$ and $4a/3 > 10a/18$, either 4 or 5 magazines should be ordered to maximize the profit in the long run.
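Because every expectation is a multiple of $a$, the comparison is unchanged if we set $a = 1$; the sketch below (my own check, not from the text) simply tabulates the four weighted sums above:

a = 1.0   # arbitrary; all four expectations scale linearly with a

E4 = 4 * a / 3
E5 = (2 * a / 3) * (6 / 18) + (5 * a / 3) * (12 / 18)
E6 = 0 * (6 / 18) + a * (5 / 18) + (6 * a / 3) * (7 / 18)
E7 = ((-2 * a / 3) * (6 / 18) + (a / 3) * (5 / 18)
      + (4 * a / 3) * (4 / 18) + (7 * a / 3) * (3 / 18))

print(E4, E5, E6, E7)   # 4a/3, 4a/3, 19a/18, 10a/18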

8. (a) $\displaystyle\sum_{x=1}^{\infty} \frac{6}{\pi^2 x^2} = \frac{6}{\pi^2}\sum_{x=1}^{\infty}\frac{1}{x^2} = \frac{6}{\pi^2}\cdot\frac{\pi^2}{6} = 1.$

(b) $\displaystyle E(X) = \sum_{x=1}^{\infty} x\cdot\frac{6}{\pi^2 x^2} = \frac{6}{\pi^2}\sum_{x=1}^{\infty}\frac{1}{x} = \infty.$

9. (a) $\displaystyle\sum_{x=-2}^{2} p(x) = \frac{9}{27} + \frac{4}{27} + \frac{1}{27} + \frac{4}{27} + \frac{9}{27} = 1.$

(b) $E(X) = \sum_{x=-2}^{2} x\,p(x) = 0$, $E(|X|) = \sum_{x=-2}^{2} |x|\,p(x) = 44/27$, and $E(X^2) = \sum_{x=-2}^{2} x^2 p(x) = 80/27$. Hence
$$E(2X^2 - 5X + 7) = 2(80/27) - 5(0) + 7 = 349/27.$$

10. Let R be the radius of the randomly selected disk; then
$$E(2\pi R) = 2\pi\sum_{i=1}^{10} i\cdot\frac{1}{10} = 11\pi.$$

11. $p(x)$, the probability mass function of $X$, is given by

  x      −3    0    3    4
  p(x)   3/8  1/8  1/4  1/4

Hence
$$E(X) = -3\cdot\frac{3}{8} + 0\cdot\frac{1}{8} + 3\cdot\frac{1}{4} + 4\cdot\frac{1}{4} = \frac{5}{8}, \qquad E(X^2) = 9\cdot\frac{3}{8} + 0\cdot\frac{1}{8} + 9\cdot\frac{1}{4} + 16\cdot\frac{1}{4} = \frac{77}{8},$$
$$E(|X|) = 3\cdot\frac{3}{8} + 0\cdot\frac{1}{8} + 3\cdot\frac{1}{4} + 4\cdot\frac{1}{4} = \frac{23}{8}, \qquad E(X^2 - 2|X|) = \frac{77}{8} - 2\cdot\frac{23}{8} = \frac{31}{8},$$
$$E(X|X|) = -9\cdot\frac{3}{8} + 0\cdot\frac{1}{8} + 9\cdot\frac{1}{4} + 16\cdot\frac{1}{4} = \frac{23}{8}.$$

12. $E(X) = \sum_{i=1}^{10} i\cdot\frac{1}{10} = \frac{11}{2}$ and $E(X^2) = \sum_{i=1}^{10} i^2\cdot\frac{1}{10} = \frac{77}{2}$. So
$$E\bigl[X(11 - X)\bigr] = E(11X - X^2) = 11\cdot\frac{11}{2} - \frac{77}{2} = 22.$$

13. Let X be the number of different birthdays; we have
$$P(X = 4) = \frac{365\times 364\times 363\times 362}{365^4} = 0.9836,$$
$$P(X = 3) = \frac{\binom{4}{2}\,(365\times 364\times 363)}{365^4} = 0.0163,$$
$$P(X = 2) = \frac{\frac{1}{2}\binom{4}{2}\,(365\times 364) + \binom{4}{3}\,(365\times 364)}{365^4} = 0.00005,$$
$$P(X = 1) = \frac{365}{365^4} = 0.000000021.$$
(For $X = 2$ the four people split either into two pairs, which can happen in $\frac{1}{2}\binom{4}{2} = 3$ ways, or into a triple and a single, which can happen in $\binom{4}{3} = 4$ ways.) Thus
$$E(X) = 4(0.9836) + 3(0.0163) + 2(0.00005) + 1(0.000000021) = 3.98.$$
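The same expectation can also be obtained by summing day-by-day indicators, $E(X) = 365\bigl[1 - (364/365)^4\bigr]$; the sketch below (my own check, not part of the original solution) compares the two routes:

n_days, n_people = 365, 4

# indicator argument: each particular day is used with probability 1 - (364/365)^4
e_indicator = n_days * (1 - ((n_days - 1) / n_days) ** n_people)

# mass-function route, with the two-pair/triple counts used above
p4 = 365 * 364 * 363 * 362 / 365 ** 4
p3 = 6 * 365 * 364 * 363 / 365 ** 4
p2 = (3 + 4) * 365 * 364 / 365 ** 4
p1 = 365 / 365 ** 4
e_pmf = 4 * p4 + 3 * p3 + 2 * p2 + 1 * p1

print(round(e_indicator, 4), round(e_pmf, 4))   # both about 3.9836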

14. Let X be the number of children they should continue to have until they have one of each sex. For $i \ge 2$, clearly, $X = i$ if and only if either all of their first $i-1$ children are boys and the $i$th child is a girl, or all of their first $i-1$ children are girls and the $i$th child is a boy. Therefore, by independence,
$$P(X = i) = \Bigl(\frac{1}{2}\Bigr)^{i-1}\cdot\frac{1}{2} + \Bigl(\frac{1}{2}\Bigr)^{i-1}\cdot\frac{1}{2} = \Bigl(\frac{1}{2}\Bigr)^{i-1}, \qquad i \ge 2.$$
So
$$E(X) = \sum_{i=2}^{\infty} i\Bigl(\frac{1}{2}\Bigr)^{i-1} = -1 + \sum_{i=1}^{\infty} i\Bigl(\frac{1}{2}\Bigr)^{i-1} = -1 + \frac{1}{(1 - 1/2)^2} = 3.$$
Note that for $|r| < 1$, $\sum_{i=1}^{\infty} i r^{i-1} = 1/\bigl[(1-r)^2\bigr]$.
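A partial sum makes the value 3 concrete; the following sketch (my own check, standard library only) truncates the series at a large index:

# E(X) = sum over i >= 2 of i * (1/2)^(i-1); truncating at i = 60 already agrees
# with 3 to double precision
e_partial = sum(i * 0.5 ** (i - 1) for i in range(2, 61))
print(e_partial)   # 3.0, up to floating-point rounding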

15. Let $A_j$ be the event that the person belongs to a family with $j$ children. Then
$$P(K = k) = \sum_{j=0}^{c} P(K = k \mid A_j)P(A_j) = \sum_{j=k}^{c} \frac{\alpha_j}{j}.$$
Therefore,
$$E(K) = \sum_{k=1}^{c} k\,P(K = k) = \sum_{k=1}^{c} k\sum_{j=k}^{c} \frac{\alpha_j}{j} = \sum_{k=1}^{c}\sum_{j=k}^{c} \frac{k\,\alpha_j}{j}.$$

16. Let X be the number of cards to be turned face up until an ace appears. Let A be the event that no ace appears among the first $i-1$ cards that are turned face up. Let B be the event that the $i$th card turned face up is an ace. We have
$$P(X = i) = P(AB) = P(B \mid A)P(A) = \frac{4}{52 - (i-1)}\cdot\frac{\binom{48}{i-1}}{\binom{52}{i-1}}.$$
Therefore,
$$E(X) = \sum_{i=1}^{49} i\cdot\frac{4\binom{48}{i-1}}{\binom{52}{i-1}(53 - i)} = 10.6.$$

To some, this answer might be counterintuitive.
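The sum can be evaluated exactly; the short computation below (my own check, not part of the text) uses Fraction to avoid any rounding and reproduces 10.6:

from fractions import Fraction
from math import comb

# E(X) = sum over i of i * P(X = i), with P(X = i) as displayed above
e = sum(Fraction(i * 4 * comb(48, i - 1), comb(52, i - 1) * (53 - i))
        for i in range(1, 50))
print(e, float(e))   # 53/5 = 10.6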

17. Let X be the largest number selected. Clearly,
$$P(X = i) = P(X \le i) - P(X \le i-1) = \Bigl(\frac{i}{N}\Bigr)^{n} - \Bigl(\frac{i-1}{N}\Bigr)^{n}, \qquad i = 1, 2, \ldots, N.$$
Hence
$$E(X) = \sum_{i=1}^{N}\Bigl[\frac{i^{n+1}}{N^n} - \frac{i(i-1)^n}{N^n}\Bigr] = \frac{1}{N^n}\sum_{i=1}^{N}\bigl[i^{n+1} - i(i-1)^n\bigr] = \frac{1}{N^n}\sum_{i=1}^{N}\bigl[i^{n+1} - (i-1)^{n+1} - (i-1)^n\bigr] = \frac{N^{n+1} - \sum_{i=1}^{N}(i-1)^n}{N^n}.$$
For large N,
$$\sum_{i=1}^{N}(i-1)^n \approx \int_{0}^{N} x^n\,dx = \frac{N^{n+1}}{n+1}.$$
Therefore,
$$E(X) \approx \frac{N^{n+1} - \dfrac{N^{n+1}}{n+1}}{N^n} = \frac{nN}{n+1}.$$
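A numerical comparison of the exact expectation with the large-N approximation $nN/(n+1)$ (a sketch; the values of N and n below are chosen arbitrarily for illustration):

N, n = 1000, 3   # population size and number of selections (illustrative choice)

exact = sum(i * ((i / N) ** n - ((i - 1) / N) ** n) for i in range(1, N + 1))
approx = n * N / (n + 1)
print(round(exact, 2), approx)   # about 750.5 versus 750.0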

18. (a) Note that
$$\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}.$$
So
$$\sum_{n=1}^{k}\frac{1}{n(n+1)} = \sum_{n=1}^{k}\Bigl(\frac{1}{n} - \frac{1}{n+1}\Bigr) = 1 - \frac{1}{k+1}.$$
This implies that
$$\sum_{n=1}^{\infty} p(n) = \lim_{k\to\infty}\sum_{n=1}^{k}\frac{1}{n(n+1)} = 1 - \lim_{k\to\infty}\frac{1}{k+1} = 1.$$
Therefore, p is a probability mass function.

(b) $$E(X) = \sum_{n=1}^{\infty} n\,p(n) = \sum_{n=1}^{\infty}\frac{1}{n+1} = \infty,$$
where the last equality follows since we know from calculus that the harmonic series, $1 + 1/2 + 1/3 + \cdots$, is divergent. Hence E(X) does not exist.

19. By the solution to Exercise 16, Section 4.3, it should be clear that for $1 \le k \le n$,
$$P(X = 2k) = \frac{2(k-1) + 2\binom{2n-2k}{2}}{\binom{2n}{3}}.$$
Hence
$$E(X) = \sum_{k=1}^{n} 2k\,P(X = 2k) = \sum_{k=1}^{n}\frac{4k(k-1) + 4k\binom{2n-2k}{2}}{\binom{2n}{3}}$$
$$= \frac{4}{\binom{2n}{3}}\Bigl[2\sum_{k=1}^{n}k^3 - (4n-2)\sum_{k=1}^{n}k^2 + (2n^2 - n - 1)\sum_{k=1}^{n}k\Bigr]$$
$$= \frac{4}{\binom{2n}{3}}\Bigl[2\cdot\frac{n^2(n+1)^2}{4} - (4n-2)\cdot\frac{n(n+1)(2n+1)}{6} + (2n^2 - n - 1)\cdot\frac{n(n+1)}{2}\Bigr] = \frac{(n+1)^2}{2n-1}.$$
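The closed form $(n+1)^2/(2n-1)$ can be checked against the mass function directly; a sketch (my own verification, using math.comb) for a few values of n:

from math import comb

def e_direct(n):
    # E(X) = sum over k of 2k * P(X = 2k)
    return sum(2 * k * (2 * (k - 1) + 2 * comb(2 * n - 2 * k, 2)) / comb(2 * n, 3)
               for k in range(1, n + 1))

for n in (3, 5, 13):
    print(n, e_direct(n), (n + 1) ** 2 / (2 * n - 1))   # the two columns agree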

4.5 VARIANCES AND MOMENTS OF DISCRETE RANDOM VARIABLES

1. On average, in the long run, the two businesses have the same profit. The one that has a profit with lower standard deviation should be chosen by Mr. Jones because he’s interested in steady income. Therefore, he should choose the first business.

2. The one with lower standard deviation, namely, the second device.

3. $E(X) = \sum_{x=-3}^{3} x\,p(x) = -1$ and $E(X^2) = \sum_{x=-3}^{3} x^2 p(x) = 4$. Therefore, $\mathrm{Var}(X) = 4 - (-1)^2 = 3$.

4. $p$, the probability mass function of $X$, is given by

  x      −3    0    6
  p(x)   3/8  3/8  2/8

Thus
$$E(X) = -\frac{9}{8} + \frac{12}{8} = \frac{3}{8}, \qquad E(X^2) = \frac{27}{8} + \frac{72}{8} = \frac{99}{8},$$
$$\mathrm{Var}(X) = \frac{99}{8} - \frac{9}{64} = \frac{783}{64} = 12.234, \qquad \sigma_X = \sqrt{12.234} = 3.498.$$

5. By straightforward calculations,
$$E(X) = \sum_{i=1}^{N} i\cdot\frac{1}{N} = \frac{1}{N}\cdot\frac{N(N+1)}{2} = \frac{N+1}{2},$$
$$E(X^2) = \sum_{i=1}^{N} i^2\cdot\frac{1}{N} = \frac{1}{N}\cdot\frac{N(N+1)(2N+1)}{6} = \frac{(N+1)(2N+1)}{6},$$
$$\mathrm{Var}(X) = \frac{(N+1)(2N+1)}{6} - \frac{(N+1)^2}{4} = \frac{N^2-1}{12}, \qquad \sigma_X = \sqrt{\frac{N^2-1}{12}}.$$

6. Clearly,
$$E(X) = \sum_{i=0}^{5} i\cdot\frac{\binom{13}{i}\binom{39}{5-i}}{\binom{52}{5}} = 1.25,$$
$$E(X^2) = \sum_{i=0}^{5} i^2\cdot\frac{\binom{13}{i}\binom{39}{5-i}}{\binom{52}{5}} = 2.426.$$
Therefore, $\mathrm{Var}(X) = 2.426 - (1.25)^2 = 0.864$, and hence $\sigma_X = \sqrt{0.864} = 0.9295$.
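These are the hypergeometric mean $5\cdot(13/52)$ and variance $5\cdot(13/52)(39/52)(47/51)$; a short check from the mass function (a sketch, not part of the original solution):

from math import comb

pmf = [comb(13, i) * comb(39, 5 - i) / comb(52, 5) for i in range(6)]
mean = sum(i * p for i, p in enumerate(pmf))
var = sum(i ** 2 * p for i, p in enumerate(pmf)) - mean ** 2
print(round(mean, 3), round(var, 3))   # 1.25 and about 0.864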

7. By the Corollary of Theorem 4.2, $E(X^2 - 2X) = 3$ implies that $E(X^2) - 2E(X) = 3$. Substituting $E(X) = 1$ in this relation gives $E(X^2) = 5$. Hence, by Theorem 4.3,
$$\mathrm{Var}(X) = E(X^2) - \bigl[E(X)\bigr]^2 = 5 - 1 = 4.$$
By Theorem 4.5,
$$\mathrm{Var}(-3X + 5) = 9\,\mathrm{Var}(X) = 9\times 4 = 36.$$

8. Let X be Harry's net gain. Then
$$X = \begin{cases} -2 & \text{with probability } 1/8\\ 0.25 & \text{with probability } 3/8\\ 0.50 & \text{with probability } 3/8\\ 0.75 & \text{with probability } 1/8. \end{cases}$$
Thus
$$E(X) = -2\cdot\frac{1}{8} + 0.25\cdot\frac{3}{8} + 0.50\cdot\frac{3}{8} + 0.75\cdot\frac{1}{8} = 0.125,$$
$$E(X^2) = (-2)^2\cdot\frac{1}{8} + 0.25^2\cdot\frac{3}{8} + 0.50^2\cdot\frac{3}{8} + 0.75^2\cdot\frac{1}{8} = 0.6875.$$
These show that the expected value of Harry's net gain is 12.5 cents. Its variance is
$$\mathrm{Var}(X) = 0.6875 - 0.125^2 = 0.671875.$$
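A two-line check of these figures (my own sketch, not part of the original solution):

outcomes = [(-2, 1 / 8), (0.25, 3 / 8), (0.50, 3 / 8), (0.75, 1 / 8)]
mean = sum(x * p for x, p in outcomes)
var = sum(x ** 2 * p for x, p in outcomes) - mean ** 2
print(mean, var)   # 0.125 and 0.671875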

9. Note that $E(X) = E(Y) = 0$. Clearly,
$$P\bigl(|X - 0| \le t\bigr) = \begin{cases} 0 & \text{if } t < 1\\ 1 & \text{if } t \ge 1, \end{cases} \qquad P\bigl(|Y - 0| \le t\bigr) = \begin{cases} 0 & \text{if } t < 10\\ 1 & \text{if } t \ge 10. \end{cases}$$
These relations clearly show that for all $t > 0$,
$$P\bigl(|Y - 0| \le t\bigr) \le P\bigl(|X - 0| \le t\bigr).$$
Therefore, X is more concentrated about 0 than Y is.

10. (a) Let X be the number of trials required to open the door. Clearly,
$$P(X = x) = \Bigl(1 - \frac{1}{n}\Bigr)^{x-1}\frac{1}{n}, \qquad x = 1, 2, 3, \ldots.$$
Thus
$$E(X) = \sum_{x=1}^{\infty} x\Bigl(1 - \frac{1}{n}\Bigr)^{x-1}\frac{1}{n} = \frac{1}{n}\sum_{x=1}^{\infty} x\Bigl(1 - \frac{1}{n}\Bigr)^{x-1}. \tag{10}$$
We know from calculus that for all $r$ with $|r| < 1$,
$$\sum_{x=1}^{\infty} x r^{x-1} = \frac{1}{(1-r)^2}. \tag{11}$$
Thus
$$\sum_{x=1}^{\infty} x\Bigl(1 - \frac{1}{n}\Bigr)^{x-1} = \frac{1}{\Bigl[1 - \bigl(1 - \frac{1}{n}\bigr)\Bigr]^2} = n^2. \tag{12}$$
Substituting (12) in (10), we obtain $E(X) = n$. To calculate $\mathrm{Var}(X)$, first we find $E(X^2)$. We have
$$E(X^2) = \sum_{x=1}^{\infty} x^2\Bigl(1 - \frac{1}{n}\Bigr)^{x-1}\frac{1}{n} = \frac{1}{n}\sum_{x=1}^{\infty} x^2\Bigl(1 - \frac{1}{n}\Bigr)^{x-1}. \tag{13}$$
To calculate this sum, we multiply both sides of (11) by $r$ and then differentiate with respect to $r$; we get
$$\sum_{x=1}^{\infty} x^2 r^{x-1} = \frac{1 + r}{(1-r)^3}.$$
Using this relation in (13), we obtain
$$E(X^2) = \frac{1}{n}\cdot\frac{1 + \bigl(1 - \frac{1}{n}\bigr)}{\Bigl[1 - \bigl(1 - \frac{1}{n}\bigr)\Bigr]^3} = 2n^2 - n.$$
Therefore,
$$\mathrm{Var}(X) = (2n^2 - n) - n^2 = n(n - 1).$$

(b) Let $A_i$ be the event that on the $i$th trial the door opens. Let X be the number of trials required to open the door. Then
$$P(X = 1) = \frac{1}{n},$$
$$P(X = 2) = P(A_1^c A_2) = P(A_2 \mid A_1^c)P(A_1^c) = \frac{1}{n-1}\cdot\frac{n-1}{n} = \frac{1}{n},$$
$$P(X = 3) = P(A_1^c A_2^c A_3) = P(A_3 \mid A_2^c A_1^c)P(A_2^c A_1^c) = P(A_3 \mid A_2^c A_1^c)P(A_2^c \mid A_1^c)P(A_1^c) = \frac{1}{n-2}\cdot\frac{n-2}{n-1}\cdot\frac{n-1}{n} = \frac{1}{n}.$$
Similarly, $P(X = i) = 1/n$ for $1 \le i \le n$. Therefore, X is a random number selected from $\{1, 2, 3, \ldots, n\}$. By Exercise 5, $E(X) = (n+1)/2$ and $\mathrm{Var}(X) = (n^2 - 1)/12$.
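The contrast between the two strategies is easy to see in simulation; the sketch below (my own illustration, with n = 10 keys chosen arbitrarily) estimates both expectations:

import random

def trials_with_replacement(n):
    # keys are tried completely at random, possibly repeating failed ones
    count = 0
    while True:
        count += 1
        if random.randrange(n) == 0:      # key 0 plays the role of the right key
            return count

def trials_without_replacement(n):
    # failed keys are set aside, so the right key is found within n trials
    keys = list(range(n))
    random.shuffle(keys)
    return keys.index(0) + 1

n, reps = 10, 100_000
print(sum(trials_with_replacement(n) for _ in range(reps)) / reps)     # about n = 10
print(sum(trials_without_replacement(n) for _ in range(reps)) / reps)  # about (n+1)/2 = 5.5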

11. For $E(X^3)$ to exist, we must have $E\bigl(|X^3|\bigr) < \infty$. Now
$$\sum_{n=1}^{\infty} x_n^3\,p(x_n) = \frac{6}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n n\sqrt{n}}{n^2} = \frac{6}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{\sqrt{n}} < \infty,$$
whereas
$$E\bigl(|X^3|\bigr) = \sum_{n=1}^{\infty} |x_n^3|\,p(x_n) = \frac{6}{\pi^2}\sum_{n=1}^{\infty}\frac{n\sqrt{n}}{n^2} = \frac{6}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{\sqrt{n}} = \infty.$$
Hence $E(X^3)$ does not exist.

12. For $0 < s < r$, clearly,
$$|x|^s \le \max\bigl\{1, |x|^r\bigr\} \le 1 + |x|^r, \qquad \forall x \in \mathbb{R}.$$
Let A be the set of possible values of X and p be its probability mass function. Since the $r$th absolute moment of X exists, $\sum_{x\in A} |x|^r p(x) < \infty$. Now
$$\sum_{x\in A} |x|^s p(x) \le \sum_{x\in A}\bigl(1 + |x|^r\bigr)p(x) = \sum_{x\in A} p(x) + \sum_{x\in A} |x|^r p(x) = 1 + \sum_{x\in A} |x|^r p(x) < \infty,$$
which implies that the absolute moment of order s of X also exists.

13. $\mathrm{Var}(X) = \mathrm{Var}(Y)$ implies that
$$E(X^2) - \bigl[E(X)\bigr]^2 = E(Y^2) - \bigl[E(Y)\bigr]^2.$$
Since $E(X) = E(Y)$, this implies that $E(X^2) = E(Y^2)$. Let
$$P(X = a) = p_1,\quad P(X = b) = p_2,\quad P(X = c) = p_3;\qquad P(Y = a) = q_1,\quad P(Y = b) = q_2,\quad P(Y = c) = q_3.$$
Clearly,
$$p_1 + p_2 + p_3 = q_1 + q_2 + q_3 = 1.$$
This implies
$$(p_1 - q_1) + (p_2 - q_2) + (p_3 - q_3) = 0. \tag{14}$$
The relations $E(X) = E(Y)$ and $E(X^2) = E(Y^2)$ imply that
$$a p_1 + b p_2 + c p_3 = a q_1 + b q_2 + c q_3,$$
$$a^2 p_1 + b^2 p_2 + c^2 p_3 = a^2 q_1 + b^2 q_2 + c^2 q_3.$$
These and equation (14) give us the following system of three equations in the three unknowns $p_1 - q_1$, $p_2 - q_2$, and $p_3 - q_3$:
$$\begin{cases} (p_1 - q_1) + (p_2 - q_2) + (p_3 - q_3) = 0\\ a(p_1 - q_1) + b(p_2 - q_2) + c(p_3 - q_3) = 0\\ a^2(p_1 - q_1) + b^2(p_2 - q_2) + c^2(p_3 - q_3) = 0. \end{cases}$$
In matrix form, this is equivalent to
$$\begin{pmatrix} 1 & 1 & 1\\ a & b & c\\ a^2 & b^2 & c^2 \end{pmatrix}\begin{pmatrix} p_1 - q_1\\ p_2 - q_2\\ p_3 - q_3 \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}. \tag{15}$$
Now
$$\det\begin{pmatrix} 1 & 1 & 1\\ a & b & c\\ a^2 & b^2 & c^2 \end{pmatrix} = bc^2 + ca^2 + ab^2 - ba^2 - cb^2 - ac^2 = (c - a)(c - b)(b - a) \neq 0,$$
since $a$, $b$, and $c$ are three different real numbers. This implies that the matrix
$$\begin{pmatrix} 1 & 1 & 1\\ a & b & c\\ a^2 & b^2 & c^2 \end{pmatrix}$$
is invertible. Hence the solution to (15) is
$$p_1 - q_1 = p_2 - q_2 = p_3 - q_3 = 0.$$
Therefore, $p_1 = q_1$, $p_2 = q_2$, $p_3 = q_3$, implying that X and Y are identically distributed.

14. Let
$$P(X = a_1) = p_1,\quad P(X = a_2) = p_2,\ \ldots,\ P(X = a_n) = p_n;\qquad P(Y = a_1) = q_1,\quad P(Y = a_2) = q_2,\ \ldots,\ P(Y = a_n) = q_n.$$
Clearly,
$$p_1 + p_2 + \cdots + p_n = q_1 + q_2 + \cdots + q_n = 1.$$
This implies that
$$(p_1 - q_1) + (p_2 - q_2) + \cdots + (p_n - q_n) = 0.$$
The relations $E(X^r) = E(Y^r)$, for $r = 1, 2, \ldots, n-1$, imply that
$$a_1 p_1 + a_2 p_2 + \cdots + a_n p_n = a_1 q_1 + a_2 q_2 + \cdots + a_n q_n,$$
$$a_1^2 p_1 + a_2^2 p_2 + \cdots + a_n^2 p_n = a_1^2 q_1 + a_2^2 q_2 + \cdots + a_n^2 q_n,$$
$$\vdots$$
$$a_1^{n-1} p_1 + a_2^{n-1} p_2 + \cdots + a_n^{n-1} p_n = a_1^{n-1} q_1 + a_2^{n-1} q_2 + \cdots + a_n^{n-1} q_n.$$
These and the previous relation give us the following $n$ equations in the $n$ unknowns $p_1 - q_1, p_2 - q_2, \ldots, p_n - q_n$:
$$\begin{cases} (p_1 - q_1) + (p_2 - q_2) + \cdots + (p_n - q_n) = 0\\ a_1(p_1 - q_1) + a_2(p_2 - q_2) + \cdots + a_n(p_n - q_n) = 0\\ a_1^2(p_1 - q_1) + a_2^2(p_2 - q_2) + \cdots + a_n^2(p_n - q_n) = 0\\ \quad\vdots\\ a_1^{n-1}(p_1 - q_1) + a_2^{n-1}(p_2 - q_2) + \cdots + a_n^{n-1}(p_n - q_n) = 0. \end{cases}$$
In matrix form, this is equivalent to
$$\begin{pmatrix} 1 & 1 & \cdots & 1\\ a_1 & a_2 & \cdots & a_n\\ a_1^2 & a_2^2 & \cdots & a_n^2\\ \vdots & \vdots & & \vdots\\ a_1^{n-1} & a_2^{n-1} & \cdots & a_n^{n-1} \end{pmatrix}\begin{pmatrix} p_1 - q_1\\ p_2 - q_2\\ p_3 - q_3\\ \vdots\\ p_n - q_n \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0\\ \vdots\\ 0 \end{pmatrix}. \tag{16}$$
Now
$$\det\begin{pmatrix} 1 & 1 & \cdots & 1\\ a_1 & a_2 & \cdots & a_n\\ a_1^2 & a_2^2 & \cdots & a_n^2\\ \vdots & \vdots & & \vdots\\ a_1^{n-1} & a_2^{n-1} & \cdots & a_n^{n-1} \end{pmatrix} = \prod_{1 \le i < j \le n}(a_j - a_i) \neq 0,$$
since the $a_i$'s are all different real numbers. The formula for the determinant of this type of matrix is well known; such determinants are referred to as Vandermonde determinants, after the famous French mathematician A. T. Vandermonde (1735–1796). The determinant being nonzero implies that the matrix above is invertible. Hence the solution to (16) is
$$p_1 - q_1 = p_2 - q_2 = \cdots = p_n - q_n = 0.$$
Therefore, $p_1 = q_1, p_2 = q_2, \ldots, p_n = q_n$, implying that X and Y are identically distributed.
