Chapter 2. Poisson Processes
Prof. Ai-Chun Pang
Graduate Institute of Networking and Multimedia,
Department of Computer Science and Information Engineering,
National Taiwan University, Taiwan
Outline
• Introduction to Poisson Processes
• Properties of Poisson processes – Inter-arrival time distribution – Waiting time distribution
– Superposition and decomposition
• Non-homogeneous Poisson processes (relaxing stationary)
• Compound Poisson processes (relaxing single arrival)
• Modulated Poisson processes (relaxing independent)
• Poisson Arrivals See Time Averages (PASTA)
Introduction
[Figure: timeline from 0 to t showing inter-arrival times x̃1, x̃2, x̃3, . . . between successive arrival epochs S̃1, S̃2, S̃3, S̃4, with ñ(t) counting the arrivals up to time t]
(i) The nth arrival epoch S̃n is
S̃n = x̃1 + x̃2 + · · · + x̃n = Σ_{i=1}^n x̃i,  S̃0 = 0
(ii) The number of arrivals by time t is ñ(t). Notice that
{ñ(t) ≥ n} ⇔ {S̃n ≤ t},  {ñ(t) = n} ⇔ {S̃n ≤ t and S̃n+1 > t}
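The relations above are easy to check in simulation. A minimal sketch (the rate λ = 2.0, horizon t = 10, and seed are arbitrary illustration choices): arrival epochs are built as partial sums of i.i.d. exponential inter-arrival times, and ñ(t) is simply the number of epochs not exceeding t.

```python
import random

def simulate_arrivals(lam, t_end, rng):
    """Arrival epochs S_n = x_1 + ... + x_n up to time t_end,
    with i.i.d. exponential(lam) inter-arrival times x_i."""
    epochs, s = [], 0.0
    while True:
        s += rng.expovariate(lam)   # next inter-arrival time x_i
        if s > t_end:
            return epochs
        epochs.append(s)

rng = random.Random(42)
lam, t = 2.0, 10.0                  # arbitrary illustration values
S = simulate_arrivals(lam, t, rng)
n_t = len(S)                        # n(t): arrivals by time t

# {n(t) >= n} iff {S_n <= t}: the nth epoch is recorded
# exactly when it lands at or before t.
assert all(s <= t for s in S)
assert n_t >= 1
```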
Introduction
Arrival Process:
X = {x̃i, i = 1, 2, . . .}; the x̃i's can be any random variables
S = {S̃i, i = 0, 1, 2, . . .}; the S̃i's can be any random variables
N = {ñ(t), t ≥ 0} −→ called an arrival process
Renewal Process:
X = {x̃i, i = 1, 2, . . .}; the x̃i's are i.i.d.
S = {S̃i, i = 0, 1, 2, . . .}; the S̃i's are generally distributed
N = {ñ(t), t ≥ 0} −→ called a renewal process
Poisson Process:
X = {x̃i, i = 1, 2, . . .}; the x̃i's are i.i.d. exponentially distributed
S = {S̃i, i = 0, 1, 2, . . .}; the S̃i's are Erlang distributed
N = {ñ(t), t ≥ 0} −→ called a Poisson process
Counting Processes
• A stochastic process N = {˜n(t), t ≥ 0} is said to be a counting process if ˜n(t) represents the total number of “events” that have occurred up to time t.
• From the definition we see that for a counting process ˜n(t) must satisfy:
1. ˜n(t) ≥ 0.
2. ˜n(t) is integer valued.
3. If s < t, then ˜n(s) ≤ ˜n(t).
4. For s < t, ˜n(t) − ˜n(s) equals the number of events that have occurred in the interval (s, t].
Definition 1: Poisson Processes
The counting process N = {˜n(t), t ≥ 0} is a Poisson process with rate λ (λ > 0), if:
1. ˜n(0) = 0
2. Independent increments (relaxed ⇒ modulated Poisson process):
P[ñ(t) − ñ(s) = k1 | ñ(r) = k2, r ≤ s < t] = P[ñ(t) − ñ(s) = k1]
3. Stationary increments (relaxed ⇒ non-homogeneous Poisson process):
P[ñ(t + s) − ñ(t) = k] = P[ñ(l + s) − ñ(l) = k]
4. Single arrivals (relaxed ⇒ compound Poisson process):
P[ñ(h) = 1] = λh + o(h)
P[ñ(h) ≥ 2] = o(h)
Definition 2: Poisson Processes
The counting process N = {˜n(t), t ≥ 0} is a Poisson process with rate λ (λ > 0), if:
1. ˜n(0) = 0
2. Independent increments
3. The number of events in any interval of length t is Poisson distributed with mean λt. That is, for all s, t ≥ 0
P[ñ(t + s) − ñ(s) = n] = e^{−λt} (λt)^n / n!,  n = 0, 1, . . .
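Definition 2 can be sanity-checked against Definition 1's exponential inter-arrival construction. In this hedged sketch (the rate, horizon, sample size, and seed are arbitrary), many counts ñ(t) are generated from exponential gaps, and the empirical mean and one empirical pmf value are compared with the Poisson formula above.

```python
import math
import random

def poisson_count(lam, t, rng):
    """Number of arrivals in (0, t] of a rate-lam Poisson process,
    generated from i.i.d. exponential inter-arrival times."""
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return n
        n += 1

rng = random.Random(0)
lam, t, runs = 3.0, 2.0, 20000       # arbitrary illustration values
counts = [poisson_count(lam, t, rng) for _ in range(runs)]

mean = sum(counts) / runs            # should be close to lam * t = 6
p4_empirical = counts.count(4) / runs              # empirical P[n(t) = 4]
p4_exact = math.exp(-lam * t) * (lam * t) ** 4 / math.factorial(4)

assert abs(mean - lam * t) < 0.1
assert abs(p4_empirical - p4_exact) < 0.015
```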
Theorem: Definitions 1 and 2 are equivalent.
Proof. We show that Definition 1 implies Definition 2. To start, fix u ≥ 0 and let
g(t) = E[e−u˜n(t)]
We derive a differential equation for g(t) as follows:
g(t + h) = E[e^{−u ñ(t+h)}]
= E[e^{−u ñ(t)} e^{−u[ñ(t+h) − ñ(t)]}]
= E[e^{−u ñ(t)}] E[e^{−u[ñ(t+h) − ñ(t)]}]   (by independent increments)
= g(t) E[e^{−u ñ(h)}]   (by stationary increments)   (1)
Theorem: Definitions 1 and 2 are equivalent.
Conditioning on whether ñ(h) = 0, ñ(h) = 1, or ñ(h) ≥ 2 yields
E[e^{−u ñ(h)}] = (1 − λh + o(h)) + e^{−u}(λh + o(h)) + o(h)
= 1 − λh + e^{−u} λh + o(h)   (2)
From (1) and (2), we obtain
g(t + h) = g(t)(1 − λh + e^{−u} λh) + o(h),
implying that
[g(t + h) − g(t)] / h = g(t) λ (e^{−u} − 1) + o(h)/h
Theorem: Definitions 1 and 2 are equivalent.
Letting h → 0 gives
g′(t) = g(t) λ (e^{−u} − 1)
or, equivalently,
g′(t) / g(t) = λ (e^{−u} − 1)
Integrating, and using g(0) = 1, shows that
log g(t) = λt (e^{−u} − 1)
or
g(t) = e^{λt(e^{−u} − 1)} → the Laplace transform of a Poisson r.v. with mean λt
Since g(t) is also the Laplace transform of ñ(t), ñ(t) is a Poisson r.v. with mean λt.
Theorem: Definitions 1 and 2 are equivalent
Let Pn(t) = P [˜n(t) = n].
We derive a differential equation for P0(t) in the following manner:
P0(t + h) = P[ñ(t + h) = 0]
= P[ñ(t) = 0, ñ(t + h) − ñ(t) = 0]
= P[ñ(t) = 0] P[ñ(t + h) − ñ(t) = 0]
= P0(t)[1 − λh + o(h)]
Hence
[P0(t + h) − P0(t)] / h = −λ P0(t) + o(h)/h
Letting h → 0 yields
P0′(t) = −λ P0(t)
Since P0(0) = 1, then
P0(t) = e^{−λt}
Theorem: Definitions 1 and 2 are equivalent
Similarly, for n ≥ 1
Pn(t + h) = P[ñ(t + h) = n]
= P[ñ(t) = n, ñ(t + h) − ñ(t) = 0]
+ P[ñ(t) = n − 1, ñ(t + h) − ñ(t) = 1]
+ P[ñ(t + h) = n, ñ(t + h) − ñ(t) ≥ 2]
= Pn(t) P0(h) + Pn−1(t) P1(h) + o(h)
= (1 − λh) Pn(t) + λh Pn−1(t) + o(h)
Thus
[Pn(t + h) − Pn(t)] / h = −λ Pn(t) + λ Pn−1(t) + o(h)/h
Letting h → 0,
Pn′(t) = −λ Pn(t) + λ Pn−1(t)
The Inter-Arrival Time Distribution
Theorem. Poisson Processes have exponential inter-arrival time distribution, i.e., {˜xn, n = 1, 2, . . .} are i.i.d and exponentially
distributed with parameter λ (i.e., mean inter-arrival time = 1/λ).
Proof.
x̃1:  P(x̃1 > t) = P(ñ(t) = 0) = e^{−λt}(λt)^0 / 0! = e^{−λt}
∴ x̃1 ∼ exponential(λ)
x̃2:  P(x̃2 > t | x̃1 = s)
= P{0 arrivals in (s, s + t] | x̃1 = s}
= P{0 arrivals in (s, s + t]}   (by independent increments)
= P{0 arrivals in (0, t]}   (by stationary increments)
= e^{−λt}
∴ x̃2 is independent of x̃1 and x̃2 ∼ exponential(λ).
⇒ The procedure repeats for the rest of the x̃i's.
The Arrival Time Distribution of the nth Event
Theorem. The arrival time of the nth event, ˜Sn (also called the waiting time until the nth event), is Erlang distributed with parameter (n, λ).
Proof. Method 1:
∵ P[S̃n ≤ t] = P[ñ(t) ≥ n] = Σ_{k=n}^∞ e^{−λt}(λt)^k / k!
∴ f_{S̃n}(t) = λ e^{−λt} (λt)^{n−1} / (n − 1)!   (exercise)
Method 2:
f_{S̃n}(t) dt = dF_{S̃n}(t) = P[t < S̃n < t + dt]
= P{n − 1 arrivals in (0, t] and 1 arrival in (t, t + dt)} + o(dt)
= P[ñ(t) = n − 1 and 1 arrival in (t, t + dt)] + o(dt)
= P[ñ(t) = n − 1] P[1 arrival in (t, t + dt)] + o(dt)   (why?)
The Arrival Time Distribution of the nth Event
= [e^{−λt}(λt)^{n−1} / (n − 1)!] λ dt + o(dt)
∴ lim_{dt→0} f_{S̃n}(t) dt / dt = f_{S̃n}(t) = λ e^{−λt} (λt)^{n−1} / (n − 1)!
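The two methods can be cross-checked numerically: the tail-sum expression for P[S̃n ≤ t] from Method 1 should match the integral of the Erlang density from Method 2. A small sketch (n = 3, λ = 2, t = 1.5 are arbitrary illustration values):

```python
import math

def erlang_cdf_via_counts(n, lam, t):
    """P[S_n <= t] = P[n(t) >= n] = 1 - sum_{k=0}^{n-1} e^{-lam t}(lam t)^k / k!"""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

def erlang_cdf_via_density(n, lam, t, steps=200000):
    """Numerically integrate the density f(s) = lam e^{-lam s}(lam s)^{n-1}/(n-1)!
    over (0, t] with the midpoint rule."""
    h = t / steps
    fact = math.factorial(n - 1)
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        total += lam * math.exp(-lam * s) * (lam * s) ** (n - 1) / fact
    return total * h

n, lam, t = 3, 2.0, 1.5
a = erlang_cdf_via_counts(n, lam, t)
b = erlang_cdf_via_density(n, lam, t)
assert abs(a - b) < 1e-6
```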
Conditional Distribution of the Arrival Times
Theorem. Given that ˜n(t) = n, the n arrival times ˜S1, ˜S2, . . . , ˜Sn have the same distribution as the order statistics corresponding to n i.i.d.
uniformly distributed random variables from (0, t).
Order Statistics. Let x̃1, x̃2, . . . , x̃n be n i.i.d. continuous random variables having common pdf f. Define x̃(k) as the kth smallest value among all x̃i's, i.e., x̃(1) ≤ x̃(2) ≤ · · · ≤ x̃(n); then x̃(1), . . . , x̃(n) are known as the "order statistics" corresponding to the random variables x̃1, . . . , x̃n. The joint pdf of x̃(1), x̃(2), . . . , x̃(n) is
f_{x̃(1),x̃(2),...,x̃(n)}(x1, x2, . . . , xn) = n! f(x1) f(x2) · · · f(xn),  x1 < x2 < · · · < xn
(check the textbook [Ross]).
Conditional Distribution of the Arrival Times
Proof. Let 0 < t1 < t2 < . . . < tn+1 = t and let hi be small enough so that ti + hi < ti+1 , i = 1, . . . , n.
∴ P[ti < S̃i < ti + hi, i = 1, . . . , n | ñ(t) = n]
= P[exactly one arrival in each (ti, ti + hi], i = 1, 2, . . . , n, and no arrivals elsewhere in [0, t]] / P[ñ(t) = n]
= (e^{−λh1} λh1)(e^{−λh2} λh2) · · · (e^{−λhn} λhn) e^{−λ(t − h1 − h2 − · · · − hn)} / [e^{−λt}(λt)^n / n!]
= n! (h1 h2 h3 · · · hn) / t^n
∴ P[ti < S̃i < ti + hi, i = 1, . . . , n | ñ(t) = n] / (h1 h2 · · · hn) = n! / t^n
Conditional Distribution of the Arrival Times
Taking the limit hi → 0, i = 1, . . . , n, we obtain
f_{S̃1,S̃2,...,S̃n | ñ(t)}(t1, t2, . . . , tn | n) = n! / t^n,  0 < t1 < t2 < · · · < tn < t.
Conditional Distribution of the Arrival Times
Example (see Ref [Ross], Ex. 2.3(A) p.68). Suppose that travellers arrive at a train depot in accordance with a Poisson process with rate λ. If the train departs at time t, what is the expected sum of the waiting times of travellers arriving in (0, t)? That is, E[Σ_{i=1}^{ñ(t)} (t − S̃i)] = ?
[Figure: timeline from 0 to t with arrival epochs S̃1, S̃2, S̃3, . . . , S̃ñ(t)]
Conditional Distribution of the Arrival Times
Answer. Conditioning on ñ(t) = n yields
E[Σ_{i=1}^{ñ(t)} (t − S̃i) | ñ(t) = n] = nt − E[Σ_{i=1}^n S̃i]
= nt − E[Σ_{i=1}^n ũ(i)]   (by the theorem)
= nt − E[Σ_{i=1}^n ũi]   (∵ Σ_{i=1}^n ũ(i) = Σ_{i=1}^n ũi)
= nt − (t/2) · n = nt/2   (∵ E[ũi] = t/2)
To find E[Σ_{i=1}^{ñ(t)} (t − S̃i)], we take another expectation:
∴ E[Σ_{i=1}^{ñ(t)} (t − S̃i)] = (t/2) · E[ñ(t)] = (t/2) · λt = λt²/2
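A quick simulation agrees with E[Σ_{i=1}^{ñ(t)} (t − S̃i)] = λt²/2 (λ = 2, t = 5, and the seed are arbitrary illustration choices):

```python
import random

def waiting_time_sum(lam, t, rng):
    """Sum of (t - S_i) over all arrival epochs S_i in (0, t)."""
    total, s = 0.0, 0.0
    while True:
        s += rng.expovariate(lam)
        if s >= t:
            return total
        total += t - s

rng = random.Random(7)
lam, t, runs = 2.0, 5.0, 20000
avg = sum(waiting_time_sum(lam, t, rng) for _ in range(runs)) / runs
expected = lam * t * t / 2      # theorem: lam * t^2 / 2 = 25
assert abs(avg - expected) < 0.5
```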
Superposition of Independent Poisson Processes
Theorem. The superposition of independent Poisson processes with rates λi, i = 1, . . . , N, is also a Poisson process with rate Σ_{i=1}^N λi.
[Figure: N independent Poisson streams with rates λ1, λ2, . . . , λN merged into a single Poisson stream with rate Σ_{i=1}^N λi]
<Homework> Prove the theorem (note that a Poisson process must satisfy Definition 1 or Definition 2).
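While not a substitute for the requested proof, a simulation makes the claim plausible: merging two independent exponential-gap streams yields a stream whose empirical rate and mean gap match a single Poisson process of rate λ1 + λ2 (parameter values and seed are arbitrary):

```python
import random

def poisson_epochs(lam, t_end, rng):
    """Arrival epochs of a rate-lam Poisson process on (0, t_end]."""
    out, s = [], 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t_end:
            return out
        out.append(s)

rng = random.Random(1)
l1, l2, t = 1.5, 2.5, 2000.0
merged = sorted(poisson_epochs(l1, t, rng) + poisson_epochs(l2, t, rng))

rate_hat = len(merged) / t                  # close to l1 + l2 = 4
gaps = [b - a for a, b in zip(merged, merged[1:])]
mean_gap = sum(gaps) / len(gaps)            # close to 1/(l1 + l2) = 0.25

assert abs(rate_hat - (l1 + l2)) < 0.2
assert abs(mean_gap - 1 / (l1 + l2)) < 0.02
```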
Decomposition of a Poisson Process
Theorem.
• Given a Poisson process N = {˜n(t), t ≥ 0};
• If ˜ni(t) represents the number of type-i events that occur by time t, i = 1, 2;
• An arrival occurring at time s is a type-1 arrival with probability p(s), and a type-2 arrival with probability 1 − p(s)
⇓then
• ñ1(t), ñ2(t) are independent,
• ñ1(t) ∼ P(k; λtp̄), and
• ñ2(t) ∼ P(k; λt(1 − p̄)), where p̄ = (1/t) ∫_0^t p(s) ds
Decomposition of a Poisson Process
[Figure: a Poisson(λ) stream split by time-dependent probability p(s) into two streams: ñ1(t) ∼ P(k; λ∫_0^t p(s) ds) and ñ2(t) ∼ P(k; λ∫_0^t [1 − p(s)] ds)]
Special case: if p(s) = p is constant, then the two output streams are Poisson processes with rates λp and λ(1 − p).
Decomposition of a Poisson Process
Proof. We are to prove that, for a fixed time t,
P[ñ1(t) = n, ñ2(t) = m] = P[ñ1(t) = n] P[ñ2(t) = m]
= e^{−λp̄t}(λp̄t)^n / n! · e^{−λ(1−p̄)t}[λ(1 − p̄)t]^m / m!
P[ñ1(t) = n, ñ2(t) = m]
= Σ_{k=0}^∞ P[ñ1(t) = n, ñ2(t) = m | ñ1(t) + ñ2(t) = k] · P[ñ1(t) + ñ2(t) = k]
= P[ñ1(t) = n, ñ2(t) = m | ñ1(t) + ñ2(t) = n + m] · P[ñ1(t) + ñ2(t) = n + m]
Decomposition of a Poisson Process
• From the "conditional distribution of the arrival times", each event occurs at a time that is uniformly distributed over (0, t), independently of the other events.
• Consider an arrival occurring in the interval [0, t], given ñ(t) = 1:
P[type-1 arrival | ñ(t) = 1]
= ∫_0^t P[type-1 arrival | arrival time S̃1 = s, ñ(t) = 1] · f_{S̃1|ñ(t)}(s | ñ(t) = 1) ds
= ∫_0^t p(s) · (1/t) ds = (1/t) ∫_0^t p(s) ds = p̄
Decomposition of a Poisson Process
∴ P[ñ1(t) = n, ñ2(t) = m]
= P[ñ1(t) = n, ñ2(t) = m | ñ1(t) + ñ2(t) = n + m] · P[ñ1(t) + ñ2(t) = n + m]
= C(n + m, n) p̄^n (1 − p̄)^m · e^{−λt}(λt)^{n+m} / (n + m)!
= [(n + m)! / (n! m!)] p̄^n (1 − p̄)^m · e^{−λt}(λt)^{n+m} / (n + m)!
= e^{−λp̄t}(λp̄t)^n / n! · e^{−λ(1−p̄)t}[λ(1 − p̄)t]^m / m!
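For the constant-p special case, the decomposition is easy to check by simulation: classify each arrival as type 1 with probability p, then compare the empirical means of ñ1(t) and ñ2(t) with λpt and λ(1−p)t, and check that their empirical covariance is near zero (λ = 4, p = 0.3, t = 1, and the seed are arbitrary):

```python
import random

rng = random.Random(3)
lam, p, t, runs = 4.0, 0.3, 1.0, 20000
n1s, n2s = [], []
for _ in range(runs):
    n1 = n2 = 0
    s = 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            break
        if rng.random() < p:        # classify each arrival independently
            n1 += 1
        else:
            n2 += 1
    n1s.append(n1)
    n2s.append(n2)

m1 = sum(n1s) / runs                # close to lam * p * t = 1.2
m2 = sum(n2s) / runs                # close to lam * (1 - p) * t = 2.8
cov = sum(a * b for a, b in zip(n1s, n2s)) / runs - m1 * m2   # close to 0

assert abs(m1 - lam * p * t) < 0.1
assert abs(m2 - lam * (1 - p) * t) < 0.1
assert abs(cov) < 0.1
```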
Decomposition of a Poisson Process
Example (An Infinite Server Queue, textbook [Ross]).
[Figure: Poisson(λ) arrivals entering an M/G/∞ system (infinitely many servers, service distribution G); customers depart after service]
• G(t) = P(S̃ ≤ t), where S̃ = service time
• Service times are independent of each other and of the arrival process
• ñ1(t): the number of customers who have departed by time t
• ñ2(t): the number of customers still in the system at time t
⇒ ñ1(t) ∼ ? and ñ2(t) ∼ ?
Decomposition of a Poisson Process
Answer.
ñ1(t): the number of type-1 customers
ñ2(t): the number of type-2 customers
type-1: p(s) = P(a customer arriving at time s finishes before t)
= P(S̃ ≤ t − s) = G(t − s)
type-2: 1 − p(s) = Ḡ(t − s)
∴ ñ1(t) ∼ P(k; λt · (1/t) ∫_0^t G(t − s) ds)
ñ2(t) ∼ P(k; λt · (1/t) ∫_0^t Ḡ(t − s) ds)
Decomposition of a Poisson Process
∴ E[ñ1(t)] = λt · (1/t) ∫_0^t G(t − s) ds
= λ ∫_0^t G(y) dy   (substituting y = t − s, ds = −dy)
Similarly, E[ñ2(t)] = λ ∫_0^t Ḡ(y) dy. As t → ∞, we have
lim_{t→∞} E[ñ2(t)] = λ ∫_0^∞ Ḡ(y) dy = λ E[S̃]   (Little's formula)
Non-homogeneous Poisson Processes
• The counting process N = {˜n(t), t ≥ 0} is said to be a non-stationary or non-homogeneous Poisson Process with time-varying intensity
function λ(t), t ≥ 0, if:
1. ˜n(0) = 0
2. N has independent increments
3. P[ñ(t + h) − ñ(t) ≥ 2] = o(h)
4. P[ñ(t + h) − ñ(t) = 1] = λ(t) · h + o(h)
• Define the "integrated intensity function" m(t) = ∫_0^t λ(s) ds.
Theorem.
P[ñ(t + s) − ñ(t) = n] = e^{−[m(t+s) − m(t)]} [m(t + s) − m(t)]^n / n!
Proof. < Homework >.
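One standard way to generate such a process is Lewis–Shedler thinning (not covered in these notes): run a homogeneous Poisson process at a rate λmax ≥ λ(s) and keep a candidate arrival at time s with probability λ(s)/λmax. A sketch with an assumed intensity λ(t) = 2 + t for illustration; by the theorem, E[ñ(t)] = m(t).

```python
import random

def nonhomogeneous_poisson(rate_fn, rate_max, t_end, rng):
    """Lewis-Shedler thinning: candidates at rate rate_max, keep a
    candidate at time s with probability rate_fn(s) / rate_max."""
    epochs, s = [], 0.0
    while True:
        s += rng.expovariate(rate_max)
        if s > t_end:
            return epochs
        if rng.random() < rate_fn(s) / rate_max:
            epochs.append(s)

rng = random.Random(11)
rate_fn = lambda s: 2.0 + s          # assumed intensity lambda(t) = 2 + t
t, runs = 4.0, 5000
counts = [len(nonhomogeneous_poisson(rate_fn, 6.0, t, rng)) for _ in range(runs)]

m_t = 2.0 * t + t * t / 2            # m(t) = integral of (2 + s) ds = 16
mean = sum(counts) / runs
assert abs(mean - m_t) < 0.3
```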
Non-homogeneous Poisson Processes
Example. The “output process” of the M/G/∞ queue is a
non-homogeneous Poisson process having intensity function λ(t) = λG(t), where G is the service distribution.
Hint. Let D(s, s + r) denote the number of service completions in the interval (s, s + r], where 0 < s < s + r ≤ t. If we can show that
• D(s, s + r) follows a Poisson distribution with mean λ ∫_s^{s+r} G(y) dy, and
• the numbers of service completions in disjoint intervals are independent,
then we are done, by the definition of a non-homogeneous Poisson process.
Non-homogeneous Poisson Processes
Answer.
• An arrival at time y is called a type-1 arrival if its service completion occurs in (s, s + r].
• Consider three cases to find the probability P (y) that an arrival at time y is a type-1 arrival:
[Figure: timeline 0 – s – s+r – t, with an arrival at time y falling in Case 1 (y ≤ s), Case 2 (s < y ≤ s + r), or Case 3 (s + r < y ≤ t)]
– Case 1: y ≤ s.
P(y) = P{s − y < S̃ < s + r − y} = G(s + r − y) − G(s − y)
– Case 2: s < y ≤ s + r.
P(y) = P{S̃ < s + r − y} = G(s + r − y)
Non-homogeneous Poisson Processes
– Case 3: s + r < y ≤ t.
P (y) = 0
• Based on the decomposition property of a Poisson process, we conclude that D(s, s + r) follows a Poisson distribution with mean λp̄t, where p̄ = (1/t) ∫_0^t P(y) dy.
∫_0^t P(y) dy = ∫_0^s [G(s + r − y) − G(s − y)] dy + ∫_s^{s+r} G(s + r − y) dy + ∫_{s+r}^t 0 dy
= ∫_0^{s+r} G(s + r − y) dy − ∫_0^s G(s − y) dy
= ∫_0^{s+r} G(z) dz − ∫_0^s G(z) dz = ∫_s^{s+r} G(z) dz
Non-homogeneous Poisson Processes
• Because of
– the independent increment assumption of the Poisson arrival process, and
– the fact that there are always servers available for arrivals,
⇒ the departure process has independent increments
Compound Poisson Processes
• A stochastic process {x̃(t), t ≥ 0} is said to be a compound Poisson process if
– it can be represented as x̃(t) = Σ_{i=1}^{ñ(t)} ỹi, t ≥ 0
– {ñ(t), t ≥ 0} is a Poisson process
– {ỹi, i ≥ 1} is a family of independent and identically distributed random variables that are also independent of {ñ(t), t ≥ 0}
• The random variable x̃(t) is said to be a compound Poisson random variable.
• E[x̃(t)] = λt E[ỹi] and Var[x̃(t)] = λt E[ỹi²].
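These moment formulas can be checked by simulation. A sketch with an assumed batch-size law, ỹi uniform on {1, 2, 3} (so E[ỹ] = 2 and E[ỹ²] = 14/3); the rate, horizon, and seed are likewise arbitrary illustration choices:

```python
import random

rng = random.Random(5)
lam, t, runs = 2.0, 3.0, 20000

def compound_value(rng):
    """x(t) = sum of n(t) i.i.d. batch sizes y_i, y_i ~ Uniform{1,2,3}."""
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            break
        n += 1
    return sum(rng.choice([1, 2, 3]) for _ in range(n))

xs = [compound_value(rng) for _ in range(runs)]
mean = sum(xs) / runs
var = sum(x * x for x in xs) / runs - mean * mean

ey, ey2 = 2.0, (1 + 4 + 9) / 3       # E[y] = 2, E[y^2] = 14/3
assert abs(mean - lam * t * ey) < 0.2        # E[x(t)] = lam t E[y] = 12
assert abs(var - lam * t * ey2) < 1.5        # Var[x(t)] = lam t E[y^2] = 28
```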
Compound Poisson Processes
• Example (Batch Arrival Process). Consider a parallel-processing
system where each job arrival consists of a possibly random number of tasks. Then we can model the arrival process as a compound
Poisson process, which is also called a batch arrival process.
• Let ỹi be a random variable that denotes the number of tasks comprising a job. We derive the probability generating function P_{x̃(t)}(z) as follows:
P_{x̃(t)}(z) = E[z^{x̃(t)}] = E[E[z^{x̃(t)} | ñ(t)]]
= E[E[z^{ỹ1 + · · · + ỹ_{ñ(t)}} | ñ(t)]]   (by independence of ñ(t) and {ỹi})
= E[E[z^{ỹ1}] · · · E[z^{ỹ_{ñ(t)}}]]   (by independence of ỹ1, · · · , ỹ_{ñ(t)})
= E[(P_ỹ(z))^{ñ(t)}]
= P_{ñ(t)}(P_ỹ(z))
Modulated Poisson Processes
• Assume that there are two states, 0 and 1, for a “modulating process.”
[Figure: two-state modulating process alternating between states 0 and 1]
• When the state of the modulating process equals 0, the arrival rate of customers is λ0; when it equals 1, the arrival rate is λ1.
• The residence time in a particular modulating state is exponentially distributed with parameter μ and, after expiration of this time, the modulating process changes state.
• The initial state of the modulating process is randomly selected and is equally likely to be state 0 or 1.
Modulated Poisson Processes
• For a given period of time (0, t), let Υ be a random variable that indicates the total amount of time that the modulating process has been in state 0. Let ˜x(t) be the number of arrivals in (0, t).
• Then, given Υ = τ, x̃(t) is distributed as the count of a non-homogeneous Poisson process and thus
P[x̃(t) = n | Υ = τ] = (λ0τ + λ1(t − τ))^n e^{−(λ0τ + λ1(t−τ))} / n!
• As μ → 0, the probability that the modulating process makes no
transitions within t seconds converges to 1, and we expect for this case that
P[x̃(t) = n] = (1/2) [(λ0t)^n e^{−λ0t} / n! + (λ1t)^n e^{−λ1t} / n!]
Modulated Poisson Processes
• As μ → ∞, then the modulating process makes an infinite number of transitions within t seconds, and we expect for this case that
P[x̃(t) = n] = (βt)^n e^{−βt} / n!,  where β = (λ0 + λ1)/2
• Example (Modeling Voice).
– A basic feature of speech is that it comprises an alternation of silent periods and non-silent periods.
– The arrival of packets is Poisson with rate λ1 during a talk-spurt period and Poisson with rate λ0 ≈ 0 during a silent period.
– The duration of times for talk and silent periods are exponentially distributed with parameters μ1 and μ0, respectively.
⇒ The model of the arrival stream of packets is given by a modulated Poisson process.
Poisson Arrivals See Time Averages (PASTA)
• PASTA says: as t → ∞,
Fraction of arrivals who see the system in a given state upon arrival (arrival average)
= Fraction of time the system is in that state (time average)
= Probability that the system is in that state at a random time in steady state
• Counter-example (textbook [Kao]: Example 2.7.1)
[Figure: timeline 0, 1, 2, 3, 4, 5 with a deterministic arrival at every integer time, each requiring service time 1/2]
service time = 1/2, inter-arrival time = 1
Poisson Arrivals See Time Averages (PASTA)
– Arrival average that an arrival will see an idle system = 1
– Time average of the system being idle = 1/2
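Because the counter-example is deterministic, both averages can be computed directly. In this sketch the job arriving at time k occupies the server on (k, k + 1/2], so the system is idle half the time, yet every arrival (at an integer time) finds it idle:

```python
# Deterministic arrivals at t = 1, 2, 3, ...; each service takes 1/2,
# so the server is busy on (k, k + 1/2] and idle on (k + 1/2, k + 1].
arrivals = list(range(1, 1001))

def idle_at(t):
    """True if no job is in service at time t."""
    frac = t % 1.0
    return t < 1.0 or not (0.0 < frac <= 0.5)

# Arrival average: sample the state seen just before each arrival.
seen_idle = sum(1 for a in arrivals if idle_at(a - 1e-9)) / len(arrivals)

# Time average: fraction of (1, 1001) during which the system is idle,
# sampled at interval midpoints to avoid boundary effects.
samples = [1 + (i + 0.5) * 0.01 for i in range(100000)]
time_idle = sum(1 for t in samples if idle_at(t)) / len(samples)

assert seen_idle == 1.0              # every arrival finds the system idle
assert abs(time_idle - 0.5) < 0.01   # but the system is idle half the time
```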
• Mathematically,
– Let X = {˜x(t), t ≥ 0} be a stochastic process with state space S, and B ⊂ S
– Define an indicator random variable
ũ(t) = 1 if x̃(t) ∈ B; 0 otherwise
– Let N = {˜n(t), t ≥ 0} be a Poisson process with rate λ denoting the arrival process
then,
Poisson Arrivals See Time Averages (PASTA)
lim_{t→∞} [∫_0^t ũ(s) dñ(s)] / ñ(t) = lim_{t→∞} [∫_0^t ũ(s) ds] / t
(arrival average)   (time average)
• Condition: for PASTA to hold, we need the lack of anticipation assumption (LAA): for each t ≥ 0,
– the arrival process {˜n(t + u) − ˜n(t), u ≥ 0} is independent of {˜x(s), 0 ≤ s ≤ t} and {˜n(s), 0 ≤ s ≤ t}.
• Application:
– To find the waiting time distribution of any arriving customer
– Given: P[system is idle] = 1 − ρ; P[system is busy] = ρ
Poisson Arrivals See Time Averages (PASTA)
[Figure: a Poisson arrival finds the system in one of two cases — Case 1: system is idle; Case 2: system is busy]
⇒ P(w̃ ≤ t) = P(w̃ ≤ t | idle) · P(idle upon arrival) + P(w̃ ≤ t | busy) · P(busy upon arrival), where by PASTA, P(idle upon arrival) = 1 − ρ and P(busy upon arrival) = ρ.
Memoryless Property of the Exponential Distribution
• A random variable x̃ is said to be without memory, or memoryless, if
P[x̃ > s + t | x̃ > t] = P[x̃ > s]  for all s, t ≥ 0   (3)
• The condition in Equation (3) is equivalent to
P[x̃ > s + t, x̃ > t] / P[x̃ > t] = P[x̃ > s]
or
P[x̃ > s + t] = P[x̃ > s] P[x̃ > t]   (4)
• Since Equation (4) is satisfied when x̃ is exponentially distributed (for e^{−λ(s+t)} = e^{−λs} e^{−λt}), it follows that exponential random variables are memoryless.
• Not only is the exponential distribution "memoryless," but it is the unique continuous distribution possessing this property.
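The exponential case of Equation (3) can be checked to machine precision (the values of λ, s, t are arbitrary):

```python
import math

lam, s, t = 1.7, 0.8, 2.3
surv = lambda x: math.exp(-lam * x)   # P[X > x] for X ~ exponential(lam)

# P[X > s + t | X > t] = P[X > s + t] / P[X > t] = P[X > s]
lhs = surv(s + t) / surv(t)
rhs = surv(s)
assert abs(lhs - rhs) < 1e-12
```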
Comparison of Two Exponential Random Variables
Suppose that ˜x1 and ˜x2 are independent exponential random variables with respective means 1/λ1 and 1/λ2. What is P [˜x1 < ˜x2]?
P[x̃1 < x̃2] = ∫_0^∞ P[x̃1 < x̃2 | x̃1 = x] λ1 e^{−λ1 x} dx
= ∫_0^∞ P[x < x̃2] λ1 e^{−λ1 x} dx
= ∫_0^∞ e^{−λ2 x} λ1 e^{−λ1 x} dx
= ∫_0^∞ λ1 e^{−(λ1 + λ2) x} dx
= λ1 / (λ1 + λ2)
Minimum of Exponential Random Variables
Suppose that ˜x1, ˜x2,· · · , ˜xn are independent exponential random variables, with ˜xi having rate μi, i = 1, · · · , n. It turns out that the smallest of the ˜xi is exponential with a rate equal to the sum of the μi.
P[min(x̃1, x̃2, · · · , x̃n) > x] = P[x̃i > x for each i = 1, · · · , n]
= Π_{i=1}^n P[x̃i > x]   (by independence)
= Π_{i=1}^n e^{−μi x}
= exp(−(Σ_{i=1}^n μi) x)
How about max(˜x1, ˜x2, · · · , ˜xn)? (exercise)
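Both facts — the comparison probability from the previous slide and the rate of the minimum — are easy to confirm by simulation (λ1 = 1, λ2 = 3, and the seed are arbitrary): P[x̃1 < x̃2] should be near λ1/(λ1 + λ2) = 1/4, and E[min(x̃1, x̃2)] near 1/(λ1 + λ2) = 1/4.

```python
import random

rng = random.Random(9)
l1, l2, runs = 1.0, 3.0, 50000
wins, mins = 0, []
for _ in range(runs):
    x1 = rng.expovariate(l1)
    x2 = rng.expovariate(l2)
    wins += x1 < x2
    mins.append(min(x1, x2))

p_hat = wins / runs              # P[x1 < x2] = l1/(l1+l2) = 0.25
mean_min = sum(mins) / runs      # E[min] = 1/(l1+l2) = 0.25 (min is exp(l1+l2))

assert abs(p_hat - l1 / (l1 + l2)) < 0.01
assert abs(mean_min - 1 / (l1 + l2)) < 0.01
```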