Academic year: 2021

Game Theory with Applications to Finance and Marketing

Lecture 1: Games with Complete Information, Part II

Professor Chyi-Mei Chen, R1102
(TEL) x2964
(email) cchen@ccms.ntu.edu.tw

1. In Part I, we introduced such solution concepts as dominance equilibrium, Nash equilibrium, SPNE, backward induction, and forward induction. Now we shall briefly go over other relevant equilibrium concepts, such as strong equilibrium, coalition-proof equilibrium, rationalizable strategies, and correlated equilibrium. The remainder of this note considers some more involved applications of the above equilibrium concepts. We shall examine repeated games more formally in a subsequent note.

2. Consider the following strategic game.

player 1/player 2 D C

D 0,0 0,0

C 0,0 1,1

This game has two NE's, (C,C) and (D,D). In view of Wilson's (1971) theorem that generically a finite game has an odd number of NE's, this game is quite unusual. Note that (D,D) is an NE in which both players play weakly dominated strategies. This does not seem reasonable. To rule out this type of NE, Selten (1975) proposes the trembling-hand perfect equilibrium for normal-form games, a refinement of NE aimed at screening out unreasonable NE's. To see Selten's idea, note that (D,D) can be an NE only because each player is sure that the rival plays C with probability zero. Therefore, if we consider only those strategy profiles which are limits of totally mixed strategy profiles, then (D,D) can be ruled out. Formally, let Σ0 be the set of totally mixed strategy profiles,


and given any ε ∈ R++, σ ∈ Σ0 is called an ε-perfect equilibrium if for all i ∈ I and all si, s′i ∈ Si,

ui(si, σ−i) < ui(s′i, σ−i) ⇒ σi(si) ≤ ε.

A trembling-hand perfect equilibrium is then a profile σ ∈ Σ (which need not be totally mixed!) such that there exist a sequence {ε_k; k ∈ Z+} in R++ and a sequence {σ^k; k ∈ Z+} in Σ0 with (i) lim_{k→∞} ε_k = 0; (ii) σ^k is an ε_k-perfect equilibrium for all k ∈ Z+; and (iii) lim_{k→∞} σ^k_i(si) = σi(si) for all i ∈ I and si ∈ Si. It can be shown that a trembling-hand perfect equilibrium must exist for a finite game, and a trembling-hand perfect equilibrium is itself an NE, but the reverse is not true.¹ In particular, the above profile (D,D) is not a trembling-hand perfect equilibrium.

3. Consider the extensive game with two players where player 1 first

chooses between L and R, and the game ends with payoff profile (2, 2) if R is chosen; if instead L is chosen, then player 2 chooses between l and r, with the game ending with payoff profile (1, 0) if r is chosen; and if instead player 2 chooses l, then player 1 chooses between A and B, with the game ending with payoff profiles (3, 1) and (0, −5) respectively. This game has a unique SPNE, (L,l,A), but (R,r,B) is a trembling-hand perfect equilibrium in the corresponding (reduced) strategic game: consider letting player 1 tremble by playing (L,A) with probability ε^2 and (L,B) with probability ε, and notice that player 2 will optimally respond by playing r (this happens because prob.((L,B)|L) = ε/(ε + ε^2) = 1/(1 + ε) is close to one when ε ↓ 0). The problem here is that at the two information sets where player 1 is called upon to take actions, player 1's trembles are correlated. Because of this problem, Selten (1975) argues that we should pay attention to the agent-normal form, where player 1 appearing at different information sets is treated as different agents. The trembling-hand perfect equilibria of the extensive game are then defined as the trembling-hand perfect equilibria of its agent-normal form, and will be referred to simply as perfect equilibria. With this definition, it can be shown that perfect equilibria are SPNE's.²

¹Let us prove that a trembling-hand perfect equilibrium σ is an NE. Recall the following characterization of NE: a profile σ ∈ Σ is an NE if and only if for all i ∈ I and all si, s′i ∈ Si,

ui(si, σ−i) < ui(s′i, σ−i) ⇒ σi(si) = 0.

Note that for all i ∈ I and all si, s′i ∈ Si such that ui(si, σ−i) < ui(s′i, σ−i), there exists K ∈ Z+ such that

k ≥ K ⇒ ui(si, σ^k_{−i}) < ui(s′i, σ^k_{−i}),

by the fact that σ^k → σ, and hence for any such k we have σ^k_i(si) ≤ ε_k, implying that

0 ≤ σi(si) = lim_{k→∞} σ^k_i(si) ≤ lim_{k→∞} ε_k = 0.

This shows that a trembling-hand perfect equilibrium is an NE.
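The claim that (D,D) fails trembling-hand perfection can also be checked numerically. The following sketch (my own illustration, not part of the original notes) verifies that in the 2×2 game of point 2, C strictly outperforms D against every totally mixed strategy of the rival, so any ε-perfect equilibrium can put weight at most ε on D:

```python
# Sketch (my own check, not from the text): in the 2x2 game of point 2,
# C strictly outperforms D against every totally mixed rival strategy,
# so an eps-perfect equilibrium assigns probability <= eps to D and no
# sequence of eps-perfect equilibria can converge to (D,D).

u = {("D", "D"): 0, ("D", "C"): 0, ("C", "D"): 0, ("C", "C"): 1}

def payoff(own, p_rival_C):
    """Expected payoff of pure strategy `own` when the rival plays C
    with probability p_rival_C (totally mixed: 0 < p_rival_C < 1)."""
    return (1 - p_rival_C) * u[(own, "D")] + p_rival_C * u[(own, "C")]

for p in [0.001, 0.01, 0.1, 0.5, 0.9]:
    # C strictly better, so sigma_k(D) <= eps_k -> 0 along any sequence
    # of eps_k-perfect profiles; the limit puts probability 1 on C.
    assert payoff("C", p) > payoff("D", p)
```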

4. Consider the following strategic game.

player 1/player 2 L M R

U 1,1 0,0 -9,-9

M 0,0 0,0 -7,-7

D -9,-9 -7,-7 -7,-7

This game has three NE's, all in pure strategies: (U,L), (M,M), and (D,R). In this game, (M,M) is a trembling-hand perfect equilibrium!³ This is unreasonable, for all we did was add two dominated strategies, R (for player 2) and D (for player 1), to a relabeled version of the strategic game in point 2! Myerson (1978) proposes a remedy to this situation. Formally, let Σ0 be the set of totally mixed strategy profiles, and given any ε ∈ R++, σ ∈ Σ0 is called an ε-proper equilibrium if for all i ∈ I and all si, s′i ∈ Si,

ui(si, σ−i) < ui(s′i, σ−i) ⇒ σi(si) ≤ ε · σi(s′i).

²In fact, they are also sequential equilibria as defined by Kreps and Wilson (1982).
³To see that this claim is true, given ε > 0, let

σ1(U) = ε, σ1(M) = 1 − 2ε, σ1(D) = ε, σ2(L) = ε, σ2(M) = 1 − 2ε, σ2(R) = ε.

This totally mixed profile is an ε-perfect equilibrium, and it converges to (M,M) as ε ↓ 0.


A proper equilibrium is then a profile σ ∈ Σ (which need not be totally mixed!) such that there exist a sequence {ε_k; k ∈ Z+} in R++ and a sequence {σ^k; k ∈ Z+} in Σ0 with (i) lim_{k→∞} ε_k = 0; (ii) σ^k is an ε_k-proper equilibrium for all k ∈ Z+; and (iii) lim_{k→∞} σ^k_i(si) = σi(si) for all i ∈ I and si ∈ Si. It can be shown that a proper equilibrium is necessarily a trembling-hand perfect equilibrium (this is obvious; simply observe that σi(si) ≤ ε · σi(s′i) ⇒ σi(si) ≤ ε), and hence an NE, but the reverse is not true. In particular, the above game has a unique proper equilibrium, (U,L). To see this, consider any ε-proper equilibrium σ^ε, which is by definition totally mixed. Since player 1 would be indifferent between M and D only if player 2 were expected to use R with probability one, we conclude that player 1 strictly prefers M to D. This implies that player 1 must assign probabilities

(A1) σ^ε_1(D) ≤ ε · σ^ε_1(M),

which implies that, from player 2's point of view, for ε > 0 small enough,

u2(L, σ^ε_1) − u2(R, σ^ε_1) = 10σ^ε_1(U) + 7σ^ε_1(M) − 2σ^ε_1(D) ≥ 10σ^ε_1(U) + (7 − 2ε)σ^ε_1(M) > 0,

implying that σ^ε_2(R) ≤ ε · σ^ε_2(L), which in turn implies that, from player 1's point of view, for ε > 0 small enough,

u1(U, σ^ε_2) − u1(M, σ^ε_2) = σ^ε_2(L) − 2σ^ε_2(R) ≥ (1 − 2ε)σ^ε_2(L) > 0,

implying further that

(A2) σ^ε_1(M) ≤ ε · σ^ε_1(U).

By (A1) and (A2), we conclude that σ^ε_1(U) ≥ 1 − ε − ε^2, and hence in any proper equilibrium σ = lim_{ε↓0} σ^ε, we have

1 ≥ σ1(U) = lim_{ε↓0} σ^ε_1(U) ≥ lim_{ε↓0} (1 − ε − ε^2) = 1,

so that σ1(U) = 1. A similar reasoning applies to σ2(L). Hence (U,L) is the unique proper equilibrium of this strategic game.
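As a numerical sanity check (an illustration of mine, not in the original text), the profile assigning probabilities proportional to (1, ε, ε²) to (U, M, D) and to (L, M, R) can be verified to be an ε-proper equilibrium, with σ1(U) → 1 as ε ↓ 0:

```python
# Sketch (my own numerical check): probabilities proportional to
# (1, eps, eps^2) on (U, M, D) and on (L, M, R) form an eps-proper
# equilibrium of the game in point 4, and sigma_1(U) -> 1.

ROWS, COLS = ["U", "M", "D"], ["L", "M", "R"]
# Both players receive the same payoff in every cell of the table above.
PAY = {("U", "L"): 1, ("U", "M"): 0, ("U", "R"): -9,
       ("M", "L"): 0, ("M", "M"): 0, ("M", "R"): -7,
       ("D", "L"): -9, ("D", "M"): -7, ("D", "R"): -7}

def check_eps_proper(eps):
    z = 1 + eps + eps ** 2
    sig1 = {"U": 1 / z, "M": eps / z, "D": eps ** 2 / z}
    sig2 = {"L": 1 / z, "M": eps / z, "R": eps ** 2 / z}
    # Expected payoff of each pure strategy against the rival's mixture:
    v1 = {r: sum(sig2[c] * PAY[(r, c)] for c in COLS) for r in ROWS}
    v2 = {c: sum(sig1[r] * PAY[(r, c)] for r in ROWS) for c in COLS}
    # eps-proper condition: worse strategies get at most eps times the
    # probability of better ones (small tolerance for rounding).
    ok = all(sig1[r] <= eps * sig1[rp] + 1e-12
             for r in ROWS for rp in ROWS if v1[r] < v1[rp])
    ok = ok and all(sig2[c] <= eps * sig2[cp] + 1e-12
                    for c in COLS for cp in COLS if v2[c] < v2[cp])
    return ok, sig1["U"]

for eps in [0.1, 0.01, 0.001]:
    ok, pU = check_eps_proper(eps)
    assert ok                  # the eps-proper inequalities hold
    assert pU > 1 - 2 * eps    # sigma_1(U) -> 1: the limit is (U, L)
```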

5. Myerson also proves that any finite strategic game has a proper equilibrium, and hence any finite game has a trembling-hand perfect equilibrium and an NE. Let us sketch Myerson's proof. Note that it suffices to show that for any ε_k ∈ (0, 1), an ε_k-proper equilibrium σ^k exists, since by the compactness of Σ, a convergent subsequence of {σ^k; k ∈ Z+} exists, and its limit is a proper equilibrium whenever ε_k ↓ 0. Thus fix any ε ∈ (0, 1). Define

m ≡ max{#(Si); i = 1, 2, · · · , I},

where recall that #(A) is the cardinality of set A (the number of elements of A). Define d ≡ ε^m/m. For all i = 1, 2, · · · , I, define

Σ^d_i ≡ {σi ∈ Σi : σi(si) ≥ d, ∀si ∈ Si}.

Note that Σ^d_i is a non-empty compact subset of Σ0_i. Define

Σ^d ≡ Π^I_{i=1} Σ^d_i.

For all i = 1, 2, · · · , I, consider the correspondence Fi : Σ^d → Σ^d_i defined by

Fi(σ) = {τi ∈ Σ^d_i : ui(si, σ−i) < ui(s′i, σ−i) ⇒ τi(si) ≤ ε · τi(s′i), ∀si, s′i ∈ Si}.

Note that given each σ ∈ Σ^d, Fi(σ) is convex and closed. We claim that Fi(σ) is also non-empty. To see this, for each si ∈ Si, define ρ(si) to be the number of pure strategies s′i ∈ Si with ui(si, σ−i) < ui(s′i, σ−i). Define

σ0_i(si) ≡ ε^{ρ(si)} / Σ_{s″i ∈ Si} ε^{ρ(s″i)}, ∀si ∈ Si.

By construction, we have σ0_i(si) ≥ d and Σ_{si ∈ Si} σ0_i(si) = 1. Moreover, it can be verified that σ0_i ∈ Fi(σ), showing that Fi(σ) is indeed non-empty. Finally, one can verify that Fi is upper hemi-continuous. Define

F ≡ Π^I_{i=1} Fi.


Then F, inheriting the main properties from the Fi's, is non-empty-valued, convex-valued, and upper hemi-continuous, and hence F has a fixed point by Kakutani's fixed point theorem. A fixed point of F is an ε-proper equilibrium. Since ε ∈ (0, 1) was chosen arbitrarily, this proves that we can construct a sequence of ε_k-proper equilibria, {σ^k; k ∈ Z+}, and the latter must have a convergent subsequence, whose limit is exactly a proper equilibrium. This finishes the proof of existence.

6. Now we briefly mention other useful equilibrium concepts proposed by game theorists. (Very difficult; beginners can skip.)

Definition 10: (Aumann, 1959) Given an I-person finite strategic game Γ, a profile σ is a strong equilibrium if for any J ⊂ {1, 2, · · · , I} and any σ′ ∈ Σ, there exists j ∈ J such that

uj(σ) ≥ uj(σ′_J, σ−J),

where σ′_J is the profile σ′ restricted to the set of players J and σ−J is the profile σ restricted to the set of players not contained in J.

From now on any nonempty subset of players from the original game is referred to as a coalition. Immediately from the above definition, a strong equilibrium is an NE; to see this, just let J be any singleton coalition. Define the set

U ≡ {(u1(σ), u2(σ), · · · , uI(σ)) : σ ∈ Σ}.

A strong equilibrium, if it exists, must give a profile of payoffs lying on the efficient frontier of U: otherwise, we could take J to be the entire set of players and obtain a contradiction.

Thus a strong NE is an NE which is robust not only against unilateral deviations but against all coalitional deviations as well. The problem with this solution concept is that it asks us to check all possible coalitional deviations, including those which are themselves unreasonable: given a coalition that might benefit from a joint deviation from the original NE strategy profile, there may be some sub-coalition that can benefit from a further joint deviation from this supposed joint deviation of the entire coalition. Thus coalitional deviations must be treated in a logically consistent way; this is where the coalition-proof equilibrium enters the picture. Intuitively, among the solution concepts of NE, strong NE, and coalition-proof NE, NE is the weakest and strong NE is the strongest, so that it can happen that, given a game, there exists an NE and a coalition-proof equilibrium but no strong equilibrium.

7. Definition 11: (Bernheim, Peleg, and Whinston, 1987) Suppose that we are given an I-person finite strategic game Γ. Let J be the set of all feasible coalitions.

(i) If I = 1, then a profile σ is a coalition-proof equilibrium if and only if u1(σ) ≥ u1(σ′) for all σ′ ∈ Σ.

(ii) Suppose I ≥ 2 and the coalition-proof equilibrium (CPE) has been defined for all n-person finite strategic games with n ≤ I − 1. A profile σ is self-enforcing if for all proper coalitions J ∈ J, σJ is a CPE in the game Γ/σ−J, i.e., the #(J)-person strategic game where everything is as in Γ except that the players outside J are restricted to play σ−J. A coalition-proof equilibrium is a self-enforcing profile σ such that no other self-enforcing profile σ′ simultaneously provides each and every player in Γ a strictly higher payoff than σ.

Thus when I = 1, a CPE requires only the best-response property. Following this fact, by considering all one-person coalitions, we conclude from the definition of self-enforceability that a CPE must be an NE. For two-person finite strategic games, the CPE's are exactly the NE's that are not strictly Pareto dominated. However, for I ≥ 3, no inclusion relationship can be established between the CPE's and the Pareto-undominated NE's. Clearly, a strong equilibrium, if it exists, must be a CPE.

With these definitions and discussions in mind, we now consider two problems. First, consider three players A, B, and C, who are to divide one dollar; each of them must choose a point in the two-dimensional simplex {(a, b, c) ∈ R^3_+ : a + b + c = 1}. The three players move simultaneously, and if at least two of them pick the same point (a, b, c), then this point is implemented, in the sense that a, b, and c will be the payoffs of A, B, and C respectively; otherwise, the dollar is destroyed. We claim that this game has no CPE's. To see this, suppose instead that there were a CPE (denoted σ) in which the players get expected payoffs (a, b, c), where without loss of generality a > 0. Given σ1, players 2 and 3 could jointly deviate in the game Γ/σ1 by announcing simultaneously (0, a/2 + b, a/2 + c), for example, thereby making both of them strictly better off than under the original equilibrium profile in the two-person finite strategic game Γ/σ1, showing that σ cannot be self-enforcing. (For σ to be self-enforcing, it is necessary that (σ2, σ3) be a CPE in the game Γ/σ1, which in turn requires that (σ2, σ3) be a Pareto-undominated equilibrium in Γ/σ1.) Thus by definition, σ cannot be a CPE, a contradiction.
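The coalitional deviation used in this argument is easy to check mechanically. The sketch below (my own illustration, with hypothetical sample agreements) confirms that from any agreement (a, b, c) with a > 0, players 2 and 3 can announce (0, a/2 + b, a/2 + c) and both gain strictly:

```python
# Sketch (illustrative, my own sample points): players 2 and 3 split
# player 1's share between themselves and both gain strictly.

def joint_deviation(a, b, c):
    assert a > 0 and min(a, b, c) >= 0 and abs(a + b + c - 1) < 1e-12
    new = (0.0, a / 2 + b, a / 2 + c)
    assert abs(sum(new) - 1) < 1e-12    # still a point on the simplex
    return new

for a, b, c in [(1.0, 0.0, 0.0), (0.4, 0.3, 0.3), (0.2, 0.8, 0.0)]:
    _, b2, c2 = joint_deviation(a, b, c)
    assert b2 > b and c2 > c            # both deviators gain strictly
```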

Lemma 1: Say that an I-person finite strategic game Γ exhibits the unique-NE property if for any J ∈ J and any σ−J, there exists a unique NE in the game Γ/σ−J. A game exhibiting the unique-NE property has a unique CPE.

Proof: It suffices to show that for a game exhibiting the unique-NE property, self-enforcing profiles and Nash equilibrium profiles coincide, for in that case a profile is a CPE if and only if it is self-enforcing.

Suppose that I = 2 for Γ. By hypothesis, this game has a unique NE, denoted (σ1, σ2), and given σi, σj is the unique best response of player j (the unique-NE property holds for the one-person restricted games as well). Thus σj is coalition-proof in the game Γ/σi, and hence σ is self-enforcing. Now suppose that self-enforcing profiles and NE's have been shown equivalent for all I-person finite strategic games exhibiting the unique-NE property, where I ≥ 2. We now show that this equivalence continues to hold for any such game (exhibiting the unique-NE property) with I + 1 players. Let σ be the unique NE for Γ, where there are I + 1 players in Γ. Fix any J ∈ J, and note that the game Γ/σ−J also exhibits the unique-NE property; in fact, σJ must be the unique NE for this #(J)-person game. It follows from the inductive hypothesis that σJ is self-enforcing and, in this case, the unique CPE in the game Γ/σ−J. This shows that σ is self-enforcing in the (I + 1)-person game Γ. ∥

The second problem to be discussed here is the familiar Cournot game in which N firms costlessly produce a homogeneous good for consumers. The inverse demand is (in the relevant region) p = 1 − Σ^N_{i=1} qi. We claim that this game has a unique CPE, which is not a strong equilibrium. To see this, note that the game exhibits the unique-NE property, and hence by Lemma 1 it has a unique CPE. The game has no strong equilibrium, because if it had one, that equilibrium would have to be an NE lying on the efficient frontier of the set U of the players' payoff vectors, which is impossible (think about the profile (1/(2N), 1/(2N), · · · , 1/(2N)), under which each firm earns 1/(4N), strictly more than the Cournot-NE profit 1/(N+1)^2).
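A quick computation (a sketch of mine, assuming the standard Cournot quantities q_i = 1/(N+1)) confirms the comparison behind this claim:

```python
# Sketch (my own check, using the standard Cournot outputs q_i = 1/(N+1)):
# the collusive profile q_i = 1/(2N) strictly Pareto dominates the unique
# NE payoffs, so no NE payoff lies on the efficient frontier of U.

def profit(q_i, total_q):
    return q_i * (1 - total_q)          # zero costs, p = 1 - sum(q)

for N in [2, 3, 5, 10]:
    cournot = profit(1 / (N + 1), N / (N + 1))   # NE profit 1/(N+1)^2
    collusive = profit(1 / (2 * N), 1 / 2)       # collusive profit 1/(4N)
    assert abs(cournot - 1 / (N + 1) ** 2) < 1e-12
    assert collusive > cournot          # the grand coalition gains
```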

8. The last equilibrium concept we shall go over is rationalizability (Bernheim, 1984). Define Σ^0_i = Σi. For all natural numbers n, define

Σ^n_i = {σi ∈ Σ^{n−1}_i : ∃σ−i ∈ Π_{j≠i} co(Σ^{n−1}_j) such that ui(σi, σ−i) ≥ ui(σ′i, σ−i) ∀σ′i ∈ Σ^{n−1}_i}.

We call the elements of ∩^{+∞}_{n=0} Σ^n_i rationalizable strategies. Intuitively, rational players will never use strategies which are never best responses. Rationalizability extends this idea to fully exploit the assumption that the players' rationality is common knowledge.
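The iteration Σ^n can be sketched in code. The following simplified illustration (mine, not from the text) restricts beliefs to the rivals' surviving pure strategies rather than mixtures over the convex hulls co(Σ^{n−1}_j); this is a genuine simplification of the definition, but it already conveys the iteration on simple games:

```python
from itertools import product

# Simplified sketch of iterated elimination of never-best-responses
# (my own illustration; beliefs are restricted to rivals' PURE
# strategies, unlike the mixed beliefs allowed in the text).

def mk(i, s_i, s_minus_i):
    """Rebuild a full pure-strategy profile from i's strategy and the rest."""
    prof = list(s_minus_i)
    prof.insert(i, s_i)
    return tuple(prof)

def rationalizable_pure(payoffs, strategies):
    """payoffs[i] maps a full pure profile to player i's payoff."""
    surviving = [set(s) for s in strategies]
    changed = True
    while changed:
        changed = False
        for i in range(len(surviving)):
            S_i = surviving[i]
            others = [surviving[j] for j in range(len(surviving)) if j != i]
            keep = set()
            for s_minus_i in product(*others):
                top = max(payoffs[i][mk(i, s, s_minus_i)] for s in S_i)
                keep |= {s for s in S_i
                         if payoffs[i][mk(i, s, s_minus_i)] == top}
            if keep != S_i:
                surviving[i], changed = keep, True
    return surviving

# A dominance-solvable 2x2 example (hypothetical payoffs): R is never a
# best response for player 2, and once R is deleted, B is never a best
# response for player 1.
u1 = {("T", "L"): 2, ("T", "R"): 0, ("B", "L"): 1, ("B", "R"): 1}
u2 = {("T", "L"): 1, ("T", "R"): 0, ("B", "L"): 1, ("B", "R"): 0}
result = rationalizable_pure([u1, u2], [("T", "B"), ("L", "R")])
assert result == [{"T"}, {"L"}]
```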

9. Let us now develop the notion of rationalizability in detail. Given a game Γ in normal form with I players, consider sets Hi ⊂ Σi for all i = 1, 2, · · · , I. We shall adopt the following definitions.

• Let Hi(0) ≡ Hi and define inductively

Hi(t) ≡ {σi ∈ Hi(t − 1) : ∃σ−i ∈ Π_{j≠i} co(Hj(t − 1)) such that ui(σi, σ−i) ≥ ui(σ′i, σ−i) ∀σ′i ∈ Hi(t − 1)},

where co(A) is the smallest convex set containing A, called the convex hull generated by A. Define

Ri(Π^I_{i=1} Hi) ≡ ∩^∞_{t=1} Hi(t).

• An I-tuple of sets (A1, A2, · · · , AI) has the best response property if for all i, Ai ⊂ Σi, and for all i and all σi ∈ Ai, there exists σ−i ∈ Π_{j≠i} co(Aj) such that σi is a best response for i against σ−i.

• Ai ⊂ Σi has the pure strategy property if for all σi ∈ Ai and all si ∈ Si such that σi(si) > 0, we have si ∈ Ai.

• A profile σ ∈ Σ is rationalizable if σi ∈ Ri(Σ) for all i.

With these definitions, we have:

Lemma 2: Suppose that for all i, Ai ⊂ Σi is nonempty, closed, and satisfies the pure strategy property. Then (a) for all i and all t ∈ Z+, Ai(t) is nonempty, closed, and satisfies the pure strategy property; and (b) for some k ∈ Z+, Ai(t) = Ai(k) for all i and all t ≥ k.

Proof: To prove (a) by induction, it suffices to show that the statement is true for t whenever it is true for t − 1. By definition, if σi ∈ Ai(t), then each si ∈ Si with σi(si) > 0 belongs to Ai(t) as well, proving the pure strategy property. To show nonemptiness, note that co(Ai(t − 1)) is compact for all i, since Ai(t − 1) is. By the induction hypothesis, Ai(t − 1) is nonempty for all i. Since ui is continuous, the Weierstrass theorem ensures the nonemptiness of Ai(t). Finally, for closedness, note that any convergent sequence {σ^n_i} in Ai(t) must have a limit σi in Ai(t − 1), as by the induction hypothesis Ai(t − 1) is closed. Suppose for each n, σ^n_i is a best response against σ^n_{−i} ∈ Π_{j≠i} co(Aj(t − 1)). Since the set Π_{j≠i} co(Aj(t − 1)) is compact, a subsequence {σ^{n_k}_{−i}} converges to some σ−i ∈ Π_{j≠i} co(Aj(t − 1)). Now σi must be a best response against σ−i by the continuity of ui. Thus σi ∈ Ai(t), showing that Ai(t) is closed.

Finally, consider statement (b). Note that Ai(t) ≠ Ai(t − 1) only if co(Aj(t)) ≠ co(Aj(t − 1)) for some j ≠ i. By the pure strategy property, this can happen only if some pure strategy sj ∈ Aj(t − 1) was deleted and is not contained in Aj(t). Since there are only a finite number of pure strategies for any given j, this process must stop somewhere. ∥

10. Now we can give the main results regarding the rationalizable set of profiles.

Proposition 1: For all i, Ri(Σ) is nonempty and contains at least one pure strategy.

Proof: Simply let Ai = Σi and apply Lemma 2. ∥

Note that by statement (b) of Lemma 2, the I-tuple of sets {R1(Σ), R2(Σ), · · · , RI(Σ)} has the best response property.

Proposition 2: Define for all i,

Ei ≡ {σi ∈ Σi : σi ∈ Ai for some I-tuple {A1, A2, · · · , AI} with the best response property}.

Then Ei = Ri(Σ) for all i.


11. Because of Proposition 2, we can show that

Proposition 3: Every NE, denoted σ, is rationalizable.

Proof: The I-tuple of sets {{σ1}, {σ2}, · · · , {σI}} satisfies the best response property and σi ∈ {σi} for all i, so that Proposition 2 implies that σi ∈ Ri(Σ) for all i. ∥

12. An important connection between the rationalizable set of profiles and the set of profiles surviving iterated strict dominance is now given. In general, the former is contained in the latter.

Proposition 4: In two-person finite games, the two concepts coincide.

Proof: Suppose that σi is not a best response to any element of Σj; i.e., for each σj ∈ Σj there exists b(σj) ∈ Σi such that

ui(b(σj), σj) > ui(σi, σj).

Call the original game Γ, and construct a zero-sum game Γ′ as follows. The new game has the same set of players and pure strategy spaces, but the payoffs are defined as

u′i(σ′i, σj) ≡ ui(σ′i, σj) − ui(σi, σj) for all (σ′i, σj) ∈ Σ, and u′j(σ′i, σj) ≡ −u′i(σ′i, σj).

This game has an NE in mixed strategies, say (σ*i, σ*j). For any σj ∈ Σj, we have

u′i(σ*i, σj) ≥ u′i(σ*i, σ*j) ≥ u′i(b(σ*j), σ*j) > u′i(σi, σ*j) = 0,

where the first inequality holds because σ*j minimizes u′i(σ*i, ·) in the zero-sum game Γ′,

proving that σi is strictly dominated by σ*i. Thus a strategy for player i that can never be a best response against player j's strategies must be strictly dominated from player i's point of view. For the purpose of iterated deletion of strictly dominated strategies, define S^0_i = Si, Σ^0_i = Σi, and for all t ∈ Z+,

S^t_i ≡ {si ∈ S^{t−1}_i : there is no σi ∈ Σ^{t−1}_i with ui(σi, s−i) > ui(si, s−i) for all s−i ∈ S^{t−1}_{−i}},

Σ^t_i ≡ {σi ∈ Σi : σi(si) > 0 ⇒ si ∈ S^t_i}, S^∞_i ≡ ∩_{t∈{0}∪Z+} S^t_i, and

Σ^∞_i ≡ {σi ∈ Σi : ∀σ′i ∈ Σi, ∃s−i ∈ S^∞_{−i}, ui(σi, s−i) ≥ ui(σ′i, s−i)}.

In terms of these new notations, we have proved that Σ^1_i = Σi(1) (since a strictly dominated strategy for player i can never be a best response against player j's strategies, and conversely). The above argument can be repeated to show that Σ^∞_i = Σi(∞), so that the two concepts are equivalent. ∥
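The key step of this argument, that a never-best-response strategy must be strictly dominated by a possibly mixed strategy, can be illustrated numerically. In the hypothetical 3×2 game below (payoffs of my own choosing), D is dominated by no pure strategy but is strictly dominated by a 50/50 mixture of U and M:

```python
# Hypothetical 3x2 game (my own payoffs) illustrating proposition 4:
# D is never a best response; no PURE strategy strictly dominates it,
# yet a mixture of U and M does.

U1 = {"U": (3, 0), "M": (0, 3), "D": (1, 1)}  # player i's payoffs vs (L, R)

def dominated_by_mixture(target, pool, grid=1000):
    """Grid-search mixtures lam*pool[0] + (1-lam)*pool[1] for one that
    strictly dominates `target` against both rival pure strategies."""
    for k in range(grid + 1):
        lam = k / grid
        mix = tuple(lam * a + (1 - lam) * b
                    for a, b in zip(U1[pool[0]], U1[pool[1]]))
        if all(m > t for m, t in zip(mix, U1[target])):
            return lam
    return None

# No pure strategy strictly dominates D ...
assert not all(a > b for a, b in zip(U1["U"], U1["D"]))
assert not all(a > b for a, b in zip(U1["M"], U1["D"]))
# ... but any mixture with weight on U in (1/3, 2/3) does, e.g. 50/50.
assert dominated_by_mixture("D", ("U", "M")) is not None
```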

13. Let us offer another proof of the above proposition. Fix j ∈ {1, 2}. Let (s^1_j, s^2_j, · · · , s^{#(Sj)}_j) be an enumeration of player j's pure strategies, and let #(Sj) = nj. For each σi ∈ Σi, let

xi(σi) ≡ (ui(σi, s^1_j), ui(σi, s^2_j), · · · , ui(σi, s^{nj}_j)),

and define the set

Xi ≡ {xi(σi) : σi ∈ Σi}.

Then Xi is non-empty, convex, and compact. To see that Xi is convex, note that xi : Σi → R^{nj} is linear, and the linear image of a convex set is convex: for any σi, σ′i ∈ Σi and any λ ∈ [0, 1],

λxi(σi) + (1 − λ)xi(σ′i) = λ(ui(σi, s^1_j), · · · , ui(σi, s^{nj}_j)) + (1 − λ)(ui(σ′i, s^1_j), · · · , ui(σ′i, s^{nj}_j)) = xi(λσi + (1 − λ)σ′i),

and since λσi + (1 − λ)σ′i ∈ Σi, xi(λσi + (1 − λ)σ′i) ∈ Xi. Also, as the linear function xi(·) is continuous, Xi is compact because Σi is compact.

If σi is not strictly dominated, we claim that xi(σi) is a boundary point of Xi. (A point x ∈ R^m is a boundary point of A ⊂ R^m if for all r > 0, B(x, r) ∩ A ≠ ∅ ≠ B(x, r) ∩ A^c.) Suppose not. Then xi(σi) would be an interior point of Xi, so that for some r > 0 and some e ∈ (0, r), xi(σi) + (e, e, · · · , e) ∈ Xi, and the mixed strategy attaining this point would strictly dominate σi, a contradiction. Next,

define

Yi ≡ {y − xi(σi) : y ∈ Xi}.

It follows that zero is a boundary point of the nonempty, convex, compact set Yi. Consider the nonempty set Z ⊂ R^{nj} defined by

Z ≡ {z ∈ R^{nj} : z ≫ 0},

where z ≫ 0 means that for all k = 1, 2, · · · , nj, the k-th element of z, denoted zk, is strictly positive. Note that Z ∩ Yi = ∅. Moreover, Z is convex. One version of the separating hyperplane theorem implies the existence of some non-zero vector p ∈ R^{nj} such that p′y ≤ 0 ≤ p′z for all y ∈ Yi and z ∈ Z. Now we claim that for all k = 1, 2, · · · , nj, the k-th element of p, denoted pk, is non-negative. To see this, suppose that pk < 0 for some k. Since p′z ≥ 0 for every z ≫ 0, we must have pl > 0 for some l (so that nj ≥ 2). Let pm ≡ max_l pl > 0. Pick z* ∈ Z such that z*_k > (nj − 1)pm/|pk| and z*_q = 1 for all q ≠ k. It follows that, for this z*,

p′z* ≤ pk z*_k + (nj − 1)pm < 0,

which is a contradiction.

Thus we have shown the existence of a non-negative, non-zero vector p ∈ R^{nj} defining a hyperplane (a linear functional) that separates the sets Z and Yi. We can normalize this functional by rescaling p so that Σ^{nj}_{k=1} pk = 1, making p a legitimate mixed strategy for player j. Given p, since p′y ≤ 0 for all y ∈ Yi, we have ui(σ′i, p) − ui(σi, p) ≤ 0 for all σ′i ∈ Σi; that is, σi is a best response of player i to player j's mixed strategy p. As in the first proof of Proposition 4, this argument can be iterated to show that the set of profiles surviving iterated strict dominance is included in the set of rationalizable profiles, so that the two solution concepts coincide in two-player finite strategic games.

14. The above proof of Proposition 4 fails if I > 2 because not all probability distributions over S−i are products of independent probability distributions over the Sj, j ≠ i. (Recall that an NE in mixed strategies assumes independent randomization across players.) However, the equivalence between the two concepts stated in Proposition 4 is restored if the players' randomizations can be correlated.

Definition 12: Given a game in normal form, an (objective) correlated equilibrium is a probability distribution p(·) over S such that for all i and all si ∈ Si with p(si) > 0,

E[ui(si, s̃−i)|si] ≥ E[ui(s′i, s̃−i)|si], ∀s′i ∈ Si.

Each p(·) can be thought of as a randomization device for which s ∈ S occurs with probability p(s); when s occurs the device suggests that player i play si without revealing to player i what s is, and the requirement is that all players find it optimal to conform to these suggestions at all times. Let P be the set of all possible devices of this sort. Immediately, all mixed-strategy NE's are elements of P.

Proposition 5: If si is not strictly dominated for player i, then it is a best response under some p(·) ∈ P.

15. Find all correlated equilibria for the following game:

Player 1/Player 2 L R

U 5, 1 0, 0

D 4, 4 1, 5

Solution: Let the correlation device assign probabilities a, b, c, and d to (U,L), (U,R), (D,L), and (D,R) respectively. Consider the following four inequalities (referred to as I, II, III, and IV):

(I) [a/(a + c)] · 1 + [c/(a + c)] · 4 ≥ [a/(a + c)] · 0 + [c/(a + c)] · 5,
(II) [a/(a + b)] · 5 + [b/(a + b)] · 0 ≥ [a/(a + b)] · 4 + [b/(a + b)] · 1,
(III) [c/(c + d)] · 4 + [d/(c + d)] · 1 ≥ [c/(c + d)] · 5 + [d/(c + d)] · 0,
(IV) [b/(b + d)] · 0 + [d/(b + d)] · 5 ≥ [b/(b + d)] · 1 + [d/(b + d)] · 4.

For (a, b, c, d) to define a correlated equilibrium, when players are told to play (U,L), for instance, I and II should hold. Similarly, when players are told to play (D,L), (U,R), and (D,R), the pairs [I, III], [IV, II], and [III, IV] should respectively hold. Simplifying, we have four conditions: a ≥ c, a ≥ b, d ≥ c, and d ≥ b.

Let the set of correlated equilibria be A. Then

A = {(a, b, c, d) : a + b + c + d = 1; a, b, c, d ≥ 0; a, d ≥ b, c}.

Note that all NE's are contained in A, and if (a, b, c, d) is a totally mixed NE, then it must satisfy

a/b = c/d and a/c = b/d.
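The obedience constraints I-IV can be checked mechanically. The sketch below (an illustration of mine) multiplies each conditional inequality through by the probability of the recommendation, which turns it into a linear constraint and handles zero-probability recommendations automatically:

```python
# Sketch (my own check): the obedience constraints I-IV for a candidate
# device (a, b, c, d) over ((U,L), (U,R), (D,L), (D,R)) in the game above.

def is_correlated_eq(a, b, c, d, tol=1e-9):
    assert abs(a + b + c + d - 1) < tol and min(a, b, c, d) >= 0
    return all([
        a * 1 + c * 4 >= a * 0 + c * 5 - tol,  # I:   player 2 obeys L
        a * 5 + b * 0 >= a * 4 + b * 1 - tol,  # II:  player 1 obeys U
        c * 4 + d * 1 >= c * 5 + d * 0 - tol,  # III: player 1 obeys D
        b * 0 + d * 5 >= b * 1 + d * 4 - tol,  # IV:  player 2 obeys R
    ])

assert is_correlated_eq(0.4, 0.1, 0.1, 0.4)      # a, d >= b, c: in A
assert is_correlated_eq(0.5, 0.0, 0.0, 0.5)      # public coin over the two pure NE's
assert not is_correlated_eq(0.1, 0.4, 0.4, 0.1)  # violates a >= c
```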

16. Example 6: In a duopolistic industry, two risk-neutral firms (i.e., expected-profit maximizers), producing products A and B respectively, face three segments of consumers:

Segment Population Valuation for A Valuation for B
LA α V 0
LB β 0 V
S 1 − α − β v v

where 0 < β ≤ α < α + β < 1 and 0 ≤ v < V. These segments consist of the two firms' loyals (LA and LB) and the switchers (S), who regard the two products as perfect substitutes. We have assumed that a loyal is willing to pay more than a switcher to obtain the product.

For simplicity, the two firms have no production costs, and they compete in prices in a simultaneous game. We shall demonstrate the equilibrium dealing behavior of the two firms.

17. First, we look for a pure-strategy NE. Suppose (pA, pB) is an equilibrium. There are 3 possibilities: (i) pA, pB > v; (ii) pA, pB ≤ v; and (iii) max(pA, pB) > v ≥ min(pA, pB). For case (i), we must have pA = pB = V, and for this to be an NE, we must require

βV ≥ (1 − α)v, αV ≥ (1 − β)v. (1)

When (1) holds, a pure-strategy NE with pA = pB = V indeed exists, and in this NE the switchers are unserved.

On the other hand, if (ii) were an NE, then pA = pB. To see this, suppose instead that pA < pB ≤ v; then firm B would sell only to its own loyals at the price pB ≤ v < V, and it would have done better by pricing at V! Again, pA = pB is not an NE unless pA = pB = 0: otherwise the common equilibrium price is dominated by a price slightly lower. Finally, pA = pB = 0 is still not an NE, for each firm can make a profit greater than or equal to βV by pricing at V. Thus case (ii) yields no NE. For case (iii) to be an NE, we must have either (iii-a) pA = V, pB = v or (iii-b) pA = v, pB = V. The conditions that support (iii-a) are

αV ≥ (1 − β)v ≥ (1 − α)v ≥ βV, (2)

and when (2) holds, a pure-strategy NE with pA = V and pB = v indeed exists. One can derive analogous conditions supporting the pure-strategy NE in (iii-b).
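Condition (1) is easy to verify for concrete parameter values. The sketch below (with illustrative numbers of my own choosing) checks it:

```python
# Sketch with illustrative numbers (my own choice): condition (1) says
# that at p_A = p_B = V each firm prefers serving its loyals at V to
# cutting the price to v and capturing the switchers as well.

def check_condition_1(alpha, beta, v, V):
    # Firm A: alpha*V at price V versus (alpha + 1-alpha-beta)*v = (1-beta)*v at v.
    a_ok = alpha * V >= (1 - beta) * v
    b_ok = beta * V >= (1 - alpha) * v
    return a_ok and b_ok

# Large loyal segments: (V, V) is a pure-strategy NE.
assert check_condition_1(alpha=0.4, beta=0.4, v=1.0, V=2.0)
# Small loyal segments (condition (3) territory): (V, V) fails.
assert not check_condition_1(alpha=0.1, beta=0.1, v=1.0, V=2.0)
```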

18. Of course, we observe no dealing behavior in a pure-strategy NE. Now we look for mixed-strategy NE's. For ease of exposition, we assume from now on that α = β. Then (2) becomes

α/(1 − α) = v/V,

which cannot hold generically. Thus the only possible generic pure-strategy NE of this game occurs when

α/(1 − α) ≥ v/V.

Therefore, we assume that

α/(1 − α) < v/V. (3)

Condition (3) says that the loyals are not important enough, so that the firms cannot commit not to compete for the switchers.

19. Now the game is symmetric, and we shall look for a mixed-strategy NE (FA(·), FB(·)), where Fi(x) = prob.(p̃i ≤ x) is the (cumulative) distribution function of firm i's random equilibrium price p̃i. We shall make use of the following lemmas.

Lemma 3: In equilibrium, FA(v), FB(v) > 0.

Lemma 3 says that both firms choose some price less than or equal to v with positive probability. To see this, suppose not. Then the


switchers are never served in the NE, and if this were truly an NE, both firms would choose the price V with probability one, a contradiction to the presumption that there is a "non-degenerate" mixed-strategy NE.

Lemma 4: For i ∈ {A, B}, Fi(·) is continuous on (−∞, v).

Lemma 4 says that selecting a price x < v with a strictly positive probability mass cannot be a good idea in equilibrium.⁴ To see this, suppose that at x < v, ΔFi(x) ≡ Fi(x) − Fi(x−) ≡ Fi(x) − lim_{y↑x} Fi(y) > 0. In mathematical terms, x is a point of jump of the function Fi(·). In this case, compare the two pure strategies pj = x and p′j = x − ε for firm j, where ΔFi(p′j) = 0.⁵ These prices generate for firm j the following expected profits:

Πj(pj) = [½(1 − 2α) + α]x · ΔFi(x) + [(1 − 2α) + α]x · [1 − Fi(x)] + αxFi(x−)
< Πj(p′j) = [(1 − 2α) + α](x − ε) · [1 − Fi(x − ε)] + α(x − ε)Fi(x − ε)

when ε > 0 is sufficiently small. In fact, not only is pj dominated by p′j; for δ > 0 small enough, all prices in [x, x + δ) are dominated by p′j from firm j's perspective. Since in equilibrium firm j will not randomize at a dominated price, we can conclude that if it were true that in a mixed-strategy NE ΔFi(x) > 0 for some x < v, then firm j would not randomize over the interval [x, x + δ); that is, Fj(x) = Fj(x + δ). Now, if it were true that in equilibrium ΔFi(x) > 0, then of course pi = x is a best response of firm i. We shall show that this leads to a contradiction. Consider another pure strategy pi ∈ (x, x + δ) for firm i, and compare the price x with the price pi from firm i's perspective. These prices generate expected profits

Πi(x) = x[(1 − 2α) + α][1 − Fj(x)] + xαFj(x)
< pi[(1 − 2α) + α][1 − Fj(x)] + pi αFj(x)
= pi[(1 − 2α) + α][1 − Fj(pi)] + pi αFj(pi) = Πi(pi),

where the last equality uses Fj(x) = Fj(pi), and hence the price x is dominated by pi for firm i!

⁴Recall that if F : R → R is increasing, then its only possible discontinuity points are of the first kind: F(·) has well-defined left-hand and right-hand limits everywhere, but the value of F need not equal these limits.

⁵Finding such a p′j is possible no matter how small ε is required to be, because an increasing function can have at most countably many points of jump.

Lemma 5: In equilibrium, FA(v−), FB(v−) > 0.

Lemma 5 says that both firms must randomize at prices strictly lower than v. This is a refinement of Lemma 3. To see this, suppose that Fi(v−) = 0. Then it must be that Fj(v−) = 0 also. (Why?) But then Lemma 3 implies that ΔFi(v) > 0 for i = A, B. The same reasoning as in Lemma 4 shows that this leads to a contradiction: given ΔFi(v) > 0, the prices pj ∈ [v, v + δ) are dominated for firm j for sufficiently small δ > 0, so that Fj(v) = Fj(v + δ), which implies that a point mass at pi = v cannot be optimal for firm i, a contradiction.

Lemma 6: For i ∈ {A, B}, if at x < v, Fi(x) > 0, then for all y ∈ (x, v), Fi(y) > Fi(x).

Lemma 6 says that in equilibrium the distribution function must be strictly increasing over all prices that are close to but lower than v. More importantly, it says that if firm i randomizes at pi < v, then not only are all x ∈ (pi, v) best responses for firm i; firm i must in equilibrium randomize over every x ∈ (pi, v).

To see that Lemma 6 is true, suppose to the contrary that Fi(·) is flat on an interval [x, y], i.e., Fi(x) = Fi(y) > 0 with x < y < v. Then any price pj ∈ [x, y) is dominated by pj + ε from firm j's perspective for ε > 0 small enough. (Why?) Thus firm j will not randomize over [x, y). Consider⁶

p̲i ≡ inf{z : Fi(z) = Fi(y)}.

Then p̲i is a best response for firm i. However, this is a contradiction, because p̲i is dominated by p′i = y − δ for small enough δ > 0: the probability of winning the switchers at the price p′i is the same as at p̲i, but in the event of winning, p′i is much higher than p̲i!

20. Equipped with the above lemmas, now we can rigorously derive the mixed strategy NE. Let Πi be firm i’s equilibrium expected profit. By

6Every non-empty subset of real numbers which is bounded below has a greatest lower bound.


lemma 5, we have

Πi = pi{(1 − α)[1 − Fj(pi)] + αFj(pi)}

for some pi < v, so that for i, j ∈ {A, B}, i ≠ j,

Fj(x) = [(1 − α) − Πi/x] / (1 − 2α)

for all x ∈ [pj, v)7 and some pj ∈ [0, v). Immediately, we have

pj = Πi / (1 − α). (4)
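The indifference logic behind (4) is easy to check numerically. The sketch below uses illustrative parameter values (α = 0.2, V = 10, v = 6, and a candidate profit level Πi = 2, none of which are pinned down by the model at this point) and confirms that the candidate CDF makes firm i's expected profit constant on [Πi/(1 − α), v):

```python
# Sketch: verify that Fj(x) = ((1 - alpha) - Pi / x) / (1 - 2 * alpha)
# makes every price in [Pi / (1 - alpha), v) yield the same profit Pi
# to firm i, whose profit at p is p * ((1 - alpha)*(1 - Fj(p)) + alpha*Fj(p)).
alpha, V, v = 0.2, 10.0, 6.0   # illustrative parameters with alpha*V < (1 - alpha)*v
Pi = 2.0                       # an illustrative candidate equilibrium profit level

p_low = Pi / (1 - alpha)       # lower end of the support, eq. (4)

def Fj(x):
    return ((1 - alpha) - Pi / x) / (1 - 2 * alpha)

def profit_i(p):
    return p * ((1 - alpha) * (1 - Fj(p)) + alpha * Fj(p))

prices = [p_low + k * (v - p_low) / 50 for k in range(50)]
assert abs(Fj(p_low)) < 1e-12                           # support starts where Fj = 0
assert all(abs(profit_i(p) - Pi) < 1e-9 for p in prices)
```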

The continuity of Fj(·) on [Πi/(1 − α), v) allows us to take the limit:

Fj(v−) = [1 − α − Πi/v] / (1 − 2α) < 1,

where the inequality follows because otherwise we would have Πi ≤ vα < αV, a contradiction. Since all pj ∈ (v, V) are dominated by p′j = V for firm j, it follows that either Fj(·) has a point mass at v or at V. Note that it is impossible that both ∆Fi(v), ∆Fj(v) > 0: if this were to happen, then v would be a best response for both firms, but given firm i's strategy, from firm j's perspective v would be dominated by pj = v − ε for ε > 0 small enough, which is a contradiction.

21. Note that the infimum pi of firm i's support is itself a best response for firm i. To see this, note that all prices in (pi, v) are best responses, and they generate the same expected profit for firm i. Letting the price decrease to pi and using the fact that Fj(·) is continuous on (−∞, v), we conclude that pi also attains Πi and hence is a best response for firm i. Next, we claim that pi = pj. To see this, suppose instead that pi > pj, so that firm j may randomize at some price p̂j ∈ (pj, pi). Note, however, that any such p̂j is dominated by p̂j + ε for ε > 0 small enough, a contradiction. From here, using (4), we conclude that Πi = Πj in equilibrium.

7 This follows from lemma 4, which says that if pi < v is a best response for firm i, then so is x for all x ∈ (pi, v).


22. Now we summarize the equilibria. First suppose that for one firm i, ∆Fi(v) > 0. Then ∆Fj(v) = 0, implying that ∆Fj(V) > 0 and hence Πj = αV. It follows that Πi = αV also. Since v is a best response for firm i, we must have

∆Fj(V)·v(1 − α) + [1 − ∆Fj(V)]·vα = αV, ⇒ ∆Fj(V) = α(V − v) / [v(1 − 2α)]. (5)

Alternatively, we can obtain the same result from

∆Fj(V) = 1 − Fj(v−). (6)

In this case, we have

Fi(pi) =
  0, for pi ≤ αV/(1 − α);
  [1 − α − αV/pi] / (1 − 2α), for pi ∈ [αV/(1 − α), v);
  p∗, for pi ∈ [v, V);
  1, for pi ≥ V,

and

Fj(pj) =
  0, for pj ≤ αV/(1 − α);
  [1 − α − αV/pj] / (1 − 2α), for pj ∈ [αV/(1 − α), v];
  [1 − α − αV/v] / (1 − 2α), for pj ∈ (v, V);
  1, for pj ≥ V, (7)

where p∗ ∈ ([1 − α − αV/v] / (1 − 2α), 1].
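As a numerical sanity check on (5)–(6), under the loyal/switcher payoff structure used above and with illustrative parameter values:

```python
# Sketch: with point mass Delta = alpha*(V - v) / (v*(1 - 2*alpha)) at pj = V,
# pricing at v gives firm i exactly alpha*V, and Delta equals 1 - Fj(v-),
# i.e., eqs. (5) and (6).
alpha, V, v = 0.2, 10.0, 6.0                      # illustrative parameters
Delta = alpha * (V - v) / (v * (1 - 2 * alpha))   # point mass at V, eq. (5)
profit_at_v = v * ((1 - alpha) * Delta + alpha * (1 - Delta))
Fj_v_minus = (1 - alpha - alpha * V / v) / (1 - 2 * alpha)
assert abs(profit_at_v - alpha * V) < 1e-9        # firm i is willing to price at v
assert abs(Delta - (1 - Fj_v_minus)) < 1e-9       # eq. (6)
```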

Next consider the case where ∆FA(v) = ∆FB(v) = 0. In this case the equilibrium is symmetric, and both FA(·) and FB(·) are characterized by the Fj(·) above.

23. Let SA and SB be the supports of the equilibrium prices pA and pB. Then for i = A, B, we can interpret sup Si as the regular price of firm i, and any price strictly lower than sup Si as a dealing price. We can now compute the dealing frequency, which is

[1 − α − αV/v] / (1 − 2α),

and the depth of dealing, which is

V − E[p̃ | p̃ ≤ v]. (8)

As an exercise, you can examine how the parameters V, v, and α respectively affect the frequency and the depth of dealing.
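For concreteness, both quantities can be computed from the continuous part of Fj in (7); the parameter values below are illustrative assumptions:

```python
import math

# Sketch: dealing frequency Fj(v-) and depth of dealing V - E[p | p <= v],
# using the density f(x) = (alpha*V / x**2) / (1 - 2*alpha) implied by (7).
alpha, V, v = 0.2, 10.0, 6.0                           # illustrative parameters
p_low = alpha * V / (1 - alpha)                        # lower end of the support
freq = (1 - alpha - alpha * V / v) / (1 - 2 * alpha)   # prob. of a dealing price
# E[p | p <= v] = (1/freq) * integral of x * f(x) dx over [p_low, v]
mean_deal = (alpha * V / (1 - 2 * alpha)) * math.log(v / p_low) / freq
depth = V - mean_deal                                  # depth of dealing, eq. (8)
assert 0 < freq < 1 and p_low < mean_deal < v and depth > 0
```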


24. We can also consider the case where α > β, αV < (1 − β)v, and βV < (1 − α)v. In fact, one can show that α(1 − α) > (1 − β)β, and hence αV < (1 − β)v implies that βV < (1 − α)v. In this case, one can show that an undominated equilibrium in mixed strategies is such that

FA(x) =
  0, for x ≤ p ≡ αV/(1 − β);
  [1 − p/x] / [1 − β/(1 − α)], for x ∈ [p, v);
  [1 − p/v] / [1 − β/(1 − α)], for x ∈ [v, V);
  1, for x ≥ V, (9)

and

FB(x) =
  0, for x ≤ p ≡ αV/(1 − β);
  [1 − p/x] / [1 − α/(1 − β)], for x ∈ [p, v);
  1, for x ≥ v. (10)

What is the effect of an increase in α, say, on the equilibrium pricing strategies?

Observe that an increase in α implies an increase in p. Suppose that α2 > α1, and that for i = 1, 2, αi > β and αiV < (1 − β)v. Fix h and l such that v > h > l > p2 > p1, where pi denotes p(αi) = αiV/(1 − β). Note that, given αi, firm 2 (firm B) is indifferent between h and l:

hβFA(h, αi) + h(1 − αi)[1 − FA(h, αi)] = lβFA(l, αi) + l(1 − αi)[1 − FA(l, αi)], for i = 1, 2.

Since hFA(h, αi) > lFA(l, αi), we conclude that h[1 − FA(h, αi)] < l[1 − FA(l, αi)], so that when α increases from α1 to α2, if firm 1's strategy were still FA(x, α1), then firm 2 would strictly prefer h to l. To restore a mixed-strategy pricing equilibrium, FA(x, α2) must adjust in such a way that firm 2 remains indifferent between h and l. The question is how.

Note that the above indifference equation can be rearranged to get

FA(h, α) + l · [FA(h, α) − FA(l, α)] / (h − l) = 1 / [1 − β/(1 − α)].


Taking the limit on both sides by letting l → h, and assuming that FA is differentiable on (p, v) (which can be verified independently), we have, given α,

FA(h, α) + h·fA(h, α) = 1 / [1 − β/(1 − α)], for all h ∈ (p, v),

where fA = F′A is the density function of firm A's equilibrium price. From here we see two things. First, note that FA is strictly concave on (p, v): the right-hand side above is independent of h, so fA has to be strictly decreasing in h. Second, note that there is a small interval [p, p̂) such that at every x inside that interval, fA(x, α) is increasing in α. This happens because the right-hand side above, 1/[1 − β/(1 − α)], is increasing in α, and the Leibniz rule tells us that the change in FA at h induced by a change in α can be attributed to a change in p (which has a negative effect) and a change in the density function.8

It is easy to verify that, indeed, given any h ∈ (p2, v) we have fA(h, α2) > fA(h, α1). In fact, by directly differentiating, we have for all h ∈ (p, v),

∂²FA(h, α)/∂h∂α = ∂fA(h, α)/∂α = ∂/∂α { (p/h²) / [1 − β/(1 − α)] } > 0.

This reflects the need to restore indifference for firm 2 after an increase in α from α1 to α2. Following that increase, if firm 1 were to use FA(·, α1), then firm 2 would strictly prefer h to l, and so to restore indifference we need to make sure that under α2 the difference in the probabilities of losing the switchers, FA(h, α2) − FA(l, α2), is higher than its counterpart FA(h, α1) − FA(l, α1) under α1. This being true for all h and l, we conclude that fA is higher under α2 than under α1 at all h ∈ (p2, v).

8 Note that FA(h, α) = ∫ from p(α) to h of fA(x, α) dx. The Leibniz rule says that

∂FA(h, α)/∂α = −p′(α)·fA(p(α), α) + ∫ from p(α) to h of ∂fA(x, α)/∂α dx,

provided that fA and p(α) are both continuously differentiable in α.


Now let us return to the effect of an increase in α on FA. By directly differentiating, we have for all x ∈ (p, v),

∂FA(x, α)/∂α = {1 / [1 − β/(1 − α)]²} · { [β/(1 − α)²][1 − p(α)/x] − [p′(α)/x][1 − β/(1 − α)] },

so that the sign of ∂FA(x, α)/∂α is the same as the sign of

G(x) ≡ [β/(1 − α)²][1 − p(α)/x] − [p′(α)/x][1 − β/(1 − α)].

Note that G(·) is strictly increasing, with G(p) < 0. Letting G(x∗) = 0, we have

x∗ = p + (1 − α)(1 − α − β)V / [β(1 − β)].

Thus we can conclude that

• If min(1, α/(1 − β) + (1 − α)(1 − α − β)/[β(1 − β)]) > v/V > α/(1 − β), so that the interval (p, v) does not contain x∗, then ∂FA/∂α < 0 at all x ∈ (p, v).

• If instead 1 > v/V ≥ α/(1 − β) + (1 − α)(1 − α − β)/[β(1 − β)], so that x∗ ∈ (p, v), then ∂FA/∂α(x, α) ≤ 0 if and only if x ≤ x∗.

Intuitively, as suggested by the Leibniz rule, an increase in α results in a decrease in FA at all x ∈ (p1, p2], but to restore a mixed equilibrium, as we mentioned above, the density fA must become higher at all x ∈ (p2, v). Thus for x ∈ (p2, v), FA may become either higher or lower under α2, depending on which of these two opposing effects dominates.

25. Example 7: Consider an imperfectly competitive duopoly in the cable TV industry where consumers A and B are willing to pay up to (2, 3.5) and (0, 5), respectively, for the goods (L, HL), which are offered respectively by firm L and firm HL. The firms compete in price. Let p and q denote generic prices chosen by firm L and firm HL respectively.

Step 1: There are no pure strategy NE’s.

Proof: Suppose that there were a pure strategy NE. Then in this equilibrium, consumer A purchases HL with either a strictly positive or a zero probability. Suppose the latter is the case. Then firm HL gets no more than 5, and it gets 5 by pricing at q = 5. This cannot be an equilibrium, for in this case firm L would like to price at p = 2, and expecting this, firm HL would have done better by pricing at q = 3.5 − ε, a contradiction. Next suppose that in equilibrium consumer A feels indifferent between the two firms' offers. Since undercutting the price slightly, whenever feasible, makes A strictly prefer a firm's offer (which doubles the expected sales volume from A), this can be an equilibrium only if p = 0 and q = 1.5. This cannot be an NE, for firm HL gets 3 in this "equilibrium," while it can always get 5 by pricing at q = 5. Thus a pure strategy NE is possible only if consumer A purchases HL with probability one. This cannot be, either: producing L is costless for firm L, and for any price q > 1.5, matching is firm L's best response; but for any price q ≤ 1.5, firm HL is better off pricing at q = 5 instead (serving consumer B alone). In sum, there are no pure strategy NE's.

The support of a random variable is the smallest closed set (in the usual topology on R) that occurs with probability one. Let Sp and Sq be respectively the supports of p and q in a mixed strategy NE. Note that the set of best responses in pure strategies in equilibrium, denoted Bi for firm i, is closed, and that the support of firm i's price is a closed subset of Bi.

Step 2: In any (mixed strategy) NE, Sp ⊂ [0, 2] and Sq ⊂ [0, 5].

Proof: Obvious.

From now on, F(·) and G(·) stand respectively for the equilibrium distribution functions of p and q.

Step 3: In any NE, F(·) is continuous on (0, 2).

Proof: Suppose to the contrary that at some p ∈ (0, 2), F(p) − F(p−) > 0, where F(p−) = lim x↑p F(x) exists because F(·) is increasing. Note that this implies that p is a best response of firm L. Then there exist d, e > 0 small enough such that each q ∈ [p + 1.5, p + 1.5 + e] is strictly dominated by p + 1.5 − d from firm HL's point of view. Consistency with equilibrium then requires that firm HL not randomize over any q ∈ [p + 1.5, p + 1.5 + e]. This implies that p is not a best response (it is strictly dominated by p + e) from firm L's point of view, a contradiction.


Step 4: In any NE, Sp ⊂ [1, 2].

Proof: Note that firm HL will not pick a price lower than 2.5, for at such a price, even if both consumers buy from firm HL, its profit is lower than 5, which firm HL can get for sure from consumer B alone. This immediately implies that consumer A cannot get a surplus higher than 1 if she buys from firm HL. In turn, this implies that for any p < 1, which gives consumer A a surplus higher than 1, firm L gets consumer A for sure. Note, however, that each and every p < 1 is then strictly dominated by (p + 1)/2.

Thus in any NE, firm L randomizes over [1, 2], with a point mass occurring at most at p = 2. We shall say that p is a point of increase of F(·) if for all e > 0, F(p + e) > F(p).

Step 5: If F(p) > 0 at some p ∈ [1, 2), then p is a point of increase of F(·). Equivalently, F(·) is strictly increasing on its support Sp, and the support is a closed interval [p, 2].

Proof: Suppose not. Then there exist p ∈ (1, 2) and e > 0 such that F(p) = F(p + e). Let p′ = inf{p″ : F(p″) = F(p)}. By right-continuity of F(·), F(p′) = F(p), and for some d ≥ 0, p′ − d ∈ Sp. Since firm L does not randomize over the interval [p′, p + e), and this is indeed an NE, firm HL should not randomize over (p′ + 1.5, p + 1.5 + e′) for some e′ ∈ (0, e). This implies that p′ − d is not a best response for firm L: it is strictly dominated in equilibrium by p + e′! This is a contradiction.

Step 6: In any NE, q ∈ (3.5, 5) is not a point of increase of G(·).

Proof: Observe that these prices are strictly dominated by q = 5.

Step 7: G(5) − G(5−) > 0.

Proof: Suppose not. Note that in this case the above arguments established for F(·) apply to G(·) on [2.5, 3.5] as well. In particular, Sq is included in the union of {5} and some closed interval [q, 3.5].

We first show that F(2) = F(2−) = 1 if G(5) = G(5−), and then we show that a contradiction arises if F(2) = F(2−) = 1. Suppose that G(5) = G(5−) = 1, which implies by step 6 that G(3.5) = 1. Suppose also that G(3.5) − G(3.5−) > 0. In this case, F(2) > F(2−) cannot be part of an NE: undercutting the price p = 2 slightly is a better response for firm L. What if G(3.5) = G(3.5−) = 1? This implies that p = 2 yields zero payoff with probability one, and is hence strictly dominated by p = 1 − e for any e ∈ (0, 1). Again, this implies that F(2) = F(2−), for otherwise p = 2 would be a best response for firm L. Thus we have shown that F(2) = F(2−) = 1 if G(5) = G(5−). Next, suppose q ∈ Sq with q ≠ 5, which yields for firm HL a payoff

2q[1 − F(q − 1.5)] + qF(q − 1.5) ≡ πHL ≥ 5.

This gives

F(q − 1.5) = 2 − πHL/q.

As q ↑ 3.5, F(q − 1.5) tends to 1 (as F(2) = F(2−) = 1), so we need πHL = 3.5 < 5, a contradiction.

Step 8: The complete characterization of equilibrium.

In any equilibrium, πHL = 5, with G(·) having a probability mass at q = 5. For q ∈ Sq, we have

F(q − 1.5) = 2 − 5/q,

implying that F(·) has a jump at p = 2 of size 3/7. For some q ∈ [2.5, 3.5), we thus have

F(p) = 2 − 5/(p + 1.5)

if p ≥ q − 1.5 ≡ p. We have p = inf{p : 2 − 5/(p + 1.5) ≥ 0}, giving p = 1. To sum up for F(·), we have

F(p) =
  0, for p < 1;
  2 − 5/(p + 1.5), for p ∈ [1, 2);
  1, for p ≥ 2.

Correspondingly, for p ∈ Sp, we have

p[1 − G(p + 1.5)] ≡ πL ≥ 1,

which gives

G(p + 1.5) = 1 − πL/p.

What is πL? Since there can be no probability mass for G(·) on (0, 3.5) (by an argument similar to step 3), and since we already know F(·), we deduce that πL = 1, implying that q = 2.5. Letting p ↑ 2, we have G(3.5) = G(3.5−) = 1/2, showing that G(5) − G(5−) = 1/2. To sum up, we have

G(q) =
  0, for q < 2.5;
  1 − 1/(q − 1.5), for q ∈ [2.5, 3.5);
  1/2, for q ∈ [3.5, 5);
  1, for q ≥ 5.

Thus F(·) and G(·) constitute the unique NE of this game.
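The claimed equilibrium is straightforward to verify numerically: on its support each firm's profit should be constant (1 for firm L and 5 for firm HL), using consumer A's surplus comparison 2 − p versus 3.5 − q and consumer B's willingness to pay 5 for HL. A minimal sketch:

```python
# Sketch: verify that F and G equalize profits on the firms' supports.
def F(p):                     # firm L's equilibrium price CDF
    if p < 1: return 0.0
    if p < 2: return 2 - 5 / (p + 1.5)
    return 1.0

def G(q):                     # firm HL's equilibrium price CDF
    if q < 2.5: return 0.0
    if q < 3.5: return 1 - 1 / (q - 1.5)
    if q < 5: return 0.5
    return 1.0

# Firm L at p in [1, 2]: wins consumer A iff q > p + 1.5 (G has no atom there).
profits_L = [p * (1 - G(p + 1.5)) for p in [1.0, 1.3, 1.7, 2.0]]
# Firm HL at q in [2.5, 3.5): sells to B always, and to A as well iff p > q - 1.5.
profits_HL = [q * (2 - F(q - 1.5)) for q in [2.5, 2.9, 3.3, 3.49]]
assert all(abs(x - 1) < 1e-9 for x in profits_L)
assert all(abs(x - 5) < 1e-9 for x in profits_HL)
assert abs((F(2.0) - F(2.0 - 1e-9)) - 3 / 7) < 1e-6   # jump of size 3/7 at p = 2
```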

26. Example 8: A monopolistic firm can costlessly produce a durable good and sell it to consumers at dates 1 and 2. At each date t = 1, 2, the demand for the durable good's service is D(q) = 1 − q. The firm seeks to maximize the sum of discounted profits over the two periods. The firm and consumers have a common discount factor δ ∈ (0, 1]. Let qt be the quantity produced by the firm at date t. If q1 units are produced at date 1, the firm has two options: either the q1 units can be sold to consumers, who are then free to resell at date 2 (assume that the durable good never depreciates), or they can be leased to consumers. We shall show that leasing is generally better than selling. As we shall see, the entire problem hinges on whether the firm can internalize the date-2 price impact brought about by the newly produced q2.

Let us ask: if the firm can sign a full-commitment long-term contract at t = 0 to sell the product over the two periods, what would the optimal selling policy (q1, q2) be? Note that for the market to clear at t = 2, given any commitment (q1, q2), p2 = 1 − q1 − q2. Thus consumers are willing to pay the price

p1 = (1 − q1) + δp2

for the quantity q1 at t = 1, or, after rearranging,

p1 = (1 + δ)(1 − q1) − δq2.

The interpretation is that consumers at date 1, expecting the firm to produce an additional amount q2 at date 2, realize that the product's date-2 resale value will be reduced by q2. With rational

expectations, then, the price consumers are willing to pay at date 1 is reduced accordingly. The firm can benefit from selling q2 at date 2 by q2p2. The total sum of discounted profits is

f(q1, q2) = q1[(1 + δ)(1 − q1) − δq2] + δq2[1 − q1 − q2].

Note the following partial derivatives:

f1 = (1 + δ)(1 − 2q1) − 2δq2,

f2 = −δq1 + δ(1 − q1 − 2q2),

f11 = −2(1 + δ), f22 = −2δ = f12,

which imply that

f11 < 0, f11f22 − f12² = 4δ > 0,

showing that f is strictly concave in (q1, q2), so that the unique optimal selling policy solves

f1 = f2 = 0, ⇒ q1 = 1/2, q2 = 0.

Note that, by inspecting f2, producing q2 > 0 at date 2 can be beneficial if and only if the marginal profit from selling q2 exceeds the loss in the resale value of the product; i.e.,

[1 − q1 − 2q2]dq2 > q1dq2, or q1 ≤ 1/2 − q2 < 1/2.

It follows that any q2 > 0 is suboptimal: the firm can at least commit to the optimal static policy q1 = 1/2 and charge p1 = (1 + δ)/2. At this point, consider increasing q2 slightly from zero. The marginal benefit is [1 − q1 − 2q2]dq2, which is lower than q1dq2 (all the q1 units sold at date 1 lose a resale value of dq2) at q1 = 1/2. As the function f is strictly concave, this local property holds globally (roughly, for q2 far from zero, things only get worse).
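The strict concavity argument can be corroborated by a brute-force grid search over (q1, q2) ∈ [0, 1]²; the discount factor below is an illustrative assumption:

```python
# Sketch: grid search confirming that f(q1, q2) peaks at (1/2, 0).
delta = 0.9                   # illustrative discount factor

def f(q1, q2):
    return q1 * ((1 + delta) * (1 - q1) - delta * q2) + delta * q2 * (1 - q1 - q2)

n = 200
grid = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
best = max(grid, key=lambda qq: f(*qq))
assert best == (0.5, 0.0)                          # the full-commitment optimum
assert abs(f(0.5, 0.0) - (1 + delta) / 4) < 1e-12  # profit (1 + delta)/4
```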

What happens here is that, after selling the q1 units of the durable good at date 1, the firm can gain by supplying additional q2 > 0 units at date 2: by producing the

additional q2 > 0, the date-2 market value of all the q1 + q2 units drops, but since the firm only possesses q2 units of the durable good at date 2, a portion of this loss is borne by the other suppliers (the purchasers at date 1). This raises the firm's incentive to over-produce, leading to an aggregate output level higher than in the monopoly case, so that the sum of discounted profits is lower than what the firm could get if it committed to maintaining the supply quantity at the monopoly level 1/2. Since all other date-2 suppliers are actually purchasers at date 1, and since these purchasers have rational expectations, this loss of profits must again be borne by the firm in the date-1 equilibrium. More precisely, at date 1, consumers purchasing q1 realize that the firm cannot commit to not raising the aggregate output level above 1/2, and so the product they purchase today will not have a resale value as high as it would if the date-2 supply quantity were committed to the monopolistic level; hence they will not pay the seller as much as they would if the seller could commit to not producing q2 > 0 at date 2.

27. What if the firm can commit at t = 0, once and for all, to a two-period leasing contract? In other words, the firm commits to retain ownership of the product and charges (possibly different) rents from users in the two periods. We claim immediately that under the optimal precommitment leasing contract, the firm offers identical terms of trade in both periods. The idea is that if one period yielded a higher profit than the other, then the firm should have committed to a scheme that assigns the high-profit-period terms of trade to both periods, which contradicts the assumed optimality of the precommitment contract. It then follows that the optimal contract is a straightforward repetition of the static optimal contract: pricing at 1/2 in each period, producing q1 = 1/2 at date 1 and nothing later on. In this case, as it turns out, the scheme can be implemented without commitment power (simply because it is subgame perfect). This fact implies that, without being able to make long-term commitments, the firm is better off leasing than selling the product at date 1.

Compared with the case of selling, here the firm possesses all the q1 + q2 units at date 2, so that it must fully internalize the date-2 price impact brought about by any additional units q2 > 0. This implies that q2 = 0 is optimal for the firm in a subgame starting at date 2 with any q1 ≥ 1/2. Backward induction then implies that q1 = 1/2 is optimal.
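A one-line check (with an illustrative δ) that leasing the static monopoly quantity in each period reproduces the full-commitment selling profit:

```python
# Sketch: per-period leasing at rent 1/2 on quantity 1/2 gives (1 + delta)/4,
# exactly the optimal full-commitment selling profit f(1/2, 0).
delta = 0.9                                      # illustrative discount factor
lease_profit = (1 + delta) * 0.5 * 0.5           # rent 1/2 times quantity 1/2, both periods
commit_profit = 0.5 * (1 + delta) * (1 - 0.5)    # q1 = 1/2 sold at p1 = (1 + delta)/2
assert abs(lease_profit - commit_profit) < 1e-12
```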

What can a money-back guarantee (MBG) do to improve the selling scheme when direct commitments to output levels are infeasible? With such a guarantee, at date 2, given q1, the firm chooses q2 to

max over q2 of (q1 + q2)(1 − q1 − q2).

(We have assumed that p2 ≤ δp1/(1 + δ), and this can be verified to be necessary for equilibrium with q2 ≥ 0.) Note that with the guarantee, the price charged to date-1 consumers is actually contingent on the realized p2. But then the seller must fully internalize the date-2 price impact brought about by any additional q2 > 0. Hence it should not be surprising that an MBG can help restore efficiency.

References

1. Aumann, R., 1959, Acceptable points in general cooperative n-person games, in Contributions to the Theory of Games, IV, Princeton Uni-versity Press.

2. Ben-Porath, E., and E. Dekel, 1992, Signaling future actions and the potential for sacrifice, Journal of Economic Theory, 57, 36-51.

3. Bernheim, D., 1984, Rationalizable strategic behavior, Econometrica, 52, 1007-1028.

4. Bernheim, D., D. Peleg, and M. Whinston, 1987, Coalition-proof Nash equilibria, I: Concepts, Journal of Economic Theory, 42, 1-12.

5. Bulow, J., J. Geanakoplos, and P. Klemperer, 1985, Multimarket oligopoly: strategic substitutes and complements, Journal of Political Economy, 93, 488-511.

6. Fudenberg, D., and J. Tirole, 1991, Game Theory, MIT Press.

7. Glicksberg, I. L., 1952, A further generalization of the Kakutani fixed point theorem with application to Nash equilibrium points, Proceedings of the National Academy of Sciences, 38, 170-174.


8. Harsanyi, J., 1973, Oddness of the number of equilibrium points: a new proof, International Journal of Game Theory, 2, 235-250.

9. Kakutani, S., 1941, A generalization of Brouwer’s fixed point theorem, Duke Mathematical Journal, 8, 457-459.

10. Kahneman, D., and A. Tversky, 1979, Prospect theory: an analysis of decision under risk, Econometrica, 47, 263-291.

11. Kohlberg, E., and J.-F. Mertens, 1986, On the strategic stability of equilibria, Econometrica, 54, 1003-1037.

12. Kreps, D., and R. Wilson, 1982, Sequential equilibria, Econometrica, 50, 863-894.

13. Kuhn, H., 1953, Extensive games and the problem of information, Annals of Mathematics Studies, No. 28, Princeton University Press.

14. Milgrom, P., and R. Weber, 1982, A theory of auctions and competitive bidding, Econometrica, 50, 1089-1122.

15. Myerson, R., 1978, Refinements of the Nash equilibrium concept, International Journal of Game Theory, 7, 73-80.

16. Nash, J., 1950, Equilibrium points in n-person games, Proceedings of the National Academy of Sciences, 36, 48-49.

17. Newbery, D., 1984, Pareto inferior trade, Review of Economic Studies, 51, 1-12.

18. Osborne, M. J., and A. Rubinstein, 1994, A Course in Game Theory, MIT Press.

19. Selten, R., 1965, Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit, Zeitschrift für die gesamte Staatswissenschaft, 121, 301-324.

20. Selten, R., 1975, Re-examination of the perfectness concept for equilibrium points in extensive games, International Journal of Game Theory, 4, 25-55.


21. von Neumann, J., and O. Morgenstern, 1944, Theory of Games and Economic Behavior, New York: John Wiley and Sons.

22. Wilson, R., 1971, Computing equilibria of n-person games, SIAM Jour-nal of Applied Mathematics, 21, 80-87.

23. Zermelo, E., 1913, Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, in Proceedings of the Fifth International Congress of Mathematicians.
