Applied Game Theory and Strategic Behavior
International Journal of Business and Economics

Had the authors been aware of this paper and even used it as the skeleton, perhaps the first four chapters of this book would look much better than they now do. Chapter 1 gives a brief history of game theory. The ambition is admirable, yet much of the jargon is not introduced in a way that would encourage readers with no prior exposure to game theory to continue. In my opinion, the delivery should differ substantially from that of "The New Palgrave: A Dictionary of Economics." The Appendix in Camerer (1991) offers an imprecise yet informative glossary of game-theoretic terms, which could be a useful reference for writing (and revising) this book. For those who teach game theory to undergraduates, Chapter 2 (Strategy and Game Theory Concepts) is naturally a must-read and can be used to gauge the quality of the (technical) writing. My first and least important comment is that the quote from Sun Tzu on p.9, "Thus, what is of supreme importance in war is to attack the enemy's strategy," looks odd here. With all due respect to Sun Tzu, attacking the enemy's strategy, in this context, merely reflects "having best responses or replies to the enemy's actions in mind," which might have been known even to laymen at that time. Strategies (and actions) are components of the rules of the game, which we learn, follow, and work with. A phrase such as "attacking rivals' strategies" might confuse beginners, and later on make equilibrium (or solution) concepts incomprehensible. Beyond that quote and a loosely organized opening in Chapter 2, I very much dislike the section entitled "Consumer Behavior, Utility Theory, and Game Theory," owing to problems with notation and to concepts that should not be brought up at all.
For instance, strict preference (an ordering) is defined while the indifference relation is not; neoclassical utility functions are introduced with the space of bundles as the domain, yet any reader might wonder how on earth a rival's choice can affect his satisfaction or utility (let alone payoff); the notion of an information set is absent; and a reasonable galley proof should have detected where subscripts must be used (e.g., p.13). "Those who can, do. Those who cannot, teach," is, unfortunately but equally acceptably, often said when an educator joins a dinner party or informal gathering in Western societies. Having been "teaching" for over 25 years, I would like to "do" something about the criticism given above, just to be supportive, if not constructive enough. Were I writing Chapter 2, I would begin with the well-known prisoners' dilemma story, coined by Albert W. Tucker, as follows. Bonnie (player 1) and her partner in crime Clyde (player 2) together robbed a bank, hid the money and the evidence, and were finally arrested. They are questioned in separate rooms simultaneously. If player 1 (resp. player 2) implicates the other while player 2 (resp. player 1) does not, player 1 (resp. player 2) will be set free (as the suspect-turned-state-witness) while the other will spend 40 years in prison. If each implicates the other, both will be sentenced to 20 years. If neither implicates the other, both receive only a one-year sentence. Assume that players care only about the ordinal ranking of their own outcomes, so that the ranking 0 > −1 > −20 > −40 can be relabeled 4 > 3 > 1 > 0. We now say that the payoff to player 1 is 4 if player 1 implicates player 2 while player 2 does not; 3 if neither implicates the other (i.e., a tacit collusion plan is carried out); 1 if each implicates the other; and 0
if player 2 implicates player 1 while player 1 does not (i.e., player 1 becomes a sucker because player 2 benefits from unilaterally deviating from the tacit collusion plan). The 1987 story invented by Robert Aumann then comes in handy. Two kids are asked to simultaneously and independently make a wish: either selfishly ask for $1 for herself or altruistically ask for $3 for her friend. We can immediately wrap this scenario up formally by introducing payoff functions π1 and π2 for player 1 and player 2, respectively, as follows: π1(s, s) := 1 + 0 = 1, π1(s, a) := 1 + 3 = 4, π1(a, s) := 0 + 0 = 0, π1(a, a) := 0 + 3 = 3, π2(s, s) := 1, π2(s, a) := 0, π2(a, s) := 4, and π2(a, a) := 3, where, for example, the ordered pair of strategies (s, a) stands for the situation in which player 1 is selfish (hence the letter s) while player 2 is altruistic (justifying the letter a). It goes without saying that Aumann's version captures the spirit of the prisoners' dilemma exactly, at less cost in elaborating and assigning payoffs. Note that both players have the same strategy set {s, a}; the domain of both payoff functions is {(s, s), (s, a), (a, s), (a, a)}. By observing that each payoff function assigns a unique number to each ordered pair of strategies, readers can understand what strategic interaction is all about, which is better than bringing in the notion of interdependent utility functions. If both players agree to play (s, s), then whoever makes a unilateral deviation will end up receiving payoff 0; hence deviation is not worthwhile. At a Nash equilibrium, no player has an incentive to make any unilateral deviation. Hence (s, s) is a Nash equilibrium, which has the self-enforcing property. Note also that for (s, s) to be a Nash equilibrium, we only need to verify π1(s, s) ≥ π1(a, s) and π2(s, s) ≥ π2(s, a), implying that Nash equilibrium is informationally efficient.
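The payoff assignment and the unilateral-deviation check above can be sketched in a few lines of code. This is my own illustration, not the book's MATLAB material; the function and variable names are invented for exposition.

```python
# Aumann's two-kid game: s = selfishly ask $1 for yourself,
# a = altruistically ask $3 for your friend.
from itertools import product

STRATEGIES = ("s", "a")

def payoff(profile):
    """Return (pi1, pi2): each kid gets $1 if selfish, plus $3 if the other is altruistic."""
    s1, s2 = profile
    pi1 = (1 if s1 == "s" else 0) + (3 if s2 == "a" else 0)
    pi2 = (1 if s2 == "s" else 0) + (3 if s1 == "a" else 0)
    return pi1, pi2

def is_nash(profile):
    """Nash equilibrium: no player gains by a unilateral deviation."""
    s1, s2 = profile
    pi1, pi2 = payoff(profile)
    return (all(payoff((d, s2))[0] <= pi1 for d in STRATEGIES) and
            all(payoff((s1, d))[1] <= pi2 for d in STRATEGIES))

equilibria = [p for p in product(STRATEGIES, repeat=2) if is_nash(p)]
print(equilibria)  # [('s', 's')] -- the unique Nash equilibrium
```

Note that `is_nash` inspects only each player's own deviations from the candidate profile, which is precisely the informational efficiency remarked on above.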
To see why the tacit collusion profile (a, a) is not a Nash equilibrium, it suffices to note either π1(a, a) < π1(s, a) or π2(a, a) < π2(a, s). By putting π1 and π2 on the two axes and marking the four ordered pairs of payoffs in the first quadrant, we can easily present the notion of Pareto efficiency (for this two-person society). Once readers see that (s, s) is the unique Nash equilibrium while the other three profiles are Pareto efficient, they will understand why the term dilemma is used. Best responses can then be introduced, followed by reaction curves, dominant (or dominating) strategies, dominant strategy equilibrium, dominated strategies, and mixed strategies. Other solution concepts such as minmax, maxmin, and the Hurwicz criterion can be introduced too. So much for the one-shot, or static, game introduction. Next, assume that player 1 moves first; backward induction then helps us understand the notion of Stackelberg equilibrium, which says that (s, s) will still be played. At this juncture, simple 2 × 2 examples can be supplemented to illustrate that there are situations in which players would fight to be the follower, or in which being the leader has the advantage. Such examples are quite standard in an undergraduate industrial organization course. None of the above requires calculus, and it might interest readers to explore further. For instance, how can we model a three-person prisoners' dilemma? Why is the information set crucial in using a game tree (the extensive form) to describe a simultaneous-move game? Why must we distinguish a strategy from an action when we analyze repeated games?
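The Pareto comparison behind the "dilemma" can also be enumerated directly. A minimal sketch, assuming the payoff table above; the helper names are mine, not the book's.

```python
# Pareto efficiency in the 2x2 game: a profile is dominated if some other
# profile makes one player strictly better off and nobody worse off.
PAYOFFS = {("s", "s"): (1, 1), ("s", "a"): (4, 0),
           ("a", "s"): (0, 4), ("a", "a"): (3, 3)}

def pareto_dominated(profile):
    """True if another profile weakly improves both payoffs and strictly improves one."""
    u = PAYOFFS[profile]
    return any(v[0] >= u[0] and v[1] >= u[1] and v != u
               for q, v in PAYOFFS.items() if q != profile)

efficient = [p for p in PAYOFFS if not pareto_dominated(p)]
print(efficient)  # [('s', 'a'), ('a', 's'), ('a', 'a')]
```

The unique Nash equilibrium (s, s) is the one profile that fails the efficiency test, since (a, a) improves both payoffs; that is the dilemma in two lines of output.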
Chapter 3 begins with the prisoners' dilemma game commented on earlier. With MATLAB, the authors show how to build a model, specify payoffs, and let the software find dominant strategies and Nash equilibria. Using computer software to find Nash equilibria is not new; it was done decades ago. But I find it well justified here. I do have three comments. First, some serious discussion of the relation between iterative elimination of (strictly) dominated strategies and the computation of Nash equilibria should be added before MATLAB is used to find dominated strategies. Second, the use of MATLAB would be better justified if the authors had it work on a one-shot game with a large number of players. [An unpublished paper of mine runs experiments and finds (with the help of lemmas and logical reasoning, not computer software) the Nash equilibria and Pareto-efficient strategy profiles of a static ad hoc game with arbitrarily many players in the spirit of the prisoners' dilemma (à la Thomas Schelling).] Finally, for the battle of the sexes, I prefer the usual out-of-equilibrium payoffs to those on p.43, where husband and wife are indifferent whenever the date is off. To be exact, a better and well-received way of modeling it is to assume that by nature the husband prefers football to the musical while the wife prefers the musical to football, and that along another dimension both prefer dating to attending events separately. So it is better to set π1(f, f) := 3, π1(f, m) := 1, π1(m, f) := 0, π1(m, m) := 2, π2(f, f) := 2, π2(f, m) := 1, π2(m, f) := 0, and π2(m, m) := 3. An inquiry into the strategic value of a business is important, as the authors argue in Chapter 4. But readers might get lost along the journey. The appendices on stochastic processes, Brownian motion, and the Bellman equation do no good here.
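The relation between elimination of dominated strategies and equilibrium, raised in my first comment, can be made concrete with a short sketch of iterated elimination of strictly dominated strategies (IESDS) for a two-player game. The function names and data layout here are my own assumptions, not the book's MATLAB code.

```python
# Iterated elimination of strictly dominated pure strategies (IESDS)
# for a two-player game stored as payoffs[(row, col)] = (u1, u2).
def iesds(payoffs, rows, cols):
    """Repeatedly delete any pure strategy strictly dominated by another."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # is row r strictly dominated by some r2?
            if any(all(payoffs[(r2, c)][0] > payoffs[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:  # is column c strictly dominated by some c2?
            if any(all(payoffs[(r, c2)][1] > payoffs[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

pd_game = {("s", "s"): (1, 1), ("s", "a"): (4, 0),
           ("a", "s"): (0, 4), ("a", "a"): (3, 3)}
print(iesds(pd_game, ["s", "a"], ["s", "a"]))  # (['s'], ['s'])
```

In the prisoners' dilemma the procedure alone pins down the equilibrium, whereas on the battle-of-the-sexes payoffs above it eliminates nothing; that contrast is exactly the discussion I would want to see before the software is invoked.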
Simply working through some nice parametric example, as David Kreps did in "A Course in Microeconomic Theory" (also cited in the book), would be a good choice. Even the classic chain store paradox would serve readers better. It is fair to say that the merits of this book lie nowhere but in Chapters 5 through 8, where background information on the beer industry, the wine and spirits industry, Corporate Average Fuel Economy standards, and the automobile industry is adequately documented. Nevertheless, I see case studies there but find no demonstration of how to apply the skills learned here to produce solid reports or policy recommendations. Presenting several tables of sales data is easy; justifying payoffs by interpreting those data is tough, and absent here. Recall that the ambition of this book is to define games based on actual situations, model games with payoffs and probabilities, and make strategically sound decisions. I am afraid that task remains only halfway done. The market already has enough good books on game theory and the law, which diminishes the value of Chapter 9. It also surprises me that auctions are never mentioned in the book! Martin Shubik's dollar auction is entertaining and inspiring; some discussion, or even classroom experiments, comparing William Vickrey's second-price sealed-bid auction with the open-outcry (English) auction could be done easily. In contrast to the days when not many economists joined Milton Friedman in the lineup of prominent advocates, nowadays more pop stars of academia (e.g., Gregory Mankiw and Steven Levitt) have the tools and the charm to explain people's
lives to them. Freakonomics, the 2005 best seller by Steven Levitt and Stephen Dubner, is a stand-out. It is unfair to compare this book to Freakonomics, Milgrom and Roberts (1992), Harrington (2009), or Rasmusen (2006). But it pays to identify markets for them. Milgrom and Roberts (1992) is a definitive MBA text covering an important part of the space of management problems, extremely useful if we view management as stewardship. For the other dimensions along which the nature of management is not (entirely) stewardship, we need other texts. Harrington (2009), the newcomer and my favorite, and Rasmusen (2006) are rich in content but may be too technical for non-econ majors. Here comes my point: there is a potential market for a book like this, provided that a major revision is done. Casey Stengel once said: "There are three things you can do in a baseball game. You can win, or you can lose, or it can rain." Stealing his witty words, I would conclude by saying that there are three things you can do in a book review: you can give a thumbs-up, you can be critical, or you can save your effort, cross your fingers, and patiently wait for the second edition. The last is what I love to do.

References

Camerer, C. F., (1991), "Does Strategy Research Need Game Theory?" Strategic Management Journal, 12, 137-152.
Harrington, J. E. Jr., (2009), Games, Strategies, and Decision Making, New York: Worth Publishers.
Milgrom, P. and J. Roberts, (1992), Economics, Organization and Management, Englewood Cliffs: Prentice-Hall.
Rasmusen, E., (2006), Games and Information: An Introduction to Game Theory, 4th edition, Oxford: Blackwell Publishers.

Jong-Shin Wei*
Department of International Business Administration, Wenzao Ursuline College of Languages

* Correspondence to: Department of International Business Administration, Wenzao Ursuline College of Languages, 900 Mintzu First Road, Kaohsiung 80793, Taiwan. Email: jsw12011958@gmail.com.
Encouragement and comments from a board member are acknowledged with gratitude.