Evolutionarily stable strategy

A solution concept in game theory

Relationships
Subset of: Nash equilibrium
Superset of: stochastically stable equilibrium, stable strong Nash equilibrium
Intersects with: subgame perfect equilibrium, trembling hand perfect equilibrium, perfect Bayesian equilibrium

Significance
Proposed by: John Maynard Smith and George R. Price
Used for: biological modeling and evolutionary game theory
Example: Hawk-dove

An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. An ESS is an equilibrium refinement of the Nash equilibrium. It is a Nash equilibrium that is "evolutionarily" stable: once it is fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from invading successfully. The theory is not intended to deal with the possibility of gross external changes to the environment that bring new selective forces to bear.

First published as a specific term in John Maynard Smith's 1972 book On Evolution,[1] the ESS is widely used in behavioural ecology and economics, and has been used in anthropology, evolutionary psychology, philosophy, and political science.

History

Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper.[2] Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution.[1] The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology.[3] Maynard Smith explains further in his 1982 book Evolution and the Theory of Games.[4] Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it.

Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author.

The concept grew out of R. H. MacArthur's[5] and W. D. Hamilton's[6] work on sex ratios, itself rooted in Fisher's principle, and especially out of Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour.[7]


Motivation

The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies.

Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection (with the payoffs of the game representing reproductive success (biological fitness)). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives.

Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes.

Nash equilibria and ESS

An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if this is true for both players and for all T ≠ S:

E(S,S) ≥ E(T,S)

In this definition, strategy T can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS.

Maynard Smith and Price[2] specify two conditions for a strategy S to be an ESS. For all T ≠ S, either

  1. E(S,S) > E(T,S), or
  2. E(S,S) = E(T,S) and E(S,T) > E(T,T)

The first condition is sometimes called a strict Nash equilibrium.[9] The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T.
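These two conditions translate directly into a mechanical test. The following is a minimal sketch in Python, not part of the original formulation: it checks the Maynard Smith and Price conditions for a pure strategy in a symmetric two-player game, where the function name is_ess and the payoff representation E[s][t] are illustrative choices.

    # Minimal sketch: test whether pure strategy s is an ESS under the
    # Maynard Smith & Price conditions. E[s][t] is the payoff for playing
    # strategy s against strategy t in a symmetric two-player game.
    def is_ess(E, s):
        """True if, for every t != s, either E(S,S) > E(T,S), or
        E(S,S) = E(T,S) and E(S,T) > E(T,T)."""
        for t in range(len(E)):
            if t == s:
                continue
            if E[s][s] > E[t][s]:
                continue          # condition 1: strict Nash against t
            if E[s][s] == E[t][s] and E[s][t] > E[t][t]:
                continue          # condition 2: Maynard Smith's second condition
            return False          # strategy t can invade
        return True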

There is also an alternative, stronger definition of ESS, due to Thomas.[10] This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that for all T ≠ S

  1. E(S,S) ≥ E(T,S), and
  2. E(S,T) > E(T,T)

In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second.

In words: the payoff to the first player when both players play strategy S is at least as high as his payoff when he switches to another strategy T while the second player keeps playing S, and the payoff to the first player when only his opponent switches to T is strictly higher than his payoff when both players switch to T.

This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set.[10]
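The non-equivalence of the two definitions can be seen on a small worked example. The payoffs below describe a generic two-strategy coordination game, chosen here purely for illustration (they are not taken from the article's tables): each pure strategy passes the first definition but fails Thomas's.

    # Illustrative coordination game (values assumed); E[s][t] = payoff of s against t.
    #          A     B
    #   A    2, 2  0, 0
    #   B    0, 0  1, 1
    E = [[2, 0],
         [0, 1]]
    A, B = 0, 1

    # First definition: A is an ESS because condition 1 holds strictly.
    print(E[A][A] > E[B][A])                          # True  (2 > 0)

    # Thomas's definition: A fails, because E(A,B) > E(B,B) does not hold.
    print(E[A][A] >= E[B][A] and E[A][B] > E[B][B])   # False (0 > 1 is false)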

Examples of differences between Nash Equilibria and ESSes

Prisoner's Dilemma
             Cooperate   Defect
Cooperate    3, 3        1, 4
Defect       4, 1        2, 2

Harm thy neighbor
        A       B
A     2, 2    1, 2
B     2, 1    2, 2

In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the Prisoner's Dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS.

Some games may have Nash equilibria that are not ESSes. For example, in Harm thy neighbor both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B).

Harm everyone
        C       D
C     2, 2    1, 2
D     2, 1    0, 0

Chicken
           Swerve     Stay
Swerve      0, 0     -1, +1
Stay       +1, -1   -20, -20

Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than does D. So here although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result, C is an ESS.
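The claims made for Harm thy neighbor and Harm everyone can be checked mechanically. Assuming the illustrative is_ess function sketched earlier, the matrices below simply transcribe the row player's payoffs from the two tables above:

    # Row player's payoffs E[s][t] from the tables above (0 = first strategy,
    # 1 = second strategy); assumes the is_ess sketch defined earlier.
    harm_thy_neighbor = [[2, 1],   # A vs A, A vs B
                         [2, 2]]   # B vs A, B vs B
    harm_everyone     = [[2, 1],   # C vs C, C vs D
                         [2, 0]]   # D vs C, D vs D

    print([is_ess(harm_thy_neighbor, s) for s in (0, 1)])  # [False, True]: only B is an ESS
    print([is_ess(harm_everyone, s) for s in (0, 1)])      # [True, False]: C is an ESS, D is not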

Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESS. Consider the Game of chicken. There are two pure strategy Nash equilibria in this game, (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay is an ESS. There is a third Nash equilibrium, a mixed strategy, which is an ESS for this game (see Hawk-dove game and Best response for explanation).
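As a sketch of how that mixed ESS can be located and verified for the Chicken payoffs above (the indifference calculation and the helper names are one illustrative route, not a prescribed method):

    # Chicken payoffs from the table above; E[s][t] is the row player's payoff,
    # with 0 = Swerve, 1 = Stay.
    E = [[0, -1],
         [1, -20]]

    # The mixed strategy sigma plays Stay with probability p chosen so that
    # Swerve and Stay earn the same payoff against sigma:
    #   -p = (1 - p) * 1 + p * (-20)   =>   p = 1/20
    p = 1 / 20
    sigma = [1 - p, p]

    def payoff(x, y):
        """Expected payoff of mixed strategy x against mixed strategy y."""
        return sum(x[i] * y[j] * E[i][j] for i in range(2) for j in range(2))

    # Both pure invaders score exactly as well against sigma as sigma does
    # against itself (by construction), so the ESS test reduces to Maynard
    # Smith's second condition E(sigma, T) > E(T, T):
    for name, t in (("Swerve", [1.0, 0.0]), ("Stay", [0.0, 1.0])):
        print(name, payoff(sigma, t) > payoff(t, t))   # True for both invaders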

This last example points to an important difference between Nash equilibria and ESSes. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), whereas an ESS is defined in terms of a strategy itself. The equilibria defined by an ESS must always be symmetric, and thus have fewer equilibrium points.

ESS vs. Evolutionarily Stable State

In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations.

Thomas (1984)[11] applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS.

Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic.[4]

Stochastic ESS

In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist. In a finite population, an ESS can instead be defined as a strategy which, should it become invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability >p.[12]
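A rough Monte-Carlo sketch of this finite-population idea is given below: it estimates, under a simple frequency-dependent Moran process, the probability that a single mutant of one strategy takes over a resident population of the other, so that the two directions of invasion can be compared. The game, population size, and trial count are assumptions chosen purely for illustration and are not taken from the cited paper.

    import random

    # Illustrative payoffs (the Harm-everyone game from earlier) and an assumed
    # finite population size.
    E = [[2, 1],   # C vs C, C vs D
         [2, 0]]   # D vs C, D vs D
    N = 30         # population size (assumed)
    TRIALS = 2000  # Monte-Carlo trials (assumed)

    def fixation_probability(mutant, resident):
        """Estimate the chance that one mutant individual reaches fixation
        under a frequency-dependent Moran process."""
        fixed = 0
        for _ in range(TRIALS):
            i = 1                                     # current number of mutants
            while 0 < i < N:
                # average payoff of each type against a random other individual
                f_m = (E[mutant][mutant] * (i - 1) + E[mutant][resident] * (N - i)) / (N - 1)
                f_r = (E[resident][mutant] * i + E[resident][resident] * (N - i - 1)) / (N - 1)
                total = f_m * i + f_r * (N - i)
                born_mutant = random.random() < f_m * i / total   # birth proportional to payoff
                dies_mutant = random.random() < i / N             # death uniformly at random
                i += int(born_mutant) - int(dies_mutant)
            fixed += (i == N)
        return fixed / TRIALS

    # A strategy that counter-invades more readily than it is invaded is the
    # better candidate for an ESS in this finite-population sense.
    print("C invading D:", fixation_probability(0, 1))
    print("D invading C:", fixation_probability(1, 0))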

Prisoner's dilemma and ESS

Prisoner's Dilemma
             Cooperate   Defect
Cooperate    3, 3        1, 4
Defect       4, 1        2, 2

A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans.

Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round: it responds to Cooperate with Cooperate and Defect with Defect.

If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect, so the mutant's share of the population is kept small. Tit for Tat is therefore an ESS, with respect to only these two strategies. On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of them.[13] If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit-for-Tat can coexist, if a small percentage of the population is Always Defect, the selective pressure is against Always Cooperate and in favour of Tit-for-Tat. This is because cooperating pays less than defecting whenever the opponent defects.
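These relationships can be illustrated with a minimal round-robin sketch of the three contingency plans, using the Prisoner's Dilemma payoffs from the table above; the number of rounds and the helper names are assumptions made only for this sketch.

    # Payoffs to the row player, taken from the Prisoner's Dilemma table above.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 1, ('D', 'C'): 4, ('D', 'D'): 2}
    ROUNDS = 200   # assumed number of repetitions

    def always_defect(opponent_history):    return 'D'
    def always_cooperate(opponent_history): return 'C'
    def tit_for_tat(opponent_history):      # copy the opponent's last move
        return opponent_history[-1] if opponent_history else 'C'

    def average_payoff(p1, p2):
        """Average per-round payoff to p1 when it repeatedly meets p2."""
        h1, h2, total = [], [], 0
        for _ in range(ROUNDS):
            m1, m2 = p1(h2), p2(h1)       # each strategy sees only its opponent's history
            total += PAYOFF[(m1, m2)]
            h1.append(m1)
            h2.append(m2)
        return total / ROUNDS

    strategies = {'AllD': always_defect, 'AllC': always_cooperate, 'TFT': tit_for_tat}
    for name1, s1 in strategies.items():
        print(name1, {name2: round(average_payoff(s1, s2), 2)
                      for name2, s2 in strategies.items()})
    # In a TFT population, AllC earns the same as TFT (both about 3) while AllD
    # earns only about 2 against TFT; but AllD earns 4 against AllC, which is
    # the selective pressure against Always Cooperate described above.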

This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives.

ESS and human behavior

The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies.[14]

Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, stable states exist for a large class of adaptive dynamics. As a result, ESSes can be used to explain human behaviours that lack any genetic influence.


References

  1. Maynard Smith, J. (1972). "Game Theory and The Evolution of Fighting". On Evolution. Edinburgh University Press. ISBN 0-85224-223-9.
  2. Maynard Smith, J.; Price, G. R. (1973). "The logic of animal conflict". Nature 246 (5427): 15–18. Bibcode:1973Natur.246...15S. doi:10.1038/246015a0.
  3. Maynard Smith, J. (1974). "The Theory of Games and the Evolution of Animal Conflicts". Journal of Theoretical Biology 47 (1): 209–221. doi:10.1016/0022-5193(74)90110-6. PMID 4459582.
  4. Maynard Smith, John (1982). Evolution and the Theory of Games. ISBN 0-521-28884-3.
  5. MacArthur, R. H. (1965). In Waterman, T.; Horowitz, H. (eds.), Theoretical and Mathematical Biology. New York: Blaisdell.
  6. Hamilton, W. D. (1967). "Extraordinary sex ratios". Science 156 (3774): 477–488. Bibcode:1967Sci...156..477H. doi:10.1126/science.156.3774.477. JSTOR 1721222. PMID 6021675.
  7. Press release for the 1999 Crafoord Prize.
  8. Alexander, Jason McKenzie (23 May 2003). "Evolutionary Game Theory". Stanford Encyclopedia of Philosophy. Retrieved 31 August 2007.
  9. Harsanyi, J. (1973). "Oddness of the number of equilibrium points: a new proof". International Journal of Game Theory 2 (1): 235–250. doi:10.1007/BF01737572.
  10. Thomas, B. (1985). "On evolutionarily stable sets". Journal of Mathematical Biology 22: 105–115. doi:10.1007/bf00276549.
  11. Thomas, B. (1984). "Evolutionary stability: states and strategies". Theoretical Population Biology 26 (1): 49–67. doi:10.1016/0040-5809(84)90023-6.
  12. King, Oliver D.; Masel, Joanna (1 December 2007). "The evolution of bet-hedging adaptations to rare scenarios". Theoretical Population Biology 72 (4): 560–575. doi:10.1016/j.tpb.2007.08.006. PMC 2118055. PMID 17915273.
  13. Axelrod, Robert (1984). The Evolution of Cooperation. ISBN 0-465-02121-2.
  14. Mealey, L. (1995). "The sociobiology of sociopathy: An integrated evolutionary model". Behavioral and Brain Sciences 18 (3): 523–599. doi:10.1017/S0140525X00039595.
