Potential game
In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed through a single global function called the potential function. The concept of congestion games was introduced by Robert W. Rosenthal in 1973; Dov Monderer and Lloyd Shapley[1] later introduced the concept of a potential game and proved that every congestion game is a potential game.
The properties of several types of potential games have since been studied. Games can be either ordinal or cardinal potential games. In cardinal games, the change in a player's payoff from unilaterally switching strategies (ceteris paribus) must equal the change in the value of the potential function. In ordinal games, only the signs of the two changes have to agree.
The potential function is a useful tool to analyze equilibrium properties of games, since the incentives of all players are mapped into one function, and the set of pure Nash equilibria can be found by locating the local optima of the potential function. Convergence and finite-time convergence of an iterated game towards a Nash equilibrium can also be understood by studying the potential function.
Definition
We first fix the notation used in the definitions. Let $N$ be the number of players, let $A = A_1 \times \cdots \times A_N$ be the set of action profiles over the action sets $A_i$ of the individual players, and let $u = (u_1, \ldots, u_N)$, with $u_i : A \to \mathbb{R}$, be the payoff function. Given a profile $a \in A$, write $a_{-i}$ for the actions of all players other than $i$.
A game is:
- an exact potential game if there is a function $\Phi : A \to \mathbb{R}$ such that for every player $i$, every $a_{-i} \in A_{-i}$, and every $a'_i, a''_i \in A_i$, $\Phi(a'_i, a_{-i}) - \Phi(a''_i, a_{-i}) = u_i(a'_i, a_{-i}) - u_i(a''_i, a_{-i})$,
- That is: when player $i$ switches from action $a''_i$ to action $a'_i$, the change in the potential equals the change in the utility of that player.
- a weighted potential game if there is a function $\Phi : A \to \mathbb{R}$ and a vector $w \in \mathbb{R}_{++}^{N}$ of positive weights such that $\Phi(a'_i, a_{-i}) - \Phi(a''_i, a_{-i}) = w_i \bigl( u_i(a'_i, a_{-i}) - u_i(a''_i, a_{-i}) \bigr)$,
- an ordinal potential game if there is a function $\Phi : A \to \mathbb{R}$ such that $u_i(a'_i, a_{-i}) - u_i(a''_i, a_{-i}) > 0 \;\Leftrightarrow\; \Phi(a'_i, a_{-i}) - \Phi(a''_i, a_{-i}) > 0$,
- a generalized ordinal potential game if there is a function $\Phi : A \to \mathbb{R}$ such that $u_i(a'_i, a_{-i}) - u_i(a''_i, a_{-i}) > 0 \;\Rightarrow\; \Phi(a'_i, a_{-i}) - \Phi(a''_i, a_{-i}) > 0$,
- a best-response potential game if there is a function $\Phi : A \to \mathbb{R}$ such that for every player $i$ and every $a_{-i} \in A_{-i}$, $b_i(a_{-i}) = \arg\max_{a_i \in A_i} \Phi(a_i, a_{-i})$,
- where $b_i(a_{-i})$ denotes the set of payoff-maximizing (best-reply) actions for player $i$ given $a_{-i}$.
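For a finite game, the exact-potential condition can be verified mechanically. The following sketch is a hypothetical helper written for this article (not taken from any library): it checks whether a candidate function is an exact potential by confirming that every unilateral deviation changes the deviating player's payoff and the candidate potential by the same amount. The usage line at the bottom anticipates the two-player example of the next section.

```python
from itertools import product

def is_exact_potential(action_sets, payoffs, phi):
    """Return True if phi is an exact potential for the given finite game.

    action_sets : list of tuples, action_sets[i] is player i's action set.
    payoffs     : list of functions, payoffs[i](profile) is player i's payoff.
    phi         : candidate potential, phi(profile).
    """
    for profile in product(*action_sets):
        for i, actions in enumerate(action_sets):
            for a in actions:
                deviation = profile[:i] + (a,) + profile[i + 1:]
                # Exact potential: a unilateral deviation by player i must
                # change u_i and phi by exactly the same amount.
                if payoffs[i](deviation) - payoffs[i](profile) != phi(deviation) - phi(profile):
                    return False
    return True

# Usage with the two-player example developed in the next section
# (u_i(s_i, s_j) = b_i s_i + w s_i s_j, with b1 = 2, b2 = -1, w = 3):
b1, b2, w = 2, -1, 3
u = [lambda s: b1 * s[0] + w * s[0] * s[1],
     lambda s: b2 * s[1] + w * s[0] * s[1]]
phi = lambda s: b1 * s[0] + b2 * s[1] + w * s[0] * s[1]
print(is_exact_potential([(+1, -1), (+1, -1)], u, phi))   # True
```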
A simple example
|               | Player 2: +1    | Player 2: –1    |
|---------------|-----------------|-----------------|
| Player 1: +1  | +b1+w, +b2+w    | +b1–w, –b2–w    |
| Player 1: –1  | –b1–w, +b2–w    | –b1+w, –b2+w    |

Fig. 1: Potential game example
In a 2-player, 2-strategy game with externalities, individual players' payoffs are given by the function ui(si, sj) = bi si + w si sj, where si is player i's strategy, sj is the opponent's strategy, and w is a positive externality from choosing the same strategy. The strategy choices are +1 and −1, as seen in the payoff matrix in Figure 1.
This game has a potential function P(s1, s2) = b1 s1 + b2 s2 + w s1 s2.
If player 1 moves from −1 to +1, the payoff difference is Δu1 = u1(+1, s2) – u1(–1, s2) = 2 b1 + 2 w s2.
The change in potential is ΔP = P(+1, s2) – P(–1, s2) = (b1 + b2 s2 + w s2) – (–b1 + b2 s2 – w s2) = 2 b1 + 2 w s2 = Δu1.
The calculation for player 2 is analogous. Using the numerical values b1 = 2, b2 = −1, w = 3, this example becomes a simple battle of the sexes. The game has two pure Nash equilibria, (+1, +1) and (−1, −1), and these are exactly the local maxima of the potential function. The only stochastically stable equilibrium is (+1, +1), the global maximum of the potential function.
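As a minimal check of the claims above, the sketch below (assuming only the payoff and potential functions already defined in this example, with the numerical values b1 = 2, b2 = −1, w = 3) enumerates the four strategy profiles, lists the pure Nash equilibria, and confirms that they coincide with the local maxima of P, with (+1, +1) as the global maximum.

```python
from itertools import product

b1, b2, w = 2, -1, 3                  # numerical values from the example
S = (+1, -1)

def u1(s1, s2): return b1 * s1 + w * s1 * s2             # player 1's payoff
def u2(s1, s2): return b2 * s2 + w * s1 * s2             # player 2's payoff
def P(s1, s2):  return b1 * s1 + b2 * s2 + w * s1 * s2   # potential function

profiles = list(product(S, S))

# Pure Nash equilibria: no player gains from a unilateral deviation.
nash = [(s1, s2) for s1, s2 in profiles
        if all(u1(d, s2) <= u1(s1, s2) for d in S)
        and all(u2(s1, d) <= u2(s1, s2) for d in S)]

# Local maxima of the potential with respect to unilateral deviations.
local_max = [(s1, s2) for s1, s2 in profiles
             if all(P(d, s2) <= P(s1, s2) for d in S)
             and all(P(s1, d) <= P(s1, s2) for d in S)]

print("Pure Nash equilibria:", nash)        # [(1, 1), (-1, -1)]
print("Local maxima of P:  ", local_max)    # the same two profiles
print("Global maximum of P:", max(profiles, key=lambda s: P(*s)))   # (1, 1)
```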
A 2-player, 2-strategy game cannot be a potential game unless

[u1(+1, −1) + u1(−1, +1)] − [u1(+1, +1) + u1(−1, −1)] = [u2(+1, −1) + u2(−1, +1)] − [u2(+1, +1) + u2(−1, −1)].
Equilibrium selection
The existence of a pure-strategy Nash equilibrium is guaranteed in potential games, but multiple Nash equilibria may exist. Learning dynamics such as best response and better response only guarantee that the iterative learning process converges to some Nash equilibrium when several exist. Equilibrium-selective learning algorithms aim to design dynamics for which convergence to the best Nash equilibrium, with respect to the potential function, is guaranteed. The authors of [2] propose an equilibrium-selective algorithm named MaxLogit, which provably converges to the best Nash equilibrium at the fastest rate in its class, based on a mixing-rate analysis of the induced Markov chains. In the special case where every player shares the same objective function (and hence the potential function), and possibly the same action set, the problem is equivalent to distributed combinatorial optimization, which arises in many engineering applications. Equilibrium-selective learning algorithms such as MaxLogit can be used for such combinatorial optimization, even in a distributed fashion.
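MaxLogit itself is specified in [2]; as a simpler illustration of this family of equilibrium-selective dynamics, the sketch below runs plain log-linear (logit response) learning on the 2×2 example above. The temperature parameter tau and the step count are illustrative assumptions; at low temperature the empirical visit frequencies concentrate on the potential-maximizing equilibrium (+1, +1).

```python
import math
import random
from collections import Counter

b1, b2, w = 2, -1, 3
S = (+1, -1)
payoff = {1: lambda s1, s2: b1 * s1 + w * s1 * s2,
          2: lambda s1, s2: b2 * s2 + w * s1 * s2}

def log_linear_learning(steps=20000, tau=0.2, seed=0):
    """Plain log-linear (logit response) dynamics on the 2x2 example game.

    At each step a uniformly chosen player revises her action, picking each
    action with probability proportional to exp(payoff / tau).  This is a
    simplified stand-in for equilibrium-selective dynamics such as MaxLogit.
    """
    rng = random.Random(seed)
    state = [rng.choice(S), rng.choice(S)]
    visits = Counter()
    for _ in range(steps):
        i = rng.randrange(2)                      # index of the revising player
        weights = []
        for a in S:
            trial = list(state)
            trial[i] = a
            weights.append(math.exp(payoff[i + 1](*trial) / tau))
        state[i] = rng.choices(S, weights=weights)[0]
        visits[tuple(state)] += 1
    return visits

print(log_linear_learning())   # visit counts concentrate on (1, 1), the potential maximizer
```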
Bounded rational models
A logit equilibrium, the Gibbs measure from statistical mechanics, was shown to be the equilibrium of a finite-player potential game,[3] where players are assumed to be boundedly rational in one of two ways. Dynamically, players follow the gradient of the potential on the pure-strategy space, perturbed by a random variable (motivated by the inherent behavioral randomness used to justify a classical mixed-strategy Nash equilibrium). Alternatively, a static notion of equilibrium can be used, based on agents arbitraging information out of the system in order to adapt and improve, as gauged by (Shannon) information entropy.
For a finite number of agents, both result in the same Gibbs equilibrium measure, where the potential exactly corresponds to the negative of the "energy" in physics. There are some important corollaries to this:
- even a single player is complex, in the sense that her endogenous randomness makes her motion that of an irreversible, dissipative (convective) system, which converges to a steady-state Gibbs equilibrium (the notion of equilibrium is qualitatively irreversible; otherwise agents would pass right through it as though it were any other point),
- any model in economics that uses, a priori, a Gibbsian-derived model from statistical mechanics (standard or mean-field interacting particle systems, such as Curie–Weiss) has a potential (the negative of the Hamiltonian/energy of the a priori model) and can thus be interpreted in the context of a bounded-rational potential game, and
- just as a potential refines the Nash equilibria (it eliminates local maxima that are not global), statistical mechanics can refine the potential by selecting among multiple global maxima via symmetry breaking (much as a minimum free-energy "all up" or "all down" state can be selected in a ferromagnet).
This model satisfies the (Bohr) Correspondence Principle for any finite or infinite number of players, since the Gibbs measure yields the (refined) Nash equilibrium in the zero-temperature limit of perfectly rational dynamics; i.e., the model has a limiting classical behavior.
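As a toy illustration of this zero-temperature limit (an assumption made for exposition, not a construction taken from [3]), the sketch below computes the Gibbs measure proportional to exp(P/T) over the four profiles of the 2×2 example above and shows the probability mass concentrating on the potential maximizer (+1, +1) as the "temperature" T decreases.

```python
import math
from itertools import product

b1, b2, w = 2, -1, 3
S = (+1, -1)

def P(s1, s2):
    return b1 * s1 + b2 * s2 + w * s1 * s2   # exact potential of the example game

def gibbs(T):
    """Gibbs measure over action profiles: probability proportional to exp(P / T)."""
    weights = {a: math.exp(P(*a) / T) for a in product(S, S)}
    Z = sum(weights.values())                 # partition function
    return {a: weight / Z for a, weight in weights.items()}

for T in (5.0, 1.0, 0.2):
    dist = gibbs(T)
    print(f"T = {T}:", {a: round(p, 3) for a, p in dist.items()})
# As T -> 0 the mass concentrates on (1, 1), the global maximum of the potential.
```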
Phase transitions never occur for a finite number of players, but they can occur in infinite-player games, with spontaneous symmetry breaking and multiple equilibria below a critical "temperature" (degree of non-rational behavior). For sufficiently non-rational behavior, i.e., high enough "temperature", there is always a unique equilibrium state (e.g., by the Dobrushin uniqueness theorem). These phase transitions give rise to the emergence of self-organized patterns (i.e., phases) which, for example, correspond to different macroscopic buying/selling patterns of agents in a particular Cournot competition.
An economic interpretation of other parameters in the Gibbs formalism, such as "entropy", "magnetization", and "susceptibility", as well as of scaling interactions (local,[4] power-law decay, global competition or collusion, mixtures of local/global coopetition[5]), is given in [3] as well as in an application to a speculative and hedging model.[6]
In this speculative and hedging model, two interdependent markets are examined under bounded-rationality assumptions. The existence of multiple equilibria is shown to depend on certain parameters of the model; i.e., the equilibria depend on the phase of the model (cf. the Sonnenschein–Mantel–Debreu theorem).
Other models assume that an agent can compute her instantaneous expected payoff by conditionally averaging over the other agents' strategy distributions. This results in a mean-field-type model whose equilibrium is obtained by finding fixed points.[7]
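A minimal sketch of such a fixed-point computation, assuming a logit response form and reusing the 2×2 example game (the rationality parameter lam and the sequential update order are illustrative choices, not details of [7]), is given below: each player's mixed strategy is repeatedly replaced by a logit response to the opponent's current mixture.

```python
import math

b1, b2, w = 2, -1, 3
S = (+1, -1)

def u(i, si, sj):
    """Payoff of player i when she plays si and her opponent plays sj."""
    b = b1 if i == 1 else b2
    return b * si + w * si * sj

def logit_response(i, opponent_mix, lam):
    """Logit response of player i to the opponent's mixed strategy.

    opponent_mix maps each opponent action to its probability; lam is the
    rationality parameter (lam -> infinity approaches a best response).
    """
    expected = {si: sum(p * u(i, si, sj) for sj, p in opponent_mix.items()) for si in S}
    weights = {si: math.exp(lam * v) for si, v in expected.items()}
    Z = sum(weights.values())
    return {si: wgt / Z for si, wgt in weights.items()}

def logit_equilibrium(lam=1.0, iterations=500):
    """Fixed-point iteration: players alternately update their mixtures."""
    mix = {1: {s: 0.5 for s in S}, 2: {s: 0.5 for s in S}}
    for _ in range(iterations):
        mix[1] = logit_response(1, mix[2], lam)
        mix[2] = logit_response(2, mix[1], lam)
    return mix

print(logit_equilibrium(lam=1.0))    # an interior ("noisy") fixed point
print(logit_equilibrium(lam=10.0))   # nearly the pure equilibrium (+1, +1)
```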
References
- ↑ Monderer, Dov; Shapley, Lloyd (1996). "Potential Games". Games and Economic Behavior 14: 124–143. doi:10.1006/game.1996.0044.
- ↑ Song, Yang; Wong, Starsky H.Y.; Lee, Kwang-Won (2011). "Optimal gateway selection in multi-domain wireless networks: a potential game perspective". Proceedings of the 17th Annual International Conference on Mobile Computing and Networking. MobiCom '11. ISBN 978-1-4503-0492-4.
- ↑ Campbell, Michael J. (2005). "A Gibbsian approach to potential game theory (draft)". arXiv:cond-mat/0502112v2.
- ↑ Pinkse, Joris; Slade, Margaret E.; Brett, Craig (2002). "Spatial Price Competition: A Semiparametric Approach" (PDF). Econometrica 70 (3): 1111–1153. doi:10.1111/1468-0262.00320.
- ↑ Carfi, David (2015). A model for coopetitive games.
- ↑ Carfi, David; Campbell, Michael J. (2015). "Bounded Rational Speculative and Hedging Interaction Model in Oil and U.S. Dollar Markets" (PDF). Journal of Mathematical Economics and Finance (ASERS) 1 (1): 4–23. doi:10.14505/jmef.01.
- ↑ Anderson, Simon; Goeree, Jacob; Holt, Charles (2004). "Noisy Directional Learning and the Logit Equilibrium" (PDF). Scandinavian Journal of Economics 106: 581–602. doi:10.1111/j.0347-0520.2004.00378.x.
External links
- Lecture notes of Yishay Mansour about Potential and congestion games
- Section 19 in: Nisan, N.; Roughgarden, T.; Tardos, É.; Vazirani, V.V. (2007), Algorithmic Game Theory, New York: Cambridge University Press