# John Nash and the theory of games

John Forbes Nash and his wife Alicia were tragically killed in a car crash on May 23, 2015, having just returned from a ceremony in Norway where John Nash received the prestigious Abel Prize in Mathematics (which, along with the Fields Medal, is the closest thing mathematics has to a Nobel Prize). Nash’s long struggle with mental illness and his miraculous recovery are depicted vividly in Sylvia Nasar’s book “A Beautiful Mind” and the Oscar-winning film it inspired. In this post, I want to give a brief account of Nash’s work in game theory, for which he won the 1994 Nobel Prize in Economics. Before doing that, I should mention that while this is undoubtedly Nash’s most influential work, he did many other things which, from a purely mathematical point of view, are much more technically difficult. Nash’s Abel Prize, for example (which he shared with Louis Nirenberg), was awarded for his work on non-linear partial differential equations and its applications to geometric analysis, which most mathematicians consider to be Nash’s deepest contribution to mathematics. You can read about that work here.

On, now, to game theory. We will focus, for simplicity, on the theory of non-cooperative matrix games between two players ${P_1}$ and ${P_2}$. We begin with the theory of zero-sum games, where any gain for $P_1$ corresponds to an equal loss for $P_2$ and vice-versa.  Suppose ${A}$ is an ${m \times n}$ matrix of real numbers. Define a mixed strategy for ${P_1}$ to be a vector ${x=(x_1,\ldots,x_m) \in {\mathbf R}^m}$ with ${x_i \geq 0}$ for all ${i}$ and ${x_1 + \cdots + x_m = 1}$, and similarly define a mixed strategy for ${P_2}$ to be a vector ${y=(y_1,\ldots,y_n) \in {\mathbf R}^n}$ with ${y_j \geq 0}$ for all ${j}$ and ${y_1 + \cdots + y_n = 1}$. We denote by ${M_i}$ the simplex representing the set of all mixed strategies for player ${P_i}$ (${i=1,2}$). The expected payoff to ${P_1}$ if ${P_1}$ plays the mixed strategy ${x}$ and ${P_2}$ plays the mixed strategy ${y}$ is ${E(x,y) = x^t A y.}$
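In computational terms, the expected payoff is just a bilinear form. Here is a minimal sketch in Python with NumPy; the particular ${2 \times 3}$ payoff matrix is made up purely for illustration:

```python
import numpy as np

# Hypothetical 2x3 payoff matrix A: entry (i, j) is the payoff to P1
# when P1 plays pure strategy i and P2 plays pure strategy j.
A = np.array([[3.0, -1.0, 2.0],
              [0.0,  4.0, -2.0]])

def expected_payoff(x, y, A):
    """E(x, y) = x^t A y for mixed strategies x and y."""
    x, y = np.asarray(x), np.asarray(y)
    assert np.all(x >= 0) and np.isclose(x.sum(), 1.0)  # x lies in the simplex M_1
    assert np.all(y >= 0) and np.isclose(y.sum(), 1.0)  # y lies in the simplex M_2
    return x @ A @ y

# P1 mixes 50/50 over his rows; P2 mixes uniformly over her columns.
print(expected_payoff([0.5, 0.5], [1/3, 1/3, 1/3], A))
```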

For each ${i=1,\ldots,m}$ the mixed strategy ${p_i}$ which is ${1}$ in the ${i^{\rm th}}$ component and ${0}$ elsewhere is called the ${i^{\rm th}}$ pure strategy for ${P_1}$, and similarly for ${j=1,\ldots,n}$ the mixed strategy ${q_j}$ which is ${1}$ in the ${j^{\rm th}}$ component and ${0}$ elsewhere is called the ${j^{\rm th}}$ pure strategy for ${P_2}$. One usually thinks of a mixed strategy as a probability distribution on the space of pure strategies. The ${(i,j)}$ entry of ${A}$ is the expected payoff to player ${P_1}$ if ${P_1}$ plays his ${i^{\rm th}}$ pure strategy and ${P_2}$ plays her ${j^{\rm th}}$ pure strategy. In general, ${E(x,y)}$ is the expected payoff to ${P_1}$ if ${P_1}$ plays each pure strategy ${p_i}$ with probability ${x_i}$ and ${P_2}$ plays each pure strategy ${q_j}$ with probability ${y_j}$.

Game theory began with the seminal 1928 paper of John von Neumann in which he proved the celebrated minimax theorem for zero-sum two-player games.  Von Neumann defines a solution to the game to be a pair ${(\bar{x},\bar{y}) \in M_1 \times M_2}$ of mixed strategies (called optimal strategies), together with a real number ${v}$ (called the value of the game), such that ${E(\bar{x},q_j) \geq v}$ for all ${j=1,\ldots,n}$ and ${E(p_i,\bar{y}) \leq v}$ for all ${i=1,\ldots,m}$. Intuitively, this means that:

• ${P_1}$ can assure himself of an expected payoff of at least ${v}$, no matter which pure strategy ${P_2}$ plays, by playing the mixed strategy ${\bar{x}}$, and
• ${P_2}$ can guarantee that her expected loss is no more than ${v}$, no matter which pure strategy ${P_1}$ plays, by playing the mixed strategy ${\bar{y}}$.

The remarkable and fundamental theorem proved by von Neumann is that every zero-sum two-player matrix game as above has a solution. To illustrate this with an example, suppose Player 1 and Player 2 each independently choose either rock, paper, or scissors, and the loser (according to the usual rules) pays the winner a dollar. Ordering the rows and columns as rock, paper, scissors, this can be summarized by the payoff matrix

$\displaystyle A = \left( \begin{array}{lll} 0 & -1 & 1 \\ 1 & 0 & -1 \\ -1 & 1 & 0 \\ \end{array} \right).$
It is easy to check that ${(1/3,1/3,1/3)}$ is the unique optimal mixed strategy for both players (with value 0). In other words, both players should play rock, paper, and scissors with equal probabilities, and if they do this then each player’s expected payoff is zero.
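Von Neumann’s conditions are easy to verify numerically for this example. A small sketch in Python; the matrix below is written with rows and columns ordered rock, paper, scissors, with entries giving the payoff to Player 1:

```python
import numpy as np

# Paper beats rock, scissors beat paper, rock beats scissors.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

x_bar = np.array([1/3, 1/3, 1/3])  # candidate optimal strategy for Player 1
y_bar = np.array([1/3, 1/3, 1/3])  # candidate optimal strategy for Player 2
v = 0                              # claimed value of the game

# von Neumann's conditions: E(x_bar, q_j) >= v for every pure strategy q_j
# of Player 2, and E(p_i, y_bar) <= v for every pure strategy p_i of Player 1.
print(np.all(x_bar @ A >= v))  # True
print(np.all(A @ y_bar <= v))  # True
```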

For a more complicated example, suppose Player 1 is given the Ace of Diamonds, Ace of Clubs, and Two of Diamonds while Player 2 is given the Ace of Diamonds, Ace of Clubs, and Two of Clubs. Each player chooses one of his cards; Player 1 wins if the suits match and Player 2 wins if they do not. The amount of the payoff is the numerical value of the card chosen by the winner, with one exception: if both players choose their Two then the payoff is zero. This can be summarized by the payoff matrix (rows indexed by Player 1’s cards and columns by Player 2’s cards, in the orders just listed)

$\displaystyle A = \left( \begin{array}{lll} 1 & -1 & -2 \\ -1 & 1 & 1 \\ 2 & -1 & 0 \\ \end{array} \right).$

Exercise: Show that the mixed strategies ${(0,3/5,2/5)}$ for Player 1 and ${(2/5,3/5,0)}$ for Player 2 are optimal strategies with common value ${1/5}$.
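Readers who want to check their work on the exercise can test von Neumann’s conditions mechanically; a sketch in Python with NumPy:

```python
import numpy as np

# Payoff matrix of the card game above: rows = Player 1's cards
# (Ace of Diamonds, Ace of Clubs, Two of Diamonds), columns = Player 2's
# cards (Ace of Diamonds, Ace of Clubs, Two of Clubs).
A = np.array([[ 1, -1, -2],
              [-1,  1,  1],
              [ 2, -1,  0]])

x_bar = np.array([0, 3/5, 2/5])   # claimed optimal strategy for Player 1
y_bar = np.array([2/5, 3/5, 0])   # claimed optimal strategy for Player 2
v = 1/5                           # claimed value

print(x_bar @ A)   # payoff to P1 against each pure strategy of P2
print(A @ y_bar)   # payoff to P1 for each pure strategy of P1 against y_bar
# Optimality: every entry of the first vector is >= v and every entry
# of the second is <= v.
print(np.all(x_bar @ A >= v - 1e-12) and np.all(A @ y_bar <= v + 1e-12))
```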

An example of a non-zero sum game, to which von Neumann’s theorem does not apply but to which Nash’s theorem does, is the Prisoner’s Dilemma.

It is not difficult to show that von Neumann’s theorem is equivalent to the statement that for every zero-sum two-player matrix game, there exist mixed strategies ${\bar{x},\bar{y}}$ such that $\displaystyle E(x,\bar{y}) \leq E(\bar{x},\bar{y}) \leq E(\bar{x},y)$ for all mixed strategies ${x,y}$. The theorem is also equivalent to the statement that for every zero-sum two-player matrix game, the quantities $\displaystyle \min_{y \in M_2} \max_{x \in M_1} E(x,y) \; {\rm and} \; \max_{x \in M_1} \min_{y \in M_2} E(x,y)$ exist and are equal. In this last form, the result has become known as von Neumann’s minimax theorem. The min-max and the max-min are both equal to the value ${v}$ of the game, which in particular shows that the value is uniquely determined (though there may be more than one pair of optimal strategies).
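As an aside, the minimax formulation makes the value computable in practice: maximizing ${\min_y E(x,y)}$ over ${x \in M_1}$ is a linear program. Here is a sketch using SciPy (the function name `game_value` is mine, not a standard API):

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the zero-sum game with payoff matrix A (payoffs to the
    row player), via the LP: maximize v subject to x^t A >= v*1, x in M_1."""
    m, n = A.shape
    # Variables z = (x_1, ..., x_m, v); linprog minimizes, so objective is -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For each column j:  v - sum_i A[i, j] x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.array([[1.0] * m + [0.0]])       # x_1 + ... + x_m = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x_i >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success
    return res.x[:m], res.x[m]

# The card game from the exercise above: the computed value should be 1/5.
A = np.array([[ 1, -1, -2],
              [-1,  1,  1],
              [ 2, -1,  0]], dtype=float)
x_opt, v = game_value(A)
print(x_opt, v)
```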

We now turn to Nash’s extension of von Neumann’s fundamental theorem in which one does not assume the zero-sum condition. (Nash’s theorem also applies to any number of players, which represents another improvement over von Neumann, but for simplicity of exposition we will stick to the two-player scenario.) Nash discovered this result — and the key concept now known as “Nash equilibrium” — as a graduate student at Princeton in the summer of 1949. Nash told von Neumann’s secretary that he wanted to discuss an idea which might be of interest to the distinguished professor. As Sylvia Nasar writes:

Von Neumann was sitting at an enormous desk, looking more like a prosperous bank president than an academic in his expensive three-piece suit, silk tie, and jaunty pocket handkerchief. He had the preoccupied air of a busy executive. At the time, he was holding a dozen consultancies, “arguing the ear off Robert Oppenheimer” over the development of the H-bomb, and overseeing the construction and programming of two prototype computers. He gestured Nash to sit down. He knew who Nash was, of course, but seemed a bit puzzled by his visit.

He listened carefully, with his head cocked slightly to one side and his fingers tapping. Nash started to describe the proof he had in mind… But before he had gotten out more than a few disjointed sentences, von Neumann interrupted, jumped ahead to the as yet unstated conclusion of Nash’s argument, and said abruptly, “That’s trivial, you know. That’s just a fixed point theorem.”

…[Nash] never approached von Neumann again. Nash later rationalized von Neumann’s reaction as the naturally defensive posture of an established thinker to a younger rival’s idea, a view that may say more about what was in Nash’s mind when he approached von Neumann than about the older man. Nash was certainly conscious that he was implicitly challenging von Neumann. Nash noted in his Nobel autobiography that his ideas “deviated somewhat from the ‘line’ (as if of ‘political party lines’) of von Neumann and Morgenstern’s book”.

A few days after the disastrous meeting with von Neumann, Nash accosted [fellow graduate student] David Gale… Unlike von Neumann, Gale saw Nash’s point… Gale realized that Nash’s idea applied to a far broader class of real-world situations than von Neumann’s notion of zero-sum games. “He had a concept that generalized to disarmament”, Gale said later. But Gale was less entranced by the possible applications of Nash’s idea than its elegance and generality. “The mathematics was so beautiful. It was so right mathematically.”

Gale suggested asking a member of the National Academy of Sciences to submit the proof to the academy’s monthly proceedings. “He was spacey. He would never have thought of doing that,” Gale said recently, “so he gave me his proof and I drafted the NAS note.”… Gale added later, “I certainly knew right away that it was a thesis. I didn’t know it was a Nobel.”

To explain Nash’s theorem, we first need an extension of the setup used above. Suppose ${A_1}$ and ${A_2}$ are ${m \times n}$ matrices of real numbers. Define mixed strategies for the two players ${P_1}$ and ${P_2}$ as before. If ${P_1}$ plays the mixed strategy ${x}$ and ${P_2}$ plays the mixed strategy ${y}$, there are now two expected payoffs: the payoff to ${P_1}$ is ${E_1(x,y) = x^t A_1 y}$ and the payoff to ${P_2}$ is ${E_2(x,y) = x^t A_2 y}$. The game is zero-sum if ${A_2 = -A_1}$, or equivalently, if the payoff ${E_1(x,y)}$ to ${P_1}$ is the negative of the payoff ${E_2(x,y)}$ to ${P_2}$ for all ${x,y}$. In this special case we recover von Neumann’s theory by setting ${A = A_1}$ and focusing on the single payoff function ${E(x,y)=x^t A y}$ to ${P_1}$.

A Nash equilibrium is a pair ${(\bar{x},\bar{y})}$ with ${\bar{x} \in M_1}$ and ${\bar{y} \in M_2}$ such that ${E_1(x,\bar{y}) \leq E_1(\bar{x},\bar{y})}$ for all ${x \in M_1}$ and ${E_2(\bar{x}, y) \leq E_2(\bar{x}, \bar{y})}$ for all ${y \in M_2}$. The idea of a Nash equilibrium is that neither player can unilaterally improve his or her expected payoff by switching to a different mixed strategy.

With this terminology, Nash’s fundamental theorem is:

A Nash equilibrium exists for every (not necessarily zero-sum) matrix game.
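The equilibrium condition is easy to test numerically, since by linearity it suffices to check unilateral deviations to pure strategies. Here is a sketch in Python, applied to the Prisoner’s Dilemma mentioned above; the specific payoff numbers are a common textbook choice, assumed purely for illustration:

```python
import numpy as np

def is_nash_equilibrium(x_bar, y_bar, A1, A2, tol=1e-9):
    """Check the Nash condition; by linearity it is enough to test
    deviations to pure strategies for each player."""
    v1 = x_bar @ A1 @ y_bar            # P1's payoff at (x_bar, y_bar)
    v2 = x_bar @ A2 @ y_bar            # P2's payoff at (x_bar, y_bar)
    return bool(np.all(A1 @ y_bar <= v1 + tol) and
                np.all(x_bar @ A2 <= v2 + tol))

# A standard Prisoner's Dilemma: strategy 1 = cooperate, strategy 2 = defect.
# Entries are the negatives of prison years, so larger is better.
A1 = np.array([[-1, -3],
               [ 0, -2]])   # payoffs to P1
A2 = A1.T                   # symmetric game: payoffs to P2

print(is_nash_equilibrium(np.array([0, 1]), np.array([0, 1]), A1, A2))  # defect/defect
print(is_nash_equilibrium(np.array([1, 0]), np.array([1, 0]), A1, A2))  # cooperate/cooperate
```

Mutual defection passes the test while mutual cooperation fails it, even though both players would be better off cooperating: exactly the point of the Prisoner’s Dilemma.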

Nash’s original proof of this theorem was based on the Kakutani fixed point theorem, a generalization of the more standard Brouwer fixed point theorem which Kakutani invented in order to streamline and clarify von Neumann’s complicated original proof of the minimax theorem. Nash later found an elegant argument based on the Brouwer fixed point theorem itself, which states that if ${X}$ is homeomorphic to a Euclidean ball then any continuous map from ${X}$ to itself must have a fixed point. (It was not even known before Nash’s paper how to deduce von Neumann’s minimax theorem directly from the Brouwer theorem.) Here is Nash’s elegant “Proof from the Book”:

Proof: Let ${{\rm max}^+(t) = {\rm max}(0,t)}$. Given mixed strategies ${x,y}$, set ${a_i = a_i(x,y) := {\rm max}^+(E_1(p_i,y) - E_1(x,y))}$ for ${i=1,\ldots,m}$ and ${b_j = b_j(x,y) := {\rm max}^+(E_2(x,q_j) - E_2(x,y))}$ for ${j=1,\ldots,n}$.

We claim that ${(\bar{x},\bar{y})}$ is a Nash equilibrium if and only if ${a_i(\bar{x},\bar{y})=0}$ and ${b_j(\bar{x},\bar{y}) = 0}$ for all ${i,j}$. One direction is clear from the definition. For the other direction, note that if ${a_i = 0}$ for all ${i=1,\ldots,m}$ then ${E_1(p_i,\bar{y}) \leq E_1(\bar{x},\bar{y})}$ for all ${i}$, which implies by linearity that ${E_1(x,\bar{y}) \leq E_1(\bar{x},\bar{y})}$ for all mixed strategies ${x}$, and similarly for the ${b_j}$ and ${E_2}$.

Now define a continuous map ${T : M_1 \times M_2 \rightarrow M_1 \times M_2}$ by setting ${T(x,y) = (x',y')}$ with

$\displaystyle x'_i = \frac{x_i + a_i}{1 + \sum a_i}, \; y'_j = \frac{y_j + b_j}{1 + \sum b_j}.$
We claim that a pair of mixed strategies ${(\bar{x},\bar{y}) \in M_1 \times M_2}$ is a Nash equilibrium if and only if ${(\bar{x},\bar{y})}$ is a fixed point of ${T}$. Given this, the theorem follows immediately from the Brouwer fixed point theorem, since ${M_1 \times M_2}$ is a compact convex set homeomorphic to the unit ball in ${{\mathbf R}^{m+n-2}}$ and thus ${T}$ must have a fixed point.

One direction of the claim is clear: if ${(\bar{x},\bar{y})}$ is a Nash equilibrium then ${a_i(\bar{x},\bar{y})=b_j(\bar{x},\bar{y}) = 0}$ for all ${i,j}$, and thus ${(\bar{x},\bar{y})}$ is a fixed point of ${T}$. Conversely, suppose ${(\bar{x},\bar{y})}$ is a fixed point of ${T}$. Since ${E_1(\bar{x},\bar{y})}$ is a convex combination of the ${E_1(p_i,\bar{y})}$ for which ${\bar{x}_i > 0}$, we must have ${E_1(\bar{x},\bar{y}) \geq E_1(p_i,\bar{y})}$ for some ${i}$ with ${\bar{x}_i > 0}$. Hence ${a_i = 0}$ for some ${i}$ with ${\bar{x}_i > 0}$, which implies (using the equality ${\bar{x}_i' = \bar{x}_i}$ for this ${i}$) that ${a_1 + \cdots + a_m = 0}$. Comparing the other components of ${\bar{x}'}$ and ${\bar{x}}$ now shows that ${a_i = 0}$ for all ${i}$. Similarly, ${b_j = 0}$ for all ${j}$. Thus ${(\bar{x},\bar{y})}$ is a Nash equilibrium by our earlier claim. ${\blacksquare}$
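Nash’s map ${T}$ is concrete enough to implement directly. The sketch below (in Python with NumPy) defines ${T}$ for a general two-matrix game and checks, for rock-paper-scissors viewed as a game with ${A_2 = -A_1}$, that the uniform equilibrium is a fixed point while a non-equilibrium pair gets moved:

```python
import numpy as np

def nash_map(x, y, A1, A2):
    """One application of the map T from Nash's proof."""
    E1, E2 = x @ A1 @ y, x @ A2 @ y
    a = np.maximum(0.0, A1 @ y - E1)   # a_i = max^+(E_1(p_i, y) - E_1(x, y))
    b = np.maximum(0.0, x @ A2 - E2)   # b_j = max^+(E_2(x, q_j) - E_2(x, y))
    return (x + a) / (1 + a.sum()), (y + b) / (1 + b.sum())

# Rock-paper-scissors (rows/columns ordered rock, paper, scissors).
A1 = np.array([[ 0, -1,  1],
               [ 1,  0, -1],
               [-1,  1,  0]], dtype=float)
A2 = -A1                               # zero-sum

eq = np.array([1/3, 1/3, 1/3])
x1, y1 = nash_map(eq, eq, A1, A2)
print(np.allclose(x1, eq) and np.allclose(y1, eq))   # equilibrium is fixed by T

# Both players playing pure "rock" is not an equilibrium, so T moves it.
x2, y2 = nash_map(np.array([1.0, 0, 0]), np.array([1.0, 0, 0]), A1, A2)
print(not np.allclose(x2, [1, 0, 0]))
```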

As a final coda to this post, we note that the Brouwer fixed point theorem itself is equivalent to a certain theorem proved by John Nash about the game of Hex, which Nash himself invented as an undergraduate. (Hex was invented independently by a Danish man named Piet Hein, who marketed the game with Parker Brothers in the mid-1950s.) A number of Nash’s fellow grad students recall thinking that Nash spent all of his time at Princeton in the common room playing board games. One morning in the winter of 1949, Nash ran into David Gale and shouted “Gale! I have an example of a game with perfect information. There’s no luck, just pure strategy. I can prove that the first player always wins, but I have no idea what his strategy will be. If the first player loses at this game, it’s because he’s made a mistake, but nobody knows what the perfect strategy is.”

The game board for Hex consists of a rhombus tiled by hexagons, with ${n}$ hexagons on each side. Two opposite edges of the board are colored black, the other two white. The players take turns placing black (resp. white) stones on the hexagons, and once played a piece is never moved. The black player tries to construct a connected chain of black stones between the two black boundaries, while the white player tries to construct a connected chain of white stones between the two white boundaries. Nash showed that the game can never end in a draw. This implies, by an idea which has come to be known as “strategy stealing”, that the first player can always win. Indeed, it is a basic fact of game theory that in a finite two-player game of perfect information where no draws are possible, one of the two players always has a winning strategy. Now suppose, for the sake of contradiction, that the second player has a winning strategy in Hex. Imagine the first player makes an arbitrary move and the second player responds. The first player now puts a stone in the reflected position corresponding to the second player’s move (the board has a symmetry interchanging the roles of the two players). The game proceeds with the first player always playing the reflection of the second player’s hypothetical winning strategy. If this requires putting a stone where one of her stones already appears, she plays arbitrarily instead; an extra stone can only help in forming a chain. The second player cannot win in this scenario, a contradiction. Sylvia Nasar writes:

When Gale finally understood what Nash was trying to tell him, he was captivated… The game quickly swept across the common room. It brought Nash many admirers, including the young John Milnor, who was beguiled by its ingenuity and beauty… Nash’s proof [that there can never be a draw, and thus that the first player can always win] is extremely deft, “marvelously non-constructive” in the words of [John] Milnor, who plays it very well. If the board is covered by black and white pieces, there’s always a chain that connects black to black or white to white, but never both. As Gale put it, “You can walk from Mexico to Canada or swim from California to New York, but you can’t do both.”
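Nash’s no-draw theorem is easy to test empirically. The sketch below (in Python; the board representation and coordinate conventions are my own) fills small Hex boards at random and checks that exactly one player has a connecting chain, never both and never neither:

```python
import random
from collections import deque

# Neighbors of the hexagon at (r, c) in axial coordinates on the rhombus.
NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def has_chain(board, player):
    """Does `player` ('B' or 'W') have a connected chain between their two
    boundary edges?  Black owns the top/bottom edges, White the left/right."""
    n = len(board)
    if player == 'B':
        starts = [(0, c) for c in range(n) if board[0][c] == 'B']
        done = lambda r, c: r == n - 1
    else:
        starts = [(r, 0) for r in range(n) if board[r][0] == 'W']
        done = lambda r, c: c == n - 1
    seen, queue = set(starts), deque(starts)
    while queue:                      # breadth-first search over same-color cells
        r, c = queue.popleft()
        if done(r, c):
            return True
        for dr, dc in NEIGHBORS:
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and (rr, cc) not in seen \
                    and board[rr][cc] == player:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return False

# However the board is completely filled, exactly one player has a chain:
# "you can walk from Mexico to Canada or swim from California to New York,
# but you can't do both" -- and, by Nash, you can't do neither.
random.seed(0)
for _ in range(200):
    n = 5
    board = [[random.choice('BW') for _ in range(n)] for _ in range(n)]
    assert has_chain(board, 'B') != has_chain(board, 'W')
print("no draws in 200 random fillings")
```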

In his lovely paper “The Game of Hex and the Brouwer Fixed-Point Theorem”, David Gale writes:

The application of mathematics to games of strategy is now represented by a voluminous literature. Recently there has also been some work which goes in the other direction, using known facts about games to obtain mathematical results in other areas. The present paper is in this latter spirit. Our main purpose is to show that a classical result of topology, the celebrated Brouwer Fixed-Point Theorem, is an easy consequence of the fact that Hex, a game which is probably familiar to many mathematicians, cannot end in a draw. This fact is of some practical as well as theoretical interest, for it turns out that the two-player, two-dimensional game of Hex has a natural generalization to a game of n players and n dimensions, and the proof that this game must always have a winner leads to a simple algorithm for finding approximate fixed points of continuous mappings. This latter subject is one of considerable current interest, especially in the area of mathematical economics. This paper has therefore the dual purpose of, first, showing the equivalence of the Hex and Brouwer Theorems and, second, introducing the reader to the subject of fixed-point computations.

I should say that over the years I have heard it asserted in “cocktail conversation” that the Hex and Brouwer Theorems were equivalent, and my colleague John Stallings has shown me an argument which derives the Hex Theorem from familiar topological facts which are equivalent to the Brouwer Theorem. The proof going in the other direction only occurred to me recently, but in view of its simplicity it may well be that others have been aware of it. The generalization to n dimensions may, however, be new.

We encourage the reader to check out Gale’s article, as well as this blog post.

Concluding remarks:

1. Nash also made many other profound contributions to mathematics, including my own field of algebraic geometry; for example, he proved that given any smooth compact ${k}$-dimensional real manifold ${M}$, there is a real algebraic variety ${V \subset {\mathbf R}^{2k+1}}$ having a connected component ${W}$ which is diffeomorphic to ${M}$. This is the kind of result which no one else had dared to conjecture, or probably even think about, because it appears impossibly strong — and yet Nash proved it. Mike Artin and Barry Mazur used this result in their famous paper on periodic points of dynamical systems.

2. My exposition of von Neumann’s minimax theorem was heavily influenced by Harold W. Kuhn’s book “Lectures on the Theory of Games”.

3. One can prove Nash’s theorem on the game of Hex by elementary means; see for example this paper by Berman or this paper by Huneke.  Combined with Gale’s theorem, these arguments give elementary proofs of the Brouwer Fixed-Point theorem. My exposition of the fact that the first player can always win in Hex was adapted from this AMS feature column.

4. For a mathematical analysis of the bar scene where Russell Crowe, as John Nash, explains his notion of equilibrium in the movie “A Beautiful Mind”, see this blog post.