Game Theory & Topology—An Unexpected Connection
Or, what international politics has to do with stirring coffee.
I love coffee; don't you? Even if (somehow) you don't, I think you'll enjoy this article.
I found out about this connection from reading Games & Decisions by Luce and Raiffa. It's a book on game theory, the mathematical study of decision-making. We use phrases like "zero-sum game" and "optimal strategy" in everyday speech, but they come from this field, which has an enormous number of applications.
The history of game theory is fascinating, thanks to the people who had a hand in forming it. Daniel Bernoulli and his cousin Nicholas analyzed an early problem (the St. Petersburg paradox) in the 18th century, and Émile Borel (of infinite monkey theorem fame) published some (ultimately incomplete) work on it in the early 1920s. By far the most important figures were John von Neumann, the great mathematician and computer scientist, and the economist Oskar Morgenstern, who collaborated with him at Princeton after the Anschluss prevented Morgenstern from returning to Austria.
They coauthored a book, Theory of Games and Economic Behavior, which built on von Neumann's 1928 proof of the minimax theorem for two-person zero-sum games (a result Borel had doubted would hold in general). It laid the foundation for work in the '50s by John Nash, who essentially rewrote the theory to give it more "teeth" when dealing with harder, non-zero-sum problems.
Aside about John Nash
If you haven't watched the movie A Beautiful Mind, I highly recommend it. It tells the story of Nash's life, his work, and the challenges caused by his mental illness. Despite those challenges, his work was so influential that he eventually received the Nobel Memorial Prize in Economics and, later, the Abel Prize in mathematics.
It's John Nash's contribution that I want to share, but to share it, I have to do a bit of setup.
A Simple Game
When I say "game" in this article, I mean this: \(n\) players, each player \(i\) with a set \(S_i\) of possible decisions, and a set of associated outcomes. If player 1 chooses his strategy 3, and player 2 chooses her strategy 5, then there is an outcome for that particular pair of choices, telling what each player gains or loses. Time for an example!
Battle of the Sexes
Say a man and a woman want to see a movie. The man would rather see an action movie, and the woman would rather see a romantic comedy. However, they both prefer going to any movie together over going alone. We can rank the satisfaction or payoff with a numerical ordering.
The woman would most like to see the rom-com with the man, but she'd dislike seeing the rom-com alone. You can assign those the scores 2 and -1. She'd like watching the action movie with him, but would dislike seeing it alone, so assign those 1 and -1.
Now we can do the same for the man. You can swap their places in the ranking above—just reverse the 2 and the 1. We can make a table, called the payoff matrix.
|              | Action Movie | Rom-com  |
|--------------|--------------|----------|
| Action Movie | (1, 2)       | (-1, -1) |
| Rom-com      | (-1, -1)     | (2, 1)   |
In the table, the man's choice is on the left, and the woman's choice is on top. The ordered pairs are the payoffs \((w, m)\) to the woman and the man, respectively. (Each choice, by the way, is an example of a pure strategy in the game.)
A game theorist wants to know what they should choose to maximize their happiness. A little critical thinking rules out either case where they aren't together. That leaves our hypothetical couple with the choices \((1, 2)\) and \((2, 1)\). Which is the "best"?
Of course, you already know the answer intuitively. There is no "best." Someone has to compromise (accept less than a 2), or both will be miserable!
Let's consider what happens when we repeat the experiment multiple times. In real life, people take turns compromising. They go to a rom-com one night, and an action movie the next. That's a great solution! As we repeat the experiment more times, the overall payoff gets closer to \((\frac{3}{2}, \frac{3}{2})\), the average of 1 and 2, and they're both equally happy.
There's also no way either one can get an advantage by changing their strategy, assuming the other player makes no change. If the man suddenly goes to action movies 3 out of 4 times, instead of 1 out of 2, he'll be worse off. This is what we call an equilibrium point, and in a sense, it's the "best solution" to the game.
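We can check that arithmetic with a few lines of Python. This is a minimal sketch, with my own table layout and variable names; the payoffs come straight from the payoff matrix above.

```python
# Payoffs from the payoff matrix, indexed [man's choice][woman's choice];
# choice 0 = action movie, choice 1 = rom-com. (Layout and names are mine.)
woman = [[1, -1], [-1, 2]]   # payoff to the woman
man   = [[2, -1], [-1, 1]]   # payoff to the man

# Alternating nights: (action, action) one night, (rom-com, rom-com) the next.
nights = [(0, 0), (1, 1)]
avg_w = sum(woman[a][b] for a, b in nights) / len(nights)
avg_m = sum(man[a][b] for a, b in nights) / len(nights)
print(avg_w, avg_m)          # 1.5 1.5

# If the man unilaterally switches to action every night while the woman
# keeps alternating, he spends half his nights alone:
deviant_nights = [(0, 0), (0, 1)]
avg_m_dev = sum(man[a][b] for a, b in deviant_nights) / len(deviant_nights)
print(avg_m_dev)             # 0.5, worse than 1.5, so deviating doesn't pay
```

Any one-sided change of schedule drags the deviator's average down, which is exactly the equilibrium property described above.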
Cool; so what?
The battle of the sexes was a very simple example, but it's not hard to see how you could model a complicated situation like a barter negotiation, a labor dispute, or an international political squabble in this framework: each side has some choices that lead to some outcomes. We also saw that, even though there was no good solution for either player in terms of a single pure strategy, there was one in terms of mixed strategies.
Question: Is it always the case there is a best solution, or equilibrium point?
The answer is yes—this is John Nash's major contribution. It's called the Nash equilibrium, and the proof that it exists is actually very beautiful.
Coffee & the Existence of Equilibrium Points
Let's put game theory on hold for a moment, and back up to the end of the 19th century. Henri Poincaré was trying to answer an old question—is the Solar System stable? He considered the motion of objects flowing on a level surface, and made a discovery. However, this discovery is usually credited to (and was actually proved by) L. E. J. Brouwer. There's a nice (apocryphal) story associated with it, so I'll tell that instead. Just remember that Poincaré published on it first.
The story goes like this: As Brouwer was stirring sugar into his coffee one morning, he noticed that one point in the coffee always seemed to be stationary. He realized this must always be the case—what would it look like if every point on the surface were moving? It's not physically possible.
The Theorem We Need
This visual intuition is connected to an actual mathematical fact, now known as Brouwer's fixed-point theorem. I'll give you an even more intuitive example, after the mathematical statement.
Brouwer's fixed-point theorem: Every continuous function from a closed disk to itself has at least one fixed point.
Let's break this down. We have a disk (the surface of the coffee) which is closed (it includes its boundary), and a function mapping points on the disk to other points on the disk. (A stirring motion moves the coffee around, but it never leaves the cup.)
Also, this function is continuous, which means all points initially in a neighborhood stay in a neighborhood after applying the function. (The coffee doesn't "jump" to a new location; it moves "continuously.")
This actually applies to any nonempty, convex, compact subset \(U\) of Euclidean space, and any continuous transformation \(T : U \to U\).
Now for the visual example. Imagine you're holding a map of the state or country you're standing in. The transformation from the country to the map is continuous: you just scale down, so points in a neighborhood remain neighbors. Spread the map on a table, and there will always be a "You Are Here" point, a point on the map that lies exactly over the real location it represents.
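In one dimension, Brouwer's theorem follows from the intermediate value theorem: a continuous \(f : [0,1] \to [0,1]\) must cross the line \(y = x\). Here's a short Python sketch of that special case (the function names are mine), which hunts down a fixed point by bisection:

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Find x with f(x) == x for a continuous f mapping [lo, hi] into itself.
    g(x) = f(x) - x is >= 0 at lo and <= 0 at hi, so a sign change exists."""
    g = lambda x: f(x) - x
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid          # fixed point lies to the right
        else:
            hi = mid          # fixed point lies to the left (or at mid)
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# cos maps [0, 1] into itself; its unique fixed point is the Dottie number.
x = fixed_point(math.cos)
print(round(x, 6))  # 0.739085
```

Stirring the coffee is the two-dimensional version, where the intermediate value theorem no longer suffices and Brouwer's full theorem takes over.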
Are you convinced yet? It seemed pretty intuitive to me—it's almost one of those things where you say "I wish I'd thought of that!"
Nash's Proof
Let's return to game theory. So far, we have a game, and Brouwer's fixed point theorem. If we can make a transformation from the set of payoffs, to the set of payoffs, in such a way that a fixed point must be an equilibrium point, we have proved the existence of an equilibrium point for all finite games.
Now I can sketch the outline of Nash's proof. I'll follow the notation and structure of Nash's doctoral thesis, Non-Cooperative Games (see the references). You can follow the entire thing if you've taken a bit of calculus, yet it was revolutionary in its field! If you just want the TL;DR, you can skip to the summary.
Here are the definitions we need:
Definition: Finite game: A finite game is \(n\) players, each with a finite set of pure strategies, and for each player \(i\), a payoff function \(p_i\) which maps all \(n\)-tuples of pure strategies into an ordered set, e.g. \(\Bbb{R}\) or \(\Bbb{Z}\).
Definition: Mixed strategy: A mixed strategy of a player \(i\), denoted \(S_i\), is a collection of weights or probabilities (like \(\frac{1}{2}\) in "Battle of the Sexes") on his pure strategies. \[S_i = \sum_\alpha c_{i\alpha} \pi_{i\alpha}\] where \(\sum_\alpha c_{i\alpha} = 1\) & \(c_{i\alpha} \ge 0\) (it's a probability), and \( \pi_{i\alpha}\) is player \(i\)'s \(\alpha\)-th pure strategy.
Definition: Payoff function: We can extend the payoff function in our definition of the finite game to cover all \(n\)-tuples of mixed strategies. We'll use this definition of \(p_i\) from now on. So for any \(n\)-tuple of mixed strategies \(𝒮\), we'd write:
\[p_i(𝒮) = p_i(S_1, S_2, \dots , S_n).\] Also note that any \(𝒮\) is a point in a finite-dimensional vector space, and the set of all \(𝒮\)s is a convex subset of that space. (Think about our payoff matrix, but for mixed strategies; that's what this represents.)
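Concretely, the extended \(p_i\) is just a probability-weighted average of the pure-strategy payoffs. Here's a minimal Python sketch for the battle of the sexes (the dictionary layout and function name are my own, not Nash's notation):

```python
from itertools import product

# Payoff tables from the battle of the sexes, indexed
# [man's choice][woman's choice]; 0 = action movie, 1 = rom-com.
payoff = {
    "man":   [[2, -1], [-1, 1]],
    "woman": [[1, -1], [-1, 2]],
}

def expected_payoff(player, S_man, S_woman):
    """p_i(S): weight each pure-strategy pair by the product of the two
    players' mixing probabilities, then sum."""
    return sum(S_man[a] * S_woman[b] * payoff[player][a][b]
               for a, b in product(range(2), repeat=2))

# Mixing 50/50 *independently* is much worse than coordinated alternation:
print(expected_payoff("man",   [0.5, 0.5], [0.5, 0.5]))   # 0.25
print(expected_payoff("woman", [0.5, 0.5], [0.5, 0.5]))   # 0.25
```

Notice the 0.25 here versus the \(\frac{3}{2}\) from taking turns: independent mixing can't coordinate the couple onto the same movie.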
Definition: Equilibrium point: a point where no player can get an advantage by changing their strategy, assuming the other players make no change. Formally, a mixed strategy \(n\)-tuple \(𝒮\) is an equilibrium point if and only if for every \(i\), \[p_i(𝒮) = \underset{\text{all}\>r_i\text{'s}}\max\,\big[\,p_i(𝒮; r_i)\,\big],\] where \((𝒮; r_i)\) means "substitute mixed strategy \(r_i\) for the \(i\)-th player's strategy."
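Since \(p_i\) is linear in player \(i\)'s own weights, the maximum over all mixed \(r_i\) is attained at a pure strategy, so checking pure-strategy substitutions suffices. Below is a rough Python sketch of that test for the battle of the sexes; the function names and the candidate strategy profiles are my own choices.

```python
from itertools import product

# Payoffs indexed [man's choice][woman's choice]; 0 = action, 1 = rom-com.
payoff = {"man": [[2, -1], [-1, 1]], "woman": [[1, -1], [-1, 2]]}

def expected(player, S):
    """Expected payoff for `player` when S = (man's mix, woman's mix)."""
    return sum(S[0][a] * S[1][b] * payoff[player][a][b]
               for a, b in product(range(2), repeat=2))

def is_equilibrium(S, eps=1e-9):
    """True iff neither player can gain by substituting a pure strategy,
    i.e. p_i(S) = max over r_i of p_i(S; r_i)."""
    for i, name in enumerate(("man", "woman")):
        current = expected(name, S)
        for pure in ([1.0, 0.0], [0.0, 1.0]):
            trial = list(S)
            trial[i] = pure               # the substitution (S; r_i)
            if expected(name, trial) > current + eps:
                return False
    return True

print(is_equilibrium(([1.0, 0.0], [1.0, 0.0])))  # True: both at the action movie
print(is_equilibrium(([0.6, 0.4], [0.4, 0.6])))  # True: a fully mixed equilibrium
print(is_equilibrium(([0.5, 0.5], [0.5, 0.5])))  # False: each gains by deviating
```

The mixed profile \((\frac{3}{5}, \frac{2}{5})\) for the man against \((\frac{2}{5}, \frac{3}{5})\) for the woman is this game's third equilibrium point, and it pays each player only \(0.2\): a reminder that equilibrium means stable, not necessarily good.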
Nash actually constructs a sequence of continuous transformations, which we'll look at below. First, let's specify exactly what we're proving.
Theorem: Every finite game has an equilibrium point.
To prove this, we need some continuous functions of a mixed strategy \(𝒮\). Remember that \(p_{i\alpha}(𝒮)\) is player \(i\)'s payoff from pure strategy \(\pi_{i\alpha}\) when the other players use their strategies in \(𝒮\). For each positive integer \(\lambda\), \[\begin{gather}q_i(𝒮) = \underset{\alpha}\max\>p_{i\alpha}(𝒮) \\ \Phi_{i\alpha}(𝒮, \lambda) = p_{i\alpha}(𝒮) - q_i(𝒮) + \frac{1}{\lambda} \\ \Phi^+_{i\alpha}(𝒮, \lambda) = \max \big(0, \Phi_{i\alpha}(𝒮, \lambda)\big). \end{gather}\]
Function (2) equals \(\frac{1}{\lambda}\) when \(\alpha\) attains the maximum, and \(\frac{1}{\lambda}\) plus some negative number otherwise. In particular, at least one \(\Phi^+_{i\beta}\) equals \(\frac{1}{\lambda} > 0\), so the rational function \[ C'_{i\alpha}(𝒮, \lambda) = \frac{\Phi^+_{i\alpha}(𝒮, \lambda)}{ \sum_{\beta}\Phi^+_{i\beta}(𝒮, \lambda) } \] is also continuous, since the denominator is never zero.
Nash then defines \( S'_i(𝒮, \lambda) = \sum_\alpha \pi_{i\alpha} C'_{i\alpha}(𝒮, \lambda) \), which gives a new mixed strategy for every player \(i\). We combine all those into a new point \(𝒮'\), given by \( 𝒮'(𝒮, \lambda) = (S'_1, S'_2, \dots , S'_n) \). The transformation \(𝒮 \to 𝒮'(𝒮, \lambda)\) is continuous because all the operations that compose it are continuous. The space of all mixed strategies is also non-empty, convex (lacking "holes"), and compact (closed and bounded). Thus, there will be a fixed point \( 𝒮^* \), by the Brouwer fixed-point theorem.
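To make the construction less abstract, here is a small Python sketch of one application of the map \(𝒮 \to 𝒮'(𝒮, \lambda)\) for the two-player battle of the sexes (the function names and the choice \(\lambda = 10\) are mine). Note that merely iterating the map isn't guaranteed to converge to an equilibrium; the proof only needs that each such continuous map has a fixed point.

```python
# Payoffs indexed [man's choice][woman's choice]; 0 = action, 1 = rom-com.
payoff = [[[2, -1], [-1, 1]],   # player 0 (the man)
          [[1, -1], [-1, 2]]]   # player 1 (the woman)

def p_pure(i, alpha, S):
    """p_{i alpha}(S): player i's payoff from pure strategy alpha while
    the other player keeps the mixed strategy in S."""
    if i == 0:
        return sum(S[1][b] * payoff[0][alpha][b] for b in range(2))
    return sum(S[0][a] * payoff[1][a][alpha] for a in range(2))

def nash_map(S, lam):
    """One application of the transformation S -> S'(S, lambda)."""
    S_new = []
    for i in range(2):
        q = max(p_pure(i, a, S) for a in range(2))            # q_i(S)
        phi_plus = [max(0.0, p_pure(i, a, S) - q + 1 / lam)   # functions (2), (3)
                    for a in range(2)]
        total = sum(phi_plus)    # never zero: the maximizing alpha contributes 1/lam
        S_new.append([x / total for x in phi_plus])           # C'_{i alpha}
    return S_new

S_prime = nash_map([[0.5, 0.5], [0.5, 0.5]], lam=10)
print(S_prime)                    # [[1.0, 0.0], [0.0, 1.0]]
print([sum(s) for s in S_prime])  # [1.0, 1.0]: still valid mixed strategies
```

With \(\lambda = 10\), the \(\frac{1}{\lambda}\) term keeps only pure strategies within \(0.1\) of the best reply, so the 50/50 starting point is pushed all the way to each player's pure best reply.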
We haven't yet proved that this fixed point must be an equilibrium point. Nash does that by contradiction: assume \(𝒮^*\) is not an equilibrium point, which implies that some player gives positive weight to a non-optimal pure strategy \(\pi_{i\alpha}\). If that were so, then we can show, based on functions (1)–(3), that \(𝒮^*\) is not a fixed point. (I'll leave that to Nash's thesis; see page 6.) This contradicts the fact that \(𝒮^*\) is indeed a fixed point. Therefore, each strategy in \(𝒮^*\) must be optimal against the others. No player can gain by changing to a different strategy, so by definition, \(𝒮^*\) is an equilibrium point.
Proof Summary
Nash created some transformation \(T\), mapping \(n\)-tuples of mixed strategies into \(n\)-tuples of mixed strategies, where \(T\) has two properties:
- \(n\)-tuple \(𝒮\) is an equilibrium point if and only if \(T(𝒮) = 𝒮\), that is, \(𝒮\) is a fixed point under \(T\).
- \(T\) satisfies the conditions of Brouwer's fixed-point theorem, and consequently has a fixed point.
If those are the case, and Nash proved they must be, then any \(n\)-player finite game does indeed have at least one Nash equilibrium. This means there is a solution to any "game"—meaning an abstract model for anything from a couple at the movies to national governments—which no player can improve by switching strategies. To prove that, we used a single theorem in topology that a mathematician allegedly discovered when stirring sugar into his coffee.
If that's not a fascinating connection, I don't know what is.
References
John Nash: Non-Cooperative Games.
R. Duncan Luce & Howard Raiffa: Games & Decisions.
Wikipedia: "Brouwer fixed-point theorem"
Wikipedia: "John Forbes Nash, Jr."
As always, thanks for reading! If you have a comment, I'd love to hear from you! My email is ethan.barry@howdytx.technology. You can also reach me at my LinkedIn or GitHub, linked in the header.