### On the complexity of games

(Warning: frivolous.)

It’s not obvious what properties of games determine whether they are easy or hard. Poker has imperfect information while Sokoban can have very long solutions; which is harder? Could the lack of communication between teammates in Bridge make the game more difficult?

This post explores the computational complexity of natural classes of games, e.g. short 2-player games with imperfect information or long 1-player games with perfect information.

### Types of games

We say a game is “short” if you can describe all the events of a game, and implement the rules, in polynomial time. We say a game is “long” if it is allowed to be exponentially long, but individual states can be described and manipulated in polynomial time. We prohibit games that are longer than exponentially long.
Examples: Hex on an NxN board is “short,” because there can be at most N^2 moves.
Chess on an NxN board, with 2^N moves allowed before declaring a stalemate, is “long.”
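For concreteness, the two examples can be spelled out numerically (a toy sketch; the function names and the rough per-square bit count are mine, not from the definitions above):

```python
# Sketch of the short/long distinction, following the Hex and chess examples.

def hex_max_moves(n):
    # Hex on an n x n board is "short": every move fills an empty cell,
    # so a play lasts at most n^2 moves.
    return n * n

def chess_state_bits(n):
    # For chess on an n x n board, a single position still fits in a
    # polynomial number of bits (roughly 4 bits per square, plus bookkeeping)...
    return 4 * n * n

def chess_max_moves(n):
    # ...even though a play may last up to 2^n moves before the stalemate
    # rule ends the game, which is what makes it "long".
    return 2 ** n

# A position stays small even when the play is exponentially long.
assert chess_state_bits(10) < chess_max_moves(10)
```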

We assume that each player either wins or loses, and that in 2-player games there is exactly one player who wins. For randomized games, we are interested in finding a strategy with an approximately optimal probability of winning—we care about the difference between winning 60% and 40% of the time, but not the difference between winning 50.00…001% and 49.99…99% of the time.

“Hidden info” means that a player might not know what has happened so far. “Forgetting” means that a player might not remember what they’ve done or seen in the past (equivalently: each “player” is really a team that can’t communicate). We assume “forgetting” implies “hidden info” implies “randomized.”
Example: Chess is deterministic. Snakes and ladders is randomized. Poker has hidden info. Bridge has forgetting.

Long imperfect information games are weird, because the optimal strategy will generally depend on the exponentially long history of everything you’ve seen so far. Depending on the formalization, such games can take doubly exponential time! I’m going to exclude them from the table for a technical reason—you can’t describe the state of such a game in polynomial space, so can’t even state the question “Can you win from this state?”

I’m skipping some details of the definition of “game,” but all the results are the same regardless of how you fill those details in. Go with an exponential time limit is a long 2-player game of perfect information and hence is in EXP. Go with superko is more complicated because the game state depends on every board position that’s ever occurred, so to fit it into this framework we have to view it as a short game on an exponentially bigger state space (which is therefore in PSPACE).

### The complexity of games

We’ll talk about the following four very simple complexity classes:

• NP: Problems whose solutions you can verify in polynomial time.
• PSPACE: Problems you can solve in polynomial memory.
(Fun fact: PSPACE=NPSPACE)
• EXP: Problems you can solve in exponential time.
• NEXP: Problems whose solutions you can verify in exponential time.

For each type of game, it turns out that being able to compute an optimal move for arbitrary games of that type is exactly as hard as being able to solve arbitrary problems from the corresponding complexity class.
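The first row illustrates what this means in one direction: for a 1-player short deterministic game, a winning play is itself a polynomial-size certificate, and an NP verifier simply replays it under the rules. A minimal sketch, using a toy game of my own invention (not from the post):

```python
def verify(game, moves):
    # Replay `moves` from the initial state; this runs in polynomial time
    # because the game is short and its rules are polynomial-time.
    state = game["initial"]
    for m in moves:
        if m not in game["legal"](state):
            return False
        state = game["step"](state, m)
    return game["won"](state)

def line_game(target):
    # Toy 1-player game: walk from 0 to `target` using +1/+2 steps.
    # Any winning play has at most `target` moves, so the game is short.
    return {
        "initial": 0,
        "legal": lambda s: ["+1", "+2"],
        "step": lambda s, m: s + (1 if m == "+1" else 2),
        "won": lambda s: s == target,
    }

assert verify(line_game(5), ["+2", "+2", "+1"])      # a valid certificate
assert not verify(line_game(5), ["+2", "+2", "+2"])  # overshoots the target
```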

Here is the correspondence:

| Game type | Complexity | Citation |
| --- | --- | --- |
| 1-player short deterministic | NP | |
| 1-player short randomized (or hidden info) | PSPACE [!] | IP=PSPACE |
| 1-player long deterministic | PSPACE | |
| 2-player short deterministic (or randomized) | PSPACE | |
| 1-player long randomized | EXP | Appendix |
| 2-player short hidden info | EXP [!] | Short games |
| 2-player long deterministic (or randomized) | EXP | Difficult games |
| Anything with forgetting | NEXP [!] | MIP=NEXP |
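One half of the “2-player short deterministic” row is the classical half: depth-first minimax only needs memory proportional to the length of the game, which is polynomial for a short game (its running time, on the other hand, can be exponential). A sketch on a toy subtraction game of my choosing:

```python
# Toy 2-player game: a single pile of stones, each player removes 1 or 2
# stones per turn, and whoever takes the last stone wins.

def current_player_wins(pile):
    # Depth-first minimax: the recursion depth (and hence the memory used)
    # is at most the length of the game.
    if pile == 0:
        return False  # no stones left: the player to move has already lost
    # The player to move wins iff some move leaves the opponent losing.
    return any(not current_player_wins(pile - take)
               for take in (1, 2) if take <= pile)

# Matches the known analysis: the player to move loses iff pile % 3 == 0.
assert [p for p in range(10) if not current_player_wins(p)] == [0, 3, 6, 9]
```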

The results marked with [!] are the most interesting. I wouldn’t have expected these types of games to be so hard.

It might not look surprising, but it’s actually very magical that types of game correspond to simple complexity classes.

For example, it’s clear that randomized 1-player games are hard, and that e.g. there is no obvious way to solve them in NP. But does that mean that they are as hard as all of PSPACE? It turns out that they are, but for extremely subtle reasons—the proof unavoidably uses the fact that we can extend computations to low-degree polynomials over finite fields, and uses this extension to construct a very strange 1-player game corresponding to an arbitrary PSPACE computation.

Similar things happen for games of hidden information and games with forgetting. All of these turn out to match nicely onto complexity classes, but only for reasons that seem unintuitive and involve using algebraic extensions in quite different ways. If those proofs didn’t work, these types of games wouldn’t correspond to a nice complexity class at all.

### Appendix: long randomized games

It’s straightforward to prove that a long 1-player randomized game is at least as hard as a long deterministic 2-player game, and hence is EXP-complete.

Take a long 2-player deterministic game and replace the second player with randomness. If player 1 follows a winning strategy from the deterministic game, they will win the randomized game with probability 1. If player 1 doesn’t follow a winning strategy, then they will run some risk of getting unlucky.
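This step can be sketched concretely. Below, a toy 2-player game (single-pile Nim: remove 1 or 2 stones, last stone wins—my choice for illustration, not the post’s) has its second player replaced by uniform random moves, and a winning strategy for player 1 wins no matter what the random choices are:

```python
import random

def play_against_randomness(pile, strategy, rng):
    # One play of the 1-player randomized game obtained by replacing
    # player 2 with uniformly random moves.
    mover = 1
    while pile > 0:
        moves = [t for t in (1, 2) if t <= pile]
        take = strategy(pile) if mover == 1 else rng.choice(moves)
        pile -= take
        mover = 3 - mover
    # Whoever took the last stone wins, i.e. the player NOT now to move.
    return mover == 2

def winning_strategy(pile):
    # Optimal play: always leave the opponent a multiple of 3.
    return pile % 3 if pile % 3 else 1

# From a pile of 4, player 1 has a winning strategy in the deterministic
# game, so they win the randomized game with probability 1.
rng = random.Random(0)
assert all(play_against_randomness(4, winning_strategy, rng)
           for _ in range(1000))
```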

The risk of player 1 getting unlucky can’t be too small. In particular, at each node there is some exponentially small probability that the randomness will happen to pick a winning move for player 2. If we multiply together exponentially many exponentially small numbers, we still get an exponentially small number, and we can compute this probability from just the number of moves in the original game and the number of options per move. Let p be this minimal probability.

Now we repeat the original game 100/p times, and declare player 1 the winner if and only if they win every time. This new game still has polynomial-sized states (a state is just a state of the original game plus a separate counter that goes up to 100/p) and is at most exponentially long. If player 1 fails to play an optimal strategy in more than 10% of the repetitions, they have a <1% chance of winning every game. Conversely, if player 1 plays an optimal strategy in every repetition, then they have zero chance of losing.
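The amplification arithmetic checks out numerically: playing a non-winning strategy in 10% of the 100/p repetitions, each of which is then lost with probability at least p, gives a chance of at most (1 - p)^(10/p) ≈ e^-10 of winning them all, comfortably below 1%:

```python
import math

# Winning all repetitions despite playing badly in 10% of them happens with
# probability at most (1 - p)^(0.10 * 100/p) = (1 - p)^(10/p) <= e^-10.
for p in (1e-2, 1e-4, 1e-6):
    bad_repetitions = 0.10 * (100 / p)
    win_all = (1 - p) ** bad_repetitions
    assert win_all <= math.exp(-10) < 0.01
```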

So given a deterministic 2-player EXP-hard game, we convert it into a repeated randomized 1-player game. If player 1 loses the new game, we can conclude there was no winning strategy in the original game. If player 1 wins the repeated random game, we can construct a winning strategy for the original game (with probability 90%) by picking a random repetition and copying the strategy used by player 1 in that repetition.