This book was a bit different – about 1/3 biography of John von Neumann (though really concentrating on his involvement with game theory and nuclear weapons development) and 2/3 intro to game theory, especially with its interpretation and implications for the Cold War.

As for von Neumann – a brilliant man, certainly, but it did not seem like he found happiness. And he died relatively young and very painfully, poor guy. My interest in him actually stems more from what he did for computing (whatever device you are reading this on is an implementation of a von Neumann machine – he basically invented modern computing so he could perform the calculations for his real research more quickly), but that is not discussed much in this book.

The prisoner’s dilemma is a certain formal “game” — a game theory game, not a “fun” game. A prisoner’s dilemma is any situation where the best outcome for the whole group comes when the players cooperate, but where individuals face positive and/or negative incentives to not cooperate, aka “defect.” In the classic example, two men are implicated in the same crime. They are kept in solitary confinement and interrogated separately. The police want someone pinned with the crime, so they offer each man a deal: if he implicates his partner, the partner will get 3 years in prison and the rat (the defector) will go free. If they BOTH try to defect (implicate each other), they both get 2 years in prison. If they both stay silent, they each get 1 year in prison for obstruction of justice.
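The payoffs above can be laid out as a small table; here is a sketch in Python (the move names and layout are my own, not from the book; lower is better since these are years in prison):

```python
# Payoffs as (my_years, partner_years), indexed by (my_move, partner_move).
PAYOFF = {
    ("defect", "cooperate"): (0, 3),    # I implicate him, he stays silent
    ("cooperate", "defect"): (3, 0),    # the reverse: I take the "sucker payoff"
    ("defect", "defect"): (2, 2),       # we implicate each other
    ("cooperate", "cooperate"): (1, 1), # both stay silent
}

# Mutual cooperation minimizes the group's combined prison time.
totals = {moves: sum(years) for moves, years in PAYOFF.items()}
best = min(totals, key=totals.get)
print(best, totals[best])  # ('cooperate', 'cooperate') 2
```

This makes the "best for the group" claim concrete: every other pair of moves costs the two men 3 or 4 combined years, versus 2 for mutual silence.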

For rational players with no ethical or personal reservations, the optimal strategy is to always defect – you’ll get either 0 or 2 years. If you stay silent, you could get 1 or 3 years. The more interesting studies involve an *iterated* prisoner’s dilemma, where the same parties play the same game over and over. This is more akin to real life – if we cheat someone, we might get away with it once, but never again, so we are more likely to cooperate. The whole point of ethics and law is to get us, as a society, to work together – to secure that “best overall” payoff for everyone rather than suboptimal payoffs that benefit a lucky few.
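The "always defect" argument is a dominance argument: whatever your partner does, defecting costs you fewer years than staying silent. A quick check, using the same payoff numbers as the example above (helper names are my own):

```python
# Payoffs as (my_years, partner_years), indexed by (my_move, partner_move).
PAYOFF = {
    ("defect", "cooperate"): (0, 3),
    ("cooperate", "defect"): (3, 0),
    ("defect", "defect"): (2, 2),
    ("cooperate", "cooperate"): (1, 1),
}

def my_years(my_move, partner_move):
    return PAYOFF[(my_move, partner_move)][0]

# For every possible partner move, defecting gives me strictly fewer years.
dominated = all(
    my_years("defect", partner) < my_years("cooperate", partner)
    for partner in ("cooperate", "defect")
)
print(dominated)  # True
```

That is the dilemma in a nutshell: the individually dominant move (defect) leads both players to the 2+2 outcome, worse for everyone than mutual cooperation's 1+1.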

Interesting interpretation of “left vs right” politics in terms of the prisoner’s dilemma: liberals are “cooperators,” willing to risk exploitation in order to increase the common good – e.g. raising taxes to help the homeless, in hopes that the homeless will use the aid to restore their place as contributing members of society. Conservatives, on the other hand, are risk-averse “defectors” seeking the best possible outcome based on their efforts alone. Taxes could be wasted, so the best course of action is to keep them low.

Poundstone mentions one researcher’s (Robert Axelrod’s) “tournament” of iterated prisoner’s dilemma strategies. Each strategy was pitted against all the rest, and an overall winner emerged called “Tit for Tat.” In this simple strategy, the player cooperates on the first move, and thereafter always plays whatever his opponent played in the *last* round. Against a cooperator, the strategy yields maximum overall benefit; if the opponent sometimes defects, then “it isn’t your fault” – you’ll sometimes “tie” with mutual defections and sometimes win big when your retaliation catches the opponent cooperating. This strategy is anecdotally confirmed as pretty good by a few examples of world leaders and negotiators seen as “successful” – willing to cooperate, but also quick to respond to justified provocation.
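Tit for Tat is simple enough to sketch in a few lines. Here is a minimal version of the iterated game (function names are my own; scoring uses the conventional tournament points, where higher is better: 5 for a lone defection, 3 for mutual cooperation, 1 for mutual defection, 0 for the "sucker"):

```python
# Points for (my_move, their_move); "c" = cooperate, "d" = defect.
POINTS = {("d", "c"): 5, ("c", "c"): 3, ("d", "d"): 1, ("c", "d"): 0}

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return "c" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "d"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each player sees the other's history
        b = strategy_b(moves_a)
        score_a += POINTS[(a, b)]
        score_b += POINTS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): steady mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): loses only the first round
```

Against itself, Tit for Tat locks into full cooperation; against a relentless defector it is exploited exactly once, then ties every round after – which is the "it isn't your fault" property described above.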

The Cold War is easily envisioned as a prisoner’s dilemma (or maybe a game of chicken, or perhaps a dollar auction). It would be best for everyone if all nations cooperated and got rid of their nukes entirely. But we cannot do so unilaterally, because we would then be exposing ourselves to the “sucker payoff” with the biggest penalty of all: being at the mercy of, if not annihilated by, the other side should they choose to defect and keep their own nukes.

In the end, though, the Cold War – like almost any real interaction between interested parties – is not as simple as the toy examples provided by game theory. A huge hole arises with the assumption of rational players: people are often *not* rational. And so any strategies derived from complex game-theoretic arguments may be useless anyway.