In a very cool new paper, University of Hamburg economists Khadjavi and Lange tested the Prisoner’s Dilemma with actual prisoners. Interestingly enough, apparently no one had ever bothered to check how prisoners would actually behave in the game named for them.

Before getting to their new results, let’s review the Prisoner’s Dilemma. Imagine you and a cohort have been captured. The interrogator comes to you and makes the following offer. If you rat out your criminal colleague by confessing (defect) and he says nothing, you go free while he serves 3 years. Conversely, if you say nothing (cooperate) and he confesses, he goes free and you serve 3 years. If you both confess, you both get 2 years. If you both stay quiet, you both serve 1 year. We can display the four possible outcomes like this:

Years served (you, him):

                   He cooperates    He defects
    You cooperate      1, 1            3, 0
    You defect         0, 3            2, 2

The optimal outcome for the two of you jointly is mutual cooperation: it produces the lowest total jail time between you. However, in traditional game theory, as John Nash explained, the only stable equilibrium point is Defect-Defect. That means the rational thing for you to do is defect, even though the same is true for the other person. If all you care about is getting the least amount of jail time for yourself, then you should defect. Suppose, for instance, that you know the other person is cooperating. In that case, you have no self-interested reason to cooperate as well; you’d do better by defecting. If, on the other hand, you know the other is defecting, you have no self-interested reason to help them out (and harm yourself) by cooperating. So you should always defect. And the same is true of the other person too.
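The best-response reasoning above can be checked mechanically. Here is a minimal sketch (my own illustration, not from the paper) that encodes the jail terms from the table and confirms that Defect-Defect is the game’s only Nash equilibrium:

```python
# Years of jail time as (yours, his) for each pair of moves.
# This is a cost matrix: each player wants to MINIMIZE his own entry.
JAIL = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

MOVES = ("cooperate", "defect")

def best_response(their_move):
    # The self-interested best reply: the move minimizing your own sentence.
    # The game is symmetric, so the same function serves both players.
    return min(MOVES, key=lambda m: JAIL[(m, their_move)][0])

def nash_equilibria():
    # A pair of moves is an equilibrium when each move is a best
    # response to the other, so neither player gains by deviating.
    return [
        (mine, theirs)
        for mine in MOVES
        for theirs in MOVES
        if mine == best_response(theirs) and theirs == best_response(mine)
    ]

print(nash_equilibria())  # [('defect', 'defect')]
```

Note that `best_response` returns "defect" no matter what the other player does, which is exactly the "you should always defect" argument in miniature.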

Though game theory says you should defect, we’ve known for some time from various experimental results that people tend to cooperate (though not always, and not everyone). So what are we to say about game theory? Does it describe a level of rationality that people are generally incapable of? Or has it missed some important facets of human cognition (and perhaps rationality)? I’ll only mention these questions today. Now let’s move to Khadjavi and Lange’s new results.

Khadjavi and Lange got real prisoners to play the Prisoner’s Dilemma, though not for reductions in sentence (but still for something with real utility to the prisoners). They also had students play the same Prisoner’s Dilemma game. Their results were interesting. When both players acted simultaneously, the prisoners cooperated at a higher rate than the students.

This paper is praiseworthy at the very least for actually including prisoners. As anyone who has ever had to deal with an IRB (institutional review board) to run experiments on human subjects knows, including prisoners among your participants means a lot of extra headaches. German regulations differ from those I deal with in the US, but I’m sure there’s still plenty of bureaucracy around experiments with prisoners. But it’s also a cool result, suggesting that the impetus to cooperate contrary to self-interest is perhaps even stronger than we thought.