
The Prisoner's Dilemma

You and an acquaintance are charged (and rightfully so!) as co-conspirators in a train robbery. The police are interviewing you separately. You can either rat out your buddy or keep silent, and your acquaintance faces the same choice.

If one of you rats and the other remains silent, the one who cooperated with the police gets off free and the other serves 10 years. If you both keep silent, they can only convict on a lesser charge (for lack of evidence), so you each do a year. If you both rat on each other, you each do five years.

Both you and your acquaintance know this information. Assuming neither of you cares what happens to the other, and there are no recriminations in the outside world (we'll revisit both of these assumptions later), what is likely to happen?

Under the assumptions we've made, neither of you has any incentive to help the other. No matter what the other guy does, you get a better result by ratting on him. You both come to the same conclusion, so you both do five years.
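
To see why, here's a minimal sketch in Python. The payoff table just restates the sentences above, with years in prison negated so that higher is better; the move names and dictionary layout are my own labels, not anything standard.

```python
# Payoffs are years in prison, negated so that higher is better.
# Keys are (my move, their move); the numbers come from the story above.
PAYOFFS = {
    ("silent", "silent"): -1,   # both convicted on the lesser charge
    ("silent", "rat"):   -10,   # I keep quiet, they talk: I serve 10 years
    ("rat",    "silent"):  0,   # I talk, they keep quiet: I go free
    ("rat",    "rat"):    -5,   # we both talk: 5 years each
}

# Whatever the other player does, ratting beats staying silent:
for their_move in ("silent", "rat"):
    assert PAYOFFS[("rat", their_move)] > PAYOFFS[("silent", their_move)]
    print(f"if they play {their_move}: rat -> {PAYOFFS[('rat', their_move)]}, "
          f"silent -> {PAYOFFS[('silent', their_move)]}")
```

Ratting is what game theorists call a dominant strategy: it wins in both columns of the table, which is exactly why both players land on it.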

This game is one of the most famous examples in game theory. It presents something of a dilemma: By each choosing the option that serves them best, the two "players" in the game end up with a result (5 years each) that is worse than if they had each kept silent. Choosing the best option individually leaves them worse off as a whole.

The game is traditionally phrased in terms of prisoners, but it applies pretty well to any situation where people have an opportunity to screw someone else over for their own benefit. If it truly is better in each situation to screw the other person, then everyone will end up screwing everyone else (in a bad way), and everyone will be worse off.

I think of this game when I drive up my street after a snowstorm, looking for a parking spot. People on my block (and all over Boston, from what I've seen) have the perverse idea that if they dig their car out of the snow, they "own" the spot they parked it in until the snow melts. They mark their spots with chairs or traffic cones. I've thought about doing the same. On the one hand, I think it's ridiculous for people to "claim" spots, just because they happened to park there before the storm. On the other hand, if everyone else does it and I don't, I can't park anywhere. If everyone else has chosen the selfish option, why shouldn't I? Classic Prisoner's Dilemma.

So does this bode ill for humankind? Is this a game-theoretic "proof" that we're all going to stab each other in the back? To answer these questions, let's look back at the assumptions we made.

First, we assumed that you don't care what happens to the other person. If you do care, you'd be much more likely to keep silent, which would end with a better result for both of you. A little selflessness helps everyone.

We also assumed that there were no consequences to your actions beyond what was spelled out in the game. A lot of ways you can screw others for your own benefit, such as breaking into your neighbor's house, are illegal. Laws can't deal with all prisoner's dilemma situations, but they can eliminate some of the worst ones.

There is a third, hidden assumption we made: that the game would be played only once. If the game is played over and over many times, is it possible for more cooperative strategies to succeed? This question was addressed by Robert Axelrod in The Evolution of Cooperation, where he found that, while selfishness is the best short-term strategy, cooperative strategies win out in the long run if the game is played enough times. More specifically, he identified four hallmarks of successful strategies (here I quote Wikipedia; a small simulation in this spirit appears after the list):

  • Nice: The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does. Almost all of the top-scoring strategies were nice; even a purely selfish strategy should therefore not be the first to "cheat" on its opponent, for purely utilitarian reasons.

  • Retaliating: However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such softies.

  • Forgiving: Another quality of successful strategies is that they must be forgiving. Though they will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.

  • Non-envious: The last quality is being non-envious, that is, not striving to score more than the opponent (a "nice" strategy can never score more than its opponent).
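
Here is the small round-robin tournament promised above, in the spirit of Axelrod's experiments. The four entrants, the standard per-round payoff values (T=5, R=3, P=1, S=0), and the 200-round match length are my own illustrative choices, not his exact setup.

```python
# Each strategy is a function of the opponent's move history ("C"/"D").

def tit_for_tat(opp_history):
    # Nice, retaliating, forgiving: start with C, then copy their last move.
    return opp_history[-1] if opp_history else "C"

def grudger(opp_history):
    # Nice and retaliating, but unforgiving: defect forever after one "D".
    return "D" if "D" in opp_history else "C"

def always_cooperate(opp_history):
    return "C"

def always_defect(opp_history):
    return "D"

# Standard one-round payoffs, keyed by (my move, their move):
# T(emptation)=5, R(eward)=3, P(unishment)=1, S(ucker)=0.
SCORES = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_match(strat_a, strat_b, rounds=200):
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strat_a(history_b)   # each player sees the *other's* history
        move_b = strat_b(history_a)
        total_a += SCORES[(move_a, move_b)]
        total_b += SCORES[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

strategies = [tit_for_tat, grudger, always_cooperate, always_defect]
totals = {s.__name__: 0 for s in strategies}
for a in strategies:
    for b in strategies:
        score_a, _ = play_match(a, b)   # every pairing, including self-play
        totals[a.__name__] += score_a

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```

With this particular field, the nice-but-retaliating strategies (tit_for_tat and grudger) finish on top and always_defect comes in last; the exact rankings shift with the mix of opponents, which was part of Axelrod's point.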

Are these principles to live by? Perhaps. Axelrod and others think the success of these kinds of strategies may help explain the evolution of altruistic behavior in animals. At any rate, it seems to suggest that nice guys can get ahead, if they're willing to be mean at the right moments.

5 comments:

  1. Seems like the gap here could be filled by... morality? If both players had a greater sense that they should do unto others as they would have done unto them, they would both stay silent.

    Of course, telling the truth here is framed as "ratting him out", and they already committed a crime, so there is no simple moral answer.

    I think there is a very strong scientific argument to be made for the benefits of being good to people all the time, and this idea plays into that. I think it's clear that humans have evolved to be social animals that survived by working together, and it makes sense that our best interests now would still be to work together and help out.

    I hope you do a post on karma in the same vein at some point, analyzing it in terms of six degrees of separation and human connections.

  2. "Axelrod and others think the success of these kinds of strategies may help explain the evolution of altruistic behavior in animals. At any rate, it seems to suggest that nice guys can get ahead, if they're willing to be mean at the right moments."

    I just happened across a video illustration of this attitude the other day, or at least I think it is an illustration of this! Check out the Battle at Kruger, which has been out for a while. The payoff for the 'altruistic minded' is near the end, when the buffalo show that a little community spirit can actually produce tangible results.

    Ah, morality. Certainly we can agree that if everyone did the "right" thing, the whole world would be much better off. But can we rely on people to act morally for morality's own sake? Some people will, but more people will do good if they believe it tangibly benefits them somehow.

    There are several studies out there supporting the idea that altruism has tangible benefits. In addition to iterated prisoner's dilemma studies, there is this paper showing an advantage to species that do not overpopulate their environments. So some evidence is there. While it's not true that doing the right thing will always benefit you, or that bad guys always pay in the end, I agree that behaving cooperatively is probably better on average, in the long run.

    Man, that video is sick! (in a good way!)

  4. Was this the study that prompted your post?

    http://www.boston.com/news/science/articles/2008/03/19/it_pays_to_play_nice_harvard_study_says/

    It's kind of an odd twist on the prisoner's dilemma, adding in a punishment factor, but it seems to basically back up the idea that cooperative play is mutually beneficial in the long run.

    of course, a system that makes both players lose money WOULD seem to reduce the net amount of money everyone makes, but I'm sure these guys thought of that...

  5. oops need to add html code to make the link work...

    The Harvard study

