
The Punishment Paradox

Why do attempted murderers get less jail time than murderers? Why is a drunken driver who hits a tree punished less than one who hits a person? If our laws are set up to correct behavior, why do they punish according to the outcome of an action rather than the intention?

Questions like these were raised in a talk I heard today by Fiery Cushman who, in addition to having an awesome name, has done some fascinating research on this subject.

It turns out the phenomenon of punishing by outcomes rather than intentions is reflected not just in our legal system, but in our individual behavior. In a variety of experiments (see his publications page for details), Cushman found that people's decisions to reward and punish, even among children as young as five, are based on the result of someone's actions rather than on what the person was thinking.

Interestingly, when asked if they want to be friends with a person, or whether that person is "good," intentionality becomes much more important. Thus, if you throw a rock at me and miss, I'll think you're a jerk, but I won't chase you down the way I would if you'd hit me. (Hah, I'm actually a wimp. I'd go home and cry. But you get the point.)

Since the talk was part of an evolutionary dynamics class, Cushman turned to the question of how this punishment instinct might have evolved, and why it evolved so differently from moral judgments.

The answers to these questions are still cloudy, but they may have to do with our interactions with the natural (non-human) environment. Consequences in the natural world are based on outcomes: If you climb a dangerous cliff but don't fall, you aren't punished, even though it was still a bad idea. So from these non-social interactions, we're "used" to being punished based on outcomes; in evolutionary terms, we've adapted to it. And according to some of Cushman's experiments, we learn better from outcome-based punishment, because it's what we expect. So punishment evolved to fit our already-established learning patterns. I think. If you're having trouble following this, it's tricky stuff. I can barely follow it myself.

10 comments:

  1. The "instinct" to punish outcomes rather than intentions may have more to do with maximizing the probability of a "correct" decision about a person's behavior than about outcomes versus intentions per se. To maximize "correct" decisions may reduce the likelihood of a negative counter-reaction to oneself and/or one's progeny. One has to go very far back in evolutionary terms, it seems to me, to find a situation where social interactions played no role in setting up expectancies.

  2. "One has to go very far back in evolutionary terms, it seems to me, to find a situation where social interactions played no role in setting up expectancies."

    Yes, this is an important point. But it seems reasonable that the punishment instinct evolved later than the "learning from consequences" instinct.

    "The 'instinct' to punish outcomes rather than intentions may have more to do with maximizing the probability of a 'correct' decision about a person's behavior than with outcomes versus intentions per se. To maximize 'correct' decisions may reduce the likelihood of a negative counter-reaction to oneself and/or one's progeny."

    I think this depends on what you mean by "correct." Correct as in you correctly infer the intentions and character of the other person? Or correct as in your response is judged acceptable by the other person?

  3. By "correct", I meant both in terms of matching the outcome with the actual intent, and in the perpetrator's and other's interpretation of what happened. But, you hit the nail on the head when you said this stuff is tricky! I would love to know, if indeed it is possible to know, whether most injuries an individual experiences that are directly inflicted by another person are on average more the result of intended versus accidental harm. If that's the case, then it seems reasonable to take an outcome-based approach in maximizing a "correct" response. A correct response would be less likely to elicit additional punishment from society or others who would view the response as just. I'm sure Fiery has a view on this. Looking at a later evolution of punishment versus learning instincts, I guess Fiery is saying that an early stimulus-response kind of understanding of the world was the base template over which understanding of perspective and theory of mind was laid. I'm stuck in my thinking here. I keep going back to the cliff climbing example - not falling seems to be more of a reward than a non-punishment, and is highly dependent on the person's view of his/her own intentions in climbing in the first place, and attributions about the reasons for success.

  4. Hey

    I'm on your blog and this "punishment paradox" post is a fun start. I'll be reading :)

  5. A further thought. The cliff example may not really be an example of a "punishment", but of a kind of negative reinforcement, which we know increases the occurrence of a behavior. I'm not sure how Skinner's schedules of reinforcement intersect with all of this in terms of some kind of primal/biological set of expectancies about outcomes versus an organism's cognition about them. If the primal set of expectancies were established evolutionarily before organisms had developed perspective or theory of mind, then, of course, all expectancies would be based on the effect solely on the organism (and maybe the organism's offspring?). I wonder if part of the evolutionary "push" toward more social behavior and social cognition arose as a result of the need to consider the effects of one's behavior on one's offspring as well as on oneself. I guess where I'm going with this is to say that "outcome"-oriented thinking about behavior versus "intention"-oriented thinking may be superstitious, unless it helps to maximize potential harm to oneself and/or one's progeny. Perhaps a clue to changing people's thinking toward specific tragedies or crimes is to turn an intention argument into an outcome argument to appeal to base evolutionary motives?

  6. Oops - I meant to say Minimize potential harm to oneself or one's progeny at the end there!

  7. "The cliff example may not really be an example of a 'punishment', but of a kind of negative reinforcement, which we know increases the occurrence of a behavior. I'm not sure how Skinner's schedules of reinforcement intersect with all of this in terms of some kind of primal/biological set of expectancies about outcomes versus an organism's cognition about them. If the primal set of expectancies were established evolutionarily before organisms had developed perspective or theory of mind, then, of course, all expectancies would be based on the effect solely on the organism (and maybe the organism's offspring?)."

    If by "increases" you mean "decreases" then I think we're in agreement here. Response to conditioning (positive and negative reinforcement) is observed in a wide variety of animals, so it certainly preceded social behavior.

    And clearly punishment must have evolved to sync up with our ability to respond to conditioning. The punisher punishes the offender in order to condition the offender, with the goal of, as you say, minimizing harm to the punisher and his/her children.

    But you make a good point that punishment has to be considered in a social context; in particular, the punishment must be accepted as just by the offender in order for it to be effective. There's some interesting research on this subject which I'll try to write about soon.

  8. I think I meant increases, actually. A negative reinforcer increases the probability of a behavior. Dangerous cliff climbing would be more likely to happen again following a safe arrival at the top, because the potential of falling and injury or death was avoided - all of this without the benefit of any apparent cognition about skill, luck, intentions, etc., since animals such as pigeons, which I (perhaps unjustifiably) assume do not think in terms of intention or luck, can easily be trained with negative reinforcers. By the same token, punishers, which decrease the likelihood of the behavior that preceded them, also operate very effectively in animals that have no sense of justice or even any obvious social organization; though the recent research suggesting that dogs have some kind of justice meter hints that other animals with social organizations may also have this sense of fairness. It would be interesting to compare animals with greater and lesser use of social hierarchies, or societies, if you will, to see whether they show differences in response to schedules of reinforcement placed in a social context versus in isolation. How would the apparent inflexibility of, say, bee social organization differ from the seemingly more flexible dog or primate kind of society? I don't know whether the dog results can be more parsimoniously explained as simple observational learning, or even imitation, rather than some kind of primitive sense of justice. It is fascinating to speculate that justice could have arisen from this kind of observational learning. I know that the role of emotion is postulated to factor into moral decision making. Out of jealousy comes justice!

  9. Interesting topic. I think Lynnwood hit the nail on the head by pointing out that humans presumably were the first organisms that could effectively suss out intention from action. But even so, humans sometimes go to great lengths to hide intention, so even with humans it's not easy to consistently base fairness on intention. Thus, it's not surprising that we confuse intention and action in our mental calculi at times.

  10. Yes, that's a good point--intention-based punishment presumes you can correctly divine another person's intention, which you often can't. This again reduces the evolutionary advantage of intention-based punishers.
