Are these results inconsistent with Fehr and Gächter's results? Not necessarily. Dreber et al. found that, in environments where punishment is an option, cooperators do better than their counterparts in environments where punishment is not an option. But taking the punishment option is almost always a bad idea. So we have something of a paradox here. Your best hope (in these artificial settings) would be to play nice in an environment where punishment is occasionally meted out. But this requires, quite obviously, punishers. And punishers do really badly. In fact, although cooperation increases in the punishment-option setting, the added benefit to cooperators is offset by added losses to punishers, such that the aggregated payoff of all participants was pretty much the same whether or not punishment was an option. How do these results make contact with our more general concern with morality?
It's important to remember that the leading idea under consideration here is not that judging that an individual has violated a moral norm involves judging that he should be punished. The leading idea is that judging that an individual has violated a moral norm involves judging that he deserves to be punished. This difference may sound meager, but it's not. For if it's true that natural selection favored a moral sense like ours, then we should not observe individuals reflexively punishing others, for, as Dreber et al. seem to show, this strategy founders. Instead, we should observe something less, something more restrained. And that is indeed what we see. People are quick to identify wrongs, but they do not blindly retaliate. When someone cuts us off on the highway, we do not automatically speed up and do the same in retaliation. While we don't hesitate to think that the jerk deserves to be cut off, we do hesitate to do so. Our disapproval is handled differently (with unmentionable expressions, say, or a choice hand gesture). It would appear that, in many instances, retaliation is replaced by feelings or affective judgments. What does seem consistent is the tendency to avoid the wrongdoer. If we've been egregiously wronged, the need for retaliation can indeed carry us away (see, for example, Hamlet). But when the wrong is less than capital, we simply “write the person off.” Or, just as Dreber et al. found, we respond to defection by defecting ourselves – not by costly punishing.
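For readers who like to see the mechanics, here is a minimal sketch, in Python, of the kind of repeated game at issue. It is not Dreber et al.'s actual protocol: the payoff numbers, the "noisy defector" partner, and the strategy and function names below are hypothetical stand-ins, chosen only to illustrate why answering defection with defection tends to beat answering it with costly punishment.

```python
import random

# Each move yields (effect on self, effect on opponent). The hypothetical
# values mirror the structure "cooperation helps the other at a small cost,
# defection helps yourself at the other's expense, punishment hurts both,
# the target much more."
EFFECTS = {
    'C': (-1, +2),   # cooperate: pay 1, give the other 2
    'D': (+1, -1),   # defect: gain 1, take 1 from the other
    'P': (-1, -4),   # punish: pay 1, strip 4 from the other
}

def reciprocator(opponent_history, retaliation):
    """Cooperate unless the opponent defected last round; then retaliate,
    either by defecting ('D') or by costly punishment ('P')."""
    if opponent_history and opponent_history[-1] == 'D':
        return retaliation
    return 'C'

def noisy_defector(opponent_history, error=0.5):
    """An unreliable partner who defects half the time, regardless of history."""
    return 'D' if random.random() < error else 'C'

def play(strategy_a, strategy_b, rounds=50):
    """Play a repeated game and return each player's total payoff."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)          # each player reacts to the other's past
        move_b = strategy_b(hist_a)
        self_a, a_gives_b = EFFECTS[move_a]
        self_b, b_gives_a = EFFECTS[move_b]
        score_a += self_a + b_gives_a
        score_b += self_b + a_gives_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

random.seed(1)
trials = 200
defect_back = sum(play(lambda h: reciprocator(h, 'D'), noisy_defector)[0]
                  for _ in range(trials)) / trials
punish_back = sum(play(lambda h: reciprocator(h, 'P'), noisy_defector)[0]
                  for _ in range(trials)) / trials
print(f"avg payoff, retaliate by defecting: {defect_back:.1f}")
print(f"avg payoff, retaliate by punishing: {punish_back:.1f}")
# The defect-reciprocator typically ends up well ahead of the costly punisher.
```

Against a partner whose behavior it cannot change, punishment is pure cost to the punisher, which is one way of seeing why punishers fare so badly.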
All of this research on punishment, however, fails to answer a deeper question: why punish? When we punish others, what's driving us? How do we justify (to ourselves, if you like) making wrongdoers pay? These questions encourage us to think harder about punishment's role in our moral sense and may offer us an independent line of support for the idea that our moral sense was indeed an adaptation. The psychology of punishment is quite a new area of research, but some of the findings are suggestive.
Psychologists Kevin Carlsmith, John Darley, and Paul Robinson (2002) tried to get to the bottom of our “naïve psychology of punishment” by testing to see which features of moral norm violations influence us the most in punishment decisions. More specifically, the tests were designed to uncover which of two competing philosophies of punishment people generally adhered to. One philosophy of punishment – the deterrence model – is “forward-looking”: we punish for the good consequences that follow. Punishment deters not only this perpetrator, but also potential perpetrators, from committing similar violations in the future. The other philosophy of punishment – the retributive or just deserts model – is “backward-looking”: we punish because a wrong was committed and the perpetrator deserves to be punished. The punishment is proportionate to the crime, for the aim is to “right a wrong.” Interestingly, when subjects were presented with these two models of punishment, they generally had “a positive attitude toward both” and “did not display much of a tendency to favor one at the expense of the other” (Carlsmith et al. 2002: 294). However, when the subjects were given the opportunity to actually mete out punishment (either in terms of “not at all severe” to “extremely severe” or in terms of “not guilty” to “life sentence”) in response to a specific act of wrongdoing, subjects operated “primarily from a just deserts motivation” (2002: 289). That is, subjects seemed to be responding almost exclusively to the features picked out by the just deserts model (for example, the seriousness of the offense and the absence of mitigating circumstances) while ignoring the features picked out by the deterrence model (for example, the probability of detection and the amount of publicity). The upshot is that, while people may express general support for differing justifications of punishment, when it comes to dealing with a specific case, they are almost always driven by “a strictly deservingness-based stance” (2002: 295).
The results of Carlsmith et al.'s study are consistent with the philosophical picture of moral judgment sketched in the previous chapter. According to that picture, part of the process of making a moral judgment involves judging that someone who violates a moral norm deserves to be punished. What the present research indicates is that moral outrage drives the desire to punish. We do not punish because it deters. We punish because the punished deserve it.
The categorical nature of punishment (i.e., that deservingness is not contingent on the consequences of punishment) does, however, suggest something deeper about punishment and the evolution of morality: if people cannot be reasoned out of the desire to punish, then getting caught for wrongdoing just about guarantees punishment (be it moderate or costly). If you get caught violating a social or moral norm, don't expect your neighbors to be open to discussion – about, say, the disvalue of punishing you. The sense that you deserve punishment is virtually automatic. After all, as Fehr and Gächter (2002) found, the drive to punish is largely a product of anger – not reason. And anger comes unbidden. What this means, I would argue, is that in social settings of this sort there would have been considerable pressure on individuals to avoid getting caught in an act of wrongdoing. And how does one do that? Avoid wrongdoing in the first place! This is effectively the strategy that subjects in Fehr and Gächter's study eventually adopted. When punishment was an option, subjects began to “straighten up and fly right.” Instead of investing a little of their money and hoping others invested heavily, they put their trust in the group. They had learned that pretty much anything else guaranteed a punitive response. True, punitive responses could be costly (as Dreber et al. demonstrated),3 but they could also drive up and secure cooperative arrangements. In the next section, I want to connect this discussion of punishment with the experience of guilt.
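Again purely for illustration, here is a rough sketch of a single public goods round with a punishment stage. The endowment, multiplier, fee, and fine below are hypothetical stand-ins rather than Fehr and Gächter's actual parameters, and the function name is my own; the point is only to show how the mere presence of punishers makes free-riding unprofitable, so that contributing to the group becomes the self-interested choice.

```python
# Hypothetical setup: 4 players, an endowment of 20, the common pot multiplied
# by 1.6 and shared equally; afterwards, peers pay FEE to dock a laggard FINE
# for every 5 points withheld relative to the top contributor.
ENDOWMENT = 20
MULTIPLIER = 1.6
FEE, FINE = 1, 3

def public_goods_round(contributions, punishers_present=True):
    """Return each player's payoff after the contribution and punishment stages."""
    pot = sum(contributions) * MULTIPLIER
    share = pot / len(contributions)
    payoffs = [ENDOWMENT - c + share for c in contributions]
    if punishers_present:
        max_c = max(contributions)
        for i, c in enumerate(contributions):
            shortfall_units = (max_c - c) // 5   # how far below the top contributor
            if shortfall_units <= 0:
                continue
            for j in range(len(contributions)):  # every peer docks the laggard
                if j != i:
                    payoffs[i] -= FINE * shortfall_units
                    payoffs[j] -= FEE * shortfall_units
    return payoffs

full = [20, 20, 20, 20]
free_ride = [0, 20, 20, 20]
print("everyone contributes, punishment available:", public_goods_round(full))
print("one free-rider, punishment available:      ", public_goods_round(free_ride))
print("one free-rider, no punishment:             ", public_goods_round(free_ride, False))
# Without punishment the free-rider comes out ahead; with punishers around,
# the free-rider's payoff collapses, so the self-interested response is to
# contribute - to "straighten up and fly right."
```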
4.4 The Benefits of Guilt
Return to our year-long Prisoner's Dilemma game example. The preceding discussion of punishment should make it patently obvious – if it wasn't already – how important it would be for me to pay attention to what others are doing and saying (and you bet I am: after all, I have a year of this game). Others will be quick to pick up on accusations. In the event that I am called out for breaking my promise to cooperate with you, it would be wise to do damage control. I might accuse you of lying. I might claim my defection was an accident (“You mean, D is for defection?!”). Probably the most effective response would be contrition: I was foolish, I did wrong, I'm sorry. And most importantly: It. Won't. Happen. Again.
Better yet, if I could actually feel contrite or guilty, this would do more to repair my image than anything I might say. Socrates had it right: “The way to gain a good reputation is to endeavor to be what you desire to appear” (see Plato's Phaedo). To really feel guilty signals to others, first and foremost, that I'm actually experiencing my punishment, the punishment of my own conscience. And this is just what you and the community seek. After all, if I break my promise to you, retaliation is going to be one of the central things that crosses your mind. One reason has to do with what punishment can do for you. Retaliation is the urge to “get back” at someone for something he's done wrong. In our case, it might involve refusing to play me again, telling others about my double-crossing, punching me out. But the urge to punish protects your own interests. This won't likely cross your mind, but punishment serves the purpose of either banishing the wrongdoer from the group (“Whatever you do, don't play him”), eliminating the possibility of getting double-crossed by me again, or spurring me to feel bad about my action. In the latter case, a connection between defection and psychic harm is created.
This can be good for you in several ways. In the short run, my feelings of remorse might prod me to make amends, to correct the harm. This can benefit you directly. In the long run, my feelings of being chastised function as a sort of internal check on future promise-breaking. Here you might benefit either directly – when, for example, I cooperate with you in a future game – or indirectly – when I do not undermine the general level of trust in a group. Either way, punishment pays.
The general lesson, though, is worth repeating. Through the mechanism of punishment and the corresponding feelings of guilt, a group can effectively insulate itself from cheaters, and this benefits everyone.4 Gossip helps to inform us of who to keep close and who to watch out for. (Does this explain the strange attraction of reality television? We can't get enough of the backbiting and double-crossing and scheming. There's a reason we call gossip “juicy,” for it is to the social mind what fatty foods are to the taste buds.) Does all this mean that guilt serves only the interests of others? No. My feelings of guilt can serve my own interests as well. In most cases, they drive me to repair the damage I've done to my own reputation. I take steps to re-enter the group of participants, to present myself as trustworthy after all, and my emotions can mediate this process. He who feels no remorse might calculate ways of re-entering the group, but people are surprisingly good at smelling out a fraud. People can usually distinguish between he who merely “goes through the motions” and he who goes through remorse. The best way to signal to others that you feel bad about your behavior is to really feel bad about your behavior. Remember, we need not assume that you are aware of any of this. Your feeling the need to retaliate, my feelings of remorse, these are quite automatic. And that's a good thing if their behavioral consequents are to do their job: your desire for retaliation must be genuine, not calculated, if I am to believe that my act of promise-breaking has consequences. By the same token, my feelings of remorse cannot be staged, if I am really to convince you that I can be trusted in the future.
The economist Robert Frank (1988) has suggested that emotions like guilt are hard to fake precisely for this reason: they signal to others that one's remorse is genuine. The cost of sacrificing the ability to control some of our emotions is more than made up for by the trust others put in us.
What I've tried to show in this section is that the evolutionary account of morality provides a plausible explanation of why moral thinking is connected to punishment and guilt. An individual who, when wronged, did not threaten retaliation or regard punishment as justified was more or less inviting others to exploit him. In the absence of a protective social structure, like a family, such an individual would have been at a distinct disadvantage among his peers. Likewise, an individual who was incapable of experiencing guilt and who could not put on an adequate show of remorse (in most cases, by actually experiencing remorse) was similarly at a disadvantage, since he would over time repel potential partners.
The story just told enjoys growing support. Versions of it appear on websites and in popular scientific magazines. The online journal Evolutionary Psychology regularly features scholarly articles exploring different facets of the story. Even the New York Times recounted it in a feature article in the Sunday magazine. But is the popularity premature? Are we celebrating merely a good story – or a true story? Since its introduction, some theorists have questioned the story's legitimacy. In this section I want to discuss several objections to the account just offered. One objection, despite its initial appeal, is unlikely to unseat the story. That objection runs like this: even if moral thinking happened to evolve in a particular group according to the preceding story, it would ultimately be overrun by mutant immoralists or amoralists. Two objections that cannot be so easily dismissed are these: first, while the story does an adequate job of explaining our moral attitudes toward cooperation, promise-breaking, and the like, it does not so easily explain our moral attitudes toward, for example, ourselves, the unborn, or the terminally ill. It's hard to see how these subjects could be captured by an account modeled on Prisoner's Dilemma-style games. Second, when we turn our attention to moral attitudes across different cultures, there appears to be substantial variation; on its face, this is not what the evolutionary account predicts. Some have thus speculated that the evolutionary story only goes so far: it may explain certain other-regarding feelings (e.g., altruism), but what we regard as distinctly moral thought is the result of local training. In short, evolution endowed us with powerful learning mechanisms, not an innate moral sense. Let's consider these objections in turn.
4.5 A Lamb among Lions?
Some have expressed doubts that moral thinking could hold its own among individuals who have no concern for morality. Since an individual who possessed a moral conscience would be reluctant to capitalize on golden opportunities (that is, opportunities to advance one's own interests at someone else's cost but without being detected), and since he could be counted on to cooperate regularly, he would seem to do worse than an individual who had no moral conscience but who could feign moral feelings. Even if the advantage were slight, over many generations this latter kind of individual would come to dominate a population. Moreover, it's hard to see how a population of non-moral individuals could ever be overrun by moral individuals. So, the objection concludes, moral thinking exists in spite of – not because of – evolution by natural selection.