Would You Kill the Fat Man?
Author: David Edmonds
Suppose that we are in a boat in a storm and we see two capsized yachts. We can either rescue one person clinging to one upturned yacht, or five people whom we cannot see, but we know are trapped inside the other upturned yacht. We will have time to go to only one of the yachts before they are pounded onto the rocks and, most likely, anyone clinging to the yacht we do not go to will be drowned. We can identify the man who is alone—we know his name and what he looks like, although otherwise we know nothing about him and have no connection with him. We don’t know anything about who is trapped inside the other yacht, except that there are five of them.[7]
Countless studies show how our moral decisions—such as how much charity we donate to a cause, or how harsh we think a punishment should be—are significantly influenced by whether we can identify the person or persons affected by our actions.[8]
But surely, says Singer of his example, we should save the five—even if evolution has, as it were, inveigled us into caring more about the victim we can identify. And we need to draw an obvious conclusion from Singer’s scenario: some of our moral instincts are inappropriate for our age, an era in which people live in large, anonymous groups in an interconnected world.
The forces of evolution have shaped our moral instincts in another sense. Evolution has provided us with heuristics—rules of thumb—about how we should behave. Rules of thumb are convenient since we do not have infinite time, money, or information to work out what to do on each occasion. They’re useful for navigating complexity, and decision making is routinely complex. But although heuristics may work for us on most occasions, they can also let us down. For one thing, as already discussed,[9] rules may conflict, so we need a procedure for resolving clashes. “Save lives” and “Do not lie” will clash if we have to lie to save lives. Moreover, sometimes a rule will use a cue, or signal, or proxy, and this can produce both false positives and false negatives. Take the heuristic rule against incest: there are rational medical and biological reasons not to reproduce with a sibling. Evolution appears to have given us a rule of thumb that discourages incest: do not find sexually attractive another person with whom you’ve been raised. It’s a rule that has served us well. But it can lead to problems when siblings are separated in childhood and find each other attractive when they meet later in life, and it has caused a crisis in the kibbutzim in Israel, where children from different families are raised communally and grow up feeling little sexual attraction to one another—with a low rate of marriage within the kibbutzim as a consequence.[10]
Farewell Freedom
At one level it’s scarcely surprising that the scientist can contribute to the Fat Man dilemma in particular and to the understanding of morality in general. Of course there is a link between the brain and morality. It is impossible to conceive how it could be otherwise. Our behavior and our beliefs have to be the product in part of our neural circuitry. Without brains there could be no beliefs.
But what’s new is our growing understanding of how the architecture and engineering work: which bits of the brain do what, and how they are connected. It’s a debate that is relevant to the encroachment of neuroethics into the law. In the future, we can expect more pleas of mitigation of the form, “It wasn’t me, it was my brain.” Our system of justice rests on the notion that humans are free to act, free to choose. We don’t hold a person responsible for an action that they were forced to perform. And the more we discover about the brain, and the more we can explain and predict action, the smaller becomes the space available for the operation of free will—or so it might seem.
However, “compatibilists” maintain that free will is consistent with a full causal explanation of our thoughts and actions. Even if a gigantic computer programmed with zillions of bytes of data could accurately predict a person’s actions, that wouldn’t imply—insists the compatibilist—that action was not free. This seems a puzzling claim, to me at least, though the compatibilist position on free will is probably the most popular among philosophers writing on this subject. But whatever position one adopts in this perennial debate, it’s inevitable that the courts will increasingly be asked to take into account biologically grounded excuses and pleas for mitigation based on brain scans and medical evidence.
Consider the example of a predatory sexual harasser in the year 2000. This middle-aged American male had spent years happily married, exhibiting no unusual sexual proclivities. Almost overnight he developed an interest in prostitution and child pornography. His wife became aware of this, and when he began making advances toward his stepdaughter, she informed the authorities. Her husband was found guilty of child molestation and sentenced to rehabilitation. That did nothing to discourage him: he carried on harassing women at the center where he was undergoing his rehabilitation. A jail sentence seemed inevitable.
For some time he had been beset by headaches, and these were becoming more intense. Just hours before his sentencing he went to the hospital, where a brain scan revealed a massive tumor. Once this was removed, his conduct returned to normal. That could have been the end of the story, but six months later the wildly inappropriate behavior started up again. The man went back to the doctors. It turned out that a part of the tumor had been missed in the first operation and had now expanded. A second operation was entirely successful and had an instantaneous effect on the patient’s aberrant sexuality. The man was spared jail.
A tumor is an extreme example. Few would hold a person responsible for his actions if such a growth had radically altered his decision making. But in the future, neuroscientists will point to other physical causes that we don’t currently categorize under terms like “disease,” “illness,” or “condition.” A neuroscientist might say, “Mary’s shoplifting can be explained by the chemical composition and synapses in her brain.” It’s not obvious why this excuse would be, in theory, any less convincing than one that references a tumor.[11]
One important means by which neuroscientists are learning about the relationship between the brain and ethics is through atypical cases, arising from accidental lesions and disease. Although neuroethics is a niche area, the emerging picture of ethics has similarities to that being drawn by specialists in other parts of the brain, be it language, the senses, face recognition, the relationship between the brain and the body, or consciousness. The brain is a delicate, intricate, interlinked construction, in precarious equilibrium, and the absence or removal or mis-wiring of one tiny piece of the engineering can produce weird phenomena and curious behavior.
Capgras syndrome is the perfect illustration. Capgras is a condition in which a person believes that his wife or father or close friend has been replaced by an imposter. In the past those making such a claim were quickly labeled insane. But neuroscientists like Vilayanur Ramachandran, intrigued by such cases, sought a physiological explanation—and came up with a simple one. Most of us are superb at recognizing faces and storing information about them: if asked, we may not be able to articulate how the faces of two brothers differ, but in their presence we have no trouble telling them apart. This vital skill appears to depend on the normal functioning of a particular part of the brain called the fusiform gyrus. Damage to this area can lead to prosopagnosia, a condition in which patients cannot distinguish faces. According to Ramachandran, patients with Capgras syndrome have normally functioning face recognition, but there is some kind of transmission problem connecting the fusiform gyrus to the limbic system, central to our emotional life. The absence of any emotional kick when Capgras sufferers see a person with their mother’s face leads them to conclude that this person is some kind of charlatan.[12]
Dual Systems
The typical ethical outlook of the typical human being relies on a balance of neural systems.
Joshua Greene initially saw the opposition as one between emotion and calculation; Haidt, between emotion and reason (and in his more recent work, between automaticity/intuition and reason); Daniel Kahneman, the Nobel Prize–winning psychologist, between fast and slow systems.[13]
These dual systems need not be entirely independent of one another. Thus, even if, as Haidt insists, emotion is in the driver’s seat, reason might have acted in an earlier and influential role as driving instructor. For example, in most of the developed world, homosexuality doesn’t repulse people in the way that it used to—so people are less likely to judge it wrong. But reason, presumably, played at least some role in altering the social norm that found homosexuality disgusting.[14]
Many of those working on the science of morality believe that their findings have normative import. Thus Haidt says that in his incest scenario, people should overcome their emotional Yuk! reaction—reason tells us that there can be no objection to a relationship between two consenting adults where no harm is done. And Greene argues that our automatic responses to situations—though hugely useful—can also misfire, and that in moral dilemmas our calculating side should take primacy: we should shift into manual mode. We should push the fat man, despite our instinctive abhorrence of doing so. Peter Singer agrees: if resistance to pushing the fat man is driven by the brain’s emotional mechanisms, we should overcome our squeamishness.[15]
Some people have little or no squeamishness. What makes some people more utilitarian than others is now under investigation. People who are good at visual imagery have weaker utilitarian instincts (presumably the image of killing the fat man strikes them with greater force).[16]
If subjects are forced to think longer about a problem, their judgment will be more utilitarian than if they have to give an instant response.[17]
That emotions are linked to the frontal lobe of the brain has been known at least since the iron rod transformed Phineas P. Gage. We can guess at what Phineas Gage’s post-accident reaction would have been to the imaginary rail disasters in trolleyology. In the past few years, studies have been carried out on people with damage to the ventromedial prefrontal cortex.[18]
Such patients are more blasé about the fate of the fat man. Damaged patients are about twice as likely as normal people to say that it’s acceptable to push the fat man to his death to save other lives. There are similar findings when patients are asked about some of the heart-stopping cases discussed earlier, such as the parents hiding from the Nazis who must suffocate their child to prevent the entire group being discovered and killed. Damaged patients feel less internal conflict than the healthy: that suffocating the child is the right thing to do seems more obvious to them. They have a reduced emotional response to causing harm.
There are also related studies on psychopaths. Psychopaths, and those with psychopathic traits, tend to be more likely than others to endorse direct harm in trolley-like scenarios.[19]
Some psychologists have turned their specialist eye on rigid utilitarians like Jeremy Bentham: one paper posits that his moral outlook is linked to a diagnosis of Asperger’s Syndrome.[20]
It’s not easy working out the implications of these studies for morality. If there is a link between a certain type of brain damage and utilitarianism, are we to infer that sometimes brain-damaged patients have clearer moral vision than others? Or should we instead take such findings as evidence that there is something not fully rounded about utilitarianism—and that those who advocate the pushing of the fat man have a fundamental flaw in their ethical apparatus? The latter is at least plausible. Since psychopaths are poor at judging what is right in certain uncontentious cases, it seems reasonable to conclude that their judgment is also suspect in trolley cases. In other words, the fact that psychopaths are more likely to endorse the killing of the fat man provides weak evidence that killing the fat man is wrong.
Neurobabble
Neuroscience is muscling in on many disciplines. It is new, exciting, and producing fascinating results. But it has fierce critics, particularly when it purports to shed light on ethics. One line of attack is that it is flawed methodologically: that it is poor science.
Brain-scanning is indeed still a crude tool with crude measurements. And gauging the response of subjects while they are lying prone in a long tube can hardly replicate any real-life dilemma. However deeply the patients immerse themselves in the dilemma, however successful they are in imagining themselves inside it, in suspending disbelief, they’re unlikely to feel the thumping heart, the sweaty palms, the fear, panic, and anxiety of real life. The ordinary sounds, smells, and sights are absent. There is no chatter or rumbling street noise in the background, no raindrops or sunshine.[21]
The point is not that sunshine ought to affect our decisions. Whether or not I donate to drought victims on the far corner of the globe should not depend on how my mood is altered by the weather. But real life does contain multiple influencing factors, so we should be wary of extrapolating from the white tube to real life.
But there’s a more fundamental objection to the claims of neuroscience. The gravamen of the charge is that there’s some sort of category error involved. The twentieth-century British philosopher Gilbert Ryle, who introduced the notion of the category error, illustrated it with the example of the American tourist who arrived at Oxford and, after seeing the Sheldonian Theatre, the Bodleian Library, and the colleges and quads, innocently asked, “But where is the University?” as though the university were somehow a separate physical entity.