This would have radical implications, not least in warfare. The future of warfare is robotic warfare, in which machines will have growing “autonomy” to make decisions without direct human oversight.14 It would be naive to believe that machine “agents,” such as those in the novels of Isaac Asimov or in movies like Blade Runner, are any longer confined to the realm of fiction.

The Google Driverless Car is in an advanced stage of development. In cities around the world, driverless trains, already a feature at numerous airports, are now being introduced. In Copenhagen, for example, computers control almost everything centrally. One could imagine that a runaway driverless train might face the “choice” between killing five and killing one—and that it could be programmed to respond to pertinent characteristics of the situation.

Artificially intelligent machines—be they driverless trains or gun-wielding robots—might even “behave” better than humans. Under stressful conditions—under fire for example—humans might push the fat man, an action which on reflection they might regret. Machine “decisions” need not be impaired by any rush of adrenaline.

The only (!) thing the software engineers need to agree on is what the moral rules are …

PART 3

 

Mind and Brain and the Trolley

CHAPTER 12

 

The Irrational Animal

 

I can calculate the motion of heavenly bodies but not the madness of people.

 

—Isaac Newton

’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.

 

—David Hume

When a man has just been greatly honored and has eaten a little he is at his most charitable.

 

—Nietzsche

 

WHEN IT COMES TO THE FAT MAN, the philosopher wants to know the answer to a moral question: should we push him to the great beyond? The philosopher is interested in normative (value) questions—such as, how should we lead our lives?

Can the scientist help? The scientist, here, is broadly defined to include the psychologist and the neuroscientist. Typically the scientist is interested in different, non-normative, questions. Why do we give the answers we give? How do we reach our judgments? What influences our behavior? The Scottish Enlightenment philosopher David Hume (1711–1776) insisted that there was a distinction between fact and value—so no description of how we do judge can determine how we should judge. After all, if it turned out that we humans were innately disposed to be racist (or at least to favor our in-group over some out-group), that wouldn’t be evidence that racism was in any way acceptable. But a generation of scientists have begun to investigate the trolley problem—and some of them claim that certain empirical discoveries have normative implications.

Bread and Clutter

 

For those wishing to believe that humans are governed by the dream team of reason and benevolence, much of the work of social psychology is unsettling. The experiments conducted by the Yale psychologist Stanley Milgram in the 1960s demonstrated that many people are willing to put their consciences to one side when told by an authority figure to perform a bad action—in this case to turn a dial to give other people an electric shock.1 The prison experiments conducted by the Stanford psychologist Philip Zimbardo also showed how badly people can behave when given (pseudo-)legitimate power. In a role-play, some subjects were assigned the role of guard, others that of prisoner, and they were put in a mock dungeon. Many of the “guards” quickly began to exhibit sadistic tendencies toward the “prisoners.”

In another oft-recited experiment, divinity students at the Princeton Theological Seminary were informed that they had to give a presentation on the parable of the Good Samaritan.2 As they were dispatched across the quad to deliver it, some of them were told that they were a few minutes late. Before they reached their next destination they encountered a man slumped in an alleyway, coughing and moaning and clearly in distress. The vast majority of those who thought they were in a hurry ignored the man. Some literally stepped over him.3 The result was surprising; one might have expected those reflecting on the Good Samaritan to recognize that helping a stranger was, in the grand scheme of things, more important than being punctual for a seminar.

Still, at least a rationale of sorts could be offered for their conduct: it’s not considerate to keep people waiting. But more recently there has been a plethora of studies showing that our ethical behavior appears to be linked to countless irrational or nonrational factors. For example, before mobile phones became ubiquitous, one American study showed that when subjects emerged from a public phone booth, they were much more likely to help a passer-by who dropped a pile of papers if they had first found a dime in the return slot of the phone. This nano-fragment of good fortune, of negligible monetary value, had a huge impact on how people acted. Yet another study found that our behavior is affected by smell. We’re more likely to be generous toward others if we’re outside a bakery, breathing in the delicious aroma of baking bread. Whether the desk on which we’re filling out a questionnaire is tidy and clean or messy with sticky stains can influence our answers to moral questions, such as opinions on crime and punishment. Scarily, the chance that a judge will rule that a prisoner be granted parole appears to depend on how long it’s been since the judge’s last meal.4

Although we like to fool ourselves into believing that we freely make decisions in the light of informed and reasoned reflection, the growing evidence from experimentation is that reason often takes a back seat to unconscious influences. Certainly our behavior is far more “situationist”—affected by a multitude of circumstances—than we might previously have imagined, and the research is a blow to the idea that character traits are stable and consistent, that the brave person will always be brave, the stingy person stingy, and the compassionate person compassionate. This has implications for government and education policy. Perhaps we should be focusing more on shaping conditions than character. As Anthony Appiah puts it: “Would you rather have people be helpful or not? It turns out that having little nice things happen to them is a much better way of making them helpful than spending a huge amount of energy on improving their characters.”5

Three-dimensional Trolley

 

Trolleys have provided plenty of buffet-carriage fodder for psychologists. Philosophers have posed the trolley dilemmas in seminar rooms or on paper or on screen. But reading text on a screen doesn’t even approximate to real-life situations.

So how could one ever engineer realistic trolley-like scenarios for unsuspecting subjects? Testing for real-life trolley reaction is not as straightforward as testing for a behavioral effect of the smell of baking bread or tinkering with the conditions that might influence people to help a stranger in distress.

This hasn’t stopped ingenious psychology experimentalists from trying. A study conducted in 2011 placed subjects in a 3-D virtual-reality environment. In one scenario, the trolley was heading toward five and subjects could turn it to hit the one. In the other, the trolley was in any case en route to hitting the one, and so subjects needed to do nothing to spare the five (although there was the option of turning the trolley so that it did hit the five). In an attempt to replicate reality, shrieks of distress became audible as the train careened toward those on the track. The study raised ethical issues of its own: several people were so disturbed by the experiment that they withdrew from it. In both cases, the vast majority of those who persisted with the experiment chose to kill or allow the one to die, to spare the five. But when positive action was required to save the five, subjects became more emotionally aroused than when they had to do nothing to achieve the same outcome.6

Psychologists have also altered other variables. One experiment divided subjects into two groups. Before the first group was exposed to the trolley problem they were shown a funny five-minute clip from the television show Saturday Night Live. The second group had to sit through part of a tedious documentary about a Spanish village. Those who had been exposed to the comedy, and so (presumably) were contemplating matters of life and death in a jaunty mood, were more likely to sanction the killing of the fat man.7

Our reactions can even be influenced by the name accorded the fat man, as another study revealed. Subjects were offered the choice between pushing “Tyrone Payton” (a stereotypical African American name) off the footbridge to save one hundred members of the New York Philharmonic and pushing “Chip Ellsworth III” (a name conjuring up white Anglo-Saxon old money) to save one hundred members of the Harlem Jazz Orchestra. The researchers discovered conservatives were indifferent between these options, but at the hands of liberals, aristocratic Chip fared less well than Tyrone. Perhaps liberals were bending over backward not to be racist—or perhaps Chip Ellsworth III conjured up an image of wealth and privilege and they were motivated by egalitarian considerations (or, less charitably, envy?).8

Intriguingly, although only 10 percent of people would push the fat man, we tend to have far stronger utilitarian instincts when the dilemma is presented with animals instead of humans. Thus, one study asked subjects whether they would push a fat monkey off the footbridge to save five monkeys. The answer was “yes.” People do not object to treating animals as means to a greater end. Our typical reflexes about animals are not Kantian but Benthamite.9

Janet and Jon

 

Although there are an infinite number of factors that might potentially influence our behavior and moral judgments, a consensus is emerging that there are two broad processes involved. Exactly how to characterize these two processes, and the balance of power between them, is contested territory. But the dichotomy, drawing on twenty-first-century tools and methods, echoes a much older clash between the two most important philosophers of the eighteenth century—David Hume and Immanuel Kant. “Reason is, and ought only to be the slave of the passions,” wrote Hume.10 Kant held, on the contrary, that morality must be governed by reason.

In pioneering papers such as The Emotional Dog and its Rational Tail, the psychologist Jonathan Haidt argues that the emotions do much of the heavy lifting. Haidt has principally interrogated aspects of our morality that provoke disgust, or YUK! reactions. Take the imaginary scenario for which he is probably best known. Julie and Mark are siblings traveling in France on summer vacation from college. One night they’re staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least, it would be a new experience for each of them. Julie is already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide never to do it again. They keep that night they slept together as a special secret, one that makes them feel even closer to each other.11

If you fail to find the idea of Julie and Mark having sex a bit yucky, well, at the very least you’re in a small minority. Haidt found that, to varying degrees, almost everyone he questioned thought the siblings’ behavior was morally reprehensible. But when he questioned people about why it was wrong, his subjects struggled to account for their feelings. Thus, they might first say that they were worried that any offspring from the sexual act might have genetic flaws, until reminded that there would be no offspring since two forms of contraception were used. Or they might raise concern at the long-term psychological impact, forgetting that for Julie and Mark the experience was an entirely positive one.

So here was an example in which nobody was harmed, and yet people still felt an immoral act had taken place: they somehow could not quite pinpoint why it was wrong. Baffled and frustrated, they ran out of explanation. They resorted to comments like, “Well, I just know in my gut it’s wrong.” Haidt gave this feeling a name: he labeled it “moral dumbfounding.”12

In one experiment Haidt and a colleague used hypnosis to make people feel disgusted when an arbitrarily chosen word was used. That word was “often.” They found that if a scenario was presented into which this word was slotted, hypnotized subjects judged any moral wrongdoing more harshly. More strikingly yet, a substantial minority still identified wrongdoing in situations where clearly there was none—such as the following. “Dan is a student council representative at his school. This semester he is in charge of scheduling discussions about academic issues. He often picks topics that appeal to both professors and students in order to stimulate discussion.” When asked why they thought Dan had done something wrong, subjects would flounder around in search of a response. “It just seems like he’s up to something.”13
