The clearest statement of the new program came from the distinguished biologist Edward O. Wilson. “The time has come,” he wrote in his book Sociobiology, “for ethics to be removed temporarily from the hands of the philosophers and biologicized.”
11
A few years later he confidently predicted that “Science for its part will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.”
12
Both philosophers and psychologists took some time to respond to Wilson’s challenge, but a highly interesting investigation is now being undertaken by both groups, working partly in collaboration.
The new view of morality, that it is at least partly shaped by evolution, has not been arrived at easily. Philosophers long focused on reason as the basis of morality. David Hume, the eighteenth-century Scottish philosopher, defied this tradition, arguing strongly that morals spring not from conscious reasoning but from the emotions. “Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason,” Hume wrote in his Treatise of Human Nature.
But Hume’s suggestion only made philosophers keener to found morality in reason. The German philosopher Immanuel Kant sought to base morality outside of nature, in a world of pure reason and of moral imperatives that met the test of being fit to be universal laws. This proposal, Wilson wrote acidly, made no sense at all: “Sometimes a concept is baffling not because it is profound but because it is wrong. This idea does not accord, we know now, with the evidence of how the brain works.”
13
Psychologists too, however, were long committed to the philosophers’ program of deriving morality exclusively from reason. The Swiss psychologist Jean Piaget, following Kant’s ideas, argued that children learned ideas about morality as they passed through various stages of mental development. Lawrence Kohlberg, an American psychologist, built on Piaget’s ideas, arguing that children went through six stages of moral reasoning. But his analysis was based on interviewing children and having them describe their moral reasoning, so reason was all he could perceive.
Even primatologists, who would eventually contribute to the new view of morality, were muzzled because animal behaviorists, under the baleful influence of the psychologist B. F. Skinner, accused of anthropomorphism anyone who attributed emotions like empathy to animals.
With everyone on the wrong track, and Hume’s insight neglected, the study of morality was at something of a stalemate. “It is an astonishing circumstance that the study of ethics has advanced so little since the nineteenth century,” Wilson wrote in 1998, dismissing a century’s work.
14
A development that helped break the logjam was an article in 2001 by Jonathan Haidt, a psychologist at the University of Virginia. Haidt had taken an interest in the emotion of disgust and was intrigued by a phenomenon he called moral dumbfounding. He would read people stories about a family that cooked and ate its pet dog after it had been run over, or a woman who cleaned a toilet with the national flag. His subjects were duly disgusted and firmly insisted these actions were wrong. But several were unable to explain why they held this opinion, given that no one in the stories was harmed.
It seemed to Haidt that if people could not explain their moral judgments, then evidently they were not reasoning their way toward them.
The observation prompted him to develop a new perspective on how people make moral decisions. Drawing on his own research and that of others, he argued that people make two kinds of moral decision. One, which he called moral intuition, comes from the unconscious mind and is made instantly. The other, moral reasoning, is a slower, after-the-fact process made by the conscious mind. “Moral judgments appear in consciousness automatically and effortlessly as the result of moral intuitions.... Moral reasoning is an effortful process, engaged in after a moral judgment is made, in which a person searches for arguments that will support an already made judgment,” he wrote.
15
The moral reasoning decision, which had received the almost exclusive attention of philosophers and psychologists for centuries, is just a façade, in Haidt’s view, and it is mostly intended to impress others that a person has made the right decision. People don’t in fact know how they make their morally intuitive decisions, because these are formed in the unconscious mind and are inaccessible to them. So when asked why they made a certain decision, they will review a menu of logically possible explanations, choose the one that seems closest to the facts, and argue like a lawyer that that was their reason. This, he points out, is why moral arguments are often so bitter and indecisive. Each party makes lawyerlike rebuttals of the opponent’s arguments in the hope of changing his mind. But since the opponent arrived at his position intuitively, not for his stated reasons, he is of course not persuaded. The hope of changing his mind by reasoning is as futile as trying to make a dog happy by wagging its tail for it.
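Haidt’s two-process account can be caricatured in code. The sketch below is purely illustrative and is not drawn from Haidt’s paper: the intuit and reason functions, the association table, and the menu of rationales are all invented for the example. What it preserves is the ordering he describes: a fast, opaque verdict is delivered first, and a slower, lawyer-like search for a supporting argument comes afterward.

    # Toy sketch of Haidt's two-process account. All names and data
    # here are hypothetical illustrations, not Haidt's model.

    INTUITIONS = {          # learned or innate associations: feature -> verdict
        "harm": "wrong",
        "taboo": "wrong",
        "fairness": "ok",
    }

    RATIONALES = [          # menu of logically possible explanations
        ("harm", "someone could get hurt"),
        ("taboo", "it violates a deeply held norm"),
        ("fairness", "everyone is treated equally"),
    ]

    def intuit(features):
        """Fast, automatic verdict; the caller never sees why."""
        for f in features:
            if f in INTUITIONS:
                return INTUITIONS[f]
        return "ok"

    def reason(verdict, features):
        """Slow, after-the-fact search for an argument fitting the verdict."""
        for feature, argument in RATIONALES:
            if feature in features:
                return f"It is {verdict} because {argument}."
        return f"It just seems {verdict}."  # moral dumbfounding: no argument found

    features = {"taboo"}               # e.g. the flag-cleaning story
    verdict = intuit(features)         # snap judgment reaches consciousness first
    print(reason(verdict, features))   # the justification is constructed second

Note that the verdict is fixed before reason is ever called; the reasoning step can only search for an argument that fits, which is why, on this account, rebutting the stated argument leaves the verdict untouched.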
Haidt then turned to exploring how the moral intuition process works. He argued, based on a range of psychological experiments, that the intuitive process is partly genetic, built in by evolution, and partly shaped by culture.
The genetic component of the process probably shapes specialized neural circuits or modules in the brain. Some of these may prompt universal moral behaviors such as empathy and reciprocity. Others probably predispose people to learn the particular moral values of their society at an appropriate age.
This learning process begins early in life. By the age of two, writes the psychologist Jerome Kagan, children have developed a mental list of prohibited actions. By three, they apply the concepts of good and bad to things and actions, including their own behavior. Between the ages of three and six, they show feelings of guilt at having violated a standard. They also learn to distinguish between absolute standards and mere conventions. “As children grow, they follow a universal sequence of stages in the development of morality,” Kagan writes.
16
That children everywhere follow the same sequence of stages suggests that a genetic program is unfolding to guide the learning of morality, including the development of what Haidt calls moral intuition.
Such a program would resemble those known to shape other important brain functions. The brain does much of its maturing after birth, forming connections and refining its neural circuitry when the infant encounters relevant experience from the outside world. Vision is one faculty that matures at a critical age; language is another, and moral intuition is a third.
Damage to a special region of the prefrontal cortex, the ventromedial area located just behind the bridge of the nose, is associated with poor judgment and antisocial behavior. Neural circuitry in this part of the brain is evidently involved in the cultural shaping of moral intuitions.
The existence of special neural circuitry in the brain dedicated to moral decisions is further evidence that morality is an evolved faculty with a genetic basis. In the well-known case of Phineas Gage, a thin iron rod was shot through his frontal lobe in a railroad construction accident in 1848. Gage, astonishingly, survived the accident, but his personality was changed. Previously hardworking and responsible, he was now “fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires,” according to a physician who examined him 20 years later.
17
More specific damage to moral sensibilities is seen in patients with Huntington’s disease. Strangely, they become very utilitarian, making moral judgments by weighing only the consequences and ignoring strong social taboos. Consider a situation where a man’s wife has just died. Her body is there on the bed, and he decides to have intercourse with her one last time. Is that OK? Most people will say absolutely not. Huntington’s patients see no problem. Their sense of disgust, an emotion that intensifies certain moral judgments, seems strangely relaxed: if shown a piece of chocolate molded in the form of a dog turd, most people will lose any appetite for it, but many Huntington’s patients will happily wolf it down.
18
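The contrast the Huntington’s cases expose, between judging by consequences alone and judging by consequences plus taboo, can be made concrete with a toy calculation. Everything below is invented for illustration: the judge function, the weights, and the threshold model nothing from the actual studies, only the logical difference between the two styles of judgment.

    # Toy contrast between purely consequentialist judgment and judgment
    # that also weighs taboo violations. All numbers are invented.

    def judge(harm_done, taboo_violated, weigh_taboo=True):
        """Return 'wrong' if the weighted moral cost crosses a threshold."""
        cost = harm_done
        if weigh_taboo:
            cost += 5 * taboo_violated   # an intact sense of disgust adds a heavy penalty
        return "wrong" if cost > 1 else "not wrong"

    # The dead-wife scenario: no one is harmed, but a strong taboo is broken.
    print(judge(harm_done=0, taboo_violated=1))                    # -> wrong
    print(judge(harm_done=0, taboo_violated=1, weigh_taboo=False)) # -> not wrong

With the taboo term switched off, as it seems to be in these patients, a harmless taboo violation registers no cost at all, which is just the pattern of judgment the text describes.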
There seem to be dedicated neural circuits for morality and for disgust, since specific damage to the brain can cause a loss of either behavior. But these behaviors, though at their core very similar in every society, are heavily shaped by culture. Because of cultural differences, societies may vary widely in the actions they consider morally permissible. In Western societies, for instance, killing an infant is generally regarded as murder. But among the !Kung San, a hunting and gathering people of the Kalahari desert in southern Africa, it is the mother’s moral duty to kill after birth any infant that is deformed, and one of each pair of twins.
19
A !Kung mother must carry her infant wherever she goes, and does for some 5,000 miles before the child learns to walk. Since she must also carry food, water and possessions, she cannot carry twins. So the duty to kill a twin, and to avoid investment in a defective child with limited prospects of survival, can be seen not as any moral deficiency on the !Kungs’ part but rather as a shaping of human moral intuitions to their particular circumstances.
Standards of sexual morality vary widely, particularly in regions like aboriginal Australia and neighboring Melanesia where conception is not regarded as dependent on the father’s sperm and men are therefore less jealous of sexual access to their partners. Thus at kayasa, the festival gatherings held by people of the Trobriand Islands off the eastern end of Papua New Guinea, the sportive element in games was taken somewhat further than is customary in Western countries. “At a tug-of-war kayasa in the south,” reports the anthropologist Bronislaw Malinowski, “men and women would always be on opposite sides. The winning side would ceremonially deride the vanquished with the typical ululating scream (katugogova), and then assail their prostrate opponents, and the sexual act would be carried out in public. On one occasion when I discussed this matter with a mixed crowd from the north and the south, both sides categorically confirmed the correctness of this statement.”
20
As Rudyard Kipling had occasion to note, “The wildest dreams of Kew are the facts of Khatmandu, And the crimes of Clapham chaste in Martaban.”
But the commonalities in morality are generally more striking than the variations. The fundamental moral principle of “do as you would be done by” is found in all societies, as are prohibitions against murder, theft and incest. Many of these universal moral principles are likely to be shaped by innate neural circuits, while the variations spring from moral learning systems that are more guided by cultural traditions and a society’s particular ecological circumstances.
Returning to moral intuition and moral reasoning, the two basic psychological processes that underlie morality, the question arises as to why evolution has so generously equipped us with two processes, when one might seem plenty. The most plausible answer is that the two processes emerged from different stages of human evolution.
Moral intuition is the more ancient system, presumably put in place before humans gained either the power of reasoning or the faculty of language. After the evolution of language, when people needed to explain and justify their actions to others, moral reasoning would have developed. But evolution would have had no compelling rationale for handing over control of individual behavior to this novel faculty, at the expense of the moral intuition that had safeguarded human societies for so long. So the arrangement that evolved was that both systems were retained. The moral intuitive system continues to work beneath the level of consciousness, delivering its snap judgments to the conscious mind. The moral reasoning system then takes over, working like a lawyer or public relations agent to rationalize the moral input it has been given and to justify an individual’s actions to himself and his society.
Moral Intuition and Trolley Problems
Though the moral intuitive system is inaccessible to the conscious mind, some intriguing traces of its presence can be seen in the subtle moral exercises known as trolley problems. First devised by the moral philosopher Philippa Foot, trolley problems have been developed by psychologists interested in probing the invisible moral rules of the intuitive system. The problems are entirely artificial, which avoids real-life complications and purifies the moral decision to be made.
In the typical trolley problem, a trolley or train is barreling down on five people who are rashly walking on the tracks, oblivious to the danger until the last moment and unable to escape because the track runs through an embankment with steep sides. An individual standing between the train and the five people has the power to save them, but only after making a fraught moral decision.
So consider first the case of Denise, who is standing by a switch that can divert the train onto a side-track. However, a hiker is walking on the side-track and he too cannot escape the train. Would Denise be right to pull the switch and divert the train, saving five people but killing one?
Ethicists may debate the correct answer, but psychologists are more interested in the practical matter of what answer most people in fact give. Marc Hauser, a psychologist at Harvard University, posed the question on his Web site. Tallying the results after several thousand people had taken the test, he found that 90 percent said it was OK for Denise to pull the switch.
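The tally itself is trivial, but for concreteness the snippet below recomputes such a proportion from a hypothetical response list; only the 90 percent figure comes from the text, and the list is sized to match it rather than being Hauser’s raw data.

    # Hypothetical response list, sized to reproduce the ~90 percent
    # approval reported in the text; not Hauser's actual data.
    responses = ["ok"] * 900 + ["not ok"] * 100

    approve = responses.count("ok") / len(responses)
    print(f"{approve:.0%} said Denise may pull the switch")  # -> 90%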