So, girls, all of this seems to suggest that there is about a six in ten chance of picking a reliable partner if you choose at random from the population. Which rather makes it look like that old cigarette-butt trick might be the clever way to select a mate. Offer him a cigarette and, after he has finished, whisk the butt down to the genetics lab, where they can now squeeze out a sample of his DNA from the saliva stains and scan it for allele 334 at the RS3 gene locus. Not so good if it comes up positive. Very bad if it comes up double positive.

Chapter 21
Morality on the Brain

In 1906, New York’s Bronx Zoo exhibited an African Pygmy in a cage next door to its gorillas, a spectacle that attracted huge crowds. Sadly, Ota Benga, the Pygmy in question, committed suicide in Virginia a decade later, after his release, unable to cope either with the life he now faced in America or with the fact that he was completely cut off from his home in the Congo by what was, in his impecunious circumstances, an all but impassable sea journey. Today, we would regard this whole episode as an unacceptable breach of civil rights, an example of thoughtless cruelty and racism.

Our modern willingness to extend equal rights regardless of race reflects the belief that we are all of the same ‘kind’. And we believe that to be the case because all of us, regardless of race, seem to share certain traits (notably the capacity to be moral) that make us all human. But how is it that we accord these rights to others? What makes us think we should? And where should we draw the line? These are thorny issues that have troubled philosophers for centuries, but it might now be possible to answer them thanks to insights from neuroscience.

Morality on the brain

That great paragon of the eighteenth-century Edinburgh Enlightenment, David Hume, argued that morality is mainly a matter of emotion: our gut instincts, the great man opined, drive our decisions about how we and others ought to behave. Sympathy and empathy play a significant role. But his equally great German contemporary, Immanuel Kant, took exception to what he saw as an entirely unsatisfactory way to organise one’s life: our moral sentiments, he argued with equal insistence, are the product of rational thought as we evaluate the pros and cons of alternative actions.

Kant’s rationalist view gained ascendancy in the nineteenth century, mainly thanks to the utilitarian theories of Jeremy Bentham and John Stuart Mill, who argued that the right thing to do was whatever yielded the greatest good for the most people – the view that underpins much modern law-making. Successive generations of philosophers have continued to argue the merits of both views.

However, recent advances in neuropsychology look like they are about to come down firmly in favour of good Scottish common sense. One such insight into how we make moral judgements has come from an elegantly simple series of experiments by Jonathan Haidt and his colleagues at the University of Virginia. They asked subjects to make judgements about morally dubious behaviour, but some did so while rather closer than they might have wished to a smelly toilet or a messy desk, and others did so in a more salubrious environment. The first group gave much harsher judgements than the second, suggesting that their judgements were affected by their emotional state.

One of the classic dilemmas used in studies of morality is known as the ‘trolley problem’. It goes like this. Imagine you are the driver of a railway trolley approaching a set of points. You realise that your route takes you down a line where five men are working on the railway unaware of your approach. But there is a switch you can pull that would throw the points and send you off down the other line where just one man is working. Would you pull the switch? Most people would say yes, on the grounds that one certain death is better than five, and this is the Kantian rational answer predicated on the utilitarian view that our actions would maximise the greatest good.

But now suppose you are not driving the trolley, but standing on a bridge above the railway. Beside you is a giant of a man, of a size capable of stopping the trolley dead if you threw him off the bridge onto the railway line, so saving the five workers at the expense of this one luckless victim. Most people now hesitate to act so as to save the five workers, even though the utilitarian value is exactly the same – one man dies to save five. In most such cases, subjects cannot say why they have changed their minds, but one difference seems to lie in a subtle distinction between accidents and intentions.
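To make the puzzle concrete, the utilitarian arithmetic of the two scenarios can be spelled out in a few lines of code. This is purely an illustrative sketch: the numbers are just the body counts from the thought experiments above, and the function is my own toy rendering of the greatest-good calculus.

```python
# A toy rendering of the utilitarian calculus in the two trolley scenarios.
# The body counts come from the thought experiments described above.

def utilitarian_value(deaths_if_act: int, deaths_if_not: int) -> int:
    """Lives saved by acting, on a pure greatest-good calculus."""
    return deaths_if_not - deaths_if_act

# Scenario 1: pull the switch (one worker dies instead of five).
switch = utilitarian_value(deaths_if_act=1, deaths_if_not=5)

# Scenario 2: push the man off the bridge (again, one dies to save five).
bridge = utilitarian_value(deaths_if_act=1, deaths_if_not=5)

print(switch, bridge)  # 4 4 -- identical sums, yet most people judge the cases differently
```

On a strict utilitarian ledger the two cases are indistinguishable; it is precisely because the arithmetic cannot separate them that the difference in our judgements must come from somewhere else.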

The important role of intentions was borne out by a study of stroke patients, which showed that people with damage to the brain’s frontal lobe will usually opt for the rational utilitarian option and throw their companion off the bridge. The frontal lobes provide one area in the brain where we evaluate intentional behaviour.

The importance of intentionality has recently been confirmed by Marc Hauser from Harvard and Rebecca Saxe from MIT: they found that, when subjects are processing moral dilemmas like the trolley problem, the areas in the brain that are especially involved in evaluating intentionality (such as the right temporo-parietal junction just behind your right ear) are particularly active. Our appreciation of intentions is crucially wrapped up with our ability to empathise with others.

The final piece in the jigsaw has now been added by Ming Hsu and colleagues at the California Institute of Technology in Pasadena. In a recent neuroimaging study, they forced subjects to consider a trade-off between equity (an emotional response to perceived unfairness) and efficiency in a moral dilemma about delivering food to starving children in Uganda. They found that when decisions were based on efficiency, there was more neural activity in the areas of the brain associated with reward (particularly the region known as the putamen), whereas when decisions were more influenced by perceived inequality, it was areas associated with emotional responses to norm violations (such as the insula) that were more active. More importantly, the stronger the neural response in each of these areas, the more likely the appropriate behavioural response by the subject. In other words, judgements about morality and those about utilitarian efficiency are made in separate places in the brain, and may not necessarily be called on at the same time.
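The trade-off the subjects faced can be illustrated with a toy calculation. The allocations and scoring rules below are invented for illustration, not the study’s actual stimuli or measures; they simply show how an ‘efficiency’ score and an ‘inequity’ score can pull in opposite directions.

```python
# Toy illustration of the equity/efficiency trade-off: the allocations and
# scoring rules are invented, not the Hsu study's actual materials.

def efficiency(meals: list[int]) -> int:
    """Efficiency: total meals delivered across all groups of children."""
    return sum(meals)

def inequity(meals: list[int]) -> float:
    """Inequity: mean absolute gap between each group and the average share."""
    mean = sum(meals) / len(meals)
    return sum(abs(m - mean) for m in meals) / len(meals)

# Option A delivers more food overall but concentrates it on one group;
# Option B delivers less in total but spreads it evenly.
option_a = [30, 5, 5]
option_b = [12, 12, 12]

for name, meals in [("A", option_a), ("B", option_b)]:
    print(name, efficiency(meals), round(inequity(meals), 1))
# A 40 11.1  -> the choice an efficiency (reward-area) response favours
# B 36 0.0   -> the choice an equity (insula) response favours
```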

It seems that Hume was right all along.

A very peculiar species of morality

However, if morality is simply a reflection of empathy (and/or sympathy), then it seems unlikely that we really need a great deal more than second-order intentionality: it is only necessary that I understand that you feel something (or that you believe something to be the case). But morality based on this as a founding principle will always be unstable: it is susceptible to the risk that you and I differ in what we consider to be acceptable behaviour. I may think there is nothing wrong with stealing and be unable to empathise with your distraught feelings on finding that I have robbed you of your most treasured possessions. It’s not that I don’t recognise that you are distraught (or understand what it means for me to feel the same way), it’s just that I happen to believe that theft is perfectly OK and that you’re making a big fuss about nothing. If you want to steal from me, that’s just fine... help yourself. I will surely try to defend my possessions, but my view of the world is that possession is nine-tenths of the law, and may the best man win.

If we want morality to stick, we have to have some higher force to justify it. The arm of the civil law will do just fine as a mechanism for enforcing the collective will. But equally, so will a higher moral principle – in other words, belief in a sacrosanct philosophical principle or a belief in a higher religious authority (such as God). The latter is particularly interesting because, if we unpack its cognitive structure, it seems likely to be very demanding of our intentionality abilities. For a religious system to have any kind of force, I have to believe that you suppose that there is a higher being who understands that you and I wish something will happen (such as the divinity’s intervention on our behalf). It seems that we need at least the fourth order to make the system fly. And that probably means that someone with fifth-order abilities is needed to think through all the ramifications to set the thing up in the first place. In other words, religion (and hence moral systems as we understand them) is dependent on social cognitive abilities that lie at the very limits of what humans can naturally manage.

The significance of this becomes apparent if we go back to the differences in social cognition between monkeys, apes and humans and relate these to the neuroanatomical differences between us. While humans can achieve fifth-order intentionality, and apes can just about manage level two, everyone is agreed that monkeys are stuck very firmly at level one (they cannot imagine that the world could ever be different from how they actually experience it). They could never imagine, for example, that there might be a parallel world peopled by gods and spirits whom we don’t actually see but who know how we feel and can interfere in our world.

At this point, an important bit of the neuroanatomical jigsaw comes into play. If you plot the volume of the striate cortex (the primary visual area in the brain) against the rest of the neocortex for all primates (including humans), you find that the relationship between these two components is not linear: it begins to tail off at about the brain size of great apes. Great apes and humans have less striate cortex than you might expect for their brain size. This may be because, after a certain point, adding more visual cortex doesn’t necessarily add significantly to the first layer of visual processing (which mostly deals with pattern recognition). Instead, as brain volume (or at least neocortex volume) continues to expand, more neurons become available for those areas anterior to the striate cortex (i.e. those areas that are involved in attaching meaning to the patterns picked out in the earlier stages of visual processing). An important part of that is, of course, the high-level executive functions associated with the frontal lobes. Since the brain has, in effect, evolved from back to front (i.e. the increase in brain size during primate evolution is disproportionately associated with expansion of the frontal and temporal lobes), it is precisely those areas associated with advanced social cognitive functions that become disproportionately available once primate brain size passes beyond the size of great apes. Indeed, great ape brain size seems to lie on a critical neuroanatomical threshold in this respect: it marks the point where non-striate cortex (and especially frontal cortex) starts to become disproportionately available.
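The pattern being described is a shrinking striate share as overall neocortex volume grows. The numbers below are invented placeholders, not real volumetric data; they merely illustrate the shape of the relationship.

```python
# Invented numbers (not real volumetrics) illustrating the claimed pattern:
# the striate cortex stops keeping pace with the rest of the neocortex once
# brains reach great-ape size, so the striate fraction falls.

# (species, striate cortex volume, rest-of-neocortex volume) -- made-up units
brains = [
    ("small monkey", 2.0,  10.0),
    ("large monkey", 5.0,  30.0),
    ("great ape",    8.0,  80.0),
    ("human",       11.0, 400.0),
]

for species, striate, rest in brains:
    print(f"{species:12s} striate share = {striate / (striate + rest):.2f}")
# The share shrinks as the neocortex grows: the 'extra' tissue accrues to
# areas anterior to the striate cortex, where meaning is attached.
```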

It seems to me no accident that this is precisely the point at which advanced social cognition (i.e. theory of mind) is first seen in nonhuman animals. Moreover, if we plot the achieved levels of intentionality for monkeys, apes and humans against frontal-lobe volume, we get a completely straight line. That, too, seems to me no accident.
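In the same spirit, here is how that linearity claim would be checked. The intentionality levels come from the text (level one for monkeys, level two for apes, level five for humans), but the frontal-lobe volumes are invented placeholders chosen to lie on a line, purely to show the fitting procedure.

```python
# Checking the 'completely straight line' claim with ordinary least squares.
# The volumes are invented placeholders; the intentionality levels are the
# monkey/ape/human values given in the text.

xs = [10.0, 70.0, 250.0]   # hypothetical frontal-lobe volumes: monkey, ape, human
ys = [1.0, 2.0, 5.0]       # achieved orders of intentionality (from the text)

# Ordinary least-squares slope and intercept, computed by hand.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
print(slope, intercept, residuals)  # near-zero residuals would mean a straight line
```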

So, we seem to have arrived at a point where we can begin to understand why humans – and only humans – are capable of making moral judgements. The essence of the argument is that the dramatic increase in neocortex size that we see in modern humans reflects the need to evolve much larger groups than are characteristic of other primates (either to cope with higher levels of predation or to facilitate a more nomadic lifestyle). After a certain point, however, the computing power that a large neocortex brought to bear on processing and manipulating information about the (mainly social) world passed through a critical threshold that allowed the individual to reflect back on its own mind. As we saw in an earlier chapter, great apes probably lie just at that critical threshold. With more computing power still, this process could become truly reflexive, allowing an individual to work recursively through layers of relationships at either the dyadic level (I believe that you intend that I should suppose that you want to do something...) or between individuals (I believe that you intend that James thinks that Andrew wants...). At that point, and only at that point, can religion and its associated moral systems come into being. In terms of frontal-lobe volume expansion, the evidence from the human fossil record suggests that this point is likely to have been quite late in human history. It is almost certainly associated with the appearance of archaic humans around half a million years ago. I’ll come back to this in the next chapter. Before I do, however, let’s explore the possibility of morality in other species a little bit more.

Can apes be moral?

Our nearest living relatives are, beyond any question, the great apes. Until only twenty years ago, it was widely accepted that the ape lineage consisted of two groups: modern humans and their ancestors on the one hand and, on the other, the four species of great apes (two chimps, the gorilla and the orang utan) and their ancestors. However, modern genetic evidence has shown that this classification, based largely on body form, is in fact incorrect. There are indeed two groups, but the two groups are made up of the African apes (humans, two chimpanzees and the gorilla) on the one hand and the Asian great apes (the orangs) on the other. Physical appearance, it seems, is not always a sound guide to the evolutionary relationships that lie beneath the skin. So should the apes – or perhaps even just the African apes – be included in the club of ‘moral beings’ (those capable of holding moral views or being moral)?

One of the main reasons we are convinced that we should accord equality of rights to all humans is that we all share the same cognitive abilities, from empathy to language. So the test might rest on the question of whether or not any of the other great apes share these traits with us.
