We not only innately trust, but we want to be trusted. A lot of this is intellectually calculated, but it goes deeper than that. Our need to be trusted is innate. There's even a biological feedback loop. Researchers have found that oxytocin—a hormone released during social bonding—naturally increases in a person who perceives that he is trusted by others. Similarly, artificially increasing someone's oxytocin level makes her more trusting.
The philosopher and economist Adam Smith expressed a similar sentiment more than 250 years ago:
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it.
Of course, human trust isn't all-or-nothing. It's contextual, calibrated by our ability to calculate costs and benefits. A lot of our willingness to trust non-kin is calibrated by the society we live in. If we live in a polite society where trust is generally returned, we're at ease trusting first. If we live in a violent society where strangers are hostile and untrustworthy, we don't trust so easily and require further evidence that our trust will be reciprocated.
Our trust rules can be sloppy. We're more likely to trust people who are similar to us: look like us, dress like us, and speak the same language. In general, we're more likely to trust in familiar situations. We also generalize: if we have a good experience with people of a particular nationality or a particular profession, we are likely to trust others of the same type. And if we have a bad experience, we're likely to carry that mistrust to others of the same type.
These rules of thumb might not make logical sense in today's diverse world, but they seem to have been good ideas in our evolutionary past.
This is all good, but we have a chicken-and-egg problem. Until people start trusting non-kin, there is no evolutionary advantage to trusting non-kin. And until there's an evolutionary advantage to trusting non-kin, people won't be predisposed to trust non-kin. Just as a single hawk in a Hawk-Dove game can take advantage of everybody, a single dove in a Hawk-Dove game gets taken advantage of by everybody. That is, the first trusting person who engages with a group of untrustworthy people isn't going to do very well.
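To make that concrete, here is a minimal sketch of the standard Hawk-Dove payoffs. The resource value V and fighting cost C are illustrative assumptions chosen for this example, not figures from the book; the point is simply that a lone dove in a population of hawks gets nothing from every encounter, while the hawks profit off it.

```python
# A minimal sketch of the standard Hawk-Dove payoffs. V (resource value) and
# C (fighting cost) are illustrative assumptions, not figures from the book.
V = 4.0  # value of the contested resource
C = 2.0  # cost of an escalated fight

def payoff(me, other):
    """Payoff to `me` in a single encounter; strategies are 'hawk' or 'dove'."""
    if me == "hawk" and other == "hawk":
        return (V - C) / 2      # both escalate and split the difference
    if me == "hawk" and other == "dove":
        return V                # the hawk takes everything
    if me == "dove" and other == "hawk":
        return 0.0              # the dove backs down and gets nothing
    return V / 2                # two doves share

def expected_payoff(strategy, population):
    """Average payoff for one individual of `strategy` against everyone else."""
    others = list(population)
    others.remove(strategy)     # drop the focal individual itself
    return sum(payoff(strategy, o) for o in others) / len(others)

# One dove among nine hawks: every one of its encounters is with a hawk.
population = ["hawk"] * 9 + ["dove"]
print("lone dove:", expected_payoff("dove", population))   # 0.0
print("each hawk:", expected_payoff("hawk", population))   # ~1.33 with these values
```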
It turns out that cooperative behavior can overcome these problems. Mathematical biologist Martin A. Nowak has explored the evolution of cooperation using mathematics, computer models, and experiments, and has found four different mechanisms by which altruistic behavior can spontaneously evolve in non-kin groups: direct reciprocity, indirect reciprocity, network reciprocity, and group selection.
Which mechanisms work depends on how much it costs for one individual to help another, how beneficial the help is, and how likely it is that helpful individuals meet and recognize each other in the future. And, depending on the details, there are several plausible biological models of how this sort of thing might have jump-started itself. Exactly how this evolved in humans is debated.
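As a rough illustration of the kind of threshold these models turn on, here is a small sketch using the direct-reciprocity rule from Nowak's work: repeated helping can beat defection when the chance of meeting again exceeds the cost-to-benefit ratio of the help. The specific numbers are made up for the example.

```python
# A rough sketch of Nowak's direct-reciprocity threshold: cooperation can
# outcompete defection when the probability of meeting again (w) exceeds
# the cost-to-benefit ratio (c / b) of the helpful act. Numbers are illustrative.

def direct_reciprocity_favored(cost, benefit, p_meet_again):
    """True if repeated, reciprocal helping can pay off on average."""
    return p_meet_again > cost / benefit

# Cheap help, a big benefit to the recipient, and good odds of crossing paths again:
print(direct_reciprocity_favored(cost=1.0, benefit=5.0, p_meet_again=0.5))  # True
# The same act among strangers who will probably never meet again:
print(direct_reciprocity_favored(cost=1.0, benefit=5.0, p_meet_again=0.1))  # False
```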
Philosopher Patricia Churchland suggests four coexistent characteristics of our pre-human ancestors that make all of Nowak's mechanisms likely: “loose hierarchy and related easygoing temperament, cooperative parenting extending to cooperating with the group, sexual selection, and lethal intergroup competition.” The last one is especially interesting: our murderousness helped make us cooperative.
What's likely is that all six mechanisms—Nowak's four, kin selection, and Zahavi's handicap principle—were working at the same time, and that there was a strong positive-feedback loop as we became smarter and more social. Each individual mechanism contributes a bit towards the evolution of cooperation, which makes the resulting individuals better able to pass their genes on to the next generation, which selects for a little more contribution from each mechanism, which makes those individuals even better able to pass their genes on, and so on. And these processes, especially group selection, work on both the genetic and cultural levels.
We became trustworthy, well…most of the time. We trusted others, well…most of the time. And, as we'll see, we used security to fill in the gaps where otherwise trust would fail. In a way, humans domesticated themselves.
Chapter 4
A Social History of Trust
Trust is rare on this planet. Here's primatologist Robert Sapolsky:
When baboons hunt together they'd love to get as much meat as possible, but they're not very good at it. The baboon is a much more successful hunter when he hunts by himself than when he hunts in a group because they screw up every time they're in a group. Say three of them are running as fast as possible after a gazelle, and they're gaining on it, and they're deadly. But something goes on in one of their minds—I'm anthropomorphizing here—and he says to himself, “What am I doing here? I have no idea whatsoever, but I'm running as fast as possible, and this guy is running as fast as possible right behind me, and we had one hell of a fight about three months ago. I don't quite know why we're running so fast right now, but I'd better just stop and slash him in the face before he gets me.” The baboon suddenly stops and turns around, and they go rolling over each other like Keystone cops and the gazelle is long gone because the baboons just became disinhibited. They get crazed around each other at every juncture.
We're not like that. Not only do we cooperate with people we know, we cooperate with people we've never even met. We treat strangers fairly, altruistically sometimes. We put group interest ahead of our own selfishness. More importantly, we control other people's selfish behaviors.
We do this through a combination of our own prosocial impulses and the societal pressures that keep us all in line. This is what allowed for the hunter-gatherer societies of prehistory, the civilization of history, and today's globalization.
But while our cultures evolved, our brains did not. As different as our lives are from those of the primitive hunter-gatherers who lived in Africa 100,000 years ago, genetically we have barely changed at all.
There simply hasn't been enough time. As Matt Ridley writes in The Red Queen:
Inside my skull is a brain that was designed to exploit the conditions of an African savanna between 3 million and 100,000 years ago. When my ancestors moved into Europe (I am a white European by descent) about 100,000 years ago, they quickly evolved a set of physiological features to suit the sunless climate of northern latitudes: pale skin to prevent rickets, male beards, and a circulation relatively resistant to frostbite. But little else changed. Skull size, body proportions, and teeth are all much the same as they are in a San tribesman from southern Africa. And there is little reason to believe that the grey matter inside the skull changed much, either. For a start, 100,000 years is only three thousand generations, a mere eye blink in evolution, equivalent to a day and a half in the life of bacteria. Moreover, until very recently the life of a European was essentially the same as that of an African. Both hunted meat and gathered plants. Both lived in social groups. Both had children dependent on their parents until their late teens. Both passed wisdom down with complex languages. Such evolutionary novelties as agriculture, metal, and writing arrived less than three hundred generations ago, far too recently to have left much imprint on my mind.
It is this disconnect between the speed of cultural evolution and memes—intragenerationally fast—and the speed of genetic evolution—glacially slow, literally—that makes trust and security hard. We've evolved for the trust problem endemic to living in small family groups in the East African highlands in 100,000 BC. It's 21st-century New York City that gives us problems.
Our brains are sufficiently neuroplastic that we can adapt to today's world, but vestiges of our evolutionary past remain. These cognitive biases affect how we respond to fear, how we perceive risks (there's a whole list of them in Chapter 15), and how we weigh short-term versus long-term costs and benefits. That last one is particularly relevant to decisions about cooperation and defection. Psychological studies show that we have what's called a hyperbolic discounting rate: we often prefer lower payoffs sooner to higher payoffs later. As we saw in the previous chapter, decisions to cooperate often involve putting our long-term interests ahead of our short-term interests. In some ways, this is unnatural for us.
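Here is a small sketch of what that looks like, assuming the common one-parameter hyperbolic form V = A / (1 + kD) with an illustrative discount parameter k; neither the parameter value nor the dollar amounts come from the book.

```python
# A minimal sketch of hyperbolic discounting, assuming the common
# one-parameter form V = A / (1 + k * D): A is the payoff, D the delay,
# and k an illustrative discount parameter (all values made up).

def discounted_value(amount, delay_days, k=0.01):
    """Subjective present value of a payoff received after a delay."""
    return amount / (1 + k * delay_days)

# A smaller payoff now feels worth more than a larger payoff later...
print(discounted_value(50, delay_days=0))     # 50.0
print(discounted_value(100, delay_days=365))  # ~21.5

# ...but push both payoffs into the future and the larger one wins again,
# the preference reversal that hyperbolic curves are known for.
print(discounted_value(50, delay_days=365))   # ~10.8
print(discounted_value(100, delay_days=730))  # ~12.0
```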
As we saw in the previous chapter, any system of cooperators also includes some defectors. So as we as a species became more cooperative, we evolved strategies for dealing with defectors.
Making this happen isn't free. We have evolved a variety of different mechanisms to induce cooperation, the societal pressures I'll discuss in Chapters 6 through 10.
Francis Fukuyama wrote: “Widespread distrust in society…imposes a kind of tax on all forms of economic activity, a tax that high-trust societies do not have to pay.” It's a tax on the honest. It's a tax imposed on ourselves by ourselves, because, human nature being what it is, too many of us would otherwise become hawks and take advantage of the rest of us. And it's an expensive tax.
James Madison famously wrote: “If men were angels, no government would be necessary.” If men were angels, no security would be necessary. Door locks, razor wire, tall fences, and burglar alarms wouldn't be necessary. Angels never go where they're not supposed to go. Police forces wouldn't be necessary. Armies? Countries of angels would be able to resolve their differences peacefully, and military expenses would be unnecessary.
Currency, that paper stuff that's deliberately made hard to counterfeit, wouldn't be necessary, as people could just write down how much money they had.
Angels never cheat, so nothing more would be required. Every security measure that isn't designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.
We wouldn't need police, judges, courtrooms, jails, and probation officers. Disputes would still need resolving, but we could get rid of everything associated with investigating, prosecuting, and punishing crime. Fraud detection would be unnecessary: the parts of our welfare and healthcare system that make sure people fairly benefit from those services and don't abuse them; and all of the anti-shoplifting systems in retail stores.
Entire industries would be unnecessary, like private security guards, security cameras, locksmithing, burglar alarms, automobile anti-theft, computer security, corporate security, airport security, and so on. And those are just the obvious ones; financial auditing, document authentication, and many other things would also be unnecessary.
Not being angels is expensive.
We don't pay a lot of these costs directly. The vast majority of them are hidden in the price of the things we buy. Groceries cost more because some people shoplift. Plane tickets cost more because some people try to blow planes up. Banks pay out lower interest rates because of fraud. Everything we do or buy costs more because some sort of security is required to deliver it.
Even greater are the non-monetary costs: less autonomy, reduced freedom, ceding of authority, lost privacy, and so on. These trade-offs are subjective, of course, and some people value them more than others. But it's these costs that lead to social collapse if they get too high.
Security isn't just a tax on the honest, it's a very expensive tax on the honest. If all men were angels, just think of the savings!
It wasn't always like this. Security used to be cheap. Societal pressures used to be an incidental cost of society itself. Many of our societal pressures evolved far back in human prehistory, well before we had any societies larger than extended family groups. We touched on these mechanisms in the previous chapter: both the moral mechanisms in our brains that internally regulate our behavior, and the reputational mechanisms we all use to regulate each other's behavior.
Morals and reputation comprise our prehistoric toolbox of societal pressures. They are informal, and operate at both conscious and subconscious levels in our brains: I refer to the pair of them, unenhanced by technology, as social pressures. They evolved together, and as such are closely related and intertwined in our brains and societies. From a biological or behaviorist perspective, there's a reasonable argument that my distinction between moral and reputational systems is both arbitrary and illusory, and that differentiating the two doesn't make much sense. But from our perspective of inducing trust, they are very different.
Despite the prevalence of war, violence, and general deceptiveness throughout human history—and the enormous amount of damage wrought by defectors—these ancient moral and reputational systems have worked amazingly well. Most of us try not to treat others unfairly, both because it makes us feel bad and because we know they'll treat us badly in return. Most of us don't steal, both because we feel guilty when we do and because there are consequences if we get caught. Most of us are trustworthy towards strangers—within the realistic constraints of the society we live in—because we recognize it's in our long-term interest. And we trust strangers because we recognize it is in their interest to act trustworthily. We don't want a reputation as an untrustworthy, or an untrusting, person.
Here's an example from early human prehistory: two opposing tendencies that would cause society to fall apart if individuals couldn't trust each other. On one hand, we formed pair bonds for the purpose of child-rearing. On the other hand, we had a primarily gender-based division of labor that forced men and women to separate as they went about their different hunting and gathering tasks. This meant that primitive humans needed to trust that everyone honored the integrity of these pair bonds, since individuals often couldn't be around to police them directly. The difficulty in resolving those opposing tendencies is known as Deacon's Paradox.
No, anthropologists don't have unrealistic views on the sanctity of marriage. They know that illicit affairs go on all the time.
But they also realize that such indiscretions occur with much less frequency than they would if mating weren't largely based on pair-bonding.
Most people are honest most of the time, and most pair bonds are respected most of the time. Deacon singled out one particular human capability—the ability to form symbolic contracts—as the particular mechanism that polices sexual fidelity. This isn't just about two people deciding to cohabitate, share food, and produce and raise offspring. It's about two people making a public declaration of commitment in marriage ceremonies, and enlisting other members of the community to simultaneously recognize and promote the stability of their pair bond. Because everyone has a stake in supporting sexual fidelity within the community, everyone keeps an eye on everyone else and punishes illicit matings.
This is an example of a social pressure. It's informal and ad hoc, but it protects society as a whole against the potentially destabilizing individual actions of its members. It protects society from defectors, not by making them disappear, but by keeping their successes down to a manageable rate. Without it, primitive humans wouldn't have trusted each other enough to establish gender-based division of labor and, consequently, could never have coalesced into communities of both kith and kin.
Other examples include being praised for good behavior, being gossiped about and snubbed socially for bad behavior, being shamed, shunned, killed, and—this is much the same as being killed—ostracized and cast out of the group.
I'm omitting a lot of detail, and there are all sorts of open research questions. How did these various social pressures evolve? When did they first appear, and how did their emergence separate us from the other primates—and other protohumans?
How did trust affect intelligence, and how did intelligence affect trust? For our purposes, it's enough to say that they evolved to overcome our increased deceptiveness and murderousness.
In a primitive society, these social pressures are good enough. When you're living in a small community, and objects are few and hard to make, it's pretty easy to deal with the problem of theft. If Alice loses a bowl at the same time Bob shows up with an identical bowl, everyone in the community knows that Bob stole it from Alice and can then punish Bob. The problem is that these mechanisms don't scale. As communities grow larger, as they get more complex, as social ties weaken and anonymity proliferates, this system of theft prevention—morals keeping most people honest, and informal detection, followed by punishment, leading to deterrence to keep the rest honest—starts to fail.