Finally, and quite apart from any specific findings, these studies help us to see a major shortcoming of commonsense thinking. It is ironic in a way that the law of the few is portrayed as a counterintuitive idea because in fact we’re so used to thinking in terms of special people that the claim that a few special people do the bulk of the work is actually extremely natural. We think that by acknowledging the importance of interpersonal influence and social networks, we have somehow moved beyond the circular claim from the previous chapter that “X happened because that’s what people wanted.” But when we try to imagine how a complex network of millions of people is connected—or worse still, how influence propagates through it—our intuition is immediately defeated. By effectively concentrating all the agency into the hands of a few individuals, “special people” arguments like the law of the few reduce the problem of understanding how network structure affects outcomes to the much simpler problem of understanding what it is that motivates the special people. As with all commonsense explanations, it sounds reasonable and it might be right. But in claiming that “X happened because a few special people made it happen,” we have effectively replaced one piece of circular reasoning with another.

CHAPTER 5
History, the Fickle Teacher

The message of the previous three chapters is that commonsense explanations are often characterized by circular reasoning. Teachers cheated on their students’ tests because that’s what their incentives led them to do. The Mona Lisa is the most famous painting in the world because it has all the attributes of the Mona Lisa. People have stopped buying gas-guzzling SUVs because social norms now dictate that people shouldn’t buy gas-guzzling SUVs. And a few special people revived the fortunes of the Hush Puppies shoe brand because a few people started buying Hush Puppies before everyone else did. All of these statements may be true, but all they are really telling us is that what we know happened, happened, and not something else. Because they can only be constructed after we know the outcome itself, we can never be sure how much these explanations really explain, versus simply describe.

What’s curious about this problem, however, is that even once you see the inherent circularity of commonsense explanations, it’s still not obvious what’s wrong with them. After all, in science we don’t necessarily know why things happen either, but we can often figure it out by doing experiments in a lab or by observing systematic regularities in the world. Why can’t we learn from history the same way? That is, think of history as a series of experiments in which certain general “laws” of cause and effect determine the outcomes that we observe. By systematically piecing together the regularities in our observations, can we not infer these laws just as we do in science? For example, imagine that the contest for attention between great works of art is an experiment designed to identify the attributes of great art. Even if it’s true that prior to the twentieth century, it might not have been obvious that the Mona Lisa was going to become the most famous painting in the world, we have now run the experiment, and we have the answer. We may still not be able to say what it is about the Mona Lisa that makes it uniquely great, but we do at least have some data. Even if our commonsense explanations have a tendency to conflate what happened with why it happened, are we not simply doing our best to act like good experimentalists?

In a sense, the answer is yes. We probably are doing our best, and under the right circumstances learning from observation and experience can work pretty well. But there’s a catch: In order to be able to infer that “A causes B,” we need to be able to run the experiment many times. Let’s say, for example, that A is a new drug to reduce “bad” cholesterol and B is a patient’s chance of developing heart disease in the next ten years. If the manufacturer can show that a patient who receives drug A is significantly less likely to develop heart disease than one who doesn’t, they’re allowed to claim that the drug can help prevent heart disease; otherwise they can’t. But because any one person can only either receive the drug or not receive it, the only way to show that the drug is causing anything is to run the “experiment” many times, where each person’s experience counts as a single run. A drug trial therefore requires many participants, each of whom is randomly assigned either to receive the treatment or not. The effect of the drug is then measured as the difference in outcomes between the “treatment” and the “control” groups, where the smaller the effect, the larger the trial needs to be in order to rule out random chance as the explanation.
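To make the logic of the trial concrete, here is a minimal simulation sketch in Python. The baseline and treated risk levels, the group sizes, and the number of repetitions are illustrative assumptions, not figures from any real trial; the point is only to show that as the true effect shrinks, a larger trial is needed before the treatment-versus-control difference reliably stands out from chance.

```python
import random

def run_trial(n_per_group, control_risk, treated_risk, rng):
    """Simulate one randomized trial: each participant is assigned to one
    group and either develops heart disease (True) or does not (False)."""
    control_cases = sum(rng.random() < control_risk for _ in range(n_per_group))
    treated_cases = sum(rng.random() < treated_risk for _ in range(n_per_group))
    # Observed difference in disease rates between control and treatment groups.
    return (control_cases - treated_cases) / n_per_group

def fraction_detecting_benefit(n_per_group, control_risk=0.10, treated_risk=0.08,
                               reps=2000, seed=1):
    """Across many simulated trials, how often does the treated group actually
    look better? With a small effect and a small trial, chance alone often
    reverses the apparent direction of the result."""
    rng = random.Random(seed)
    wins = sum(run_trial(n_per_group, control_risk, treated_risk, rng) > 0
               for _ in range(reps))
    return wins / reps

for n in (50, 500, 5000):
    print(f"{n:>5} per group -> treated group looks better in "
          f"{fraction_detecting_benefit(n):.0%} of simulated trials")
```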

In certain everyday problem-solving situations, where we encounter more or less similar circumstances over and over again, we can get pretty close to imitating the conditions of the drug trial. Driving home from work every day, for example, we can experiment with different routes or with different departure times. By repeating these variations many times, and assuming that traffic on any given day is more or less like traffic on any other day, we can effectively bypass all the complex cause-and-effect relationships simply by observing which route results in the shortest commute time, on average. Likewise, the kind of experience-based expertise that derives from professional training, whether in medicine, engineering, or the military, works in the same way—by repeatedly exposing trainees to situations that are as similar as possible to those they will be expected to deal with in their eventual careers.
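The commuting example can be written down the same way. The sketch below assumes, purely for illustration, two hypothetical routes with made-up travel-time distributions; repeating the “experiment” over many days and averaging the results sidesteps the underlying cause-and-effect details of traffic, just as described above.

```python
import random
import statistics

# Hypothetical routes with invented travel-time distributions (minutes).
ROUTES = {
    "highway": lambda rng: max(15.0, rng.gauss(34, 8)),  # fast but variable
    "surface": lambda rng: max(15.0, rng.gauss(38, 3)),  # slower but steady
}

def average_commute(route, days=250, seed=0):
    """Treat each day as one run of the experiment and average the results."""
    rng = random.Random(seed)
    return statistics.mean(ROUTES[route](rng) for _ in range(days))

for route in ROUTES:
    print(f"{route}: {average_commute(route):.1f} min on average")
```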

HISTORY IS ONLY RUN ONCE

Given how well this quasi-experimental approach to learning works in everyday situations and professional training, it’s perhaps not surprising that our commonsense explanations implicitly apply the same reasoning to explain economic, political, and cultural events as well. By now, however, you probably suspect where this is heading. For problems of economics, politics, and culture—problems that involve many people interacting over time—the combination of the frame problem and the micro-macro problem means that every situation is in some important respect different from the situations we have seen before. Thus, we never really get to run the same experiment more than once. At some level, we understand this problem. Nobody really thinks that the war in Iraq is directly comparable to the Vietnam War or even the war in Afghanistan, and one must therefore be cautious in applying the lessons from one to another. Likewise, nobody thinks that by studying the success of the Mona Lisa we can realistically expect to understand much about the success and failure of contemporary artists. Nevertheless, we do still expect to learn some lessons from history, and it is all too easy to persuade ourselves that we have learned more than we really have.

For example, did the so-called surge in Iraq in the fall of 2007 cause the subsequent drop in violence in the summer of 2008? Intuitively the answer seems to be yes—not only did the drop in violence take place reasonably soon after the surge was implemented, but the surge was specifically intended to have that effect. The combination of intentionality and timing strongly suggests causality, as did the often-repeated claims of an administration looking for something good to take credit for. But many other things happened between the fall of 2007 and the summer of 2008 as well. Sunni resistance fighters, seeing an even greater menace from hard-core terrorist organizations like Al Qaeda than from American soldiers, began to cooperate with their erstwhile occupiers. The Shiite militia—most importantly Moktada Sadr’s Mahdi Army—also began to experience a backlash from their grassroots, possibly leading them to moderate their behavior. And the Iraqi Army and police forces, finally displaying sufficient competence to take on the militias, began to assert themselves, as did the Iraqi government. Any one of these other factors might have been at least as responsible for the drop in violence as the surge. Or perhaps it was some combination. Or perhaps it was something else entirely. How are we to know?

One way to be sure would be to “rerun” history many times, much as we did in the Music Lab experiment, and see what would have happened both in the presence and also the absence of the surge. If across all of these alternate versions of history, violence drops whenever there is a surge and doesn’t drop whenever there isn’t, then we can say with some confidence that the surge is causing the drop. And if instead we find that most of the time we have a surge, nothing happens to the level of violence, or alternatively we find that violence drops whether we have a surge or not, then whatever it is that is causing the drop, clearly it isn’t the surge. In reality, of course, this experiment got run only once, and so we never got to see all the other versions of it that may or may not have turned out differently. As a result, we can’t ever really be sure what caused the drop in violence. But rather than producing doubt, the absence of “counterfactual” versions of history tends to have the opposite effect—namely that we tend to perceive what actually happened as having been inevitable.

This tendency, which psychologists call creeping determinism, is related to the better-known phenomenon of hindsight bias, the after-the-fact tendency to think that we “knew it all along.” In a variety of lab experiments, psychologists have asked participants to make predictions about future events and then reinterviewed them after the events in question had taken place. When recalling their previous predictions, subjects consistently report being more certain of their correct predictions, and less certain of their incorrect predictions, than they had reported at the time they made them. Creeping determinism, however, is subtly different from hindsight bias and even more deceptive. Hindsight bias, it turns out, can be counteracted by reminding people of what they said before they knew the answer or by forcing them to keep records of their predictions. But even when we recall perfectly accurately how uncertain we were about the way events would transpire—even when we concede to have been caught completely by surprise—we still have a tendency to treat the realized outcome as inevitable. Ahead of time, for example, it might have seemed that the surge was just as likely to have had no effect as to lead to a drop in violence. But once we know that the drop in violence is what actually happened, it doesn’t matter whether or not we knew all along that it was going to happen (hindsight bias). We still believe that it was going to happen, because it did.

SAMPLING BIAS

Creeping determinism means that we pay less attention than we should to the things that don’t happen. But we also pay too little attention to most of what does happen. We notice when we just miss the train, but not all the times when it arrives shortly after we do. We notice when we unexpectedly run into an acquaintance at the airport, but not all the times when we do not. We notice when a mutual fund manager beats the S&P 500 ten years in a row or when a basketball player has a “hot hand” or when a baseball player has a long hitting streak, but not all the times when fund managers and sportsmen alike do not display streaks of any kind. And we notice when a new trend appears or a small company becomes phenomenally successful, but not all the times when potential trends or new companies disappear before even registering on the public consciousness.

Just as with our tendency to emphasize the things that happened over those that didn’t, our bias toward “interesting” things is completely understandable. Why would we be interested in uninteresting things? Nevertheless, it exacerbates our tendency to construct explanations that account for only some of the data. If we want to know why some people are rich, for example, or why some companies are successful, it may seem sensible to look for rich people or successful companies and identify which attributes they share. But what this exercise can’t reveal is that if we instead looked at people who aren’t rich or companies that aren’t successful, we might have found that they exhibit many of the same attributes. The only way to identify attributes that differentiate successful from unsuccessful entities is to consider both kinds, and to look for systematic differences. Yet because what we care about is success, it seems pointless—or simply uninteresting—to worry about the absence of success. Thus we infer that certain attributes are related to success when in fact they may be equally related to failure.
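A small simulation makes the trap visible. In the sketch below, an attribute is deliberately wired to be unrelated to success: it is equally common among successful and unsuccessful firms. Studying only the winners makes the attribute look like a hallmark of success; only the comparison with the failures reveals that it differentiates nothing. All of the probabilities are invented for illustration.

```python
import random

rng = random.Random(42)

# Simulated firms: the attribute and the outcome are drawn independently,
# so the attribute has no relationship to success at all.
firms = [{"has_attribute": rng.random() < 0.8,
          "successful": rng.random() < 0.05}
         for _ in range(20_000)]

def attribute_rate(group):
    return sum(f["has_attribute"] for f in group) / len(group)

successes = [f for f in firms if f["successful"]]
failures = [f for f in firms if not f["successful"]]

# Looking only at successes, the attribute appears near-universal (~80%)...
print(f"share with attribute among successes: {attribute_rate(successes):.0%}")
# ...but it is just as common among the failures we never bothered to study.
print(f"share with attribute among failures:  {attribute_rate(failures):.0%}")
```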

This problem of “sampling bias” is especially acute when the things we pay attention to—the interesting events—happen only rarely. For example, when Western Airlines Flight 2605 crashed into a truck that had been left on an unused runway at Mexico City on October 31, 1979, investigators quickly identified five contributing factors. First, both the pilot and the navigator were fatigued, each having had only a few hours’ sleep in the past twenty-four hours. Second, there was a communication mix-up between the crew and the air traffic controller, who had instructed the plane to come in on the radar beam that was oriented on the unused runway, and then shift to the active runway for the landing. Third, this mix-up was compounded by a malfunctioning radio, which failed for a critical part of the approach, during which time the confusion might have been clarified. Fourth, the airport was shrouded in heavy fog, obscuring both the truck and the active runway from the pilot’s view. And fifth, the ground controller got confused during the final approach, probably due to the stressful situation, and thought that it was the inactive runway that had been lit.

As the psychologist Robyn Dawes explains in his account of the accident, the investigation concluded that although no one of these factors—fatigue, communication mix-up, radio failure, weather, and stress—had caused the accident on its own, the combination of all five together had proven fatal. It seems like a pretty reasonable conclusion, and it’s consistent with the explanations we’re familiar with for plane crashes in general. But as Dawes also points out, these same five factors arise all the time, including many, many instances where the planes did not crash. So if instead of starting with the crash and working backward to identify its causes, we worked forward, counting all the times when we observed some combination of fatigue, communication mix-up, radio failure, weather, and stress, chances are that most of those events would not result in crashes either.
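Dawes’s forward-counting argument is, at bottom, a base-rate calculation, and a few lines of arithmetic show its force. The figures below are invented solely to illustrate the shape of the reasoning: even if every crash involves the five-factor combination, the combination can still occur overwhelmingly often on flights that land safely.

```python
# Invented, illustrative figures only.
flights_per_year = 10_000_000      # assumed total flights
p_combination = 0.001              # assumed rate of the five-factor combination
p_crash_given_combination = 0.01   # assumed chance the combination proves fatal

flights_with_combination = flights_per_year * p_combination
crashes = flights_with_combination * p_crash_given_combination

print(f"flights with the risky combination: {flights_with_combination:,.0f}")
print(f"of those, flights that crash:       {crashes:,.0f}")
# Working backward from crashes, the five factors look decisive; counting
# forward, the vast majority of flights with the same factors do not crash.
```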
