A Field Guide to Lies: Critical Thinking in the Information Age

The kinds of experiences that a seventy-five-year-old socialite has with the New York City police department are likely to be very different from those of a sixteen-year-old boy of color; their experiences are selectively windowed by what they see. The sixteen-year-old may report being stopped repeatedly without cause, being racially profiled and treated like a criminal. The seventy-five-year-old may fail to understand how this could be. “All my experiences with those officers have been so nice.”

Paul McCartney and Dick Clark bought up all the celluloid film of their television appearances in the 1960s, ostensibly so that they could control the way their histories are told. If you’re a scholar doing research, or a documentarian looking for archival footage, you’re limited to what they choose to release to you. When looking at data or evidence to support a claim, ask yourself if what you’re being shown is likely to be representative of the whole picture.

Selective Small Samples

Small samples are usually not representative.

Suppose you’re responsible for marketing a new hybrid car. You want to make claims about its fuel efficiency. You send a driver out in the vehicle and find that the car gets eighty miles to the gallon. That looks great—you’re done! But maybe you just got lucky. Your competitor does a larger test, sending out five drivers in five vehicles and gets a figure closer to sixty miles per gallon. Who’s right? You both are! Suppose that your competitor reported the results like this:

Test 1: 58 mpg
Test 2: 38 mpg
Test 3: 69 mpg
Test 4: 54 mpg
Test 5: 80 mpg
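
The competitor’s “closer to sixty” figure is nothing more than the average of those five tests, which takes two lines to verify (a minimal Python sketch using only the numbers above):

```python
# The competitor's five fuel-efficiency tests, in miles per gallon
tests = [58, 38, 69, 54, 80]

print(sum(tests) / len(tests))  # 59.8 -- "closer to sixty"
print(min(tests), max(tests))   # 38 80 -- a single test can land almost anywhere
```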

Road conditions, ambient temperature, and driving styles create a great deal of variability. If you were lucky (and your competitor unlucky), your one driver might produce an extreme result that you then report with glee. (And of course, if you want to cherry-pick, you just ignore tests one through four.) But if the researcher is pursuing the truth, a larger sample is necessary. An independent lab that tested fifty different excursions might find that the average is something completely different. In general, anomalies are more likely to show up in small samples. Larger samples more accurately reflect the state of the world. Statisticians call this the law of large numbers.

If you look at births in a small rural hospital over a month and see that 70 percent of the babies born are boys, compared to 51 percent in a large urban hospital, you might think there is something funny going on in the rural hospital. There might be, but that isn’t enough evidence to be sure. The small sample is at work again. The large hospital might have reported fifty-one out of a hundred births were boys, and the small might have reported seven out of ten. As with the coin toss mentioned above, the statistical average of fifty-fifty is most recognizable in large samples.
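
You can watch the law of large numbers at work in a short simulation (a minimal Python sketch; the 51 percent boy rate and the two hospital sizes are illustrative assumptions, not data from real hospitals):

```python
import random

# Simulate monthly births: each birth is a boy with probability 0.51.
def fraction_boys(n_births):
    boys = sum(random.random() < 0.51 for _ in range(n_births))
    return boys / n_births

random.seed(1)
months = 10_000
for n in (10, 100):  # small rural hospital vs. large urban hospital
    lopsided = sum(fraction_boys(n) >= 0.70 for _ in range(months))
    print(f"{n:>3} births/month: {100 * lopsided / months:.1f}% of months are >=70% boys")
```

The ten-birth hospital has a lopsided month fairly often; the hundred-birth hospital almost never does.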

How many is enough? This is a job for a professional statistician, but there are rough-and-ready rules you can use when trying to make sense of what you’re reading. For population surveys (e.g., voting preferences, toothpaste preferences, and such), sample-size calculators can readily be found on the Web. For determining the local incidence of something (rates such as how many births are boys, or how many times a day the average person reports being hungry), you need to know something about the base rate (or incidence rate) of the thing you’re looking for. If a researcher wanted to know how many cases of albinism are occurring in a particular community, and then examined the first 1,000 births and found none, it would be foolish to draw any conclusions: Albinism occurs in only 1 in 17,000 births. One thousand births is too small a sample—“small” relative to the scarcity of the thing you’re looking for. On the other hand, if the study were on the incidence of preterm births, 1,000 should be more than enough, because they occur in about one in nine births.
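
The quick sanity check here is the expected count: multiply the sample size by the base rate and see whether you would even expect to observe a case (a minimal Python sketch; treating “a handful of expected cases” as the bar is a rough rule of thumb, not a formal power calculation):

```python
# Expected number of cases in a sample = sample size x base rate.
# (A rough rule of thumb, not a formal power calculation.)
def expected_cases(sample_size, base_rate):
    return sample_size * base_rate

print(expected_cases(1_000, 1 / 17_000))  # albinism: ~0.06 expected cases -- far too few
print(expected_cases(1_000, 1 / 9))       # preterm births: ~111 expected -- plenty
```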

Statistical Literacy

Consider a street game in which a hat or basket contains three cards, each with two sides: One card is red on both sides, one is white on both sides, and one is red on one side and white on the other. The con man draws one card from the hat and shows you one side of it; it is red. He bets you $5 that the other side is also red. He wants you to think that there is a fifty-fifty chance of this, so that you’re willing to bet against him, that is, to bet that the other side is just as likely to be white. You might reason something like this:

He’s showing me a red side. So he has pulled either the red-red card or the red-white card. That means that the other side is either red or white with equal probability. I can afford to take this bet because even if I don’t win this time, I will win soon after.

Setting aside the gambler’s fallacy—many people have lost money by doubling down on roulette only to find out that chance is not a self-correcting process—the con man is relying on you (counting on you?) to make this erroneous assignment of probability, and usually talking fast in order to fractionate your attention.
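
That roulette aside is easy to verify for yourself. Here is a minimal simulation sketch in Python, with hypothetical stakes: a $500 bankroll, a $5 base bet doubled after every loss, a player who quits at $1,000, and red on an American wheel.

```python
import random

# Martingale betting: double the bet after every loss, hoping red is "due."
def martingale(bankroll=500, base_bet=5, target=1000, p_red=18 / 38):
    bet = base_bet
    while bet <= bankroll < target:  # stop at the target, or when we can't cover the bet
        if random.random() < p_red:
            bankroll += bet          # a win pays even money...
            bet = base_bet           # ...and nets +$5 for the whole cycle
        else:
            bankroll -= bet
            bet *= 2                 # "red has to come up soon" -- but it doesn't
    return bankroll

random.seed(7)
runs = [martingale() for _ in range(10_000)]
print(f"reached $1,000: {sum(r >= 1000 for r in runs) / len(runs):.1%}")
print(f"ended behind:   {sum(r < 500 for r in runs) / len(runs):.1%}")
```

The losing runs all end the same way: a streak of losses doubles the bet past what the bankroll can cover, because the wheel has no memory of being “due.” As for the card game, it helps to work it out pictorially.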

Here are the three cards:

Card 1: Red / Red
Card 2: White / White
Card 3: Red / White

If he is showing you a red side, it could be any one of three sides that he’s showing you. In two of those cases the other side is red, and in only one case is the other side white. So there is a two in three chance, not a one in two chance, that if he shows you red, the other side will be red. The intuitive fifty-fifty answer comes from failing to account for the fact that on the double-red card, he could be showing you either side. If you had trouble with this, don’t feel bad—similar mistakes were made by the mathematician and philosopher Gottfried Wilhelm Leibniz and by many more recent textbook authors.
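
If the enumeration still feels slippery, a short simulation settles it (a minimal Python sketch of the game, assuming the card is drawn at random and either side is equally likely to be the one facing you):

```python
import random

# The three cards, each as a (side_a, side_b) pair
cards = [("red", "red"), ("white", "white"), ("red", "white")]

random.seed(42)
shown_red = hidden_red = 0
for _ in range(100_000):
    card = random.choice(cards)            # the con man draws a card...
    side = random.randrange(2)             # ...and a random side faces you
    shown, hidden = card[side], card[1 - side]
    if shown == "red":
        shown_red += 1
        hidden_red += hidden == "red"

print(hidden_red / shown_red)  # ~0.667: two in three, not one in two
```

When evaluating claims based on probabilities, try to understand the underlying model. This can be difficult to do, but if you recognize that probabilities are tricky, and recognize the limitations most of us have in evaluating them, you’ll be less likely to be conned. But what if everyone around you is agreeing with something that is, well, wrong? The exquisite new clothes the emperor is wearing, perhaps?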

Counterknowledge

Counterknowledge, a term coined by the U.K. journalist Damian Thompson, is misinformation packaged to look like fact that a critical mass of people has begun to believe. Examples come from science, current affairs, celebrity gossip, and pseudo-history. It includes claims that lack supporting evidence, and claims that are clearly contradicted by the evidence that does exist. Take the pseudo-historical claims that the Holocaust, the moon landings, or the attacks of September 11, 2001, in the United States never happened, but were part of massive conspiracies. (Counterknowledge doesn’t always involve conspiracies—only sometimes.)

Part of what helps counterknowledge spread is the intrigue of imagining: What if it were true? Again, humans are a storytelling species, and we love a good tale. Counterknowledge initially attracts us with the patina of knowledge by using numbers or statistics, but further examination shows that these have no basis in fact—the purveyors of counterknowledge are hoping you’ll be sufficiently impressed (or intimidated) by the presence of numbers that you’ll blindly accept them. Or they cite “facts” that are simply untrue.

Damian Thompson tells the story of how these claims can take hold, get under our skin, and cause us to doubt what we know . . . that is, until we apply a rational analysis. Thompson recalls the time a friend, speaking of the 9/11 attacks in the United States, “grabbed our attention with a plausible-sounding observation: ‘Look at the way the towers collapsed vertically, instead of toppling over. Jet fuel wouldn’t generate enough heat to melt steel. Only controlled explosions can do that.’”

The anatomy of this counterknowledge goes something like this:

1. The towers collapsed vertically: This is true. We’ve seen footage.
2. If the attack had been carried out the way they told us, you’d expect the building to topple over: This is an unstated, hidden premise. We don’t know if this is true. Just because the speaker is asserting it doesn’t make it true. This is a claim that requires verification.
3. Jet fuel wouldn’t generate enough heat to melt steel: We don’t know if this is true either. And it ignores the fact that other flammables—cleaning products, paint, industrial chemicals—may have existed in the building, so that once a fire got going, they added to it.

If you’re not a professional structural engineer, you might find these premises plausible. But a little bit of checking reveals that professional structural engineers have found nothing mysterious about the collapse of the towers.

It’s important to accept that in complex events, not everything is explainable, because not everything was observed or reported. In the assassination of President John F. Kennedy, the Zapruder film is the only photographic evidence of the sequence of events, and it is incomplete. Shot on a consumer-grade camera, it runs at only 18.3 frames per second and is low-resolution. There are many unanswered questions about the assassination: indications that evidence was mishandled, eyewitnesses who were never questioned, and unexplained deaths of people who claimed or were presumed to know what really happened. There may well have been a conspiracy, but the mere fact that there are unanswered questions and inconsistencies is not proof of one. An unexplained headache with blurred vision is not evidence of a rare brain tumor—it is more likely something less dramatic.

Scientists and other rational thinkers distinguish between things that we know are almost certainly true—such as photosynthesis, or that the Earth revolves around the sun—and things that are probably true, such as that the 9/11 attacks were the result of hijacked airplanes, not a U.S. government plot. There are different amounts of evidence, and different kinds of evidence, weighing in on each of these topics. And a few holes in an account or a theory do not discredit it. A handful of unexplained anomalies does not undermine a well-established theory that is based on thousands of pieces of evidence. Yet these anomalies are typically at the heart of all conspiratorial thinking, Holocaust revisionism, anti-evolutionism, and 9/11 conspiracy theories. The difference between a false theory and a true theory is one of probability. Thompson dubs something counterknowledge when it runs contrary to real knowledge and has some social currency.

When Reporters Lead Us Astray

News reporters gather information about important events in two different ways. These two ways are often incompatible with each other, resulting in stories that can mislead the public if the journalists aren’t careful.

In scientific investigation mode, reporters are in a partnership with scientists—they report on scientific developments and help to translate them into a language that the public can understand, something that most scientists are not good at. The reporter reads about a study in a peer-reviewed journal or a press release. By the time a study reaches peer review, usually three to five unbiased and established scientists have reviewed the study and accepted its accuracy and its conclusions. It is not usually the reporter’s job to establish the weight of scientific evidence supporting every hypothesis, auxiliary hypothesis, and conclusion; that has already been done by the scientists writing the paper.

Now the job splits off into two kinds of reporters. The serious investigative reporter, such as for the Washington Post or the Wall Street Journal, will typically contact a handful of scientists not associated with the research to get their opinions. She will seek out opinions that go against the published report. But the vast majority of reporters consider that their work is done if they simply report on the story as it was published, translating it into simpler language.

In breaking news mode, reporters try to figure out something that’s going on in the world by gathering information from sources—witnesses to events. This can be someone who witnessed a holdup in Detroit or a bombing in Gaza or a buildup of troops in Crimea. The reporter may have a single eyewitness, or try to corroborate with a second or third. Part of the reporter’s job in these cases is to ascertain the veracity and trustworthiness of the witness. Questions such as “Did you see this yourself?” or “Where were you when this happened?” help to do so. You’d be surprised at how often the answer is no, or how often people lie, and it is only through the careful verifications of reporters that inconsistencies come to light.

So in Mode One, journalists report on scientific findings, which themselves are probably based on thousands of observations and a great amount of data. In Mode Two, journalists report on events, which are often based on the accounts of only a few eyewitnesses.

Because reporters have to work in both these modes, they sometimes confuse one for the other. They sometimes forget that the plural of anecdote is not data; that is, a bunch of stories or casual observations do not make science. Tangled in this is our expectation that newspapers should entertain us as we learn, tell us stories. And most good stories show us a chain of actions that can be related in terms of cause and effect. Risky mortgages were repackaged into AAA-rated investment products, and that led to the housing collapse of 2007. Regulators ignored the buildup of debris above the Chinese city of Shenzhen, and in 2015 it collapsed and created an avalanche that toppled thirty-three buildings. These are not scientific experiments, they are events that we try to make sense of, to make stories out of. The burden of proof for news articles and scientific articles is different, but without an explanation, even a tentative one, we don’t have much of a story. And newspapers, magazines, books—people—need stories.
