Consider the following descriptions of two people, Bill and Linda. Each description is followed by a list of occupations and avocations. Your task is to rank the items in each list by the degree to which Bill or Linda resembles the typical member of that class.

Bill is thirty-four years old. He is intelligent but unimaginative, compulsive, and generally lifeless. In school he was strong in mathematics but weak in social studies and humanities.

Bill is a physician who plays poker for a hobby.

Bill is an architect.

Bill is an accountant.

Bill plays jazz for a hobby.

Bill is a reporter.

Bill is an accountant who plays jazz for a hobby.

Bill climbs mountains for a hobby.

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Linda is a teacher in an elementary school.

Linda works in a bookstore and takes yoga classes.

Linda is active in the feminist movement.

Linda is a psychiatric social worker.

Linda is a member of the League of Women Voters.

Linda is a bank teller.

Linda is an insurance salesperson.

Linda is a bank teller and is active in the feminist movement.

After you’ve made your ranking, take a look at two pairs of statements in particular: “Bill plays jazz for a hobby” and “Bill is an accountant who plays jazz for a hobby,” and “Linda is a bank teller” and “Linda is a bank teller and is active in the feminist movement.” Which of the two statements have you ranked as more likely in each pair?

I am willing to bet that it was the second one in both cases. If it was, you’d be with the majority, and you would be making a big mistake.

This exercise was taken verbatim from a 1983 paper by Amos Tversky and Daniel Kahneman, to illustrate our present point: when it comes to separating crucial details from incidental ones, we often don’t fare particularly well. When the researchers’ subjects were presented with these lists, they repeatedly made the same judgment that I’ve just predicted you would make: that it was more likely that Bill was an accountant who plays jazz for a hobby than it was that he plays jazz for a hobby, and that it was more likely that Linda was a feminist bank teller than that she was a bank teller at all.

Logically, neither idea makes sense: a conjunction cannot be more likely than either of its parts. If you didn’t think it likely that Bill played jazz or that Linda was a bank teller to begin with, you should not have altered that judgment just because you did think it probable that Bill was an accountant and Linda, a feminist. An unlikely element or event when combined with a likely one does not somehow magically become any more likely. And yet 87 percent and 85 percent of participants, for the Bill scenario and the Linda scenario, respectively, made that exact judgment, in the process committing the infamous conjunction fallacy.
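To see why in the plainest terms, it helps to write the rule out. What follows is simply the standard conjunction rule of probability theory; the 0.05 below is an invented illustration, not a figure from the study:

\[
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A), \qquad \text{since } P(B \mid A) \le 1.
\]

So if you judge the chance that Linda is a bank teller to be, say, \(P(A) = 0.05\), then even if you were certain she is a feminist given that she is a bank teller (\(P(B \mid A) = 1\)), the conjunction “feminist bank teller” can be at most 0.05; any doubt about the feminist part only drives it lower.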

They even made it when their choices were limited: if only the two relevant options (Linda is a bank teller or Linda is a feminist bank teller) were included, 85 percent of participants still ranked the conjunction as more likely than the single instance. Even when people were given the logic behind the statements, they sided with the incorrect resemblance logic (Linda seems more like a feminist, so I will say it’s more likely that she’s a feminist bank teller) over the correct extensional logic (feminist bank tellers are only a specific subset of bank tellers, so Linda must be a bank teller with a higher likelihood than she would be a feminist one in particular) in 65 percent of cases. We can all be presented with the same set of facts and features, but the conclusions we draw from them need not match accordingly.

Our brains weren’t made to assess things in this light, and our failings here actually make a good amount of sense. When it comes to things like chance and probability, we tend to be naive reasoners (and as chance and probability play a large part in many of our deductions, it’s no wonder that we often go astray). It’s called probabilistic incoherence, and it all stems from that same pragmatic storytelling that we engage in so naturally and readily—a tendency that may go back to a deeper, neural explanation; to, in some sense, W.J. and the split brain.

Simply put, while probabilistic reasoning seems to be localized in the left hemisphere, deduction appears to activate mostly the right hemisphere. In other words, the neural loci for evaluating logical implications and those for looking at their empirical plausibility may be in opposite hemispheres—a cognitive architecture that isn’t conducive to coordinating statement logic with the assessment of chance and probability. As a result, we aren’t always good at integrating various demands, and we often fail to do so properly, all the while remaining perfectly convinced that we have succeeded admirably.

The description of Linda coincides so well with “feminist” (and Bill’s with “accountant”) that we find it hard to dismiss the match as anything but hard fact. What is crucial here is our understanding of how frequently something occurs in real life, and the elementary logical point that a conjunction simply can’t be more likely than either of its parts. And yet we let the incidental descriptors color our minds so much that we overlook the crucial probabilities.

What we should be doing is something much more prosaic. We should be gauging how likely any separate occurrence actually is. In chapter three, I introduced the concept of base rates, or how frequently something appears in the population, and promised to revisit it when we discussed deduction. That’s because base rates, or our ignorance of them, are at the heart of deductive errors like the conjunction fallacy. They hamper observation, but where they really do their damage is in deduction, in moving from all of your observations to the conclusions they imply. Because here, selectivity—and selective ignorance—will throw you off completely.

To accurately gauge how likely Bill and Linda are to belong to any of these professions, we need to understand the prevalence of accountants, bank tellers, amateur jazz musicians, active feminists, and the whole lot in the population at large. We can’t take our protagonists out of context. We can’t allow one potential match to throw off the other information we might have.
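A minimal sketch of what that gauging looks like in practice, in Python. The base rates and fit scores below are invented purely for illustration (Tversky and Kahneman report no such figures); the arithmetic is just the standard way of combining a base rate with how well a description fits:

```python
# Hypothetical base rates: roughly what fraction of the population
# belongs to each class. All numbers here are invented illustrations.
base_rate = {
    "accountant": 0.01,
    "jazz hobbyist": 0.002,
    "jazz-playing accountant": 0.00002,  # a subset of both groups above
}

# How well Bill's description fits each class: P(description | class).
# Also invented -- this "resemblance" is what our intuition fixates on.
fit = {
    "accountant": 0.30,
    "jazz hobbyist": 0.01,
    "jazz-playing accountant": 0.01,
}

# Bayes' rule, up to a shared normalizing constant:
# P(class | description) is proportional to P(description | class) * P(class).
for label in base_rate:
    posterior = fit[label] * base_rate[label]
    print(f"{label}: {posterior:.7f}")

# Even though "jazz-playing accountant" resembles Bill exactly as much as
# "jazz hobbyist" does here, its tiny base rate keeps its posterior far
# below that of plain "jazz hobbyist" -- the conjunction can never win.
```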

So, how does one go about resisting this trap, sorting the details properly instead of being swept up in irrelevance?

Perhaps the pinnacle of Holmes’s deductive prowess comes in a case that is less traditional than many of his London pursuits. Silver Blaze, the prize-winning horse of the story’s title, goes missing days before the big Wessex Cup race, on which many a fortune rides. That same morning, his trainer is found dead some distance from the stable. His skull looks as though it has been struck by some large, blunt object. The lackey who had been guarding the horse has been drugged and remembers precious little of the night’s events.

The case is a sensational one: Silver Blaze is one of the most famous horses in England. And so, Scotland Yard sends Inspector Gregory to investigate. Gregory, however, is at a loss. He arrests the most likely suspect—a gentleman who had been seen around the stable the evening of the disappearance—but admits that all the evidence is circumstantial and that the picture may change at any moment. And so, three days later, with no horse in sight, Sherlock Holmes and Dr. Watson make their way to Dartmoor.

Will the horse run the race? Will the trainer’s murderer be brought to justice? Four more days pass. It is the morning of the race. Silver Blaze, Holmes assures the worried owner, Colonel Ross, will run. Not to fear. And run he does. He not only runs, but wins. And his trainer’s murderer is identified soon thereafter.

We’ll be returning to “Silver Blaze” several times for its insights into the science of deduction, but first let’s consider how Holmes introduces the case to Watson.

“It is one of those cases,” says Holmes, “where the art of the reasoner should be used rather for the sifting of details than for the acquiring of fresh evidence. The tragedy has been so uncommon, so complete, and of such personal importance to so many people that we are suffering from a plethora of surmise, conjecture, and hypothesis.” In other words, there is too much information to begin with, too many details to be able to start making them into any sort of coherent whole, separating the crucial from the incidental. When so many facts are piled together, the task becomes increasingly problematic. You have a vast quantity of your own observations and data but also an even vaster quantity of potentially incorrect information from individuals who may not have observed as mindfully as you have.

Holmes puts the problem this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories of our minds.

When we pry the incidental and the crucial apart, we have to exercise the same care that we did in observing, to make sure that we have recorded all of our impressions accurately. If we’re not careful, mindset, preconception, or subsequent turns of events can affect even what we think we observed in the first place.

In one of Elizabeth Loftus’s classic studies of eyewitness testimony, participants viewed a film depicting an automobile accident. Loftus then asked each participant to estimate how fast the cars were going when the accident occurred—a classic deduction from available data. But here’s the twist: each time she asked the question, she subtly altered the phrasing. Her description of the accident varied by verb: the cars smashed, collided, bumped, contacted, or hit. What Loftus found was that her phrasing had a drastic impact on subjects’ memory. Not only did those in the “smashed” condition estimate a higher speed than those in the other conditions, but they were also far more likely to recall, one week later, having seen broken glass in the film, even though there was actually no broken glass at all.

It’s called the misinformation effect. When we are exposed to misleading information, we are likely to recall it as true and to take it into consideration in our deductive process. (In the Loftus experiment, the subjects weren’t even exposed to anything patently false, just misleading.) A specific word choice acts as a simple frame that colors our line of reasoning and even our memory. Hence the difficulty, and the absolute necessity, that Holmes describes of learning to sift the irrelevant (and all that is media conjecture) from the real, objective, hard facts—and to do so mindfully and systematically. If you don’t, you may find yourself remembering broken glass instead of the intact windshield you actually saw.

In fact, it’s when we have more, not less, information that we should be most careful. Our confidence in our deductions tends to increase along with the number of details on which we base them—especially if one of those details makes sense. A longer list somehow seems more reasonable, even if we were to judge individual items on that list as less than probable given the information at hand. So when we see one element in a conjunction that seems to fit, we are likely to accept the full conjunction, even if it makes little sense to do so. Linda the feminist bank teller. Bill the jazz-playing accountant. It’s perverse, in a way. The better we’ve observed and the more data we’ve collected, the more likely we are to be led astray by a single governing detail.
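The arithmetic behind that perversity is worth a glance. Each detail we conjoin multiplies the story’s probability by a factor of at most one, so the joint probability can only shrink as the list grows, even as the story feels richer and more specific. A minimal sketch in Python, with invented probabilities:

```python
# Invented per-detail probabilities for an increasingly "vivid" story.
details = {
    "is a bank teller": 0.05,
    "is active in the feminist movement (given the above)": 0.50,
    "takes yoga classes (given the above)": 0.40,
}

story_probability = 1.0
for detail, p in details.items():
    story_probability *= p  # each added conjunct can only shrink the total
    print(f"... {detail}: story probability is now {story_probability:.3f}")

# The output walks from 0.050 down to 0.025 down to 0.010: the richer,
# more plausible-sounding story is strictly less probable.
```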

Similarly, the more incidental details we see, the less likely we are to home in on the crucial, and the more likely we are to give the incidental undue weight. If we are told a story, we are more likely to find it compelling and true if we are also given more details, even if those details are irrelevant to the story’s truth. Psychologist Ruma Falk has noted that when a narrator adds specific, superfluous details to a story of coincidence (for instance, that two people win the lottery in the same small town), listeners are more likely to find the coincidence surprising and compelling.

Usually when we reason, our minds have a tendency to grab any information that seems to be related to the topic, in the process retrieving both relevant cues and those that seem somehow to be connected but may not actually matter. We may do this for several reasons: familiarity, or a sense that we’ve seen this before or should know something even when we can’t quite put our finger on it; spreading activation, or the idea that the activation of one little memory node triggers others, and over time the triggered memories spread further away from the original; or simple accident or coincidence—we just happen to think of something while thinking about something else.
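Spreading activation is often modeled as activation flowing outward through an associative network and fading with distance. The toy network and decay factor below are invented purely to illustrate the idea, not drawn from any model in the book:

```python
# A toy associative network: each concept points to its neighbors.
network = {
    "Silver Blaze": ["horse", "trainer"],
    "horse": ["race", "stable"],
    "trainer": ["murder", "stable"],
    "race": ["lunch?"],  # a stray, barely related association
    "stable": ["lackey"],
}

def spread(start, decay=0.5, threshold=0.1):
    """Propagate activation outward, halving it at each hop."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor in network.get(node, []):
            new_level = activation[node] * decay
            if new_level > activation.get(neighbor, 0) and new_level >= threshold:
                activation[neighbor] = new_level
                frontier.append(neighbor)
    return activation

print(spread("Silver Blaze"))
# Nearby, relevant nodes ("horse", "trainer") end up strongly active;
# distant strays ("lunch?") receive only faint activation -- but they
# do receive some, which is exactly how the irrelevant sneaks in.
```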

If, for example, Holmes were to magically emerge from the book and ask us, not Watson, to enumerate the particulars of the case at hand, we’d rummage through our memory (What did I just read? Or was that the other case?), take certain facts out of storage (Okay: horse gone, trainer dead, lackey drugged, possible suspect apprehended. Am I missing anything?), and in the process likely bring up others that may not matter all that much (I think I forgot to eat lunch because I was so caught up in the drama; it’s like that time I was reading The Hound of the Baskervilles for the first time, and forgot to eat, and then my head hurt, and I was in bed, and . . .).

If the tendency to over-activate and over-include isn’t checked, the activation can spread far wider than is useful for the purpose at hand—and can even interfere with the proper perspective needed to focus on that purpose. In the case of Silver Blaze, Colonel Ross constantly urges Holmes to do more, look at more, consider more, to leave, in his words, “no stone unturned.” Energy and activity, more is more; those are his governing principles. He is supremely frustrated when Holmes refuses, choosing instead to focus on the key elements that he has already identified. But Holmes realizes that to weed out the incidental, he should do anything but take in more and more theories and potentially relevant (or not) facts.
