
None of this is to suggest that estimating probabilities in most real-world settings is easy, or that you should start second-guessing medical advice by running your test results through a two-way ANOVA statistical analysis. Still, it never hurts to ask some simple questions, such as, How common is this illness or condition in the general population? In other words, What is the size of the sample space I'm up against? This question is particularly useful in trying to get a reasonable sense of a "risk factor," or of one's "relative risk" compared to Max and Bryanna Populi. For example, five bad sunburns before the age of fifteen are said to double your odds of developing malignant melanoma. How awful! A few lousy days at Camp Minnehaha spent extracting oar splinters from your palms and taking group lanyard lessons under the full noonday sun, and you can raise your risk of contracting a potentially deadly skin cancer by 100 percent?
Yes, but as it happens, melanoma is quite rare, afflicting only 1.5 percent of the U.S. population; so even with the legacy of your childhood stir-fries, and assuming no other risk elevators like a family history of the disease, you're still talking about a lifetime risk below 4 percent. By all means, watch out for the appearance of new skin moles, particularly those shaped like raisins, Rorschach blots, or the literary caricatures of David Levine; and make sure that you and your loved ones are fully shellacked in sunblock prior to opening the window shades; but putting your dermatologist's pager on speed dial is surely going too far.
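For readers who want the arithmetic spelled out, here is a minimal sketch of that relative-versus-absolute risk calculation, using only the figures quoted above:

```python
# Relative risk vs. absolute risk, using the melanoma figures quoted above.
baseline_lifetime_risk = 0.015   # melanoma afflicts about 1.5% of the U.S. population
relative_risk = 2.0              # five bad childhood sunburns "double your odds"

absolute_risk = baseline_lifetime_risk * relative_risk
print(f"Absolute lifetime risk: {absolute_risk:.1%}")          # 3.0%, still below 4 percent
print(f"Odds of never developing melanoma: {1 - absolute_risk:.1%}")
```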

You might also want to ask your doctor about the published rates of false negatives and false positives for a given assay, and whether the measure of those accuracy statistics is itself accurate. Most health care professionals, despite their descriptor, are far more concerned with diagnosing and treating illness than with minimizing the number of false alarms their screens may activate among the healthy. As they see it, it's worse to miss a real case of a disease than to spot what initially looks like trouble and then find out, whew, you're fine after all. Yet for you, the medical consumer, the devastating impact of a false positive, however brief its tenure, can feel like an illness, so if there's any way to combat it with an estimate like the one for our hypothetical AIDS test, fire away.
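The base-rate arithmetic behind that hypothetical test is worth sketching. The 95 percent accuracy figure is the one cited later in this chapter; the one-in-a-thousand prevalence is an assumption chosen purely to illustrate how a rare condition swamps an accurate test with false alarms:

```python
# A sketch of the base-rate effect behind the hypothetical AIDS test.
# The 95% accuracy figure comes from the text; the prevalence is an
# illustrative assumption, not the author's number.
prevalence = 0.001            # assume 1 in 1,000 people actually has the condition
sensitivity = 0.95            # sick people the test correctly flags
false_positive_rate = 0.05    # healthy people the test incorrectly flags

population = 100_000
sick = population * prevalence            # 100 people
healthy = population - sick               # 99,900 people

true_positives = sick * sensitivity                  # 95
false_positives = healthy * false_positive_rate      # 4,995

# Of everyone who tests positive, what fraction is actually sick?
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result is real: {ppv:.1%}")   # roughly 2 percent
```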

Another way to feel more comfortable around quantitative reasoning is to try some at home, starting with a fun exercise that I'll call, until somebody stops me, the Fermi flex, after the great Italian physicist Enrico Fermi. In addition to being one of the giants of twentieth-century science, Fermi was a leader of the Manhattan Project during World War II, an assignment that for some reason had its stressful moments. To fortify morale and remyelinate the frayed nerves of his fellow bomb makers, Fermi would throw out quirky mental challenges. How many piano tuners are there in Chicago? he might ask, or, How many pounds of food do you eat in a year? As Fermi saw it, a good physicist, or any good thinker, should be able to devise an ad hoc, stepwise scheme for attacking virtually any problem and coming up with an answer that lies within the vaunted terrain known as "an order of magnitude." In other words, you shouldn't have to multiply or divide your estimate by a factor of ten or more to embrace the real answer. If the real answer is 5,400, you should be able to get an estimate in the range from 1,000 through 9,999; if the answer is 33,000, your Fermi-approved margin extends from 10,000 through 99,999.
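If you want to check your own guesses against Fermi's standard, the same-number-of-digits test described above fits in a couple of lines. (One could also read "order of magnitude" as within a factor of ten either way; this sketch follows the examples in the text.)

```python
import math

def same_order_of_magnitude(estimate: float, actual: float) -> bool:
    """True when the two values share a decade (the same number of digits),
    matching the examples above: 5,400 pairs with anything from 1,000 to 9,999."""
    return math.floor(math.log10(estimate)) == math.floor(math.log10(actual))

print(same_order_of_magnitude(3_200, 5_400))    # True: both are four-digit numbers
print(same_order_of_magnitude(33_000, 9_999))   # False: off by a decade
```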

Flexible enough, but how can you even begin to approximate the dimensions of an obscure trade like piano tuning in a city with which you have only the barest of airport hub acquaintance? In his admirable book Fear of Physics, the fearless physicist Lawrence Krauss shows the way. Chicago is one of the nation's largest cities, he says, which means its population must be up in the multimillion range, but not the 8 million of America's urban heavyweight, New York. Let's give it 4 million. How many households does that amount to? Say four people per dwelling, or some 1 million households. Think about the rate of piano ownership among your acquaintances: maybe 10 percent of the homes you know? So we've got roughly 100,000 Chicago pianos in need of occasional tune-ups. What's "occasional"? Once a year seems like a reasonable guess, at a fee of, say, $75 to $100 per tune-up. Now consider how many pianos a full-time piano tuner must tune to stay solvent. Maybe 2 a day, 10 a week, 400 to 500 a year? So we divide 100,000 by 400 or 500. All conjectures hazarded, we might expect to find a labor force of 200 to 250 pulling strings somewhere in the fabled birthplace of the skyscraper, the well-tailored gangster, and a bland, eponymously named rock band from the 1970s. By the order of his majesty's order of magnitude, Krauss writes, "this estimate, obtained quickly, tells us that we would be surprised to find less than about 100 or more than about 1,000 tuners." No need for shock therapy: the actual answer is about 150.
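Krauss's chain of guesses is short enough to lay out as straight arithmetic; the numbers below are his, with 450 tunings a year splitting his 400-to-500 range:

```python
# Krauss's Chicago piano-tuner estimate, step by step.
population = 4_000_000                  # multimillion, but well short of New York's 8 million
people_per_household = 4
households = population / people_per_household        # ~1,000,000
piano_ownership_rate = 0.10                           # maybe 10 percent of homes
pianos = households * piano_ownership_rate            # ~100,000
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 450                      # ~2 a day, 400 to 500 a year

tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(f"Estimated Chicago piano tuners: {tuners:.0f}")   # roughly 200 to 250
```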

My turn. I decided I'd try estimating the number of school buses in my county in Maryland, Montgomery, which extends from the border of Washington, D.C., at the southern edge up to points north near Baltimore. Mainly I was curious about how many buses sit idle during the county's vast number of "snow" days, which in this delusional plow-averse state are declared, not on the basis of verifiable accumulations of the white, fluffy substance called "snow," but rather on the premonition of snow as determined by a single factor: before venturing outside, you must put on something called "a coat."

In any event, how many of those cheery yellow child chariots can Montgomery County claim? From my obsessive scrutiny of election results every November, I happen to know that the county has about 500,000 registered voters. I also know that, given its proximity to our nation's capital, the region is politically plugged in and has a high rate of voter registration, maybe 70 percent, among eligible citizens. So I'd estimate the adult population to be around 650,000, or about 300,000 potential pairs. How many of these adult pairs are between the ages of twenty-five and fifty-five, the demographic likely to have school-age children? Let's say 150,000. And let's say that half of them have children, the most popular number being 2 per couple, with maybe 1.5 of those offspring in school. That gives us 110,000 kids in the Montgomery County school system. Some of those children are in private schools; others live close enough to walk or sniffle piteously enough to get driven. Let's cut the bused population in half, to 55,000. How many little scholars can you pack into one vehicle? Maybe 50? So that brings us down to about 1,100. But before we rest on our guesstimate, we must recall that school buses barrel through multiple routes each morning, which is why the wretched teenagers living next door to me have to be up and out the door to catch their bus by 7:15, while my elementary-school daughter gets to leave seventy minutes later. Assuming two routes a day per vehicle, we might wager that there are some 550 school buses in the Montgomery County public school system. Or at least somewhere between 100 and 1,000.
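The same chain of guesses, written as arithmetic, with the rounding following the text's:

```python
# The Montgomery County school-bus estimate, following the guesses above.
registered_voters = 500_000
registration_rate = 0.70                    # high, given the proximity to Washington
adults = 650_000                            # roughly registered_voters / registration_rate
adult_pairs = adults / 2                    # call it ~300,000
pairs_of_parenting_age = 150_000            # the twenty-five-to-fifty-five crowd
pairs_with_children = pairs_of_parenting_age / 2       # 75,000
kids_in_school = 110_000                    # 75,000 couples x ~1.5 school-age kids, rounded
kids_on_buses = kids_in_school / 2          # private schools, walkers, and the chauffeured drop out
kids_per_bus = 50
routes_per_bus = 2                          # each bus runs multiple morning routes

buses = kids_on_buses / kids_per_bus / routes_per_bus
print(f"Estimated school buses: {buses:.0f}")   # 550, somewhere between 100 and 1,000
```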

Consulting the Web page for the Montgomery County school system, I find that it owns about 250 school buses, half of my predicted sum, but still well within an order of magnitude of it. True, you could conclude that I might have saved myself the trouble by consulting the Internet to begin with; but I appreciated the exercise, the thinking through of the different parts of the puzzle—the number of fecund adults that might surround me, the likelihood of them acting out their fecundity, how many kids are in my daughter's cohort of standardized test–takers, and so forth. Through regular sessions of Fermi flexing, you get a better sense of how the world looks and how the pieces fit together. And while learning to admit that you don't know something is a worthy skill in its own right, better still if you can rally an algorithm to relieve your ignorance. If you're talking to a coworker who tells you his goal is to jog the equivalent of once around the Earth, and you realize with some embarrassment that you don't know or can't recall the circumference of the Earth, and you don't like this pompous coworker enough to give him the satisfaction of asking, Oh, and how far might that be? you can do a quickie estimate. Think about some geo-detail you do know—say, the duration and destination of a very long flight. My husband recently flew nonstop from New York to Singapore aboard Singapore Air; and though he slept for most of the eighteen-hour journey, he did manage to collect goodies like a cute hot-water bottle and a pair of booties with antiskid strips on the bottom. Singapore is very far from America's eastern seaboard, just about halfway around the globe, I'd guess. Jets average some 500 to 600 miles per hour. So 9,000 to 11,000 miles to Singapore, and double that for a round-the-world belt of 18,000 to 22,000 miles. The circumference of the Earth, in fact, is 24,902 miles at the equator (or 40,076 kilometers to most earthlings, including those who live at the equator). Our frequent-flier-derived answer, then, is well within the Fermi order of magnitude mandate. Yet jet-setting is one thing; literal globetrotting quite another. Glancing at the generous circumference of your colleague's waistline, which does not bespeak a natural athlete's physique, you smile broadly and wish him Godspeed. Why, a random act of quantitative reasoning has even made you appear kind.
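That frequent-flier conversion, for anyone who wants to replay it:

```python
# Estimating Earth's circumference from one very long flight, as described above.
flight_hours = 18                          # New York to Singapore, nonstop
jet_speed_low, jet_speed_high = 500, 600   # miles per hour

# Singapore is guessed to sit about halfway around the globe from the East Coast,
# so the one-way distance doubles into a full circumference.
low_estimate = flight_hours * jet_speed_low * 2      # 18,000 miles
high_estimate = flight_hours * jet_speed_high * 2    # 21,600 miles
actual_miles = 24_902                                # equatorial circumference

print(f"Estimate: {low_estimate:,} to {high_estimate:,} miles")
print(f"Actual:   {actual_miles:,} miles -- within the order-of-magnitude mandate")
```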

For all the power of quantitative reasoning and probabilistic analysis, Mark Twain, as ever, had a point about statistics: damn, can they lie. One of the finest and funniest popular science books ever written is the 1954 classic How to Lie with Statistics, by Darrell Huff, on the theme of how the experts are doing exactly that to you every day. Take the much-bandied and seemingly redoubtable term "statistically significant." Call a result "statistically significant," and it sounds as though there's no arguing the point. "Even some scientists and physicians have been brainwashed into thinking that the magic phrase is the answer to everything," said Alvan Feinstein, a professor of medicine and epidemiology at the Yale University School of Medicine. But what does "statistically significant" signify? Although definitions vary depending on who's bandying, the unadorned phrase generally means that the correlation you the scientist have hit upon—an association between a particular genetic mutation and a disease, for instance—has a probability value, or p value, of 5 percent, which in turn means that if chance alone were at work, your patent-pending correlation would show up no more than 5 percent of the time. In other words, the odds look good that you are onto something. A "p = 0.05" is the minimum passing grade that, according to scientific convention, renders a result "statistically significant" and eligible for submission to at least a sprinkling of the 20,000 or so research journals published worldwide. Yet consider how easy it is to beat this degree of significance to a senseless blubber. The hypothetical AIDS test discussed earlier would have a p value of 0.05; that's what its "95 percent accuracy" rate is all about. The outcome? A pool of false positives big enough to do laps in. For this reason, many scientists don't feel comfortable with such a lax measure of confidence, and they won't publish until their p values have a couple more zeros to the right of the dot, and the odds of the result being a mere fluke are pretty much equal to their chance of, say, winning the Nobel Prize. Twice.

Another slippery statistics term that has found its way into popular usage, and political abusage, is "average." As in: the average tax refund from the president's tax cut program will be $1,500. That sounds pretty decent, until you discover that the statistical "average" doesn't mean the "usual amount" of rebate that the "usual sort" of American family can expect to see. The statistical average, which is also known as the norm, is the statistical mean, a number you get by adding up all your quantities and dividing the sum by the number of data points—in this case, the grand total of tax refunds divided by the number of rebate checks cut. The problem with such calculations is how readily they can be skewed by, for example, the inclusion of a few colossal givebacks. If twenty families living on Creston Avenue in the Bronx receive tax refunds of anywhere from $100 to $300 per household, but a family with a floor-through on Manhattan's Gramercy Park gets an IRS mash note worth $70,000, the "average" refund for those twenty-one families would be about $3,500. Gee, thonx, said the Bronx. I feel richer already. Do you mind if I give a Bronx cheer?

A much more revealing data point would be the median tax cut, the value you'd see if you laid each of the twenty-one rebate checks in a row from feeblest to fattest and looked at the figure on the midpoint refund—the eleventh check. It would be about $200, a far truer measure of what the average Jones in our sample received than is the obfuscating "average." These days, given the growing gulch between extreme wealth and ordinary income in our country, financial matters often are best explored as medians rather than as averages or norms. If you include the wealth of a few Bill Gateses and Warren Buffetts in any calculus of "income norms," you'll make the whole population look comfortably flush, even as the great majority of families earn considerably less than your stated average or indeed what they might need to cover their monthly Visa bill.
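The gap between the two measures is easy to reproduce. The twenty Bronx refunds below are made-up values drawn from the $100-to-$300 range given above; only the $70,000 outlier and the headcount come straight from the text:

```python
# Mean vs. median for the twenty-one tax refunds described above.
from statistics import mean, median
import random

random.seed(0)
# Twenty Bronx households with refunds between $100 and $300 (illustrative values),
# plus the single $70,000 Gramercy Park refund.
refunds = [random.randint(100, 300) for _ in range(20)] + [70_000]

print(f"Mean refund:   ${mean(refunds):,.0f}")    # roughly $3,500
print(f"Median refund: ${median(refunds):,.0f}")  # roughly $200, the eleventh check
```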

Yet means and medians are not always so mismatched. Many times, they congregate closely beneath the comfortable shade of the celebrated parasol we know as the bell curve. This essential scientific principle unfortunately took on a neocon connotation in the mid-1990s, when Charles Murray and Richard Herrnstein adopted it as the title for their best-selling book about race and IQ. But Bell Curve: The Concept is much deeper and more illuminating than Bell Curve: The Tract. It's extraordinary how much of the world settles into a bell curve when you sally forth to size up its parts. If you were to go into a field of daisies, and measure the heights of, say, three hundred flowers, and mark those heights on a graph, you'd find a few shorties on the left end of the chart, and a few gangly overreachers on the right, but the great majority would amass in the midrange, and the contours of your distribution plot would, yes, ring a bell. The same for measurements you might make of the daisies' leaves, or of the diameter of the yellow centers. You'd have a few outlier examples of any given feature—stubby leaves, moon pie faces—but most would cluster around a central value that, whether you figured it as the mean or the median, would pretty much define the average dimensions of this most fetchingly normative of floral ambassadors.
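A simulated daisy field shows the same thing. The 30-centimeter average height and 4-centimeter spread are illustrative assumptions, not botanical measurements; what matters is that the mean and median land nearly on top of each other and the counts pile up in the middle:

```python
# Sketch of the daisy-field experiment: sample three hundred heights from a
# normal distribution and compare the mean, the median, and the histogram.
import random
from statistics import mean, median

random.seed(1)
heights = [random.gauss(30, 4) for _ in range(300)]   # heights in centimeters (assumed scale)

print(f"Mean height:   {mean(heights):.1f} cm")
print(f"Median height: {median(heights):.1f} cm")     # nearly identical to the mean

# A crude text histogram: most daisies cluster in the middle bins.
for lower in range(18, 42, 3):
    count = sum(lower <= h < lower + 3 for h in heights)
    print(f"{lower:2d}-{lower + 3:2d} cm | {'#' * (count // 5)}")
```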
