
This seems like an obvious point, but it is widely misunderstood.[19]
Advertisers, in fact, often pay a premium to reach customers they think are most likely to buy their products—because they have bought their products (e.g., Pampers) in the past; or because they have bought products in the same category (e.g., a competitor to Pampers); or because their attributes and circumstances make them likely to do so soon (e.g., a young couple expecting their first child). Targeted advertising of this kind is often held up as the quintessence of a scientific approach. But again, at least some of those consumers, and possibly many of them, would have bought the products anyway. As a result, the ads were just as wasted on them as they were on consumers who saw the ads and weren’t interested. Viewed this way, the only ads that matter are those that sway the marginal consumer—the one who ends up buying the product, but who wouldn’t have bought it had they not seen the ad. And the only way to determine the effect on marginal consumers is to conduct an experiment in which the decision about who sees the ad and who doesn’t is made randomly.
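Stated a bit more formally (the notation is mine, not the book’s), the quantity at stake is the lift the ad produces:

    lift = Pr(purchase | saw the ad) - Pr(purchase | did not see the ad)

Random assignment is what makes this difference estimable: chance alone decides who lands on each side of the comparison, so the two groups differ, on average, only in their exposure to the ad.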

FIELD EXPERIMENTS

A common objection to running these kinds of randomized experiments is that it can be difficult to do in practice. If you put up a billboard by the highway or place an ad in a magazine, it’s generally impossible to know who sees it—even consumers themselves are often unaware of the ads they have seen. Moreover, the effects can be hard to measure. Consumers may make a purchase days or even weeks later, by which stage the connection between seeing the ad and acting on it has been lost. These are reasonable objections, but increasingly they can be dealt with, as three of my colleagues at Yahoo!—David Reiley, Taylor Schreiner, and Randall Lewis—demonstrated recently in a pioneering “field experiment” involving 1.6 million customers of a large retailer who were also active Yahoo! users.

To perform the experiment, Reiley and company randomly assigned 1.3 million users to the “treatment” group, meaning that when they arrived at Yahoo!-operated websites, they were shown ads for the retailer. The remaining 300,000, meanwhile, were assigned to the “control” group, meaning that they did not see these ads even if they visited exactly the same pages as the treatment group members. Because the assignment of individuals to treatment and control groups was random, the differences in behavior between the two groups had to be caused by the advertising itself. And because all the participants in the experiment were also in the retailer’s database, the effect of the advertising could be measured in terms of their actual purchasing behavior—up to several weeks after the campaign itself concluded.[20]
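The logic of such an experiment is simple enough to sketch in a few lines of code. The following Python fragment is a minimal illustration, not the researchers’ actual analysis; the function names, the group sizes, and the use of mean spend as the outcome measure are assumptions of mine based on the description above.

    import random

    def assign_groups(user_ids, treatment_fraction=1.3e6 / 1.6e6):
        """Randomly split users into a treatment group (shown the ads)
        and a control group (not shown them)."""
        treatment, control = [], []
        for uid in user_ids:
            if random.random() < treatment_fraction:
                treatment.append(uid)
            else:
                control.append(uid)
        return treatment, control

    def average_treatment_effect(spend, treatment, control):
        """Difference in mean spend between the two groups; random
        assignment is what licenses reading this difference causally."""
        mean = lambda ids: sum(spend.get(u, 0.0) for u in ids) / len(ids)
        return mean(treatment) - mean(control)

The spend figures here would come from the retailer’s own purchase database, which is precisely what made this experiment possible: because the split is random, any systematic difference in spend between the groups can be attributed to the ads rather than to preexisting differences between the people in them.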

Using this method, the researchers estimated that the additional revenue generated by the advertising was roughly four times the cost of the campaign in the short run, and possibly much higher over the long run. Overall, therefore, they concluded that the campaign had in fact been effective—a result that was clearly good news both for Yahoo! and the retailer. But what they also discovered was that almost all the effect was for older consumers—the ads were largely ineffective for people under forty. At first, this latter result seems like bad news. But the right way to think about it is that finding out that something doesn’t work is also the first step toward learning what does work. For example, the advertiser could experiment with a variety of different approaches to appeal to younger people, including different formats, different styles, or even different sorts of incentives and offers. It’s entirely possible that something would work, and it would be valuable to figure out what that is in a systematic way.
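To see how a finding like the age split might fall out of the same data, one can run the identical comparison within each demographic segment. Again, this is a hypothetical sketch that reuses average_treatment_effect from above; the book does not describe how the age breakdown was actually computed.

    def lift_by_age(spend, ages, treatment, control, cutoff=40):
        """Average treatment effect computed separately for users
        under and over an age cutoff (here, forty)."""
        results = {}
        for label, keep in [("under_40", lambda a: a < cutoff),
                            ("40_and_over", lambda a: a >= cutoff)]:
            in_t = [u for u in treatment if keep(ages[u])]
            in_c = [u for u in control if keep(ages[u])]
            results[label] = average_treatment_effect(spend, in_t, in_c)
        return results

A result in which the under-forty lift is near zero while the forty-and-over lift is positive would correspond to the pattern the researchers found, and the same segment-by-segment comparison could be rerun for each new creative approach the advertiser tries.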

But let’s say that none of these attempts is effective. Perhaps the brand in question is just not appealing to particular demographics, or perhaps those people don’t respond to online advertising. Even in that event, however, the advertiser can at least stop wasting money advertising to them, freeing more resources to focus on the population that might actually be swayed. Regardless, the only way to improve one’s marketing effectiveness over time is to first know what is working and what isn’t. Advertising experiments, therefore, should not be viewed as a one-off exercise that either yields “the answer” or doesn’t, but rather as part of an ongoing learning process that is built into all advertising.[21]

A small but growing community of researchers is now arguing that the same mentality should be applied not just to advertising but to all manner of business and policy planning, both online and off. In a recent article in MIT Sloan Management Review, for example, MIT professors Erik Brynjolfsson and Michael Schrage argue that new technologies for tracking inventory, sales, and other business parameters—whether the layout of links on a search page, the arrangement of products on a store shelf, or the details of a special direct mail offer—are bringing about a new era of controlled experiments in business. Brynjolfsson and Schrage even quote Gary Loveman, the chief executive of the casino company Harrah’s, as saying, “There are two ways to get fired from Harrah’s: stealing from the company, or failing to include a proper control group in your business experiment.” You might find it disturbing that casino operators are ahead of the curve in terms of science-based business practice, but the mind-set of routinely including experimental controls is one from which other businesses could clearly benefit.[22]

Field experiments are even beginning to gain traction in the more tradition-bound worlds of economics and politics. Researchers associated with the MIT Poverty Action Lab, for example, have conducted more than a hundred field experiments to test the efficacy of various aid policies, mostly in the areas of public health, education, and savings and credit. Political scientists have tested the effect of advertising and phone solicitations on voter turnout, as well as the effect of newspapers on political opinions. And labor economists have conducted numerous field experiments to test the effectiveness of different compensation schemes, or how feedback affects performance. Typically the questions these researchers pose are quite specific. Should aid agencies give away mosquito nets or charge for them? How do workers respond to fixed wages versus performance-based pay? Does offering people a savings plan help them to save more? Yet answers to even these modest questions would be useful to managers and planners. And field experiments could be conducted on grander scales as well. For example, public policy analyst Randal O’Toole has advocated conducting field experiments for the National Park Service that would test different ways to manage and govern the national parks by applying them randomly to different parks (Yellowstone, Yosemite, Glacier, etc.) and measuring which ones work the best.[23]

THE IMPORTANCE OF LOCAL KNOWLEDGE

The potential of field experiments is exciting, and there is no doubt that they are used far less often than they could be. Nevertheless, it isn’t always possible to conduct experiments. The United States cannot go to war with half of Iraq and remain at peace with the other half just to see which strategy works better over the long haul. Nor can a company easily rebrand just a part of itself, or rebrand itself with respect to only some consumers and not others.[24] For decisions like these, it’s unlikely that an experimental approach will be of much help; yet the decisions still have to get made. It’s all well and good for academics and researchers to debate the finer points of cause and effect, but our politicians and business leaders must often act in the absence of certainty. In such a world, the first order of business is not to let the perfect be the enemy of the good, or as my Navy instructors constantly reminded us, sometimes even a bad plan is better than no plan at all.

Fair enough. In many circumstances, it may well be true that realistically all one can do is pick the course of action that seems to have the greatest likelihood of success and commit to it. But the combination of power and necessity can also lead planners to have more faith in their instincts than they ought to, often with disastrous consequences. As I mentioned in Chapter 1, the late nineteenth and early twentieth centuries were characterized by pervasive optimism among engineers, architects, scientists, and government technocrats that the problems of society could be solved just like problems in science and engineering. Yet as the political scientist James Scott has written, this optimism was based on a misguided belief that the intuition of planners was as precise and reliable as mankind’s accumulated scientific expertise.

According to Scott, the central flaw in this “high modernist” philosophy was that it underemphasized the importance of local, context-dependent knowledge in favor of rigid mental models of cause and effect. As Scott put it, applying generic rules to a complex world was “an invitation to practical failure, social disillusionment, or most likely both.” The solution, Scott argued, is that plans should be designed to exploit “a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment.” This kind of knowledge, moreover, is hard to reduce to generally applicable principles precisely because “the environments in which it is exercised are so complex and non-repeatable that formal procedures of rational decision making are impossible to apply.” In other words, the knowledge on which plans should be based is necessarily local to the concrete situation in which it is to be applied.[25]

Scott’s argument in favor of local knowledge was in fact presaged many years earlier in a famous paper titled “The Use of Knowledge in Society” by the economist Friedrich Hayek, who argued that planning was fundamentally a matter of aggregating knowledge. Knowing what resources to allocate, and where, required knowing who needed how much of what relative to everyone else. Hayek also argued, however, that aggregating all this knowledge across a broad economy made up of hundreds of millions of people is impossible for any single central planner, no matter how smart or well intentioned. Yet it is precisely the aggregation of all this information that markets achieve every day, without any oversight or direction. If, for example, someone, somewhere invents a new use for iron that allows him to make more profitable use of it than anyone else, that person will also be willing to pay more for the iron than anyone else will. And because aggregate demand has now gone up, all else being equal, so will the price. The people who have less productive uses will therefore buy less iron, while the people who have more productive uses will buy more of it. Nobody needs to know why the price went up, or who it is that suddenly wants more iron—in fact, no one needs to know anything about the process at all. Rather, it is the “invisible hand” of the market that automatically allocates the limited amount of iron in the world to whoever can make the best use of it.

Hayek’s paper is often held up by free market advocates as an argument that government-designed solutions are always worse than market-based ones, and no doubt there are cases where this conclusion is correct. For example, “cap and trade” policies to reduce carbon emissions explicitly invoke Hayek’s reasoning. Rather than the government instructing businesses on how to reduce their carbon emissions—as would be the case with typical government regulation—it should simply place a cost on carbon by “capping” the total amount that can be emitted by the economy as a whole, and then leave it up to individual businesses to figure out how best to respond. Some businesses would find ways to reduce their energy consumption, while others would switch to alternative sources of energy, and others still would look for ways to clean up their existing emissions. Finally, some businesses would prefer to pay for the privilege of continuing to emit carbon by buying credits from those who prefer to cut back, where the price of the credits would depend on the overall supply and demand—just as in other markets.[26]

Market-based mechanisms like cap and trade do indeed seem to have more chance of working than centralized bureaucratic solutions. But market-based mechanisms are not the only way to exploit local knowledge, nor are they necessarily the best way. Critics of cap-and-trade policies, for example, point out that markets for carbon credits are likely to spawn all manner of complex derivatives—like the derivatives that brought the financial system to its knees in 2008—with consequences that may undermine the intent of the policy. A less easily gamed approach, they argue, would be to increase the cost of carbon simply by taxing it, thereby still offering incentives to businesses to reduce emissions and still giving them the flexibility to decide how best to reduce them, but without all the overhead and complexity of a market.

Another nonmarket approach to harnessing local knowledge that is increasingly popular among governments and foundations alike is the prize competition. Rather than allocating resources ahead of time to preselected recipients, prize competitions reverse the funding mechanism, allowing anyone to work on the problem, but only rewarding solutions that satisfy prespecified objectives. Prize competitions have attracted a lot of attention in recent years for the incredible amount of creativity they have managed to elicit from relatively small prize pools. The funding agency DARPA, for example, was able to harness the collective creativity of dozens of university research labs to build self-driving robot vehicles by offering just a few million dollars in prize money—far less than it would have cost to fund the same amount of work with conventional research grants. Likewise, the $10 million Ansari X Prize elicited more than $100 million worth of research and development in pursuit of building a reusable spacecraft. And the video rental company Netflix got some of the world’s most talented computer scientists to help it improve its movie recommendation algorithms for just a $1 million prize.

Inspired by these examples—along with “open innovation” companies like Innocentive, which conducts hundreds of prize competitions in engineering, computer science, math, chemistry, life sciences, physical sciences, and business—governments are wondering if the same approach can be used to solve otherwise intractable policy problems. In the past year, for example, the Obama administration has generated shock waves throughout the education establishment by announcing its “Race to the Top”—effectively a prize competition among US states for public education resources, allocated on the basis of plans that the states must submit, which are scored on a variety of dimensions, including student performance measurement, teacher accountability, and labor contract reforms. Much of the controversy around the Race to the Top takes issue with its emphasis on teacher quality as the primary determinant of student performance and on standardized testing as a way to measure it. These legitimate critiques notwithstanding, however, the Race to the Top remains an interesting policy experiment for the simple reason that, like cap and trade, it specifies the “solution” only at the highest level, while leaving the specifics up to the states themselves.[27]
