Everything Is Obvious

Duncan J. Watts
FUTURE SHOCK

No matter how carefully you adhere to this advice, a serious limitation with all prediction methods is that they are only reliable to the extent that the same kind of events will happen in the future as happened in the past, and with the same average frequency.13

In regular times, for example, credit card companies may be able to do a pretty good job of predicting default rates. Individual people may be complicated and unpredictable, but they tend to be complicated and unpredictable in much the same way this week as they were last week, and so on average the models work reasonably well. But as many critics of predictive modeling have pointed out, many of the outcomes that we care about most—like the onset of the financial crisis, the emergence of a revolutionary new technology, the overthrow of an oppressive regime, or a precipitous drop in violent crime—are interesting to us precisely because they are not regular times. And in these situations some very serious problems arise from relying on historical data to predict future outcomes—as a number of credit card companies discovered when default rates soared in the aftermath of the recent financial crisis.

Even more important, the models that many banks were using to price mortgage-backed derivatives prior to 2008—like the infamous CDOs—now seem to have relied too much on data from the recent past, during which time housing prices had only gone up. As a result, ratings analysts and traders alike collectively placed too low a probability on a nationwide drop in real-estate values, and so badly underestimated the risk of mortgage defaults and foreclosure rates.14
At first, it might seem that this would have been a perfect application for prediction markets, which might have done a better job of anticipating the crisis than all the “quants” working in the banks. But in fact it would have been precisely these people—along with the politicians, government regulators, and other financial market specialists who also failed to anticipate the crisis—who would have been participating in the prediction market, so it’s unlikely that the wisdom of crowds would have been any help at all. Arguably, in fact, it was precisely the “wisdom” of the crowd that got us into
the mess in the first place. So if models, markets, and crowds can’t help predict black swan events like the financial crisis, then what are we supposed to do about them?

A second problem with methods that rely on historical data is that big, strategic decisions are not made frequently enough to benefit from a statistical approach. It may be the case, historically speaking, that most wars end poorly, or that most corporate mergers don’t pay off. But it may also be true that some military interventions are justified and that some mergers succeed, and it may be impossible to tell the difference in advance. If you could make millions, or even hundreds, of such bets, it would make sense to go with the historical probabilities. But when facing a decision about whether or not to lead the country into war, or to make some strategic acquisition, you cannot count on getting more than one attempt. Even if you could measure the probabilities, therefore, the difference between a 60 percent and 40 percent probability of success may not be terribly meaningful.
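The arithmetic behind this point can be made concrete with a small simulation. The numbers below are illustrative, not from the book: a decision with a true 60 percent chance of success is nearly a sure thing when averaged over a thousand bets, yet any single bet still fails roughly four times in ten.

```python
import random

random.seed(42)

def simulate(p_success: float, n_bets: int, trials: int = 2_000) -> float:
    """Fraction of trials in which at least half the bets succeed."""
    wins = 0
    for _ in range(trials):
        successes = sum(random.random() < p_success for _ in range(n_bets))
        if successes * 2 >= n_bets:
            wins += 1
    return wins / trials

# Averaged over many repeated bets, a 60 percent edge is almost certain
# to pay off more often than not...
print(simulate(0.6, n_bets=1001))  # ~1.0
# ...but a one-off decision still fails about 40 percent of the time.
print(simulate(0.6, n_bets=1))     # ~0.6
```

This is just the law of large numbers: the historical base rate is decisive only when you get to draw from it many times.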

Like anticipating black swans, making one-off strategic decisions is therefore ill suited to statistical models or crowd wisdom. Nevertheless, these sorts of decisions have to get made all the time, and they are potentially the most consequential decisions that anyone makes. Is there a way to improve our success here as well? Unfortunately, there’s no clear answer to this question. A number of approaches have been tried over the years, but none of them has a consistently successful track record. In part that’s because the techniques can be difficult to implement correctly, but mostly it’s because of the problem raised in the previous chapter—that there is simply a level of uncertainty about the future that we’re stuck with, and this uncertainty inevitably introduces errors into the best-laid plans.

THE STRATEGY PARADOX

Ironically, in fact, the organizations that embody what would seem to be the best practices in strategy planning—organizations, for example, that possess great clarity of vision and that act decisively—can also be the most vulnerable to planning errors. The problem is what strategy consultant and author Michael Raynor calls the strategy paradox. In his book of the same name, Raynor illustrates the paradox by revisiting the case of Sony’s Betamax videocassette, which famously lost out to the cheaper, lower-quality VHS technology developed by Matsushita. According to conventional wisdom, Sony’s blunder was twofold: First, they focused on image quality over running time, thereby conceding to VHS the advantage of being able to tape full-length movies. And second, they designed Betamax to be a standalone format, whereas VHS was “open,” meaning that multiple manufacturers could compete to make the devices, thereby driving down the price. As the video-rental market exploded, VHS gained a small early lead in market share, and this small lead then grew rapidly through a process of cumulative advantage. The more people bought VHS recorders, the more stores stocked VHS tapes, and vice versa. The result over time was near-total saturation of the market by the VHS format and a humiliating defeat for Sony.15
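The rich-get-richer dynamic at work here can be sketched as a toy simulation. The model below is an illustrative assumption, not Watts’s or Raynor’s: each new buyer picks a format with probability proportional to a power of its installed base, so that with superlinear feedback (the network effect) a small early lead tends to snowball into near-total market share.

```python
import random

random.seed(7)

def format_war(vhs: float = 51, beta: float = 49,
               adopters: int = 100_000, alpha: float = 2.0) -> float:
    """Toy model of cumulative advantage: each new buyer chooses a format
    with probability proportional to (installed base) ** alpha. With
    alpha > 1 the feedback is superlinear and the market locks in."""
    for _ in range(adopters):
        w = vhs ** alpha
        if random.random() < w / (w + beta ** alpha):
            vhs += 1
        else:
            beta += 1
    return vhs / (vhs + beta)  # final VHS market share

# Starting from a 51/49 split, one format almost always ends up with
# nearly the whole market -- though not always the one that led early.
shares = [round(format_war(), 2) for _ in range(5)]
print(shares)
```

The point of the sketch is that the near-monopoly outcome is robust while the identity of the winner is not, which is exactly why the outcome looked inevitable only in hindsight.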

What the conventional wisdom overlooks, however, is that Sony’s vision of the VCR wasn’t as a device for watching rented movies at all. Rather, Sony expected people to use VCRs to tape TV shows, allowing them to watch their favorite shows at their leisure. Considering the exploding popularity of digital video recorders (DVRs) that are now used for precisely this purpose, Sony’s view of the future wasn’t implausible at all. And if it had come to pass, the superior picture quality of Betamax might well have made up for the extra cost, while the shorter taping time may have been irrelevant.16
Nor was it the case that Matsushita had any better inkling than Sony how fast the video-rental market would take off—indeed, an earlier experiment in movie rentals by the Palo Alto–based firm CTI had failed dramatically. Regardless, by the time it had become clear that home movie viewing, not taping TV shows, would be the killer app of the VCR, it was too late. Sony did their best to correct course, and in fact very quickly produced a longer-playing BII version, eliminating the initial advantage held by Matsushita. But it was all to no avail. Once VHS got a sufficient market lead, the resulting network effects were impossible to overcome. Sony’s failure, in other words, was not really the strategic blunder it is often made out to be, resulting instead from a shift in consumer demand that happened far more rapidly than anyone in the industry had anticipated.

Shortly after their debacle with Betamax, Sony made another big strategic bet on recording technology—this time with their MiniDisc players. Determined not to make the same mistake twice, Sony paid careful attention to where Betamax had gone wrong, and did their best to learn the appropriate lessons. In contrast with Betamax, Sony made sure that MiniDiscs had ample capacity to record whole albums. And mindful of the importance of content distribution to the outcome of the VCR wars, they acquired their own content repository in the form of Sony Music. At the time they were introduced in the early 1990s, MiniDiscs held clear technical advantages over the then-dominant CD format. In particular, the MiniDiscs could record as well as play, and because they were smaller and more resistant to jolts they were better suited to portable devices. Recordable CDs, by contrast, required entirely new machines, which at the time were extremely expensive.

By all reasonable measures the MiniDisc should have been an outrageous success. And yet it bombed. What happened? In a nutshell, the Internet happened. The cost of memory plummeted, allowing people to store entire libraries of music on their personal computers. High-speed Internet connections allowed for peer-to-peer file sharing. Flash drive memory allowed for easy downloading to portable devices. And new websites for finding and downloading music abounded. The explosive growth of the Internet was not driven by the music business in particular, nor was Sony the only company that failed to anticipate the profound effect that the Internet would have on production, distribution, and consumption of music. Nobody did. Sony, in other words, really was doing the best that anyone could have done to learn from the past and to anticipate the future—but they got rolled anyway, by forces beyond anyone’s ability to predict or control.

Surprisingly, the company that “got it right” in the music industry was Apple, with their combination of the iPod player and their iTunes store. In retrospect, Apple’s strategy looks visionary, and analysts and consumers alike fall over themselves to pay homage to Apple’s dedication to design and quality. Yet the iPod was exactly the kind of strategic play that the lessons of Betamax, not to mention Apple’s own experience in the PC market, should have taught them would fail. The iPod was large and expensive. It was based on a closed architecture that Apple refused to license, ran on proprietary software, and was actively resisted by the major content providers. Nevertheless, it was a smashing success. So in what sense was Apple’s strategy better than Sony’s? Yes, Apple had made a great product, but so had Sony. Yes, they looked ahead and did their best to see which way the technological winds were blowing, but so did Sony. And yes, once they made their choices, they stuck to them and executed brilliantly; but that’s exactly what Sony did as well. The only important difference, in Raynor’s view, was that Sony’s choices happened to be wrong while Apple’s happened to be right.17

This is the strategy paradox. The main cause of strategic failure, Raynor argues, is not bad strategy, but great strategy that just happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution—not the stuff of success for sure, but more likely to lead to persistent mediocrity than colossal failure. Great strategy, by contrast, is marked by clarity of vision, bold leadership, and laser-focused execution. When applied to just the right set of commitments, great strategy can lead to resounding success—as it did for Apple with the iPod—but it can also lead to resounding failure. Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.

STRATEGIC FLEXIBILITY

The solution to the strategy paradox, Raynor argues, is to acknowledge openly that there are limits to what can be predicted, and to develop methods for planning that respect those limits. In particular, he recommends that planners look for ways to integrate what he calls strategic uncertainty—uncertainty about the future of the business you’re in—into the planning process itself. Raynor’s solution, in fact, is a variant of a much older planning technique called scenario planning, which was developed by Herman Kahn of the RAND Corporation in the 1950s as an aid for cold war military strategists. The basic idea of scenario planning is to create what strategy consultant Charles Perrottet calls “detailed, speculative, well thought out narratives of ‘future history.’ ”
Critically, however, scenario planners attempt to sketch out a wide range of these hypothetical futures, where the main aim is not so much to decide which of these scenarios is most likely as to challenge possibly unstated assumptions that underpin existing strategies.18

In the early 1970s, for example, the economist and strategist Pierre Wack led a team at Royal Dutch/Shell that used scenario planning to test senior management’s assumptions about the future success of oil exploration efforts, the political stability of the Middle East, and the emergence of alternative energy technologies. Although the main scenarios were constructed in the relatively placid years of energy production before the oil shocks of the 1970s and the subsequent rise of OPEC—events that definitely fall into the black swan category—Wack later claimed that the main trends had indeed been captured in one of his scenarios, and that the company was as a result better prepared both to exploit emerging opportunities and to hedge against potential pitfalls.19

Once these scenarios have been sketched out, Raynor argues that planners should formulate not one strategy, but rather a portfolio of strategies, each of which is optimized for a given scenario. In addition, one must differentiate core elements that are common to all these strategies from contingent elements that appear in only one or a few of them. Managing strategic uncertainty is then a matter of creating “strategic flexibility” by building strategies around the core elements and hedging the contingent elements through investments in various strategic options. In the Betamax case, for example, Sony expected the dominant use of VCRs would be to tape TV shows for later viewing, but it did have some evidence from the CTI experiment that the dominant use might instead turn out to be home movie viewing. Faced with these possibilities, Sony adopted a traditional planning approach, deciding first which of these outcomes they considered more likely, and then optimizing their strategy around that outcome. Optimizing for strategic flexibility, by contrast, would have led Sony to identify elements that would have worked no matter which version of the future played out, and then to hedge the residual uncertainty, perhaps by tasking different operating divisions to develop higher- and lower-quality models to be sold at different price points.
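One minimal way to picture the core/contingent distinction is as set operations over the portfolio of scenario-specific strategies: whatever appears in every strategy is core, and everything else is contingent. The scenario names and strategy elements below are invented for illustration, not taken from the book.

```python
# Hypothetical portfolio: one optimized strategy per scenario.
strategies = {
    "home-taping wins":   {"compact cassette", "hi-fi picture", "long tapes"},
    "movie rentals win":  {"compact cassette", "long tapes", "open licensing"},
    "both markets split": {"compact cassette", "budget model line"},
}

# Core elements appear in every scenario's strategy; build around these.
core = set.intersection(*strategies.values())

# Contingent elements appear in only some scenarios; hedge these with options.
contingent = set.union(*strategies.values()) - core

print(sorted(core))        # elements safe to commit to outright
print(sorted(contingent))  # elements to hold as strategic options
```

On this reading, strategic flexibility means committing capital to the intersection while buying cheaper options on the rest.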

Raynor’s approach to managing uncertainty through strategic flexibility is certainly intriguing. However, it is also a time-consuming process—constructing scenarios, deciding what is core and what is contingent, devising strategic hedges, and so on—that necessarily diverts attention from the equally important business of running a company. According to Raynor, the problem with most companies is that their senior management, meaning the board of directors and the top executives, spends too much time managing and optimizing their existing strategies—what he calls operational management—and not enough thinking through strategic uncertainty. Instead, he argues that they should devote all their time to managing strategic uncertainty, leaving the operational planning to division heads. As he puts it, “The board of directors and CEO of an organization should not be concerned primarily with the short-term performance of the organization, but instead occupy themselves with creating strategic options for the organization’s operating divisions.”20
