
In a blog post about the topic, Clay Shirky referred to the Supreme Court ruling in the Pentagon Papers case that said it's illegal to leak secrets but not illegal to publish leaks:

The legal bargain from 1971 simply does not and cannot produce the outcome it used to. This is one of the things freaking people in the US government out about the long-term change in the media environment—not that the law has changed, but that the world has. Industrial era law, applied to internet-era publishing, might allow for media outlets which exhibit no self-restraint around national sensitivities, because they are run by people without any loyalty to—or, more importantly, need of—national affiliation to do their jobs.

Foreign journalists pose a similar problem. The U.S. government has much less leverage to pressure El Pais or Al Jazeera to change its coverage than it does with the New York Times. That mattered less before the Internet could bring all those news sources to everyone so easily.

This unmooring of institutions from nationality is upending many societal pressures; things that used to work no longer do. We saw the same dynamic in international corporations, which can more easily skirt national laws by moving between different countries.

Now to the final change: organizational behavior. In addition to allowing organizations to grow in size, and therefore power, and facilitating new types of organizational structures, information technology is also changing how organizations act.

There have been many books and articles discussing how corporations today are putting short-term stock prices above all other business considerations, including company health and long-term shareholder value. I've read lots of explanations for this change. That executives' bonuses are based on short-term numbers. That stocks are used more for short-term “bets” than for long-term investments. That mutual funds and complex index options further remove investors from the companies they invest in. And that investors have access to more information faster—and can act on that information faster.

You get what you measure,6 and things like short-term profitability are much easier to measure than abstract concepts like long-term viability, or intangibles like customer satisfaction or reputation. An important facilitator for this dynamic—I don't know whether it's a cause or not—is information technology. Improved information technology makes the short-term numbers easier to monitor, so investors monitor them much more closely than ever before. This continuous monitoring makes them easier to optimize. We are better able to predict what a company's balance sheet will look like next week, and because we're so quick to trade one company for another, we care much less what it will look like in five years. This necessarily changes how investing works and how organizations behave, and the two are locked in a positive-feedback loop.

All these effects of ever-faster information technology play out in organizations at every scale, from the smallest groups to the entire world.

Modern large-scale and technological trade-offs between group interest and competing interests are what social planners call wicked problems. These are problems that are difficult (or impossible) to solve because of incomplete, poorly understood, contradictory, or changing requirements; because of complex interdependencies; and because of their uniqueness and novelty. Examples include global climate change, AIDS and pandemics in general, nuclear waste, terrorism and homeland security, drug trafficking and other international smuggling, and national healthcare. All of those problems involve societal pressures, and all of their solutions involve coercing people into following group norms ahead of other competing interests.

But—and this is important—all of the big societal pressure problems are about more than trust and security. They're interdependent with other societal dilemmas. They're interdependent with other societal systems. They have moral, social, economic, and political dimensions. Their solutions involve answering questions about how society organizes itself, the role of national and international government, the extent of individual liberties, and what sort of outcomes are optimal and desirable. And these aspects of the problems are far more important, and difficult, than the trust aspects. It's not simply a matter of implementing the best societal pressures to induce broad cooperation; everything else matters more. The geopolitics that result in terrorism matter much more than any particular security measure against terrorists. The politics in which multinational corporations thrive matter much more than the societal pressures to ensure those corporations cooperate. The politics surrounding drug laws, tax laws, laws protecting civil liberties, and our social safety net matter much more than the societal pressures to ensure that those laws are followed. Look back to the figure in Chapter 15; the “constraints” and the “other considerations” are more important than the primary loop.

Here's one example. In 2011, science fiction author Charles Stross gave a talk on the ubiquity of data that's coming in the near future, from technologies like genetic mapping, “lifeblogging”—the audio and video recording of everything that happens to you—and sensors on everyone and everything. Nothing he said required anything more than mild extrapolation. And then he talked about the issues that society is going to have to wrestle with once this data exists:

Is losing your genomic privacy an excessive price to pay for surviving cancer and evading plagues? (Broad analysis of everyone's genetic data will result in significant new understanding of disease, and a flurry of medical results that will benefit everyone. At the same time, an individual's genetic data is both personal and private—even more so when companies start using it to prejudge people.)

Is compromising your sensory privacy through lifeblogging a reasonable price to pay for preventing malicious impersonation and apprehending criminals? (Lifeblogs have the potential to be a valuable police tool, not just by allowing victims to record crimes, but by incidentally recording background events that could later be instrumental in identifying criminals.)

Is letting your insurance company know exactly how you steer and hit the gas and brake pedals, and where you drive, an acceptable price to pay for cheaper insurance? (Once insurance companies have all of this data, they could more easily offer different insurance policies to different types of drivers.)

These are all societal dilemmas about how to balance group interest with self-interest. But before figuring out what kind of societal pressures to deploy to solve the problem, society first has to agree what the group interest is. We can't start talking about what kind of societal pressures to set up to prevent people from keeping their genome secret, or protecting the privacy of their lifeblog, or limiting access to their car's “black box” data, until we agree on what it means to cooperate and what it means to defect in these situations. It's difficult to solve societal dilemmas while society itself is changing so quickly.

This isn't the first time technological change has caused social changes that forced us to rethink society, and it won't be the last. The trick will be getting societal pressure right in a society that's moving so fast that getting it wrong is increasingly dangerous. This means getting faster and better at setting societal pressure knobs. It means setting them right the first time, and then correcting them quickly in response to feedback, delays, and technological changes. To that end, here is a list of principles for designing effective societal pressures:

  • Understand the societal dilemma.
    Not just what the group interest is, but what the group norm is, what the competing norms are, how the societal dilemma relates to other societal dilemmas, what the acceptable scope of defection is, and so on. A lot of ineffective societal pressures come from not understanding the true problem.
  • Consider all four societal pressures.
    It's common to believe that one is enough: that reputation obviates the need for laws, or that a good security system is sufficient to enforce compliance. It's rare that this is true, and effective societal pressure usually involves all four categories, though not necessarily in equal measure. Considering all four will indicate how resources might be most effectively spent.
  • Pay attention to scale.
    The scale of the societal dilemma influences how effective each of the four societal pressures will be. Noticing the scale, and noticing when the scale changes, is vital.
  • Foster empathy and community, increasing moral and reputational pressures.
    In our large, anonymous society, it's easy to forget moral and reputational pressures and concentrate on legal pressure and security systems. This is a mistake; even though our informal social pressures fade into the background, they're still responsible for most of the cooperation in society.
  • Use security systems to scale moral and reputational pressures.
    The two social pressures work best on the small scale, but security systems can enhance them to work at much larger scales. Scaled up this way, they don't work exactly as they do in person, and the security systems are themselves open to attack. Still, we can't simply replace moral and reputational pressures with institutional pressures, so it is important to use technology in this way.
  • Harmonize institutional pressures across related technologies.
    There shouldn't be one law for paper mail and another for e-mail, or one law for telephone conversations and another for Internet telephony. This sort of thing used to work back when technology changed slowly. Now, by the time the legal system grinds through the process of creating a law, it may already be technologically obsolete. We need to make laws technologically invariant. This won't be easy, but we need to try.
  • Ensure that financial penalties account for the likelihood that a defection will be detected.
    As I discussed in Chapter 13, a financial penalty that is too low can easily become a cost of doing business. If we expect a fine to be an effective societal pressure, the expected cost of defecting and paying it, meaning the fine discounted by the odds of getting caught, needs to exceed the gain from defecting; see the sketch after this list.
  • Choose general and reactive security systems.
    Just as we need to make laws technologically invariant, we need to make security systems defector-invariant. That is, we need to concentrate on the broad motivations for defection, rather than on blocking specific tactics, to prevent defectors from working around security systems. One example is counterterrorism, where society is much better off spending money on intelligence, investigation, and emergency response than on preventing specific terrorist threats, like bombs hidden in shoes or underwear.
  • Reduce concentrations of power.
    Power, whether it's concentrated in government, corporations, or non-government organizations, brings with it the ability to defect. The greater the power, the greater the scope of defection.7 One of the most important things society can do to reduce the risk of catastrophic defection is to reduce the amount of power held by individual actors in key positions.
  • Require transparency—especially in corporations and government institutions.
    Transparency minimizes the principal–agent problem and ensures the maximum effect of reputational pressures. In our complex society, we can't monitor most societal dilemmas directly. We need to rely on others—proxies—to do the work for us. Checks and balances are the most powerful tool we have to facilitate this, and transparency is the best way to ensure that checks and balances work. A corollary of this is that society should not suppress information about defectors, their tactics, and the overall scope of defection.
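
To make the arithmetic behind the penalties principle concrete, here is a minimal sketch in Python. It is my own illustration; the dumping scenario and every number in it are invented, not from the book:

    # A rational defector compares the gain from defecting against the
    # expected penalty: detection_probability * fine. Deterrence requires
    #     fine > gain / detection_probability.

    def minimum_deterrent_fine(gain, detection_probability):
        """Break-even fine; deterrence requires a fine above this."""
        return gain / detection_probability

    # Hypothetical: a factory saves $1,000,000 by dumping waste illegally,
    # but is caught only one time in ten.
    print(minimum_deterrent_fine(1_000_000, 0.10))  # 10000000.0

A $2 million fine may sound punitive, but at those odds it costs the defector only $200,000 in expectation against a $1 million gain: just a cost of doing business.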

We're currently in a period of history where technology is changing faster than it ever has. The worry is that if technology changes too fast, the defectors will be able to innovate so much faster than society can that the imbalances become even greater—an increased scope of defection leading to a still greater scope of defection—which can cause large societal failures. Think of what would happen to the Red Queen Effect if the stoats evolved faster than the rabbits: they would become significantly faster than the rabbits, then eat all the rabbits, and then all starve (assuming there's no other prey). Defectors in societal dilemmas can have the same effect if they evolve too quickly: they overwhelm the cooperators, which means there are no more cooperators, and the defectors themselves lose. Remember, parasites need society to be there in order to benefit from defecting; and being a parasite is a successful strategy only if you don't take too many resources from your host.
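
To see that dynamic concretely, here is a toy predator-prey simulation in Python. It is my own sketch, not a model from the book, and every parameter is invented: rabbits grow logistically, stoats convert eaten rabbits into more stoats and starve without them, and a population that drops below one animal is extinct.

    def simulate(efficiency, steps=5000):
        """Return final (rabbits, stoats) for a given hunting efficiency."""
        rabbits, stoats = 900.0, 10.0
        for _ in range(steps):
            eaten = min(rabbits, efficiency * rabbits * stoats)
            rabbits += 0.01 * rabbits * (1 - rabbits / 1000) - eaten  # logistic growth minus predation
            stoats += 0.1 * eaten - 0.005 * stoats                    # food becomes stoats; baseline deaths
            rabbits = rabbits if rabbits >= 1 else 0.0                # below one animal means extinction
            stoats = stoats if stoats >= 1 else 0.0
        return round(rabbits), round(stoats)

    print(simulate(efficiency=0.0001))  # balanced stoats: both populations persist
    print(simulate(efficiency=0.05))    # too-efficient stoats: (0, 0), rabbits eaten, stoats starve

With modest hunting efficiency the two populations settle into coexistence; make the stoats too efficient and they eat all the rabbits and then starve. Defectors who out-evolve the cooperators they depend on face the same arithmetic.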

On the other hand, we're also in a period of history where the ability for large-scale cooperation is greater than it ever has been before. In 2011, law professor Yochai Benkler published a book that is in many ways a companion volume to this one: The Penguin and the Leviathan: How Cooperation Triumphs Over Self-Interest.
Benkler writes that the Internet can enable, and already has enabled, cooperation on a scale never seen before, and that politics—backed by science—is ready to embrace this new cooperation.
