The US government’s desire for unfettered surveillance has already affected how the
Internet works. When surveillance becomes multinational and cooperative, those needs
will increasingly take precedence over others. And the architecture choices network
engineers make to comply with government surveillance demands are likely to be around
for decades, simply because it’s easier to keep doing the same things than to change.
By putting surveillance ahead of security, the NSA ensures the insecurity of us all.
COLLATERAL DAMAGE FROM CYBERATTACKS
As nations continue to hack each other, the Internet-using public is increasingly
part of the collateral damage. Most of the time we don’t know the details, but sometimes
enough information bubbles to the surface that we do.
Three examples: Stuxnet’s target was Iran, but the malware accidentally infected over
50,000 computers in India, Indonesia, Pakistan, and elsewhere, including computers
owned by Chevron, and industrial plants in Germany; it may have been responsible for
the failure of an Indian satellite in 2010. Snowden claims that the NSA accidentally
caused an Internet blackout in Syria in 2012. Similarly, China’s Great Firewall uses
a technique called DNS injection to block access to certain websites; this technique
regularly disrupts communications having nothing to do with China or the censored
links.
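To make that mechanism concrete, here is a minimal sketch, in Python, of one way measurement researchers detect this kind of injection: send a DNS query for a censored name to an address that runs no DNS server at all. A real resolver can't answer from that address, so any reply that arrives must have been forged in transit. The target address and domain below are placeholders, and the sketch assumes the third-party dnspython library is installed.

    # Detecting DNS injection: query a host that runs no DNS server.
    # Any answer that comes back was forged by an on-path injector.
    import dns.exception
    import dns.message
    import dns.query

    TARGET_IP = "192.0.2.1"        # placeholder: a non-DNS-serving address on the path
    NAME = "censored-example.com"  # placeholder domain

    query = dns.message.make_query(NAME, "A")
    try:
        # With no injector on the path, this query simply times out.
        response = dns.query.udp(query, TARGET_IP, timeout=5)
        print("Got an answer, so it must have been injected:", response.answer)
    except dns.exception.Timeout:
        print("No answer: no evidence of injection on this path.")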
The more nations attack each other through the global Internet—whether to gain intelligence
or to inflict damage—the more civilian networks will become collateral damage.
HARM TO NATIONAL INTERESTS
In Chapter 9, I discussed how the NSA’s activities harm US economic interests. They
also harm the country’s political interests.
Political scientist Ian Bremmer has argued that public revelations of the NSA’s activities
“have badly undermined US credibility with many of its allies.” US interests have
been significantly harmed on the world stage, as one country after another has learned
about our snooping on its citizens or leaders: friendly countries in Europe, Latin
America, and Asia. Relations between the US and Germany have been particularly strained
since it became public that the NSA was tapping the cell phone of German chancellor
Angela Merkel. And Brazil’s president Dilma Rousseff turned down an invitation to
a US state dinner—the first time any world leader did that—because she and the rest
of her country were incensed at NSA surveillance.
Much more is happening behind the scenes, over more private diplomatic channels. There’s
no soft-pedaling it; the US is undermining its global stature and leadership with
its aggressive surveillance program.
The harms from mass surveillance are many, and the costs to individuals and society
as a whole disproportionately outweigh the benefits. We can and must do something
to rein it in. Before offering specific legal, technical, and social proposals, I
want to start this section with some general principles. These are universal truths
about surveillance and how we should deal with it that apply to both governments and
corporations.
Articulating principles is the easy part. It’s far more difficult to apply them in
specific circumstances. “Life, liberty, and the pursuit of happiness” are principles
we all agree on, but we only need to look at Washington, DC, to see how difficult
it can be to apply them. I’ve been on many panels and debates where people on all
sides of this issue agree on general principles about data collection, surveillance,
oversight, security, and privacy, even though they disagree vehemently on how to apply
those principles to the world at hand.
SECURITY AND PRIVACY
Often the debate is characterized as “security versus privacy.” This simplistic view
requires us to make some kind of fundamental trade-off between the
two: in order to become secure, we must sacrifice our privacy and subject ourselves
to surveillance. And if we want some level of privacy, we must accept that we have
to sacrifice some security to get it.
It’s a false trade-off. First, some security measures require people to give up privacy,
but others don’t impinge on privacy at all: door locks, tall fences, guards, reinforced
cockpit doors on airplanes. And second, privacy and security are fundamentally aligned.
When we have no privacy, we feel exposed and vulnerable; we feel less secure. Similarly,
if our personal spaces and records are not secure, we have less privacy. The Fourth
Amendment of the US Constitution talks about “the right of the people to be *secure*
in their persons, houses, papers, and effects” (italics mine). Its authors recognized
that privacy is fundamental to the security of the individual.
Framing the conversation as trading security for privacy leads to lopsided evaluations.
Often, the trade-off is presented in terms of monetary cost: “How much would you pay
for privacy?” or “How much would you pay for security?” But that’s a false trade-off,
too. The costs of insecurity are real and visceral, even in the abstract; the costs
of privacy loss are nebulous in the abstract, and only become tangible when someone
is faced with their aftereffects. This is why we undervalue privacy when we have it,
and only recognize its true value when we don’t. This is also why we often hear that
no one wants to pay for privacy and that therefore security trumps privacy absolutely.
When the security versus privacy trade-off is framed as a life-and-death choice, all
rational debate ends. How can anyone talk about privacy when lives are at stake? People
who are scared will more readily sacrifice privacy in order to feel safer. This explains
why the US government was given such free rein to conduct mass surveillance after
9/11. The government basically said that we all had to give up our privacy in exchange
for security; most of us didn’t know better, and thus accepted the Faustian bargain.
The problem is that the entire weight of insecurity is compared with the incremental
invasion of privacy. US courts do this a lot, saying things on the order of, “We agree
that there is a loss of privacy at stake in this or that government program, but the
risk of a nuclear bomb going off in New York is just too great.” That’s a sloppy characterization
of the trade-off. It’s not
the case that a nuclear detonation is impossible if we surveil, or inevitable if we
don’t. The probability is already very small, and the theoretical privacy-invading
security program being considered could only reduce that number very slightly. That’s
the trade-off that needs to be considered.
More generally, our goal shouldn’t be to find an acceptable trade-off between security
and privacy, because we can and should maintain both together.
SECURITY OVER SURVEILLANCE
Security and surveillance are conflicting design requirements. A system built for
security is harder to surveil. Conversely, a system built for easy surveillance is
harder to secure. A built-in surveillance capability in a system is insecure, because
we don’t know how to build a system that only permits surveillance by the *right*
sort of people. We saw this in Chapter 11.
We need to recognize that, to society as a whole, security is more critical than surveillance.
That is, we need to choose a secure information infrastructure that inhibits surveillance
instead of an insecure infrastructure that allows for easy surveillance.
The reasoning applies generally. Our infrastructure can be used for both good and
bad purposes. Bank robbers drive on highways, use electricity, shop at hardware stores,
and eat at all-night restaurants, just like honest people. Innocents and criminals
alike use cell phones, e-mail, and Dropbox. It rains on the just and the unjust.
Despite this, society continues to function, because the honest, positive, and beneficial
uses of our infrastructure far outweigh the dishonest, negative, and harmful ones.
The percentage of the drivers on our highways who are bank robbers is negligible,
as is the percentage of e-mail users who are criminals. It makes far more sense to
design all of these systems for the majority of us who need security from criminals,
telemarketers, and sometimes our own governments.
By prioritizing security, we would be protecting the world’s information flows—including
our own—from eavesdropping as well as more damaging attacks like theft and destruction.
We would protect our information flows from governments, non-state actors, and criminals.
We would be making the world safer overall.
Tor is an excellent example. It’s free open-source software that you can use to browse
anonymously on the Internet. First developed with funding from the US Naval Research
Laboratory and then from the State Department, it’s used by dissidents all over the
world to evade surveillance and censorship. Of course, it’s also used by criminals
for the same purpose. Tor’s developers are constantly updating the program to evade
the Chinese government’s attempts to ban it. We know that the NSA is continually trying
to break it, and—at least as of a 2007 NSA document disclosed by Snowden—has been
unsuccessful. We know that the FBI was hacking into computers in 2013 and 2014 because
it couldn’t break Tor. At the same time, we believe that individuals who work at both
the NSA and GCHQ are anonymously helping keep Tor secure. But this is the quandary:
Tor is either strong enough to protect the anonymity of both those we like and those
we don’t like, or it’s not strong enough to protect the anonymity of either.
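For readers curious what using Tor looks like in practice, here is a minimal sketch. It assumes a Tor client is already running locally on its default SOCKS port (9050) and that the Python requests library is installed with SOCKS support; it routes a single web request through the Tor network and asks the Tor Project's check page whether the request really arrived via Tor.

    # Route one HTTP request through a locally running Tor client.
    # Assumes Tor is listening on its default SOCKS port, 9050, and that
    # requests has SOCKS support (pip install "requests[socks]").
    import requests

    proxies = {
        # socks5h, not socks5: the "h" makes Tor resolve DNS names too,
        # so lookups don't leak to the local resolver
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    # The Tor Project's check page reports whether a request arrived via Tor.
    resp = requests.get("https://check.torproject.org/", proxies=proxies, timeout=60)
    print("Using Tor" if "Congratulations" in resp.text else "Not using Tor")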
Of course, there will never be a future in which no one spies. That’s naïve. Governments
have always spied, since the beginning of history; there are even a few spy stories
in the Old Testament. The question is which sort of world we want to move towards.
Do we want to reduce power imbalances by limiting government’s abilities to monitor,
censor, and control? Or do we allow governments to have increasingly more power over
us?
“Security over surveillance” isn’t an absolute rule, of course. There are times when
it’s necessary to design a system for protection from the minority of us who are dishonest.
Airplane security is an example of that. The number of terrorists flying on planes
is negligible compared with the number of nonterrorists, yet we design entire airports
around those few, because a failure of security on an airplane is catastrophically
more deadly than a terrorist bomb just about anywhere else. We don’t (yet) design
our entire society around terrorism prevention, though.
There are also times when we need to design appropriate surveillance into systems.
We want shipping services to be able to track packages in real time. We want first
responders to know where an emergency cell phone call is coming from. We don’t use
the word “surveillance” in these cases, of course; we use some less emotionally laden
term like “package tracking.”
The general principle here is that systems should be designed with the minimum surveillance
necessary for them to function, and where surveillance is
required they should gather the minimum necessary amount of information and retain
it for the shortest time possible.
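As an illustration of what that principle can look like in code, here is a toy sketch of an emergency-call log that collects only a coarse location, with no caller identity, and deletes records after a fixed retention window. The schema, field names, and 24-hour window are assumptions made for the example, not any real system's design.

    # Data minimization in miniature: store only what the function needs,
    # and expire it on a short, fixed schedule. Names and the 24-hour
    # retention window are illustrative assumptions.
    import sqlite3
    import time

    RETENTION_SECONDS = 24 * 3600  # assumed window: long enough to dispatch help

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE calls (created REAL, cell_tower_id TEXT)")

    def record_call(cell_tower_id: str) -> None:
        # Keep only the tower the call came from: enough to route responders,
        # with no phone number, no name, and no precise location track.
        db.execute("INSERT INTO calls VALUES (?, ?)", (time.time(), cell_tower_id))

    def purge_expired() -> None:
        # Run periodically; anything older than the window is gone for good.
        cutoff = time.time() - RETENTION_SECONDS
        db.execute("DELETE FROM calls WHERE created < ?", (cutoff,))

    record_call("tower-0417")
    purge_expired()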
TRANSPARENCY
Transparency is vital to any open and free society. Open government laws and freedom
of information laws let citizens know what the government is doing, and enable them
to carry out their democratic duty to oversee its activities. Corporate disclosure
laws perform similar functions in the private sphere. Of course, both corporations
and governments have some need for secrecy, but the more they can be open, the more
we can knowledgeably decide whether to trust them. Right now in the US, we have strong
open government and freedom of information laws, but far too much information is exempted
from them.
For personal data, transparency is pretty straightforward: people should be entitled
to know what data is being collected about them, what data is being archived about
them, and how data about them is being used—and by whom. And in a world that combines
an international Internet with country-specific laws about surveillance and control,
we need to know where data about us is being stored. We are much more likely to be
comfortable with surveillance at any level if we know these things. Privacy policies
should provide this information, instead of being so long and deliberately obfuscating
that they shed little light.
We also need transparency in the algorithms that judge us on the basis of our data,
either by publishing the code or by explaining how they work. Right now, we cannot
judge the fairness of TSA algorithms that select some of us for “special screening.”
Nor can we judge the IRS’s algorithms that select some of us for auditing. It’s the
same with search engine algorithms that determine what Internet pages we see, predictive
policing algorithms that decide whom to bring in for questioning and what neighborhoods
to patrol, or credit score algorithms that determine who gets a mortgage. Some of
this secrecy is necessary so people don’t figure out how to game the system, but much
of it is not. The EU Data Protection Directive already requires disclosure of much
of this information.