Surveillance or Security? The Risks Posed by New Wiretapping Technologies
Susan Landau
With fifteen years of a public Internet behind us, we have experience that the DARPA engineers did not.20 NSF's Future Internet Design, a
program consisting of a variety of research projects, is focusing on everything from the underlying infrastructure of the network to architectures
for the applications. It is examining social issues, seeking to address economic trade-offs, privacy, and security concerns up front in system design.
Some researchers are also considering what an Internet for the "next three
billion" might look like, examining access, literacy, and diversity for those
not connected to the network.
Testing some of the new ideas may be difficult. It is possible to use an
experimental test bed, but then one lacks real data. Using a real system to
test experimental protocols is problematic, since network administrators
are loath to disrupt their systems.
One piece of research is an innovative platform, OpenFlow. Modern
switches and routers have flow tables: sets of rules determining where traffic
should be routed, used to implement firewalls, NATs, and so on. OpenFlow
virtualizes this, allowing the possibility of running multiple switching
systems over a single physical switch. A network administrator can partition traffic going through flow tables into production and research traffic
(by default, traffic is production unless otherwise noted). Applying the
experimental rules to the research traffic allows new routing techniques to
be tested: for example, routes that minimize the number of
hops or that control traffic coming to or going from a particular node.21
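The flow-table idea can be sketched in a few lines. This is a hypothetical illustration in Python, not the OpenFlow API: the match fields, the VLAN tag used to mark research traffic, and the port names are all assumptions made for the example.

```python
# Hypothetical sketch of an OpenFlow-style flow table: each rule matches
# on packet header fields and maps the flow to an action. Research
# traffic is marked (here, by an illustrative VLAN tag) and steered to
# an experimental controller; everything else stays production.

PRODUCTION, RESEARCH = "production", "research"

# Each rule: (match criteria, action). Fields absent from a match
# are wildcards.
flow_table = [
    ({"vlan": 42}, {"slice": RESEARCH, "forward": "experimental-controller"}),
    ({"dst_port": 80}, {"slice": PRODUCTION, "forward": "port-1"}),
]

# Default rule: traffic is production unless otherwise noted.
default_action = {"slice": PRODUCTION, "forward": "port-1"}

def lookup(packet):
    """Return the action of the first rule whose fields all match."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return default_action

print(lookup({"vlan": 42, "dst_port": 5060})["slice"])  # research
print(lookup({"dst_port": 80})["slice"])                # production
```

Partitioning falls out of the table itself: swapping the rules for the research slice changes how that traffic is routed without touching the production rules.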
One early application of OpenFlow, Ethane, allowed the establishment
of a "controller" that approves the communication flow and picks the
path. Ethane provided security by enforcing rules such as "VoIP phones are not allowed to communicate with laptops."22 Ethane was not
intended for the general Internet, but rather for enterprise networks, where
such control is quite appropriate. Experimental work indicated that Ethane
could be successful in controlling networks with ten thousand machines,
the size of a small enterprise.23
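A controller of the kind Ethane introduced can be sketched as follows. This is an illustrative Python sketch, not Ethane's actual policy language; the IP addresses, device classes, and the single deny rule are invented for the example.

```python
# Hypothetical sketch of an Ethane-style controller: every new flow is
# submitted to a central controller, which checks it against high-level
# policy before approving it and picking a path.

device_class = {
    "10.0.0.5": "voip-phone",
    "10.0.1.7": "laptop",
    "10.0.2.9": "server",
}

# Pairs of device classes that may not communicate, in either direction.
deny_rules = {("voip-phone", "laptop")}

def approve_flow(src_ip, dst_ip):
    """Controller decision: allow the flow unless a deny rule matches."""
    pair = (device_class.get(src_ip, "unknown"),
            device_class.get(dst_ip, "unknown"))
    if pair in deny_rules or pair[::-1] in deny_rules:
        return False  # e.g., "VoIP phones may not talk to laptops"
    return True

print(approve_flow("10.0.0.5", "10.0.1.7"))  # False: phone -> laptop denied
print(approve_flow("10.0.2.9", "10.0.1.7"))  # True: server -> laptop allowed
```

The point of the design is that policy lives in one place, the controller, rather than being scattered across switch configurations.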
Researchers moved on to networks with software-defined rules controlling routing, access control, energy management, and other features. The
idea behind this is that packet forwarders should be simple, commoditized
pieces of hardware controlled by software. These Software Defined Networks (SDNs) have already been deployed in data centers; the owners
appreciate the reduced costs (SDNs enable the use of less expensive hardware) and the flexibility afforded by implementing features in software.
The intent is to employ SDNs everywhere: data centers, enterprises, college
campuses, even homes.
That different portions of the network should enforce differing security
rules is not a change from the current Internet. What is new is the clarity
of the proposed security model. This simplicity makes it easier to implement than the ad hoc security protections that have grown up for the
public Internet. Underlying SDN is a security model that private networks
should connect with public networks at very clear junctures. By segregating
insecure end hosts (a problem that is unlikely to disappear24) within the
private network, the security model isolates problems, a very useful
outcome. SDN, which enforces security at the switch level, is one of
many possible security models; there is also work on controls at
the information-dissemination level.
The original Internet communication model let any device communicate
with any other. But in a world in which SCADA and DoD systems are connected to the network, that makes no sense. Some web servers, such as Amazon.com
or whitehouse.gov, should be fully accessible to the public Internet.
These will be open to attack, since anyone can connect to them. As David
Clark has pointed out, "They should not be used for any task (such as hosting
confidential information) that makes them valuable as a platform (e.g., for
exfiltration) if they are infiltrated. Machines that host valuable information
should not also be used for any roles where they need to connect to
unknown persons. Any machine used for this sort of task should be embedded in a strong identity/authentication/authorization architecture."25
The idea that the sensitivity of the data on a device should be coupled
to the broadness of who may access that device is certainly not new (it
is, for example, common practice for members of the national-security
community). Such a model of communication requires a clear partitioning of the "internetwork": some portions will carefully screen all packets in and
out of their piece, requiring attribution and strong authentication, while
other portions of the network will be open and broadly available, and others
will be somewhere in the middle on a sliding scale. To achieve such a model
requires many changes to the way we design systems.26 It means applying
identity mechanisms and attribution more broadly than has been done
heretofore. What it does not mean is deploying identity and attribution
across the network. For the same reason that websites such as Amazon.com
and whitehouse.gov should be open to anyone, there is no reason for
such sites to seek identity or attribution information from those who access
them.
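The sliding scale described above can be sketched as a table of zones, each with its own admission requirements. This is a hypothetical Python illustration; the zone names and requirement flags are assumptions, not part of any deployed system.

```python
# Hypothetical sketch of a partitioned internetwork: each zone sits
# somewhere on a sliding scale, from fully open to requiring both
# strong authentication and attribution of all packets.

zones = {
    "public-web": {"authenticate": False, "attribute": False},  # open to anyone
    "enterprise": {"authenticate": True,  "attribute": False},  # middle of the scale
    "scada":      {"authenticate": True,  "attribute": True},   # screens everything
}

def admit(zone, packet):
    """Admit a packet into a zone only if it meets the zone's requirements."""
    req = zones[zone]
    if req["authenticate"] and not packet.get("authenticated"):
        return False
    if req["attribute"] and not packet.get("attributed"):
        return False
    return True

print(admit("public-web", {}))                  # True: anyone may connect
print(admit("scada", {"authenticated": True}))  # False: still lacks attribution
```

Note that the open zone imposes no identity check at all, matching the argument that sites like Amazon.com and whitehouse.gov have no reason to demand attribution from visitors.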
Partitioning the network (a situation that already exists with corporate
and government firewalls that allow some types of access but not others)
interposes intermediaries in a communication path, either to ensure identity, authentication, or authorization or to examine the communication
itself to ensure it is not carrying damaging malware. Interposing such
intermediaries may itself create problems. The presence of such intermediaries may simplify attacking the communications themselves.27 As we
saw with DPI originally implemented for CALEA compliance, the same DPI
technology can be used for purposes other than originally intended. While
it will not be simple to implement such partitioning and doing so will
create problems, it is possible to have a future Internet that preserves the
ability to innovate and communicate broadly while nonetheless enabling
the network to function more securely.
Technical solutions to the wiretapping conundrum are less clear. We
have learned some lessons that it would be useful to apply. After the
passage of the Protect America Act, a number of researchers warned that
the warrantless collection at the AT&T San Francisco switching office risked
exploitation by opponents as well as overcollection, and that new vulnerabilities would be created by the collection of the CDR information.28
These predictions came to pass-though not necessarily there and then.
The Greek wiretapping case showed that wiretapping capabilities built into
communications infrastructures can and will be exploited by opponents.
The United States' warrantless wiretapping program resulted in overcollection.29 When communications service-provider personnel were placed at an FBI
location, the lack of genuine two-organization control on surveillance
resulted in the service-provider personnel giving the FBI customer data
without legal oversight.30 In short, making wiretapping easy from a technical point of view makes wiretapping without proper legal authorization
easy. We know, from the efforts carriers are making to use DPI for business purposes, that security services, such as examining packets, should be
performed for the purpose of security only. Finally, we have learned
through the case of Vodafone Greece and the problems with the Cisco
architecture that it is easy for wiretapping systems to be subverted, leading
to exploitation by unauthorized parties.31
As security researchers have understood for over a hundred years, security mechanisms need public vetting. This may seem odd to nonprofessionals, but trusted insiders and other nation-states will be in positions to
find the flaws in interception systems. Public protection means that the
interception system must be as well vetted as possible, and that means
public visibility of the system. Cisco should be lauded for having allowed
their system to be open to public view.32 One suggestion for improving the
situation is that the sale of legally authorized interception systems for
domestic use should be predicated on the systems being made public to
enable examination. Note that such a requirement will have no effect on
the NSA, which fields its own systems.
The combination of the increasing use of mobile communications, the
increasing complexity of communications infrastructure, and the difficulty
of correctly building complex systems makes it very hard to get communications interception 100 percent right. But getting these systems right is
crucial. Theft of U.S. corporate and manufacturing secrets as a result of an
interception system subverted by a foreign nation is too high a price to
pay for catching one more drug dealer or even for stopping a terrorist
intent on setting off bombs at a shopping mall.33 This is not a business
argument; this is a national-security issue. For this reason, communications interception systems should not be deployed domestically unless
they have been shown to be secure.
11.3 Economics: A Necessary Part of the Solution
Economic costs, or the lack of them, play an important role in cyber insecurity. In computer systems, the network effect (the fact that the value of a
system increases as more people use it34) and software-compatibility issues mean that being first to market is remarkably important. Since
economic losses due to poor security often accrue not to the system vendor
or operator, but elsewhere, this puts security on the back burner. In this
sense, security is a system externality; those who develop and purchase the
system are not affected by the security weaknesses. Rather it is a third party
that has not been involved in the economic transaction for the system
that suffers from any problems.
That creates an apparent pressure point for cybersecurity. Whoever
bears the risk strongly affects whether the system is configured securely.
Ross Anderson provides a compelling example of this concerning ATM
fraud:
In the United States, the first "phantom withdrawal" case was decided in favor of
the customer, leading to Regulation E and its limits on customer liability for unauthorized transactions. In the United Kingdom, initial cases went the other way;
banks got away for years with claiming "our systems are secure so if your PIN was
used it's your fault." This created the obvious moral hazard, leading banks to be
careless about ATM security, and ultimately [there was] an avalanche of ATM fraud
in 1992-1994.35
When risk is placed appropriately, it provides incentive to improve
security. As University of California economics professor Hal Varian has
written, "Liability should be assigned to the parties best able to manage
the risk."36
However, the liability "solution" to the security problem is, in fact, quite
difficult to implement. To hold vendors and operators responsible for
inflicted harms, one has to show that security breaches came from product
design or system implementation and that the system departed from an
appropriate standard of care.37 As one might expect, the notion of imposing liability for security breaches is controversial. Many worry that it would
harm innovation. A difficult issue to resolve is determining metrics by
which to measure whether a system design and implementation has been
appropriately secured. These battles have been raging for a number of years
now without resolution.38
In the case at hand, the issue is not of securing a general-purpose operating system, but of securing a significantly simpler system designed for
legally authorized interception. Thus the arguments against liability (threats
to innovation, inability to develop metrics for security) are not really
at issue.
Currently wiretap law provides both criminal and civil liability against
interception. (There are exemptions for legally authorized interceptions in
criminal and foreign-intelligence investigations, if the interception is being
performed by the service provider for quality control, if one of the parties
to the call permits the interception,39 or in the case of trespass, if the owner
of the computer being used in the trespass authorizes the interceptions of
the trespasser's communications being made to, through, or from the
owner's machine.40) It would be appropriate to make communications
providers liable if due care has not been exercised in the development or
implementation of an interception system that has been misused and allowed eavesdropping on a user. This could force more rigorous security
evaluations of proposed systems for communications interception. It could
help keep badly designed systems out of use. The DCS3000 architecture
that still included MD5, a hash function with known insecurities, would
presumably not pass muster. A rigorous public vetting might have uncovered the security flaws in the 2004 Cisco architectural design before its
deployment.
Under current practice, it is the purchasers and developers of interception systems who make the decisions about security. When an interception
system is subverted, the person who has been wiretapped pays the price
for the system's problems. There are problems with this liability proposal,
the greatest of which is how someone would know that their communications had been intercepted as a result of a faulty implementation of a legal
interception system. The point, however, is that transferring liability to those in a position to manage the risk helps incentivize securing interception systems.
Such transfer of risk would not stop all types of warrantless wiretapping.
In particular, the PSP ran at the request of the U.S. government. It is likely
that the wiretapping done under the program would not fall under an
"unsecured" interception category. However, a transfer of risk that would
put communications providers under legal responsibility for ensuring that
interception systems are secure seems a rather obvious and necessary
requirement for the communications systems on which society relies.