Surveillance or Security?: The Risks Posed by New Wiretapping Technologies
Author: Susan Landau
Different sets of vulnerabilities are exploited in this type of attack: the
website has vulnerabilities allowing malware to be installed in its HTML
code, and this malware installs other software on a vulnerable network
host. Such a dual-level attack reflects a different level of sophistication than
previous attacks on user machines.
One might ask how users' machines came to be so vulnerable to subversion. The answer starts with the fact that decades ago when personal
computers were first designed, they were standalone machines. The model
was that the user should have full access to all functions on the machine.
This has distinct advantages; in particular, it allows the user to do anything
she wants on the machine. But such a model becomes problematic when
the machine is networked. If the user can do anything she wants on her
machine, then if her computer is not properly secured, anyone else having
access to her machine can also do anything they want to it. Add to this
situation the fact that securing computers is not easy. Computer operating systems are highly complex; it is very difficult to completely eliminate mistakes in their millions of lines of code, but it is possible to do better than present systems. NASA's shuttle software, for example, has a rather remarkable record.66
The business model for high tech conspires against security. Being first
to market is extremely important, and security concerns have often been
relegated to a backwater to be fixed in "version 2.0." Often version 2.0
never comes. In any case, by the time it does, it is often too late: too many machines with poor security paradigms have already been purchased and deployed.
One might imagine that somewhere on the network it would be possible
to examine packets before they arrive at a vulnerable machine and thus
stop attacks before they start. Firewalls are a step in this direction. A firewall
is a device interposed between an internal network (e.g., home, university,
corporate, etc.) and the rest of the network; it filters traffic based on a set
of rules defined by the user. Its job can be to prevent traffic to or from a
certain IP address (though this can be defeated by IP address spoofing) or
to prevent certain applications from transferring data.67 Some firewalls
block the file transfer protocol, while others have been known to block
such applications as YouTube.68 Firewalls are useful in stopping the spread
of known worms and viruses, but are less useful in preventing unknown
bad programs from entering a user's machine.
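
To make the filtering idea concrete, here is a minimal sketch, in Python, of rule-based packet filtering in the spirit of a firewall. The addresses, the port numbers, and the rule set are invented for the example and do not describe any particular firewall product or real policy.

# Illustrative sketch only: rule-based packet filtering in the spirit of a
# firewall. Addresses, ports, and rules are invented for this example.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int   # stands in for the application (e.g., 21 = FTP, 443 = HTTPS)

# Rules are checked in order; the first rule that matches decides the outcome.
RULES = [
    ("block", {"src_ip": "203.0.113.7"}),  # traffic from a particular IP address
    ("block", {"dst_port": 21}),           # the file transfer protocol
    ("allow", {}),                         # default: pass everything else
]

def filter_packet(pkt: Packet) -> str:
    for action, conditions in RULES:
        if all(getattr(pkt, field) == value for field, value in conditions.items()):
            return action
    return "allow"

# Note that the first rule trusts the source address the packet carries,
# which is exactly what IP address spoofing defeats.
print(filter_packet(Packet("198.51.100.2", "192.0.2.10", 21)))   # block
print(filter_packet(Packet("203.0.113.7", "192.0.2.10", 443)))   # block
print(filter_packet(Packet("198.51.100.2", "192.0.2.10", 443)))  # allow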
Firewalls interpose a censor between the user and the communication
and break the Internet communications model that allows any endpoint
to send a message to any other without first having an "introduction."
Despite that, such censors are being deployed. One is the "Great Firewall
of China," which examines IP addresses and blocks incoming and outgoing
packets to China on that basis.69 Although the censorship is not perfect,
it is sufficient to disrupt human rights activities. Such censorship has also
been documented closer to home. In 2005 Canada's second-largest telecommunications company blocked its subscribers, and the smaller ISPs that depended on its network, from reaching the site of the Telecommunications Workers Union.70
It may well be appropriate to use intrusive packet inspection or censorship to prevent network attacks such as DDoS, yet clearly the potential for
abuse using such monitoring is high. I return to this issue later.
3.6 The Security Problems Are Inherent
The list above of Internet security holes is not exhaustive (indeed, the nature of the problem is that new vulnerabilities continue to be uncovered), but the description captures the essence of the problem. Security issues are inherent in any fully open packet-switching network with smart
hosts. Whenever a data-manipulating device is sufficiently multipurpose
so as to be programmable (in other words, to be a computer), such a device
will have flaws and be a security risk. And whenever a computer connects
to a network, the machine will be at risk from other computers on the
network and the whole network itself will be at risk.
Unless the endpoint hosts are fully secured, they leave the network in
a highly vulnerable state. The fact is, however, that the security of users'
machines is in a terrible state; most machines are unpatched and open to
attack. We are in a situation in which the very strength of the Internet, a network connecting smart endpoints, creates its weakness. The network
hosts can be compromised, with the Internet providing the delivery system
for compromise.
Here the Internet architecture comes into play. TCP/IP is about "conversations." You can secure the channels over which the TCP/IP communications occur, but the layered nature of the Internet means that information within packets does not leak into other layers of the network.
Van Jacobson described it this way: "Channels are secured, but not data,
so there's no way to know if what you get is complete, consistent, or even
what you asked for."71 There is no way for the network to know what the
content looks like until it reaches the endpoint, a user's computer.
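
Jacobson's observation can be illustrated with a short sketch: even when the channel is secured, only the endpoint can check that what arrived is complete and is what was asked for, for instance by comparing a cryptographic hash of the received bytes against a digest published by the content's source. The digest value below is a placeholder, not a real published value.

# Illustrative sketch only: the transport may be encrypted, but it is the
# endpoint that must verify the content it received.

import hashlib

EXPECTED_SHA256 = "0" * 64   # placeholder for a digest published by the content's source

def content_is_what_was_asked_for(data: bytes, expected_hex: str) -> bool:
    """Return True only if the received bytes hash to the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

received = b"...bytes that arrived over a secured channel..."
if not content_is_what_was_asked_for(received, EXPECTED_SHA256):
    print("The channel was secure, but the data may be incomplete or not what was requested.")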
Into this mix comes a large population with diverse interests (including
developing many applications that the original Internet designers had
never considered). One gets the enormous burst of creativity that has
produced the Internet since the mid-1990s. This creative energy is what
Harvard law professor Jonathan Zittrain terms the "generative Internet":
the network's ability to produce unprompted change because of its "large,
varied, and uncoordinated audience."72 The generative Internet provides a
large panoply of services, from ecommerce and ecollaboration to social
networks. One does not necessarily obtain secure applications.
The peer-to-peer nature of the network further complicates control.
Many users are familiar with the client/server model, in which a "client"
initiates an action, such as accessing a web page or requesting a file,
and the "server" provides that service. Internet architecture supports a
peer-to-peer network, in which nodes function as both clients and servers
to other nodes (in other words, as peers). The peer-to-peer model relies on
the robust connectivity of the Internet and is extremely efficient for file
distribution, whether of illegally copied copyrighted music or of legal downloads (such as music under Creative Commons licenses)73 or some open-source operating systems.
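
The difference between the two models can be sketched in a few lines of Python: a peer is simply a node that runs both halves of the exchange, serving requests from other peers while issuing its own. The port number and the messages below are arbitrary choices for the example, not any real protocol.

# Illustrative sketch only: a "peer" is a node that acts as both server and
# client. The port number and the messages are arbitrary example values.

import socket
import threading
import time

PORT = 9000   # arbitrary example port on the local machine

def serve(port: int) -> None:
    """Server half: answer a single request from another peer."""
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"peer reply to: " + request)

def ask(port: int, message: bytes) -> bytes:
    """Client half: send a request to another peer and return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(message)
        return conn.recv(1024)

# Run the server half in the background, then use the client half against it.
threading.Thread(target=serve, args=(PORT,), daemon=True).start()
time.sleep(0.5)   # give the listening socket a moment to come up
print(ask(PORT, b"hello from the client half"))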
Skype is an example of a peer-to-peer program that takes security seriously. The program encrypts all calls end to end. This prevents computers
routing the call from eavesdropping on the conversation as well as preventing the call itself from corrupting any machines through which it travels.74
Such careful attention to security is not the norm for peer-to-peer systems.
Underlying peer-to-peer systems is the idea that the user is accessing
useful information from an unknown source. While the server is also
something of an unknown entity in a client/server interaction, it is generally the case that servers are better protected than random nodes on the
network. Because the average user does not know and does not check what
is being downloaded onto her system, it is entirely possible that a malicious
node on the peer-to-peer system has included a virus among its shared
files. And because the average user does not know how peer-to-peer applications work, and does not know to protect her own machine, many of
the user's files can be "shared" while on a peer-to-peer connection. In 2007,
a U.S. House of Representatives committee examined possible consequences of using a peer-to-peer file-sharing program:
We used the most popular P2P [peer-to-peer] program, LimeWire, and ran a series
of basic searches. What we found was astonishing: personal bank records and tax
forms, attorney/client communications, the corporate strategies of Fortune 500
companies, confidential corporate accounting documents, internal documents from
political campaigns, government emergency response plans, and even military operations orders.75
The risks created by peer-to-peer file sharing have raised concerns in
Congress. The circulation of copyrighted material via peer-to-peer networks
has induced some to propose controls to eliminate P2P file sharing. While
the intent is that such schemes apply only to application-layer peer-to-peer
networking (rather than IP layer routing), experience indicates that such
legislation would sow confusion in the networking world. In any case,
laws restricting P2P file sharing undoubtedly would be disruptive to the
development of many beneficial P2P applications. Such legislation is rarely
proposed by anyone who understands why the network works so very well.
3.7 Attribution and Authentication
One idea that often seems attractive and that has periodically been proposed is that all Internet communications include attribution. Packets
would authenticate themselves before being received by an endpoint; in
some proposals, network users would also authenticate themselves before
using the network.76 While this would not preclude anonymous network communication,77 it would certainly make such forms of communication
more difficult.
In fact, attribution is quite complex, and several problems are being
mixed together. We might want to know the IP address of the host that
initiated the DDoS attack, identify an originator's email address for attacks
carried out by email (e.g., phishing), establish the physical location of the
source of an attack, or identify the individual who launched the attack.78
These differing needs argue for different types of attribution: machine,
human, digital identity.
Packet-level attribution identifies the machine, but not the user. While
packet attribution might specify which machine is launching a DDoS
attack, it does nothing to establish who was actually responsible for the
DDoS attack being on the machine-or even who was using the machine
at the time it launched the attack. Application-level authentication could
identify the user, but not the machine. It can do so only if the form of
user authentication can be trusted. We are far from such a situation. Username/password is easily spoofed. Even systems requiring such biometrics
as fingerprints can be scammed; in 2002 researchers showed that a jelly
print of a fingerprint can fool a fingerprint reader.79 So if someone with
access to a good copy of your fingerprint also had your laptop, they would
be able to open files that are protected by your fingerprint. The real complication for attribution is that the type of attribution varies with the type
of entity for which we are seeking the attribution.
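
The distinction can be made concrete with a toy sketch (not working network code): the source address in a packet header is simply a field that the sending machine fills in, while a login names a digital identity whose strength is only that of the credential behind it. All names and addresses below are hypothetical.

# Toy sketch only: it contrasts what packet-level and application-level
# attribution actually name. The addresses and the username are hypothetical.

from dataclasses import dataclass

@dataclass
class IPHeader:
    src: str   # asserted by the sending host; the network does not verify it
    dst: str

@dataclass
class Login:
    # Names a digital identity, not a machine, and only to the extent that the
    # credential behind it (password, fingerprint, ...) can be trusted.
    username: str

# A compromised or malicious host can put any value in the source field, so a
# packet "attributed" to 192.0.2.10 need not have come from that machine.
packet = IPHeader(src="192.0.2.10", dst="198.51.100.5")

# A login says who claims to be using an application, but nothing about which
# machine the traffic came from or who was at the keyboard.
session = Login(username="alice")

print(packet, session)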
A further complication is that in many cases, a level of anonymity for
the network host is more appropriate. This may be for privacy or security
reasons, or it may simply be an artifact of network design. For example,
while it may be possible to identify the machine initiating the connection,
it is very difficult to identify the machine being accessed. That is because when many users want to access a site simultaneously, the content is often transparently "mirrored" on other sites.80 The user who requested a
connection is typically unaware that this substitution has occurred and that
her machine is receiving packets from a different IP address. Thus a system
design that has a "whitelist" of IP addresses may refuse content from a site
that has the very content it is seeking, a confused response to a complex
networking solution. A host that requires attribution before it will accept
packets and that has certain expectations as to the source of the packets
must take into account the myriad legitimate disruptions that might occur;
this is exceedingly hard to do, and especially difficult to do dynamically.
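
A short sketch shows the failure mode: a whitelist keyed to the IP address the user expects will reject the same content when a transparent mirror at a different address answers the request. The addresses are illustrative only.

# Illustrative sketch only: a whitelist keyed to an expected IP address rejects
# a transparent mirror serving identical content. Addresses are invented.

ALLOWED_SOURCES = {"192.0.2.10"}   # the address the whitelist expects the content from

def accept_packets_from(source_ip: str) -> bool:
    return source_ip in ALLOWED_SOURCES

print(accept_packets_from("192.0.2.10"))     # True: the address the user asked for
print(accept_packets_from("198.51.100.20"))  # False: a mirror carrying the very
                                             # content the user was seeking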
Attribution was a natural fit for the telephone network, which originally
did not attempt to establish connections between two entities that had not yet met (telemarketing was not big in the 1890s). But attribution is
not a natural fit for the Internet, which is many networks at once. The
current Internet is open: users can link to the New Zealand government,
to Amazon.co.uk, to Google, and even, apparently, to the U.S. electric
power grid.81 While there is every reason for the subnetwork controlling the power grid to prevent open connections (it would be appropriate for that subnetwork to seek attribution and strong forms of authentication for every packet that transits into it), there is little reason why the New Zealand government, Amazon.co.uk, or
Google should seek to limit connections by insisting on packet source
attribution or user authentication first.
3.8 Efforts to Secure Communications
Securing communications can take several different tacks. There are efforts
to protect against electrical, mechanical, or acoustic emissions that may
reveal information about the communication. Computers leak electromagnetic radiation, which reveals information about the data being processed.
This has been known for quite some time, and systems with sensitive data
have metal shields to prevent such leaks. The U.S. government program to
provide such emission security is called TEMPEST.82
The entire communications link may be encrypted. This prevents an
interceptor from gaining any information on communications transiting
the link and was done on a number of AT&T trunk lines in the 1970s (New
York, Washington, and San Francisco) in order to protect against Soviet
interception.83 The encryption is between switches, which permits routing
and interception at the switches.
Individual messages may be encrypted end to end, from the sender to
the receiver. This is what Skype does. It has the advantage of completely
protecting the communication contents from any interception but, unlike
link encryption, it has the disadvantage that addressing information is
visible to interceptors. Secure web browsing using https is another example
of communications using end-to-end encryption.
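
As a rough illustration of end-to-end encryption, the sketch below uses the Fernet construction from the third-party Python "cryptography" package; it is not Skype's or https's actual protocol, and it ignores the key-exchange problem entirely. The point it shows is that only the two endpoints, which share the key, can read the message; machines along the path forward ciphertext, although the addressing information needed for delivery remains visible to them.

# Illustrative sketch only: symmetric end-to-end encryption using the Fernet
# construction from the third-party "cryptography" package. It ignores key
# exchange and is not the protocol Skype or https actually uses.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in an end-to-end design, only the two endpoints hold this
sender = Fernet(key)
receiver = Fernet(key)

ciphertext = sender.encrypt(b"hello across the open Internet")

# Machines routing the message see only ciphertext (plus the addressing
# information needed to deliver it); the receiving endpoint recovers the text.
print(ciphertext)
print(receiver.decrypt(ciphertext))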