Cyber War: The Next Threat to National Security and What to Do About It

by Richard A. Clarke and Robert K. Knake

The 2003 blackout lasted a few long hours for most people, but even without anyone trying to prolong the effect it lasted four days in some places. In Auckland, New Zealand, in 1998, damage from overloaded power lines triggered a blackout that kept the city in the dark for five weeks. If a control system sends too much power down a high-tension line, the line itself can be destroyed and can start a fire. In the process, the surge of power can overwhelm home and office surge protectors and fry electronic devices, from computers to televisions to refrigerators, as happened recently in my rural county during a lightning storm.

The best example, however, of how computer commands can cause things to destroy themselves may be electric generators. Generators make electricity by spinning, and the speed at which they spin determines the frequency of the power they produce, measured in cycles per second, in units called Hertz. In the United States and Canada, the generators on most subgrids spin in step to produce power at 60 Hertz. When a generator is started, it is kept off the grid until it gets up to 60 Hz. If it is connected to the grid at another speed, or if its speed changes very much while on the grid, the power from all of the other generators on the grid spinning at 60 Hz will flow into the slower generator, possibly ripping off its turbine blades.
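To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of synchronization check a protective relay performs before closing the breaker that connects a generator to the grid. The tolerances and function names are invented for illustration; they are not drawn from any real control system.

```python
# Illustrative sketch only: the check made before "paralleling" a
# generator with the grid. Closing the breaker out of sync lets the
# rest of the grid drive current into the machine, the failure mode
# described above.

GRID_FREQUENCY_HZ = 60.0
MAX_FREQUENCY_ERROR_HZ = 0.1    # assumed tolerance, for illustration
MAX_PHASE_ERROR_DEGREES = 10.0  # assumed tolerance, for illustration

def safe_to_close_breaker(generator_hz: float, phase_error_deg: float) -> bool:
    """Return True only if the generator is spinning in sync with the grid."""
    frequency_ok = abs(generator_hz - GRID_FREQUENCY_HZ) <= MAX_FREQUENCY_ERROR_HZ
    phase_ok = abs(phase_error_deg) <= MAX_PHASE_ERROR_DEGREES
    return frequency_ok and phase_ok

# A generator still spinning up at 59.2 Hz is refused; one at 60.02 Hz
# and nearly in phase is allowed onto the grid.
assert not safe_to_close_breaker(59.2, 3.0)
assert safe_to_close_breaker(60.02, 3.0)
```

An attacker does not need to break this physics; it is enough to subvert or bypass the software that enforces the check.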

To test whether a cyber warrior could destroy a generator, a federal government lab in Idaho set up a standard control network and hooked it up to a generator. In the experiment, code-named Aurora, the test’s hackers made it into the control network from the Internet and found the program that sends rotation speeds to the generator. Another keystroke and the generator could have severely damaged itself. Like so much else, the enormous generators that power the United States are manufactured when they are ordered, on the just-in-time delivery principle. They are not sitting around, waiting to be sold. If a big generator is badly damaged or destroyed, it is unlikely to be replaced for months.

Fortunately, the Federal Energy Regulatory Commission in 2008 finally required electric companies to adopt some specific cyber security measures and warned that it would fine companies for noncompliance up to one million dollars a day. No one has been fined yet. The companies have until sometime in 2010 to comply. Then the commission promises it will begin to inspect some facilities to determine whether they are compliant. Unfortunately, President Obama’s “Smart Grid” initiative will cause the electric grid to become even more wired, even more dependent upon computer network technology.

Just as a hand can reach out from cyberspace and destroy an electric transmission line or generator, computer commands can derail a train, send freight cars to the wrong place, or cause a gas pipeline to burst. Computer commands to a weapon system may cause it to malfunction or shut off. What a cyber warrior can do, then, is reach out from cyberspace and cause things to shut down or blow up: the power grid, a thousand other critical systems, an opponent’s weapons.

The design of the Internet, flaws in software and hardware, and the decision to allow critical machines to be controlled from cyberspace: together, these three things make cyber war possible. But why haven’t we fixed these problems by now?

CHAPTER FOUR
THE DEFENSE FAILS

Thus far we have seen evidence that there have been “trial runs” at cyber war, mostly using primitive denial of service attacks. We have seen how the United States, China, Russia, and others are investing heavily in cyber war units. We have imagined what the first few minutes of a devastating, full-scale cyber attack on the U.S. would look like. And we have walked through what it is about cyber technology and its uses that makes such a devastating attack possible.

Why hasn’t anybody done anything to fix these vulnerabilities? Why are we placing such emphasis on our ability to attack others, rather than giving priority to defending ourselves against such an attack? People have tried to create a cyber war defense for the U.S. Obviously they have not succeeded. In this chapter we’ll review what efforts have been made to defend against cyber war (and cyber crime, and cyber espionage) and see why they have been such an unmitigated failure. Strap yourself in: first we will move quickly through twenty years of U.S. efforts to do something about cyber security; then we will talk about why they haven’t worked.

INITIAL THOUGHTS AT THE PENTAGON

In the early 1990s the Pentagon began to worry about the vulnerability created by its reliance on new information systems to conduct warfare. In 1994 the “Joint Security Commission,” a panel set up by DoD and the intelligence community, focused on the new problem introduced by the spread of networked technology. The commission’s final report got three important concepts right:

  • “Information systems technology…is evolving at a faster rate than information systems security technology.”
  • “The security of information systems and networks [is] the major security challenge of this decade and possibly the next century and…there is insufficient awareness of the grave risks we face in this arena.”
  • The report also noted that the private sector’s increased dependence on information systems made the nation as a whole, not just the Pentagon, more vulnerable.

These three points are all true and even more relevant today. A prescient Time magazine article from 1995 demonstrates the point that cyber war and domestic vulnerabilities were subjects to which Washington was alerted fifteen years ago. We keep rediscovering this wheel. In the 1995 story, Colonel Mike Tanksley waxed poetic about how, in a future conflict with a lesser power, the United States would force our enemy to submit without our ever having fired a shot. Using hacker techniques that were then possible only in the movies, Colonel Tanksley described how America’s cyber warriors would take down the enemy’s phone system, destroy the routing system for the country’s rail lines, issue phony commands to the opposing military, and take over television and radio broadcasts to flood them with propaganda. In the fantasy scenario Tanksley described, these tactics would end the conflict before it even started. Time magazine reported that a logic bomb “would remain dormant in an enemy system until a predetermined time, when it would come to life and begin eating data. Such bombs could attack, for example, computers that run a nation’s air defense system or central bank.” The article told readers that the CIA had a “clandestine program that would insert booby-trapped computer chips into weapons systems that a foreign arms manufacturer might ship to a potentially hostile country—a technique called ‘chipping.’” A CIA source told the reporters how it was done, explaining, “You get into the arms manufacturer’s supply network, take the stuff off-line briefly, insert the bug, then let it go to the country…. When the weapons system goes into a hostile situation, everything about it seems to work, but the warhead doesn’t explode.”

The Time article was a remarkable piece of journalism that captured both complicated technical issues and the resulting policy problems long before most people in government understood anything about them. On the cover it asked: “The U.S. rushes to turn computers into tomorrow’s weapons of destruction. But how vulnerable is the homefront?” That question is as pertinent today as it was then, and, remarkably, the situation has changed very little. “An infowar arms race could be one the US would lose because it is already so vulnerable to such attacks,” the writers concluded. “Indeed,” they continued, “the cyber enhancements that the military is banking on for its conventional forces may be chinks in America’s armor.” So by the mid-1990s journalists could see that the Pentagon and the intelligence agencies were excited about the possibility of creating cyber war capabilities, but also that doing so would create a double-edged sword, one that could be used against us.

MARCHING INTO THE MARSH

Timothy McVeigh and Terry Nichols woke a lot of people up in 1995. Their inhumane attack in Oklahoma City, killing children at a day care center and civil servants at their desks, really got to Bill Clinton. He delivered an especially moving eulogy near the site of the attack. When he came back to the White House, I met with him, along with other White House staff. He was thinking conceptually, as he often did. Society was changing. A few people could have significant destructive power. People were blowing things up in the U.S., not just in the Middle East. What if the truck bomb had been aimed at the stock market, or the Capitol, or some building whose importance we didn’t even recognize? We were becoming a more technological nation, but in some ways that was also making us a more fragile nation. At the urging of Attorney General Janet Reno, Clinton appointed a commission to look at our vulnerability as a nation to attacks on important facilities.

Important facilities got translated into bureaucratese as “critical infrastructure,” a phrase that continues, and continues to confuse, today. The new panel got the moniker Presidential Commission on Critical Infrastructure Protection (PCCIP). Not surprisingly, most people referred to it instead by the name of its chairman, retired Air Force General Robert Marsh. The Marsh Commission was a full-time endeavor for a large panel and a professional staff. They held meetings throughout the country and talked to experts in numerous industries, universities, and government agencies. What they came back with in 1997 was not what we expected. Rather than focusing on right-wingers like McVeigh and Nichols or al Qaeda terrorists like those who had attacked the World Trade Center in 1993, Marsh sounded a loud alarm about the Internet. Noting what was then a recent trend, the Marsh Commission said that important functions, from rail to banking, from electricity to manufacturing, were all being connected to the Internet, and yet that network of networks was completely insecure. By hacking in from the Internet, an attacker could shut down or damage “critical infrastructure.”

Raising the prospect of nation-states creating “information war” attack units, Marsh called for a massive effort to protect the U.S. He identified the chief challenge as the role of the private sector, which owned most of what counted as “critical infrastructure.” Industries were wary of the government regulating them to promote cyber security. Instead, Marsh called for a “public-private partnership,” heightened awareness, sharing of information, and research into more secure designs.

I was disappointed, although in time I came to understand that General Marsh was right. As the senior White House official in charge of security and counterterrorism issues, I had hoped for a report that would have helped me get the funding and structure I needed to deal with al Qaeda and others. Instead, Marsh was talking about computers, which was not my job. My close friend Randy Beers, then Special Assistant to the President for Intelligence and the man who had been shepherding the Marsh Commission for the White House, walked next door to my office (with its twenty-foot-high ceiling and great view of the National Mall), plunked himself down in a chair, and announced, “You have to take over critical infrastructure. I can’t do it because of the Clipper chip.”

The Clipper chip had been a plan, developed in 1993 by NSA, in which the government would require anyone in the U.S. using encryption to install a chip that would let NSA listen in, with a court order. Privacy, civil liberties, and technology interest groups united in vehement opposition. For some reason, they did not trust that NSA would listen in only when it had a warrant (a suspicion that, under George W. Bush, later proved to be well founded). The Clipper chip was killed by 1996, but it had left a lot of distrust between the growing information technology (IT) industry and the U.S. intelligence community. Beers, being an intelligence guy, felt he could not gain the trust of the IT industry. So he dumped it in my lap. Moreover, he had already wired that decision with the National Security Advisor, Sandy Berger, who asked me to write a Presidential Decision Document stating our policy on the issue and putting me in charge of it.
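The idea behind the chip is easier to see in miniature. Below is a toy sketch, in Python, of the key-escrow concept: every message carries a copy of its session key encrypted under a key held by escrow agents, so that an authorized agency can recover the traffic. This is only a conceptual illustration using the third-party cryptography library; the real Clipper design used the classified Skipjack cipher and a “Law Enforcement Access Field,” and every name below is invented.

```python
# Toy sketch of key escrow, the concept behind the Clipper chip.
# Not the actual Clipper/Skipjack design; names are illustrative.
from cryptography.fernet import Fernet  # pip install cryptography

escrow_key = Fernet.generate_key()  # held by the government's escrow agents
escrow = Fernet(escrow_key)

def encrypt_with_escrow(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt a message and attach an escrow field.

    Returns (ciphertext, escrowed_session_key). Only a holder of the
    escrow key can recover the session key from the second field.
    """
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(plaintext)
    escrowed_session_key = escrow.encrypt(session_key)
    return ciphertext, escrowed_session_key

def lawful_intercept(ciphertext: bytes, escrowed_session_key: bytes) -> bytes:
    """What an agency with a court order, and the escrow key, can do."""
    session_key = escrow.decrypt(escrowed_session_key)
    return Fernet(session_key).decrypt(ciphertext)

ct, escrow_field = encrypt_with_escrow(b"meet at noon")
assert lawful_intercept(ct, escrow_field) == b"meet at noon"
```

The objection, of course, was never to the mathematics; it was to who held the escrow key.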

The result was a clear statement of the problem and our goal, but one embedded in a structure whose limitations prevented us from achieving it. The problem was that “because of our military strength, future enemies…may seek to harm us…with non-traditional attacks on our infrastructure and information systems…capable of significantly harming both our military power and our economy.” So far, so good. The goal was that “any interruptions or manipulations of critical functions must be brief, infrequent, manageable, geographically isolated, and minimally detrimental.” Pretty good stuff.

But how to do it? By the time every agency in government had watered the decision down, it read: “The incentives that the market provides are the first choice for addressing the problem of critical infrastructure protection…. [We will consider] regulation only in the event of a material failure of the market…[and even then] agencies will identify alternatives to direct regulation.” I got a new title in the Decision Document, but it would not fit on a business card: “National Coordinator for Security, Infrastructure Protection, and Counter-terrorism.” Little wonder the media used the term “czar”; no one could remember the real title. The Decision Document made clear, however, that the czar could not direct anyone to do anything. The Cabinet members had been adamant about that. No regulation and no decision-making authority meant little potential for results.

Nonetheless, we set off to work with the private sector and with government agencies. The more I worked on the issue, the more concerned I became. Marsh had not really been alarmist, I came to appreciate; he and his commission had actually understated the problem. Our work on the Y2K computer glitch (the fact that much older software stored years as two digits, could not roll over from 1999 to 2000, and might therefore simply freeze up) greatly added to my understanding of just how much everything was rapidly becoming dependent upon computer-controlled systems and networks connected in some way to the Internet. In the 2000 federal budget, I was able to add $2 billion for improved cyber security efforts, but it was a fraction of what was needed.
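The flaw itself was mundane. As a purely hypothetical illustration (the function names and the pivot value are invented), here is the shape of the two-digit-year bug and of the “windowing” fix that much of the remediation effort applied:

```python
# The classic Y2K bug in miniature: two-digit years assumed to mean 19xx.

def years_elapsed_buggy(start_yy: int, end_yy: int) -> int:
    """Pre-Y2K style arithmetic: every two-digit year is 19xx."""
    return (1900 + end_yy) - (1900 + start_yy)

# A loan written in 1999 ("99") and due in 2000 ("00"):
print(years_elapsed_buggy(99, 0))   # -99 years: time appears to run backward

def years_elapsed_windowed(start_yy: int, end_yy: int) -> int:
    """A common remediation: a pivot "window" maps 00-49 to 20xx."""
    def expand(yy: int) -> int:
        return 2000 + yy if yy < 50 else 1900 + yy
    return expand(end_yy) - expand(start_yy)

print(years_elapsed_windowed(99, 0))  # 1, as intended
```

Software that divided by such an interval, or used it to schedule a timer, could do far worse than print a wrong date, which is why “simply freeze up” was the realistic fear.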

By 2000, we had developed a National Plan for Information Systems Protection, but there was still no willingness in the government to attempt to regulate the industries that ran the vulnerable critical infrastructure. To highlight the ideological correctness of the decision to avoid regulation, I used the phrase “eschew regulation” in the decision document, mimicking Maoist rhetoric. (Mao had directed, “dig tunnels deeper, bury food everywhere, eschew hegemonism.”) No one saw the irony. Nor would the Cabinet departments even do enough to protect their own networks, as called for in the Presidential Directive. Thus, the plan was toothless. It did, however, make clear to industry and to the public what the stakes were. Bill Clinton’s cover letter left no doubt that the IT revolution had changed how the economy and national defense operated. From turning on the lights, to calling 911, to boarding an aircraft, we now relied upon computer-driven systems. A “concerted attack” on the computers of an important economic sector would “have catastrophic results.” This was not a theoretical potential; rather, “we know the threat is real.” Opponents that had relied on “bombs and bullets” could now use “a laptop…[as] a weapon capable of…enormous damage.”

I added in a cover letter of my own that “More than any other nation, America is dependent upon cyberspace.” Cyber attacks could “crash electric grids…transportation systems…financial institutions. We know other governments are developing that capability.” So were we, but I didn’t say that.

SIX FUNNY NAMES

During those initial years of my focus on cyber security, six major incidents convinced me that this was a serious problem. First, in 1997, I worked with NSA on a test of the Pentagon’s cyber security in an exercise the military called “Eligible Receiver.” Within two days, our attack team had penetrated the classified command network and was in position to issue bogus orders. I stopped the exercise early. The Deputy Defense Secretary was shocked at the Pentagon’s vulnerability and ordered all components to buy and install intrusion detection systems. They quickly discovered that there were thousands of attempts a day to hack into DoD networks. And those were just the ones they knew about.

In 1998, during a crisis with Iraq, someone hacked into the unclassified DoD computers that were needed to manage the U.S. military buildup. The FBI gave the attack the appropriate name “Solar Sunrise” (it was a wake-up call for many). After a few days of panic, the attackers were discovered to be not Iraqis but teenagers: one in Israel and two more in California, who had proved how poorly secured our military logistics network was.

In 1999, an Air Force base noticed something odd about its computer network. The Air Force called the FBI, which called NSA. What emerged was that huge amounts of data were being exfiltrated from the research files at the airbase. Indeed, gigantic amounts of data were being shipped out from many computers in the Defense network and from many data systems in the national nuclear laboratories of the Energy Department. The FBI case file for this one was called “Moonlight Maze,” which also turned out to be apt, because no one could throw much light on what was happening, other than to say the data was being sent through a long series of stops in many countries before ending up somewhere. The two deeply disturbing aspects of this were that the computer security specialists could not stop the data from being stolen, even when they knew about the problem, and that no one was really sure where it all was going (although some people later publicly attributed the attack to Russians). Every time new defenses were put in place, the attacker beat them. Then, one day, the attacks stopped. Or, more likely, the attackers started operating in a way we could not see.
