Author: Fred Kaplan
McConnell pushed hard for the Clipper Chip (he made it his top priority), but it was doomed from the start. First, it was expensive: a phone with a Clipper Chip installed would cost more than a thousand dollars. Second, the two-key procedure was baroque. (Dorothy Denning, one of the country's top cryptologists, took part in a simulated exercise. She obtained the key from Treasury, but after driving out to Quantico, learned that the person from NIST had picked up the wrong key. They couldn't unlock the encryption.) Finally, there was the biggest obstacle: very few people trusted the Clipper Chip, because very few people trusted the intelligence agencies. The
revelations of CIA and NSA domestic surveillance, unleashed by Senator Frank Church's committee in the mid-1970s, were still a fresh memory. Nearly everyone, even those who weren't inclined to distrust spy agencies, suspected that the NSA had programmed the Clipper Chip with a secret back door that its agents could open to listen in on phone conversations, without going through Treasury, NIST, or any legal process.
The Clipper Chip ended with a whimper. It was McConnell's well-intentioned, but misguided, effort to forge a compromise between personal privacy and national security, and to do so openly, in the public eye. The next time the NSA created or discovered back doors into data, it would do so, as it had always done, under the cloak of secrecy.
On April 19, 1995, a small gang of militant anarchists, led by Timothy McVeigh, blew up a federal office building in Oklahoma City, killing 168 people, injuring 600 more, and destroying or damaging 325 buildings across a sixteen-block radius, causing more than $600 million in damage. The shocking thing that emerged from the subsequent investigation was just how easily McVeigh and his associates had pulled off the bombing. It took little more than a truck and a few dozen bags of ammonium nitrate, a common chemical in fertilizers, obtainable in many supply stores. Security around the building was practically nonexistent.
The obvious question, in and out of the government, was what sorts of targets would get blown up next: a dam, a major port, the Federal Reserve, a nuclear power plant? The damage from any of those hits would be more than simply tragic; it could reverberate through the entire economy. So how vulnerable were they, and what could be done to protect them?
On June 21, Bill Clinton signed a Presidential Decision Directive, PDD-39, titled “U.S. Policy on Counterterrorism,” which, among other things, put Attorney General Janet Reno in charge of a “cabinet
committee” to review, and suggest ways to reduce, the vulnerability of “government facilities” and
“critical national infrastructure.”
Reno turned the task over to her deputy, Jamie Gorelick, who set up a Critical Infrastructure Working Group, consisting of other deputies from the Pentagon, CIA, FBI, and the White House. After a few weeks of meetings, the group recommended that the president appoint a commission, which in turn held hearings and wrote a report, which culminated in the drafting of another presidential directive.
Several White House aides, who figured the commission would come up with new ways to secure important physical structures, were startled when more than half of its report and recommendations dealt with the vulnerability of computer networks and the urgent need for what it called “cyber security.”
The surprise twist came about because key members of the Critical Infrastructure Working Group and the subsequent presidential commission had come from the NSA or the Navy's super-secret black programs and were thus well aware of this new aspect of the world.
Rich Wilhelm, the NSA director of information warfare, was among the most influential members of the working group. A few months before the Oklahoma City bombing, President Clinton had put Vice President Al Gore in charge of overseeing the Clipper Chip; Mike McConnell sent Wilhelm to the White House as the NSA liaison on the project. The chip soon died, but Gore held on to Wilhelm and made him his intelligence adviser, with a spot on the National Security Council staff. Early on at his new job, Wilhelm told some of his fellow staffers about the discoveries he'd made at Fort Meade, especially those highlighting the vulnerability of America's increasingly networked society. He wrote a memo on the subject for Clinton's national security adviser, Anthony Lake, who signed it with his own name and passed it on to the president.
When Jamie Gorelick put together her working group, it was natural
that Wilhelm would be on it. One of its first tasks was to define its title, to figure out which infrastructures were “critical,” that is, which sectors were vital to the functioning of a modern society. The group came up with a list of eight: telecommunications, electrical power, gas and oil, banking and finance, transportation, water supply, emergency services, and “continuation of government” in the event of war or catastrophe.
Wilhelm pointed out that all of these sectors relied, in some cases heavily, on computer networks. Terrorists wouldn't need to blow up a bank or a rail line or a power grid; they could merely disrupt the computer network that controlled its workings, and the result would be the same. As a result, Wilhelm argued, “critical infrastructure” should include not just physical buildings but the stuff of what would soon be called cyberspace.
Gorelick needed no persuading on this point. As deputy attorney general, she served on several interagency panels, one of which dealt with national security matters. She co-chaired that panel with the deputy director of the CIA, who happened to be Bill Studeman, the former NSA director (and Bobby Ray Inman protégé). In his days at Fort Meade, Studeman had been a sharp advocate of counter-C2 warfare; at Langley he was promoting the same idea, now known as information warfare, both its offensive and its defensive sides: America's ability to penetrate the enemy's networks and the enemy's ability to penetrate America's.
Studeman and Gorelick met to discuss these issues every two weeks, and his arguments had resonance. Before her appointment as deputy attorney general, Gorelick had been general counsel at the Pentagon, where she heard frequent briefings on hackings of defense contractors and even of the Defense Department. Now, at the Justice Department, she was helping to prosecute criminal cases of hackers who'd penetrated the computers of banks and manufacturers. One year before Oklahoma City, Gorelick had helped draft the Computer
Crime Initiative Action Plan, to boost the Justice Department's expertise in
“high-tech matters,” and had helped create the Information Infrastructure Task Force Coordinating Committee.
These ventures weren't mere hobbyhorses; they were mandated by the Justice Department's caseload.
In recent times, a Russian crime syndicate had hacked into Citibank's computers and stolen $10 million, funneling it to separate accounts in California, Germany, Finland, and Israel. A disgruntled ex-employee of an emergency alert network, covering twenty-two states, crashed the system for ten hours. A man in California gained control of the computer running local phone switches, downloaded information about government wiretaps on suspected terrorists, and posted the information online. Two teenage boys, malicious counterparts to the hero of WarGames, hacked into the computer network at an Air Force base in Rome, New York; one of the boys later sneered that military sites were the easiest to hack on the entire Internet.
From all this (her experiences as a government lawyer, the interagency meetings with Studeman, and now the discussions with Rich Wilhelm on the working group), Gorelick was coming to two disturbing conclusions. First, at least in this realm, the threats from criminals, terrorists, and foreign adversaries were all the same: they used the same means of attack; often, they couldn't be distinguished. This wasn't a problem for the Department of Justice or Defense alone; the whole government had to deal with it, and, since most computer traffic moved along networks owned by corporations, the private sector had to help find, and enforce, solutions, too.
Second, the threat was wider and deeper than she'd imagined. Looking over the group's list of “critical” infrastructures, and learning that they were all increasingly controlled by computers, Gorelick realized, in a jaw-drop moment, that a coordinated attack by a handful of technical savants, from just down the street or the other side of the globe, could devastate the nation.
What nailed this new understanding was a briefing by the Pentagon's delegate to the working group, a retired Navy officer named Brenton Greene, who had recently been named to a new post, the director for infrastructure policy, in the office of the undersecretary of defense.
Greene had been involved in some of the military's most highly classified programs. In the late 1980s and early 1990s, he was a submarine skipper on beyond-top-secret spy missions. After that, he managed Pentagon black programs in a unit called the J Department, which developed emerging technologies that might give America an advantage in a coming war.
One branch of J Department worked on “critical-node targeting.” The idea was to analyze the infrastructures of every adversary's country and to identify the key targets: the smallest number of targets that the American military would have to destroy in order to make a huge impact on the course of a war. Greene helped to develop another branch of the department, the Strategic Leveraging Project, which focused on new ways of penetrating and subverting foreign adversaries' command-control networks, the essence of information warfare.
Working on these projects and seeing how easy it was, at least in theory, to devastate a foreign country with a few well-laid bombs or electronic intrusions, Greene realized, as had several others who'd journeyed down this path before him, the flip side of the equation: what we could do to them, they could do to us. And Greene was also learning that America was far more vulnerable to these sorts of attacks, especially information attacks, than any other country on the planet.
In the course of his research, Greene came across a 1990 study by the U.S. Office of Technology Assessment, a congressional advisory group, called Physical Vulnerability of Electric Systems to Natural Disasters and Sabotage. In its opening pages, the authors revealed which power stations and switches, if disabled, would take down huge chunks of the national grid. This was a public document, available to anyone who knew about it.
One of Greene's colleagues in the J Department told him that, soon after George Bush entered the White House in January 1989, Senator John Glenn showed the study to General Brent Scowcroft, Bush's national security adviser. Scowcroft was concerned and asked a Secret Service officer named Charles Lane to put together a small team (no more than a half dozen technical analysts) to do a separate study. The team's findings were so disturbing that Scowcroft shredded all of their work material. Only two copies of Lane's report were printed. Greene obtained one of them.
At this point, Greene concluded that he'd been working the wrong side of the problem: protecting America's infrastructure was more vital (as he saw it, more urgent) than seeking ways to poke holes in foreign infrastructures.
Greene knew Linton Wells, a fellow Navy officer with a deep background in black programs, who was now military assistant to Walter Slocombe, the undersecretary of defense for policy. Greene told Wells that Slocombe should hire a director for infrastructure policy. Slocombe approved the idea. Greene was hired.
In his first few months on the new job, Greene worked up a briefing on the “interdependence” of the nation's infrastructure, its concentration, and the commingling of one segment with the others: how disabling a few “critical nodes” (a phrase from J Department) could severely damage the country.
For instance, Greene knew that the Bell Corporation distributed a CD-ROM listing all of its communications switches worldwide, so that, say, a phone company in Argentina would know how to connect circuits for routing a call to Ohio. Greene looked at this guide with a different question in mind: where were all the switches in the major American cities? In each case he examined, the switches were, for reasons of economic efficiency, concentrated at just a couple of sites. For New York City, most of them were located at two addresses in Lower Manhattan: 140 West Street and 104 Broad Street. Take out those two addresses, whether with a bomb or an information warfare attack, and New York City would lose almost all of its phone service, at least for a while. The loss of phone service would affect other infrastructures, and the cascading would go on.
Capping Greene's briefing, the CIA (where Bill Studeman was briefly acting director) circulated a classified report on the vulnerability of SCADA systems. The acronym stood for Supervisory Control and Data Acquisition. Throughout the country, again for economic reasons, utility companies, waterworks, railway lines, and other vast sectors of critical infrastructure were linking one local stretch of the sector to another, through computer networks, and controlling all of them remotely, sometimes with human monitors, often with automated sensors. Before the CIA report, few on the working group had ever heard of SCADA. Now, everyone realized that they were probably just scratching the surface of a new danger that came with the new technology.
Gorelick wrote a memo, alerting her superiors that the group was expanding the scope of its inquiry,
“in light of the breadth of critical infrastructures and the multiplicity of sources and forms of attack.” It was no longer enough to consider the likelihood and impact of terrorists blowing up critical buildings. The groupâand, ultimately, the presidentâalso had to consider “threats from other sources.”
What to call these “other” threats?
One word was floating around in stories about hackings of one sort or another: “cyber.” The word had its roots in “cybernetics,” a term dating back to the mid-twentieth century, describing the closed loops of information systems. But in its present-day context of computer networks, the term stemmed from William Gibson's 1984 science-fiction novel, Neuromancer, a wild and eerily prescient tale of murder and mayhem in the virtual world of “cyberspace.”