Dark Territory
by Fred Kaplan

It went on: “Today, a computer can cause switches or valves to open and close, move funds from one account to another, or convey a military order almost as quickly over thousands of miles as it can from next door, and just as easily from a terrorist hideout as from an office cubicle or a military command center.” These “cyber attacks” could be “combined with physical attacks” in an effort “to paralyze or panic large segments of society, damage our capability to respond to incidents (by disabling the 911 system or emergency communications, for example), hamper our ability to deploy conventional military forces, and otherwise limit the freedom of action of our national leadership.”

The report eschewed alarmism; there was no talk here of a "cyber Pearl Harbor." Its authors allowed up front that they saw "no evidence of an impending cyber attack which could have a debilitating effect on the nation's critical infrastructure." Still, they warned, "this is no basis for complacency," adding, "The capability to do harm—particularly through information networks—is real; it is growing at an alarming rate; and we have little defense against it."

This was hardly the first report to issue these warnings; the conclusions reached decades earlier by Willis Ware, and adopted as policy (or attempted policy) by the Reagan administration's NSDD-145, had percolated through the small, still obscure community of technically minded officials. In 1989, eight years before General Marsh's report, the National Research Council released a study titled Growing Vulnerability of the Public Switched Networks, which warned that "a serious threat to communications infrastructure is developing" from "natural, accidental, capricious, or hostile agents."

Two years after that, a report by the same council, titled Computers at Risk, observed, "The modern thief can steal more with a computer than with a gun. Tomorrow's terrorist may be able to do more damage with a keyboard than with a bomb."

In November 1996, just eleven months before the Marsh Report came out, a Defense Science Board task force on "Information Warfare-Defense" described the "increasing dependency" on vulnerable networks as "ingredients in a recipe for a national security disaster." The report recommended more than fifty actions to be taken over the next five years, at a cost of $3 billion.

The chairman of that task force, Duane Andrews, had recently been the assistant secretary of defense for command, control, communications, and intelligence—the Pentagon's liaison with the NSA. The vice chairman was Donald Latham, who, as the ASD(C3I) twelve years earlier, had been the driving force behind Reagan's NSDD-145, the first presidential directive on computer security. In his preface, Andrews was skeptical, bordering on cynical, that the report would make a dent. "I should also point out," he wrote, "that this is the third consecutive year a DSB Summer Study or Task Force has made similar recommendations."

But unlike those studies, the Marsh Report was the work of a presidential commission. The commander-in-chief had ordered it into being; someone on his staff would read its report; maybe the president himself would scan the executive summary; there was, in short, a chance that policy would sprout from its roots.

For a while, though, there was nothing: no response from the president, not so much as a meeting or a photo op with the chairman. A few months later, Clinton briefly alluded to the report's substance in a commencement address at the Naval Academy. "In our efforts to battle terrorism and cyber attacks and biological weapons," he said, "all of us must be extremely aggressive." That was it, at least in public.

But behind the scenes, at the same time that Marsh and his commissioners were winding up their final hearings, the Pentagon and the National Security Agency were planning a top secret exercise—a simulation of a cyber attack—that would breathe life into Marsh's warnings and finally, truly, prod top officials into action.

CHAPTER 4
ELIGIBLE RECEIVER

On June 9, 1997, twenty-five members of an NSA "Red Team" launched an exercise called Eligible Receiver, in which they hacked into the computer networks of the Department of Defense, using only commercially available equipment and software. It was the first high-level exercise testing whether the U.S. military's leaders, facilities, and global combatant commands were prepared for a cyber attack. And the outcome was alarming.

Eligible Receiver was the brainchild of Kenneth Minihan, an Air Force three-star general who, not quite a year and a half earlier, had succeeded Mike McConnell as director of the NSA. Six months before then, in August 1995, he'd been made director of the Defense Intelligence Agency, the culmination of a career in military intel. He didn't want to move to Fort Meade, especially after such a short spell at DIA. But the secretary of defense insisted: the NSA directorship was more important, he said, and the nation needed Minihan at its helm.

The secretary of defense was Bill Perry, the weapons scientist who, back in the Carter administration, had coined and defined "counter command-control warfare"—the predecessor to "information warfare"—and who, before then, as the founding president of ESL, Inc., had built many of the devices that the NSA used in laying the grounds for that kind of warfare.

Since joining the Clinton administration, first as deputy secretary of defense, then secretary, Perry had kept an eye on the NSA, and he didn't like what he saw. The world was rapidly switching to digital and the Internet, yet the NSA was still focused too much on telephone circuits and microwave signals. McConnell had tried to make changes, but he lost focus during his Clipper Chip obsession.

“They're broken over there,” Perry told Minihan. “You need to go fix things.”

Minihan had a reputation as an “out-of-the-box” thinker, an eccentric. In most military circles, this wasn't seen as a good thing, but Perry thought he had the right style to shake up Fort Meade.

For a crucial sixteen-month period, from June 1993 until October 1994, Minihan had been commander at Kelly Air Force Base, sprawled out across an enclave called Security Hill on the outskirts of San Antonio, Texas, home to the Air Force Information Warfare Center. Since 1948, four years before the creation of the NSA, Kelly had been the place where, under various rubrics, the Air Force did its own code-making and code-breaking.

In the summer of 1994, President Clinton ordered his generals to start planning an invasion of Haiti. The aim, as authorized in a U.N. Security Council resolution, was to oust the dictators who had come to power through a coup d'état and to restore the democratic rule of the island-nation's elected president, Jean-Bertrand Aristide. It would be a multipronged invasion, with special operations forces pre-positioned inside the country, infantry troops swarming onto the island from several approaches, and aircraft carriers offering support offshore in the Caribbean. Minihan's task was to come up with a way for U.S. aircraft—those carrying troops and those strafing the enemy, if necessary—to fly over Haiti without being detected.

One of Minihan's junior officers in the Information Warfare Center had been a "demon-dialer" in his youth, a technical whiz kid—not unlike the Matthew Broderick character in WarGames—who messed with the phone company, simulating certain dial tones, so he could make long-distance calls for free. Faced with the Haiti challenge, he came to Minihan with an idea. He'd done some research: it turned out that Haiti's air-defense system was hooked up to the local telephone lines—and he knew how to make all the phones in Haiti busy at the same time. There would be no need to attack anti-aircraft batteries with bombs or missiles, which might go astray and kill civilians. All that Minihan and his crew had to do was to tie up the phone lines.

In the end, the invasion was called off. Clinton sent a delegation of eminences—Jimmy Carter, Colin Powell, and Sam Nunn—to warn the Haitian dictators of the impending invasion; the dictators fled; Aristide returned to power without a shot fired. But Minihan had woven the demon-dialer's idea into the official war plan; if the invasion had gone ahead, that was how American planes would have eluded fire.

Bill Perry was monitoring the war plan from the outset. When he learned about Minihan's idea, his eyes lit up. It resonated with his own way of thinking as a pioneer in electronic countermeasures. The Haiti phone-flooding plan was what put Minihan on Perry's radar screen as an officer to watch—and, when the right slot opened up, Perry pushed him into it.

Something else about Kelly Air Force Base caught Perry's attention. The center didn't just devise clever schemes for offensive attacks on adversaries; it also, in a separate unit, devised a clever way to detect, monitor, and neutralize information warfare attacks that adversaries might launch on America. None of the other military services, not even the Navy, had designed anything nearly so effective.

The technique was called Network Security Monitoring, and it was the invention of a computer scientist at the University of California at Davis named Todd Heberlein.

In the late 1980s, hacking emerged as a serious nuisance and an occasional menace.
The first nightmare case occurred on November 2, 1988, when, over a period of fifteen hours, as many as six thousand UNIX computers—about one tenth of all the computers on the Net, including those at Wright-Patterson Air Force Base, the Army Ballistic Research Lab, and several NASA facilities—went dead and stayed dead, incurably infected from some outside source. It came to be called the “Morris Worm,” named after its perpetrator, a Cornell University grad student named Robert T. Morris Jr. (To the embarrassment of Fort Meade, he turned out to be the son of Robert Morris Sr., chief scientist of the NSA Computer Security Center. It was the CSC that traced the worm to its culprit.)

Morris had meant no harm. He'd started hacking into the Net, using several university sites as a portal to hide his identity, in order to measure just how extensive the network was. (At the time, no one knew.) But he committed a serious mistake: the worm interrogated several machines repeatedly (he hadn't programmed it to stop once it received an answer), overloading and crashing the systems. In the worm's wake, many computer scientists and a few officials drew a frightening lesson: Morris had shown just how easy it was to bring the system down; had that been his intent, he could have wreaked much greater damage still.
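Morris's fatal mistake lends itself to a short sketch. The toy Python below is purely illustrative, not the worm's actual code (which was written in C); the function and variable names are invented. It shows only the flaw the passage describes: without a stop condition, repeated interrogations pile copy after copy onto the same host until it bogs down.

```python
# Illustrative toy only -- not the Morris Worm's actual code.
# The bug: the worm kept interrogating and re-infecting machines instead
# of stopping once a target answered that a copy was already running.

def infect(running_copies, target, stop_on_answer):
    """Attempt to place one more copy of the worm on `target`."""
    already_there = running_copies.get(target, 0) > 0
    if stop_on_answer and already_there:
        return  # well-behaved variant: one copy per machine, no overload
    running_copies[target] = running_copies.get(target, 0) + 1

buggy, fixed = {}, {}
for _ in range(1000):  # a thousand repeated interrogations of one host
    infect(buggy, "vax1", stop_on_answer=False)
    infect(fixed, "vax1", stop_on_answer=True)

print(buggy["vax1"], fixed["vax1"])  # 1000 copies vs. a single copy
```

With the stop condition, the host runs one copy; without it, a thousand processes accumulate, which is the overload-and-crash behavior the worm exhibited.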

As a result of the Morris Worm, a few mathematicians developed programs to detect intruders, but these programs were designed to protect individual computers.
Todd Heberlein's innovation was designing intrusion-detection software to be installed on an open network, to which any number of computers might be connected. And his software worked on several levels. First, it checked for anomalous activity on the network—for instance, key words that indicated someone was making repeated attempts to log on to an account or trying out one random password after another. Such attempts drew particular notice if they entered the network with an MIT.edu address, since MIT, the Massachusetts Institute of Technology, was famous for letting anyone and everyone dial in to its terminal from anyplace on the Net and was thus a favorite point of entry for hackers. Anomalous activities would trigger an alert. At that point, the software could track data from the hacker's session, noting his IP address, how long he stayed inside the network, and how much data he was extracting or transferring to another site. (This "session data" would later be called "metadata.") After this point, if the hacker's sessions raised enough suspicion to prompt further investigation, Heberlein's software could trace their full contents—what the hacker was doing, reading, and sending—in real time, across the whole network that the software was monitoring.
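The layered escalation described above can be sketched compactly. The Python below is a hypothetical toy, not Heberlein's actual software; the class names, the three-failure threshold, and the event format are all invented for illustration. It shows the progression the text describes: an anomaly alert, then session metadata, then full-content capture.

```python
from dataclasses import dataclass, field

FAILED_LOGIN_THRESHOLD = 3  # hypothetical: repeated failures look anomalous

@dataclass
class Session:
    src_addr: str
    failed_logins: int = 0
    bytes_out: int = 0            # level 2: session metadata
    alerted: bool = False         # level 1: anomaly flag
    captured: list = field(default_factory=list)  # level 3: full contents

class NetworkMonitor:
    """Watches a shared network rather than a single host."""
    def __init__(self):
        self.sessions = {}

    def observe(self, src_addr, event, payload=""):
        s = self.sessions.setdefault(src_addr, Session(src_addr))
        if event == "login_failed":
            s.failed_logins += 1
            if s.failed_logins >= FAILED_LOGIN_THRESHOLD:
                s.alerted = True  # anomalous activity triggers an alert
        elif event == "transfer":
            s.bytes_out += len(payload)  # note how much data is leaving
        if s.alerted:
            # once suspicious, record the session's full contents
            s.captured.append((event, payload))
        return s

monitor = NetworkMonitor()
for _ in range(3):
    monitor.observe("badhost.example", "login_failed")
s = monitor.observe("badhost.example", "transfer", "stolen-file-bytes")
print(s.alerted, s.bytes_out, len(s.captured))  # True 17 2
```

The design point is the escalation: cheap anomaly checks run on everything, while the expensive full-content capture is reserved for sessions that have already raised an alert.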

Like many hackers and counter-hackers of the day, Heberlein had been inspired by Cliff Stoll's 1989 book, The Cuckoo's Egg. (A junior officer who helped adapt Heberlein's software at the Air Force Information Warfare Center wrote a paper called "50 Lessons from the First 50 Pages of The Cuckoo's Egg.") Stoll was a genial hippie and brilliant astronomer who worked at Lawrence Berkeley National Laboratory as the computer system's administrator. One day, he discovered a seventy-five-cent error in the lab's phone bill, traced its origins out of sheer curiosity, and wound up uncovering an East German spy ring attempting to steal U.S. military secrets, using the Berkeley Lab's open site as a portal. Over the next several months, relying entirely on his wits, Stoll essentially invented the techniques of intrusion detection that came to be widely adopted over the next three decades. He attached a printer to the input lines of the lab's computer system, so that it typed a transcript of the attacker's activities. Along with a Berkeley colleague, Lloyd Bellknap, he built a "logic analyzer" and programmed it to track a specific user: when the user logged in, a device would automatically page Stoll, who would dash to the lab. The logic analyzer would also cross-correlate logs from other sites that the hacker had intruded, so Stoll could draw a full picture of what the hacker was up to.

Heberlein updated Stoll's techniques, so that he could track and repel someone hacking into not only a single modem but also a computer network.

Stoll was the inspiration for Heberlein's work in yet another sense. After Stoll gained fame for catching the East German hacker, and his book scaled the best-seller list, Lawrence Livermore National Laboratory—the more military-slanted lab, forty miles from Berkeley—exploited the headlines and requested money from the Department of Energy to create a “network security monitoring” system. Livermore won the contract, but no one there knew how to build such a system. The lab's managers reached out to Karl Levitt, a computer science professor at UC Davis. Levitt brought in his star student, Todd Heberlein.
