Worm: The First Digital World War
by Mark Bowden
The idea was called “packet-switching.” The concept apparently came almost simultaneously to two cold war scientists in the early 1960s: Donald Davies, working at Britain’s National Physical Laboratory, and an American immigrant scientist from Poland named Paul Baran, at the RAND Corporation. Both researchers were trying to invent a new, more robust communications network. Baran was specifically tasked with designing one that might withstand a nuclear attack. Davies was just looking for an improvement over the existing telephone switching networks, but there is little doubt that the experience of prolonged German aerial bombardment during World War II lurked somewhere in the back of his mind. Traditional phone networks had critical trunk lines and central switching stations that, if destroyed, could effectively short-circuit the entire network. Both Baran and Davies wanted a system that could survive such blows, that could not be taken out. The alternative that seemed to work best was modeled after the human brain.
Neurologists knew that after severe head injuries, the brain began to power up alternative neural pathways that avoided areas of damaged or destroyed cells. Often patients completely recovered functions that, at first glance, might have seemed hopelessly lost. The brain seemed to possess enough built-in redundancy to compensate for even seemingly catastrophic blows. Abandoning the most direct pathway in this fashion would not have worked for telephone grids, because the farther the message traveled through the network’s wires and switches, and the more times its direction shifted, the more degraded the signal became. Digital messages, on the other hand, messages composed in the ones and zeros of computer object code, never degraded. They could bounce around indefinitely without losing their integrity, and still arrive pristine.

There was another advantage to the digital approach. Since messages were broken down by the computer into long lists of ones and zeros, why not break them down into smaller bits, or “packets,” and then reassemble them at the end point? That way even a message as simple as an email might take dozens of different pathways to its destination. It was more like teleportation than simple transmission: You disassembled the data into many distinct packets; cast them out on the network, where each packet found its own way; and then reassembled the data at the end point, all in microseconds, which are perceived by humans as real time. No delay. Diagrams of the proposed “packet-switching” network looked more like drawings of interlinked brain cells than a road map or a telephone grid. Such a network required minimal central planning, because each new computer node that connected just enlarged and strengthened the web. You could not destroy such a network easily, because even if you managed to take out a large chunk, traffic would automatically seek out surviving nodes.
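The mechanics of that disassembly and reassembly are simple enough to sketch in a few lines of code. The toy C program below is an illustration invented for this purpose, not anything drawn from Baran’s or Davies’s actual designs: it chops a message into numbered packets, lets them arrive out of order as if each had taken a different route, and rebuilds the original, intact, at the far end.

```c
/* Toy illustration of packet switching (invented example): break a
 * digital message into numbered packets, let them arrive out of order,
 * and reassemble the original by sequence number at the end point. */
#include <stdio.h>
#include <string.h>

#define PACKET_DATA 8   /* bytes of message carried by each packet */

struct packet {
    int  seq;                     /* position in the original message */
    char data[PACKET_DATA + 1];   /* one small slice of the message   */
};

int main(void) {
    const char *message = "Digital data arrives pristine.";
    struct packet packets[16];
    int count = 0;

    /* Disassemble: cut the message into numbered packets. */
    for (size_t i = 0; i < strlen(message); i += PACKET_DATA) {
        packets[count].seq = count;
        strncpy(packets[count].data, message + i, PACKET_DATA);
        packets[count].data[PACKET_DATA] = '\0';
        count++;
    }

    /* Simulate different routes: the last packet arrives first. */
    struct packet tmp = packets[0];
    packets[0] = packets[count - 1];
    packets[count - 1] = tmp;

    /* Reassemble at the end point by sequence number. */
    char rebuilt[128] = "";
    for (int want = 0; want < count; want++)
        for (int i = 0; i < count; i++)
            if (packets[i].seq == want)
                strcat(rebuilt, packets[i].data);

    printf("%s\n", rebuilt);   /* prints the original message, intact */
    return 0;
}
```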
This gave the Internet an especially hardy nature—a fact that buttressed the anarchic theology of the techno-utopians. But it was not invulnerable, as the massive DDoS attack of October 2002 demonstrated. The system’s root servers were critical, because all Internet traffic relied on at least one of the thirteen. If you could mount a sufficiently powerful assault, it was theoretically possible to overwhelm all thirteen and bring even this very resilient global network to a dead stop. It would take a mighty computer to mount an attack like that, or one very, very large botnet. By the turn of the century, botnets were the coming thing . . .
. . . and they were getting easier to make.
In the beginning, networks were created by wiring computers together manually, but as the infrastructure of the Internet solidified, interconnection was a given. Almost all computers today are connected to a network, even if only to their local ISP. So if you were clever enough to make all the computers on a network work together, you could effectively assemble yourself a supercomputer. There was even a poorly guarded infrastructure already in place to facilitate such work. Techies had long been using Internet Relay Chat (IRC) channels to maintain constant real-time dialogue with colleagues all over the world. IRC offered a platform for global communication that was controlled from a single point, the channel’s manager, and was used to host open-ended professional discussions, laboratory projects, and teleconferences before desktop applications for such things became widely known or available. Members of a group could use the channel to communicate directly and privately with one another but could also broadcast messages to the entire membership. Some of the earliest benign “bots” were crafted by IRC channel controllers to automatically monitor or manage discussion. The idea wasn’t completely new. Computer operators had long written programs to automate routine tasks on their networks. These early bots were useful and harmless. In the early 1970s, a Massachusetts researcher named Bob Thomas created a silly worm he called “Creeper,” which would display a message on infected machines: “I’m the Creeper, catch me if you can!” Creeper was more frog than worm. It hopscotched from target to target, removing itself from each computer as it jumped to the next. It was designed just to show off a little, and to make people laugh.
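The property that made IRC channels so handy, a single control point whose messages reach every member at once, human or bot, can likewise be sketched as a toy. The short C program below is an invented illustration, not real IRC code: a broadcast from the channel’s controller is delivered to each member, and the automated members react on their own.

```c
/* Toy illustration of an IRC-style channel (invented example): one
 * controller broadcasts a message, and every member of the channel,
 * human or automated "bot," receives it at the same time. */
#include <stdio.h>

/* Each member is simply a function that reacts to a channel message. */
typedef void (*member_fn)(const char *msg);

static void human_member(const char *msg) { printf("human reads:    %s\n", msg); }
static void logging_bot(const char *msg)  { printf("log bot saves:  %s\n", msg); }
static void monitor_bot(const char *msg)  { printf("mod bot checks: %s\n", msg); }

/* The single control point: one call reaches the whole membership. */
static void broadcast(member_fn members[], int n, const char *msg) {
    for (int i = 0; i < n; i++)
        members[i](msg);
}

int main(void) {
    member_fn channel[] = { human_member, logging_bot, monitor_bot };
    broadcast(channel, 3, "lab meeting moved to 14:00 UTC");
    return 0;
}
```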
But even those engaged in noble pursuits sometimes don’t play nice. Chat room members sometimes chose to commandeer these channels, to become, in effect, alternate controllers. One very effective way to hijack an IRC channel (and, in the process, create a botnet) was to bypass the individual computer operators with a worm that could infect all the machines. The worm’s author seeded the network with his code and linked the infected computers back to himself. The official manager of the channel would have no idea his network had been hijacked. The usurper could then marshal the power of the network to mount a DDoS attack against those with whom he disagreed or of whom he disapproved, or he could simply explore the network all he wished, collecting information from individual computers, spying, or issuing commands of his own. It was a tool ready-made for more nefarious purposes.
On Friday, November 4, 1988, four days before voters went to the polls nationwide to choose Vice President George H. W. Bush over Governor Michael Dukakis of Massachusetts for the White House, a headline in the New York Times read:
“VIRUS” IN MILITARY COMPUTERS
DISRUPTS SYSTEMS NATIONWIDE
The writer, John Markoff, reported:
In an intrusion that raises questions about the vulnerability of the nation’s computers, a Department of Defense network has been disrupted since Wednesday by a rapidly spreading “virus” program apparently introduced by a computer science student.
. . . By late yesterday afternoon computer experts were calling the virus the largest assault ever on the nation’s computers.
“The big issue is that a relatively benign software program can virtually bring our computing community to its knees and keep it there for some time,” said Chuck Cole, deputy computer security manager at Lawrence Livermore Laboratory in Livermore, Calif., one of the sites affected by the intrusion. “The cost is going to be staggering.”
For those inclined to conspiracy theories, it was noted with particular interest that the twenty-three-year-old author of the “virus,” Robert Tappan Morris, a Cornell University graduate student, was the son of the chief scientist at the National Computer Security Center, a division of the National Security Agency. The younger Morris had grown up playing with computers. Typical of those in the hacking community, he had a fluency with networks and network security (such as it existed at that time, which is to say, hardly at all). By all accounts, he cooked up the worm on his own. Markoff reported that the grad student’s creation had clogged computer networks nationwide; in 1988, these networks still mostly belonged to the military, corporations, and universities. Cliff Stoll, then working as a computer security expert at Harvard University, told the newspaper, “There is not one system manager who is not tearing his hair out. It is causing enormous headaches.”
The managers were annoyed, certainly, but also clearly impressed. More than one programmer described the Morris Worm as “elegant.” It consisted of only ninety-nine lines of code, and had a number of clever ways to invade computers, one of them by causing a buffer overflow (remember that technique?) in the finger daemon, a user-lookup utility that ran on computers across the ARPANET. Morris launched his worm from a computer at MIT to cover his tracks at Cornell, expecting it to evade detection in the computers it infected. As smart as it was, the worm had a fatal flaw. In an effort to protect itself from being flushed out of a network, the code was designed to reproduce itself wantonly, and, much to Morris’s dismay, it ended up spiraling out of control. When he realized that it was running amok, he said he tried to send out instructions to kill it, but the networks were so jammed with his worm’s traffic that the corrective could not get out.
Once it malfunctioned, Morris never tried to evade responsibility. He was later convicted under a new Federal Computer Fraud and Abuse Act, fined $10,000, and sentenced to three years of probation and four hundred hours of community service. Perhaps a more lasting punishment has been lifelong notoriety, a quasi-hero status among those who admire acts of cybervandalism. He is today an associate professor at MIT, and insists he had intended nothing more than to quietly infect computers in order to count them. Prosecutors charged that he had, in fact, designed the worm to “attack” computers owned by Sun Microsystems, Inc., and the Digital Equipment Corporation, two of the institutions hardest hit.
Prior to this stunning event, some in the tech field had differentiated between viruses and worms by classifying the former as malicious, the latter as beneficial. On purpose or not, Morris’s worm demonstrated just how destructive a worm could be. Geoffrey Goodfellow, president of Anterior Technology, Inc., told Markoff, “It was an accident waiting to happen. We deserved it. We needed something like this to bring us to our senses. We have not been paying too much attention to protecting ourselves.” This kind of lament was becoming common . . . and we would hear it again.
As the Internet began to more fully congeal in the following decade, and as the personal computer became as commonplace in American homes as the TV set, malware preyed primarily on the explosive success of email. Having learned to invade computers and propagate over networks, malware creators were no longer content to demonstrate their ability to infect and spread; they were now intent on writing malware that could actually accomplish something. Making computers easy to use and linking them together had many wonderful effects, but it also created an ocean of suckers. Worms and viruses exploited the naïveté of new computer users, who readily fell victim to “Trojan horses,” usually emails enticing them to open unsolicited attachments. One of the worst was the Melissa virus, so dubbed because its author, David L. Smith, admired a lap dancer by that name. It was inadvertently released from a sexually oriented website, and was designed initially to distribute pornographic images. It worked by attaching itself to a Microsoft Word document, and once downloaded by a single user, would raid email files for new targets and begin mass-mailing itself. Melissa rapidly clogged networks worldwide. Smith was arrested, served twenty months in prison, and was fined $5,000.
The email attachment technique is still used today, but it peaked with the I Love You virus in 2000, which arrived as a mysterious email with a compelling come-on, “I Love You,” and invited recipients to open the attached missive from an unknown admirer. It preyed on curiosity, loneliness, and vanity, and once invited into a computer, like Melissa, it sought out email files for new targets. This virus, designed by two programming students in the Philippines, was crafted to steal passwords from victims’ personal files, but failed, so it fell more into the category of malicious mischief than theft. It resulted in an estimated $5.5 billion in damage, infecting as many as fifty million computers in a single day. Like the malware to come, it exploited a known vulnerability in Microsoft’s operating system. It was the attack that prompted the software giant to get serious about protecting itself, and it came, coincidentally, at precisely the time when Microsoft was hiring T. J. Campana. The success of these email viruses made computer users more wary about opening unsolicited attachments, and helped create the lucrative antivirus industry.
Melissa and I Love You gave way to what Phil Porras calls “the Era of the Massive Worms.” There would continue to be very successful email viruses, notably one named after tennis star Anna Kournikova, which took advantage of her fetching image to lure computer users into opening a “picture,” but heightened security measures and pickier computer users gradually forced greater stealth.
Worms needed no human help. One of the first big ones was Code Red, which appeared on July 13, 2001, and was so called because the researchers who discovered it happened to be drinking a soda by that name. It triggered a buffer overflow in Microsoft’s IIS web server software by sending long strings of the letter N, thereby overflowing the buffer and hijacking the host program. It defaced websites by displaying the triumphant message, “HELLO! Welcome to http://www.WORM.com! Hacked by Chinese!” It was soon so widespread that the phrase “Hacked by Chinese!” entered the language, appropriated by victorious online game players to lord it over defeated opponents. The author of Code Red was never caught, but most clues again pointed to the Philippines. It was thought to have infected 359,000 computers.
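The underlying trick was the classic buffer overflow. The deliberately unsafe C sketch below illustrates the generic technique rather than Code Red’s actual exploit: a program copies incoming data into a fixed-size buffer without checking its length, so a long enough run of Ns spills past the end of the buffer and tramples whatever the program stored beside it, which is the opening an attacker uses to seize control. Running it will most likely crash the program, which is exactly the point.

```c
/* Deliberately unsafe sketch of a generic buffer overflow (not Code
 * Red's actual exploit): data is copied into a fixed-size buffer with
 * no length check, so an oversized run of Ns spills past the buffer
 * and overwrites adjacent memory. Expect a crash or a "stack smashing"
 * abort; that failure is the point of the demonstration. */
#include <stdio.h>
#include <string.h>

static void handle_request(const char *request) {
    char buffer[16];            /* room for only 16 bytes          */
    strcpy(buffer, request);    /* no bounds check: the fatal flaw */
    printf("handled: %.15s...\n", buffer);
}

int main(void) {
    char oversized[256];
    memset(oversized, 'N', sizeof(oversized) - 1);  /* a long run of Ns */
    oversized[sizeof(oversized) - 1] = '\0';
    handle_request(oversized);  /* overruns the 16-byte buffer */
    return 0;
}
```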
Code Red was followed by a succession of worms—Slammer, Blaster, Nimda, Sasser, and others—which increasingly focused on vulnerabilities Microsoft had already patched. But the Era of the Massive Worms effectively ended when the software giant released Service Pack 2 in 2004, buttoning up the operating system as never before. It marked the end of the naive early period of Internet development, which was defined by the happy notion that freely sharing all information would save the world. This belief still has its fierce adherents—WikiLeaks comes to mind—but the average computer user had learned his lesson by 2004. Microsoft, at least, had noticed snakes in the garden. Whereas Windows initially had been designed to have a strictly hospitable disposition, happily opening whatever packet of data came knocking, Service Pack 2 regarded anything inbound as a threat.