
The original plan for the ENIAC was that it would be used to rapidly calculate the trajectories traveled by shells fired at different elevation angles at different air temperatures. When the project was funded in 1943, these trajectories were being computed either by the brute force method of firing lots of shells, or by the time-consuming methods of having office workers carry out step-by-step calculations of the shell paths according to differential equations.
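
To make the step-by-step method concrete, here is a little Python sketch of the kind of calculation those human computers were grinding out by hand: it integrates a shell’s equations of motion in small time steps, with a simple quadratic air-drag term. The drag constant and muzzle velocity are made-up illustrative numbers, not anything from a real 1940s firing table.

    import math

    # Step-by-step (Euler) integration of a shell's flight path.
    # All constants below are illustrative, not historical firing-table values.
    g = 9.81    # gravity, meters/second^2
    k = 0.0001  # quadratic air-drag constant, 1/meter (assumed)
    dt = 0.01   # time step, seconds

    def shell_range(speed, elevation_degrees):
        """Return the horizontal distance a shell travels before landing."""
        angle = math.radians(elevation_degrees)
        x, y = 0.0, 0.0
        vx, vy = speed * math.cos(angle), speed * math.sin(angle)
        while y >= 0.0:
            v = math.hypot(vx, vy)
            # Drag decelerates the shell along its direction of motion.
            ax = -k * v * vx
            ay = -g - k * v * vy
            x, y = x + vx * dt, y + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
        return x

    # One table row per elevation angle, at an assumed 800 m/s muzzle velocity.
    for elevation in (15, 30, 45, 60):
        print(elevation, "degrees:", round(shell_range(800.0, elevation)), "meters")

Each pass through the loop is one tiny step of the shell’s flight; a human computer did the same arithmetic with a desk calculator, row after row.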

As it happened, World War II was over by the time the ENIAC was up and running, so the ENIAC wasn’t ever actually used to compute any ballistic trajectories. The first computation the ENIAC carried out was a calculation to test the feasibility of building a hydrogen bomb. It is said that the calculation used an initial condition of one million punch cards, with each punch card representing a single “mass point.” The cards were run through the ENIAC, a million new cards were generated, and the million new cards served as input for a new cycle of computation. The calculation was a numerical solution of a complicated differential equation having to do with nuclear fusion. You might say that the very first electronic computer program was a simulation of an H-bomb explosion. A long way from the Eccentric Anomaly of Mars.

The Von Neumann Architecture

The man who had the idea of running the H-bomb program on the ENIAC was the famous mathematician John von Neumann. Besides working at the weapons laboratory in Los Alamos, New Mexico, von Neumann was consulting with the ENIAC team, which consisted of Mauchly, Eckert, and a number of others.

Von Neumann helped them draw up the design for a new computer to be called the EDVAC (for Electronic Discrete Variable Automatic Computer). The EDVAC would be distinguished from the ENIAC by a better memory and by one key feature: an easily changeable stored program. Although the ENIAC read its input data off punch cards, its program could only be changed by manually moving the wires on a plugboard and by setting scores of dials. The EDVAC would allow the user to feed in both the program and the data on punch cards. As von Neumann would later put it:

Conceptually we have discussed…two different forms of memory: storage of numbers and storage of orders. If, however, the orders to the machine are reduced to a numerical code and if the machine can in some fashion distinguish a number from an order, the memory organ can be used to store both numbers and orders. [Arthur Burks, Herman Goldstine, and John von Neumann, “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument,” reprinted in John von Neumann, Collected Works (Macmillan).]

Von Neumann prepared a document called “First Draft of a Report on the EDVAC,” and sent it out to a number of scientists in June 1945. Since von Neumann’s name appeared alone as the author of the report, he is often credited as the sole inventor of the modern stored-program concept, which is not strictly true. The stored program was an idea that the others on the ENIAC team had also thought of—not to mention Charles Babbage with his Analytical Engine! Be that as it may, von Neumann’s name stuck, and the design of all the ordinary computers one sees is known as “the von Neumann architecture.”

Even if this design did not spring full-blown from von Neumann’s brow alone, he was the first to really appreciate how powerful a computer could be if it used a stored program, and he was an eminent enough man to exert influence to help bring this about. Initially the idea of putting both data and instructions into a computer’s memory seemed strange and heretical, not to mention too technically difficult.

The technical difficulty with storing a computer’s instructions is that the machine needs to be able to access these instructions very rapidly. You might think this could be handled by putting the instructions on, say, a rapidly turning reel of magnetic tape, but it turns out that a program’s instructions are not accessed by a single, linear read-through as would be natural for a tape. A program’s execution involves branches, loops, and jumps; the instructions do not get used in a fixed serial order. What is really needed is a way to store all of the instructions in memory so that any location on the list of instructions can be accessed very rapidly.
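
To see what this means in practice, here is a toy model of a stored-program machine, written as a little Python sketch. It is my own illustration, not any historical instruction set: a single memory list holds both numerically coded orders and data, just as von Neumann described, and the conditional-jump order shows how execution leaps to arbitrary addresses instead of reading straight through.

    # A toy von Neumann machine: one memory holds both orders and data.
    # The opcodes and the little program below are made up for illustration.
    LOAD, ADD, STORE, JUMP_IF_POS, HALT = range(5)

    def run(memory):
        acc = 0                    # the accumulator register
        pc = 0                     # program counter: address of the next order
        while True:
            op, addr = memory[pc]  # fetch the order stored at address pc
            pc += 1
            if op == LOAD:         # decode and execute it
                acc = memory[addr]
            elif op == ADD:
                acc += memory[addr]
            elif op == STORE:
                memory[addr] = acc
            elif op == JUMP_IF_POS:
                if acc > 0:
                    pc = addr      # a branch: leap to an arbitrary address
            elif op == HALT:
                return memory

    # This program counts memory[9] down to zero by repeatedly adding -1.
    # Addresses 0 through 4 hold orders; addresses 8 and 9 hold data.
    program = [
        (LOAD, 9),         # 0: acc = counter
        (ADD, 8),          # 1: acc = acc + (-1)
        (STORE, 9),        # 2: counter = acc
        (JUMP_IF_POS, 0),  # 3: loop back to 0 while the counter is positive
        (HALT, 0),         # 4: done
        None, None, None,  # 5-7: unused cells
        -1,                # 8: the constant -1
        3,                 # 9: the counter
    ]
    print(run(program)[9])  # prints 0

No tape could follow the jump from address 3 back to address 0 without rewinding; a random-access memory makes that leap in a single step.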

The fact that the ENIAC used such a staggering number of vacuum tubes raised the engineering problems of its construction to a pyramid-of-Cheops or man-on-the-moon scale of difficulty. That it worked at all was a great inspiration. But it was clear that something was going to have to be done about using all those tubes, especially if anyone wanted to store a lengthy program in a computer’s memory.

Mercury Memory

The trick for memory storage that would be used in the next few computers was almost unbelievably strange, and is no longer widely remembered: bits of information were to be stored as sound waves in tanks of liquid mercury. These tanks or tubes were also called “mercury delay lines.” A typical mercury tube was about three feet long and an inch in diameter, with a piezoelectric crystal attached to each end. If you apply an oscillating electrical current to a piezoelectric crystal it will vibrate; conversely, if you mechanically vibrate one of these crystals it will emit an oscillating electrical current. The idea was to convert a sequence of zeroes and ones into electrical oscillations and feed this signal to the near end of a mercury delay line. The vibrations would travel through the mercury and produce an electrical oscillation at the far end, where the slightly weakened signal would be amplified and its zeroes and ones perhaps read off. Then, presuming that continued storage was desired, the signal would be fed back into the near end of the delay line. The far end was made energy-absorbent so as not to echo the vibrations back towards the near end.

How many bits could a mercury tube hold? The speed of sound (or vibrations) in mercury is roughly a thousand meters per second, so it takes about one thousandth of a second for a vibration to travel the length of a one-meter mercury tube. By making the vibration pulses one millionth of a second long, it was possible to send off about a thousand bits from the near end of a mercury tank before they started arriving at the far end (there to be amplified and sent back through a wire to the near end). In other words, this circuitry-wrapped cylinder of mercury could remember about 1,000 bits, or roughly 125 bytes. Today, of course, it’s common for a memory chip the size of your fingernail to hold many millions of bytes.
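
For the curious, here is a rough Python model of the scheme, using the rounded numbers above (mine, not engineering data). The delay line is treated as a queue of bits in flight, and “amplifying” a bit is simply the act of feeding it back into the near end; real mercury tanks had analog complications this sketch ignores.

    from collections import deque

    # Rounded figures from the text; real values varied with temperature.
    SOUND_SPEED = 1000.0   # meters per second in mercury, approximately
    TUBE_LENGTH = 1.0      # meters
    PULSE_TIME = 1e-6      # one microsecond per bit

    transit_time = TUBE_LENGTH / SOUND_SPEED     # about 0.001 seconds
    capacity = round(transit_time / PULSE_TIME)  # about 1000 bits in flight
    print(capacity, "bits, or about", capacity // 8, "bytes")

    # Model the delay line as a queue of bits in flight. Each tick, the bit
    # arriving at the far end is read, "amplified," and fed back into the
    # near end, unless new data is being written over it.
    line = deque([0] * capacity)

    def tick(write_bit=None):
        """Advance one pulse time; optionally overwrite the recirculated bit."""
        out = line.popleft()  # the bit emerging at the far end
        line.append(out if write_bit is None else write_bit)
        return out

    # Store the pattern 1,0,1,1 at the near end, then let it recirculate.
    for b in (1, 0, 1, 1):
        tick(write_bit=b)
    for _ in range(capacity - 4):
        tick()  # the rest of the line cycles through
    print([tick() for _ in range(4)])  # the pattern comes back: [1, 0, 1, 1]

Notice that the memory never sits still: a bit exists only as a pulse in transit, and the machine must wait for it to come around again, like a tune on a loop of tape.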

A monkey wrench was thrown into the EDVAC plans by the fact that Eckert and Mauchly left the University of Pennsylvania to start their own company. It was the British scientist Maurice Wilkes who created the first practical, full-scale stored-program machine along the lines laid down by the von Neumann architecture. Wilkes’s machine, the EDSAC (for Electronic Delay Storage Automatic Calculator, where “Delay Storage” refers to the mercury delay lines used for memory), began running at Cambridge University in May 1949. Thanks to the use of the mercury memory tanks, the EDSAC needed only 3,000 vacuum tubes.

In an email to me, the mathematician John Horton Conway recalled:

As an undergraduate [at Cambridge University] I saw the mercury delay lines in the old EDSAC machine they had there. The mercury was in thick-walled glass tubes between 6 and 8 feet long, and continually leaked into the containing trays below. Nobody then (late ‘50s) seemed unduly worried about the risks of mercury poisoning.

UNIVAC

Although Eckert and Mauchly were excellent scientists, they were poor businessmen. After a few years of struggle, they turned the management of their computer company over to Remington-Rand (later Sperry-Rand). In 1951, the Eckert-Mauchly division of Remington-Rand delivered the first commercial computer system, the UNIVAC (for Universal Automatic Computer), to the U.S. Census Bureau. The UNIVAC had a console, some tape readers, a few cabinets filled with vacuum tubes, and a bank of mercury delay lines the size of a china closet. This mercury memory held about one kilobyte, and it cost about half a million dollars.

The public became widely aware of the UNIVAC during the night of the presidential election of 1952: Dwight Eisenhower vs. Adlai Stevenson. As a publicity stunt, Remington-Rand arranged to have Walter Cronkite of CBS report a UNIVAC’s prediction of the election outcome based on preliminary returns—the very first time this now-common procedure was done. With only seven percent of the vote in, UNIVAC predicted a landslide victory for Eisenhower. But Remington-Rand’s research director Arthur Draper was afraid to tell this to CBS! The pundits had expected a close election with a real chance of a Stevenson victory, and UNIVAC’s prediction seemed counterintuitive. So Draper had the Remington-Rand engineers quickly tweak the UNIVAC program to make it predict the expected result, a narrow victory for Eisenhower. When, a few hours later, it became evident that Eisenhower would indeed sweep the electoral college, Draper went on TV to improve UNIVAC’s reputation by confessing the subterfuge. One moral here is that a computer’s predictions are only as reliable as its operator’s assumptions.

UNIVACs began selling to businesses in a small way. Slowly, the giant IBM corporation decided to get into the computer business as well. Though their machines were not as good as the UNIVACs, IBM had a great sales force, and most businesses were in the habit of using IBM calculators and punch-card tabulating machines. By 1956, IBM had pulled ahead, with 76 IBM computers installed versus 46 UNIVACs.

Six Generations Of Computers

The 1950s and 1960s were the period when computers acquired many of their unpleasant associations. They were enormously expensive machines used only by large businesses and the government. The standard procedure for running a program on one of these machines was to turn your program into lines of code and to use a keypunch machine to represent each line of code as a punch card. You would submit your little stack of punch cards, and when a sufficient number of cards had accumulated, your program would be run as part of a batch of programs. Your output would be a computer-printed piece of paper containing your results or, perhaps more typically, a series of cryptic error messages.

The history of computers from the 1950s to the 1970s is usually discussed in terms of four generations of computers.

The first generation of commercial computers ran from 1950 to about 1959. These machines continued to use vacuum tubes for their most rapid memory, and for the switching circuits of their logic and arithmetic units. The funky old mercury delay line memories were replaced by memories in which each bit was stored by a tiny ring or “core” of a magnetizable compound called ferrite. Each core had three wires running through it, and by sending pulses of electricity through the wires, the bit in the core could be read or changed. Tens of thousands of these washer-like cores would be woven together into a cubical “core stack” several inches on a side.
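
The selection trick was “coincident currents”: each wire carried only half the current needed to flip a core, so only the one core at the crossing of an energized X wire and an energized Y wire would change state, and a flip announced itself as a pulse on a sense wire. Here is a rough Python sketch of that principle; it is my own toy model, and real core planes had further wiring subtleties it ignores.

    # A toy model of coincident-current core selection. A core flips only
    # when it feels the sum of two half-currents, one from an X wire and
    # one from a Y wire crossing it.
    SIZE = 8
    plane = [[0] * SIZE for _ in range(SIZE)]  # one bit per ferrite core
    THRESHOLD = 1.0                            # current needed to flip a core

    def pulse(x_wire, y_wire, polarity):
        """Drive half-currents down one X wire and one Y wire.
        Returns True if the core at their crossing flipped (a sense pulse)."""
        flipped = False
        for y in range(SIZE):
            for x in range(SIZE):
                current = 0.0
                if x == x_wire:
                    current += 0.5  # half-current from the X wire
                if y == y_wire:
                    current += 0.5  # half-current from the Y wire
                if current >= THRESHOLD:          # only the crossing qualifies
                    target = 1 if polarity > 0 else 0
                    if plane[y][x] != target:
                        plane[y][x] = target
                        flipped = True            # induces a sense-wire pulse
        return flipped

    def read(x, y):
        """Reading is destructive: drive the core toward 0, watch for a flip."""
        bit = 1 if pulse(x, y, -1) else 0
        if bit:
            pulse(x, y, +1)  # the controller must rewrite the bit it erased
        return bit

    pulse(3, 5, +1)                # write a 1 into the core at (3, 5)
    print(read(3, 5), read(0, 0))  # prints: 1 0

The reward for all this fuss was exactly what the stored-program design needed: any core in the stack could be reached directly, in any order.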

The second generation of computers lasted from 1959 to 1963. During this period, computers used transistors instead of vacuum tubes. By now the vast majority of computers were made by IBM, but one of the most famous second-generation computers was the first PDP (Programmed Data Processor) model from the Digital Equipment Corporation. The PDP-1 was of key importance because it was the first machine that people could use in real time. That is, instead of waiting a day to get your batch-processed answers back, you could program the PDP-1 and get answers back right away via an electric typewriter. It also had a screen capable of displaying a dozen or so characters at a time.

The third generation of computers began with the IBM 360 series of computers in 1964. The first of these machines used “solid logic technology,” in which several distinct electronic components were soldered together on a ceramic substrate. Quite soon this kludge (an ungainly bit of hardware or computer code) was replaced by small-scale integrated circuits, in which a variety of electronic components were incorporated as etched patterns on a single piece of silicon. Over the decade leading up to 1975, the integrated circuits got more and more intricate, morphing into what came to be called VLSI, or “very large scale integrated,” circuits.

The fourth generation of computers began in 1975, when VLSI circuits got so refined that a computer’s complete logical and arithmetic processing circuits could fit onto a single chip known as a microprocessor. A microprocessor is the heart of each personal computer or workstation, and every year a new, improved crop of them appears, not unlike Detroit’s annual new lines of cars.

Although computer technology continues to advance as rapidly as ever, people have dropped the talk about generations. The “generation of computer” categorization became devalued and confused. On the one hand, there was a lot of meaningless hype on the part of people saying they were out to “invent the fifth generation computer”—the Japanese computer scientists of the 1980s were particularly fond of the phrase. And on the other hand the formerly dynastic advance of computing split up into a family tree of cousins. Another reason for the demise of the “generation” concept is that rather than radically changing their design, microprocessor chips keep getting smaller and faster via a series of incremental rather than revolutionary redesigns.

One might best view the coming of the decentralized personal computers and desktop workstations as an ongoing fifth generation of computers. The split between the old world of mainframes and the new world of personal computers is crucial. And if you want to push the generation idea even further, it might make sense to speak of the widespread arrival of networking and the Web as a late 1990s development which turned all of the world’s computers into one single sixth generation computer—a new planet-wide system, a whole greater than its parts.

Moloch And The Hackers

Though it was inspired by Fritz Lang’s Metropolis and the silhouette of the Sir Francis Drake Hotel against the 1955 San Francisco night skyline, the “Moloch” section of Allen Ginsberg’s supreme Beat poem “Howl” also captures the feelings that artists and intellectuals came to have about the huge mainframe computers such as UNIVAC and IBM:
