The Vulcan science officer, Mr. Spock, with his totally logical approach to life, and Dr. McCoy, the emotional humanist, represented the extreme cases for and against computer technology. Captain Kirk occupied the middle ground, relying on both Spock and McCoy for guidance. Clearly, a balanced path between the ideologies was the right choice.
For example, in “The Conscience of the King,” Kirk suspects that an actor, Anton Karidian, is in reality the notorious Kodos the Executioner, a mass murderer presumed dead for the past twenty years. Spock and Kirk use the ship's computer to conduct a thorough investigation of the massacre for which Kodos was responsible. They soon discover that most of the witnesses to the disaster have died under mysterious circumstances over the years. Each death took place when Karidian's theater company was nearby.
Kirk refuses to be swayed by circumstantial evidence. Instead, he orders a sound scan made of the actor's voice and instructs the computer to compare the recording with one of Kodos' speeches. The computer states that the two voices are identical.
Despite this evidence, the captain still refuses to act. He's unwilling to trust a machine's judgment when a man's life is at stake. So much for twenty-third-century technology.
When Captain Kirk is brought to trial for killing one of his crew in “Court Martial,” the Board of Review studying the computer transcripts has no such qualms. They know beyond doubt that computers don't lie. However, Kirk's lawyer, Samuel T. Cogley, puts his trust in law books instead of computer records because, he tells Kirk, the books contain much more detailed information. Cogley's defense of Kirk consists of angrily declaring that computer evidence isn't enough to prove a man guilty. He argues his point “in the name of humanity fading before the machine.”
This mistrust of computer technology is a recurring theme of the original series. In “The Return of the Archons,” Kirk and his crew battle Landru, a giant thinking machine that's frozen a planet's society without change for thousands of years to protect the inhabitants from any possible danger. The theme is similar in “The Apple,” in which the immense computer, Vaal, maintains an Eden-like environment for a handful of servants who in return keep it supplied with raw materials. In “The Ultimate Computer,” a near-omniscient computer is given control of the Enterprise during space-war games to prove it can outperform the ship's human crew. The result, of course, is disaster.
 
When Star Trek first appeared, in 1966, computers were less than a quarter-century old. Even in the mid-1970s, those of us who were teenage programmers weren't allowed in the “computer room.” We gave our coding sheets to an engineer who sat behind a bullet-proof glass window. Behind him, huge machines hummed and roared, churned giant magnetic tapes, and spat hardcopies from clanking printers onto the floor. More than anything, we wanted to get behind the glass window, touch the computers, and see how they really worked. But only the elite, the engineers of the computer room, were allowed to serve the god-like machines. The original series computers shared this mystical quality. Massive and unpredictable, they exerted their powers in accordance with inhuman, universal laws.
The first generation of computers (1945–1956), developed during World War II, consisted basically of huge collections of on/off switches. When grouped together, these switches represented numbers that were then manipulated to solve mathematical problems. Computers were used by the United States Navy, for instance, to create ballistic charts for aiming artillery. The actual switches in these computers were vacuum tubes: large glass tubes in which electric current could flow between metal electrodes. “On” was when electrons were flowing in the tube; “off” was when they were not. On paper these two positions were represented by the numbers 1 and 0. Each switch was called a bit.
Eight bits grouped together, forming a sequence of zeros and ones, were called a byte. A row of eight bits could form 2⁸, or 256, unique strings of ones and zeros: enough to represent an entire alphabet, as well as numeric digits and punctuation marks. Bytes soon became the standard unit of measurement of computer storage.
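A few lines of Python (a modern language, used here purely as an illustrative sketch) make the arithmetic concrete; the ASCII code shown is a later standard, but it embodies exactly this idea:

    # Each bit is an on/off switch: a 1 or a 0.
    # Eight bits make a byte, giving 2**8 distinct patterns.
    print(2 ** 8)  # 256
    # ASCII assigns one eight-bit pattern to each character:
    for char in ("A", "z", "7", "!"):
        print(char, format(ord(char), "08b"))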
Because bytes represent such a small amount of information, computer storage is usually described in kilobytes (2¹⁰, or 1,024 bytes), megabytes (1,024 kilobytes = 2²⁰ = 1,048,576 bytes), or gigabytes (1,024 megabytes). A gigabyte contains more than eight and a half billion bits, or individual switches. The world of computers involves very large numbers. Today, computer storage is escalating into the terabytes: trillions of bytes.
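Again in Python, the units work out as follows (a sketch of my own, not from the original):

    kilobyte = 2 ** 10   # 1,024 bytes
    megabyte = 2 ** 20   # 1,048,576 bytes
    gigabyte = 2 ** 30   # 1,073,741,824 bytes
    print(gigabyte * 8)  # individual switches in a gigabyte: 8,589,934,592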
Working with first-generation computers, engineers were able to perform detailed mathematical calculations by using hundreds or sometimes thousands of vacuum tubes. ENIAC, completed in 1946 by scientists at the University of Pennsylvania, contained nearly 18,000 vacuum tubes and 70,000 resistors, and consumed about 160 kilowatts of electricity whenever it ran. With each tube counted as a single on/off switch, ENIAC thus had a capacity of roughly 18,000 bits, barely two kilobytes.
These primitive vacuum-tube computers were huge. They filled large buildings, generated intense heat, and consumed vast amounts of energy. Running ENIAC reportedly dimmed the lights in a large section of Philadelphia. Immense computers were a staple of the science fiction of this period. The Krell computer in the movie Forbidden Planet was a first-generation computer, as was the computer in Colossus: The Forbin Project. The bigger the machine, the more powerful the electronic brain. Or so it seemed.
The transistor was invented in late 1947 by three scientists at Bell Labs and announced to the world the following year. Transistors were solid-state semiconductors that could do the work of a vacuum tube. Basically, a transistor is a tiny electrical component with three connections: a base, a collector, and an emitter. The voltage between the base and the emitter determines whether electricity flows or is blocked between the emitter and the collector. In essence, a transistor is no more than a miniature on/off switch, controlled by a small voltage. Transistors are just thousandths of an inch wide, and they completely revolutionized electronics.
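The switching behavior can be caricatured in a few lines of Python (an idealized sketch; real transistors are analog devices, and the threshold voltage below is a typical figure for silicon, not a universal constant):

    THRESHOLD = 0.7  # volts: roughly where a silicon transistor turns on

    def transistor(base_emitter_voltage):
        """Return 1 if current flows from collector to emitter, else 0."""
        return 1 if base_emitter_voltage >= THRESHOLD else 0

    print(transistor(0.0))  # 0: the switch is off
    print(transistor(5.0))  # 1: the switch is on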
The switch from vacuum tubes to transistors led to what was called the second generation of computers (1956–1963). These machines were much smaller, faster, and more energy efficient. Equally important, second-generation computers used stored programs: the instructions to run the machine for a certain function were held inside the computer's memory and could quickly be replaced by another set of instructions for a different function. First-generation computers, by contrast, had to be fed a new sequence of instructions along with the numeric data every time the type of problem changed. Stored programs made computers versatile.
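A toy stored-program machine in Python shows the principle (the instruction names and sample programs are invented for illustration):

    def run(program, value):
        # The program is itself data in memory: a list of stored instructions.
        for op, arg in program:
            if op == "add":
                value += arg
            elif op == "mul":
                value *= arg
        return value

    ballistics = [("mul", 2), ("add", 10)]   # one stored program...
    payroll = [("add", 100), ("mul", 3)]     # ...swapped for another in an instant
    print(run(ballistics, 5))  # 20
    print(run(payroll, 5))     # 315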
Another advance in second-generation computers was the development of programming languages. These languages, including COBOL and FORTRAN, replaced the zeros and ones (the binary code) of first-generation machines with words, numbers, and instructions. Writing specific programs for these machines gave rise to the software industry.
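The difference is easy to dramatize in Python (the bit pattern below is illustrative only, not a real machine's opcodes):

    raw_binary = "10110000 00000101 00000100 00000011"  # opaque to most humans
    readable = 5 + 3                                    # the same idea in a language
    print(raw_binary)
    print(readable)  # 8, obvious at a glance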
The computers featured on the original series are first- and second-generation machines projected three hundred years into the future. Thus on the original Enterprise, specific computers handle specific problems: the ship has a library computer, a science computer, a translator computer, and a computer used for navigation. “Futuristic” means they work at incredible speeds and contain vast amounts of information. Many of them are extremely large. Like their primitive ancestors, when pressed to their limits, the machines tend to overheat. Landru, for example, self-destructs in a thick cloud of smoke.
Though larger and faster than the computers of the 1960s, the original series computers display little imagination or innovation in their basic design. Many of them print answers in machine code that must then be translated. Although most original series computers understand English (and even translate languages from alien cultures into English), most can't handle simple graphic displays. They are artificially intelligent in that they understand questions, but they are extremely limited in extrapolating data and reaching conclusions. These computers represent the future as envisioned through a narrow tunnel from the past.
The problem of computer overheating was solved in the real world by the development of a third generation of computers (1964–1971), which used silicon chips in place of individual transistors. The first integrated circuits, invented in 1958, combined three transistors on a single chip. This was quickly followed by the packing of tens, hundreds, and later thousands of transistors onto one chip. The smaller the transistor, the less distance electricity had to travel and the faster it worked. As component size shrank and more and more transistors were squeezed onto a single chip, computers became faster and smaller.
Third-generation computers also featured operating systems, which allowed a machine to run a number of different programs at the same time. Second-generation machines had only been able to work on one problem after another. In third-generation machines, the operating system acted as a central program that monitored and managed all operations of the computer. For the first time, computers were able to do multiple tasks simultaneously, which greatly increased their problem-solving speed.
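A faint modern echo of that advance, sketched in Python (the job names are mine; the operating system underneath does the real scheduling):

    import threading
    import time

    def job(name):
        time.sleep(0.1)  # stand-in for a long computation
        print(name, "finished")

    # Three tasks run concurrently instead of one after another.
    threads = [threading.Thread(target=job, args=(f"job-{i}",)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()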
As integrated circuits spread, the main direction in computer technology was smaller, faster, cheaper. The fourth generation of computers (1971 to the present) began with one of the breakthrough inventions of the twentieth century: the microprocessor. In 1971, the Intel 4004 combined the computer's essential components (CPU, memory, and input/output controls) on a single chip. This first microprocessor contained 2,300 transistors and performed about 60,000 calculations per second. It was manufactured in quantity and then separately programmed for all types of functions.
Soon, computers were everywhere—in televisions, automobiles, watches, microwave ovens, coffeemakers, toys, cash registers, airplanes, telephone systems, electric power grids, and stock market tickers.
Steady improvements in photolithography, the method used to etch circuits onto chips, pushed component sizes even smaller, resulting in faster computer speeds. The smaller the component, the faster a signal traveled between transistors. Large-scale integration (LSI) fit hundreds of transistors onto a chip about half the size of a dime. In the 1980s, VLSI (very large-scale integration) fit hundreds of thousands of components onto a chip. ULSI (ultra-large-scale integration) increased that number into the millions. In 1995, approximately 3.1 million transistors fit onto a single square-inch chip (Intel's Pentium). Modern microprocessors contain as many as twenty million transistors and perform hundreds of millions of calculations per second. A computer with the power of ENIAC, with its nearly 18,000 vacuum tubes, could today fit onto a chip smaller than the period that finishes this sentence.
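Those figures imply a remarkably steady growth rate, which a quick back-of-the-envelope calculation in Python (using only the numbers quoted above) confirms:

    import math

    growth = 3_100_000 / 2_300     # transistor counts: 1995 Pentium vs. 1971 Intel 4004
    doublings = math.log2(growth)  # about 10.4 doublings over those 24 years
    print(24 / doublings)          # about 2.3 years per doubling: Moore's law in action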
Industry experts estimate that there are more than 15 billion microprocessors in use today. Without them, telephones would still have rotary dials, TVs would have knobs instead of remotes, ATMs wouldn't exist, and thousands of other facets of modern life wouldn't work. Nor could an unmanned probe have roamed the surface of Mars, sending us pictures of another planet's landscape.
Equally important, microprocessors enabled computer companies to manufacture computers for home use. In 1981, IBM introduced its Personal Computer (PC) for the home, office, and schools. Today, along with at least half a billion PCs, we have laptops and handheld computers: Palm Pilots, Newtons, a multitude of tiny computers that netsurf for us.
On the original Star Trek, the communicators looked like today's handheld computers. The Etch A Sketch-sized pads used by Captain Kirk to sign instructions, letters, and invoices (while he ogled Yeoman Rand in her miniskirt and cracked jokes about “the pleasures of shore leave”) were larger and clunkier than today's powerful handheld computers. But the use of those pads by Kirk and crew exhibited an astonishing foresight.
Kirk and his crew also used what look like today's desktop PCs to access databases, communicate with each other, and analyze sensor information. But the most amazing example of the original series' foresight is that crewmembers routinely gave each other data on disks that look exactly like today's floppy disks.
While much of the original series reflected the machines and cultural paranoia of the 1960s, the show also provided a remarkable glimpse of technology in the 1980s. Looking twenty years ahead is a far cry from looking three hundred years into the future, of course, but it's probably the best that can be expected.
Just as Kirk's computers reflected the thinking of the 1960s, the TNG, VGR, and DS9 computers reflect today's thinking. They incorporate much of today's best computer science research: redundant architectures, neural nets, top-down as well as bottom-up artificial intelligence, nanotechnology, and virtual reality.
This creates some problems. For example, the Trek computers are outlandish in design and concept. They supposedly run faster than the speed of light, which defies the laws of physics. Though starships travel at warp speed, they actually are warping space, using the fourth-dimensional curvature of spacetime to achieve faster-than-light (FTL) speeds. Nothing in this theory (which is discussed at great length in The Physics of Star Trek by Lawrence M. Krauss [Basic Books, 1995] and is speculative at best) justifies the concept of electrons in circuits moving at FTL speeds.
The computers have a redundant architecture to handle system failures, yet constantly fail. They enable holographic doctors to hit humans and to fall in love. The Deep Space Nine computer is so argumentative and obstinate that Chief O'Brien must put it into manual override to save the space station from blowing up. Yet the same computer requires constant supervision, repair, and instructions from human engineers; in other words, it's not particularly intelligent by today's standards.