
Authors: Rudy Rucker


BOOK: Collected Essays
11.73Mb size Format: txt, pdf, ePub

Seek Ye The Gnarl!

New dimensional CA hacks are possible, and new, marketable techniques of parallel programming are lying around waiting to be found, both in the form of individual CA structures and in the form of wholly different rules.

CA structures are labile and can be bred in three senses: one can collide and interface different local patterns within the framework of a fixed CA rule, one can combine globally different CA rules (or ideas about them) to produce wholly new ecologies, or one can “gene-splice” the logic of successful rules. Then, like Alexander von Humboldt in the Americas, one botanizes and zoologizes and mineralizes, looking for whatever artificially alive information structures can be found in the new worlds. As always, both top-down and bottom-up approaches are viable. We use bottom-up to find new ecologies and their flora and fauna. We use top-down to seed a given instance of a particular ecology with the sort of gene-tailored computer agents we want to breed.

In my own bottom-up searches I begin simply by hoping that my programs will display interesting output for a long time. Then I begin to hope that my programs will be robust under varying initial conditions, and that they will be reactive in anthropomorphizable ways. Once the program is, at this very rudimentary level, artificially alive, I may cast about for applications in some practical domain.

As I mentioned above, I think the most productive near-term applications of CAs are to image generation and image processing. A cycle or two of an averaging CA rule, for instance, can be used for easy image cleanup, munching down all stray “turd bits.” This technique, known as “convolution” in the literature, is used every day by NASA’s massively parallel computer in Beltsville, Maryland, to process terabyte arrays of satellite photo data. Present-day designers of paint and graphics packages commonly put CA-based rules into their image-processing toolboxes. Several Photoshop plug-ins, for instance, use CAs.
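The cleanup idea can be sketched in a few lines of Python. This is only an illustration, not any particular package's convolution code: one pass of a majority-vote averaging rule erases an isolated noise bit while leaving solid regions of a binary image filled.

```python
# Minimal sketch: one pass of a majority-vote "averaging" CA rule
# that cleans isolated noise bits out of a binary image.
# The grid wraps around at the edges (a torus); names are illustrative.

def denoise_step(grid):
    """Each cell becomes the majority value of its 3x3 neighborhood."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = sum(grid[(y + dy) % h][(x + dx) % w]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = 1 if total >= 5 else 0  # majority of the 9 cells
    return out

# A lone stray bit in an empty field vanishes after one pass.
noisy = [[0] * 5 for _ in range(5)]
noisy[2][2] = 1
clean = denoise_step(noisy)
```

One cycle of this rule wipes out the stray bit, while a fully set region (every neighborhood all ones) passes through unchanged.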

CAs have still not been sufficiently exploited for original image generation. How about a logo that instead of being chrome is matte and luminous, with a smooth curved surface made of tiny moving mosaics of light, light-bits that form crawling dirty CA gliders, or that shudder in psychedelic washes of color? These are what the expressive “flickercladding” skins of the boppers and moldies look like in my A-Life science fiction Ware novels.

Many simulation applications exist as well. The idea is to find a CA rule that looks like something you want to model. If you are lucky there will be some common underlying mathematics between the two. Some rules, for instance, are difference-method solutions of the Laplacian equation, which models both diffusion of chemicals and heat flow. Wave motions can be modeled as well. (Since CA Lab, my students and I developed a cellular automata package specifically designed for physical simulation. This is the CAPOW software for simulating 1-D and 2-D continuous-valued cellular automata. It’s available for free download from my site.)
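The difference-method connection can be made concrete. Here is a minimal sketch (not the CAPOW code itself) of a continuous-valued 1-D CA in which each cell relaxes toward the average of its two neighbors, an explicit finite-difference form of heat flow and diffusion:

```python
# Sketch of a continuous-valued 1-D CA for heat flow / diffusion:
# each cell moves toward the average of its two neighbors.
# This illustrates the difference-method idea only; the cell ring
# wraps around, and the rate constant is an arbitrary choice.

def diffuse(cells, rate=0.5, steps=1):
    """Explicit finite-difference update: u += rate * (neighbor avg - u)."""
    n = len(cells)
    for _ in range(n and steps):
        cells = [cells[i] + rate * ((cells[i - 1] + cells[(i + 1) % n]) / 2
                                    - cells[i])
                 for i in range(n)]
    return cells

# A single hot cell spreads out; total heat on the ring is conserved.
row = [0.0] * 8
row[3] = 1.0
later = diffuse(row, steps=10)
```

Because each update is a weighted average, the total "heat" in the ring stays constant while the peak flattens, which is exactly the qualitative behavior of the diffusion equation.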

A final application of CAs is to encryption. Either a CA can serve as a cheap source of “essentially random” encryption bits, or the whole message can be fed to a reversible CA. Stephen Wolfram actually patented the one-dimensional rule with “Wolfram code 30” as part of an encryption scheme. (Stephen Wolfram, U.S. Patent Number 4,691,291, “Random Sequence Generators”, granted September 1, 1987.)
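The keystream idea can be sketched as follows. The row width, the single-cell seed, and the wraparound boundary here are arbitrary illustrative choices, not the details of Wolfram's patented scheme; the point is just that collecting the center column of rule 30 yields bits that can be XORed with a message.

```python
# Sketch of the rule-30 keystream idea: run the 1-D CA and collect
# the center cell's bit at each step. Rule 30's update is
# new = left XOR (center OR right). Width and seed are arbitrary.

def rule30_bits(width=65, steps=32):
    row = [0] * width
    row[width // 2] = 1                       # single seed cell
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % len(row)])
               for i in range(len(row))]      # wraparound neighbors
    return bits

keystream = rule30_bits()
# XOR a (toy) 4-bit message with the keystream to encrypt;
# XORing again with the same keystream decrypts it.
ciphertext = [b ^ k for b, k in zip([1, 0, 1, 1], keystream)]
```

Decryption is the same operation run again, since XORing twice with the same bit restores the original.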

But to recapitulate, the real reason for studying CAs is to promote Artificial Life. The most important use for cellular automata will be as “universes” or “arenas” in which to evolve better fractals, bots, virtual ants, neural nets and expert agents, using gene-splicing, mutation, and our own “divine interventions” to achieve a rapid and dramatic evolution in these parallel processes. CA workers need your help in accomplishing the manifest destiny of mankind: to pass the torch of life and intelligence on to the computer. There are no more than a few hundred active workers in the CA field today. Twenty-first century technology will need thousands more!

History of Cellular Automata: Von Neumann to Gosper

Cellular automata were invented in the late 1940s by Stanislaw Ulam (1909 - 1984) and John von Neumann (1903 - 1957). One can say that the “cellular” comes from Ulam, and the “automata” from von Neumann.

Ulam was primarily a mathematician. He invented the Monte Carlo simulation technique, the highly infinite “measurable cardinals” of set theory, and he made contributions to analysis and number theory. With Edward Teller, Ulam was the co-inventor of the hydrogen bomb. Von Neumann was a still more wide-ranging mathematician. He did work in set theory, in the foundations of quantum mechanics, in economics, and in game theory. In addition, von Neumann greatly influenced the logical architecture of the first electronic computers.

In the late 1940s, von Neumann gave some ground-breaking lectures on the topic of whether or not it would ever be possible for a machine, or “automaton,” to reproduce itself.

Usually a machine makes something much simpler than itself—consider a huge milling machine turning out bolts. Could a machine possibly fabricate machines as complicated as itself? Or is there some extra-mechanical magic to self-reproduction? To simplify the problem, von Neumann suggested that we suppose that our robots or automata are made up of a small number of standardized parts:

I will introduce as elementary units neurons, a “muscle,” entities which make and cut fixed contacts, and entities which supply energy, all defined with about that degree of superficiality with which [the theory of neural networks] describes an actual neuron. If you describe muscles, connective tissues, “disconnecting tissues,” and means of providing metabolic energy…you probably wind up with something like ten or twelve or fifteen elementary parts. [John von Neumann, “Theory and Organization of Complicated Automata,” reprinted in his Theory of Self-Reproducing Automata (University of Illinois Press).]

Using the idea of machines made up of multiple copies of a small number of standardized elements, von Neumann posed his question about robot self-reproduction as follows.

Can one build an aggregate out of such elements in such a manner that if it is put into a reservoir, in which there float all these elements in large numbers, it will then begin to construct other aggregates, each of which will at the end turn out to be another automaton exactly like the original one? [John von Neumann, “The General and Logical Theory of Automata,” reprinted in his Collected Works (Macmillan).]

This weird scenario prefigures a scene in Kurt Vonnegut’s Sirens of Titan, where an unhappy robot tears himself apart and floats the pieces in a lake. Using techniques of mathematical logic, von Neumann was able to deduce that such self-reproduction should in fact be possible. His proof hinged on the idea that an automaton could have a blueprint for building itself, and that in self-reproduction two steps would be necessary: (1) to make an exact copy of the blueprint, and (2) to use the blueprint as instructions for making a copy of the automaton. The role of the blueprint is entirely analogous to the way DNA is used in biological self-reproduction, where the DNA is both copied and used as instructions for building new proteins.

The complexity of a reservoir full of floating machine parts hindered von Neumann from making his proof convincing. The next step came from Stanislaw Ulam, who was working with von Neumann at Los Alamos during those years. Ulam’s suggestion was that instead of talking about machine parts in a reservoir, von Neumann should think in terms of an idealized space of cells that could hold finite state-numbers representing different sorts of parts.

Ulam’s first published reference to this idea reads as follows:

An interesting field of application for models consisting of an infinite number of interacting elements may exist in the recent theories of automata. A general model, considered by von Neumann and the author, would be of the following sort: Given is an infinite lattice or graph of points, each with a finite number of connections to certain of its “neighbors.” Each point is capable of a finite number of “states.” The states of neighbors at time Tn induce, in a specified manner, the state of the point at time Tn+1. One aim of the theory is to establish the existence of subsystems which are able to multiply, i.e., create in time other systems identical (“congruent”) to themselves. [Stanislaw Ulam, “Random Processes and Transformations,” reprinted in his Sets, Numbers and Universes (MIT Press).]
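Ulam's definition translates almost word for word into code: a lattice of points, a finite set of states, and a rule by which the neighbors' states at one time step induce each point's state at the next. The sketch below runs his model on a finite wraparound grid; the particular three-state "cyclic" rule is an arbitrary illustration, not one that Ulam studied.

```python
# Ulam's model on a finite wraparound grid: every point updates
# synchronously from its own state and its neighbors' states.
# The cyclic rule used here is just an arbitrary illustration.

def step(grid, rule):
    """One synchronous update: new state = rule(old state, neighbor states)."""
    h, w = len(grid), len(grid[0])
    return [[rule(grid[y][x],
                  [grid[(y + dy) % h][(x + dx) % w]
                   for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))])
             for x in range(w)]
            for y in range(h)]

def cyclic(state, neighbors, states=3):
    """A cell advances to the next state if some neighbor already has it."""
    nxt = (state + 1) % states
    return nxt if nxt in neighbors else state

grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
grid = step(grid, cyclic)   # the central 1 "infects" its four neighbors
```

After one step the single 1 has spread to the four cells that border it, while the corners, which see no 1 among their neighbors, stay at 0.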

By 1952, von Neumann had completed a description of such a self-reproducing “cellular automaton” which uses 29 states. Von Neumann’s CA work was not published during his lifetime; it seems that once he saw the solution, he became distracted and moved on to other things. Ulam continued working on a number of simpler cellular automata, publishing several papers on them during the early 1960s.

The next big event in CA history occurred in 1970. In his popular Mathematical Games column, Martin Gardner wrote about how John Horton Conway, a mathematician at the University of Cambridge, had discovered a fascinating two-dimensional cellular automaton so rich in patterns and behavior that it was known as “Life.” Conway’s vague initial goal had been to find a cellular automaton rule in which simple patterns could grow to a large size, but in which it was not clear whether any patterns could grow forever.

“Conway conjectures that no pattern can grow without limit. Put another way, any configuration with a finite number of counters cannot grow beyond a finite upper limit to the number of counters on the field. This is probably the deepest and most difficult question posed by the game. Conway has offered a prize of $50 to the first person who can prove or disprove the conjecture before the end of the year. One way to disprove it would be to discover patterns that keep adding counters to the field: a gun (a configuration that repeatedly shoots out moving objects such as the glider), or a puffer train (a configuration that moves about but leaves behind a trail of ‘smoke’).” [Martin Gardner, “Mathematical Games: The fantastic combinations of John Conway’s new solitaire game Life” (Scientific American, October 1970).]
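The rule Gardner was describing is simple to state in code: a live cell survives with two or three live neighbors, and a dead cell is born with exactly three. In the sketch below (an illustration on a small toroidal grid, not Conway's own notation), a glider returns to its original shape, shifted one cell diagonally, every four generations.

```python
# Conway's Life: live cells survive with 2 or 3 live neighbors,
# dead cells are born with exactly 3. Cells are stored as a set of
# live (x, y) coordinates on a small wraparound (toroidal) grid.
from collections import Counter

def life_step(cells, size=8):
    counts = Counter(((x + dx) % size, (y + dy) % size)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
# After 4 generations the glider reappears, displaced by (+1, +1).
```

Tracking only the live cells in a set, rather than scanning a full array, is a common trick for sparse Life patterns like this one.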

The prize was won a month later by William Gosper and five fellow hackers at MIT; they sent Martin Gardner a telegram with the coordinates of the cells to turn on to make a glider gun. Steven Levy’s Hackers has a good section about Gosper and the early excitement over Life among the users of the PDP-6 computer at the MIT Artificial Intelligence Project. Levy has a nice quote from Gosper, telling how he saw Life as a way to

“…basically do new science in a universe where all the smart guys haven’t already nixed you out two or three hundred years ago. It’s your life story if you’re a mathematician: every time you discover something neat, you discover that Gauss or Newton knew it in his crib. With Life you’re the first guy there, and there’s always fun stuff going on. You can do everything from recursive function theory to animal husbandry. There’s a community of people who are sharing their experiences with you. And there’s the sense of connection between you and the environment. The idea of where’s the boundary of a computer. Where does the computer leave off and the environment begin?” [Steven Levy, Hackers: Heroes of the Computer Revolution (Doubleday).]

One must remember that 1970 was still the Dark Ages of computing; Conway himself ran his Life simulations by marking the cells with checkers or flat Othello counters. For Gosper and his team to get Life to run on a monitor at all was a nontrivial feat of hacking—it was a new thing to do with a computer. After Gardner’s second column on Life, the game became something of a mania among computer users. By 1974, an article about Life in Time could complain that “millions of dollars in valuable computer time may have already been wasted by the game’s growing horde of fanatics.”

More and more intricate Life patterns were found all through the ‘70s, and by 1980, Conway had enough Life machinery at hand to publish a detailed proof that Life can be used to simulate any digital computation whatsoever. The significance of Conway’s proof is not that he shows that some cellular automaton can act as a universal computer, for von Neumann already proved this; and for that matter Alvy Ray Smith’s Stanford dissertation of 1970 describes a universal one-dimensional CA computer. (Smith later founded the computer graphics company Pixar.) The significance of Conway’s proof is that he shows that the specific rule called Life can itself act as a universal computer.

A number of people at MIT began studying cellular automata other than Life during the 1970s. One of the most influential figures there was Edward Fredkin. Although he himself held no higher degrees, Fredkin was a professor associated with the MIT Laboratory for Computer Science, and he directed a number of dissertations on cellular automata.

