Authors: Rudy Rucker
This carries over to the natural world. Many naturally occurring processes are not only gnarly, they’re capable of behaving like any other kind of computation. Wolfram feels that this behavior is very common, and he formulates this notion in the claim that he calls the Principle of Computational Equivalence (PCE): Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
If the PCE is true, then, for instance, a leaf fluttering in the breeze outside my window is as computationally rich a system as my brain. Am I no more than a fluttering leaf? Some scientists find this notion an affront. Personally, I find serenity in accepting that the flow of my thoughts and moods is a gnarly computation that’s fundamentally the same as a cloud, a flame, or a fluttering leaf. It’s soothing to realize that my mind’s processes are inherently uncontrollable. Looking at the waving branches of trees calms me down.
But rather than arguing for the full PCE, I think it’s worthwhile to formulate a slightly weaker claim, which I call the Principle of Computational Unpredictability (PCU): Most naturally occurring complex computations are unpredictable.
In the PCU, I’m using “unpredictable” in a specific computer-science sense; I’m saying that a computation is unpredictable if there’s no fast shortcut way to predict its outcomes. If a computation is unpredictable and you want to know what state it’ll be in after, say, a million steps, you pretty much have to crunch out those million steps to find out what’s going to happen.
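To make this sense of “unpredictable” concrete, here’s a minimal Python sketch of Wolfram’s elementary Rule 30 cellular automaton, a textbook example of a computation with no known shortcut: so far as anyone knows, there is no formula that jumps ahead to step one million, so the loop below really has to grind through every step. (The row width, step count, and wraparound edges are just illustrative choices.)

```python
# Rule 30: each new cell is left XOR (center OR right).  To know the row
# after N steps, you essentially have to compute all N steps.

def rule30_step(cells):
    """One synchronous update of a Rule 30 row with wraparound edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 31
row[15] = 1                 # start from a single "on" cell in the middle
for _ in range(100):        # no closed-form formula lets us skip this loop
    row = rule30_step(row)
print(sum(row))             # how many cells are "on" after 100 steps
```

Even this tiny rule generates a churning, seemingly random triangle of cells; the only way to learn its future is to run it.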
Traditional science is all about finding shortcuts. Physics 101 teaches students to use Newton’s laws to predict how far a cannonball will travel when shot into the air at a certain angle and with a certain muzzle-velocity. But, as I mentioned above, in the case of a real object moving through the air, if we want to get full accuracy in describing the object’s motions, we need to take the turbulent flow of air into account. At least at certain velocities, flowing fluids are known to produce computationally complex patterns—think of the bumps and ripples that move back and forth along the lip of a waterfall, or of eddies of milk stirred into coffee. So an earthly object’s motion will often be carrying out a gnarly computation, and these computations are unpredictable—meaning that the only certain way to get a really detailed prediction of an artillery shell’s trajectory through the air is to simulate the motion every step of the way. The computation performed by the physical motion is unpredictable in the sense of not being reducible to a quick shortcut method. (By the way, simulating trajectories was the very purpose for which the U. S. funded the first electronic computer, ENIAC, in 1946, the same year in which I was born.)
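As a toy version of what ENIAC was built to do, here’s a hedged sketch of step-by-step trajectory simulation: once you add a velocity-squared air-drag term, the textbook range formula no longer applies, and you integrate the motion one small time step at a time. The drag coefficient, step size, and muzzle velocity below are illustrative numbers, not real ballistics.

```python
# Sketch: stepping a shell's flight with air drag.  With drag there is no
# closed-form range formula, so we crunch out the steps one by one.
import math

def range_with_drag(v0, angle_deg, k=0.02, dt=0.001, g=9.81):
    """Integrate the flight step by step until the shell lands.
    k is an illustrative drag coefficient, not a real ballistic value."""
    a = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    while y >= 0.0:
        v = math.hypot(vx, vy)          # current speed
        vx -= k * v * vx * dt           # drag opposes the motion
        vy -= (g + k * v * vy) * dt     # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x

vacuum = 100**2 * math.sin(math.radians(90)) / 9.81   # the Physics 101 shortcut
print(range_with_drag(100, 45), vacuum)               # drag shortens the range
```

Setting `k=0` recovers the vacuum answer, which is the whole point: the shortcut only exists when the gnarl is switched off.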
Physical laws provide, at best, a recipe for how the world might be computed in parallel particle by particle and region by region. But—unless you have access to some so-far-unavailable ultra-super computer that simulates reality faster than the world does itself—the only way to actually learn the results is to wait for the actual physical process to work itself out. There is a fundamental gap between T-shirt physics equations and the unpredictable gnarl of daily life.
Some SF Thought Experiments
One of the nice things about science fiction is that it lets us carry out thought experiments. Mathematicians adopt axioms and deduce the consequences. Computer scientists write programs and observe the results of letting the programs run. Science fiction writers put characters into a world with arbitrary rules and work out what happens.
Science fiction is a powerful futurological tool because, in practice, there are no quick shortcuts for predicting the effects of new technological developments. Only if you place the new tech into a fleshed-out fictional world and simulate the effects in your novelistic reality can you get a clear image of what might happen.
This relates to the ideas I’ve been talking about. We can’t predict in advance the outcomes of naturally occurring gnarly systems; we can only simulate (with great effort) their evolution step by step. In other words, when it comes to futurology, only the most trivial changes to reality have easily predictable consequences. If I want to imagine what our world will be like one year after the arrival of, say, soft plastic robots, the only way to get a realistic vision is to fictionally simulate society’s reactions during the intervening year.
These days I’ve been working on a fictional thought experiment about using natural systems to replace conventional computers. My starting point is the observed fact that gnarly natural systems compute much faster than our supercomputers. Although in principle a supercomputer can simulate a given natural process, such simulations are at present very much slower than what nature does. It’s a simple matter of resources: a natural system is inherently parallel, with all its parts being updated at once. And an ordinary-sized object is made up of something on the order of an octillion atoms (that’s ten to the 27th power). Naturally occurring systems update their states much faster than our digital machines can model the process. That’s why existing computer simulations of reality are still rather crude.
(Let me insert a deflationary side-remark on the Singularity that’s supposed to occur when intelligent computers begin designing even more intelligent computers and so on. Perhaps the end result of this kind of process won’t be a god. Perhaps it’ll be something more like a wind-riffled pond, a campfire, or a fly buzzing around your backyard. Nature is, after all, already computing at the maximum possible flop.)
Now let’s get into my own thought experiment. If we could harness a natural system to act as a computer for us, we’d have what you might call a paracomputer that totally outstrips anything that our man-made beige buzzing desktop machines can do. I say “paracomputer” not “computer” to point out the fact that this is a natural object which behaves like a computer, as opposed to being a high-tech totem that we clever monkeys made. Wolfram’s PCE suggests that essentially any gnarly natural process could be used as a paracomputer.
A natural paracomputer would be powerful enough to be in striking range of predicting other natural systems in real time or perhaps even a bit faster than real time. The problem with our naturally-occurring paracomputers is that they’re not set up for the kinds of tasks we like to use computers for—like predicting the stock-market, rendering Homer Simpson, or simulating nuclear explosions.
To make practical use of paracomputers we need a solution to what you might call the codec or coding-decoding problem. If you want to learn something specific from a simulation, you have to know how to code your data into the simulation and how to decode it back out. Like suppose you’re going to make predictions about the weather by reading tea-leaves. To get concrete answers, you code today’s weather into a cup of tea, which you’re using as a paracomputer. You swirl the cup around, drink the tea, look at the leaves, and decode the leaf pattern into tomorrow’s weather. Codec.
This is a subtle point, so let me state it again. Suppose that you want to simulate the market price of a certain stock, and that you have all the data and equations to do it, but the simulation is so complicated that it requires much more time than the real-time period you want to simulate. And you’d like to turn this computation into, say, the motions of some wine when you pour it back and forth between two glasses. You know the computational power is there in the moving wine. But where’s the codec? How do you feed the market trends into the wine? How do you get the prediction numbers out? Do you drink the paracomputer?
Finding the codec that makes a given paracomputer useful for a particular task is a hard problem, but once you have the codec, your paracomputer can solve things very fast. But how to find the codec? Well, let’s use an SF cheat: let’s suppose that one of the characters in our thought experiment is, oh, a mathematical genius who creates a really clever algorithm for rapidly finding codecs that are, if not perfect, at least robust enough for practical use.
So now suppose that we’re able, for instance, to program the wind in the trees and use it as a paracomputer. Then what? For the next stage of my thought experiment, I’m thinking about a curious real-world limitative result that could come into play. This is the Margolus-Levitin theorem, which says that there’s a maximum rate at which any limited region of spacetime can compute at any given energy level. (See for instance Seth Lloyd’s paper, “The Computational Capacity of the Universe”.) The limit is pretty high—some ten-to-the-fiftieth bit-flips per second on a room-temperature laptop—but SF writers love breaking limits.
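For what it’s worth, the ten-to-the-fiftieth figure checks out on the back of an envelope. The Margolus-Levitin bound works out to roughly 2E/(πħ) operations per second for available energy E; plugging in the full mass-energy of a one-kilogram laptop (the assumption behind Lloyd’s “ultimate laptop” estimate) gives:

```python
# Back-of-envelope check of the Margolus-Levitin bound: at most 2E/(pi*hbar)
# operations per second for energy E.  We assume a 1 kg "laptop" whose
# entire mass-energy E = m*c^2 is put to work (Lloyd's ultimate-laptop setup).
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.0                  # illustrative laptop mass, kg

E = m * c**2                         # available energy, joules
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.2e}")          # about 5.4e50 operations per second
```

An ordinary laptop taps only a minuscule fraction of its mass-energy, which is why real machines sit so far below the bound.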
In the situation I’m visualizing, a couple of crazy mathematicians (some things never change!) make a paracomputer from a vibrating membrane, use clever logic to find desired codecs, and set the paracomputer to predicting its own outputs. I expect the feedback process to produce an ever-increasing amount of computation within the little paracomputer. The result is that the device is on the point of violating the Margolus-Levitin limit, and perhaps the way the universe copes with this is by bulging out a big extra hump of spacetime in the vicinity of the paracomputer. And this hump acts as—a tunnel to a higher universe inhabited by, of course, super-intelligent humanoid cockroaches and carnivorous flying cone shell mollusks!
Now let’s turn the hard-SF knob up to eleven. Even if we had natural paracomputers, we’d still be limited by the PCU, the principle that most naturally occurring computations are unpredictable. Your paracomputers can speed things up by a linear factor because they’re so massively parallel. Nevertheless, by the PCU, most problems would resist being absolutely crushed by clever shortcuts. The power of the paracomputer may indeed let you predict tomorrow’s weather, but eventually the PCU catches up with you. You still can’t predict, say, next week’s weather. Even with a paracomputer you might be able to approximately predict a person’s activities for half an hour, but not to a huge degree of accuracy, and certainly not out to a time several months away. The PCU makes prediction impossible for extended periods of time.
Now, being a science-fiction writer, when I see a natural principle, I wonder if it could fail. Even if it’s a principle such as the PCU that I think is true. (An inspiration here is a story by Robert Coates, “The Law,” in which the law of averages fails. The story first appeared in the New Yorker of Nov 29, 1947, and can also be found in Clifton Fadiman’s The Mathematical Magpie.)
So now let’s suppose that, for their own veiled reasons, the alien cockroaches and cone shells teach our mathematician heroes some amazing new technique that voids the PCU! This notion isn’t utterly inconceivable. Consider, for instance, how drastically the use of language speeds up the human thought process. Or the way that using digital notation speeds up arithmetic. Maybe there’s some thought tool we’ve never even dreamed of that can in fact crush any possible computation into a few quick chicken-scratches on the back of a business card. So our heroes learn this trick and they come back to spread the word.
And then we’ve got a world where the PCU fails. This is a reality where we can rapidly predict all kinds of things arbitrarily far into the future: weather, moods, stocks, health. A world where people have oracles. SF is all about making things immediate and tactile, so let’s suppose that an oracle is like a magic mirror. You look into it and ask it a question about the future, and it always gives you the right answer. Nice simple interface. What would it be like to live in a world with oracles?
I’m not sure yet. I’m still computing the outcome of this sequence of thought experiments—the computation consists of writing an SF novel called Mathematicians in Love.
How Gnarly Computation Ate My Brain
I got my inspiration for universal automatism from two computer scientists: Edward Fredkin and Stephen Wolfram. In the 1980s Fredkin (see digitalphilosophy.org) began saying that the universe is a particular kind of computation called a cellular automaton (CA for short). The best-known CA is John Conway’s Game of Life, but there are lots of others. I myself have done research involving CAs, and have perpetrated two separate free software packages for viewing them.
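For readers who haven’t met it, the Game of Life is easy to sketch. This minimal Python version (the sparse-set representation is just one convenient choice) applies Conway’s rules: a live cell survives with two or three live neighbors, and a dead cell with exactly three live neighbors is born. The famous “glider” pattern crawls one cell diagonally every four generations.

```python
# Minimal sketch of Conway's Game of Life on a sparse grid of live cells.
from collections import Counter

def life_step(live):
    """live is a set of (x, y) cells; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):     # after 4 steps the glider reappears, shifted diagonally
    g = life_step(g)
print(len(g))          # still 5 cells
```

Simple as the rules are, Life is computation-universal, which is what makes it such a vivid poster child for universal automatism.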
Wolfram is subtler than Fredkin; he doesn’t say that the universe is a cellular automaton. Wolfram feels that the most fundamental secret-of-life type computation should instead be something like a set of rules for building up a network of lines and dots. He’s optimistic about finding the ultimate rule; recently I was talking to him on the phone and he said he had a couple of candidates, and was trying to grasp what it might mean to say that the secret of the universe might be some particular rule with some particular rule number. Did someone say 42?
I first met Wolfram at the Princeton Institute for Advanced Study in 1984; I was a freelancer writing an article about cellular automata destined for, as chance would have it, Isaac Asimov’s Science Fiction Magazine (April, 1987). You might say that Wolfram converted me on the spot. I moved to Silicon Valley, retooled, and became a computer science professor at San Jose State University (SJSU), also doing some work as a programmer for the computer graphics company Autodesk. I spent the last twenty years in the dark Satanic mills of Silicon Valley. Originally I thought I was coming here as a kind of literary lark—like an overbold William Blake manning a loom in Manchester. But eventually I went native on the story. It changed the way I think.
For many years, Wolfram promised to publish a book on his ideas, and finally in 2002 he published his monumental A New Kind of Science, now readable in its entirety online. I like this book exceedingly; I think it’s the most important science book of our generation. At one point, my SJSU grad students and I even created a website for it.