
Fortunately, the limitation that the information being processed must be digital does not detract from the universality of digital computers – or of the laws of physics. If measuring the goats in whole numbers of inches is insufficient for a particular application, use whole numbers of tenths of inches, or billionths. The same holds for all other applications: the laws of physics are such that the behaviour of any physical object – and that includes any other computer – can be simulated with any desired accuracy by a universal digital computer. It is just a matter of approximating continuously variable quantities by a sufficiently fine grid of discrete ones.
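To see how a ‘sufficiently fine grid’ delivers any desired accuracy, here is a minimal sketch in Python (an illustration of my own, with a made-up goat height): the approximation error is at most half the grid spacing, so refining the grid shrinks it without limit.

def digitize(value, grid_spacing):
    """Snap a continuously variable quantity to the nearest grid point."""
    return round(value / grid_spacing) * grid_spacing

height = 37.2918  # a goat's height in inches (hypothetical)
for spacing in (1.0, 0.1, 1e-9):  # whole inches, tenths, billionths
    approx = digitize(height, spacing)
    print(spacing, approx, abs(approx - height))  # error <= spacing / 2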

Because of the necessity for error-correction, all jumps to universality occur in digital systems. It is why spoken languages build words out of a finite set of elementary sounds: speech would not be intelligible if it were analogue. It would not be possible to repeat, nor even to remember, what anyone had said. Nor, therefore, does it matter that universal writing systems cannot perfectly represent analogue information such as tones of voice. Nothing can represent those perfectly. For the same reason, the sounds themselves can represent only a finite number of possible meanings. For example, humans can distinguish between only about seven different sound volumes. This is roughly reflected in standard musical notation, which has approximately seven different symbols for loudness (such as p, mf, f, and so on). And, for the same reason, speakers can only intend a finite number of possible meanings with each utterance.
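Why digits permit error-correction, and analogue signals do not, can be seen in a toy simulation (my own illustration, not the book’s): when every copy is snapped back to the nearest legal symbol, small copying errors are erased at each generation instead of accumulating into a random walk.

import random

SYMBOLS = (0.0, 1.0)  # a two-letter digital alphabet

def noisy_copy(value, noise=0.05):
    """One imperfect copying step: every copy adds a little analogue error."""
    return value + random.uniform(-noise, noise)

def correct(value):
    """Error-correction: snap the copy to the nearest legal symbol."""
    return min(SYMBOLS, key=lambda s: abs(s - value))

analogue = digital = 1.0
for _ in range(1000):
    analogue = noisy_copy(analogue)          # errors accumulate
    digital = correct(noisy_copy(digital))   # errors are wiped out

print(analogue)  # has drifted well away from 1.0
print(digital)   # still exactly 1.0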

Another striking connection between all those diverse jumps to universality is that they all happened on Earth. In fact all known jumps to universality happened under the auspices of human beings – except one, which I have not mentioned yet, and from which all the others, historically, emerged. It happened during the early evolution of life.

Genes in present-day organisms replicate themselves by a complicated and very indirect chemical route. In most species they act as templates for forming stretches of a similar molecule, RNA. Those then act as programs which direct the synthesis of the body’s constituent chemicals, especially enzymes, which are catalysts. A catalyst is a kind of constructor – it promotes a change among other chemicals while remaining unchanged itself. Those catalysts in turn control all the chemical production and regulatory functions of an organism, and hence define the organism itself, crucially including a process that makes a copy of the DNA. How that intricate mechanism evolved is not essential here, but for definiteness let me sketch one possibility.

About four billion years ago – soon after the surface of the Earth had cooled sufficiently for liquid water to condense – the oceans were being churned by volcanoes, meteor impacts, storms and much stronger tides than today’s (because the moon was closer). They were also highly active chemically, with many kinds of molecules being continually formed and transformed, some spontaneously and some by catalysts. One such catalyst happened to catalyse the formation of some of the very kinds of molecules from which it itself was formed. That catalyst was not alive, but it was the first hint of life.

It had not yet evolved to be a well-targeted catalyst, so it also accelerated the production of some other chemicals, including variants of itself. Those that were best at promoting their own production (and inhibiting their own destruction) relative to other variants became more numerous. They too promoted the construction of variants of themselves, and so evolution continued.

Gradually, the ability of these catalysts to promote their own production became robust and specific enough for it to be worth calling them replicators. Evolution produced replicators that caused themselves to be replicated ever faster and more reliably.
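The logic of that selection process fits in a few lines. Here is a toy model (mine, with invented replication rates): a variant that promotes its own production even slightly faster than its rivals soon makes up essentially the whole population.

# Three hypothetical variants with slightly different replication rates.
rates = {'A': 1.00, 'B': 1.02, 'C': 1.05}
population = {name: 1000.0 for name in rates}

for _ in range(500):  # 500 generations of differential replication
    population = {name: count * rates[name] for name, count in population.items()}

total = sum(population.values())
for name, count in population.items():
    print(f'{name}: {100 * count / total:.6f}% of the population')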

Different replicators began to join forces in groups, each of whose members specialized in causing one part of a complex web of chemical reactions whose net effect was to construct more copies of the entire group. Such a group was a rudimentary organism. At that point, life was at a stage roughly analogous to that of non-universal printing, or Roman numerals: it was no longer a case of each replicator for itself, but there was still no universal system being customized or programmed to produce specific substances.

The most successful replicators may have been RNA molecules. They have catalytic properties of their own, depending on the precise sequence of their constituent molecules (or bases, which are similar to those of DNA). As a result, the replication process became ever less like straightforward catalysis and ever more like programming – in a language, or genetic code, that used bases as its alphabet.

Genes are replicators that can be interpreted as instructions in a genetic code. Genomes are groups of genes that are dependent on each other for replication. The process of copying a genome is called a living organism. Thus the genetic code is also a language for specifying organisms. At some point, the system switched to replicators made of DNA, which is more stable than RNA and therefore more suitable for storing large amounts of information.

The familiarity of what happened next can obscure how remarkable and mysterious it is. Initially, the genetic code and the mechanism that interpreted it were both evolving along with everything else in the organisms. But there came a moment when the code stopped evolving yet the organisms continued to do so. At that moment the system was coding for nothing more complex than primitive, single-celled creatures. Yet virtually all subsequent organisms on Earth, to this day, have not only been based on DNA replicators but have used exactly the same alphabet of bases, grouped into three-base ‘words’, with only small variations in the meanings of those ‘words’.
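Those three-base ‘words’ are what molecular biologists call codons. A small fragment of that shared dictionary, sketched in Python (only a handful of the 64 entries in the real table are shown here):

# A few entries from the (nearly) universal genetic code.
CODON_TABLE = {
    'ATG': 'Met',                  # methionine; also the usual 'start' signal
    'TTT': 'Phe', 'TTC': 'Phe',    # phenylalanine
    'GGT': 'Gly', 'GGC': 'Gly',    # glycine
    'TAA': 'STOP', 'TAG': 'STOP', 'TGA': 'STOP',
}

def translate(dna):
    """Read a DNA string three bases at a time and look up each 'word'."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(codon, '?') for codon in codons]

print(translate('ATGTTTGGCTAA'))  # ['Met', 'Phe', 'Gly', 'STOP']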

That means that, considered as a language for specifying organisms, the genetic code has displayed phenomenal reach. It evolved only to specify organisms with no nervous systems, no ability to move or exert forces, no internal organs and no sense organs, whose lifestyle consisted of little more than synthesizing their own structural constituents and then dividing in two. And yet the same language today specifies the hardware and software for countless multicellular behaviours that had no close analogue in those organisms, such as running and flying and breathing and mating and recognizing predators and prey. It also specifies engineering structures such as wings and teeth, and nanotechnology such as immune systems, and even a brain that is capable of explaining quasars, designing other organisms from scratch, and wondering why it exists.

During the entire evolution of the genetic code, it was displaying far less reach. It may be that each successive variant of it was used to specify only a few species that were very similar to each other. At any rate, it must have been a frequent occurrence that a species embodying new knowledge was specified in a new variant of the genetic code. But then the evolution stopped, at a point when it had already attained enormous reach. Why? It looks like a jump to some sort of universality, does it not?

What happened next followed the same sad pattern that I have described in other stories of universality: for well over a billion years after the system had reached universality and stopped evolving, it was still only being used to make bacteria. That means that the reach which we can now see the system had was to remain unused for longer than the system itself had taken to evolve from non-living precursors. If intelligent extraterrestrials had visited Earth at any time during those billion years, they would have seen no evidence that the genetic code could specify anything significantly different from the organisms that it had specified when it first appeared.

Reach always has an explanation. But this time, to the best of my knowledge, the explanation is not yet known. If the reason for the jump in reach was that it was a jump to universality, what was the universality? The genetic code is presumably not universal for specifying life forms, since it relies on specific types of chemicals, such as proteins. Could it be a universal constructor? Perhaps. It does manage to build with inorganic materials sometimes, such as the calcium phosphate in bones, or the magnetite in the navigation system inside a pigeon’s brain. Biotechnologists are already using it to manufacture hydrogen and to extract uranium from seawater. It can also program organisms to perform constructions outside their bodies: birds build nests; beavers build dams. Perhaps it would be possible to specify, in the genetic code, an organism whose life cycle includes building a nuclear-powered spaceship. Or perhaps not. I guess it has some lesser, and not yet understood, universality.

In 1994 the computer scientist and molecular biologist Leonard Adleman designed and built a computer composed of DNA together with some simple enzymes, and demonstrated that it was capable of performing some sophisticated computations. At the time, Adleman’s DNA computer was arguably the fastest computer in the world. Further, it was clear that a universal classical computer could be made in a similar way. Hence we know that, whatever that other universality of the DNA system was, the universality of computation had also been inherent in it for billions of years, without ever being used – until Adleman used it.
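Adleman’s 1994 experiment encoded an instance of the directed Hamiltonian path problem – finding a route through a graph that visits every vertex exactly once – in strands of DNA, which explored candidate paths massively in parallel. Here is the same computation as a brute-force sketch in conventional code (the graph below is made up; Adleman’s actual instance had seven vertices):

from itertools import permutations

# A made-up directed graph: which vertex pairs are joined by an edge.
edges = {('s', 'a'), ('s', 'b'), ('a', 'b'), ('a', 'c'), ('b', 'c'), ('c', 't')}
vertices = ['s', 'a', 'b', 'c', 't']

def hamiltonian_path(start, end):
    """Try every ordering of the intermediate vertices in turn."""
    for middle in permutations([v for v in vertices if v not in (start, end)]):
        path = [start, *middle, end]
        if all(pair in edges for pair in zip(path, path[1:])):
            return path
    return None

print(hamiltonian_path('s', 't'))  # ['s', 'a', 'b', 'c', 't']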

The mysterious universality of DNA as a constructor may have been the first universality to exist. But, of all the different forms of universality, the most significant physically is the characteristic universality of people, namely that they are universal explainers, which makes them universal constructors as well. The effects of that universality are, as I have explained, explicable only by means of the full gamut of fundamental explanations. It is also the only kind of universality capable of transcending its parochial origins: universal computers cannot really be universal unless there are people present to provide energy and maintenance – indefinitely. And the same is true of all those other technologies. Even life on Earth will eventually be extinguished, unless people decide otherwise. Only people can rely on themselves into the unbounded future.

TERMINOLOGY

The jump to universality
   The tendency of gradually improving systems to undergo a sudden large increase in functionality, becoming universal in some domain.

MEANINGS OF ‘THE BEGINNING OF INFINITY’ ENCOUNTERED IN THIS CHAPTER

– The existence of universality in many fields.

– The jump to universality.

– Error-correction in computation.

– The fact that people are universal explainers.

– The origin of life.

– The mysterious universality to which the genetic code jumped.

SUMMARY

All knowledge growth is by incremental improvement, but in many fields there comes a point when one of the incremental improvements in a system of knowledge or technology causes a sudden increase in reach, making it a universal system in the relevant domain. In the past, innovators who brought about such a jump to universality had rarely been seeking it, but since the Enlightenment they have been, and universal explanations have been valued both for their own sake and for their usefulness. Because error-correction is essential in processes of potentially unlimited length, the jump to universality only ever happens in digital systems.

7
Artificial Creativity

Alan Turing founded the theory of classical computation in 1936 and helped to construct one of the first universal classical computers during the Second World War. He is rightly known as the father of modern computing. Babbage deserves to be called its grandfather, but, unlike Babbage and Lovelace, Turing did understand that artificial intelligence (AI) must in principle be possible because a universal computer is a universal simulator. In 1950, in a paper entitled ‘Computing Machinery and Intelligence’, he famously addressed the question: can a machine think? Not only did he defend the proposition that it can, on the grounds of universality, he also proposed a test for whether a program had achieved it. Now known as the Turing test, it is simply that a suitable (human) judge be unable to tell whether the program is human or not. In that paper and subsequently, Turing sketched protocols for carrying out his test. For instance, he suggested that both the program and a genuine human should separately interact with the judge via some purely textual medium such as a teleprinter, so that only the thinking abilities of the candidates would be tested, not their appearance.

Turing’s test, and his arguments, set many researchers thinking, not only about whether he was right, but also about how to pass the test. Programs began to be written with the intention of investigating what might be involved in passing it.

In 1964 the computer scientist Joseph Weizenbaum wrote a program called Eliza, designed to imitate a psychotherapist. He deemed psychotherapists to be an especially easy type of human to imitate because the program could then give opaque answers about itself, and only ask questions based on the user’s own questions and statements. It was a remarkably simple program. Nowadays such programs are popular projects for students of programming, because they are fun and easy to write. A typical one has two basic strategies. First it scans the input for certain keywords and grammatical forms. If this is successful, it replies based on a template, filling in the blanks using words in the input. For instance, given the input ‘I hate my job’, the program might recognize the grammar of the sentence, involving a possessive pronoun ‘my’, and might also recognize ‘hate’ as a keyword from a built-in list such as ‘love/hate/like/dislike/want’, in which case it could choose a suitable template and reply: ‘What do you hate most about your job?’ If it cannot parse the input to that extent, it asks a question of its own, choosing randomly from a stock of patterns that may or may not depend on the input sentence. For instance, if asked ‘How does a television work?’, it might reply, ‘What is so interesting about “How does a television work?”?’ Or it might just ask, ‘Why does that interest you?’ Another strategy, used by recent internet-based versions of Eliza, is to build up a database of previous conversations, enabling the program simply to repeat phrases that other users have typed in, again choosing them according to keywords found in the current user’s input.
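The first of those strategies is simple enough to sketch in full. Here is a minimal Eliza-style responder in Python (my own illustration of the keyword-and-template approach described above, not Weizenbaum’s code):

import random
import re

KEYWORDS = ['love', 'hate', 'like', 'dislike', 'want']
STOCK_PATTERNS = [
    'Why does that interest you?',
    'What is so interesting about "{input}"?',
    'Please go on.',
]

def reply(user_input):
    # Strategy 1: look for a keyword followed by a possessive phrase
    # such as 'my job', and fill in a template with words from the input.
    for keyword in KEYWORDS:
        match = re.search(rf'\b{keyword}\b.*\bmy (\w+)', user_input, re.IGNORECASE)
        if match:
            return f'What do you {keyword} most about your {match.group(1)}?'
    # Strategy 2: fall back on a stock pattern, which may or may not
    # quote the input sentence.
    return random.choice(STOCK_PATTERNS).format(input=user_input)

print(reply('I hate my job'))                  # What do you hate most about your job?
print(reply('How does a television work?'))    # e.g. Why does that interest you?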
