Coppinger has raised guarding and herding dogs together from babyhood. They show little difference in behavior until puberty. Herders then develop the standard traits of adulthood—border collies begin to stalk, while retrievers and pointers live up to their names. But the guarders develop no new patterns and simply retain their youthful traits. Thus, a valuable set of features can be recruited together because they already exist as the normal form and behavior of juvenile dogs. Patterns of growth are rich reservoirs, not sterile strictures.
One tradition of argument identifies neoteny with all that is good and kind—“Except ye be converted, and become as little children, ye shall not enter into the kingdom of heaven.” Yet I resist any facile transference between natural realities and human hopes if only because the dark side of social utility should teach us caution in proposing analogies.
Neoteny certainly has its dark side in social misconstruction. Konrad Lorenz, who, to put it as kindly as possible, made his life in Nazi Germany more comfortable by tailoring his views on animal behavior to the prevailing orthodoxy, often argued during the early 1940s that civilization is the analogue of domestication. Domestic animals are often neotenous; neotenous animals retain the flexibility of youth and do not develop the instinctive and healthy aversion that mature creatures feel toward deformed and unworthy members of their race. Since humans have therefore lost this instinctive power to reject the genetically harmful, Lorenz defended Nazi racial and marriage laws as a mirror of nature’s mature ways.
Still, I cannot help noting, since dogs are descended from wolves, and humans really are neotenous in both form and behavior (without justifying Lorenz’s fatuous and hateful reveries), that the neoteny of sheep-guarding dogs does fulfill, in a limited sense, one of the oldest and most beautiful of all prophecies: “The wolf also shall dwell with the lamb…and a little child shall lead them.”
DOUBLE ENTENDRE can be delicious. Who does not delight in learning that Ernest, in Oscar Wilde’s play, is a good chap, not a worthy attitude? And who has ever begrudged that tragic figure his little joke? But double meanings also have their dangers—particularly when two communities use the same term in different ways, and annoying confusion, rather than pleasant amusement or enlightenment, results.
Differences in scientific and vernacular definitions of the same word provide many examples of this frustrating phenomenon. “Significance” in statistics, for example, bears little relation to the ordinary meaning of something that matters deeply. Mouse tails may be “significantly” longer in Mississippi than in Michigan—meaning only that average lengths are not the same at some level of confidence—but the difference may be so small that no one would argue for significance in the ordinary sense. But the most serious of all misunderstandings between technical and vernacular haunts the concepts of probability and particularly the words random and chance.
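To make the mouse-tail example concrete (the figures here are my own invention, purely for illustration): suppose tails average 60.5 millimeters in Mississippi and 60.0 in Michigan, with a standard deviation of 5 millimeters, and that we measure 10,000 mice in each state. The usual two-sample test statistic is

\[
t \;=\; \frac{\bar{x}_1 - \bar{x}_2}{s\sqrt{2/n}} \;=\; \frac{0.5}{5\sqrt{2/10{,}000}} \;\approx\; 7.1,
\]

a result “significant” at any conventional level of confidence, though the half millimeter itself could scarcely matter to mouse or naturalist.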
In ordinary English, a random event is one without order, predictability, or pattern. The word connotes disaggregation, falling apart, formless anarchy, and fear. Yet, ironically, the scientific sense of random conveys a precisely opposite set of associations. A phenomenon governed by chance yields maximal simplicity, order, and predictability—at least in the long run. Suppose that we are interested in resolving the forces behind a large-scale pattern of historical change. Randomness becomes our best hope for a maximally simple and tractable model. If we flip a penny or throw a pair of dice, once a second for days on end, we achieve a rigidly predictable distribution of outcomes. We can predict the margins of departure from 50-50 in the coins or the percentage of sevens for our dice, based on the total number of throws. When the number of tosses becomes quite large, we can give precise estimates and ranges of error for the frequencies and lengths of runs—heads or sevens in a row—all based on the simplest of mathematical formulas from the theory of probability. (Of course, we cannot predict the outcome of any particular trial or know when a run will occur, as any casual gambler should—but so few do—know.)
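The predictability can be stated in formulas of grade-school simplicity (the one-toss-per-second arithmetic below is mine, added only to give the flavor). For n tosses of a fair coin, the expected number of heads is n/2, with a standard deviation of only

\[
\sigma \;=\; \frac{\sqrt{n}}{2}, \qquad P(\text{seven with two dice}) \;=\; \frac{6}{36} \;=\; \frac{1}{6}, \qquad \text{longest expected run of heads} \;\approx\; \log_2 n .
\]

At one toss per second for a single day (n = 86,400), the tally of heads will typically stray from an even split by only about 147 tosses, less than two-tenths of one percent, and the longest run of heads to be expected somewhere in the day’s record is around sixteen.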
Thus, if you wish to understand patterns of long historical sequences, pray for randomness. Ironically, nothing works so powerfully against resolution as conventional forms of determinism. If each event in a sequence has a definite cause, then, in a world of such complexity, we are lost. If A happened because the estuary thawed on a particular day, leading to B because Billy the Seal swam by and gobbled up all those fishes, followed by C when Sue the Polar Bear sauntered through—not to mention ice age fluctuations, impacting asteroids, and drifting continents of the truly long run—then how can we hope to predict and resolve the outcome?
The beauty (and simplicity) of randomness lies in the absence of these maximally confusing properties. Coin flipping permits no distinctive personality to any time or moment; each toss can be treated in exactly the same way, whenever it occurs. We can date geological time with precision by radioactive decay because each atom has an equal probability of decaying in each instant. If causal individuality intervened—if 10:00 A.M. on Sunday differed from 5:00 P.M. on Wednesday, or if Joe the uranium atom, by dint of moral fiber, resisted decay better than his brother Tom, then randomness would fail and the method would not work.
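The dating method itself needs nothing beyond the textbook law of exponential decay (stated here in standard form, not in Gould’s words): if every atom decays with the same probability λ per unit time, then from N₀ original parent atoms

\[
N(t) \;=\; N_0\, e^{-\lambda t}, \qquad t \;=\; \frac{1}{\lambda}\,\ln\frac{N_0}{N(t)}, \qquad t_{1/2} \;=\; \frac{\ln 2}{\lambda},
\]

so a count of surviving parents (or accumulated daughter products) yields the age directly, with the half-life serving as the calibration of each radioactive clock.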
One of the best illustrations for this vitally important, but counterintuitive, principle of maximal long-term order in randomness comes from my own field of evolutionary biology—and from a debate that has greatly contributed to making professional life more interesting during the past twenty years. Traditional Darwinism includes an important role for randomness—but only as a source of variation, or raw material, for evolutionary change, not as an agent for the direction of change itself. For Darwin, the predominant source of evolutionary change resides in the deterministic force of natural selection. Selection works for cause and adapts organisms to changing local environments. Random variation supplies the indispensable “fuel” for natural selection but does not set the rate, timing, or pattern of change. Darwinism is a two-part theory of randomness for raw material and conventional causality for change—Chance and Necessity, as so well epitomized by Jacques Monod in the title of his famous book about the nature of Darwinism.
In the domain of organisms and their good designs, we have little reason to doubt the strong, probably dominant influence of deterministic forces like natural selection. The intricate, highly adapted forms of organisms—the wing of a bird or the mimicry of a dead twig by an insect—are too complex to arise as long sequences of sheer good fortune under the simplest random models. But this stricture of complexity need not apply to the nucleotide-by-nucleotide substitutions that build the smallest increments of evolutionary change at the molecular level. In this domain of basic changes in DNA, a “neutralist” theory, based on simple random models, has been challenging conventional Darwinism with marked success during the past generation.
When the great Japanese geneticist Motoo Kimura formulated his first version of neutral theory in 1968 (see bibliography), he was impressed by two discoveries that seemed difficult to interpret under the conventional view that natural selection overwhelms all other causes of evolutionary change. First, at the molecular level of substitutions in amino acids, measured rates indicated a constancy of change across molecules and organisms—the so-called molecular clock of evolution. Such a result makes no sense in Darwin’s world, where molecules subject to strong selection should evolve faster than others, and where organisms exposed to different changes and challenges from the environment should vary their evolutionary rates accordingly. At most, one might claim that these deterministic differences in rate might tend to “even out” over very long stretches of geological time, yielding roughly regular rates of change. But a molecular clock surely gains an easier interpretation from random models. If deterministic selection does not regulate most molecular changes—if, on the contrary, most molecular variations are neutral, and therefore rise and fall in frequency by the luck of the draw—then mutation rate and population size will govern the tempo of change. If most populations are large, and if mutation rates are roughly the same for most genes, then simple random models predict a molecular clock.
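The clock prediction rests on one line of algebra that the essay does not spell out but that is standard in the neutralist literature: in a diploid population of size N, with neutral mutation rate μ per gene per generation, 2Nμ new neutral mutations arise each generation, and each has probability 1/(2N) of eventually drifting to fixation. The long-term substitution rate is therefore

\[
k \;=\; 2N\mu \times \frac{1}{2N} \;=\; \mu,
\]

a constant fixed by the mutation rate alone; population size cancels from the rate of substitution, though it still governs how much variation a population carries at any moment. Hence the steady ticking.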
Second, Kimura noted the recent discovery of surprisingly high levels of variation maintained by many genes among members of populations. Too much variation poses a problem for conventional Darwinism because a cost must accompany the replacement of an ancestral gene by a new and more advantageous state of the same gene—namely, the differential death, by natural selection, of the now disfavored parental forms. This cost poses no problem if only a few old genes are being pushed out of a population at any time. But if hundreds of genes are being eliminated, then any organism must carry many of the disfavored states and should be ripe for death. Thus, selection should not be able to replace many genes at once. But the data on copious variability seemed to indicate a caldron of evolutionary activity at far too many genetic sites—too many, that is, if selection governs the changes in each varying gene. Kimura, however, recognized a simple and elegant way out of this paradox. If most of the varying forms of a gene are neutral with respect to selection, then they are drifting in frequency by the luck of the draw. Invisible to natural selection because they make no difference to the organism, these variations impose no cost in replacement.
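The cost in question is Haldane’s celebrated cost of natural selection, a piece of background the essay assumes rather than states (the figures below are Haldane’s rough 1957 values, not Gould’s). Haldane reckoned that each selectively driven gene substitution demands cumulative selective deaths amounting to roughly thirty times the population size, so that

\[
\text{cost per substitution} \;\approx\; 30N \quad\Longrightarrow\quad \text{maximum rate} \;\approx\; \frac{1\ \text{substitution}}{300\ \text{generations}}
\]

if a species can tolerate selective mortality of about ten percent per generation. The levels of variation and rates of protein change reported in the 1960s implied far more evolutionary traffic than this budget allows, unless most of it was neutral and therefore cost-free.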
In twenty years of copious writing, Kimura has always carefully emphasized that his neutral theory does not disprove Darwinism or deny the power of natural selection to shape the adaptations of organisms. He writes, for example, at the beginning of his epochal book The Neutral Theory of Molecular Evolution (1983):
The neutral theory is not antagonistic to the cherished view that evolution of form and function is guided by Darwinian selection, but it brings out another facet of the evolutionary process by emphasizing the much greater role of mutation pressure and random drift at the molecular level.
The issue, as so often in natural history (and as I emphasize so frequently in these essays), centers upon the relative importance of the two processes. Kimura has never denied adaptation and natural selection, but he has tended to view these processes as quantitatively insignificant in the total picture—a superficial and minor ripple upon the ocean of neutral molecular change, imposed every now and again when selection casts a stone upon the waters of evolution. Darwinians, on the other hand, at least before Kimura and his colleagues advanced their potent challenge and reeled in the supporting evidence, tended to argue that neutral change occupied a tiny and insignificant corner of evolution—an odd process occasionally operating in small populations at the brink of extinction anyway.
This argument about relative frequency has raged for twenty years and has been, at least in the judgment of this bystander with no particular stake in the issue, basically a draw. More influence has been measured for selection than Kimura’s original words had anticipated; Darwin’s process is no mere pockmark on a sea of steady tranquility. But neutral change has been established at a comfortably high relative frequency. The molecular clock is neither as consistent nor as regular as Kimura once hoped, but even an imperfect molecular timepiece makes little sense in Darwin’s world. The ticking seems best interpreted as a pervasive and underlying neutralism, the considerable perturbations as a substantial input from natural selection (and other causes).
Nonetheless, if forced to award the laurels in a struggle with no clear winners, I would give the nod to Kimura. After all, when innovation fights orthodoxy to a draw, then novelty has seized a good chunk of space from convention. But I bow to Kimura for another and more important reason than the empirical adequacy of neutralism at high relative frequency, for his theory so beautifully illustrates the theme that served as an introduction to this essay: the virtue of randomness in the technical as opposed to the vernacular sense.
Kimura’s neutralist theory has the great advantage of simplicity in mathematical expression and specification of outcome. Deterministic natural selection yields no firm predictions for the histories of lineages—for you would have to know the exact and particular sequences of biotic and environmental changes, and the sizes and prior genetic states of populations, in order to forecast an outcome. This knowledge is not attainable in a world of imperfect historical information. Even if obtainable, such data would only provide a prediction for a particular lineage, not a general theory. But neutralism, as a random model treating all items and times in the same manner, yields a set of simple, general equations serving as precise predictions for the results of evolutionary change. These equations give us, for the first time, a base-level criterion for assessing any kind of genetic change. If neutralism holds, then actual outcomes will fit the equations. If selection predominates, then results will depart from predictions—and in a way that permits identification of Darwinian control. Thus, Kimura’s equations have been as useful for selectionists as for neutralists themselves; the formulas provide a criterion for everyone, and debate can center upon whether or not the equations fit actual outcomes. Kimura has often emphasized this point about his equations, and about random models in general. He wrote, for example, in 1982:
The neutral theory is accompanied by a well-developed mathematical theory based on the stochastic theory of population genetics. The latter enables us to treat evolution and variation quantitatively, and therefore to check the theory by observations and experiments.
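One example of that “well-developed mathematical theory,” chosen here for its simplicity rather than quoted from Kimura: under strict neutrality the expected heterozygosity of a population (the chance that two randomly drawn copies of a gene differ) depends only on the effective population size N_e and the neutral mutation rate μ,

\[
H \;=\; \frac{4N_e\mu}{4N_e\mu + 1},
\]

so measured levels of variation can be set against a definite prediction, and systematic departures from such formulas become the signature that selectionists and neutralists alike can argue over.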
The most important and useful of these predictions involves a paradox under older Darwinian views. If selection controls evolutionary rate, one might think that the fastest tempos of alteration would be associated with the strongest selective pressures for change. Speed of change should vary directly with intensity of selection. Neutral theory predicts precisely the opposite—for an obvious reason once you start thinking about it. The most rapid change should be associated with unconstrained randomness—following the old thermodynamic imperative that things will invariably go to hell unless you struggle actively to maintain them as they are. After all, stability is far more common than change at any moment in the history of life. In its ordinary everyday mode, natural selection must struggle to preserve working combinations against a constant input of deleterious mutations. In other words, natural selection, in our technical parlance, must usually be “purifying” or “stabilizing.” Positive selection for change must be a much rarer event than watchdog selection for tossing out harmful variants and preserving what works.
Now, if mutations are neutral, then the watchdog sees nothing and evolutionary change can proceed at its maximal tempo—the neutral rate of substitution. But if a molecule is being preserved by selection, then the watchdog inhibits evolutionary change. This originally counterintuitive proposal may be regarded as the key statement of neutral theory. Kimura emphasizes the point with italics in most of his general papers, writing for example (in 1982): “Those molecular changes that are less likely to be subjected to natural selection occur more rapidly in evolution.”
Both the greatest success, and the greatest modification, of Kimura’s original theory have occurred by applying this principle that selection slows the maximal rate of neutral molecular change. For modification of the original theory, thousands of empirical studies have now shown that watchdog selection, measured by diminished tempo of change relative to predictions of randomness, operates at a far higher relative frequency than Kimura’s initial version of neutralist theory had anticipated. For success, the firm establishment of the principle itself must rank as the greatest triumph of neutralism—for the tie of maximal rate to randomness (rather than to the opposite expectation of intense selection) does show that neutralism exerts a kind of base-level control over evolution as a whole.
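In the standard neutralist formulation (not quoted in the essay, but implicit in its argument), the whole principle compresses into a single expression: if v is the total mutation rate and f₀ the fraction of mutations that are selectively neutral, the rest being caught by the watchdog, then the rate of molecular evolution is

\[
k \;=\; f_0\, v \;\le\; v,
\]

maximal (k = v) for sequences free of functional constraint, such as pseudogenes, and progressively slower as constraint, and hence purifying selection, tightens its grip.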