Evolution Impossible

Author: Dr John Ashton

Professor Edward A. Boudreaux is professor emeritus of chemistry at the University of New Orleans and holds a PhD in theoretical chemistry from Tulane University. He writes:

The element oxygen (O) exists freely in nature as the gaseous diatomic molecule O₂. There are other representative elements that also occur as free diatomic molecules, e.g., hydrogen (H₂), nitrogen (N₂), fluorine (F₂), and chlorine (Cl₂). However, O₂ is the only molecule of this type possessing two unpaired electrons; the others all have paired electrons. In spite of this, O₂ is still chemically stable. This singular notable exception to the electron-pair rule of stability for representative elements has no known explanation. The only other molecule with an electron arrangement exactly that of O₂ is S₂. However, S₂ is a highly unstable molecule, which is the reason that sulfur does not exist in this form. Furthermore, if it were not for the two unpaired electrons in O₂, it would not be capable of binding to the iron (Fe) atoms in hemoglobin with precisely the amount of energy needed to carry the O₂ into the bloodstream and then release it. Some other molecules such as CO and NO can replace O₂ in binding to hemoglobin, but they completely destroy the hemoglobin function.
Similarly, there are several other transition metals comparable to iron that can replace it in hemoglobin and also bind O₂, but this binding is either too strong or too weak. Thus, there are no non-iron analogues of hemoglobin having the required properties of normal hemoglobin for transporting O₂ in blood metabolism.
The structured portion of hemoglobin that binds iron is called a porphyrin ring. If this porphyrin is placed in a different biomolecular environment and the iron atom is replaced by magnesium (Mg), the result is chlorophyll, a key component essential to plant metabolism and the most efficient photoelectric cell known. It is some 80 percent more efficient than any photocells fabricated by man. While calcium (Ca) and some other metals can replace Mg in chlorophyll, the products do not at all duplicate the photoelectric efficiency of true chlorophyll.
Proteins are composed of amino acid molecules chemically bound together by what are called peptide bonds. The amino acids themselves are carbon-hydrogen compounds containing an amine group, i.e., -NH₂, -NHR, or -NR₂ (where R represents one or more carbon-hydrogen groups) bonded to a C atom, plus an acid group (-COOH) bonded to the same C atom. Although there are thousands of varieties of amino acids, only 20 are involved in all protein structures.
Furthermore, amino acids exist in two structural forms, D and L, which are non-superimposable mirror images of each other. In the absence of any imposed controls, both D and L forms will naturally occur in essentially equal amounts; however, all proteins are made of only the L form. By way of contrast, sugars (saccharides), which are carbon-hydrogen-oxygen compounds, have closed ring structures and also exist in both D and L isomeric forms. While there are numerous varieties of sugars, it is only the simplest, five-membered ring structure called ribose, in only its D form, that is present as one of the three fundamental molecular components in the structures of DNA and RNA.
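An editorial aside, not part of Boudreaux's text: the chirality point can be made quantitative under one simple assumption. If each position in a randomly assembled chain were equally likely to receive the D or the L form, the chance of an all-L chain falls off as (1/2)ⁿ with chain length n. A minimal Python sketch (the chain lengths shown are illustrative choices):

```python
# Editorial illustration (an assumption, not a figure from the book):
# if each amino acid in a randomly assembled chain were equally likely to be
# the D or the L form, the chance that a chain of n residues comes out all-L
# is (1/2)**n.
from fractions import Fraction

def all_L_probability(n_residues: int) -> Fraction:
    """Chance that every residue is the L form, assuming D and L are
    equally likely and independent at each site."""
    return Fraction(1, 2) ** n_residues

for n in (10, 100, 200):
    p = all_L_probability(n)
    # p.denominator is 2**n; its digit count gives the order of magnitude
    print(f"chain of {n:>3} residues: about 1 in 10^{len(str(p.denominator)) - 1}")
```

For a 200-residue chain that works out to roughly 1 in 10⁶⁰, a factor the Baumgardner calculation quoted below deliberately sets aside.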
Both DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are in some respects more complex than proteins, because they contain a greater variety of molecular units forming nucleotides (nucleotide bases, ribose, and phosphate). These nucleotides are all joined together in very specific patterns so as to perform unique and crucial functions. The ribose and phosphate (-PO₄) units are bonded together in a regularly alternating sequence, thus producing long chains coiled in a right-handed helix. Each nucleotide is bound to one specific C atom on each ribose unit. In the case of RNA, the structure is a single-stranded right-handed helix containing four different nucleotides (adenine, cytosine, guanine, uracil) arranged in very specific repeating sequences throughout the length of the chain. Each type of RNA has a different pattern in the sequencing of the four nucleotides. The DNA structure consists of a right-handed double helix also containing four nucleotides. Three of these are the same as in RNA, but one is different; thymine replaces uracil.
The nucleotides themselves belong to two classes of molecules called purines and pyrimidines. Adenine and guanine are purines, while cytosine, thymine, and uracil are pyrimidines. There are many hundreds of varieties of purines and pyrimidines, but only these select five determine the structures and functions of DNA and RNA.
Similarly, ribose is only one of a large number of molecules called saccharides. Why only ribose and its D isomer, but not one or more other saccharides in DNA and RNA? Likewise, why only phosphate and not sulfate or silicate, etc.? Only phosphate works.
These few examples contain clear evidence of complex design imparting tailor-made functions. Such characteristics defy the probability that any random evolutionary process could account for such unique specificity in design.⁶

Dr. John R. Baumgardner worked as a research scientist in the theoretical division of the Los Alamos National Laboratory for 20 years. He holds a PhD in geophysics from the University of California (Los Angeles) and was the chief developer of the TERRA code, a 3-D finite element program for modeling the earth’s mantle and lithosphere. He also holds an MS degree in electrical engineering from Princeton University and has specialized in complex numerical simulations. He writes:

Many evolutionists are persuaded that the 15 billion years they assume for the age of the cosmos is an abundance of time for random interactions of atoms and molecules to generate life. A simple arithmetic lesson reveals this to be no more than an irrational fantasy.
This arithmetic lesson is similar to calculating the odds of winning the lottery. The number of possible lottery combinations corresponds to the total number of protein structures (of an appropriate size range) that are possible to assemble from standard building blocks. The winning tickets correspond to the tiny sets of such proteins with the correct special properties from which a living organism, say a simple bacterium, can be successfully built. The maximum number of lottery tickets a person can buy corresponds to the maximum number of protein molecules that could have ever existed in the history of the cosmos.
Let us first establish a reasonable upper limit on the number of molecules that could ever have been formed anywhere in the universe during its entire history. Taking 10⁸⁰ as a generous estimate for the total number of atoms in the cosmos, 10¹² for a generous upper bound for the average number of interatomic interactions per second per atom, and 10¹⁸ seconds (roughly 30 billion years) as an upper bound for the age of the universe, we get 10¹¹⁰ as a very generous upper limit on the total number of interatomic interactions which could have ever occurred during the long cosmic history the evolutionist imagines. Now if we make the extremely generous assumption that each interatomic interaction always produces a unique molecule, then we conclude that no more than 10¹¹⁰ unique molecules could have ever existed in the universe during its entire history.
Now let us contemplate what is involved in demanding that a purely random process find a minimal set of about 1,000 protein molecules needed for the most primitive form of life. To simplify the problem dramatically, suppose somehow we already have found 999 of the 1,000 different proteins required and we need only to search for that final magic sequence of amino acids which gives us that last special protein. Let us restrict our consideration to the specific set of 20 amino acids found in living systems and ignore the hundred or so that are not. Let us also ignore the fact that only those with left-handed symmetry appear in life proteins. Let us also ignore the incredibly unfavorable chemical reaction kinetics involved in forming long peptide chains in any sort of plausible nonliving chemical environment.
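The 10¹¹⁰ figure above is simply the product of the three quoted estimates. As a quick editorial check (not part of the quotation), in Python:

```python
# Reproduce the back-of-the-envelope bound quoted above:
# atoms in the cosmos x interactions per second per atom x seconds of history.
atoms_in_cosmos      = 10**80   # generous estimate of atoms in the universe
interactions_per_sec = 10**12   # generous per-atom interaction rate
seconds_of_history   = 10**18   # roughly 30 billion years

max_interactions = atoms_in_cosmos * interactions_per_sec * seconds_of_history
print(f"upper bound on interatomic interactions: 10^{len(str(max_interactions)) - 1}")
# prints: upper bound on interatomic interactions: 10^110
```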
Let us merely focus on the task of obtaining a suitable sequence of amino acids that yields a 3D protein structure with some minimal degree of essential functionality. Various theoretical and experimental evidence indicates that in some average sense about half of the amino acid sites must be specified exactly. For a relatively short protein consisting of a chain of 200 amino acids, the number of random trials needed for a reasonable likelihood of hitting a useful sequence is then on the order of 20¹⁰⁰ (100 amino acid sites with 20 possible candidates at each site), or about 10¹³⁰ trials.
This is a hundred billion billion times the upper bound we computed for the total number of molecules ever to exist in the history of the cosmos!!
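Again as an editorial check using only the numbers quoted above: 20¹⁰⁰ comes to roughly 10¹³⁰, which exceeds the 10¹¹⁰ bound by a factor of about 10²⁰, i.e., a hundred billion billion.

```python
import math

trials_needed = 20**100   # 100 specified sites, 20 candidate amino acids each
cosmic_bound  = 10**110   # upper bound on molecules from the previous step

print(f"20^100 is about 10^{math.log10(trials_needed):.0f}")                          # ~10^130
print(f"shortfall factor: about 10^{math.log10(trials_needed // cosmic_bound):.0f}")  # ~10^20
```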
No random process could ever hope to find even one such protein structure, much less the full set of roughly 1,000 needed in the simplest forms of life. It is therefore sheer irrationality for a person to believe random chemical interactions could ever identify a viable set of functional proteins out of the truly staggering number of candidate possibilities.
In the face of such stunningly unfavorable odds, how could any scientist with any sense of honesty appeal to chance interactions as the explanation for the complexity we observe in living systems? To do so, with conscious awareness of these numbers, in my opinion represents a serious breach of scientific integrity. This line of argument applies, of course, not only to the issue of biogenesis but also to the issue of how a new gene/protein might arise in any sort of macroevolution process.
One retired Los Alamos National Laboratory fellow, a chemist, wanted to quibble that this argument was flawed because I did not account for details of chemical reaction kinetics. My intention was deliberately to choose a reaction rate so gigantic (one million million reactions per atom per second on average) that all such considerations would become utterly irrelevant. How could a reasonable person trained in chemistry or physics imagine there could be a way to assemble polypeptides on the order of hundreds of amino acid units in length, to allow them to fold into their three-dimensional structures, and then to express their unique properties, all within a small fraction of one picosecond!? Prior metaphysical commitments forced him to such irrationality.
Another scientist, a physicist at Sandia National Laboratories, asserted that I had misapplied the rules of probability in my analysis. If my example were correct, he suggested, it “would turn the scientific world upside-down.” I responded that the science community has been confronted with this basic argument in the past but has simply engaged in mass denial. Fred Hoyle, the eminent British cosmologist, published similar calculations two decades ago. Most scientists just put their hands over their ears and refused to listen.⁷

He goes on to write:

One of the most dramatic discoveries in biology in the 20th century is that living organisms are realizations of coded language structures. All the detailed chemical and structural complexity associated with the metabolism, repair, specialized function, and reproduction of each living cell is a realization of the coded algorithms stored in its DNA. A paramount issue, therefore, is how do such extremely large language structures arise?
The origin of such structures is, of course, the central issue of the origin of life question. The simplest bacteria have genomes consisting of roughly a million codons. (Each codon, or genetic word, consists of three letters from the four-letter genetic alphabet.) Do coded algorithms a million words in length arise spontaneously by any known naturalistic process? Is there anything in the laws of physics that suggests how such structures might arise in a spontaneous fashion? The honest answer is simple. What we presently understand from thermodynamics and information theory argues persuasively they do not and cannot!
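To put “roughly a million codons” on a numerical footing (an editorial illustration, not a calculation from the book): three letters drawn from a four-letter alphabet allow 4³ = 64 distinct codons, and the raw sequence space for a million-codon genome is 64 to the power 1,000,000, a number with about 1.8 million digits. The sketch below only sizes that space; it says nothing about which sequences are functional.

```python
import math

alphabet_size = 4           # genetic letters A, C, G, T (U in RNA)
codon_length  = 3
genome_codons = 1_000_000   # "roughly a million codons" for a simple bacterium

distinct_codons = alphabet_size ** codon_length                        # 4^3 = 64
log10_sequences = codon_length * genome_codons * math.log10(alphabet_size)

print(f"distinct codons: {distinct_codons}")
print(f"possible genome-length sequences: about 10^{log10_sequences:,.0f}")
# prints: distinct codons: 64
#         possible genome-length sequences: about 10^1,806,180
```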
Language involves a symbolic code, a vocabulary, and a set of grammatical rules to relay or record thought. Many of us spend most of our waking hours generating, processing, or disseminating linguistic data. Seldom do we reflect on the fact that language structures are clear manifestations of nonmaterial reality.
This conclusion may be reached by observing that the linguistic information itself is independent of its material carrier. The meaning or message does not depend on whether it is represented as sound waves in the air or as ink patterns on paper or as alignment of magnetic domains on a floppy disk or as voltage patterns in a transistor network. The message that a person has won the $100 billion lottery is the same whether that person receives the information by someone speaking at his door or by telephone or by mail or on television or over the Internet.
Indeed, Einstein pointed to the nature and origin of symbolic information as one of the profound questions about the world as we know it. He could identify no means by which matter could bestow meaning on symbols. The clear implication is that symbolic information, or language, represents a category of reality distinct from matter and energy. Linguists therefore today speak of this gap between matter and meaning-bearing symbol sets as the “Einstein gulf.” Today, in this information age, there is no debate that linguistic information is objectively real. With only a moment’s reflection we can conclude its reality is qualitatively different from the matter/energy substrate on which the linguistic information rides.
