Authors: Andrew H. Knoll
Figure 6.5.
Summary of geologic evidence for environmental transition on the early Proterozoic Earth. See text for discussion.
As these oxygen-sensitive minerals faded from the scene, another oxygen-requiring rock type rose to prominence (figure 6.5). Visitors to Arizona or Utah take home vivid memories of canyons carved out of strikingly red sandstones and shales. These rocks—called red beds, in the button-down parlance of geologists—derive their color from tiny flecks of iron oxide that coat sand grains. The iron oxides form within surface sands, but only when the groundwaters that wash them contain oxygen. Red beds are common only in sedimentary successions deposited after about 2.2 billion years ago.
The simplest explanation for these observations is that prior to about 2.2 billion years ago, the amount of oxygen in the atmosphere and surface ocean was small. Once again, the question of how small still sparks debate, but if we avoid special pleading, the upper limit appears to be about 1 percent of present-day oxygen levels—and might have been much lower.
Independent evidence for early Proterozoic environmental change comes from ancient soils preserved by burial during floods. Soils form at the interface between rock and air, so they might be expected to reflect aspects of atmospheric chemistry. Dick Holland, a longtime friend and colleague at Harvard, has spent years hunting down ancient soil horizons and analyzing their chemistry. In fossil soils older than 2.4–2.2 billion years, he finds that the iron originally present in underlying rocks was removed as the soils formed. In contrast, iron in younger soils is retained (figure 6.5). Dick’s explanation is that when parent rocks weathered under low oxygen conditions, iron was released as ferrous ions and carried away in solution by oxygen-poor groundwaters. In contrast, once oxygen became plentiful, iron released by weathering was immediately converted to insoluble iron oxides and, so, remained in place. Deriving quantitative estimates of atmospheric oxygen from these observations is a complicated business, requiring knowledge of parent rock chemistry and (poorly constrained) estimates of carbon dioxide levels in the ancient atmosphere. Dick’s conclusion that atmospheric oxygen reached at least 15 percent of its present-day level may or may not be correct, but the qualitative conclusion that air became more breathable 2.4–2.2 billion years ago seems robust.
A loyal opposition, small but adamant, maintains that this record of atmospheric transformation is deeply misleading—that our oxygen-rich atmosphere originated much earlier than 2.2 billion years ago, perhaps even before Warrawoona time. Hiroshi Ohmoto, a geochemist at Pennsylvania State University and chief advocate of this alternative view, points out that mineralogical clues to ancient environments record local conditions that may not mirror the state of the planet as a whole. Ohmoto, therefore, interprets the evidence of iron formations, red beds, fossil soils, and O2-sensitive minerals in terms of unusual Archean and earliest Proterozoic volcanic rocks, local oxygen depletion in marine basins, and the like. Ohmoto was particularly buoyed by Jochen Brocks’s discovery of steranes in late Archean rocks, because sterol synthesis requires at least moderate amounts of oxygen (perhaps 1 percent of present-day levels, although the lower limit has not been established rigorously). Of course, what is good for the goose is good for the gander; the steranes might also record local rather than global oxygen abundance. Quite possibly—and consistent with the mineralogical evidence—sterol synthesis originated in local oxygen oases within cyanobacterial mats, only later to spread across the planet.
How do we adjudicate this debate? Is Earth’s early sedimentary record really systematically misleading? Fortunately, some biogeochemical indicators provide globally integrated environmental signals, allowing us to evaluate mineralogical and biomarker data from a broader perspective. Principal among these are the isotopic abundances of carbon and sulfur in sedimentary rocks.
As explained in chapter 3, organic matter and limestones that accumulate on the present-day seafloor differ in their ratios of the stable carbon isotopes 13C and 12C by about 25 parts per thousand, reflecting the fractionation of carbon isotopes by photosynthetic algae and cyanobacteria. The isotopic differences between carbonate rocks and organic matter in most Precambrian sedimentary successions are only a little larger (26 to 30 parts per thousand)—the slight difference is thought to reflect similar biological processes played out beneath an atmosphere containing more carbon dioxide than at present. There are exceptions to this otherwise monotonous pattern, however, and, tellingly, almost all occur in rocks a bit older than 2.2–2.3 billion years.
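For readers who want the convention behind these numbers: geochemists report isotopic compositions in delta notation, which the text does not spell out. A brief sketch, using the standard definition:

```latex
% Delta notation: per mil (parts per thousand) deviation of a sample's
% 13C/12C ratio from that of an agreed reference standard
\delta^{13}\mathrm{C} =
  \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}
              {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right)
  \times 1000

% The fractionations quoted in the text are differences between reservoirs:
\delta^{13}\mathrm{C}_{\mathrm{carbonate}} - \delta^{13}\mathrm{C}_{\mathrm{organic}}
  \approx 25 \text{ per mil (modern seafloor)}, \quad
  26\text{--}30 \text{ per mil (most Precambrian successions)}
```

A negative δ13C means the sample is depleted in the heavy isotope 13C relative to the standard, which is why photosynthetic organic matter carries low values.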
In 1981, Martin Schoell and F. M. Wellmer discovered organic matter with unusually low ratios of 13C to 12C in lake beds about 2.8 billion years old from Canada. The organic matter was depleted in 13C by as much as 45 parts per thousand, a fractionation too large to be ascribed to photosynthesis alone.
To understand these measurements and how they bear on Earth’s oxygen history, we need to call on the Jacob Marley facts introduced in chapters 2 and 3. Earlier we learned that microorganisms have evolved diverse metabolisms and that some metabolic processes, notably photosynthesis, fractionate carbon isotopes as they work. Because photosynthetic (or chemosynthetic) organisms can’t fractionate carbon isotopes by more than about 30 parts per thousand, we need to invoke additional metabolisms to explain Schoell and Wellmer’s measurements. The most likely candidates are methane-eating bacteria at work in sediments. Methane eaters gain both carbon and energy from natural gas (CH4), and, like photosynthetic organisms, they are choosy when it comes to isotopes. Because of their chemical preference for 12CH4 over 13CH4, microbes that eat methane fractionate carbon isotopes by 20–25 parts per thousand in environments where methane is abundant.
This allows us to account for the unusual chemical signatures in Schoell and Wellmer’s lake beds. We begin with cyanobacteria that fractionate carbon isotopes by 30 parts per thousand, convert some of the organic matter they produce to methane, and then use this gas to feed hungry methane eaters that impart additional fractionation. The intermediate step is the trick. How do we convert cyanobacterial biomass into methane? Thinking back to chapter 2, the answer is methanogenic Archaea. Methanogens living in sediments gain carbon and energy by breaking down organic molecules to methane and carbon dioxide. When hydrogen is present, they can grow by chemosynthesis, as well, generating methane that is strongly depleted in 13C. In combination, then, photosynthetic organisms, methane-producing archaeans, and methane-eating bacteria can explain the unusual isotopic values in late Archean lake deposits.
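The fractionation budget implied by this three-step chain can be summarized as follows; the numbers are the approximate values given in the text, not precise constants:

```latex
% Step 1: photosynthesis depletes biomass in 13C by about 30 per mil
\delta^{13}\mathrm{C}_{\mathrm{biomass}} \approx \delta^{13}\mathrm{C}_{\mathrm{CO_2}} - 30

% Step 2: methanogens convert biomass to CH4 that is depleted further still
\delta^{13}\mathrm{C}_{\mathrm{CH_4}} < \delta^{13}\mathrm{C}_{\mathrm{biomass}}

% Step 3: methane eaters add another 20--25 per mil of depletion
\delta^{13}\mathrm{C}_{\mathrm{methanotroph}}
  \approx \delta^{13}\mathrm{C}_{\mathrm{CH_4}} - (20 \text{ to } 25)
```

Stacked together, these steps readily produce the 45 parts per thousand depletion Schoell and Wellmer measured, which photosynthesis alone (capped near 30 parts per thousand) cannot reach.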
Methanogens play an important role in the carbon cycle of modern lakes. Knowing this, paleontologists believed that Schoell and Wellmer’s discovery of high fractionation, 45 parts per thousand, made sense as a local, environmentally restricted exception to the rule. But it turned out not to be so exceptional. At about the same time that Schoell and Wellmer were working on Canadian rocks, John Hayes, an eminent geochemist now at the Woods Hole Oceanographic Institution, began a comprehensive survey of organic matter in Earth’s oldest sediments. Hayes found carbon isotopic differences between carbonates and organic matter as large as 60 parts per thousand in late Archean and earliest Proterozoic rocks, and he found them in marine as well as lacustrine strata. Between 2.8 and 2.2 billion years ago, methanogenic Archaea must have enjoyed a prominence in the global carbon cycle that they have not commanded since that time.
If we wish to understand why methanogens were so important in early ecosystems, we must first ask what limits their abundance today. The reasons once again have to do with the varied forms of microbial metabolism introduced in chapter 2. In terms of energy yield, aerobic respiration is the favored pathway for breaking down organic molecules, so wherever oxygen is present, O2-respiring organisms will dominate this leg of the carbon cycle. Within sediments, however, organisms use oxygen faster than it can be supplied from overlying waters. As a result, oxygen declines and, at some distance below the surface, disappears completely. (In lakes and coastal marine environments, oxygen can drop to zero within a few millimeters of the sediment surface.) Under these conditions, other metabolic pathways kick in. Nitrate respiration is next in line in terms of energy yield, but nitrate is generally in short supply, so these bacteria aren’t major players in the carbon cycle. More important are sulfate-reducing bacteria. Sulfate is a major ion in seawater, enabling oxygen-depleted marine sediments to host large populations of sulfate reducers. Only where sulfate has been depleted, deep within marine sediments and at the bottom of the metabolic ladder, do we find fermenting bacteria and methanogenic archaeans. Lakes are a bit different. Because sulfate is only a minor constituent of fresh water, methanogens are more important than sulfate reducers in these settings.
We can now rephrase our question: why did the carbon cycle of late Archean and earliest Proterozoic oceans resemble that of modern oxygen-depleted lakes? Low oxygen levels provide an obvious explanation, or at least part of one. If oxygen was scarce on the early Earth, aerobic respiration must have been absent, or at least of limited and local biogeochemical importance. Oxygen alone doesn’t solve the problem, however, since sulfate-reducing bacteria still dominate over methanogens in modern marine sediments. Perhaps sulfate, like oxygen, was scarce in early oceans.
Now we’re closing in on our answer. Sulfate is produced in several ways. Photosynthetic bacteria can generate a limited supply, but most of the oceans’ sulfate is formed when sulfurous volcanic gases combine with oxygen or when pyrite crystals react with oxygen during weathering. Thus, if oxygen was scarce on the early Earth, sulfate would have been, as well.
By calling once more on Jacob Marley facts from chapter 3, we can test the idea that Archean oceans were sulfate poor. Recall that sulfate-reducing bacteria fractionate sulfur isotopes much in the way that cyanobacteria fractionate carbon. Experiments on modern sulfate reducers show that the H2S they produce can be depleted in 34S by as much as 45 parts per thousand; however, where sulfate falls below about 3 percent of its present-day seawater level, little isotopic fractionation takes place. Compilations by Donald Canfield, of Odense University in Denmark, show only limited isotopic fractionation in sedimentary sulfur from Archean deposits. Fractionation levels increase markedly in lower Proterozoic rocks, just as the exaggerated carbon isotopic signal associated with methane producers and methane eaters begins to tail off (figure 6.5). Isotopic measurements, thus, support the idea that oxygen levels rose early in the Proterozoic Eon, increasing the abundance of sulfate in seawater and, in consequence, reversing the relative importance of methanogenic archaeans and sulfate-reducing bacteria in the marine carbon cycle.
One more high-tech probe can be pressed into service. Sulfur comes in four isotopic varieties: 32S, 33S, 34S, and 36S. The 32S and 34S isotopes get most of the attention because they are abundant and easily measured. For most purposes we don’t need to measure the rarer forms, because most processes that differentiate among isotopes do so by amounts that are directly proportional to their masses. Thus, if we know how fractionation has affected the abundant isotopes, we can calculate its effects on the rare ones.
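For sulfur, this mass-dependent rule can be stated quantitatively. The coefficient below, roughly one half, is set by the relative mass differences among the isotopes and is the standard value geochemists use:

```latex
% Mass-dependent fractionation ties the rare isotope 33S
% to the abundant pair 32S and 34S:
\delta^{33}\mathrm{S} \approx 0.515 \,\delta^{34}\mathrm{S}

% Deviation from this line defines the mass-independent signal:
\Delta^{33}\mathrm{S} = \delta^{33}\mathrm{S} - 0.515\,\delta^{34}\mathrm{S}
```

Mass-dependent processes yield Δ33S values near zero; measurably nonzero Δ33S is the fingerprint of atmospheric photochemistry of the kind described below.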
I introduce this bit of chemical arcana because it leads us to an exciting new perspective on Earth’s early environmental history. Although most chemical and biochemical processes fractionate isotopes in a mass-dependent fashion, a few—especially chemical reactions driven by light in the upper atmosphere—can partition isotopes in a way that is independent of their masses. Finding the chemical fingerprint of these processes in ancient rocks requires the painstaking measurement of sulfur in all its isotopic variety. Mark Thiemens and his team at the University of California, San Diego, figured out how to do just that. Their sensitive measurements of sulfur isotopes in samples of Mars delivered to Earth as meteorites showed that early in the history of our planetary neighbor, its sulfur cycle was dominated by atmospheric processes that imparted a mass-independent fractionation. In the wake of this discovery, James Farquhar, a postdoc in Thiemens’s lab, trained his sights on ancient terrestrial rocks. To the great surprise of most geochemists, Farquhar demonstrated that gypsum and pyrite in Earth’s oldest sedimentary successions also record mass-independent fractionation of sulfur isotopes. Like that of Mars, sulfur chemistry on the early Earth appears to have been influenced by photochemical processes that could be carried out only in an oxygen-poor atmosphere. Only after 2.45 billion years ago does this isotopic signal fade (figure 6.5), suggesting, independently of any other line of evidence, that oxygen began to accumulate in our atmosphere early in the Proterozoic Eon.