The Universal Sense

by Seth Horowitz

But focusing too much on the use of sound in highly technical fields ignores the huge role sound plays in the everyday lives of the billions of people with daily access to smartphones and the Internet. During a discussion about the Just Listen project, Brad Lisle and I were talking about how sound could not only be used in teaching and media applications but also serve as an interactive tool to get people more engaged in their sonic world.
One of the projects we got most excited about was one we called Worldwide Ear, a proposed citizen science project to carry out acoustic environmental mapping using simple recording equipment, providing publicly available crowd-sourced information on global bioacoustics. The project would be run in a similar fashion to other web-based citizen science experiments such as Galaxy Zoo (www.galaxyzoo.org), which allows users to identify galactic phenotypes in Hubble images, or Snowtweets (www.snowtweets.org), which allows users to tweet the depth of snow in their area and then observe global snow depths based on the collected data using an application called Snowbird. Anybody with a cell phone or recording device and Internet access could participate, starting by entering information about the make and model of their recording device and their location. Two or three times a day, a user would go to a specific location, orient and mount their equipment in the same way, and record sounds for a minimum of five minutes. The recording would then be uploaded to a central server, where it would be translated into a common format (e.g., MP3) and linked to a geographical database program with user-uploadable content, such as Google Earth or NASA’s World Wind.
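
To make the proposed workflow concrete, here is a minimal sketch of what a single Worldwide Ear submission might look like in code. Everything here is an assumption for illustration: the field names, the Recording class, and the JSON payload are invented, since the project exists only as a proposal.

```python
# Hypothetical sketch of one Worldwide Ear submission record.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Recording:
    device_make: str        # make of the phone or recorder
    device_model: str
    latitude: float         # recording site, decimal degrees
    longitude: float
    started_at: str         # ISO 8601 timestamp, UTC
    duration_s: int         # at least 300 seconds (five minutes)
    audio_file: str         # local file; the server would transcode to MP3

def build_payload(rec: Recording) -> str:
    """Serialize a recording's metadata for upload to the central server."""
    return json.dumps(asdict(rec), indent=2)

rec = Recording(
    device_make="ExamplePhone", device_model="X1",
    latitude=41.8268, longitude=-71.4025,        # example coordinates
    started_at=datetime.now(timezone.utc).isoformat(),
    duration_s=300, audio_file="backyard_morning.wav",
)
print(build_payload(rec))   # a real client would POST this plus the audio file
```

The server side would then handle the transcoding to a common format and the link to a mapping tool such as Google Earth or World Wind.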

Why would this be useful? Why would people want to bother participating in such a project? Because by getting enough people to participate, we could create a remarkable environmental tool to assess global acoustic ecology. Acoustic ecology is the study of changes in sound caused by modifications to the environment, whether from human or natural causes. Sound can be both a measure of environmental factors, such as the loss of specific birdsongs in an area that has undergone significant development, and an instigator of environmental changes, as shown by correlations between poorer cardiovascular health and excessive noise levels in inner-city areas. As changes in sound are among the most pervasive things we can sense, the Worldwide Ear project would allow us to examine the ecological “health” of a region over time scales ranging from hours to years with relatively simple equipment.

If the Worldwide Ear project gets under way, it would not only give us an aural window on our world, letting us be acoustic tourists, but also could be an incredibly powerful research, policy, and educational tool. Such a freely accessible acoustic database could provide lawmakers and acousticians with information on urban bioacoustics, the sonic environment of cities. It could let someone compare low-frequency noise bands at different times of day in Rome, a populated site with a great deal of vehicular traffic, versus Venice, a site with many fewer roads but comparable human foot traffic. These data would be important for assessing human health and epidemiology by allowing researchers to create maps of specific urban and rural regions based on sound levels for specific frequency bands (similar to isobars used in meteorology to create weather maps) and then look for correlations with human health or cognitive issues. Do quiet areas have lower cardiac risk? Do students in schools near airports show lower performance?
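
As a rough sketch of the kind of analysis such a database could support, the snippet below computes relative sound levels in a few frequency bands from a mono recording. The band edges, filter choices, and use of scipy are my own assumptions for illustration; a real survey would need calibrated microphones to report absolute sound pressure levels.

```python
# Sketch: relative levels (dB re full scale) in low/mid/high frequency bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_level_db(signal, fs, lo_hz, hi_hz):
    """Relative level of the signal's energy between lo_hz and hi_hz."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)
    rms = np.sqrt(np.mean(band ** 2))
    return 20 * np.log10(rms + 1e-12)            # tiny offset avoids log(0)

fs = 44100                                       # sample rate of the recording
t = np.arange(fs * 5) / fs                       # five seconds of stand-in "street" audio
audio = 0.1 * np.sin(2 * np.pi * 80 * t) + 0.01 * np.random.randn(t.size)

for lo, hi in [(20, 200), (200, 2000), (2000, 8000)]:   # low / mid / high bands
    print(f"{lo:5d}-{hi:<5d} Hz: {band_level_db(audio, fs, lo, hi):6.1f} dBFS")
```

Band levels like these, tagged with location and time, are all a map of “acoustic isobars” would need.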

The data would also be useful for understanding economics. Are farming communities that use “bird cannons” getting better crop yields because the cannons frighten off birds, or lower yields because of increased insect pest activity in the birds’ absence? Can correlations be made between the acoustic conditions and the socioeconomic status of a region, that is, are property values higher in quieter areas, or is there more wealth in regions with noisy manufacturing? And what about monitoring the ecological health of an area? Recordings made at different times of day from different sites could be combined with automatic animal call recognition software to identify the species of birds, frogs, insects, and other acoustically active animals, to map changes in population activity across a year, or even to track changes in population density by comparing the sounds of a single species over multiple years or locations. This is an application that could be implemented today, yet the data acquired could help researchers assess and improve human and ecological health on Earth for decades to come, just by people listening where they live with the tools they already have.
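
As a toy illustration of the population-activity idea, suppose a call-recognition tool emits a species label and a timestamp for each detected call (the detection records below are invented); a simple tally per species per month is already enough to sketch activity trends across seasons or years.

```python
# Toy aggregation of call detections into a monthly activity index per species.
from collections import Counter

detections = [                                   # invented (month, species) detections
    ("2011-05", "spring peeper"), ("2011-05", "spring peeper"),
    ("2011-05", "American robin"), ("2012-05", "spring peeper"),
]

activity = Counter(detections)                   # (month, species) -> call count
for (month, species), count in sorted(activity.items()):
    print(f"{month}  {species:15s} {count:3d} calls")
```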

But what lies beyond mapping the whole Earth with our ears? Hearing is our exploratory sense, the one that reaches out ahead and behind us, in dark and light. So it is somewhat ironic that its role has been so incredibly limited in the greatest human adventure of all, the exploration of space.

At the ungodly hour of 6:30 A.M. on October 9, 2009, my wife and I headed over to the NASA Planetary Data Center at Brown University to watch the LCROSS spacecraft impact the lunar surface in aid of NASA’s search for water on the moon. Because of the hour, I was a little concerned that no one would show up at the event. I was very happy to see that I was wrong—two rooms were full of students and faculty, eyes locked on the screens showing the real-time link from the LCROSS spacecraft on NASA TV. When the final countdown started, the room became silent. And there, in the middle of a black region untouched by sunlight probably for billions of years, was the tiniest of blips on the screen, so small that I wondered if it was just signal interference. The room was full of quiet murmurs, mostly from people who had missed it, wondering if something had gone wrong.

I realized that one of the things making many of us wonder was that this huge impact yielded only the tiniest bit of input to any of our senses. While intellectually everyone in that room knew that there wouldn’t have been any sound even if they had been standing on the moon within sight of the impact (except for vibrations transmitted through their space suits’ boots), the silence divorced us from the event.

We expect sound when something huge happens—and if there is none to be had, we provide it with applause and cheers (or screams and yells). For every extraterrestrial landing event, the success of the landing, out of sight millions of miles away from the Earth, with no access to sound or images, is usually signaled by a flip of a data bit, but heralded by the cheers of the mission planners. (Or the silence, which weighs heavier and heavier when the signal doesn’t come through, as with the Mars Polar Lander.) For anyone old enough to remember it (and allowed to stay up late enough to watch), the first moon landing was a pivotal event. For a brief moment, no one cared about the Vietnam War, student protests, or racial problems. Everyone was watching a human set foot on another world. But what stands out for me, and what I still carry burned into my memory, is not the blurry video signals but the noisy, static-ridden, and highly compressed voice of Neil Armstrong saying, “That’s one small step for a man, one giant leap for mankind.” (And yes, he did say the “a”—subsequent analysis of the old radio signals a few years back showed that it was there but had gotten lost in data compression.)

But these are the sounds humans provide. The entire arena of sound off Earth is something that has gotten only the most cursory attention, probably because we think of space as silent. Sound requires some medium to propagate, and human ears are
primarily sensitive to airborne sound. But while interplanetary and intergalactic space have vacuum levels so dramatic that we have trouble simulating them even with expensive test chambers, they are not truly empty. There are particles out there, a few hydrogen atoms in each cubic meter—not much compared to the 10
25
particles in the same volume at sea level on Earth, but with a powerful enough source, it’s still possible to get what would pass for an acoustic oscillation. The problem is that propagation of acoustic waves in space happens over such large distances, over such long time scales, and in such a thin medium that to hear the B-flat drone of a black hole fifty-seven octaves below middle C (giving it a period of oscillation of over 10 million years), you need either really big ears and a lot of patience or access to the NASA Chandra X-ray observatory, as Andrew Fabian, the drone’s discoverer, had.
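
The arithmetic behind that enormous period is just repeated halving, as in the rough check below; which B-flat the astronomers counted from is an assumption that changes the answer by a factor of two, but either way the period lands on the order of ten million years.

```python
# Rough check of the black hole drone's period: a B-flat lowered 57 octaves
# (the frequency halves with each octave). The starting pitch is approximate.
SECONDS_PER_YEAR = 3.156e7

for b_flat_hz in (233.1, 466.2):                 # B-flat just below / above middle C
    f_hz = b_flat_hz / 2 ** 57                   # 57 octaves down
    period_myr = (1.0 / f_hz) / SECONDS_PER_YEAR / 1e6
    print(f"start at {b_flat_hz:5.1f} Hz -> {f_hz:.2e} Hz, period ~{period_myr:.0f} million years")
```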

For the most part, when you hear what’s called “sounds from space,” what you are actually hearing is “sonification,” the translation of electromagnetic wave phenomena such as radio waves into acoustic signals. You need the translation for a number of reasons. First of all, you don’t have any sensors capable of picking up radio-frequency electromagnetic radiation. Next, while radio information has frequency, period, and amplitude and is subject to a lot of the same loss factors as sound, including reflection, refraction, and spreading loss, radio signals spread and vibrate about a million times faster than sound does on Earth.
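
At its simplest, sonification is a change of time base: record how a signal oscillates, then replay those oscillations fast enough (or slow enough) to land in the range human ears handle. The sketch below is a generic illustration of that idea, not any particular mission’s pipeline; the speed-up factor is arbitrary and would be chosen per data set.

```python
# Generic sonification sketch: compress a signal's time base so its
# oscillations fall in the audible range, then resample for audio playback.
import numpy as np

def sonify(samples, original_rate_hz, speedup, audio_rate_hz=44100):
    """Reinterpret samples on a faster time base, then resample to the audio rate.

    A component at f Hz in the input ends up at f * speedup in the output.
    """
    effective_rate = original_rate_hz * speedup          # pretend the data was recorded this fast
    duration_s = len(samples) / effective_rate
    t_in = np.linspace(0.0, duration_s, num=len(samples), endpoint=False)
    t_out = np.linspace(0.0, duration_s, num=int(duration_s * audio_rate_hz), endpoint=False)
    return np.interp(t_out, t_in, samples)

fs_data = 10.0                                           # data sampled ten times per second
t = np.arange(0.0, 3600.0, 1.0 / fs_data)                # one hour of synthetic data
data = np.sin(2 * np.pi * 0.5 * t)                       # a 0.5 Hz oscillation, far below hearing
audio = sonify(data, original_rate_hz=fs_data, speedup=880.0)  # 0.5 Hz -> 440 Hz
print(f"{len(audio)} audio samples, about {3600 / 880:.1f} s when played at 44.1 kHz")
```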

But we have over a century of experience converting radio signals into sound right here on Earth, and nearly eighty years ago Karl Jansky built the first operational radio telescope, allowing him to listen in to the radio emissions of the Milky Way. Since then, sonification of electromagnetic and space-borne acoustic events has let us listen to the sounds of our sun and other stars ringing as convection currents cycle heat from their surfaces, and to the howling of the solar wind as charged particles spread out through the solar system. Ground- and satellite-based systems let us hear the effects of this solar wind on the Earth’s upper atmosphere, from the eerie high-pitched kilometric radiation of the auroras to the “Earth chorus” formed by free electrons spiraling through the Van Allen radiation belts. Large radio telescopes and orbital probes have detected the same kind of phenomenon around Jupiter and Saturn, not only giving us insight into the structure of their electromagnetic environments but also showing commonalities in how planets interact with the solar wind in both the inner and outer parts of the solar system. And as we’ve pushed the limits of our remote exploration further and further, John Cramer of the University of Washington has sonified the microwave energy left over from the big bang, letting us “hear” the 14-billion-year-old echoes of the creation of the universe: a slowly changing, mournful sound, as if the universe were having second thoughts.

But despite our endless fascination with the energies of interplanetary and interstellar space, we are planet dwellers. Even I have to admit to being much more interested in what goes on at or under the surface of Mars or in the ethane lakes of Titan than in the incredible power of a solar coronal mass ejection, even though the latter might impact me more directly by knocking out my GPS. Our acoustic experience on the planets has been extremely limited. Despite eighteen successful landing missions on other bodies in our solar system, only three probes have had dedicated microphones built in.

In 1981, the Soviet Union launched the Venera 13 and 14 probes to take measurements of the atmosphere and carry out experiments on the surface of Venus. Previous Venera missions had landed and survived on the surface for up to two hours, but all of them had their share of problems, ranging from stuck camera lens covers to failure of the soil analysis experiments due to damaged pressure seals. But Venera 13 in particular was remarkable in that it not only survived the crushing atmospheric pressure and high temperatures but sent back high-resolution images of the surface, analyses of soil samples drilled from the ancient basaltic ground, and, for the first time, sound from another world. The Venera 13 and 14 probes had an instrument called the Groza-2, designed by investigator Leonid Ksanfomaliti, which consisted of seismometers for detecting surface vibration and small microphones for picking up airborne sound. The microphones were heavily armored and relatively insensitive, designed more for survival in the pressure-cooker atmosphere than for high-fidelity recordings, but they worked for the several hours of descent and several more on the surface, in the midst of sulfuric acid clouds. The microphones detected the sound of thunder as the probes were descending, and the low susurration of slow, thick winds on the surface.

I tried for several years to get copies of the actual recordings, but neither the original tapes nor any backups seem to have been translated into any contemporary format. The closest I could come to hearing the original sounds was a set of low-sample-rate plots of the waveforms showing how the Groza-2 instrument on Venera 13 picked up the sound of the lens cap ejecting and striking the surface, followed by the sound of drilling during a soil sampling experiment and the sound of the sample being placed in the experimental chamber. Before we roll our eyes at such a limited return, bear in mind that the recordings were made using technology from thirty years ago on the surface of Venus, at temperatures of about 455°C and under 89 Earth atmospheres of pressure. I have trouble getting decent recordings when it’s raining.

The only other solar system body to reveal any of its sounds was not even a planet but Titan, the largest moon of Saturn. Titan is a peculiar moon: 50 percent larger than our own, making it almost planet-size, and tidally locked with its host planet, which gives it a day of 15 days and 22 hours in Earth time. Its distance from the Sun, 1.5 billion kilometers, would make it just another icy or rocky body, except for one thing: it has an atmosphere similar in composition to that of Earth, mostly nitrogen with traces of organics such as ethane and methane, which form its clouds. Rather than the thin, wispy envelope you might expect from a small body, its atmosphere is actually about 50 percent thicker than that of Earth. Under the influence of both energy from the sun and tidal stresses from Saturn, Titan is an extremely dynamic place, covered in lakes and rivers of methane and ethane, snows of methane, sand made of frozen hydrocarbons forming massive dunes, and slow-moving but powerful weather.
