The Universal Sense
Seth Horowitz
On December 25, 2004, a small cylindrical body separated from the American nuclear-powered Saturn probe Cassini and began its approach to Titan. This probe, called Huygens after the seventeenth-century Dutch astronomer Christiaan Huygens, landed in a “muddy” area on the surface of Titan near the region called Xanadu on January 14, 2005. The probe was battery-powered and the best that was hoped for was good data return
during the two and a half hours it would take to descend through the thick atmosphere, and perhaps a few minutes of surface operation. During its descent, one of the most novel devices aboard was the Huygens Atmospheric Structure Instrument (HASI), which contained accelerometers and a small microphone to capture the forces on the spacecraft during descent and the actual sounds of the Titanian winds. By combining this with data from the Doppler Wind Experiment, which used radio telescopes to calculate the changes in the probe’s position as it swung under its parachutes while descending, researchers were able to reconstruct the sound of the winds on Titan, giving us the first auditory glimpse of this far-off world. When I listened to the recordings on the European Space Agency website, I got a very strange feeling when I realized that, even more than a billion kilometers away, some things could still sound like home.
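As an aside on how the Doppler Wind Experiment could track the probe by radio alone: the method rests on the ordinary Doppler relation between line-of-sight velocity and received frequency. A toy sketch in Python (the carrier frequency and drift speed here are illustrative values I have chosen, not mission data):

```python
C = 299_792_458.0  # speed of light in m/s

def doppler_shift_hz(carrier_hz: float, radial_velocity_ms: float) -> float:
    """Non-relativistic Doppler shift for a transmitter moving along the
    line of sight; motion away from the receiver lowers the frequency."""
    return -carrier_hz * radial_velocity_ms / C

# Toy numbers: a ~2 GHz radio carrier and a 1 m/s wind-driven swing
print(f"{doppler_shift_hz(2.0e9, 1.0):.1f} Hz")  # about -6.7 Hz per m/s
# Integrating velocities recovered this way over time reconstructs the
# probe's swinging path beneath its parachutes.
```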
Aside from Veneras 13 and 14 and Huygens, only the Mars Polar Lander was equipped with sound recording equipment—an instrument called the Mars Microphone. This was a 1.8-ounce digital sampling device that was meant to record short sound samples on the surface of the red planet. With only 512K of onboard memory, it could hold a total of one 10-second sound clip, but for 1998, it was a masterpiece of audio miniaturization, not to mention radiation-proofing. Recording on Mars is a serious challenge compared to recording on Earth, Venus, or even Titan, all of which have substantial atmospheres. The Martian environment, while almost Earth-like compared to the crushing pressure and melting temperatures of Venus or the frigid −180°C solvent-laden mud of Titan, has an atmosphere that is only 1 percent as thick as that of the Earth, so sounds are much quieter. The Mars Microphone had built-in amplifiers that would have boosted the signal up to audible levels, but, sad to say, it met its end when the Mars Polar Lander crashed on the surface due to a programming error. Its replacement was due to launch on the French Mars Netlander, but that project was cancelled in 2004 and the instrument now simply sits idle. Given the low priority most researchers place on hearing the sounds of other places in the solar system, in part due to equipment and launch costs but also because of the relatively high bandwidth required to return real-time sonic information, it may be that the first people who actually hear the sounds of Mars are the ones who will land there. But, frankly, this is shortsighted. Because human space explorers evolved in the terrestrial acoustic environment, when they find themselves on an alien world, sound may not be so much a sensory warning and information source as a source of confusion.
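To put the Mars Microphone’s 512K in perspective, here is a minimal back-of-envelope sketch, assuming the simplest possible encoding (one byte per 8-bit mono sample); the real device’s format is my assumption here, not a published specification:

```python
MEMORY_BYTES = 512 * 1024   # the 512K of onboard memory
CLIP_SECONDS = 10           # one 10-second clip, per the mission description

# Assuming one byte per 8-bit mono sample (an illustrative guess, not a spec),
# the sample rate that exactly fills the memory is:
implied_rate_hz = MEMORY_BYTES / CLIP_SECONDS
print(f"Implied sample rate: {implied_rate_hz:,.0f} Hz")  # ~52,429 Hz
# Anything spent on headers, higher bit depth, or engineering margin would
# push the achievable rate (and thus the audio bandwidth) below this ceiling.
```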
Relatively little attention has been paid to the human psychoacoustics of space exploration, which is odd considering how acoustically stressful it can be. Even Yuri Gagarin, on the first human spaceflight, described the sounds of launch as an “ever-growing din … no louder than one would expect to hear in a jet plane, but it had a great range of musical tones and timbres that no composer could hope to score.” Early spaceflight research in the United States and the Soviet Union did spend a lot of time and resources examining the effects of vibration and force on bodies during simulated launch stress, which is where some of the best work on infrasonic effects on human bodies emerged. Using ground-based data, NASA (and presumably the Soviets) carried out extensive analyses to determine the engineering procedures for establishing acceptable noise levels, effects of reverberation in enclosed spaces, and what kind of bandwidth communication equipment would need in order to avoid having critical verbal commands or alarms be masked by any spacecraft cabin noise. But there has been relatively little recent published work in the United States on noise and sound levels in spacecraft. Now that construction of the International Space Station (ISS) is finally complete and it hosts full-size working crews on a permanent basis, investigating whether there are long-term effects on hearing and downrange cognitive effects would seem a useful endeavor. The ISS is significantly different from any previous spacecraft or habitats. With 837 cubic meters of internal space, it is the largest space structure built to date, and it has been continuously occupied for over eleven years.
Despite constant harping about cost overruns and criticisms that it has been scientifically underutilized, mostly because of delays in completing it, the ISS has provided us with an amazing laboratory to study the effects of long-duration space habitation on humans. But one of the most overlooked stressors in any environment is chronic noise exposure. The ISS suffers from some of the same issues as any other spacecraft: it is basically an air-filled rigid container with fans that have to be active more or less constantly. This creates a constant resonance condition that can be only partially lessened by sound insulation. A paper by R. I. Bogatova and colleagues in 2009 reported that an onboard acoustic survey showed that noise levels were above the safety limits in every region of the crew module, from workstations to sleeping quarters. It may seem a minor point to worry about whether your spacecraft is too loud when you have to put up with 100 dB noise in subway tunnels or kids playing their music at deafening levels, but remember: this is the only place these astronauts can go, so the noise never stops. You can’t exactly open the door and go outside (at least not without a lot of preparation), nor can you safely wear earplugs all the time because they might prevent you from hearing a critical alarm.
Yet acoustic stress can have serious long-term effects on task performance, emotional state, attention, and problem solving, whether you’re on Earth or in orbit. This is a particularly important factor to consider for the future, because human-crewed spacecraft that will one day go to Mars or beyond will probably be built much more along the lines of the ISS than the tight-fitting capsules of Soyuz and Apollo. We have to consider the role of acoustic stress in the daily life and abilities of crews when they start exploring beyond Earth orbit.
And what will they hear when they get there? I doubt that I will be around when the first humans hover in the Jovian cloud decks or slog through the frozen mud of Titan, but I still have hope that there will be human boots on Mars in my lifetime. To date, all extra-vehicular activities on missions have taken place either in Earth or lunar orbit or on the moon, environments with nothing to carry sounds at normal levels of human sensitivity unless the astronauts put their helmets against a structure. The heavily insulated nature of their boots would prevent anything but the strongest ground-based shock and vibration from getting through to them. But Mars will be different. While it is geologically (and biologically, we presume) much less dynamic than Earth, it is a planet with remarkable weather and a chemically active environment, and despite fifty years of attempts to decipher its secrets, it is still mostly unexplored. Gullies have been seen on canyon rims, showing evidence of possible flows of briny water, and ice caps of frozen carbon dioxide expand and retreat with the seasons, leaving strange landforms in their wake. In its more temperate regions, giant sand dunes change on a daily basis, marching around and through craters and valleys, while subsurface tunnels lead into areas that may hold thicker atmospheres and more water. So Mars is not a dead
place—it is merely very different, and so, presumably, are its acoustics.
As I’ve noted, the Martian atmosphere is only 1 percent as thick as Earth’s, and it is mostly composed of carbon dioxide, with temperatures at the height of a human head ranging from 1°C to −107°C. These factors yield three basic differences from sound on Earth. First, the speed of sound would be about 244 meters per second, about 71 percent of the speed of sound at sea level on Earth. Second, the Martian atmosphere would tend to attenuate sounds in the range of 500–1,500 Hz, right in the middle of humans’ most sensitive auditory region. Lastly, the lower density of the atmosphere would drop the relative loudness of any sounds by anywhere from 50 to 70 dB. We can presume that future space suits built for operations on Mars would have built-in microphones to monitor outside sounds, so our Martian explorers will have to learn to compensate for the acoustic differences in their new environment.
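To make the first of those differences concrete, here is a minimal sketch of the ideal-gas speed-of-sound relation, c = √(γRT/M); the gas properties and the chosen Martian temperature are round textbook-style values I have assumed, not measurements:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def speed_of_sound(gamma: float, molar_mass_kg: float, temp_k: float) -> float:
    """Ideal-gas approximation: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

# Earth: mostly N2/O2 (gamma ~1.40, M ~0.02897 kg/mol) at ~288 K sea level
c_earth = speed_of_sound(1.40, 0.02897, 288.0)
# Mars: mostly CO2 (gamma ~1.29, M ~0.04401 kg/mol) at an assumed ~240 K
c_mars = speed_of_sound(1.29, 0.04401, 240.0)

print(f"Earth: {c_earth:.0f} m/s")       # ~340 m/s
print(f"Mars:  {c_mars:.0f} m/s")        # ~242 m/s, near the ~244 m/s above
print(f"Ratio: {c_mars / c_earth:.0%}")  # ~71%
```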
Even presuming that the microphones are connected to amplifiers, properly calibrated to prevent the masking of verbal communications from the ship or other team members, the explorers would still have to cope with the fact that everything around them will sound different. Imagine that they are standing on the surface near a crater wall when the vibrations of a rover trigger a rock slide. But where was the rover? Where was the rock? They will have trouble determining not only how far away a sound is but also exactly where it is coming from. We evolved our auditory localization ability from differences in the time of arrival and relative loudness of sounds at our two ears, differences shaped by the propagation qualities of sound moving through air on Earth; we can’t even do it underwater on our home planet. The low speed of sound and the attenuation of frequencies in our best hearing range will cause problems on Mars. Even identifying the sources of sounds will be more difficult, not only because it will be a completely new environment full of strange sound sources but also because of the loss of spectral information.
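To see roughly how badly an Earth-calibrated localization system would misfire, consider interaural time differences (ITDs), one of the timing cues just described. A hedged sketch using the classic Woodworth spherical-head approximation (the head radius and both sound speeds are assumed round numbers):

```python
import math

HEAD_RADIUS_M = 0.0875  # typical adult value often used in textbooks

def itd_seconds(azimuth_deg: float, sound_speed_ms: float) -> float:
    """Woodworth approximation: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / sound_speed_ms) * (theta + math.sin(theta))

# A source 90 degrees off to one side, on Earth versus Mars
for world, c in (("Earth", 343.0), ("Mars", 244.0)):
    print(f"{world}: {itd_seconds(90, c) * 1e6:.0f} microseconds")
# Earth: ~656 us; Mars: ~922 us. A brain expecting Earth's timing would
# read the stretched Martian ITDs as sources farther off-axis than they are.
```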
Luckily, none of this will interfere with voice communication (anyone dumb enough to remove his helmet and yell on Mars is probably going to have more to worry about than his voice sounding funny), but environmental sounds will not be easy to figure out. While the explorers are trying to get surveying equipment set up, a low-pitched moaning sound comes into their headsets from the outside. It’s only seconds later that they feel a gentle pattering of sand on their helmets. Even the sound of the wind, normally almost white noise, will sound different, the notch in the 500–1,500 Hz range making it sound lower in pitch, more organic. Few astronauts are likely to assume it’s the sound of some ravenous alien, but it will still be disorienting. Human space explorers will take our hundreds of millions of years of auditory evolution and hundreds of thousands of years of human psychophysics with them, and like their ancestors moving into any new environment, they will have to learn to listen for the sounds of danger or the sounds of opportunity. No matter where we go or how far in the future, one of our oldest, most conserved, and most universal sensory systems will both adapt to and drive the evolution of how the human mind will cope with future scenarios.
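That wind-notch effect is easy to approximate for yourself. A minimal sketch using NumPy and SciPy, in which a simple band-stop filter stands in for the real (and surely more complicated) Martian absorption curve; the sampling rate, filter order, and noise stand-in are all invented for illustration:

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

FS = 22_050  # audio sampling rate in Hz (an arbitrary choice)
rng = np.random.default_rng(1)
wind = rng.normal(0, 0.2, FS * 3)  # three seconds of noise as stand-in "wind"

# Band-stop filter roughly mimicking the 500-1,500 Hz Martian notch;
# the 4th-order Butterworth shape is an illustrative guess
sos = signal.butter(4, [500, 1500], btype="bandstop", fs=FS, output="sos")
martian_wind = signal.sosfilt(sos, wind)

# Write both versions out for listening; the filtered one sounds lower, darker
for name, data in (("earth_wind.wav", wind), ("mars_wind.wav", martian_wind)):
    wavfile.write(name, FS, np.int16(data / np.max(np.abs(data)) * 32767))
```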
Chapter 11
You Are What You Hear
I’ve spent the last thirty years of my life listening to things. Actually, check that: I’ve been listening to things for about half a century. I’ve only been paying attention while listening to things for the last thirty years or so. And over the last eighteen years, I’ve been paying a lot of attention to brains (human and otherwise), the sounds those brains are trying to assess, and the sounds they make.
I was lucky enough to get my training in things both musical and scientific at the right time. Despite years of piano lessons as a kid, I seriously got into music only in my early twenties, when the first PCs became affordable and useful and the idea of being able to connect them to musical instruments became practical with the advent of the musical instrument digital interface (MIDI). My entry into auditory science in the 1990s not only trained me in traditional psychophysical and anatomical techniques but also brought me into the field just at the point when EEGs changed from giant, clumsy, relay-driven, steampunk-looking towers to sleeker self-contained units, when PET and MRI scans were becoming practical tools for neural imaging, and when electrophysiology moved from the analog oscilloscope era to the more self-contained digital format. It let me play at the crossroads of sound technology, from making sounds to figuring out how they affect the brain. And I still remember my thought the first time I carried out a successful brain recording.
The brain sings.
During an electrophysiology experiment, the first thing that grabs my attention is the sound the brain makes. Is it a stream of white noise, meaning the electrode is not in yet? Or does it click in time to a stimulus, meaning I’ve hit an auditory responsive area? Are there huge tympanic strikes like a snare drum solo, independent of any sound I play to it, suggesting I’m in some region that has an important rhythm of its own but doesn’t play well with outside sounds? Or are there soft, susurrating changes like the sound of waves on a beach, suggesting that maybe I pushed too far and the electrode is in a ventricle, letting me indirectly hear cardiovascular rhythms in the cerebrospinal fluid?
For over a century, placing a conductive electrode through a surgical opening in the skull into the brain has been the way to gather information about the real-time ionic processes that underlie neurons, which spend their days sending signals outward and receiving information sent inward. Most often the collected data are published as images: plots of amplitude of changes in voltage over time, audiograms to illustrate neural sensitivity to sounds at different frequencies, graphs of changing conductance of individual patches of channels, or diagrams showing connectivity between different responding areas based on the differences in timing of responses from two electrodes in different places. But more often than not, our first data are sonic, as the neuronal electrical changes are passed into a small stereo amplifier and to a perfectly normal speaker.
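For readers who want to hear something like this without an electrophysiology rig, here is a minimal sketch in Python (using NumPy and SciPy) of the same idea in software: a voltage trace, written out as an audio file. The trace here is a synthetic stand-in I have fabricated rather than real data, and the 25 kHz sampling rate is likewise just an assumed value:

```python
import numpy as np
from scipy.io import wavfile

FS = 25_000  # electrode sampling rate in Hz (an assumed value)

# Stand-in for a real recording: background noise plus a few spike-like blips
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.05, FS * 2)                # two seconds of noise
for t in (0.3, 0.8, 0.85, 1.5):                    # spike times in seconds
    i = int(t * FS)
    trace[i:i + 25] += np.exp(-np.arange(25) / 5)  # brief decaying click

# Normalize to 16-bit PCM and write out; playing this file is the software
# equivalent of wiring the electrode amplifier to a speaker
pcm = np.int16(trace / np.max(np.abs(trace)) * 32767)
wavfile.write("electrode.wav", FS, pcm)
```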
These data rarely make it into publication: I’m sorry to report that we don’t yet have research journals that embrace multimedia enough to reproduce audio recordings. Admittedly, it’s a very particular complaint, but I’ve always thought this is a shame. I can usually tell where I’m recording in the brain by how the brain itself sings. Depending on the type of electrode you use, you can listen in on millions of neurons at a time or the ticking of individual ionic channels. You can even identify different types of neurons by the sounds they make in response to a stimulus. If you’re recording using a high-impedance electrode to pick out responses from a single neuron, and you play a brief clicking sound even to an anesthetized subject, you’ll discover that the brain responds, issuing electrochemically based clicks of its own.
Some neurons will click back only at the start of the sound, some only at the end. Some will create a burst of regular clicks. Some will do nothing. Sometimes what you hear is not a response but the highly distinctive “neuronal death song,” which sounds like a plaintive wail (although it is really just ionic channels dumping potassium through inappropriate holes torn in the cell membrane by a misplaced electrode).
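Those response categories are simple enough that a first-pass labeling can be sketched in a few lines. This is a hedged illustration, not any lab’s actual pipeline; the window width and the spike times in the example are values I have invented:

```python
import numpy as np

def classify_response(spike_times_s, stim_on_s, stim_off_s, window_s=0.05):
    """Crude peristimulus labeling: count spikes in short windows just
    after stimulus onset and just after stimulus offset."""
    spikes = np.asarray(spike_times_s)
    onset = np.sum((spikes >= stim_on_s) & (spikes < stim_on_s + window_s))
    offset = np.sum((spikes >= stim_off_s) & (spikes < stim_off_s + window_s))
    if onset and offset:
        return "sustained or burst responder"
    if onset:
        return "onset responder"
    if offset:
        return "offset responder"
    return "no response"

# Invented spike times (seconds) around a click lasting from t=1.0 to t=1.2
print(classify_response([1.003, 1.011, 1.020], stim_on_s=1.0, stim_off_s=1.2))
# -> "onset responder"
```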