The Universal Sense
Seth Horowitz

But we are a visual species, diurnal (mostly awake during the daytime) and trichromatic (able to see a reasonably full range of colors), and we usually describe our surroundings or think about them using visual descriptions. Vision is based on the perception of light, and light defines the fastest possible speed in our neck of the universe, 300 million meters per second. Light from the surface of this page would travel to the back of your head, where your visual cortex is, in about a nanosecond (one-billionth of a second). But our brains get in the way, and not just by blocking the back of our skulls.

Vision, from input to recognition, operates in the time span of several hundred to several thousand milliseconds, millions of times slower than the speed of light. Photons go in through our eyes and strike special photoreceptors in our retinas, and then things slow way down to chemical speeds, activating second-messenger systems in the photoreceptors, whose signals pass through synapses to retinal ganglion cells. The ganglion cells then have to double-check with other retinal cells to see whether they should respond to this input or ignore it, then accordingly fire or not fire a hybrid chemical-electrical signal down the long path of the optic nerve to one of several destination layers in the lateral geniculate nucleus. From there the signal has to synapse again and travel either to the primary visual cortex or to the superior colliculus, but in either case it becomes the most hideous subway ride a poor little visual signal has ever been on.
And it takes hundreds of milliseconds just to get somewhere that might let you say, “Um, did I just see something?” Luckily for us, our brains are also temporally tuned to deal with “real time” based on vision. In real-world terms, changes occurring faster than about fifteen to twenty-five times a second can’t be seen as discrete changes; thanks to numerous neural and psychological adaptations, we instead perceive them as continuous change. This is very handy for the television, film, and computer industries, as they don’t have to release drivers for a yoctosecond graphics card.
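To put numbers on the gap between the photon's trip and the brain's processing, here is a rough back-of-the-envelope sketch. The reading distance and the processing latency below are illustrative assumptions, not figures from the book; under them, visual processing comes out at roughly a hundred million times slower than the light travel time.

```python
# Rough comparison of light travel time vs. visual processing time.
SPEED_OF_LIGHT = 3.0e8        # m/s
READING_DISTANCE = 0.5        # m, assumed distance from page to the back of the head
VISUAL_PROCESSING = 0.2       # s, assumed ~200 ms from photon arrival to recognition

light_travel_time = READING_DISTANCE / SPEED_OF_LIGHT   # about 1.7 nanoseconds
ratio = VISUAL_PROCESSING / light_travel_time

print(f"light travel time : {light_travel_time * 1e9:.1f} ns")
print(f"visual processing : {VISUAL_PROCESSING * 1e3:.0f} ms")
print(f"processing is roughly {ratio:.0e} times slower than the photon's trip")
```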

But hearing is an objectively faster processing system. While vision maxes out at fifteen to twenty-five events per second, hearing is based on events that occur thousands of times per second. The hair cells in your ears can lock to vibrations or specific points in the phase of a vibration at up to 5,000 times per second. At a perceptual level, you can easily hear changes in auditory events 200 times per second or more (substantially more if you’re a bat). A recent study by Daniel Pressnitzer and colleagues demonstrated that some features of auditory perceptual organization occur in the cochlear nucleus, the first place in the brain to receive input from the auditory nerve, within a thousandth of a second of the sound reaching your ears. You figure out where sound is coming from in the superior olivary nucleus, which compares input from your two ears based on microsecond-to-millisecond differences in time of arrival of low-frequency sounds at the ears (or on subdecibel differences in loudness for higher-frequency sounds) and does this with only a few more milliseconds of delay. Even at the level of the cortex, which, as the top of the neural chain, is trafficking in enormous amounts of data from lower down and tends to be relatively slow, there are specialized ion channels that allow for high-speed firing and retain the most basic features of auditory input all the way up from your ears. So even after these signals go through about ten or more synapses to reach your cortex, where most of what we think of as conscious behavior takes place, it takes only about 50 milliseconds for you to identify a sound and point to where it’s coming from.
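The timing cues the superior olive works with are small enough that a quick sketch helps. The snippet below is only an illustration, not anything from the book: it estimates the interaural time difference for a source at a given angle using the Woodworth spherical-head approximation, with an assumed head radius of 8.75 cm and speed of sound of 343 m/s, just to show why the relevant differences are measured in microseconds.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at roughly room temperature
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Estimate the interaural time difference (in seconds) for a sound source.

    Uses the Woodworth spherical-head approximation,
    ITD = (r / c) * (theta + sin(theta)), where theta is the azimuth in
    radians measured from straight ahead. A rough model, not a measurement.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

if __name__ == "__main__":
    for angle in (0, 15, 45, 90):
        itd_us = interaural_time_difference(angle) * 1e6
        print(f"source at {angle:3d} degrees off center -> ITD of about {itd_us:4.0f} microseconds")
```

For a source straight ahead the difference is zero; for a source directly to one side it comes out to roughly 650 microseconds, which is the scale of difference the superior olive is comparing.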

But this is taking a classical view of the auditory pathway, such as you might find on Wikipedia or in an undergraduate class on the biology of hearing. At this level, it seems to be a pretty straightforward system. From the hair cells in the ear, sound is coded and passed along the auditory nerve to the cochlear nuclei, where the signals get segregated by frequency, phase, and amplitude, passed through the trapezoid body to the left and right superior olives to determine where the sound is coming from, and sent on to the nuclei of the lateral lemniscus, which handle complex timing information. From there, the signals pass to the auditory midbrain, the inferior colliculus, which starts integrating some of the individual features into complex sounds, then sends them on to the auditory thalamus, the medial geniculate, which acts as the primary relay station to get things to the auditory cortex.

The auditory cortex is where you start becoming consciously aware of the sounds you heard all the way back in your ears, with specialized regions subserving different auditory specialties, such as the planum temporale, which identifies tones, and Wernicke’s and Broca’s regions, which specialize in comprehension and generation of speech, respectively. It is also here that lateralization of sound processing emerges—in most right-handed people, the informational aspects of speech tend to be processed on the left side of the brain, whereas the emotional content of the speech tends to be processed on the right side. This lateralization leads to an interesting practical phenomenon. Think about when you talk on the phone. Do you put the phone to your right ear or your left? Most right-handed people put it to the right ear because the bulk of the auditory input from your ear crosses over to the left hemisphere to help you understand speech. This is sometimes reversed in left-handed people, but there are left-handed people with speech cognition on the left and emotional content cognition on the right. A similar variation can occur in right-handed people, too. I always found it weird that I have trouble understanding phone calls unless I listen with my left ear, even though I am right-handed. It wasn’t until I volunteered in a neural imaging study years ago that I found out that I am one of the few mutants with flipped content/emotional processing centers. The rest of my brain is mostly normal, but there have been some studies showing that evolutionarily new areas have a greater tendency to show variation in things such as lateralization, and speech comprehension is one of the newest.

But auditory signals from the lower brain stem don’t climb up to the cortex just so you can enjoy music or be irritated by the kid next door practicing the violin badly. Any single element of the auditory path has had dozens or sometimes hundreds of complex papers written about it, in a variety of species, ranging from the connections of individual types of neurons within it to the variability of expression of immediate early genes depending on the type of input you provide to it. As more and more data emerge about a system, you don’t just add up more information; if you have the right mind-set (or enough grad students), you start realizing that the old rules about how things work aren’t holding up so well anymore.

When I was a grad student in the 1990s, I still heard the terms “law of specific nerve energies” (ears respond only to sound, eyes only to light, vestibular organs only to acceleration) and “labeled lines” (neural connections are specific to their modality—sound goes to the ears, which connect to auditory nuclei, and so sound ends up being processed as “hearing,” whereas visual input goes in the eyes and becomes things you see). This concept of a “modular brain” was widely adopted—the idea was that specific regions processed the input they got and then communicated between themselves, with consciousness or the mind emerging as a meta-phenomenon of all the underlying complexity.

But starting around this time, neuroscience reached an interesting tipping point (to borrow a phrase from Malcolm Gladwell). Centuries of anatomical and physiological data were beginning to show strange overlaps. The superior colliculus, formerly thought of as a visual midbrain nucleus, brings maps from all the sensory systems into register with each other. The medial geniculate, while sending most of its output to the auditory cortex via a ventral pathway, also has a dorsal projection that goes to attentional, physiological, and emotional control regions. The auditory cortex can respond to familiar faces. The tens of thousands of published studies (combined with the availability of papers on the Internet, instead of deep in the science library stacks) started providing fodder for what has been called the “binding problem”—how all the disparate sensations get tied together via multisensory integration to form a coherent model of reality. Throw in the gene revolution of the last ten years, with sequencing, protein expression, and the availability of complete genetic libraries of organisms (at reasonable prices for those living on grants), and even more understanding of the brain’s flexibility and complexity emerged, allowing insights into how inherited and environmental conditions can change how the brain responds. And once that particular genie was let out of the bottle, researchers started noticing that some of the things that formerly were addressed only through the old black-box psychological techniques could now be reexamined in a neuroscience context. This begins telling the tale of how sound not only lets us hear things but actually drives some of our most important subconscious and conscious processes. Which brings us to the paradoxical issue of why we don’t usually pay attention to sound even though we use sound to drive our attention to important events in the world around us.

Let’s go back to brain pathways for a minute. Auditory signals exiting from the medial geniculate do not necessarily wind up in parts of the brain scientists think of as auditory. They project to regions that traditionally have been lumped into what used to be called the limbic system, an outdated term that you still see in textbooks and even some current papers. The limbic system includes structures that form the deepest boundary of the cortex and control functions ranging from heart rate and blood pressure to cognitive ones including memory formation, attention, and emotion. The reason calling this a specific system is outdated is that the harder you look at the fine anatomical, biochemical, and processing structures of these regions, the less they seem to be discrete modules. They are more of an interconnected network, with loops that feed both forward to provide information and backward to modify the next incoming signal. The problem of figuring out how sound affects all these systems based on anatomical projections is made more complicated by the fact that there are few direct auditory-only projections to these regions. To understand the complexities of the brain, sometimes you have to start looking at real-world behavior and work backward. So: what does sound have to do with attention?

A while back, I was contacted by the Perkins School for the Blind in Watertown, Massachusetts, to see if I could help with an acoustic problem. It seems that state-mandated fire alarms were panicking their students to such an extent that the kids would actually stay home on days when the alarm was to be tested. The staff wondered if it would be possible to make an alarm that would still get their attention but wouldn’t terrify and disorient them. I started working on the problem (and still am), but the point of the issue was brought home to me recently when a fire alarm went off in the lab. After about thirty seconds of trying to get to my office through the halls, all of which were in plain sight, while simultaneously trying to turn my head to lessen the noise, I realized I was having trouble navigating. How much worse is it for people who not only can’t see where they’re going but have their cane taps and any verbal instructions masked as well?

An alarm—whether a loud bell, a klaxon, or a blaring synthesized voice yelling “Fire, fire” (like in my annoying kitchen smoke detector)—is a psychophysical tool. It presents a very sudden loud noise to get your attention (like my startle at the clumsy coyote’s splash) and then continues repeating that signal. While you don’t startle to subsequent loud sounds, the very loudness itself keeps your arousal levels high, and if the arousal isn’t abated either by the siren ending or by you getting away from it, it’s easy to have arousal change to fear and disorientation. This shows the linkages between sound, attention, and emotion.

At first it might seem that you couldn’t pick two more different aspects of behavior. Attention gets you to focus on specific environmental (or internal) cues, whereas emotions are preconscious reactions to events. Yet both are based on getting you to change your response to your environment before conscious thought takes over. The long, subtle buildup of arousal as you realize something is wrong because you don’t hear the things you expect, and the sudden onslaught of fear and its associated physical responses when you hear an unexpected sound outside your line of sight, both show how the two systems are interrelated. Hearing is the sensory system that operates fast enough to underlie both.

Attention is about picking important information from the sensory clutter that the world (and your brain) throws at you twenty-four hours a day. At the simplest level, it is just the ability to focus on some events while ignoring others. But because attentional processes are, like hearing, continually ongoing, we are rarely aware of them unless we make ourselves aware of our behavior and have our attention drawn to our attention.

Do an experiment: go wash your hands. You’ve been sitting and reading for a while, and who knows where your hands or this book was a few hours ago. Before you get up, though, think for a second about the sound of washing your hands—you’ll probably think of the water splashing in the sink and that’s about it. But this time pay attention to all the sounds. The sound of your footsteps, whether shod or in slippers or socks, padding toward the sink. Did you walk on tile? Is your kitchen echoing with each footstep or are you wearing something soft and absorbent that damps it? When you reach for the faucet handle, do your clothes make a quiet shushing sound? Does the handle squeak a bit? What is the sound of the water as it is first leaving the faucet before it hits the sink? Is there a pattern to the sound of the water as it hits the metal or porcelain or plastic? If you don’t have a stopper in the sink, does the water make a hollow resonant sound as it goes down the drain or does it build up and splash along the edges? What is the sound of water striking your hands as you move them under the flow? And when you turn the water off, did you notice how the flow stops masking the sound of the water draining away and the occasional drips of water from your hands?
