You Are Not a Gadget: A Manifesto


Eventually data and insight might make the story more specific, but for the moment we can at least construct a plausible story of ourselves in terms of grand-scale computational natural history. A myth, a creation tale, can stand in for a while, to give us a way to think computationally that isn’t as vulnerable to the confusion brought about by our ideas about ideal computers (i.e., ones that only have to run small computer programs).

Such an act of storytelling is a speculation, but a speculation with a purpose. A nice benefit of this approach is that specifics tend to be more colorful than generalities, so instead of algorithms and hypothetical abstract computers, we will be considering songbirds, morphing cephalopods, and Shakespearean metaphors.

CHAPTER 13
One Story of How Semantics Might Have Evolved

THIS CHAPTER PRESENTS
a pragmatic alternation between philosophies (instead of a demand that a single philosophy be applied in all seasons). Computationalism is applied to a naturalistic speculation about the origins of semantics.

Computers Are Finally Starting to Be Able to Recognize Patterns

In January 2002 I was asked to give an opening talk and performance for the National Association of Music Merchants, the annual trade show for makers and sellers of musical instruments. What I did was create a rhythmic beat by making the most extreme funny faces I could in quick succession.

A computer was watching my face through a digital camera and generating varied opprobrious percussive sounds according to which funny face it recognized in each moment.

(Keeping a rhythm with your face is
a strange new trick—we should expect a generation of kids to adopt the practice en masse any year now.)

This is the sort of deceptively silly event that should be taken seriously as an indicator of technological change. In the coming years, pattern-recognition tasks like facial tracking will become commonplace. On one level, this means we will have to rethink public policy related to privacy, since hypothetically a network of security cameras could automatically determine where everyone is and what faces they are making, but there are many other extraordinary possibilities. Imagine that your avatar in Second Life (or, better yet, in fully realized, immersive virtual reality) was conveying the subtleties of your facial expressions at every moment.

There’s an even deeper significance to facial tracking. For many years there was an absolute, unchanging divide between what you could and could not represent or recognize with a computer. You could represent a precise quantity, such as a number, but you could not represent an approximate holistic quality, such as an expression on a face.

Until recently, computers couldn’t even see a smile. Facial expressions were embedded deep within the imprecise domain of quality, not anywhere close to the other side, the infinitely decipherable domain of quantity. No smile was precisely the same as any other, and there was no way to say precisely what all the smiles had in common. Similarity was a subjective perception of interest to poets—and irrelevant to software engineers.

While there are still a great many qualities in our experience that cannot be represented in software using any known technique, engineers have finally gained the ability to create software that can represent a smile, and write code that captures at least part of what all smiles have in common. This is an unheralded transformation in our abilities that took place around the turn of our new century. I wasn’t sure I would live to see it, and it continues to surprise me that engineers and scientists I run across from time to time don’t realize it has happened.

Pattern-recognition technology and neuroscience are growing up together. The software I used at NAMM was a perfect example of this intertwining. Neuroscience can inspire practical technology rather quickly. The original project was undertaken in the 1990s under the auspices of Christoph von der Malsburg, a University of Southern California
neuroscientist, and his students, especially Hartmut Neven. (Von der Malsburg might be best known for his crucial observation in the early 1980s that synchronous firing—that is, when multiple neurons go off at the same moment—is important to the way that neural networks function.)

In this case, he was trying to develop hypotheses about what functions are performed by particular patches of tissue in the visual cortex—the part of the brain that initially receives input from the optic nerves. There aren’t yet any instruments that can measure what a large, complicated neural net is doing in detail, especially while it is part of a living brain, so scientists have to find indirect ways of testing their ideas about what’s going on in there.

One way is to build the idea into software and see if it works. If a hypothesis about what a part of the brain is doing turns out to inspire a working technology, the hypothesis certainly gets a boost. But it isn’t clear how strong a boost. Computational neuroscience takes place on an imprecise edge of scientific method. For example, while facial expression tracking software might seem to reduce the degree of ambiguity present in the human adventure, it actually might add more ambiguity than it takes away. This is because, strangely, it draws scientists and engineers into collaborations in which science gradually adopts methods that look a little like poetry and storytelling. The rules are a little fuzzy, and probably will remain so until there is vastly better data about what neurons are actually doing in a living brain.

For the first time, we can at least tell the outlines of a reasonable story about how your brain is recognizing things out in the world—such as smiles—even if we aren’t sure of how to tell if the story is true. Here is that story …

What the World Looks Like to a Statistical Algorithm

I’ll start with a childhood memory. When I was a boy growing up in the desert of southern New Mexico, I began to notice patterns on the dirt roads created by the tires of passing cars. The roads had wavy, corduroy-like rows that were a little like a naturally emerging, endless sequence of speed bumps. Their spacing was determined by the average speed of the drivers on the road.

When your speed matched that average, the ride would feel less bumpy. You couldn’t see the bumps with your eyes except right at sunset, when the horizontal red light rays highlighted every irregularity in the ground. At midday you had to drive carefully to avoid the hidden information in the road.
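The arithmetic behind that experience is simple enough to sketch: bumps spaced d meters apart, driven over at speed v, shake the car f = v / d times per second, so the washboard’s spacing encodes the speed that carved it. A minimal Python sketch, with invented numbers:

```python
# Back-of-the-envelope: tires driven over bumps spaced d meters apart at
# speed v strike them at f = v / d bumps per second. The spacing and the
# speeds below are invented for illustration.
d = 0.6                      # hypothetical bump spacing in meters
for kmh in (20, 40, 60):
    v = kmh / 3.6            # convert km/h to m/s
    print(f"{kmh} km/h -> {v / d:.1f} bumps per second")
```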

Digital algorithms must approach pattern recognition in a similarly indirect way, and they often have to make use of a common procedure that’s a little like running virtual tires over virtual bumps. It’s called the Fourier transform. A Fourier transform detects how much action there is at particular “speeds” (frequencies) in a block of digital information.

(Think of the graphic equalizer display found on audio players, which shows the intensity of the music in different frequency bands. The Fourier transform is what does the work to separate the frequency bands.)
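To make the idea concrete, here is a minimal sketch in Python, using NumPy’s FFT, of “running virtual tires” over a noisy road profile to recover the dominant bump frequency; the road and all its numbers are invented for illustration:

```python
# A minimal sketch of "detecting how much action there is at each speed":
# build a noisy road profile with one dominant ripple, then recover the
# ripple's spatial frequency with NumPy's FFT. All numbers are invented.
import numpy as np

n = 1024
x = np.arange(n) * 0.1                      # positions sampled every 0.1 m
profile = np.sin(2 * np.pi * 0.5 * x)       # corduroy ripples at 0.5 cycles/m
profile += 0.3 * np.random.randn(n)         # loose gravel

spectrum = np.abs(np.fft.rfft(profile))     # energy at each spatial frequency
freqs = np.fft.rfftfreq(n, d=0.1)           # frequency axis, cycles per meter
peak = spectrum[1:].argmax() + 1            # skip the constant (zero-frequency) bin
print(f"dominant ripple: {freqs[peak]:.2f} cycles per meter")
```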

Unfortunately, the Fourier transform isn’t powerful enough to recognize a face, but there is a related but more sophisticated transform, the Gabor wavelet transform, that can get us halfway there. This mathematical process identifies individual blips of action at particular frequencies in particular places, while the Fourier transform just tells you what frequencies are present overall.
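A Gabor filter is just a sinusoid at one frequency windowed by a Gaussian, so convolving it with a signal reports how much of that frequency occurs near each location. A toy one-dimensional sketch follows; real face trackers apply two-dimensional Gabor wavelets to images, and the parameters here are arbitrary:

```python
# A toy 1-D Gabor filter: a sinusoid at one frequency, windowed by a
# Gaussian so it responds only where that frequency actually occurs.
import numpy as np

def gabor_kernel(freq, sigma, length=64):
    t = np.arange(length) - length // 2
    window = np.exp(-t**2 / (2 * sigma**2))          # Gaussian envelope
    return window * np.exp(2j * np.pi * freq * t)    # complex sinusoid

signal = np.zeros(512)
signal[100:160] = np.sin(2 * np.pi * 0.2 * np.arange(60))   # a localized burst

# The response magnitude peaks where the burst sits, not merely reporting
# that the frequency exists somewhere (which is all the Fourier transform says).
response = np.abs(np.convolve(signal, gabor_kernel(0.2, 10.0), mode="same"))
print("burst detected near sample", response.argmax())      # prints ~130
```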

There are striking parallels between what works in engineering and what is observed in human brains, including a Platonic/Darwinian duality: a newborn infant can track a simple diagrammatic face, but a child needs to see people in order to learn how to recognize individuals.

I’m happy to report that Hartmut’s group earned some top scores in a government-sponsored competition in facial recognition. The National Institute of Standards and Technology tests facial recognition systems in the same spirit in which drugs and cars are tested: the public needs to know which ones are trustworthy.

From Images to Odors

So now we are starting to have theories—or at least are able to tell detailed stories—about how a brain might be able to recognize features
of its world, such as a smile. But mouths do more than smile. Is there a way to extend our story to explain what a word is, and how a brain can know a word?

It turns out that the best way to consider that question might be to consider a completely different sensory domain. Instead of sights or sounds, we might best start by considering the odors detected by a human nose.

For twenty years or so I gave a lecture introducing the fundamentals of virtual reality. I’d review the basics of vision and hearing as well as of touch and taste. At the end, the questions would begin, and one of the first ones was usually about smell: Will we have smells in virtual reality machines anytime soon?

Maybe, but probably just a few. Odors are fundamentally different from images or sounds. The latter can be broken down into primary components that are relatively straightforward for computers—and the brain—to process. The visible colors are merely words for different wavelengths of light. Every sound wave is actually composed of numerous sine waves, each of which can be easily described mathematically. Each one is like a particular size of bump in the corduroy roads of my childhood.

In other words, both colors and sounds can be described with just a few numbers; a wide spectrum of colors and tones is described by the interpolations between those numbers. The human retina need be sensitive to only a few wavelengths, or colors, in order for our brains to process all the intermediate ones. Computer graphics work similarly: a screen of pixels, each capable of reproducing red, green, or blue, can produce approximately all the colors that the human eye can see. A music synthesizer can be thought of as generating a lot of sine waves, then layering them to create an array of sounds.
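The synthesizer point fits in a few lines: a tone described entirely by a handful of (frequency, amplitude) pairs, where nudging those few numbers reaches all the tones in between. A minimal NumPy sketch with arbitrary partials:

```python
# A minimal sketch of the layered-sine-wave idea: a tone described entirely
# by a few (frequency, amplitude) pairs. The partials here are arbitrary.
import numpy as np

rate = 44100                                  # samples per second
t = np.arange(rate) / rate                    # one second of time
partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]   # (Hz, amplitude)

tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
tone /= np.abs(tone).max()                    # normalize to [-1, 1] for playback
```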

Odors are completely different, as is the brain’s method of sensing them. Deep in the nasal passage, shrouded by a mucous membrane, sits a patch of tissue—the olfactory epithelium—studded with neurons that detect chemicals. Each of these neurons has cup-shaped proteins called olfactory receptors. When a particular molecule happens to fall into a
matching receptor, a neural signal is triggered that is transmitted to the brain as an odor. A molecule too large to fit into one of the receptors has no odor. The number of distinct odors is limited only by the number of olfactory receptors capable of interacting with them. Linda Buck of the Fred Hutchinson Cancer Research Center and Richard Axel of Columbia University, winners of the 2004 Nobel Prize in Physiology or Medicine, have found that the human nose contains about one thousand different types of olfactory neurons, each type able to detect a particular set of chemicals.

This adds up to a profound difference in the underlying structure of the senses—a difference that gives rise to compelling questions about the way we think, and perhaps even about the origins of language. There is no way to interpolate between two smell molecules. True, odors can be mixed together to form millions of scents. But the world’s smells can’t be broken down into just a few numbers on a gradient; there is no “smell pixel.” Think of it this way: colors and sounds can be measured with rulers, but odors must be looked up in a dictionary.
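The ruler-versus-dictionary distinction is easy to caricature in code. In this toy model (the receptor names and activation patterns are invented), any point between two colors is itself a color, while an odor either matches a stored pattern or is simply unrecognized:

```python
# Toy contrast between the two kinds of senses. Receptor names and
# patterns below are invented for illustration only.

def mix_colors(a, b, w):
    """Interpolate between two RGB colors; every w in [0, 1] is a color."""
    return tuple((1 - w) * x + w * y for x, y in zip(a, b))

# An odor is identified by which set of receptor types a molecule triggers.
smell_dictionary = {
    frozenset({"R12", "R87", "R301"}): "ripe apple",
    frozenset({"R12", "R44"}): "cut grass",
}

def smell(triggered_receptors):
    # No interpolation: an unknown activation pattern is just unrecognized.
    return smell_dictionary.get(frozenset(triggered_receptors), "unknown")

print(mix_colors((255, 0, 0), (0, 0, 255), 0.5))   # a perfectly good purple
print(smell({"R12", "R87", "R301"}))               # 'ripe apple'
print(smell({"R12", "R87"}))                       # 'unknown': no halfway smell
```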

That’s a shame, from the point of view of a virtual reality technologist. There are thousands of fundamental odors, far more than the handful of primary colors. Perhaps someday we will be able to wire up a person’s brain in order to create the illusion of smell. But it would take a lot of wires to address all those entries in the mental smell dictionary. Then again, the brain must have some way of organizing all those odors. Maybe at some level smells do fit into a pattern. Maybe there’s a smell pixel after all.

Were Odors the First Words?

I’ve long discussed this question with Jim Bower, a computational neuroscientist at the University of Texas at San Antonio, best known for making biologically accurate computer models of the brain. For some years now, Jim and his laboratory team have been working to understand the brain’s “smell dictionary.”

They suspect that the olfactory system is organized in a way that has little to do with how an organic chemist organizes molecules (for instance, by the number of carbon atoms on each molecule). Instead, it
more closely resembles the complex way that chemicals are associated in the real world. For example, a lot of smelly chemicals—the chemicals that trigger olfactory neurons—are tied to the many stages of rotting or ripening of organic materials. As it turns out, there are three major, distinct chemical paths of rotting, each of which appears to define a different stream of entries in the brain’s dictionary of smells.

Keep in mind that smells are not patterns of energy, like images or sounds. To smell an apple, you physically bring hundreds or thousands of apple molecules into your body. You don’t smell the entire form; you steal a piece of it and look it up in your smell dictionary for the larger reference.

To solve the problem of olfaction—that is, to make the complex world of smells quickly identifiable—brains had to have evolved a specific type of neural circuitry, Jim believes. That circuitry, he hypothesizes, formed the basis for the cerebral cortex—the largest part of our brain, and perhaps the most critical in shaping the way we think. For this reason, Jim has proposed that the way we think is fundamentally based in the olfactory.

A smell is a synecdoche: a part standing in for the whole. Consequently, smell requires additional input from the other senses. Context is everything: if you are blindfolded in a bathroom and a good French cheese is placed under your nose, your interpretation of the odor will likely be very different than it would be if you knew you were standing in a kitchen. Similarly, if you can see the cheese, you can be fairly confident that what you’re smelling is cheese, even if you’re in a restroom.

Recently, Jim and his students have been looking at the olfactory systems of different types of animals for evidence that the cerebral cortex as a whole grew out of the olfactory system. He often refers to the olfactory parts of the brain as the “Old Factory,” as they are remarkably similar across species, which suggests that the structure has ancient origins. Because smell recognition often requires input from other senses, Jim is particularly interested to know how that input makes its way into the olfactory system.
