On the way to the airport, Fine told May a bit more about growing up in Scotland as the daughter of a philosopher and a famous children’s book writer. Her name, she said, appeared in some of her mother’s books. Certain that Fine was pretty, May wanted to look at her more closely in the bright sunlight of the car, but he feared getting busted, so mostly he looked straight ahead.
As they neared the airport, Fine asked if May was willing to do some follow-up testing—he could return to San Diego or she could travel to Davis.
“Either way,” he said. “I’m game.”
At the terminal, she thanked him for his time and for being such a good sport during the tests. She told him how unlikely—impossible, really—it was for a scientist to find such a rare case as his, and such a bright and willing subject. She called it “once in a lifetime,” and May knew what she meant. It was how he had come to think of his encounter with Dr. Goodman, the adventure he’d undertaken over the last five months, and about his feeling for these scientists in San Diego who were trying, with a few ghosts from history as their guides, to understand him.
May went back to work the same day he returned from San Diego. At night, he told Jennifer and his kids about his tests, about Fine, and about the idea that the scientists might put together some interesting theories about his case.
“And remember this,” he warned his kids. “I don’t fall for illusions. So don’t try to pull any fast ones.”
The next day he told his friend Bashin about the tests. Bashin couldn’t get enough of the information.
“It’s even more fascinating than we realized,” Bashin said. “What’s their thinking on this?”
“They haven’t told me yet,” May replied. “I think they’re still trying to figure me out.”
In San Diego, Fine gave a control group of subjects the same tests she had given May, removing fine detail from the images to simulate May’s low acuity. They still got everything right. They still perceived the illusions. That meant May’s results were not due to poor acuity. They were due to something else.
Fine kept puzzling over May’s case. He could perceive motion beautifully but was shockingly bad at other critical aspects of vision. Late one night, she wrote an e-mail to MacLeod:
I keep thinking about Mike suddenly seeing the cube in depth when it was put in motion. It’s a little like the way a cat chases a ball of string when it’s moving. Maybe Mike has a cat brain? Am I going crazy?
She was horrified a moment later to realize she’d sent the e-mail to May rather than to MacLeod. She received a reply a few minutes later.
Glad to know I have a cat brain. Must go out for cat food now. Mike.
In subsequent discussions, Fine and MacLeod came to think of May’s visual world as much like an abstract painting, filled with colorful and mostly flat and meaningless shapes. When people asked what Fine thought it was like for May to see, that was the best description she could give—that it was like looking at an abstract painting, that he had Picasso eyes.
Except when things moved. Motion, it seemed, lent a sense of depth to May’s visual world.
Over the next several weeks, May traveled to San Diego and Fine traveled to Davis for more testing. The results were always the same: he was excellent at motion and color, and terrible at understanding faces, seeing in depth (except when something was moving), and recognizing objects.
To May, this dichotomy remained as mystifying as ever. To Fine, however, it was all starting to fall into place. As she further contemplated the test results, reviewed cases that dated back to the 1700s, and lay awake at night thinking, she began to understand not just why May saw the way he did, but also what it meant for his future and whether he might improve. Her insights were grounded in a new way of thinking about how vision works, a way of thinking that just a few scientists were beginning to explore.
CHAPTER FOURTEEN
Before the middle of the nineteenth century, vision was widely thought to be a passive experience, one in which objects were simply “out there” to be seen. Various explanations were put forth to describe the process, including the idea that the eye shot “fingers” of light onto objects in order to “touch” them, or that objects broadcast images of themselves to the observer. These accounts supposed the world and its objects to be self-evident; seeing them did not require the brain to make inferences or engage in problem solving or do any of its usual cognitive work. And that made sense. Seeing felt effortless and automatic, if it felt like anything at all.
But then, starting in 1850 with the renowned German scientist Hermann von Helmholtz, and continuing in the middle of the twentieth century with psychologist Richard Gregory and others, scientists offered a startlingly different explanation for the brain’s role in vision. Human beings, they argued, depended to a great extent on knowledge in order to see, to make sense of what Gregory called the “shadowy ghosts” that were the retinal images in our eyes.
The idea seemed preposterous on its face. How could knowledge make it possible to see? Surely, the most uneducated person saw as well as the most learned. But Helmholtz, Gregory, and the others were not referring to a knowledge of facts and figures of the kind found in encyclopedias. By knowledge, they meant
a set of assumptions about the world and the objects that exist in it.
This set of assumptions, they argued, was so deeply ingrained in the human brain that people imposed them instantaneously, automatically, and unconsciously on the visual data streaming in from the eyes. No one realized they were using knowledge to interpret the visual scene, but everyone did it all the time.
There was powerful evidence to support this theory. Among the most compelling examples was the existence of visual illusions. If objects were simply out there to be seen, visual illusions wouldn’t occur—people would see things properly, as they actually were. Yet there were numerous visual illusions. What caused them?
Gregory and others argued that many visual illusions resulted when a person’s implicit knowledge—that instant, automatic, and unconscious set of assumptions about the world and its objects—dominated over contrary evidence from the eye.
The hollow face illusion provides a powerful example of this dynamic. It can be demonstrated by showing an observer the front of a simple plastic Halloween mask—say, one of Charlie Chaplin. As expected, the observer sees the face as convex—Chaplin’s features protrude outward. When the mask is rotated to show its hollow reverse side, however, Chaplin’s features still appear to protrude outward; they look as robust and convex as they did when viewed from the front.
What explains this illusion? Gregory argued—and every vision scientist now agrees—that it is due to the observer’s very powerful knowledge of faces: every face he has ever seen has been convex. Therefore, despite the visual evidence, he must perceive the hollow face as pointing outward. His implicit knowledge of faces is so powerful that he cannot defeat the illusion—even if he consciously tells himself that he is seeing a hollow face.
Consider another illusion, “Terror Subterra” by Roger Shepard. Which monster in the picture is larger?
Nearly all observers perceive the monster in the rear to be much larger. In fact, they are identical in size—hold your finger against the picture to check. Again, the role of knowledge—one’s set of assumptions about the world and its objects—is critical to the perception. But what knowledge causes us to perceive one monster as so much larger than the other?
In human experience, an object’s perceived size depends on two factors:
• Its size on the retina
• Its perceived distance
That makes for a simple formula:
Perceived size = size on the retina × perceived distance
If these monsters were the same size, the one that appears farther away should cast a smaller image on the retina. Since it doesn’t, the brain hypothesizes that the more distant monster is larger than the closer monster. And that hypothesis is so strong that the observer truly sees it that way.
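As a rough numerical sketch of how that relationship plays out (the figures below are invented purely for illustration; the text above states the relationship only in words):

% Reading the relationship above as a proportionality: perceived size
% grows with both retinal size and perceived distance.
\[
  S \propto r \times d
\]
% Invented numbers: both monsters cast the same retinal image, r = 1.
% Suppose the near monster is judged to be 2 units away and the far
% monster 6 units away.
\[
  S_{\text{near}} \propto 1 \times 2 = 2,
  \qquad
  S_{\text{far}} \propto 1 \times 6 = 6
\]
% With identical retinal images, the monster judged three times farther
% away is perceived as roughly three times larger, which is just the
% effect the illusion produces.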
But that’s not the only bit of knowledge the brain imposes on this scene. Look at the monsters’ faces. The one being chased appears terrified. The one doing the chasing appears aggressive or angry. In fact, their faces are identical. In human experience, people being chased almost always appear frightened, while people doing the chasing almost always appear aggressive or angry. Our brain imposes that knowledge on the scene and therefore “sees” what it expects to see. (Illusions like this can even be affected by the particulars of our own experience—children of abusive parents, for example, are more likely to see a neutral face as angry, even at a very early age.)
This idea—that knowledge and vision are highly related—can be demonstrated in myriad examples that do not involve visual illusions. What do you see in this picture?
Some observers see a duck; others see a rabbit. Then the perception quickly shifts—those who saw the duck now see the rabbit, and vice versa. The brain’s knowledge of these animals—its assumptions about them—causes it to form two hypotheses about the image. Since each hypothesis is equally likely, the brain continues to entertain them both, resulting in the “flipping” of vision between duck and rabbit. (Note, however, that if an observer is told beforehand that he will be viewing a picture of a duck, it is unlikely that he will first see the rabbit. In that case, the brain has been given some extra knowledge that it will bring to bear on the picture, and this will dominate what is seen.)
Recent advances in the ability to measure specific kinds of brain activity confirm that knowledge and vision are highly related. It is now thought that more than a third of the human brain is involved with vision, an indication of the magnitude of the task. Today, it is virtually impossible to find a vision scientist, researcher, or psychologist who does not agree that knowledge and vision are highly related, and that without our knowledge about the visual world our ability to understand visual scenes would fall apart.
This current understanding of vision seemed to have great implications for May’s case. If knowledge and vision are highly related, and there’s nothing wrong with May’s eye, it seemed distinctly likely that May had a knowledge problem.
To get at the nature of such a problem, we must understand how human beings acquire this knowledge in the first place.
A newborn’s eyes are flooded with visual information—colors, motion, and shapes that come from objects in the world around it. Yet newborn babies have no experience with any of these things, and few assumptions about them.
What must it be like to see things about which you have no experience or knowledge? We can scarcely imagine it—by adulthood, we have experience with nearly everything. If we do experience something completely foreign to us, we find it nearly impossible to impose a meaning or interpretation on the image.
Consider this photo. What is it?
Very few people would be able to tell that the object in the photo is a fossil. Even fewer would know that the fossil is of a swift-swimming turtle from Germany. Certain paleontologists, however, would understand it immediately. Their visual experience of this image, as a result of their knowledge, is richer and more certain. Most of us have that sort of richness and certainty only when we see a fossil to which we can attach meaning, such as the one below:
Notice how our visual experience of the first fossil feels very different from our visual experience of the second. That is because we have knowledge of fish, but not of German swift-swimming turtles, especially in fossils. We can imagine that to the infant, almost all visual experiences feel like our experience of the first fossil rather than of the second.
Here are two more objects for which we don’t have sufficient knowledge for true vision: