How We Learn
Benedict Carey
Tapping the Subconscious

Chapter Nine

Learning Without Thinking

Harnessing Perceptual Discrimination

What’s a good eye?

You probably know someone who has one, for fashion, for photography, for antiques, for seeing a baseball. All of those skills are real, and they’re special. But what are they? What’s the eye doing in any one of those examples that makes it good? What’s it reading, exactly?

Take hitting a baseball. Players with a “good eye” are those who seem to have a sixth sense for the strike zone, who are somehow able to lay off pitches that come in a little too high or low, inside or outside, and swing only at those in the zone. Players, coaches, and scientists have all broken this ability down endlessly, so we can describe some of the crucial elements. Let’s begin with the basics of hitting. A major league fastball comes in at upward of 90 mph, from 60 feet, 6 inches away. The ball arrives at the plate in roughly 4/10 of a second, or 400 milliseconds. The brain needs about two thirds of that time—250 milliseconds—to make the decision whether to swing or not. In that time it needs to read the pitch: where it’s going, how fast, whether it’s going to sink or curve or rise as it approaches (most pitchers have a variety of pitches, all of which break across different planes). Research shows that the batter himself isn’t even aware whether he’s swinging or not until the ball is about 10 feet away—and by that point, it’s too late to make major adjustments, other than to hold up (maybe). A batter with a good eye makes an instantaneous—and almost always accurate—read.
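
Those timing figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python; the 55-foot effective flight distance is an assumption, since a pitcher releases the ball a few feet in front of the rubber, which sits 60 feet 6 inches from the plate.

# Rough flight-time check for the pitch speeds discussed above.
# Assumption: ~55 ft of actual flight after the release point.

MPH_TO_FTS = 5280 / 3600  # 1 mph = ~1.467 feet per second

def flight_time_ms(speed_mph: float, distance_ft: float = 55.0) -> float:
    """Milliseconds for the ball to cover distance_ft at speed_mph."""
    return distance_ft / (speed_mph * MPH_TO_FTS) * 1000

for mph in (90, 95, 100):
    print(f"{mph} mph -> {flight_time_ms(mph):.0f} ms")
# 90 mph -> 417 ms, 95 mph -> 395 ms, 100 mph -> 375 ms

At “upward of 90 mph,” that lands right around the 400 milliseconds cited above, and subtracting the roughly 250 milliseconds the brain needs to read the pitch leaves only about 150 milliseconds to get the bat around.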

What’s this snap judgment based on? Velocity is one variable, of course. The (trained) brain can make a rough estimate of that using the tiny change in the ball’s image over that first 250 milliseconds; stereoscopic vision evolved to compute, at incredible speed, all sorts of trajectories and certainly one coming toward our body. Still, how does the eye account for the spin of the ball, which alters the trajectory of the pitch? Hitters with a good eye have trouble describing that in any detail. Some talk about seeing a red dot, signaling a breaking ball, or a grayish blur, for a fastball; they say they focus only on the little patch in their field of vision where the pitcher’s hand releases the ball, which helps them judge its probable trajectory. Yet that release point can vary, too. “They may get a snapshot of the ball, plus something about the pitcher’s body language,” Steven Sloman, a cognitive scientist at Brown University, told me. “But we don’t entirely understand it.”

A batting coach can tinker with a player’s swing and mechanics, but no one can tell him how to see pitches better. That’s one reason major league baseball players get paid like major league baseball players. And it’s why we think of their visual acuity more as a gift than an expertise. We tell ourselves it’s all about reflexes, all in the fast-twitch fibers and brain synapses. They’re “naturals.” We make a clear distinction between this kind of ability and expertise of the academic kind. Expertise is a matter of learning—of accumulating knowledge, of studying and careful thinking, of creating. It’s built, not born. The culture itself makes the same distinction, too, between gifted athletes and productive scholars. Yet this distinction is also flawed in a fundamental way. And it blinds us to an aspect of learning that even scientists don’t yet entirely understand.

To flesh out this dimension and appreciate its importance, let’s compare baseball stars to an equally exotic group of competitors, known more for their intellectual prowess than their ability to hit line drives: chess players. On a good day, a chess grand master can defeat the world’s most advanced supercomputer, and this is no small thing. Every second, the computer can consider more than 200 million possible moves, and draw on a vast array of strategies developed by leading scientists and players. By contrast, a human player—even a grand master—considers about four move sequences per turn in any depth, playing out the likely series of parries and countermoves to follow. That’s four per turn, not per second. Depending on the amount of time allotted for each turn, the computer might search one billion more possibilities than its human opponent. And still, the grand master often wins. How?
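
The scale of that gap is worth making concrete. Below is a rough sketch with illustrative numbers; the five seconds of think time and the thirty positions examined per human line are assumptions, chosen only to show the order of magnitude.

# Order-of-magnitude comparison of the search volumes described above.
# Assumptions: 5 seconds spent on the turn, ~30 positions per human line.

computer_rate = 200_000_000   # positions per second, per the text
think_time_s = 5              # assumed seconds spent on one turn
human_lines = 4               # move sequences a grand master explores per turn
positions_per_line = 30       # assumed positions examined along each line

computer_positions = computer_rate * think_time_s    # 1,000,000,000
human_positions = human_lines * positions_per_line   # 120

print(f"computer: {computer_positions:,}")
print(f"human:    {human_positions:,}")
print(f"gap:      {computer_positions - human_positions:,}")

At those numbers the machine examines roughly a billion more positions on a single turn, which is the point of the comparison: the grand master wins on selection, not volume.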

The answer is not obvious. In a series of studies in the 1960s, a Dutch psychologist who was also himself a chess master, Adriaan de Groot, compared masters to novices and found no differences in the number of moves considered; the depth of each search (the series of countermoves played out mentally); or the way players thought about the pieces (for instance, seeing the rook primarily as an attacking piece in some positions, and as a defensive one in others). If anything, the masters searched fewer moves than the novices. But they could do one thing the novices could not: memorize a chess position after seeing the board for less than five seconds. One look, and they could reconstruct the arrangement of the pieces precisely, as if they’d taken a mental snapshot.

In a follow-up study, a pair of researchers at Carnegie Mellon University—William G. Chase and Herbert A. Simon—showed that this skill had nothing to do with the capacity of the masters’ memory. Their short-term recall of things like numbers was no better than anyone else’s. Yet they saw the chessboard in more meaningful chunks than the novices did.
“The superior performance of stronger players derives from the ability of those players to encode the position into larger perceptual chunks, each consisting of a familiar configuration of pieces,” Chase and Simon concluded.
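
Chase and Simon’s conclusion lends itself to a toy sketch: stored as familiar configurations, a position costs far fewer short-term-memory items than it does piece by piece. In the Python sketch below, the chunk names, piece placements, and greedy matching are all invented for illustration; none of it comes from the study itself.

# Toy model of perceptual chunking: the same 7-piece position compresses
# to 3 memory items once familiar configurations are recognized.
# All patterns and squares below are invented for the example.

position = {"Ke1", "Rh1", "Pf2", "Pg2", "Ph2", "Nf3", "Pe4"}

# "Familiar configurations" a strong player has seen thousands of times.
known_chunks = {
    "castled-kingside": {"Ke1", "Rh1", "Pf2", "Pg2", "Ph2"},
    "fianchetto-knight": {"Nf3", "Pg2"},
}

def encode(pos: set[str]) -> list[str]:
    """Greedily cover the position with known chunks; leftovers stay as pieces."""
    remaining, items = set(pos), []
    for name, chunk in known_chunks.items():
        if chunk <= remaining:      # whole configuration present?
            items.append(name)
            remaining -= chunk
    return items + sorted(remaining)

print(encode(position))         # ['castled-kingside', 'Nf3', 'Pe4']
print(len(position), "->", len(encode(position)))   # 7 -> 3

A novice holding seven separate pieces in mind is near the limit of short-term memory; an expert holding three familiar items has room to spare, which is one way to read the Chase and Simon result.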

Grand masters have a good eye, too, just like baseball players, and they’re no more able to describe it. (If they could, it would quickly be programmed into the computer, and machines would rule the game.) It’s clear, though, that both ballplayers and grand masters are doing more than merely seeing or doing some rough analysis. Their eyes, and the visual systems in their brains, are extracting the most meaningful set of clues from a vast visual tapestry, and doing so instantaneously. I think of this ability in terms of infrared photography: You see hot spots of information, live information, and everything else is dark. All experts—in arts, sciences, IT, mechanics, baseball, chess, what have you—eventually develop this kind of infrared lens to some extent. Like chess and baseball prodigies, they do it through career-long experience, making mistakes, building intuition. The rest of us, however, don’t have a lifetime to invest in Chemistry 101 or music class. We’ll take the good eye—but we need to get it on the cheap, quick and dirty.

• • •

When I was a kid, everyone’s notebooks and textbooks, every margin of every sheet of lined paper in sight, were covered with doodles: graffiti letters, caricatures, signatures, band logos, mazes, 3-D cubes. Everyone doodled, sometimes all class long, and the most common doodle of all was the squiggle:

Those squiggles have a snowflake quality; they all look the same and yet each has its own identity when you think about it. Not that many people have. The common squiggle is less interesting than any nonsense syllable, which at least contains meaningful letters. It’s virtually invisible, and in the late 1940s one young researcher recognized that quality as special. In some moment of playful or deep thinking, she decided that the humble squiggle was just the right tool to test a big idea.

Eleanor Gibson came of age as a researcher in the middle of the twentieth century, during what some call the stimulus-response, or S-R, era of psychology. Psychologists at the time were under the influence of behaviorism, which viewed learning as a pairing of a stimulus and response: the ringing of a bell before mealtime and salivation, in Ivan Pavlov’s famous experiment. Their theories were rooted in work with animals, and included so-called operant conditioning, which rewarded a correct behavior (navigating a maze) with a treat (a piece of cheese) and discouraged mistakes with mild electrical shocks. This S-R conception of learning viewed the sights, sounds, and smells streaming through the senses as not particularly meaningful on their own. The brain provided that meaning by seeing connections. Most of us learn early in life, for instance, that making eye contact brings social approval, and screaming less so. We learn that when the family dog barks one way, it’s registering excitement; another way, it senses danger. In the S-R world, learning was a matter of making those associations—between senses and behaviors, causes and effects.

Gibson was not a member of the S-R fraternity. After graduating from Smith College in 1931, she entered graduate studies at Yale University hoping to work under the legendary primatologist Robert Yerkes. Yerkes refused. “He wanted no women in his lab and made it extremely clear to me that I wasn’t wanted there,” Gibson said years later. She eventually found a place with Clark Hull, an influential behaviorist known for his work with rats in mazes, where she sharpened her grasp of experimental methods—and became convinced that there wasn’t much more left to learn about conditioned reflexes. Hull and his contemporaries had done some landmark experiments, but the S-R paradigm itself limited the types of questions a researcher could ask. If you were studying only stimuli and responses, that’s all you’d see. The field, Gibson believed, was completely overlooking something fundamental: discrimination, how the brain learns to detect minute differences in sights, sounds, or textures. Before linking different names to distinct people, for example, children have to be able to distinguish between the sounds of those names, between Ron and Don, Fluffy and Scruffy. That’s one of the first steps we take in making sense of the world. In hindsight, this seems an obvious point. Yet it took years for her to get anyone to listen.

In 1948, her husband—himself a prominent psychologist at Smith—got an offer from Cornell University, and the couple moved to Ithaca, New York. Gibson soon got the opportunity to study learning in young children, and that’s when she saw that her gut feeling about discrimination learning was correct. In some of her early studies at Cornell, she found that children between the ages of three and seven could learn to distinguish standard letters—like a “D” or a “V”—from misshapen ones, like:

These kids had no idea what the letters represented; they weren’t making associations between a stimulus and response. Still, they quickly developed a knack for detecting subtle differences in the figures they studied. And it was this work that led to the now classic doodle experiment, which Gibson conducted with her husband in 1949. The Gibsons called the doodles “nonsense scribbles,” and the purpose of the study was to test how quickly people could discriminate between similar ones. They brought thirty-two adults and children into their lab, one at a time, and showed each a single doodle on a flashcard:
