How We Decide
Jonah Lehrer

This is an essential aspect of decision-making. If we can't incorporate the lessons of the past into our future decisions, then we're destined to endlessly repeat our mistakes. When the ACC is surgically removed from the monkey brain, the behavior of the primate becomes erratic and ineffective. The monkeys can no longer predict rewards or make sense of their surroundings. Researchers at Oxford performed an elegant experiment that made this deficit clear. A monkey clutched a joystick that moved in two different directions: it could be either lifted or turned. At any given moment, only one of the movements would trigger a reward (a pellet of food). To make things more interesting, the scientists switched the direction that would be rewarded every twenty-five trials. If the monkey had previously gotten in the habit of lifting the joystick in order to get a food pellet, it now had to shift its strategy.

So what did the monkeys do? Animals with intact ACCs had no problem with the task. As soon as they stopped receiving rewards for lifting the joystick, they started turning it in the other direction. The problem was soon solved, and the monkeys continued to receive their pellets of food. However, monkeys that were missing their ACCs demonstrated a telling defect. When they stopped being rewarded for moving the joystick in a certain direction, they were still able (most of the time) to change direction, just like the normal monkeys. However, they were unable to persist in this successful strategy and soon went back to moving the joystick in the direction that garnered no reward. They never learned how to consistently find the food, to turn a mistake into an enduring lesson. Because these monkeys couldn't update their cellular predictions, they ended up hopelessly confused by the simple experiment.

People with a genetic mutation that reduces the number of dopamine receptors in the ACC suffer from a similar problem; just like the monkeys, they are less likely to learn from negative reinforcement. This seemingly minor deficit has powerful consequences. For example, studies have found that people carrying this mutation are significantly more likely to become addicted to drugs and alcohol. Because they have difficulty learning from their mistakes, they make the same mistakes over and over. They can't adjust their behavior even when it proves self-destructive.

The ACC has one last crucial feature, which further explains its importance: it is densely populated with a very rare type of cell known as a spindle neuron. Unlike the rest of our brain cells, which are generally short and bushy, these brain cells are long and slender. They are found only in humans and great apes, which suggests that their evolution was intertwined with higher cognition. Humans have about forty times more spindle cells than any other primate.

The strange form of spindle cells reveals their unique function: their antenna-like bodies are able to convey emotions across the entire brain. After the ACC receives input from a dopamine neuron, spindle cells use their cellular velocity—they transmit electrical signals faster than any other neuron—to make sure that the rest of the cortex is instantly saturated in that specific feeling. The consequence of this is that the minor fluctuations of a single type of neurotransmitter play a huge role in guiding our actions, telling us how we should feel about what we see. "You're probably 99.9 percent unaware of dopamine release," says Read Montague, a professor of neuroscience at Baylor College of Medicine. "But you're probably 99.9 percent driven by the information and emotions it conveys to other parts of the brain."

WE CAN NOW begin to understand the surprising wisdom of our emotions. The activity of our dopamine neurons demonstrates that feelings aren't simply reflections of hard-wired animal instincts. Those wild horses aren't acting on a whim. Instead, human emotions are rooted in the predictions of highly flexible brain cells, which are constantly adjusting their connections to reflect reality. Every time you make a mistake or encounter something new, your brain cells are busy changing themselves. Our emotions are deeply empirical.

Look, for example, at Schultz's experiment. When Schultz studied those juice-craving monkeys, he discovered that it took only a few experimental trials before the monkeys' neurons knew exactly when to expect their rewards. The neurons did this by continually incorporating the new information, turning a negative feeling into a teachable moment. If the juice didn't arrive, then the dopamine cells adjusted their expectations. Fool me once, shame on you. Fool me twice, shame on my dopamine neurons.

The same process is constantly at work in the human mind. Motion sickness is largely the result of a dopamine prediction error: there is a conflict between the type of motion being experienced—for instance, the unfamiliar pitch of a boat—and the type of motion expected (solid, unmoving ground). The result in this case is nausea and vomiting. But it doesn't take long before the dopamine neurons start to revise their models of motion; this is why seasickness is usually temporary. After a few horrible hours, the dopamine neurons fix their predictions and learn to expect the gentle rocking of the high seas.

When the dopamine system breaks down completely—when neurons are unable to revise their expectations in light of reality—mental illness can result. The roots of schizophrenia remain shrouded in mystery, but one of its causes seems to be an excess of certain types of dopamine receptors. This makes the dopamine system hyperactive and dysregulated, which means that the neurons of a schizophrenic are unable to make cogent predictions or correlate their firing with outside events. (Most antipsychotic medications work by reducing the activity of dopamine neurons.) Because schizophrenics cannot detect the patterns that actually exist, they start hallucinating false patterns. This is why schizophrenics become paranoid and experience completely unpredictable shifts in mood. Their emotions have been uncoupled from the events of the real world.

The crippling symptoms of schizophrenia serve to highlight the necessity and precision of dopamine neurons. When these neurons are working properly, they are a crucial source of wisdom. The emotional brain effortlessly figures out what's going on and how to exploit the situation for maximum gain. Every time you experience a feeling of joy or disappointment, fear or happiness, your neurons are busy rewiring themselves, constructing a theory of what sensory cues preceded the emotions. The lesson is then committed to memory, so the next time you make a decision, your brain cells are ready. They have learned how to predict what will happen next.

2

Backgammon is the oldest board game in the world. It was first played in ancient Mesopotamia, starting around 3000 B.C. The game was a popular diversion in ancient Rome, celebrated by the Persians, and banned by King Louis IX of France for encouraging illicit gambling. In the seventeenth century, Elizabethan courtiers codified the rules of backgammon, and the game has changed little since.

The same can't be said about the players of the game. One of the best backgammon players in the world is now a software program. In the early 1990s, Gerald Tesauro, a computer programmer at IBM, began developing a new kind of artificial intelligence (AI). At the time, most AI programs relied on the brute computational power of microchips. This was the approach used by Deep Blue, the powerful set of IBM mainframes that managed to defeat chess grand master Garry Kasparov in 1997. Deep Blue was capable of analyzing more than two hundred million possible chess moves per second, allowing it to consistently select the optimal chess strategy. (Kasparov's brain, on the other hand, evaluated only about five moves per second.) But all this strategic firepower consumed a lot of energy: while playing chess, Deep Blue was a fire hazard and required specialized heat-dissipating equipment so that it didn't burst into flames. Kasparov, meanwhile, barely broke a sweat. That's because the human brain is a model of efficiency: even when it's deep in thought, the cortex consumes less energy than a light bulb.

While the popular press was celebrating Deep Blue's stunning achievement—a machine had outwitted the greatest chess player in the world!—Tesauro was puzzled by its limitations. Here was a machine capable of thinking millions of times faster than its human opponent, and yet it had barely won the match. Tesauro realized that the problem with all conventional AI programs, even brilliant ones like Deep Blue's, was their rigidity. Most of Deep Blue's intelligence was derived from other chess grand masters, whose wisdom was painstakingly programmed into the machine. (IBM programmers also studied Kasparov's previous chess matches and engineered the software to exploit his recurring strategic mistakes.) But the machine itself was incapable of learning. Instead, it made decisions by predicting the probable outcomes of several million different chess moves. The move with the highest predicted "value" was what the computer ended up executing. For Deep Blue, the game of chess was just an endless series of math problems.

Of course, this sort of artificial intelligence isn't an accurate model of human cognition. Kasparov managed to compete on the same level as Deep Blue even though his mind had far less computational power. Tesauro's surprising insight was that Kasparov's neurons were effective because they had trained themselves. They had been refined by decades of experience to detect subtle spatial patterns on the chessboard. Unlike Deep Blue, which analyzed every possible move, Kasparov was able to instantly winnow his options and focus his mental energies on evaluating only the most useful strategic alternatives.

Tesauro set out to create an AI program that acted like Garry Kasparov. He chose backgammon as his paradigm and named the program TD-Gammon. (The TD stands for temporal difference.) Deep Blue had been preprogrammed with chess acumen, but Tesauro's software began with absolutely zero knowledge. At first, its backgammon moves were entirely random. It lost every match and made stupid mistakes. But the computer didn't remain a novice for long; TD-Gammon was designed to learn from its own experience. Day and night, the software played backgammon against itself, patiently learning which moves were most effective. After a few hundred thousand games of backgammon, TD-Gammon was able to defeat the best human players in the world.

How did the machine turn itself into an expert? Although the mathematical details of Tesauro's software are numbingly complex, the basic approach is simple. TD-Gammon generates a set of predictions about how the backgammon game will unfold. Unlike Deep Blue, the computer program doesn't investigate every possible permutation. Instead, it acts like Garry Kasparov and generates its predictions from its previous experiences. The software compares these predictions to what actually happens during the backgammon game. The ensuing discrepancies provide the substance of its education, and the software strives to continually decrease this "error signal." As a result, its predictions constantly increase in accuracy, which means that its strategic decisions get more and more effective and intelligent.
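
That error signal can be made concrete with a short sketch in Python. What follows is only a toy illustration of temporal-difference learning on a five-state random walk, not Tesauro's backgammon program; the states, the rewards, and the 0.1 learning rate are assumptions chosen for simplicity.

import random

# States 1..5 of a simple random walk; stepping off the right end pays 1,
# stepping off the left end pays 0. TD(0) learns, for each state, the
# probability of eventually finishing on the right.
N_STATES = 5
ALPHA = 0.1                                # learning rate (assumed)
values = {s: 0.5 for s in range(1, N_STATES + 1)}

def run_episode():
    state = 3                              # start in the middle
    while True:
        next_state = state + random.choice((-1, 1))
        if next_state == 0:                # fell off the left end
            reward, next_value = 0.0, 0.0
        elif next_state == N_STATES + 1:   # reached the right end
            reward, next_value = 1.0, 0.0
        else:
            reward, next_value = 0.0, values[next_state]
        # The "error signal": the gap between the old prediction and what
        # the next step now suggests the value should have been.
        error = reward + next_value - values[state]
        values[state] += ALPHA * error
        if next_state in (0, N_STATES + 1):
            return
        state = next_state

for _ in range(10_000):
    run_episode()
print({s: round(v, 2) for s, v in values.items()})
# Drifts toward 1/6, 2/6, 3/6, 4/6, 5/6: the predictions sharpen purely by
# shrinking prediction errors; nobody ever tells the program the answer.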

In recent years, the same software strategy has been used to solve all kinds of difficult problems, from programming banks of elevators in skyscrapers to determining the schedules of flights. "Anytime you've got a problem with a seemingly infinite number of possibilities"—the elevators and planes can be arranged in any number of sequences—"these sorts of learning programs can be a crucial guide," says Read Montague. The essential distinction between these reinforcement-learning programs and traditional approaches is that these new programs find the optimal solutions by themselves. Nobody tells the computer how to organize the elevators. Instead, it methodically learns by running trials and focusing on its errors until, after a certain number of
trials, the elevators are running as efficiently as possible. The seemingly inevitable mistakes have disappeared.

This programming method closely mirrors the activity of dopamine neurons. The brain's cells also measure the mismatch between expectation and outcome. They use their inevitable errors to improve performance; failure is eventually turned into success. Take, for example, an experiment known as the Iowa Gambling Task designed by the neuroscientists Antonio Damasio and Antoine Bechara. The game went as follows: a subject—"the player"—was given four decks of cards, two black and two red, and $2,000 of play money. Each card told the player whether he'd won or lost money. The subject was instructed to turn over a card from one of the four decks and to make as much money as possible.

But the cards weren't distributed at random. The scientists had rigged the game. Two of the decks were full of high-risk cards. These decks had bigger payouts ($100), but also contained extravagant punishments ($1,250). The other two decks, by comparison, were staid and conservative. Although they had smaller payouts ($50), they rarely punished the player. If the gambler drew only from those two decks, he would come out way ahead.
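
A quick back-of-the-envelope simulation shows why a player who stuck to the conservative decks would come out ahead. The $100 and $50 payouts and the $1,250 punishment come from the passage above; the punishment frequencies and the size of the rare punishment on the conservative decks are made-up assumptions for illustration, not the schedule Bechara and Damasio actually used.

import random

CARDS_DRAWN = 100

def draw_risky():
    # $100 payout, with the $1,250 punishment assumed to land on ~1 card in 10.
    return 100 - (1250 if random.random() < 0.10 else 0)

def draw_safe():
    # $50 payout, with an assumed modest $250 punishment on ~1 card in 20.
    return 50 - (250 if random.random() < 0.05 else 0)

risky_total = sum(draw_risky() for _ in range(CARDS_DRAWN))
safe_total = sum(draw_safe() for _ in range(CARDS_DRAWN))
print("100 cards from the risky decks:       ", risky_total)   # about -2,500 on average
print("100 cards from the conservative decks:", safe_total)    # about +3,750 on average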

At first, the card-selection process was entirely haphazard. There was no reason to favor any specific deck, and so most people sampled from each pile, searching for the most lucrative cards. On average, people had to turn over about fifty cards before they began to draw solely from the profitable decks. It took about eighty cards before the average experimental subject could explain why he or she favored those decks. Logic is slow.

But Damasio wasn't interested in logic; he was interested in emotion. While the gamblers in the experiment were playing the card game, they were hooked up to a machine that measured the electrical conductance of their skin. In general, higher levels of conductance signal nervousness and anxiety. What the scientists found was that after a player had drawn only ten cards, his hand got "nervous" when it reached for the negative decks. Although the subject still had little inkling of which card piles were the most lucrative, his emotions had developed an accurate sense of fear. The emotions knew which decks were dangerous. The subject's feelings figured out the game first.
