Misbehaving: The Making of Behavioral Economics

To an Econ these two policies are identical. If the credit card price is $1.03 and the cash price is $1, it should not matter whether you call the three-cent difference a discount or a surcharge. Nevertheless, the credit card industry rightly had a strong preference for the discount. Many years later Kahneman and Tversky would call this distinction “framing,” but marketers already had a gut instinct that framing mattered. Paying a surcharge is out-of-pocket, whereas not receiving a discount is a “mere” opportunity cost.

I called this phenomenon the “endowment effect” because, in economists’ lingo, the stuff you own is part of your endowment, and I had stumbled upon a finding that suggested people valued things that were already part of their endowment more highly than things that could be part of their endowment, that were available but not yet owned.

The endowment effect has a pronounced influence on behavior for those considering attending special concerts and sporting events. Often the retail price for a given ticket is well below the market price. Someone lucky enough to have grabbed a ticket, either by waiting in line or by being quickest to click on a website, now has a decision to make: go to the event or sell the ticket? In many parts of the world there is now a simple, legal market for tickets on websites such as Stubhub.com, such that ticket-holders no longer have to stand outside a venue and hawk the tickets in order to realize the windfall gain they received when they bought a highly valued item.

Few people other than economists think about this decision correctly. A nice illustration of this involves economist Dean Karlan, now of Yale University. Dean’s time in Chicago—he was an MBA student then—coincided with Michael Jordan’s reign as the king of professional basketball. Jordan’s Chicago Bulls won six championships while he was on the team. The year in question, the Bulls were playing the Washington Wizards in the first round of the playoffs. Although the Bulls were heavily favored to win, tickets were in high demand in part because fans knew seats would be even more expensive later in the playoffs.

Dean had a college buddy who worked for the Wizards and gave Dean two tickets. Dean also had a friend, a graduate student in divinity school, who shared the same Wizards connection and had also received a pair of free tickets. Both of them faced the usual financial struggles associated with being a graduate student, although Dean had better long-term financial prospects: MBAs tend to make more money than graduates of divinity school.

Both Dean and his friend found the decision of whether to sell or attend the game to be an easy one. The divinity school student invited someone to go to the game with him and enjoyed himself. Dean, meanwhile, got busy scoping out which basketball-loving professors also had lucrative consulting practices. He sold his tickets for several hundred dollars each. Both Dean and his friend thought the other’s behavior was nuts. Dean did not understand how his friend could possibly think he could afford to go to the game. His friend could not understand why Dean didn’t realize the tickets were free.

That is the endowment effect. I knew it was real, but I had no idea what to do with it.

________________

*
   Typical Schelling thought experiment: suppose there were some medical procedure that would provide some modest health benefit but is extremely painful. However, the procedure is administered with a drug that does not prevent the pain but instead erases all memory of the event. Would you be willing to undertake this procedure?


†
   The question that Zeckhauser was interested in is: how does Aidan’s willingness to pay depend on the number of bullets in the gun? If all the chambers are full, Aidan should pay all he has (and can borrow) to remove even one bullet. But what if there are only two bullets loaded? What will he pay to remove one of them? And would it be more or less than what he would pay to remove the last bullet?
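   For readers who want to see the expected-utility answer, here is a worked sketch (mine, not part of the original footnote). It assumes the standard setup: u(w) is the utility of surviving with wealth w, the utility of death is zero regardless of wealth (no bequest motive), x is the most Aidan would pay to remove the last bullet, and y the most he would pay to remove one of two.

```latex
% One bullet: paying x trades a 5/6 chance of survival for certain survival.
u(w - x) = \tfrac{5}{6}\, u(w) \approx 0.833\, u(w)

% Two bullets: paying y raises the survival chance from 4/6 to 5/6.
\tfrac{5}{6}\, u(w - y) = \tfrac{4}{6}\, u(w)
\quad\Longrightarrow\quad
u(w - y) = 0.8\, u(w)
```

   Since u is increasing, u(w - y) = 0.8 u(w) < 0.833 u(w) = u(w - x) implies y > x: a consistent expected-utility maximizer pays more to remove one of two bullets than to remove the last one, the reverse of the common intuition that eliminating the risk entirely is worth the biggest premium.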


‡
   Technically, the answers can differ by what economists call an income or wealth effect. You are worse off in version A than version B because if you do nothing in version B you do not get exposed to the disease. But this effect cannot explain differences of the magnitudes that I observed, and other surveys in which I would hypothetically tell people in version A that they had been given (say) $50,000 did not eliminate the disparity.

§
   Rosett did not seem much troubled by this behavior. I subsequently published an article that included this anecdote, with Rosett described as Mr. R. I sent Rosett a copy of the article when it came out and received a two-word reply: “Ah fame!”


‖
   Of course, the divinity school students might make up for this disparity in the very, very long run.

3

The List

The discrepancy between buying and selling prices got my mind wandering. What else do people do that is inconsistent with the economists’ model of rational choice? Once I started paying attention, so many examples cropped up that I started a list on the blackboard in my office. Here are a few that describe the behavior of some of my friends:

•  Jeffrey and I somehow get two free tickets to a professional basketball game in Buffalo, normally an hour and a half drive from where we live in Rochester. The day of the game there is a big snowstorm. We decide not to go, but Jeffrey remarks that, had we bought the (expensive) tickets, we would have braved the blizzard and attempted to drive to the game.

•  Stanley mows his lawn every weekend and it gives him terrible hay fever. I ask Stan why he doesn’t hire a kid to mow his lawn. Stan says he doesn’t want to pay the $10. I ask Stan whether he would mow his neighbor’s lawn for $20 and Stan says no, of course not.

•  Linnea is shopping for a clock radio. She finds a model she likes at what her research has suggested is a good price, $45. As she is about to buy it, the clerk at the store mentions that the same radio is on sale for $35 at a new branch of the store, ten minutes away, that is holding a grand opening sale. Does she drive to the other store to make the purchase?

On a separate shopping trip, Linnea is shopping for a television set and finds one at the good price of $495. Again the clerk informs her that the same model is on sale at another store ten minutes away for $485. Same question . . . but likely different answer.

•  Lee’s wife gives him an expensive cashmere sweater for Christmas. He had seen the sweater in the store and decided that it was too big of an indulgence to feel good about buying it. He is nevertheless delighted with the gift. Lee and his wife pool all their financial assets; neither has any separate source of money.

•  Some friends come over for dinner. We are having drinks and waiting for something roasting in the oven to be finished so we can sit down to eat. I bring out a large bowl of cashew nuts for us to nibble on. We eat half the bowl in five minutes, and our appetite for dinner is in danger. I remove the bowl and hide it in the kitchen. Everyone is happy.

Each example illustrates a behavior that is inconsistent with economic theory. Jeffrey is ignoring the economists’ dictum to “ignore sunk costs,” meaning money that has already been spent. The price we paid for the tickets should not affect our choice about whether to go to the game. Stanley is violating the precept that buying and selling prices should be about the same. If Linnea spends ten minutes to save $10 on a small purchase but not a large one, she is not valuing time consistently. Lee feels better about spending family resources on an expensive sweater if his wife made the decision, though the sweater was no cheaper. And removing the cashews takes away the option to eat some more; to Econs, more choices are always preferred to fewer.

I spent a fair amount of time staring at the List and adding new items, but I did not know what to do with it. “Dumb stuff people do” is not a satisfactory title for an academic paper. Then I caught a break. In the summer of 1976 Sherwin and I went to a conference near Monterey, California. We were there to talk about the value of a life. What made the conference special for me were two psychologists who attended: Baruch Fischhoff and Paul Slovic. They both studied how people make decisions. It was like discovering a new species. I had never met anyone in academia with their backgrounds.

I ended up giving Fischhoff a ride to the airport. As we drove, Fischhoff told me he had completed a PhD in psychology at the Hebrew University in Israel. There he had worked with two guys whose names I had never heard: Daniel Kahneman and Amos Tversky. Baruch told me about his now-famous thesis on “hindsight bias.” The finding is that, after the fact, we think that we always knew the outcome was likely, if not a foregone conclusion. After the virtually unknown African American senator Barack Obama defeated the heavily favored Hillary Clinton for the Democratic Party presidential nomination, many people thought they had seen it coming. They hadn’t. They were just misremembering.

I found the concept of hindsight bias fascinating, and incredibly important to management. One of the toughest problems a CEO faces is convincing managers that they should take on risky projects if the expected gains are high enough. The managers worry, for good reason, that if the project works out badly, the manager who championed the project will be blamed whether or not the decision was a good one at the time. Hindsight bias greatly exacerbates this problem, because the CEO will wrongly think that whatever was the cause of the failure, it should have been anticipated in advance. And, with the benefit of hindsight, he always knew this project was a poor risk. What makes the bias particularly pernicious is that we all recognize this bias in others but not in ourselves.

Baruch suggested that I might enjoy reading some of the work of his advisors. The next day, when I was back in my office in Rochester, I headed over to the library. Having spent all my time in the economics section, I found myself in a new part of the library. I started with the duo’s summary paper published in Science: “Judgment Under Uncertainty: Heuristics and Biases.” At the time I was not sure what a heuristic was, but it turns out to be a fancy word for a rule of thumb. As I read, my heart started pounding the way it might during the final minutes of a close game. The paper took me thirty minutes to read from start to finish, but my life had changed forever.

The thesis of the paper was simple and elegant. Humans have limited time and brainpower. As a result, they use simple rules of thumb—heuristics—to help them make judgments. An example would be “availability.” Suppose I ask you if Dhruv is a common name. If you are from most countries in the world you would likely say no, but it happens to be a very common name in India, a country with a lot of people, so on a global scale it is in fact a rather common name. In guessing how frequent something is, we tend to ask ourselves how often we can think of instances of that type. It’s a fine rule of thumb, and in the community in which you live, the ease with which you can recall meeting people with a given name will offer a good clue as to its actual frequency. But the rule will fail in cases in which the number of instances of some event is not highly correlated with the ease with which you can summon up examples (such as the name Dhruv). This is an illustration of the big idea of this article, one that made my hands shake as I read: using these heuristics causes people to make predictable errors. Thus the title of the paper: heuristics and biases. The concept of predictable biases offered a framework for my heretofore helter-skelter set of ideas.

A forerunner of Kahneman and Tversky was Herbert Simon, a polymath academic who spent most of his career at Carnegie Mellon University. Simon was well known in nearly every field of social science, including economics, political science, artificial intelligence, and organizational theory, but most germane to this book, he wrote about what he called “bounded rationality” well before Kahneman and Tversky came along. In saying that people have bounded rationality, Simon meant that they lack the cognitive ability to solve complex problems, which is obviously true. Yet, although he received a Nobel Prize in economics, unfortunately I think it is fair to say that he had little impact on the economics profession.*

I believe many economists ignored Simon because it was too easy to brush aside bounded rationality as a “true but unimportant” concept. Economists were fine with the idea that their models were imprecise and that the predictions of those models would contain error. In the statistical models used by economists, this is handled simply by adding what is called an “error” term to the equation. Suppose you try to predict the height that a child will reach at adulthood using the height of both parents as predictors. This model will do a decent job since tall parents tend to have tall children, but the model will not be perfectly accurate, which is what the error term is meant to capture. And as long as the errors are random—that is, the model’s predictions are too high or too low with equal frequency—then all is well. The errors cancel each other out. This was economists’ reasoning to justify why the errors produced by bounded rationality could safely be ignored. Back to the fully rational model!

Kahneman and Tversky were waving a big red flag that said these errors were not random. Ask people whether there are more gun deaths caused by homicide or suicide in the U.S., and most will guess homicide, but in fact there are almost twice as many gun deaths by suicide as by homicide.

This is a predictable error. Even across many people, the errors will not average out to zero. Although I did not appreciate it fully at the time, Kahneman and Tversky’s insights had inched me forward so that I was just one step away from doing something serious with my list. Each of the items on the List was an example of a systematic bias.
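To make the distinction concrete, here is a minimal simulation sketch (my illustration, not Thaler’s; the heights, noise levels, and the three-centimeter bias are invented numbers) using the child-height example: a model whose errors are merely random sees them cancel in the aggregate, while a model with a systematic bias does not.

```python
import random

random.seed(0)

def mean_prediction_error(bias_cm, n=100_000):
    """Average (actual - predicted) adult height over n simulated children."""
    total_error = 0.0
    for _ in range(n):
        midparent = random.gauss(170, 7)   # average of parents' heights, cm
        # Actual height = midparent height + random noise + any systematic bias.
        actual = midparent + random.gauss(0, 5) + bias_cm
        predicted = midparent              # the simple model's guess
        total_error += actual - predicted
    return total_error / n

# Random errors: individual predictions miss, but the misses cancel out.
print(f"mean error, random only:     {mean_prediction_error(bias_cm=0.0):+.3f} cm")

# Systematic bias: every prediction runs ~3 cm low, and averaging can't fix it.
print(f"mean error, systematic bias: {mean_prediction_error(bias_cm=3.0):+.3f} cm")
```

The first number hovers near zero while the second sits near three centimeters, which is the point Kahneman and Tversky were making: errors produced by heuristics have a nonzero mean, no matter how many judgments you average.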

The items on the List had another noteworthy feature. In every case, economic theory had a highly specific prediction about some key factor—such as the presence of the cashews or the amount paid for the basketball game tickets—that the theory said should not influence decisions. They were all supposedly irrelevant factors, or SIFs. Much subsequent work in behavioral economics has been devoted to showing which SIFs are in fact highly relevant in predicting behavior, often by taking advantage of the systematic biases suggested in Tversky and Kahneman’s 1974 paper.

By now it’s a long list, far surpassing what was written on my blackboard all those years ago.
