Priceless: The Myth of Fair Value (and How to Take Advantage of It)

by William Poundstone

In their 1998 article, “Shared Outrage and Erratic Awards: The Psychology of Punitive Damages,” Kahneman, Schkade, and Sunstein describe their “outrage theory” of jury awards. In effect, they say, juries are psychophysics experiments in which the jurors are rating the outrage they feel at the defendant’s actions. The problem is that they are forced to translate outrage into dollars, a magnitude scale with no standard of comparison. “The unpredictability of raw dollar awards,” write the authors, “is produced primarily by large (and possibly meaningless) individual differences in the use of the dollar scale.”

Citing the work of S. S. Stevens (an authority previously unknown to legal scholars), they show that jury awards have many of the features of magnitude scales. The error or “noise” in psychophysical estimates rises in proportion to the size of the estimate itself. This is true whether you’re looking at the repeated estimates of one subject or comparing the estimates of different people. With juries, this would mean that the largest jury awards are likely to be the most off the mark. Furthermore, juries are small samples. Twelve people is too few to sample public opinion with any degree of accuracy. This leads to anomalously high awards (and also to ridiculously low awards, though they rarely get any press).
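A quick way to see why magnitude scales misbehave is to simulate one. The following sketch is a toy model, not the authors’ analysis: every simulated juror shares the same median sense of the case ($1 million), but each applies the dollar scale with a lognormal spread, so the absolute error grows with the size of the estimate, just as Stevens observed.

```python
import random
import statistics

random.seed(1)

def juror_award(median_dollars, spread=1.5):
    """One juror's dollar award: lognormal noise around a shared median.

    Multiplicative noise means the absolute error rises in proportion
    to the size of the estimate, the signature of a magnitude scale.
    """
    return median_dollars * random.lognormvariate(0, spread)

# A thousand simulated twelve-person "juries", all judging the same case.
jury_means = [
    statistics.mean(juror_award(1_000_000) for _ in range(12))
    for _ in range(1_000)
]

print(f"lowest jury average:  ${min(jury_means):>13,.0f}")
print(f"median jury average:  ${statistics.median(jury_means):>13,.0f}")
print(f"highest jury average: ${max(jury_means):>13,.0f}")
```

Although every simulated juror is drawn from the same distribution, the twelve-person averages routinely differ by an order of magnitude or more: a small sample plus a noisy magnitude scale is enough, on its own, to produce anomalous awards.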

The experiment involved 899 residents of metropolitan Austin, Texas, which was then Schkade’s home base at the University of Texas. The participants were recruited from the voter rolls, the same population that would be called for jury duty. They met in a downtown hotel and read descriptions of hypothetical lawsuits in which a wronged individual was suing a corporation. In each scenario, the corporate defendant had been found guilty and was liable for $200,000 in compensatory damages. The participants’ role was to set punitive damages.

One set of participants did this by naming a dollar amount. Another group was asked only to rate the defendant’s actions on a scale of “outrage.” This was a category scale ranging from 0 (“Completely Acceptable”) to 6 (“Absolutely Outrageous”).

Still another group was asked to rate the degree of punishment justified, from 0 (“No Punishment”) to 6 (“Extremely Severe Punishment”).

In each case, the mock jurors filled out their questionnaires alone, without conferring with anyone else. Nevertheless, there was a strong correlation between responses on the two category scales of outrage and punishment. But the dollar awards, the magnitude scale, were all over the map. This is what you’d expect from psychophysics.

The tale of poor little “Joan” got the highest average damage award in dollars. This was absurd for several reasons. It didn’t represent a consensus. Though $22 million was the mean award, the median amount was only $1 million. Half the participants thought the damages should be a million or less. There were even a few jurors (2.8 percent) who thought the award should be zero.

Do these disparate dollar amounts indicate a split jury? Not so much as you’d think. Looking at the category scale ratings, you find a decent consensus. Jurors rated the drug company’s actions an average of 4.19 out of 6 on the outrage scale and 4.65 out of 6 on the punishment scale. Responses were scattered in a rough bell curve around the means.

The consensus fell apart only when jurors had to name a dollar amount. Everyone did this differently. You could have two people in complete agreement that the case merits “severe punishment.” To one, severe punishment means $100,000; to another, it’s $100 million. The average for Joan was high because of a few high rollers who awarded astronomical sums. Their valuations had an outsized impact when the numbers were averaged.
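The arithmetic of that skew is easy to reproduce. In the sketch below the individual figures are invented; only the pattern, a median near $1 million pulled up to a mean in the tens of millions by a couple of high rollers, follows the study:

```python
import statistics

# Hypothetical punitive awards from twelve mock jurors, in dollars.
# Most cluster around $1 million; two "high rollers" go astronomical.
awards = [
    0, 250_000, 500_000, 750_000, 1_000_000, 1_000_000,
    1_000_000, 1_500_000, 2_000_000, 5_000_000,
    60_000_000, 150_000_000,
]

print(f"median: ${statistics.median(awards):,.0f}")  # $1,000,000
print(f"mean:   ${statistics.mean(awards):,.0f}")    # about $18.6 million
```

Two jurors out of twelve are enough to move the mean from $1 million to nearly $19 million. The mean award is an artifact of the longest tail, not a summary of what the group believes.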

Now, of course, real juries don’t average each juror’s separate figure. They debate the amount among themselves and try to talk reason into outliers (as was reported to have happened with the Liebeck v. McDonald’s jury). Nevertheless, there have been studies showing that deliberating groups, and juries in particular, have no better judgment than the individuals making them up. “Wisdom of crowds” effects work best when everyone makes an independent judgment. Juries may even magnify the biases of their members. This could happen when the first juror to speak names an outrageously high number. “The unpredictability and characteristic skewness of jury dollar awards is readily replicated under laboratory conditions,” the research team wrote. “Under these circumstances, we expect judgments to be highly labile, and therefore susceptible to any anchors that may be provided in the course of the trial or in jury deliberations.”

 

The $22 million average award for Joan was way out of line with the dollar amounts for other scenarios. The best proof of that is an alternate version of the Joan scenario that was tested. Some of the jurors read a description in which Joan’s overdose permanently weakened her respiratory system, “which will make her more susceptible to breathing-related diseases such as asthma and emphysema for the rest of her life.” These jurors gave an average award of $17.9 million—less than in the scenario where she’s just afraid of pills. This doesn’t mean that anyone actually thought permanent respiratory damage was less serious. No juror saw both versions of the story; it was a different randomly chosen group of Metro Austin voters each time (just as would be the case with a real jury). Apparently, the group given the less serious scenario happened to have a few more high rollers.

Again, category scale ratings were more consistent. The permanent-respiratory-damage version of Joan’s tale got higher outrage and punishment scores than the afraid-of-pills version, just as logic would demand. These judgments scarcely varied with income, age, or ethnic group. (Women were somewhat harsher than men in their punishment scale ratings.) The researchers concluded that punishment scale ratings “rest on a bedrock of moral intuitions that are broadly shared in society.”

And dollar amounts don’t. The root of the crazy-jury-award problem is that there is no consensus on how to convert outrage into dollars.

 

Kahneman, Schkade, and Sunstein used these empirical findings to tackle some philosophical issues. Justice requires consistency, they wrote. Identical crimes deserve identical punishments. In practice, however, every situation is different. That’s why we need juries to ensure that punishments accord with community sentiments.

The article sketches several possible reforms. Most involve having jurors use a category scale rather than a dollar scale to set damages. They would rate the degree of punishment, not the dollar amount. Then a “conversion function” would translate the punishment rating into dollars. This conversion function could be set by a judge or a legislature, for instance. A more democratic idea is to let the people decide. Judicial districts, or the nation as a whole, could do experiments much like that done in Austin, to determine just how the public thinks punitive intentions ought to translate into dollar amounts. The empirically derived conversion function would then be used in setting damage awards. The experiment could be repeated every so many years to make sure that the function remained in sync with the public’s thinking. As Kahneman, Schkade, and Sunstein wrote, “Many new possibilities are opened by raising the question ‘How can we obtain the best estimate of community sentiment?’ ” It’s something that the present system doesn’t even ask.
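The article stops short of specifying a formula, but a conversion function of the kind described could be as simple as a lookup table of survey medians with interpolation in between. The sketch below is one hypothetical implementation; the dollar anchors are invented placeholders standing in for empirically surveyed values.

```python
# Hypothetical conversion function: punishment rating (0-6) -> dollars.
# The anchor values are invented placeholders; under the proposed reform
# they would be derived from community surveys (medians rather than
# means, to blunt the influence of high rollers).
RATING_TO_DOLLARS = {
    0: 0,             # "No Punishment"
    1: 50_000,
    2: 200_000,
    3: 750_000,
    4: 2_000_000,
    5: 8_000_000,
    6: 25_000_000,    # "Extremely Severe Punishment"
}

def punitive_damages(rating: float) -> float:
    """Translate a jury's 0-6 punishment rating into a dollar award,
    interpolating linearly between the surveyed anchor points."""
    if not 0 <= rating <= 6:
        raise ValueError("rating must be on the 0-6 category scale")
    lo, hi = int(rating), min(int(rating) + 1, 6)
    frac = rating - lo
    return RATING_TO_DOLLARS[lo] + frac * (RATING_TO_DOLLARS[hi] - RATING_TO_DOLLARS[lo])

print(f"${punitive_damages(4.65):,.0f}")  # the mean punishment rating in Joan's case
```

Under a scheme like this, jurors supply only the rating; the contested step, pricing the outrage, is standardized across cases and juries.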

Fifty-six
Honesty Box

Eric Johnson is a boyishly enthusiastic Columbia Business School professor, old enough to have taken a Ph.D. under Herbert Simon and to have collaborated with Amos Tversky. One of Johnson’s grad students, Naomi Mandel, was reading about priming and wondered whether it would work with a Web page. “I said it was a very cute idea,” Johnson recalled, adding, “It will never work.” Mandel did some pilot studies anyway. “We just kept doing it, it kept working,” Johnson said. “I never expected the data to be that clean, the effect to be that powerful.”

Mandel and Johnson’s experiment, published in the Journal of Consumer Research, has already made a stir in the marketing and Web design communities. The Internet has long been promoted as a level playing field for shoppers. No longer must the consumer accept the prices of the few nearby bricks-and-mortar stores. The Web buyer can comparison-shop the wide world, free of the manipulation of high-pressure sales tactics . . . Well, scratch that last part. Mandel and Johnson found that manipulation could be as simple as a line of HTML code.

Seventy-six undergraduates participated in what they were told was a test of online shopping. Each visited two (bogus) websites, one offering sofas and the other cars. Using the information on the site, they were to choose between two models in each product category. Each posed the familiar trade-off of price versus quality, and the shoppers had to determine which was more important.

The experiment’s one variable was the background image of each site’s home page. Some visitors to the sofa site saw a wallpaper design of pennies on a green background (suggesting money). Others saw a background of fluffy clouds (suggesting comfort). The car site had either green dollar signs or red and orange flames.

Incredibly, the cheap car’s market share rose from 50 percent (with the flames wallpaper) to 66 percent (with the dollar signs). The cheap sofa’s share surged from 39 percent (clouds) to 56 percent (pennies).

“It is important to note that our priming manipulation was not subliminal,” Mandel and Johnson wrote. “All of our subjects could plainly see the background on the first page, and many recalled the wallpaper when asked.” But when asked whether the wallpaper could have affected their decision, 86 percent said no. “This lack of awareness,” Mandel and Johnson wrote, “suggests that . . . electronic environments may present significant challenges to consumers.”

A second, expanded experiment involved 385 Internet users who had agreed to participate in a survey. The participants were adults from across the United States whose average age and income approximated those of the Internet population. A questionnaire gauged how much experience each user had in buying cars or sofas. This time the website kept track of how much time was spent on each page. The priming effect showed up clearly in the novice buyers’ browsing history. When primed with money images, they spent more time comparing prices.

The expert buyers’ browsing behavior was not so influenced by the wallpaper images. Their choices, however, were. Mandel and Johnson suspect that seasoned consumers find it easier to judge which sofa is softer or which car is cheaper. The priming affects the facts that experts retrieve from memory. Novices have to construct a similar level of competence from HTML pages. The end result was about the same. The background images could nudge shoppers from a “price matters” to a “quality matters” mind-set.

Already marketers are starting to use the science. Johnson is now helping a major German automaker—he’s not allowed to say which one—redesign its website. These applications raise ethical questions transcending the age-old ones of advertising. Our ethics, no less than our economics, has been rendered partly obsolete by decision research. For the most part, we still subscribe to the idea that people have a fixed set of values. Anything that covertly changes those values (a “hidden persuader”) is judged to be a violation of personal freedom. The reality is that what consumers want is often constructed between mouse clicks. All sorts of details of context exert measurable statistical effects. No consumer wants to feel “manipulated.” But to some degree, that is like a fish not wanting to feel wet.

Consider this: Mandel and Johnson’s experiment included a control group of subjects who saw neutral versions of the websites with no background images at all. Their choices were not much different from those of subjects who saw the money backgrounds. This raises the possibility that American consumers focus on price by default. It takes a “manipulation” to get them to pay attention to anything else.

 

Our bustling, profit-obsessed society rarely grants leisure to ponder “Just how important is money?” That doesn’t make the question go away; it just relegates it to the realm of the unconscious and automatic. In a 2004 experiment at Stanford, Christian Wheeler and colleagues had volunteers perform a “visual acuity test” before playing the ultimatum game. The vision test consisted of sorting photographs by size. It was simply a pretext to show the subjects some photographs without arousing suspicions. One group saw pictures relating to business (a boardroom table, a dress suit, a briefcase), and another saw images with no connection to business or money (a kite, a whale, an electrical outlet). This made a difference in how they subsequently played the ultimatum game. Proposers seeing the business pictures offered 14 percent less to responders than the control group’s proposers did. The players seeing kites or whales were more inclined to offer fifty-fifty splits rather than to shave a few pennies off. “These are pretty big effects with pretty minor manipulations,” Wheeler said. “People are always trying to figure out how to act in any given situation, and they look to external cues to guide their behavior particularly when it’s unclear what’s expected of them. When there aren’t a lot of explicit cues to help define a situation, we are more likely to act based on cues we pick up implicitly.”
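For readers who want the rules pinned down: in the ultimatum game, a proposer offers a split of a fixed pot, and a responder either accepts (both are paid as proposed) or rejects (both get nothing). A minimal sketch, with the priming result modeled as an assumed 14 percent shaved off a fifty-fifty offer:

```python
def ultimatum_round(pot, offer, accepts):
    """One round of the ultimatum game: if the responder accepts,
    the split stands; if not, both players walk away with nothing.
    Returns (proposer_payoff, responder_payoff)."""
    return (pot - offer, offer) if accepts else (0.0, 0.0)

pot = 10.0
baseline_offer = pot / 2              # the fifty-fifty split favored by control players
primed_offer = baseline_offer * 0.86  # assumed: 14 percent less after business primes

print(ultimatum_round(pot, baseline_offer, accepts=True))  # (5.0, 5.0)
print(ultimatum_round(pot, primed_offer, accepts=True))    # roughly (5.7, 4.3)
```

The game matters here because a coldly rational responder should accept any nonzero offer, yet real responders punish stingy splits; what the priming study shows is that proposers’ sense of how stingy they can safely be is itself movable by incidental images.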

For many years, a common room at Newcastle University has used an “honesty box” to pay for tea and coffee. Anyone is free to help himself to hot beverages and to deposit the posted price in the honesty box. This saves hiring a checker to take people’s money, something that would cost more than the sums collected anyway. The honesty box is a vernacular dictator game. Everyone is supposed to chip in their fair share. They have the option of contributing less, or nothing at all. Based on dictator game research, you’d expect that honesty box compliance has a lot to do with whether people are watching. A 2006 experiment found something more startling.
