Stephen, Andrew. 2009. “Why Do People Transmit Word-of-Mouth? The Effects of Recipient and Relationship Characteristics on Transmission Behaviors.” Marketing Department, Columbia University.

Stouffer, Samuel A. 1947. “Sociology and Common Sense: Discussion.”
American Sociological Review
12 (1):11–12.

Sun, Eric, Itamar Rosenn, Cameron A. Marlow, and Thomas M. Lento. 2009. “Gesundheit! Modeling Contagion Through Facebook News Feed.” Third International Conference on Weblogs and Social Media, at San Jose, CA. AAAI Press.

Sunstein, Cass R. 2005. “Group Judgments: Statistical Means, Deliberation, and Information Markets.”
New York University Law Review
80 (3):962–1049.

Surowiecki, James. 2004.
The Wisdom of Crowds: Why the Many Are
Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations
. New York: Doubleday.

Svenson, Ola. 1981. “Are We All Less Risky and More Skillful Than Our Fellow Drivers?”
Acta Psychologica
47 (2):143–48.

Taibbi, Matt. 2009. “The Real Price of Goldman’s Giganto-Profits.” July 16.
http://trueslant.com/

Taleb, Nassim Nicholas. 2001.
Fooled by Randomness
. New York: W. W. Norton.

———. 2007.
The Black Swan: The Impact of the Highly Improbable
. New York: Random House.

Tang, Diane, Ashish Agarwal, Deirdre O’Brien, and Mike Meyer. 2010. “Overlapping Experiment Infrastructure: More, Better, Faster Experimentation.” 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC. ACM Press.

Taylor, Carl C. 1947. “Sociology and Common Sense.”
American Sociological Review
12 (1):1–9.

Tetlock, Philip E. 2005.
Expert Political Judgment: How Good Is It? How Can We Know?
Princeton, NJ: Princeton University Press.

Thaler, Richard H., and Cass R. Sunstein. 2008.
Nudge: Improving Decisions about Health, Wealth, and Happiness
. New Haven, CT: Yale University Press.

Thompson, Clive. 2010. “What Is I.B.M.’s Watson?”
New York Times Magazine
(June 20):30–45.

Thorndike, Edward L. 1920. “A Constant Error in Psychological Ratings.”
Journal of Applied Psychology
4:25–29.

Tomlinson, Brian, and Clive Cockram. 2003. “SARS: Experience at Prince of Wales Hospital, Hong Kong.”
The Lancet
361 (9368):1486–87.

Tuchman, Barbara W. 1985.
The March of Folly: From Troy to Vietnam
. New York: Ballantine Books.

Tucker, Nicholas. 1999. “The Rise and Rise of Harry Potter.”
Children’s Literature in Education
30 (4):221–34.

Turow, Joseph, Jennifer King, Chris J. Hoofnagle, et al. 2009. “Americans Reject Tailored Advertising and Three Activities That Enable It.” Available at SSRN:
http://ssrn.com/abstract=1478214

Tversky, Amos, and Daniel Kahneman. 1983. “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.”
Psychological Review
90 (4):293–315.

———. 1974. “Judgment Under Uncertainty: Heuristics and Biases.”
Science
185 (4157):1124–31.

Tyler, Joshua R., Dennis M. Wilkinson, and Bernardo A. Huberman. 2005. “Email as Spectroscopy: Automated Discovery of Community Structure Within Organizations.”
The Information Society
21 (2):143–53.

Tziralis, George, and Ilias Tatsiopoulos. 2006. “Prediction Markets: An Extended Literature Review.”
Journal of Prediction Markets
1 (1).

Wack, Pierre. 1985a. “Scenarios: Shooting the Rapids.”
Harvard Business Review
63 (6):139–50.

———. 1985b. “Scenarios: Uncharted Waters Ahead.”
Harvard Business Review
, 63(5).

Wade, Nicholas. 2010. “A Decade Later, Genetic Map Yields Few New Cures.”
New York Times
, June 12.

Wadler, Joyce. 2010. “The No Lock People.”
New York Times
, Jan. 13.

Wasserman, Noam, Bharat Anand, and Nitin Nohria. 2010. “When Does Leadership Matter?” In
Handbook of Leadership Theory and Practice
, ed. N. Nohria and R. Khurana. Cambridge, MA: Harvard Business Press.

Watts, Duncan J. 1999.
Small Worlds: The Dynamics of Networks Between Order and Randomness
. Princeton, NJ: Princeton University Press.

———. 2003.
Six Degrees: The Science of a Connected Age
. New York: W. W. Norton.

———. 2004. “The ‘New’ Science of Networks.”
Annual Review of Sociology
, 30:243–270.

———. 2007. “A 21st Century Science.”
Nature
445:489.

———. 2009. “Too Big to Fail? How About Too Big to Exist?”
Harvard Business Review
, 87(6):16.

Watts, Duncan J., P. S. Dodds, and M. E. J. Newman. 2002. “Identity and Search in Social Networks.”
Science
296 (5571):1302–1305.

Watts, Duncan J., and Peter Sheridan Dodds. 2007. “Influentials, Networks, and Public Opinion Formation.”
Journal of Consumer Research
34:441–58.

Watts, Duncan J., and Steve Hasker. 2006. “Marketing in an Unpredictable World.”
Harvard Business Review
84 (9):25–30.

Watts, Duncan J., and S. H. Strogatz. 1998. “Collective Dynamics of ‘Small-World’ Networks.”
Nature
393 (6684):440–42.

Weaver, Warren. 1958. “A Quarter Century in the Natural Sciences.”
Public Health Reports
76:57–65.

Weimann, Gabriel. 1994.
The Influentials: People Who Influence People
. Albany, NY: State University of New York Press.

Whitford, Josh. 2002. “Pragmatism and the Untenable Dualism of Means and Ends: Why Rational Choice Theory Does Not Deserve Paradigmatic Privilege.”
Theory and Society
31 (3):325–63.

Wilson, Eric. 2008. “Is This the World’s Cheapest Dress?”
New York Times
, May 1.

Wimmer, Andreas, and Kevin Lewis. 2010. “Beyond and Below Racial Homophily: ERG Models of a Friendship Network Documented on Facebook.”
American Journal of Sociology
116 (2):583–642.

Wolfers, Justin, and Eric Zitzewitz. 2004. “Prediction Markets.”
Journal of Economic Perspectives
18 (2):107–26.

Wortham, Jenna. 2010. “Once Just a Site with Funny Cat Pictures, and Now a Web Empire.”
New York Times
, June 13.

Wright, George, and Paul Goodwin. 2009. “Decision Making and Planning Under Low Levels of Predictability: Enhancing the Scenario Method.”
International Journal of Forecasting
25 (4):813–25.

Zelditch, Morris. 1969. “Can You Really Study an Army in the Laboratory?” In A. Etzioni and E. N. Lehman (eds.),
A Sociological Reader on Complex Organizations
. New York: Holt, Rinehart and Winston, pp. 528–39.

Zheng, Tian, Matthew J. Salganik, and Andrew Gelman. 2006. “How Many People Do You Know in Prison?: Using Overdispersion in Count Data to Estimate Social Structure in Networks.”
Journal of the American Statistical Association
101 (474):409.

Zuckerman, Ezra W., and John T. Jost. 2001. “What Makes You Think You’re So Popular? Self-Evaluation Maintenance and the Subjective Side of the ‘Friendship Paradox.’ ”
Social Psychology Quarterly
64 (3):207–23.

NOTES
PREFACE: A SOCIOLOGIST’S APOLOGY

  
1.
 For John Gribbin’s review of Becker (1998), see Gribbin (1998).

  
2.
 See Watts (1999) for a description of small-world networks.

  
3.
 See, for example, a recent story on the complexity of modern finance, war, and policy (Segal 2010).

  
4.
 For a report on Bailey Hutchison’s proposal, see Mervis (2006). For a report on Senator Coburn’s remarks, see Glenn (2009).

  
5.
 See Lazarsfeld (1949).

  
6.
 For an example of the “it’s not rocket science” mentality, see Frist et al. (2010).

  
7.
 See Svenson (1981) for the result about drivers. See Hoorens (1993), Klar and Giladi (1999), Dunning et al. (1989), and Zuckerman and Jost (2001) for other examples of illusory superiority bias. See Alicke and Govorun (2005) for the leadership result.

CHAPTER 1: THE MYTH OF COMMON SENSE

  
1.
 See Milgram’s
Obedience to Authority
for details (Milgram 1969). An engaging account of Milgram’s life and research is given in Blass (2009).

  
2.
 Milgram’s reaction was described in a 1974 interview in
Psychology Today
, and is reprinted in Blass (2009). The original report on the subway experiment is Milgram and Sabini (1983) and has been reprinted in Milgram (1992). Three decades later, two
New York Times
reporters set out to repeat Milgram’s experiment. They reported almost exactly the same experience: bafflement, even anger, from riders; and extreme discomfort themselves (Luo 2004, Ramirez and Medina 2004).

  
3.
 Although the nature and limitations of common sense are discussed in introductory sociology textbooks (according to Mathisen [1989], roughly half of the sociology texts he surveyed contained references to common sense), the topic is rarely discussed in sociology journals. See, however, Taylor (1947), Stouffer (1947), Lazarsfeld (1949), Black (1979), Boudon (1988a), Mathisen (1989), Bengston and Hazzard (1990), Dobbin (1994), and Klein (2006) for a variety of perspectives by sociologists. Economists have been even less concerned with common sense than sociologists, but see Andreozzi (2004) for some interesting remarks on social versus physical intuition.

  
4.
 See Geertz (1975, p. 6).

  
5.
 Taylor (1947, p. 1).

  
6.
 
Philosophers in particular have wondered about the place of common sense in understanding the world, with the tide of philosophical opinion going back and forth on the matter of how much respect common sense ought to be given. In brief, the argument seems to have been about the fundamental reliability of experience itself; that is, when is it acceptable to take something—an object, an experience, or an observation—for granted, and when must one question the evidence of one’s own senses? On one extreme were the radical skeptics, who posited that because all experience was, in effect, filtered through the mind, nothing at all could be taken for granted as representing some kind of objective reality. At the other extreme were philosophers like Thomas Reid, of the Scottish Realist School, who were of the opinion that any philosophy of nature ought to take the world “as it is.” Something of a compromise position was outlined in America around the turn of the twentieth century by the pragmatist school of philosophy, most prominently William James and Charles Sanders Peirce, who emphasized the need to reconcile abstract knowledge of a scientific kind with that of ordinary experience, but who also held that much of what passes for common sense was to be regarded with suspicion (James 1909, p. 193). See Rescher (2005) and Mathisen (1989) for discussions of the history of common sense in philosophy.

  
7.
 It should be noted that commonsense reasoning also seems to have backup systems that act like general principles. Thus when some commonsense rule for dealing with some particular situation fails, on account of some previously unencountered contingency, we are not completely lost, but rather simply refer to this more general covering rule for guidance. It should also be noted, however, that attempts to formalize this backup system, most notably in artificial intelligence research, have so far been unsuccessful (Dennett 1984); thus, however it works, it does not resemble the logical structure of science and mathematics.

  
8.
 See Minsky (2006) for a discussion of common sense and artificial intelligence.

  
9.
 For a description of the cross-cultural Ultimatum game study, see Henrich et al. (2001). For a review of Ultimatum game results in industrial countries, see Camerer, Loewenstein, and Rabin (2003).

10.
 See Collins (2007). Another consequence of the culturally embedded nature of commonsense knowledge is that what it treats as “facts”—self-evident, unadorned descriptions of an objective reality—often turn out to be value judgments that depend on other seemingly unrelated features of the socio-cultural landscape. Consider, for example, the claim that “police are more likely to respond to serious than non-serious crimes.” Empirical research on the matter has found that indeed they do—just as common sense would suggest—yet as the sociologist Donald Black has argued, it is also the case that victims of crimes are more likely to classify them as “serious” when the police respond to them. Viewed this way, the seriousness of a crime is determined not only by its intrinsic nature—robbery, burglary, assault, etc.—but also by the circumstances of the people who are
the most likely to be attended to by the police. And as Black noted, these people tend to be highly educated professionals living in wealthy neighborhoods. Thus what seems to be a plain description of reality—serious crime attracts police attention—is, in fact, really a value judgment about what counts as serious; and this in turn depends on other features of the world, like social and economic inequality, that would seem to have nothing to do with the “fact” in question. See Black (1979) for a discussion of the conflation of facts and values. Becker (1998, pp. 133–34) makes a similar point in slightly different language, noting that “factual” statements about individual attributes—height, intelligence, etc.—are invariably relational judgments that in turn depend on social structure (e.g., someone who is “tall” in one context may be short in another; someone who is poor at drawing is not considered “mentally retarded” whereas someone who is poor at math or reading may be). Finally, Berger and Luckmann (1966) advance a more general theory of how subjective, possibly arbitrary routines, practices, and beliefs become reified as “facts” via a process of social construction.

11.
 See Geertz (1975).

12.
 See Wadler (2010) for the story about the “no lock people.”

13.
 For the Geertz quote, see Geertz (1975, p. 22). For a discussion of how people respond to their differences of opinions, and an intriguing theoretical explanation of their failure to converge on a consensus view, see Sethi and Yildiz (2009).

14.
 See Gelman, Lax, and Phillips (2010) for survey results documenting Americans’ evolving attitudes toward same-sex marriage.

15.
 It should be noted that political professionals, like politicians, pundits, and party officials, do tend to hold consistently liberal or conservative positions. Thus, Congress, for example, is much more polarized along a liberal-conservative divide than the general population (Layman et al. 2006). See Baldassarri and Gelman (2008) for a detailed discussion of how political beliefs of individuals do and don’t correlate with each other. See also Gelman et al. (2008) for a more general discussion of common misunderstandings about political beliefs and voting behavior.

16.
 Le Corbusier (1923, p. 61).

17.
 See Scott (1998).

18.
 For a detailed argument about the failures of planning in economic development, particularly with respect to Africa, see Easterly (2006). For an even more negative viewpoint of the effect of foreign aid in Africa, see Moyo (2009), who argues that it has actually hurt Africa, not helped. For a more hopeful alternative viewpoint see Sachs (2006).

19.
 See Jacobs (1961, p. 4).

20.
 See Venkatesh (2002).

21.
 See Ravitch (2010) for a discussion of how popular, commonsense policies such as increased testing and school choice actually undermined public education. See Cohn (2007) and Reid (2009) for analysis of the cost of health care and possible alternative models. See O’Toole (2007) for a detailed discussion on forestry management, urban planning, and other
failures of government planning and regulation. See Howard (1997) for a discussion and numerous anecdotes of the unintended consequences of government regulations. See Easterly (2006) again for some interesting remarks on nation-building and political interference, and Tuchman (1985) for a scathing and detailed account of US involvement in Vietnam. See Gelb (2009) for an alternate view of American foreign policy.

22.
 See Barbera (2009) and Cassidy (2009) for discussion of the cost of financial crises. See Mintzberg (2000) and Raynor (2007) for overviews of strategic planning methods and failures. See Knee, Greenwald, and Seave (2009) for a discussion of the fallibility of media moguls; and McDonald and Robinson (2009), and Sorkin (2009) for inside accounts of investment banking leaders whose actions precipitated the recent financial crisis. See also recent news stories recounting the failed AOL–Time Warner merger (Arango 2010), and the rampant, ultimately doomed growth of Citigroup (Brooker 2010).

23.
 Clearly not all attempts at corporate or even government planning end badly. Looking back over the past few centuries, in fact, overall conditions of living have improved dramatically for a large fraction of the world’s populations—evidence that even the largest and most unwieldy political institutions do sometimes get things right. How are we to know, then, that common sense isn’t actually quite good at solving complex social problems, failing no more frequently than any other method we might use? Ultimately we cannot know the answer to this question, if only because no systematic attempt to collect data on relative rates of planning successes and failures has ever been attempted—at least, not to my knowledge. Even if such an attempt had been made, moreover, it would still not resolve the matter, because absent some other “uncommon sense” method against which to compare it, the success rate of commonsense-based planning would be meaningless. A more precise way to state my criticism of commonsense reasoning, therefore, is not that it is universally “good” or “bad,” but rather that there are sufficiently many examples where commonsense reasoning has led to important planning failures that it is worth contemplating how we might do better.

24.
 For details of financial crises throughout the ages, see Mackay (1932), Kindleberger (1978), and Reinhart and Rogoff (2009).

25.
 There are, of course, several overlapping traditions in philosophy that already take a suspicious view of what I am calling common sense as their starting point. One way to understand the entire project of what Rawls called political liberalism (Rawls 1993), along with the closely related idea of deliberative democracy (Bohman 1998; Bohman and Rehg 1997), is, in fact, as an attempt to prescribe a political system that can offer procedural justice to all its members without presupposing that any particular point of view—whether religious, moral, or otherwise—is correct. The whole principle of deliberation, in other words, presupposes that common sense is not to be trusted, thereby shifting the objective from determining what is “right” to designing political institutions that don’t privilege any one view
of what is right over any other. Although this tradition is entirely consistent with the critiques of common sense that I raise in this book, my emphasis is somewhat different. Whereas deliberation simply assumes incompatibility of commonsense beliefs and looks to build political institutions that work anyway, I am more concerned with the particular types of errors that arise in commonsense reasoning. Nevertheless, I touch on aspects of this work in chapter 9 when I discuss matters of fairness and justice. A second strand of philosophy that starts with suspicion of common sense is the pragmatism of James and Dewey (see, for example, James 1909, p. 193). Pragmatists see errors embedded in common sense as an important obstruction to effective action in the world, and therefore take willingness to question and revise common sense as a condition for effective problem solving. This kind of pragmatism has in turn influenced efforts to build institutions, some of which I have described in chapter 8, that systematically question and revise their own routines and thus can adapt quickly to changes that cannot be predicted. This tradition, therefore, is also consistent with the critiques of common sense developed here, but as with the deliberation tradition, it can be advanced without explicitly articulating the particular cognitive biases that I identify. Nevertheless, I would contend that a discussion of the biases inherent to commonsense reasoning is a useful complement to both the deliberative and pragmatist agendas, providing in effect an alternative argument for the necessity of institutions and procedures that do not depend on commonsense reasoning in order to function.

CHAPTER 2: THINKING ABOUT THINKING

  
1.
 For the original study of organ donor rates, see Johnson and Goldstein (2003). It should be noted that the rates of indicated consent were not the same as the eventual organ-donation rate, which often depends on other factors like family members’ approval. The difference in final donation rates was actually much smaller—more like 16 percent—but still dramatic.

  
2.
 See Duesenberry (1960) for the original quotation, which is repeated approvingly by Becker himself (Becker and Murphy 2000, p. 22).

  
3.
 For more details on the interplay between cooperation and punishment, see Fehr and Fischbacher (2003), Fehr and Gachter (2000 and 2002), Bowles et al. (2003), and Gurerk et al. (2006).

  
4.
 Within sociology, the debate over rational choice theory has played out over the past twenty years, beginning with an early volume (Coleman and Fararo 1992) in which perspectives from both sides of the debate are represented, and continued in journals like the
American Journal of Sociology
(Kiser and Hechter 1998; Somers 1998; Boudon 1998) and
Sociological Methods and Research
(Quadagno and Knapp 1992). Over the same period, a similar debate has also played out in political science, sparked by the publication of Green and Shapiro’s (1994) polemic,
Pathologies of Rational Choice Theory
. See Friedman (1996) for the responses of a number of rational choice advocates to Green and Shapiro’s critique, along with Green
and Shapiro’s responses to the responses. Other interesting commentaries are by Elster (1993, 2009), Goldthorpe (1998), McFadden (1999), and Whitford (2002).

  
5.
 For accounts of the power of rational choice theory to explain behavior, see Harsanyi (1969), Becker (1976), Buchanan (1989), Farmer (1992), Coleman (1993), Kiser and Hechter (1998), and Cox (1999).

  
6.
 See
Freakonomics
for details (Levitt and Dubner 2005). For other similar examples see Landsburg (1993 and 2007), Harford (2006), and Frank (2007).

  
7.
 Max Weber, one of the founding fathers of sociology, effectively
defined
rational behavior as behavior that is understandable, while James Coleman, one of the intellectual fathers of rational choice theory, wrote that “The very concept of rational action is a conception of action that is ‘understandable,’ action that we need ask no more questions about” (Coleman 1986, p. 1). Finally, Goldthorpe (1998, pp. 184–85) makes the interesting point that it is not even clear how we should talk about irrational or nonrational behavior unless we first have a conception of what it means to behave rationally; thus even if it does not explain all behavior, rational action should be accorded what he calls “privilege” over other theories of action.

  
8.
 See Berman (2009) for an economic analysis of terrorism. See Leonhardt (2009) for a discussion of incentives in the medical profession.

  
9.
 See Goldstein et al. (2008) and Thaler and Sunstein (2008) for more discussion and examples of defaults.

10.
 For details of the major results of the psychology literature, see Gilovich, Griffin, and Kahneman (2002) and Gigerenzer et al. (1999). For the more recently established field of behavioral economics, see Camerer, Loewenstein, and Rabin (2003). In addition to these academic contributions, a number of popular books have been published recently that cover much of the same ground. See, for example, Gilbert (2006), Ariely (2008), Marcus (2008), and Gigerenzer (2007).

11.
 See North et al. (1997) for details on the wine study, Berger and Fitzsimons (2008) for the study on Gatorade, and Mandel and Johnson (2002) for the online shopping study. See Bargh et al. (1996) for other examples of priming.

12.
 For more details and examples of anchoring and adjustment, see Chapman and Johnson (1994), Ariely et al. (2003), and Tversky and Kahneman (1974).

13.
 See Griffin et al. (2005) and Bettman et al. (1998) for examples of framing effects on consumer behavior. See Payne, Bettman, and Johnson (1992) for a discussion of what they call constructive preferences, including preference reversal.

14.
 See Tversky and Kahneman (1974) for a discussion of “availability bias.” See Gilbert (2006) for a discussion of what he calls “presentism.” See Bargh and Chartrand (1999) and Schwarz (2004) for more on the importance of “fluency.”

15.
 See Nickerson (1998) for a review of confirmation bias. See Bond et al. (2007) for an example of confirmation bias in evaluating consumer products.
See Marcus (2008, pp. 53–57) for a discussion of motivated reasoning versus confirmation bias. Both biases are also closely related to the phenomenon of cognitive dissonance (Festinger 1957; Harmon-Jones and Mills 1999), according to which individuals actively seek to reconcile conflicting beliefs (“The car I just bought was more expensive than I can really afford” versus “The car I just bought is awesome”) by exposing themselves selectively to information that supports one view or discredits the other.

16.
 See Dennett (1984).

17.
 According to the philosopher Jerry Fodor (2006), the crux of the frame problem derives from the “local” nature of computation, which—at least as currently understood—takes some set of parameters and conditions as given, and then applies some sort of operation on these inputs that generates an output. In the case of rational choice theory, for example, the “parameters and conditions” might be captured by the utility function, and the “operation” would be some optimization procedure; but one could imagine other conditions and operations as well, including heuristics, habits, and other nonrational approaches to problem solving. The point is that no matter what kind of computation one tries to write down, one must start from some set of assumptions about what is relevant, and that decision is not one that can be resolved in the same (i.e., local) manner. If one tried to resolve it, for example, by starting with some independent set of assumptions about what is relevant to the computation itself, one would simply end up with a different version of the same problem (what is relevant to that computation?), just one step removed. Of course, one could keep iterating this process and hope that it terminates at some well-defined point. In fact, one can always do this trivially by exhaustively including every item and concept in the known universe in the basket of potentially relevant factors, thereby making what at first seems to be a global problem local by definition. Unfortunately, this approach succeeds only at the expense of rendering the computational procedure intractable.
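
To make the idea of “local” computation concrete, here is a minimal sketch in Python; the factor names and numbers are purely illustrative assumptions, not drawn from Fodor or from any study cited above. The optimization step is trivial once a set of relevant factors has been fixed in advance, while fixing that set is a decision of the same kind, one step removed.

def choose(options, utility):
    # Local computation: given a fixed set of options and a utility
    # function, the optimization itself is straightforward.
    return max(options, key=utility)

# The "parameters and conditions": factors assumed, in advance, to be relevant.
relevant = ["price", "distance"]  # illustrative assumption; deciding this list is the hard part

def utility(option):
    # Operates only on the factors already deemed relevant (lower total is better).
    return -sum(option[factor] for factor in relevant)

options = [
    {"price": 30, "distance": 2, "noise": 9},  # "noise" is ignored by assumption
    {"price": 20, "distance": 5, "noise": 1},
]

print(choose(options, utility))
# Deciding what belongs in `relevant` is itself a computation that needs its own
# notion of relevance, one step removed; including every conceivable factor makes
# the problem "local" only at the cost of making it intractable.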

18.
 For an introduction to machine learning, see Bishop (2006). See Thompson (2010) for a story about the Jeopardy-playing computer.

19.
 For a compelling discussion of the many ways in which our brains misrepresent both our memories of past events and our anticipated experience of future events, see Gilbert (2006). As Becker (1998, p. 14) has noted, even social scientists are prone to this error, filling in the motivations, perspectives, and intentions of their subjects whenever they have no direct evidence of them. For related work on memory, see Schacter (2001) and Marcus (2008). See Bernard et al. (1984) for many examples of errors in survey respondents’ recollections of their own past behavior and experience. See Ariely (2008) for additional examples of individuals overestimating their anticipated happiness or, alternatively, underestimating their anticipated unhappiness, regarding future events. For the results on online dating, see Norton, Frost, and Ariely (2007).

20.
 For discussions of performance-based pay, see Hall and Liebman (1997) and Murphy (1998).

21.
 
Mechanical Turk is named for a nineteenth-century chess-playing automaton that was famous for having beaten Napoleon. The original Turk, of course, was a hoax—in reality there was a human inside making all the moves—and that’s exactly the point. The tasks that one typically finds on Mechanical Turk are there because they are relatively easy for humans to solve, but difficult for computers—a phenomenon that Amazon founder Jeff Bezos calls “artificial, artificial intelligence.” See Howe (2006) for an early report on Amazon’s Mechanical Turk, and Pontin (2007) for Bezos’s coinage of “artificial, artificial intelligence.” See
http://behind-the-enemy-lines.blogspot.com
for additional information on Mechanical Turk.

22.
 See Mason and Watts (2009) for details on the financial incentives experiment.

23.
 Overall, women in fact earn only about 75 percent as much as men, but much of this “pay gap” can be accounted for in terms of different choices that women make—for example, to work in lower-paying professions, or to take time off from work to raise a family, and so on. Accounting for all this variability, and comparing only men and women who work in comparable jobs under comparable conditions, roughly a 9 percent gap remains. See Bernard (2010) and
http://www.iwpr.org/pdf/C350.pdf
for more details.

24.
 See Prendergast (1999), Holmstrom and Milgrom (1991), and Baker (1992) for studies of “multitasking.” See Gneezy et al. (2009) for a study of the “choking” effect. See Herzberg (1987), Kohn (1993), and Pink (2009) for general critiques of financial rewards.

25.
 Levitt and Dubner (2005, p. 20).

26.
 For details on the unintended consequences of the No Child Left Behind Act, see Sadovnik et al. (2007). For a specific discussion of “educational triage” practices that raise pass rates without impacting overall educational quality, see Booher-Jennings (2005, 2006). See Meyer (2002) for a general discussion on the difficulty of measuring and rewarding performance.

27.
 See Rampell (2010) for the story about politicians.

28.
 This argument has been made most forcefully by Donald Green and Ian Shapiro, who argue that when “everything from conscious calculation to ‘cultural inertia’ may be squared with some variant of rational choice theory … our disagreement becomes merely semantic, and rational choice theory is nothing but an ever-expanding tent in which to house every plausible proposition advanced by anthropology, sociology, or social psychology” (Green and Shapiro 2005, p. 76).
