Authors: Lawrence Freedman
The main challenge to the presumption that intended egotistical choices were the best basis from which to understand human behavior was that it was consistently hard to square with reality. To take a rather obvious example, researchers tried to replicate the prisoner's dilemma in the circumstances in which it was first described.
4
Could prosecutors gain leverage in cases involving codefendants by exchanging a prospect of a reduced sentence in return for information or testimony against other codefendants? The evidence suggested that it made no difference to the rates of pleas, convictions, and incarcerations in robbery cases with or without codefendants. The surmised reason for this was the threat of extralegal sanctions that offenders could impose on each other. The codefendants might be kept separate during the negotiations, but they could still expect to meet again.
5
To the proponents of rational choice, such observations were irrelevant. The claim was not that rational choice replicated reality but that as an assumption it was productive for the development of theory.
By the 1990s, the debate on rationality appeared to have reached a stalemate, with all conceivable arguments exhausted on both sides. It was, however, starting to be reshaped by new research, bringing insights from psychology and neuroscience into economics. The standard critique of rational choice theory was that people were just not rational in the way that the theory assumed. Instead, they were subject to mental quirks, ignorance, insensitivity, internal contradictions, incompetence, errors in judgment, over-active or blinkered imaginations, and so on. One response to this criticism was to say that there was no need for absurdly exacting standards of rationality. The theory worked well enough if it assumed people were generally reasonable and sensible, attentive to information, open-minded, and thoughtful about consequences.
6
As a formal theory, however, rationality was assessed in terms of the ideal of defined utilities, ordered preferences, consistency, and a statistical grasp of probabilities when relating specific moves to desired outcomes. This sort of
hyper-rationality was required in the world of abstract modeling. The modelers knew that human beings were rarely rational in such an extreme form, but their models required simplifying assumptions. The method was deductive rather than inductive, less concerned with observed patterns of behavior than with developing hypotheses which could then be subjected to empirical tests. If what was observed deviated from what was predicted, that set a research task that could lead to either a more sophisticated model or specific explanations about why a surprising result occurred in a particular case. Predicted outcomes might well be counterintuitive but then turn out to be more accurate than those suggested by intuition.
One of the clearest expositions of what a truly rational action required was set out in 1986 by Jon Elster. The action should be optimal, that is, the best way to satisfy desire, given belief. The belief itself would be the best that could be formed, given the evidence, and the amount of evidence collected would be optimal, given the original desire. Next the action should be consistent, so that both the belief and the desire were free of internal contradictions. The agent must not act on a desire that, in her own opinion, was less weighty than other desires which might be reasons for not acting. Lastly, there was the test of causality. Not only must the action be rationalized by the desire and the belief, but it must also be caused by them. This must also be true for the relation between belief and evidence.
7
Except in the simplest of situations, meeting such demanding criteria for rational action required a grasp of statistical methods and a capacity for interpretation that could only be acquired through specialist study. In practice, faced with complex data sets, most people were apt to make elementary mistakes.
8
Even individuals capable of following the logical demands of such an approach were unlikely to be prepared to accept the considerable investment it would involve. Some decisions were simply not worth the time and effort to get them absolutely right. The time might not even be available in some instances. Gathering all the relevant information and evaluating it carefully would use up more resources than the potential gains from getting the correct answer.
If rational choices required individuals to absorb and evaluate all available information and analyze probabilities with mathematical precision, it could never capture actual human behavior. As we have seen, the urge to scientific rigor that animated rational choice theory only really got going once actors sorted out their preferences and core beliefs. The actors came to the point where their calculations might be translated into equations and matrices as formed individuals, with built-in values and beliefs. They were then ready to play out their contrived dramas. The formal theorists remained unimpressed by claims
that they should seek out more accurate descriptions of human behavior, for example, by drawing on the rapid advances in understanding the human brain. One economist patiently explained that this had nothing to do with his subject. It was not possible to “refute economic models” by this means because these models make “no assumptions and draw no conclusions about the physiology of the brain.” Rationality was not an assumption but a methodological stance, reflecting a decision to view the individual as the unit of agency.
9
If rational choice theory was to be challenged on its own terms, the alternative methodological stance had to demonstrate that it not only approximated better to perceived reality but also that it would produce better theories. The challenge was first set out in the early 1950s by Herbert Simon. He had a background in political science and a grasp of how institutions worked. After entering economics through the Cowles Commission, he became something of an iconoclast at RAND. He developed a fascination with artificial intelligence and how computers might replicate and exceed human capacity. This led him to ponder the nature of human consciousness. He concluded that a reliable behavioral theory must acknowledge elements of irrationality and not just view them as sources of awkward anomalies. While at the Carnegie Graduate School of Industrial Administration, he complained that his economist colleagues “made almost a positive virtue of avoiding direct, systematic observations of individual human beings while valuing the casual empiricism of the economist's armchair introspections.” At Carnegie he went to war against neoclassical economics and lost. The economists grew in numbers and power in the institution and had no interest in his ideas of “bounded rationality.”
10
He gave up on economics and moved into psychology and computer science. This idea of “bounded rationality,” however, came to be recognized as offering a compelling description of how people actually made decisions in the absence of perfect information and computational capacity. It accepted human fallibility without losing the predictability that might still result from a modicum of rationality. Simon showed how people might reasonably accept suboptimal outcomes because of the excessive effort required to get to the optimal. Rather than perform exhaustive searches to get the best solution, they searched until they found one that was satisfactory, a process he described as “satisficing.”
11
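Simon's contrast between optimizing and satisficing can be sketched in a few lines of code. This is only an illustrative toy, not anything from Simon's own work: the aspiration threshold, the option list, and the utility function are all invented for the example.

```python
import random

def satisfice(options, utility, threshold):
    """Return the first option whose utility meets the aspiration
    threshold, instead of scanning every option for the maximum."""
    for option in options:
        if utility(option) >= threshold:
            return option
    # If nothing satisfices, fall back to the best option seen.
    return max(options, key=utility)

# Toy illustration: choosing among 1,000 noisy offers.
random.seed(0)
offers = [random.uniform(0, 100) for _ in range(1000)]
good_enough = satisfice(offers, utility=lambda x: x, threshold=90)
optimal = max(offers)  # the optimizer must examine every offer
print(good_enough, optimal)
```

The satisficer typically stops after a handful of comparisons and accepts a suboptimal but acceptable outcome; the optimizer pays the full search cost for a marginal gain, which is exactly the trade-off Simon argued real decision-makers refuse to make.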
Social norms were adopted, even when inconvenient, to avoid unwanted conflicts. When the empirical work demonstrated strong and consistent patterns of behavior this might reflect the rational pursuit of egotistical goals, but alternatively these patterns might reflect the influence of powerful conventions that inclined people to follow the pack.
Building upon Simon's work, Amos Tversky and Daniel Kahneman introduced further insights from psychology into economics. To gain credibility,
they used sufficient mathematics to demonstrate the seriousness of their methodology and so were able to create a new field of behavioral economics. They demonstrated how individuals used shortcuts to cope with complex situations, relying on processes that were “good enough” and interpreted information superficially using “rules of thumb.” As Kahneman put it, “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.”
12
The Economist
summed up what behavioral research suggested about actual decision-making:
[People] fear failure and are prone to cognitive dissonance, sticking with a belief plainly at odds with the evidence, usually because the belief has been held and cherished for a long time. People like to anchor their beliefs so they can claim that they have external support, and are more likely to take risks to support the status quo than to get to a better place. Issues are compartmentalized so that decisions are taken on one matter with little thought about the implications for elsewhere. They see patterns in data where none exist, represent events as an example of a familiar type rather than acknowledge distinctive features and zoom in on fresh facts rather than big pictures. Probabilities are routinely miscalculated, so … people … assume that outcomes which are very probable are less likely than they really are, that outcomes which are quite unlikely are more likely than they are, and that extremely improbable, but still possible, outcomes have no chance at all of happening. They also tend to view decisions in isolation, rather than as part of a bigger picture.
13
Of particular importance were “framing effects.” These were mentioned earlier as having been identified by Goffman and used in explanations of how the media helped shape public opinion. Framing helped explain how choices came to be viewed differently by altering the relative salience of certain features. Individuals compared alternative courses of action by focusing on one aspect, often randomly chosen, rather than keeping all key aspects in the frame.
14
Another important finding concerned loss aversion. The value of a good to an individual appeared to be higher when viewed as something that could be lost or given up than when evaluated as a potential gain. Richard Thaler, one of the first to incorporate the insights from behavioral economics into mainstream economics, described the “endowment effect,” whereby the selling price for consumption goods was much higher than the buying price.
15
Another challenge to the rational choice model came from experiments that tested propositions derived from game theory. These were not the same as experiments in the natural sciences, which should not be context dependent. Claims that some universal truths about human cognition and behavior were being illuminated needed qualification. The results could only really be considered at all valid for Western, educated, industrialized, rich, and democratic (WEIRD) societies in which the bulk of the experiments were conducted. Nonetheless, while WEIRD societies were admittedly an unrepresentative subset of the world's population, they were also an important subset.
16
One of the most famous experiments was the ultimatum game. It was first used in an experimental setting during the early 1960s in order to explore bargaining behavior. From the start, and to the frustration of the experimenters, the games showed individuals making apparently suboptimal choices. A person (the proposer) was given a sum of money and then chose what proportion another (the responder) should get. The responder could accept or refuse the offer. If the offer was refused, both got nothing. A Nash equilibrium based on rational self-interest would suggest that the proposer should make a small offer, which the responder should accept. In practice, notions of fairness intervened. Responders regularly refused to accept anything less than a third, while most proposers were inclined to offer something close to half, anticipating that the other party would expect fairness.
17
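The gap between the Nash prediction and observed behavior can be made concrete with a minimal sketch of the game's payoffs. The fairness threshold of one third and the $10 pot are illustrative assumptions, not parameters from the original experiments.

```python
def ultimatum(offer_fraction, responder_min_fraction, pot=10.0):
    """One round of the ultimatum game. The responder rejects any
    offer below her fairness threshold, leaving both with nothing.
    Returns (proposer payoff, responder payoff)."""
    offer = offer_fraction * pot
    if offer_fraction >= responder_min_fraction:
        return pot - offer, offer
    return 0.0, 0.0

# Nash prediction: a purely self-interested responder accepts any
# positive offer, so a token 1% offer succeeds...
print(ultimatum(0.01, 0.0))
# ...but against a responder who insists on at least a third, the
# same token offer leaves both players with nothing.
print(ultimatum(0.01, 1 / 3))
# A near-even split, which most proposers actually made, is safe.
print(ultimatum(0.5, 1 / 3))
```

The sketch shows why fairness norms dominate: once responders credibly reject small offers, the proposer's rational move is no longer the Nash-minimal offer but something close to an even split.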
Faced with this unexpected finding, researchers at first wondered if there was something wrong with the experiments, such as whether there had been insufficient time to think through the options. But giving people more time or raising the stakes to turn the game into something more serious made little difference. In a variation known as the dictator game, the responder was bound to accept whatever the proposer granted. As might be expected, lower offers were made, perhaps about half the average sum offered in the ultimatum game.
18
Yet, at about 20 percent of the total, they were not tiny.
It became clear that the key factor was not faulty calculation but the nature of the social interaction. In the ultimatum game, the responders accepted far less if they were told that the amount had been determined by a computer or the spin of a roulette wheel. If the human interaction was less direct, with complete anonymity, then proposers made smaller grants.
19
A further finding was that there were variations according to ethnicity. The amounts distributed reflected culturally accepted notions of fairness. In some cultures, the proposers would make a point of offering more than half; in others, the responders were reluctant to accept anything. It also made a difference if the transaction
was within a family, especially in the dictator game. Playing these games with children also demonstrated that altruism was something to be learned during childhood.
20
As they grew older, most individuals turned away from the self-regarding decisions anticipated by classical economic theory and became more other-regarding. The exceptions were those suffering from neural disorders such as autism. In this way, as Angela Stanton caustically noted, the canonical model of rational decision-making treated the decision-making ability of children and those with emotional disorders as the norm.
21