
SURVEY OF EXPERIMENTAL ECONOMICS

2 Experiments on Individual Decision-Making

Part 2 of the survey looks at experimental results dealing with individual choice. The discussion compares the two dominant experimental methodologies that govern individual decision-making experiments in social science. It then discusses decision-making experiments under two main heads – the psychology-oriented experiments (or what has now morphed into behavioural economics) and experiments that test observed behaviour against theoretical benchmarks derived from neoclassical microeconomic theory. The last section provides an overview and looks ahead to the future of experiments in decision-making.

2.1 Introduction to Decision-Making Experiments

Individual decision-making is central to many disciplines in the canon of social science as well as in the basic sciences and engineering. Any time an agent chooses an option from several that are open to him or her, we say that he or she has exercised a choice or made a decision. Choices generally depend on preferences and the underlying environmental/institutional variables at play. In economic settings, agents allocate their funds between assets of differing riskiness, choose where to supply their labour and make consumption decisions, all of which affect their welfare. Further, the environments these agents transact in involve a certain amount of uncertainty, whereby the outcome that is eventually achieved as a consequence of the decision taken is not known ex ante with certainty. This stochasticity, and how agents process it, has led to theorisations and axiomatisations regarding their "rationality".5 A few competing models of preference attempt to understand how agents, when confronted with such an uncertain choice problem, choose optimally to maximise certain objectives. These objectives could be their own (monetary or non-monetary) pay-offs, or social preferences that attach importance to the pay-offs of other individuals in addition to their own.

Though we primarily focus on reviewing individual decision-making in economic situations, it is worth mentioning that the first advances in experimentally studying human decision-making came from social psychology and were concerned with violations of normatively understood rational behaviour. Accordingly, null hypotheses that pertained to these normative behavioural paradigms were tested (sometimes using parametric and non-parametric statistical inference techniques) in psychology experiments set up to collect and test observed values of parameters against theoretically calculated ones. This work was referred to as behavioural decision research (Edwards 1961). The goal, as in all experimental research, was to test whether normative decision rules were systematically violated and to posit better alternative theories that explain these divergences.

As in game theory, in decision theory too it was seen that when problems involved increasingly complex computation to reach a solution, limits on human computational ability necessitated the use of rules of thumb or "heuristics" that optimise over a small set of variables (a subset of the larger set of variables that govern the problem) considered central to solving the problem. Over time, to cite an evolutionary argument, the heuristics that selected the variables providing the best results were adopted and the others were discarded. This "boundedly rational" approach originates in Simon's (1955) idea of procedural rationality, where agents follow reasonable heuristics and on average achieve close to optimal outcomes. This is distinct from substantive rationality, where agents consider the entire set of variables to make their decision.6

Analytically modelled choice behaviour used by economic theorists originates with the work of von Neumann and Morgenstern (1944), which was extended by Savage (1954) and is referred to as the Savage-Bayes approach. This approach uses an analytically tractable expected utility (EU) functional form.7 Decision-makers, according to this expected utility theory (EUT), compare the EU of various risky alternatives and choose the alternative that provides them with the highest EU. Over time, laboratory experiments uncovered systematic violations of the EUT (referred to in the literature as paradoxes), which led to various modifications of the extant theory, the most notable of these being the Prospect Theory (PT), which was first posited by Kahneman and Tversky (1979). In the next section, we compare the two dominant experimental methodologies that govern individual decision-making experiments in social science. Following this, we discuss decision-making experiments under two main heads – the psychology-oriented experiments (or what has now morphed into behavioural economics) and experiments that test observed behaviour against theoretical benchmarks derived from neoclassical microeconomic theory. The last section provides an overview and looks ahead to the future of experiments in decision-making.

2.2 Clash of Two Methodologies?

The methodologies used to study bounded rationality and divergences from normative behavioural paradigms differ between social psychology and experimental economics, so there was initially very little joint experimental work between the two disciplines. Over time, a significant number of economists have begun to appreciate the problems investigated and the methodologies used by psychologists, which has led to some convergence between the fields, though serious divergences regarding procedure and implications remain. The main objection of economists is that subjects in a majority of psychology experiments are not paid (including in the classic Kahneman and Tversky 1979 study on the Prospect Theory), or are not paid in a way that depends on the decisions they make (and, in game theoretic situations, on the decisions of the other players in the game as well).8 Most psychology experiments are either unpaid or pay subjects a non-salient flat fee.9 The no or low payment paradigm in psychology studies has been defended variously by psychologists, as a large number of individual choice experiments in psychology deal with non-monetisable utilities or even "pay-offs" that are difficult to measure.

A second point of contention deals with deception; that is, misleading subjects regarding the actual objectives of the experiment. According to many psychologists who study boredom, cheating, excitement, pain and anger, the treatments critically hinge on the subjects not being aware of the stimuli they are going to be subjected to, or of the purpose behind the experiment, as such awareness would lead to biased actions.10 Another point of divergence between experimental psychology and experimental economics relates to the theoretical underpinnings of various results. Most economists do not really need a precise and accurate theory at the individual level, as long as it is general enough to explain some of the observed behaviour accurately and to generate an aggregate prediction that is more or less accurate. Elegance and generality are thus prized in economic theory, whereas psychologists and cognitive scientists are interested in modelling the precise nuances of behaviour displayed by individuals.

An example should make this clear. A typical economic experiment on attitudes towards risk would have the researcher make an assumption about the form of the utility function (say constant relative risk aversion, or CRRA) that the agents purportedly follow. Using this function and the choice responses in the experiment, one can calculate some measure (maybe Arrow-Pratt) of risk aversion and then compare this across agents, over time, cross-culturally, and so on. If anyone questions the validity of using this functional form over another, the answer a theorist or an experimental economist would usually give is that it does not matter, as long as everyone's attitude to risk is measured using the same CRRA specification. Most psychologists and cognitive scientists would be quite unhappy with this "as if" mode of evaluation and would be more interested in the cognitive processes that govern the choice made by the decision-maker, rather than in obtaining some measure which has good internal validity but potentially scanty external support.11
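
To make the procedure concrete, the CRRA specification and the Arrow-Pratt measure referred to above take the following standard textbook form (stated here purely as an illustration, not as the specification of any particular study cited in this survey):

```latex
u(x) = \frac{x^{1-r}}{1-r} \quad (r \neq 1), \qquad
u(x) = \ln x \quad (r = 1), \qquad
R(x) \equiv -\frac{x\,u''(x)}{u'(x)} = r .
```

Given a subject's lottery choices, the analyst backs out the value of r that rationalises them and then compares this single number across subjects, treatments or cultures; this is precisely the "as if" shortcut that psychologists and cognitive scientists find unsatisfying.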

Further, according to Camerer (1995), most economists have a one-size-fits-all approach to studying economic problems vis-à-vis psychologists. If a task involves eliciting a probability, most psychology experiments would frame the problem in a natural setting using a vignette, anchoring the task to certain specific stimuli. Most economists would instead attempt to elicit the same probability using a more decontextualised device such as a pair of dice or a bingo cage. This, in keeping with the "institution free" pedagogy of neoclassical economics, yields a probability that is a specific statistical measure rather than an assessment of a contextualised measure of uncertainty. Economists are also interested in static repetition of tasks with a view to studying convergence to equilibria or to some non-equilibrium outcome. In contrast, psychologists are most interested in the behaviour of subjects the first time they experience a stimulus, as they feel that quick static repetition overstates the speed and clarity of the feedback that would be provided in a natural setting.

With their interest in behaviour in specific contexts and its correlates, it is not uncommon for psychology experiments on behaviour to have a significant interface with other social sciences such as anthropology, and with the biological sciences, especially evolutionary biology. This, over the years, led to the development of the hybrid field of behavioural economics in the 1980s, which draws on much more than economic theory and attempts to provide a handle on the more primitive elements that make up behaviour by studying context-specific biases and documenting deviations to arrive at norms in a precise, detailed manner. Behavioural economists are also interested in how behaviour correlates with neural substrates or pathology, which may sometime in the future give us a more precise medical answer to questions such as why some people are risk loving (which is manifested in the theory as a convex utility function) and others
risk averse (which translates to a concave utility function in economic theory). Indeed, behavioural economics studies behaviour such as procrastination and explores physical or emotional states such as boredom, pain and sexual arousal, which may not have received much (or any) attention in economic theory, but which have huge implications for economic behaviour.12 For more detailed analyses of the methodological differences between experimental psychology and experimental economics, see Camerer (1995), Sonnemans (2007), Ariely and Norton (2007) and Zwick et al (1999).

At the crux of this methodological debate is the fact that the raison d'être of experimental economics was, for many decades, to test highly axiomatised economic theories. Not much was asked by way of speculation regarding deviations from normative behaviour. According to Roth (1998), "Economists have traditionally avoided explaining behaviour as less than rational for fear of developing many fragmented theories of mistakes". Thus, the appeal of utility maximisation and the Nash equilibrium is that they provide useful approximations of great generality, even if they do not precisely model human behaviour (Roth 1996).

2.3 Individual Decision-Making Experiments in Behavioural Economics and Psychology

This section highlights some important work on individual decision-making from the psychology literature. A more detailed account of the literature, which is beyond the scope of this brief overview, can be found in Camerer (1995) and Kahneman and Tversky (2000). The study of systematic biases observed in people's behaviour when confronted with certain stimuli has formed a large part of the literature in this sub-area. Some of these biases may be a result of employing heuristics, owing to limitations on computational ability or, more generally, on judgment and choice. Heuristics allow individuals to make decisions coherently and mostly correctly, but they can lead to systematic violations that can be studied to yield better theories with greater external validity.

Kahneman and Tversky (1979) performed experiments in which lab subjects playing Allais tasks displayed behaviour that violated the Savage-Bayes approach to decision-making, and they explained the deviation from the EUT by an alternative "behavioural" theory they called the Prospect Theory.13 In this they highlighted certain biases in the behaviour of decision-makers. With respect to processing probabilities there was a "certainty effect"; that is, people overweighted certainty. Further, when people played compound lotteries, often to simplify the set of alternatives, they ignored the components that were common to the lotteries, focusing only on the components in which they differed (Tversky 1972). Their theory also took into account their observation from the experiments that subjects displayed a "reflection effect"; that is, risk aversion over gain prizes and risk preference over loss prizes. To explain these deviations they proposed a Value Function (analogous to the EU function) over the changes in wealth from a reference point rather than wealth in the final state (deemed "prospects"). This Value Function was concave over gains and convex over losses, allowing for the reflection effect. Finally, they defined a probability weighting function, which was subjectively inferred from the choice between prospects.
This weighting function need not be a probability density, allowing it to exhibit sub- or super-additivity. The combination of a reflected Value Function over changes in wealth and an appropriate probability weighting function allowed the authors to explain their observed deviations from the EUT. This behavioural approach to preferences and utility spawned a large literature that applied the ideas in Kahneman and Tversky (1979). Tversky and Kahneman (1992) further refined the theory by applying probability weighting to cumulative distribution functions (rather than to probability densities as in the original theory). This allowed the Prospect Theory to be field tested on problems like the equity premium puzzle (Benartzi and Thaler 1995) and the status quo bias (Kahneman, Knetsch and Thaler 1991). See Camerer (2000) for a survey of the applications of the Prospect Theory.
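
For concreteness, the functional forms most commonly used in applied work following Tversky and Kahneman (1992) take roughly the following shape (a typical parameterisation, quoted here only as an illustration; the exact forms and estimates vary across studies):

```latex
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0\\[2pt]
-\lambda(-x)^{\beta}, & x < 0
\end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}} ,
```

with α, β < 1 generating concavity over gains and convexity over losses (the reflection effect), λ > 1 capturing loss aversion, and γ < 1 producing an inverse-S shaped weighting function that overweights small probabilities and underweights moderate to large ones.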

The work of Kahneman and Tversky (1979) and Tversky and Kahneman (1992) was widely read by neoclassical economists because the authors found an innovative way of injecting behavioural assumptions, by including free parameters and more flexible functional forms within a framework that was close to the EUT. There are, however, other studies in behavioural economics that examine behaviour well outside the realm of economic theory. These studies of "irrationalities" or biases (measured from some normative standard) deal with motivations that range from boredom, fear and sexual arousal to procrastination and dishonesty. These non-measurable or difficult-to-measure aspects of behaviour significantly affect individual decision-making but have no structural theory within economics. Systematic biases are non-normative behaviours or "mistakes" that humans are observed to commit repeatedly even though they tend to engender outcomes that are far removed from the behavioural norm.14 These cognitive/judgmental biases are strong enough that individuals are unable to change and display normative behaviour even when they have a history of earlier successes and failures resulting from the actions available to them. Ariely (2008) labels systematic bias "predictable irrationality". It is felt that understanding and documenting these biases will ultimately lead to a theory that integrates this non-optimal behaviour into a more sophisticated theory of choice.

Systematic biases have been investigated by many studies in psychology. Studies on calibration measure human subjects' judgments of empirically observed probabilities (for example, precipitation, baseball and basketball results, and so on). The pervasive finding is that people are "locally" overconfident but "globally" more accurate in their predictions, whether in calibration studies, where they estimate a probability, or when stating confidence intervals; experts such as weather forecasters, by contrast, produce estimates that are generally better calibrated (Lichtenstein, Fischhoff and Phillips 1982).15 In a more recent experiment documenting systematic bias, Berg et al (2007) surveyed economists at American Economic Association meetings on their beliefs about the prostate specific antigen (PSA) test for prostate cancer. The economists were also asked to estimate two conditional probabilities from given priors to test their degree of logical consistency.16 They obtained the interesting result that there was very weak correlation between accuracy of beliefs and logical consistency, and that social influences were better predictors of PSA testing behaviour
than beliefs about PSA testing, beliefs about cancer risks and logical consistency. Other biases in the Bayesian calculation have also been documented such as the underestimation of base rates (Grether 1991), and “conservatism” or underweighting of likelihood information (Edwards 1968). Humans also have misperceptions of randomness, where they gather little data and overgeneralise from small samples to distributions (Tversky and Kahneman 1971). A different misperception of randomness is an “illusion of control”, where people act as if tasks that involve only chance also require a certain measure of skill (Langer 1975).
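
A stylised numerical example (the numbers are hypothetical and not drawn from the studies cited above) shows how neglecting base rates distorts such judgments. Suppose a condition has a prevalence of 1 per cent, a test detects it with probability 0.9, and the false positive rate is 0.09. Bayes' rule then gives

```latex
P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
\;=\; \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \;\approx\; 0.09 ,
```

so the posterior probability of having the condition after a positive result is only about 9 per cent, whereas subjects who ignore the 1 per cent base rate typically report figures closer to the test's 90 per cent sensitivity.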

In summary, the research on systematic biases suggests a variety of heuristics that people use to make judgments in complex situations, often constrained by the subjective ideas they hold about a stochastic problem and by the limits of memory. This suggests that heuristics are a sort of "second best" for solving an optimisation problem, necessarily yielding a suboptimal solution vis-à-vis the full-blown solution obtained by taking into account all the information that pertains to a choice problem. Yet heuristics have persisted in populations over evolutionary time and continue to be used even when the cost of gathering extra information is negligible. According to Simon (1956, 1991), human beings display "satisficing" rather than maximising behaviour; that is, they set their aspiration levels, inspect the set of available alternatives, and select an alternative that meets the aspiration level, terminating the search there. How can such a process generate outcomes that are accurate? According to Gigerenzer and Brighton (2009), "ecological rationality" is the major driver of whether a heuristic succeeds or fails; certain types of heuristics are better adapted to certain environments than others. To study ecological rationality, Gigerenzer and Brighton (2009) decomposed total prediction error into bias (defined as the difference between the underlying "true" population function and the mean function generated from the data), variance (the sum of the squared differences between the observed individual functions and the mean function) and random noise. A "bias-variance dilemma" arises because over-fitting (say, polynomials of higher degree) to small samples makes the models more flexible, and hence more susceptible to capturing not just the underlying pattern but also unsystematic patterns such as noise. The idea that less is more, that is, that more information and computation (and hence more effort) can be counterproductive to obtaining accurate predictions, was explored by Payne et al (1993) and documented in Gigerenzer and Selten (2001). Thus, in certain environments, biased minds may make better judgments using a smaller set of determining variables and a "fast and frugal" computation (Gigerenzer and Goldstein 1996).
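
The bias-variance logic can be illustrated with a small simulation, sketched below under assumptions chosen for this survey (a sine "population" function and ordinary least-squares polynomial fits); it is not the actual procedure used by Gigerenzer and Brighton (2009), only a minimal demonstration that the most flexible model need not predict best out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # the underlying "population" function the analyst is trying to recover
    return np.sin(2 * np.pi * x)

def out_of_sample_error(degree, n_train=12, n_test=200, noise=0.3, n_sims=500):
    """Average squared prediction error of a degree-`degree` polynomial
    fitted repeatedly to small noisy training samples."""
    x_test = np.linspace(0, 1, n_test)
    errors = []
    for _ in range(n_sims):
        x_train = rng.uniform(0, 1, n_train)
        y_train = true_f(x_train) + rng.normal(0, noise, n_train)
        coefs = np.polyfit(x_train, y_train, degree)  # least-squares fit
        y_pred = np.polyval(coefs, x_test)
        errors.append(np.mean((y_pred - true_f(x_test)) ** 2))
    return np.mean(errors)

for d in (1, 3, 9):
    print(f"degree {d}: mean out-of-sample squared error = {out_of_sample_error(d):.3f}")
# The degree-9 fit has the lowest bias but, on samples of 12 noisy points,
# typically the highest out-of-sample error: its flexibility lets it fit the
# noise as well as the signal, which is the "bias-variance dilemma" above.
```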

2.4 Individual Decision-Making Experiments in Economics

Individual decision-making experiments have a shorter history in economics than in psychology. In the realm of experimental decision-making under uncertainty, there is a large overlap between the literature published in psychology journals and that in refereed economics journals. One very important strand of the literature is that emanating from Kahneman and Tversky (1979); behavioural benchmarks from the Cumulative Prospect Theory have over the last decade been adopted to explore laboratory and field behaviour in economics experiments. In general, though, studies of systematic bias and heuristics (explored in the last section) are still uncommon in economics journals that publish experiments.17

The first experimental tests of the EU model came very soon after von Neumann and Morgenstern (1944) and were done by Preston and Baratta (1948) and Mosteller and Nogee (1951), who attempted to estimate certainty equivalents of gambles by asking subjects to "bid for" or "bet on" binary lotteries. These and other studies, such as Davidson et al (1957) and Tversky (1967), operationalised the separation of utility and probability weights in the EU model. The EU model and its axioms survived initial testing, but two paradoxes posited by Allais (1952) and Ellsberg (1961) affected the theory in a big way over time. Both used simple pairwise lotteries to show how humans violate the "sure-thing principle", or the independence axiom of the EUT.18 We look at these behavioural violations of the EUT in a little more detail. Table 1 details a set of lotteries that display Allais' "common consequence effect".

Table 1: The Allais Paradox

                     Lottery A                               Lottery B
           A1                  A2                  B1                  B2
 Outcome      Prob     Outcome      Prob    Outcome      Prob    Outcome      Prob
 1,000,000    1        1,000,000    0.89    0            0.89    0            0.90
                       0            0.01    1,000,000    0.11    5,000,000    0.10
                       5,000,000    0.10

The frequent choice pattern that emerged from experimental observation is A1 ≻ A2 and B2 ≻ B1 (where ≻ denotes "is chosen over"). With a little algebra it is easy to show that these choices violate the EUT. The first Allais replications by MacCrimmon (1965) and Slovic and Tversky (1974) found about 40% and 60% EUT violations respectively. Subsequently MacCrimmon and Larsson (1979) established this bias more robustly. Kahneman and Tversky (1979) posited a certainty effect and sub-additive probability weights to provide an explanation for this paradox.19
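
Making the "little algebra" explicit (writing u for the utility function):

```latex
\text{A1} \succ \text{A2}: \quad
u(1{,}000{,}000) > 0.89\,u(1{,}000{,}000) + 0.01\,u(0) + 0.10\,u(5{,}000{,}000)
\;\Longrightarrow\; 0.11\,u(1{,}000{,}000) > 0.01\,u(0) + 0.10\,u(5{,}000{,}000);

\text{B2} \succ \text{B1}: \quad
0.10\,u(5{,}000{,}000) + 0.90\,u(0) > 0.11\,u(1{,}000{,}000) + 0.89\,u(0)
\;\Longrightarrow\; 0.10\,u(5{,}000{,}000) + 0.01\,u(0) > 0.11\,u(1{,}000{,}000).
```

The two implied inequalities contradict each other, so no expected utility function can rationalise the modal choice pattern; this is the sense in which the common consequence effect is a paradox for the EUT.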

Knight (1921) distinguished between measurable uncertainty, or risk, which may be represented by precise odds or probabilities of occurrence, and immeasurable uncertainty, which cannot be thus represented. In situations of immeasurable uncertainty, where individuals are partly ignorant of the precise statistical frequencies of events that are relevant to their decisions, casual empiricism suggests that people tend to impute numerical probabilities related to their beliefs regarding the likelihood of particular outcomes (Ellsberg 1961). Thus, this ambiguity has been referred to as “uncertainty about uncertainties” (Einhorn and Hogarth 1986).

Let us explore Ellsberg's paradox with a version of his two urn example. There are two urns, A and B. Urn A contains 50 red and
50 yellow balls. Urn B contains 100 balls that are an unknown mixture of red and yellow balls and nothing else. Using the urns, you have to bet on a colour and draw a ball to match it (and win Rs 100); otherwise you get nothing. Experiments (including Ellsberg's original study) repeatedly revealed that decision-makers were indifferent between betting on red and yellow in urn A (p(R) = p(Y) = 0.5). They were also indifferent between betting on red or yellow in urn B, implying subjective weights q(R) = q(Y). But when asked to choose which urn to base their bet on, many of them went for urn A. Assume that a decision-maker bets on red and prefers urn A. Then it must be that p(R) > q(R), so that q(R) < 0.5 and hence q(R) + q(Y) < 1. If, on the other hand, he or she prefers urn B, then it must be that p(R) < q(R), and hence q(R) + q(Y) > 1. Thus, in the presence of ambiguity it may become impossible to obtain an additive probability representation, making decisions inconsistent with the subjective expected utility (SEU) theory. Theories of capacities such as those of Choquet (1955), Schmeidler (1989) and Fishburn (1993) extended the EUT framework to non-additive priors. They, however, took a rather pessimistic stand on the way a decision-maker subjectively evaluated uncertainty.20 More recently, Klibanoff et al (2005) provided a theory of "smooth" ambiguity aversion in which they clearly distinguished between three separate aspects of the decision problem in the face of uncertainty – the attitude to risk (the Arrow-Pratt measure); the attitude to ambiguity (captured by the curvature of a second function defined over expected utilities); and additive subjective beliefs regarding the likelihood of different states under uncertainty. Experimental tests of ambiguity aversion include Einhorn and Hogarth (1986), Fox and Tversky (1995), Becker and Brownson (1964), and Hogarth and Kunreuther (1985); they showed that decision-making under uncertainty might be governed by individual attitudes to ambiguity and by the subjective ways in which decision-makers inferred probabilities in situations in which there was limited or no information regarding the underlying probability distributions. More recent experimental studies of the Ellsberg paradox include Halevy (2007) and Chakravarty and Roy (2009); the latter operationalised Klibanoff et al's model to obtain an estimable ambiguity aversion parameter from subject responses.
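
To make the three-way separation in Klibanoff et al (2005) concrete, their representation of preferences over an act f is usually written schematically in the following form (stated loosely here; the original paper gives the exact conditions):

```latex
V(f) \;=\; \int_{\Delta} \phi\!\left( \int_{S} u\bigl(f(s)\bigr)\, d\pi(s) \right) d\mu(\pi),
```

where the curvature of u captures risk attitude (the Arrow-Pratt channel), the second-order belief μ over candidate priors π captures the decision-maker's subjective uncertainty about the true probability law, and the concavity of φ, defined over expected utilities, captures ambiguity aversion.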

Measuring Risk Attitudes

An important strand of the experimental literature on decision-making under uncertainty is in the area of measuring risk attitudes using pair-wise lottery choices. Risk attitudes elicited from such laboratory/survey procedures can be used to classify populations, predict the possible outcomes arising from various policies, compare different institutions with respect to how the agents running them behave when confronted with risky choices, and so on. A crucial issue in this calibration exercise is the effect of incentives. Binswanger (1980) used the ordered lottery selection (OLS) method on lottery choice data from a field experiment in rural Punjab to draw the conclusion that agents (who were risk averse on average) became more risk averse as the prizes in the lotteries were increased.21,22 A controversial paper by Rabin and Thaler (2001) posited the implausibility of the EUT by showing that an absurdly high degree of large-stakes risk aversion would have to coexist with a modest amount of small-stakes
risk aversion. However, Cox and Sadiraj (2006) demonstrated that this calibration critique applied to the EU of terminal wealth model (used by Rabin and Thaler 2001) but not to the EU of income model, in which a moderate amount of small-stakes risk aversion does not necessarily imply huge risk aversion over large stakes.23
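
Schematically, the calibration argument behind the Rabin and Thaler critique takes the following form (stated qualitatively, without the specific dollar figures quoted in that literature):

```latex
\text{Under EU over terminal wealth: reject } \bigl(-\ell, \tfrac{1}{2};\ +g, \tfrac{1}{2}\bigr)
\text{ with } g > \ell \text{ at every wealth level}
\;\Longrightarrow\;
\text{reject } \bigl(-L, \tfrac{1}{2};\ +G, \tfrac{1}{2}\bigr)
\text{ for some moderate } L \text{ and every } G.
```

Cox and Sadiraj's point is that the same small-stakes choices, modelled as EU over income rather than over terminal wealth, carry no such extreme large-stakes implication.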

Kachelmeier and Shehata (1992) employed the Becker-DeGroot-Marschak (BDM, 1964) mechanism in a high-stakes field experiment at Beijing University.24 Once again, as in Binswanger (1980), there was a marked trend from risk-loving or risk-neutral preferences towards risk aversion as pay-offs increased. Holt and Laury (2002) used a multiple price list (MPL) instrument to study attitudes towards risk as incentives (in the form of lottery prizes) went up from hypothetical prizes to $346.50. In the MPL design each decision-maker made 10 decisions, each offering a choice between a "safe" (lower spread) lottery and a "risky" (higher spread) lottery. In the first decision the safe lottery dominated the risky one in terms of expected value. The probabilities were then serially altered so that the expected value of the risky lottery rose relative to that of the safe lottery, eventually overtaking it. Beyond that crossover point a risk-neutral individual would switch to the risky lottery. Individuals who stayed with the safe lottery beyond this point could be calibrated as risk averse; that is, they were trading off some return to be safe. Individuals who switched to the risky lottery while it still had the lower expected value could likewise be calibrated as risk loving. The subjects showed increasing degrees of risk aversion in the high-pay-off treatments compared to the low-pay-off treatments. However, this effect was not observed in the hypothetical treatments, casting doubt on the validity of using hypothetical questionnaires to elicit preferences over high stakes.25
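
The mechanics of the MPL instrument are easy to see in a small sketch. The sketch below uses the pay-off magnitudes usually quoted for the low-stakes Holt-Laury baseline (safe lottery paying $2.00 or $1.60, risky lottery paying $3.85 or $0.10); these numbers are assumed here only for illustration.

```python
# A sketch of the multiple price list (MPL) logic, using pay-offs assumed to
# match the low-stakes Holt-Laury baseline for illustration.
SAFE_HIGH, SAFE_LOW = 2.00, 1.60
RISKY_HIGH, RISKY_LOW = 3.85, 0.10

def rows():
    """The ten decision rows: probability of the high prize and expected values."""
    table = []
    for k in range(1, 11):
        p = k / 10  # probability of the high prize in row k
        ev_safe = p * SAFE_HIGH + (1 - p) * SAFE_LOW
        ev_risky = p * RISKY_HIGH + (1 - p) * RISKY_LOW
        table.append((k, p, ev_safe, ev_risky))
    return table

def classify(switch_row):
    """Crude reading of risk attitude from the first row at which a subject
    chooses the risky lottery."""
    risk_neutral_row = next(k for k, _, ev_s, ev_r in rows() if ev_r > ev_s)
    if switch_row == risk_neutral_row:
        return "approximately risk neutral"
    return "risk averse" if switch_row > risk_neutral_row else "risk loving"

for k, p, ev_s, ev_r in rows():
    print(f"row {k:2d}: p(high) = {p:.1f}  EV(safe) = {ev_s:.3f}  EV(risky) = {ev_r:.3f}")

# A subject who switches only at row 7 gives up expected value to stay safe.
print(classify(switch_row=7))
```

With these assumed pay-offs the expected values cross between rows 4 and 5, so a risk-neutral subject switches at row 5; switching later (earlier) is read as risk aversion (risk seeking).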

Hey and Orme (1994) developed an instrument using pair-wise lotteries in which each subject's choices over 100 such lottery pairs were used to estimate individual expected utility functionals. Because of the repeated within-subjects design, their data could be used to estimate utility functions for each individual rather than relying on data pooled over individuals and on the assumption that unobserved heterogeneity (after conditioning on any collected individual characteristics, such as sex, race and income) was random. The various generalisations of the EU preference functional that were estimated could be compared using the Akaike information criterion (AIC), and likelihood ratio tests were then conducted to investigate the statistical superiority of the various generalisations. An important methodological contribution of Hey and Orme was to demonstrate that different choice behaviours could coexist in the same population. Harrison and Rutstrom (2009) and Harrison et al (2009) investigated whether, within an experimental cohort, different groups of decision-makers were characterised by different competing models of individual choice (which might imply different risk attitudes). They used a finite mixture model and let the data determine the fraction of the subjects that each model explained.26
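
Schematically, the finite mixture approach estimates a grand likelihood of the following form (a stylised statement of the general idea rather than the exact specification used in the papers cited):

```latex
\ln L(\theta_{EUT}, \theta_{PT}, \pi) \;=\;
\sum_{i} \ln\Bigl[\, \pi\,\ell_i^{EUT}(\theta_{EUT}) \;+\; (1-\pi)\,\ell_i^{PT}(\theta_{PT}) \,\Bigr],
```

where ℓ_i denotes the likelihood of observation i's choices under each candidate model and the mixing probability π, estimated jointly with the preference parameters, gives the fraction of the data attributed to each decision model.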

An area of experimental economics that has perhaps not received as much attention as it should is that of preference reversals. Preference reversals refer to inconsistent choices made at different times by a decision-maker that make his or her behaviour inconsistent with preference theories. An example by
Grether and Plott (1979) can be described as follows. There were two lotteries, A and B. A (deemed the P or probability bet) had a very high chance of realising $4 and a negligible chance of realising the low prize of $0. On the other hand, B (the $ bet) had two prizes, $0 and $16, where the lower prize had a somewhat higher probability of realisation than the higher prize. Earlier psychology experiments such as Lichtenstein and Slovic (1971, 1973) had found that individuals preferred P to $ but put a higher value on $. This behaviour was not a violation of an axiom of the EUT like the Allais or Ellsberg paradoxes, but a more serious problem; that is, the preference measured one way was the reverse of the preference measured in a different but theoretically compatible way. If present, it negated the idea of any form of optimisation, which took as a primitive the assumption that an individual would place a higher reservation value on the object he or she preferred. Grether and Plott's (1979) experiments (more suited to an economics audience vis-à-vis the earlier psychology studies) found this inconsistency in laboratory subjects and concluded that these "exceptions" to preference theory existed and did not negate it, though theorists needed to study them to design theories that took these inconsistencies into account. What were the most likely factors that caused these inconsistencies? According to Loomes et al (2002), the explanations of preference reversals that many decision theorists found most acceptable were of the kind suggested by Holt (1986), Karni and Safra (1987), and Segal (1988). Although they required the relaxation of the independence axiom (or, in Segal's case, the reduction principle for compound lotteries), they maintained two assumptions that many decision theorists regarded as much more basic – that individuals had well-ordered preferences (transitivity), and that they had sufficiently good knowledge of these preferences to operate consistently across a whole range of decision tasks, irrespective of the particular ways in which problems might be framed or responses elicited (description invariance). However, as in Loomes (1990), much of the evidence pointed in the opposite direction and suggested that preference reversals represented intransitivity, or non-invariance, or some combination of the two.

Description invariance, or simply invariance, is the assumption that an agent's choice from a feasible set of alternatives should be unaffected by any re-description of it that leaves all objective characteristics unchanged. While this principle has considerable normative and theoretical appeal, there is evidence that it fails descriptively in simple laboratory settings; that is, choices do depend on how a task is framed. The "Asian Disease Problem" of Tversky and Kahneman (1981) is a well-known example of a "framing problem": describing a choice between medical programmes in terms of lives saved or lives lost led to dramatically different answers, although the problems were logically equivalent. See Levin et al (1998) for an overview and classification of framing effects found in numerous laboratory and survey studies, both in the context of individual decision-making and in game theory experiments. Gaechter et al (2009) extended this literature by testing for the existence of a framing effect within a natural field experiment.27 In their experiment, the participants were junior and senior experimental researchers (mostly economists) who registered for the 2006 Economic Science Association (ESA) meeting in Nottingham, UK.28 The participants were divided into two groups and sent an acceptance email that was identical except for one line. That line concerned the early and late conference registration fees (the latter exceeding the former). The first group got the "discount" frame; that is, registering earlier than a certain date earned a participant a discount. The second group got the "penalty" frame; that is, registering later than that same date attracted a penalty for late registration. Of course, both frames objectively described the same situation, albeit in a positive manner to the first group and in a negative manner to the second. The results showed that though overall there was no statistically significant difference in early registrations between the subjects in the discount and penalty frames, the subgroup of junior economists (mainly PhD students) who got the penalty frame had a significantly higher rate of early registration than those who got the discount frame.29

Individual decision-making over time is another area that has not gained much prominence in experimental economics, largely because subjects cannot always be paid immediately. Inter-temporal choice experiments have an inbuilt confound: subjects may not trust the experimenter to pay them, especially when the duration between the decision tasks and payment gets longer. Some studies, such as Loewenstein (1988), have found similar behaviour in paid experiments and in hypothetical-choice experiments, which sits oddly with the basic assumption in experimental economics that monetary incentives drive behaviour. The other interesting aspect of inter-temporal choice experiments is the discovery of several systematic violations of economic theory. First, people display hyperbolic discounting; that is, valuations fall rapidly for small delays in consumption, but then fall slowly for longer delay periods. This violates the standard idea of exponential discounting, in which valuation falls by a constant proportion per unit of delay, for all lengths of delay. Second, discount rates are larger for smaller amounts of income or consumption and are larger for gains than for losses of equal magnitude. Third, people demand more to delay consumption than they will pay to speed it up. Loewenstein and Prelec (1992) proposed a generalisation of discounted utility, using a value function akin to the one in the Prospect Theory, that was able to explain all these anomalies.
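
The contrast between the discounting schemes mentioned above can be stated compactly using standard textbook discount functions (quoted here only for illustration):

```latex
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + kt}, \qquad
D_{\beta\delta}(t) = \begin{cases} 1, & t = 0\\ \beta\,\delta^{t}, & t \ge 1. \end{cases}
```

Under exponential discounting the ratio D(t+1)/D(t) = δ is constant, so preferences between two dated rewards do not change when both are delayed by the same amount; under the hyperbolic (or quasi-hyperbolic) forms that ratio rises with t, producing exactly the pattern described above: steep devaluation over short delays, much gentler devaluation over long ones, and the resulting time inconsistency of plans.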

2.5 New Developments and the Way Forward

Today, almost half a century after the first individual choice experiments were performed, we stand at an interesting juncture in the field where the desire to test (sometimes narrowly axiomatised)
theory has given way to attempts to understand behaviour and its underpinnings. This has led to economists collaborating with researchers from other fields in the basic and social sciences such as cognitive psychology, evolutionary biology and sociology/anthropology. Henrich et al (2001) was the first major study to investigate the impact of demographics on human cooperation using the ultimatum game.30 Their results showed that, far from the "rational-actor" framework of the canonical microeconomic model, people's cooperative behaviour was not exogenous and critically depended on the economic and social realities of everyday life. The recognition that culture and demographics matter in determining human choice is here to stay in economics, and several studies after Henrich et al (2001), such as Kurzban and Houser (2001), Harrison et al (2002), Sosis and Ruffle (2003), Cardenas and Ostrom (2004) and Andersen et al (2008), investigated the effect of culture and individual (non-economic) characteristics on economic choice behaviour. Not all of these were individual choice experiments, but these studies and others of their type set a precedent for the way behaviour is studied in experimental economics. An important consequence of the diversification of the subject pool brought about by these field experiments is the comparison now starting to be made between results obtained from decades of laboratory experiments in the behavioural sciences, which involved primarily undergraduate students from industrial nations, and the newer results from societies in developing nations. Henrich et al (2010) showed that behaviours in a variety of decision-making situations differed significantly in this larger slice of humanity from those displayed by subjects from what they referred to as the "weird" societies.31 They concluded that members of weird societies were among the least representative populations one could find for generalising about human behaviour.

The prevailing atmosphere of interdisciplinarity has also led to collaborations between economists and clinical neurologists, leading to the sub-field of neuroeconomics, which attempts to find the roots of behaviour as reflected in the working of neural substrates. Both game and decision theory problems have been investigated by projects in neuroeconomics that involve social scientists as well as clinicians familiar with the working of functional magnetic resonance imaging (fMRI) machines. The idea is simple – put subjects in scanners, give them tasks to do and observe which brain regions light up. McCabe et al (2001) was the first major research work in neuroeconomics and theorised that mentalising was important in games involving trust and cooperation.32 They found that players who were more trusting and cooperative showed more brain activity in Brodmann area 10 (thought to be the locus of mentalising) and more activity in the limbic system, which processes emotions. In a Smith et al (2002) experiment, pay-offs and outcomes were manipulated independently during choice tasks in the form of gambles (involving risk or ambiguity) as brain activity was measured with positron emission tomography (PET) scans. Their analyses indicated that the interaction between belief structure (whether the prospect is ambiguous or risky) and pay-off structure (whether it is a gain frame or a loss frame) shaped the distribution of brain activity during choice.
Accordingly, there were two disparate, but functionally integrated, choice systems with sensitivity to loss: a neocortical dorsomedial system was related to loss processing when evaluating risky gambles, while a more primitive ventromedial system was related to processing other stimuli. See Camerer et al (2005) for a detailed survey of neuroeconomics studies and their impact on mainstream economics. Though neuroeconomics claims that many fundamental insights in economics can be generated from these imaging studies, Harrison (2008) and Rubinstein (2008) criticised the sub-field for adding no fundamental insight to our understanding of how economic decisions are made, and referred to it as a faddist technological gimmick attempting to provide "hard" evidence for violations of normative behaviour. In their view, the designs were weak, the results inconclusive and the insights, if any, far from reshaping the way we think about economics. The crux of the problem was presented in Rubinstein (2008), who argued that even if we knew the exact centre of the brain that engaged (and the extent to which it engaged) when we performed certain activities, it was unclear how that would help us design mechanisms or devise strategies (short of surgical intervention) to help humans make better decisions. Further, unless the available imaging techniques allowed us to monitor the brain activity of all humans in real time (a proposition from sci-fi hell), it would be really difficult to use this information for anything meaningful.

Conclusions

In conclusion, the recognition that behaviour as modelled in economics cannot be treated independently of human behaviour as observed in studies in psychology, anthropology or even medical science has greatly augmented the breadth and depth of experimental work in economics. This has also spurred numerous interdisciplinary collaborations, some of which have been more fruitful than others, but the writing on the wall is clear – it is no longer possible to treat economic problems in individual choice as separate from those studied in other disciplines in the social and biological sciences. A healthy concomitant of this is the relatively smaller weight put on narrow results arising from the specific parameterisations of problems studied in particular research programmes. Behavioural researchers today seem more interested in the direction of results than in the magnitude of their divergence from some field-specific theoretical norm. This is inevitable because we do ultimately need to relate our results to those from other disciplines that parameterise the same problem differently. Thus, it is becoming increasingly clear that numerical averages of behaviour alone help us very little in generating insights pertaining to populations. As researchers, we have to spend some time connecting the dots and synthesising observations from two or three fields in an attempt to gain insight into the behaviour being explored.

This section has provided an overview of important experimental results related to individual decision-making. There is also a substantial body of work applying experiments to finance and macroeconomics; the following section surveys the important results in those areas.
