
Article Information

  • Title: Ambiguous solicitation: ambiguous prescription.
  • Authors: Gazzale, Robert; Jamison, Julian; Karlan, Alexander
  • Journal: Economic Inquiry
  • Print ISSN: 0095-2583
  • Year: 2013
  • Issue: January
  • Language: English
  • Publisher: Western Economic Association International
  • Keywords: Economic research; Labor market; Wages; Wages and salaries

Ambiguous solicitation: ambiguous prescription.


Gazzale, Robert; Jamison, Julian; Karlan, Alexander, et al.


I. INTRODUCTION

Sample selection issues are relevant for any empirical exercise with human subjects. We study this problem directly, in the context of laboratory experiments in economics. However, the issue is at least as salient for "field experiments" (see Harrison and List 2004 for an overview). Among other issues, field experimenters still need to recruit their subjects, and thus there is the possibility of selection bias. Indeed, no sample is likely to be fully representative; Gronau (1974) is an early paper that worries about just such an effect in the context of wage selectivity in labor markets. In the conclusion we discuss further the relevance for both field experiments and for fully naturally occurring observational data.

To directly assess such effects, we hypothesize in advance that a particular recruitment procedure will affect the composition of the subjects who show up to participate. We use standard laboratory protocols for comparability, but the implications are similar for either the lab or the field. In particular, we vary the amount of information (about the task to be performed and/or the expected payment) revealed at the time of recruitment. This is a dimension that varies in any case, but that is not often explicitly considered or controlled for. It also has a natural theoretical link to ambiguity aversion: an aversion to uncertainty over states of the world about which the probabilities are unknown. (1) We hypothesize that potential subjects who are more ambiguity averse will be less likely to choose to participate if they have less information about their possible outcomes. Although we focus on this single aspect, we stress that our concern is broader. Selection biases are likely to be present in almost all situations, along a variety of dimensions, and by definition they are unusually difficult to test for and to control for.

To test for this effect here, we begin by inducing a representative sample of undergraduates, namely almost all students in several pre-existing groups, to voluntarily participate in the first phase of our experiment in which we measure ambiguity preferences (specific procedures are described in Section II). This is by no means representative of the population at large, but if anything it is more homogeneous--making it more difficult for us to observe selection effects within that group. In fact, even in this case, we do find a significant selection effect when those same students are invited to participate in a follow-up experiment via a randomly varied recruitment e-mail. In particular, none of the e-mails that we used successfully led to the same underlying distribution of types as existed in the base population (sample frame).

The general issue of potential bias both in subject pools and in subject behaviors has been considered by experimental psychologists for many years (Orne 1962; Rosenthal and Rosnow 1973). Note that there are two distinct considerations: Who volunteers to participate in an experiment to begin with? And does their behavior change relative to other settings? The latter effect is sometimes referred to as a demand characteristic, with many studies finding that subjects appear to conform their behavior to that which is "demanded" by the researcher. But of course, without good evidence as to what the baseline population looks like, it is difficult or impossible to separate these effects. Experimental economics may suffer slightly less from both effects: the first because there is always payment for participating and the second because often we have been interested not in individual differences but rather in comparing institutions or testing theories that are supposed to apply to everyone equally. Even within economics, this potential problem was discussed quite early (e.g., Kagel, Battalio, and Walker 1979), but it has received little attention.

As it matures, however, experimental economics has become increasingly interested in behavior differences among groups, and here the selection effects are more acute. For instance, there are (or have appeared to be) robust gender differences in a variety of behaviors. One of these is risk aversion, and in particular bidding behavior in first-price auctions. Women bid higher than men do, which is less risky, and therefore often earn less money. Chen, Katuscak, and Ozdenoren (2009) replicate this finding, but then show that, if one controls for the stage of the women's menstrual cycle, the differences disappear. There is an obvious selection story to back this up: nonmenstruating women have higher estrogen levels, leading them to be both more likely to participate in an experiment in the first place (indicative of pro-social or affiliative behavior) and to be less "aggressive" once they do.

Of course this is not proof of a selection effect, but by definition it is difficult to test that portion of the population that tends not to participate in experiments. One indirect approach is to look at sorting behavior among subjects who have already agreed to participate in general but who can endogenously choose what task to perform (e.g., what game to play, or indeed whether or not to play a game at all). A number of recent papers have explored this issue and found significant differences in behavior between those who were assigned to a treatment versus those who chose into it. (2) Some of these papers are also able to relate individual differences (e.g., overconfidence or risk aversion) to the choice of treatment, confirming the idea that underlying preferences affect not only people's behavior, holding the environment fixed, but also what environments people end up in. In particular, two papers have looked at endogenous entry into auctions: Reiley (2005) manipulates reserve prices in a field setting; Palfrey and Pevnitskaya (2008) study the link between risk tolerance, auction participation, and bidding behavior. All of these clearly have implications for the interpretation of any experimental results used to test for differences involving exactly those underlying preferences.

Three recent papers directly address the issue of determining which subjects actually physically show up at an experiment, although of course none of them can fully compare with the (unknowable) general population--and neither can we. The first to do this is Harrison, Lau, and Rutstrom (2009), which uses a field experiment setting to look at selection effects depending on risk aversion. They find small effects overall, but a noticeable difference from the use of a guaranteed show-up fee: not surprisingly, such guarantees are more attractive to those who are more risk averse. The second is Jamison, Karlan, and Schechter (2008), which studies the effect of deception in laboratory experiments. They find that deceived women and deceived low earners are less likely to show up for a second (nominally unrelated) experiment. However, they do not focus on the specifics of the recruitment procedure itself. Finally, Malani (2008) looks at self-selection into randomized medical trials, finding a link between optimism concerning treatment efficacy (which is correlated with the treatment's effect because of unobserved individual heterogeneity) and enrollment into the trial.

II. EXPERIMENT DESIGN AND RESULTS

A. Phase 1 Design: Measuring Baseline Ambiguity Aversion

Our goal in Phase 1 was to determine the ambiguity preferences of a sample of subjects into which there would be little or no self-selection specific to our experiment. Our starting sample frame was the population from which economics experiments typically solicit: undergraduate students. We recruited subjects using two methods, both of which are particularly common in economics experiments. First, we asked students in a number of introductory economics courses to complete an ambiguity-aversion survey in the final 10 minutes of a class. Second, a researcher approached every student he/she encountered at campus libraries and asked each student to complete the same ambiguity-aversion survey. There was almost no Phase 1 self-selection (beyond taking undergraduate economics and frequenting the library): all 94 students in the selected economics classes completed the survey, and 109 out of 111 of the approached library students completed the survey. (3)

Both in the classroom and in the library, the researcher first asked students whether they were willing to complete a brief survey in which one in three students earns money based on her responses. All students agreed to participate and signed an informed consent form. The researcher handed each student brief instructions, read the instructions aloud, invited subjects to ask questions, and gave them a survey to complete. Subjects were not informed that they would receive a future solicitation to participate in a subsequent experiment.

The survey (Appendix A) contains 10 of Ellsberg's hypothetical urn gambles and collects basic demographic data. In each of five scenarios, there is a hypothetical urn containing 100 balls whose distribution of black and red balls is clearly stated (the known urn), and a second hypothetical urn also containing 100 red and black balls in total, but whose distribution of red and black balls is clearly stated as unknown (the ambiguous urn). (4) The five known urns are offered in order: 50 red and 50 black balls; 40 red and 60 black balls; 30 red and 70 black balls; 20 red and 80 black balls; and 10 red and 90 black balls. For each scenario, we presented the subject with two gambles:

(1) If we paid you $10 for pulling a red ball on your first try, would you pick from the "known" or the "ambiguous" urn?

(2) If we paid you $10 for pulling a black ball on your first try, would you pick from the "known" or the "ambiguous" urn?

Thus we presented 10 gambles to each subject, and for each gamble asked from which urn the subject would draw. Presenting each scenario twice, once for a red-ball bet and once for a black-ball bet, ruled out the possibility that a subject's choices simply reflected a preference for one color or mistrust of the administrator.
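
To make the survey structure concrete, the following minimal Python sketch (our illustration, not part of the authors' materials) enumerates the 10 gambles implied by the five known-urn compositions; the function name is ours.

```python
# Minimal sketch (our illustration, not the authors' materials) of the ten
# Ellsberg-style gambles implied by the five known-urn compositions above.
# Each scenario pairs a known urn with an ambiguous urn and asks about a
# red-ball bet and a black-ball bet.

def phase1_gambles():
    gambles = []
    for m in range(1, 6):                  # scenarios M = 1..5
        red_known = 50 - (m - 1) * 10      # 50, 40, 30, 20, 10 red balls
        black_known = 100 - red_known      # 50, 60, 70, 80, 90 black balls
        for winning_color in ("red", "black"):
            gambles.append({
                "scenario": m,
                "known_urn": {"red": red_known, "black": black_known},
                "ambiguous_urn": "composition undisclosed",
                "bet": f"$10 if the first ball drawn is {winning_color}",
            })
    return gambles

for g in phase1_gambles():
    print(g["scenario"], g["known_urn"], g["bet"])
```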

B. Phase 1 Ambiguity-Aversion Classification

On the basis of Phase 1 survey responses, we place all Phase 1 participants into one of three categories: more ambiguity averse (Table 1; 27%), not more ambiguity averse (69%), and unable to classify (4%). To classify subjects, we look at the first "known" urn distribution at which a subject chooses the ambiguous urn when betting on whether a red ball will be the first ball drawn. (5) We classify as "more ambiguity averse" those students who first choose the ambiguous urn for the red-ball bet when the known urn contains 30 or fewer red balls, and who continue to choose the ambiguous urn for the red-ball bet when the known urn contains even fewer red balls. These subjects chose the known urn for both bets when the known urn contained either 50 or 40 red balls.

Seven subjects switched from the ambiguous urn to the known urn even though the known urn in each round became worse. We presume these individuals were not paying attention or did not understand the questions well, and thus categorize them as "unable to classify" and drop them from the analysis.

We classify all other subjects as "not more ambiguity averse." Note that we do not have a category "ambiguity neutral" or "ambiguity seeking" because only nine and five subjects, respectively, would have been categorized as such. We thus categorize as "not more ambiguity averse" the combination of ambiguity seeking (five subjects), ambiguity neutral (nine subjects), and those that switched at the first round, when the odds were 40% for the known urn (122 individuals).
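
The classification rule can be summarized in a short sketch. This is our reconstruction under the assumption that each subject's five red-ball-bet choices are recorded in order of decreasing known-urn red counts; the function and labels are ours, not the authors' code.

```python
# Our reconstruction of the Phase 1 classification rule (not the authors' code).
# Input: a subject's red-ball-bet choices for the five scenarios, listed in order
# of known-urn red counts 50, 40, 30, 20, 10; each entry is "known" or "ambiguous".

RED_COUNTS = [50, 40, 30, 20, 10]

def classify(choices):
    # Subjects who switch back from the ambiguous urn to the known urn as the
    # known urn gets worse are treated as inattentive and dropped.
    seen_ambiguous = False
    for choice in choices:
        if choice == "ambiguous":
            seen_ambiguous = True
        elif seen_ambiguous:
            return "unable to classify"

    # First known-urn red count at which the subject picks the ambiguous urn.
    first_switch = next(
        (r for r, c in zip(RED_COUNTS, choices) if c == "ambiguous"), None
    )

    # Never switching, or switching only once the known urn holds 30 or fewer
    # red balls, is coded as "more ambiguity averse" (cf. Table 1).
    if first_switch is None or first_switch <= 30:
        return "more ambiguity averse"
    return "not more ambiguity averse"

print(classify(["known", "known", "ambiguous", "ambiguous", "ambiguous"]))      # more ambiguity averse
print(classify(["known", "ambiguous", "ambiguous", "ambiguous", "ambiguous"]))  # not more ambiguity averse
print(classify(["known", "ambiguous", "known", "ambiguous", "ambiguous"]))      # unable to classify
```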

In Table 2 we show, with both ordinary least squares and probit specifications, the (lack of) correlation between this classification and location of experiment, gender, year in school, and major.

C. Phase 2 Solicitation: Observing the Decision to Participate

The real "experiment" in Phase 2 is simply the decision to respond to our e-mail solicitation. Our goal is to determine whether ambiguity preferences affect the decision to participate in laboratory experiments, and then more specifically whether different e-mail solicitations affect this selection decision differentially.

We randomly assigned subjects to one of four recruitment treatments (i.e., ambiguity classification is orthogonal to treatment assignment). Treatments differ only in the amount of detail provided in the invitation e-mail. We employ a 2 x 2 design, with each respondent receiving either an ambiguous or detailed description of their task and either an ambiguous or detailed description of their payout. The "standard" e-mail sent at Williams College, and elsewhere (Davis and Holt 1992), is closest to the ambiguous task/ambiguous pay e-mail.
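
As a concrete illustration of the assignment step, the sketch below randomizes subjects into the four cells of the 2 x 2 design, stratifying on the Phase 1 ambiguity classification (the text notes later in this section that the randomization was stratified on ambiguity). All names here are hypothetical.

```python
# Illustrative sketch of assigning subjects to the four cells of the 2 x 2
# design, stratified on the Phase 1 ambiguity classification. Names and
# structure here are ours, purely for illustration.
import random

TREATMENTS = [
    ("ambiguous task", "ambiguous pay"),
    ("ambiguous task", "detailed pay"),
    ("detailed task", "ambiguous pay"),
    ("detailed task", "detailed pay"),
]

def assign_treatments(subjects, seed=0):
    """subjects: list of dicts with 'id' and an 'ambiguity_averse' flag (the stratum)."""
    rng = random.Random(seed)
    assignment = {}
    for stratum in (True, False):
        members = [s["id"] for s in subjects if s["ambiguity_averse"] == stratum]
        rng.shuffle(members)
        # Deal treatments out round-robin within each stratum so every cell
        # receives (nearly) the same share of ambiguity-averse subjects.
        for i, subject_id in enumerate(members):
            assignment[subject_id] = TREATMENTS[i % len(TREATMENTS)]
    return assignment

subjects = [{"id": n, "ambiguity_averse": (n % 4 == 0)} for n in range(20)]
print(assign_treatments(subjects))
```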

Details on each e-mail treatment are as follows (the full text is in Appendix B):

(1) Ambiguous task/ambiguous pay: "I am writing to inform you of an opportunity to participate in an Economics Department experiment on Tuesday, February 20th from 7:00 p.m. until 10:00 p.m. in Hopkins 108. You will earn either $10 or $20 by participating in this experiment and the session will last about 30 minutes." (6)

(2) Ambiguous task/detailed pay: Here, the payout section from the ambiguous e-mail is replaced by the following: "You will earn either $10 or $20 by participating in this experiment. The experiment is designed so that you have a 50% chance of earning $10 and a 50% chance of earning $20."

(3) Detailed task/ambiguous pay: Here, each participant is also informed that they will play a game in which they will decide how much of their participation fee they want to contribute to charity, and then will play games of uncertainty, choosing between known and ambiguous urns.

(4) Detailed task/detailed pay: Combines the detailed task description and the detailed pay description given above.

Appendix C provides the detail on the activities in Phase 2. We do not use any of these data in this article, because the sample size is too small for meaningful distributional analysis (to observe whether the differential selection drew in different people, which then would lead to different analytical results for the Phase 2 games themselves).

Table 3 presents the key selection results as comparison of means, and Table 4 presents them in a probit specification. First note that Table 3, Row A demonstrates orthogonality of ambiguity aversion to assignment to each e-mail treatment, and Table A1 demonstrates the same for other known demographic variables.

We have two key hypotheses:

Hypothesis 1: Those who participate in laboratory experiments do not differ with respect to ambiguity aversion from those who do not participate.

Table 3 column 1 tests this hypothesis with a mean comparison, and Table 4 column 1 tests it with a probit specification. We cannot reject the null hypothesis. Those who participate in Phase 2 are no more or less ambiguity averse than those who do not. Note that this is a pooled analysis, across all four treatment solicitation e-mails. Thus, although we cannot reject the null hypothesis, this null is under the setting of a blend of solicitation approaches. We now turn to examine heterogeneity generated by the different solicitations.

Hypothesis 2: The level of ambiguity in each solicitation does not generate differential selection on ambiguity personality characteristics with respect to who participates.

In Table 3 columns 2, 3, 5, and 6, we test this hypothesis with respect to ambiguity on both task and payout. Row A in columns 2 and 3 shows that the average ambiguity aversion for those who received either the ambiguous task or the detailed task e-mail is 28.4% in both cases (the randomization was stratified on ambiguity, hence the perfect orthogonality).

However, comparing Rows B and C shows that the ambiguous task generates differential selection toward those less ambiguity averse (p-value 0.079), and that the detailed task similarly generates reverse selection toward the more ambiguity averse (p-value 0.014). Similar tests for ambiguity on the payment, however, do not yield statistically significant differences (nor are they signed as predicted). We discuss this in the conclusion with conjectures as to why the ambiguous payout treatment did not generate differential selection, whereas the ambiguous task treatment did.
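
For readers who want to reproduce this style of test, the sketch below runs the corresponding two-by-two chi-square comparison, assuming SciPy is available. The cell counts are back-of-envelope reconstructions from the Table 3 proportions for the ambiguous task e-mail, not the authors' raw data.

```python
# Illustrative chi-square comparison of rows (B) and (C) of Table 3 for the
# ambiguous task e-mail: is the share of "more ambiguity averse" subjects the
# same among participants and non-participants? The counts are back-of-envelope
# reconstructions from the reported proportions (22 participants, roughly 3 of
# them ambiguity averse; 73 non-participants, roughly 24 averse); they are not
# the authors' raw data.
from scipy.stats import chi2_contingency

table = [[3, 19],    # participated:        averse, not averse
         [24, 49]]   # did not participate: averse, not averse

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")   # close to the reported p-value of 0.079
```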

Table 4 shows similar results, in a probit specification. Column 2 shows that the ambiguous task generates a 6.9 percentage point higher participation rate (not statistically significant) and the ambiguous payout generates a 9.4 percentage point higher participation rate (significant at 10%). Column 3 presents the key results on heterogeneity induced by the solicitation. Here, we interact the e-mail treatment with whether the individual is ambiguity averse or not. We find the same pattern as in Table 3: the ambiguous task treatment deters the ambiguity averse from participating (significant at 1%), whereas the ambiguous payout treatment does not generate differential selection patterns.
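
A minimal sketch of the kind of interacted probit reported in Table 4, column 3, using statsmodels on simulated stand-in data (variable names and the data-generating process are ours, purely for illustration):

```python
# Sketch of a Table 4, column 3 style specification: a probit of participation
# on ambiguity aversion, the two e-mail treatments, and their interactions.
# The data below are simulated stand-ins and the variable names are ours;
# controls (gender, year, major, location) are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 190
df = pd.DataFrame({
    "ambiguity_averse": rng.integers(0, 2, n),
    "ambiguous_task": rng.integers(0, 2, n),
    "ambiguous_pay": rng.integers(0, 2, n),
})
df["task_x_averse"] = df["ambiguous_task"] * df["ambiguity_averse"]
df["pay_x_averse"] = df["ambiguous_pay"] * df["ambiguity_averse"]
# Hypothetical participation decision, for illustration only.
latent = 0.3 * df["ambiguous_task"] - 0.8 * df["task_x_averse"] - 1.0
df["participated"] = ((latent + rng.normal(size=n)) > 0).astype(int)

X = sm.add_constant(df[["ambiguity_averse", "ambiguous_task", "ambiguous_pay",
                        "task_x_averse", "pay_x_averse"]])
result = sm.Probit(df["participated"], X).fit(disp=False)
print(result.get_margeff().summary())   # marginal effects, as reported in Table 4
```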

Of particular interest to the laboratory experimentalist is the extent to which the standard recruitment e-mail (ambiguous task/ambiguous pay) draws in an unbiased sample of the subject pool. Column 4 shows estimation results from a probit model where observations are limited to the ambiguous task/ambiguous pay treatment. The participation rate is almost 17 percentage points lower for the more ambiguity averse group. Therefore the standard solicitation method does not elicit a representative sample.

III. CONCLUSION

We examine a new facet of selection into laboratory experiments: ambiguity aversion. First, we find that our laboratory instrument measuring ambiguity aversion does help predict real-world behavior (the decision of whether to participate in an economics experiment). Furthermore, the choices made on this instrument cannot be predicted using subject characteristics normally collected in experiments. Second, we find that the method of solicitation generates potentially important heterogeneity with respect to ambiguity aversion in the sample frame that participates in experiments. Thus if ambiguity aversion could influence the choices participants make, experimentalists should note that the solicitation used could generate higher or lower participation rates, depending on how much information is given in the solicitation. Further research could shed light on two areas. First, with a larger sample size, further analysis could examine whether analytical results change depending on the solicitation method. Second, it would be useful to know whether other solicitation methods could generate a more representative population. For instance, paying more money could generate higher participation rates; or alternative wording, somewhere in between our ambiguous and detailed treatments, may yield better results. In our setting, it is key to note that we found no treatment, including the standard ambiguous task/ambiguous pay e-mail, that drew in a representative population.

Although we find differential selection from the ambiguous task treatment, we do not find differential selection from the ambiguous payout treatment. We have two conjectures for why this may be. First, perhaps the information was simply not ambiguous enough. We stated that subjects would earn either $10 or $20, and although this adheres fairly closely to the canonical urn question, if read quickly it could be perceived as more informative than intended. Second, it is possible that recipients of the e-mail did not trust the researcher (whereas in the urn questions, mistrust of the researcher should not confound the analysis, by design), and thus simply assumed that the e-mail really meant $10 for all but a very few who would win $20.

Nonrandom selection into experiments poses significant challenges in extrapolating findings to the real world (i.e., external validity). A researcher quantifying a preference parameter of interest (e.g., a risk-aversion coefficient) faces a baseline hurdle in that this parameter may differ between the population from which he or she is sampling and the population of interest. Biased selection into the experiment only further reduces the ability to make inferences about the population of interest. Although these selection effects are likely less important to a researcher documenting a treatment effect, they cannot be ignored. Even if the researcher documents an effect in some sample, it may not be clear who exactly is in that sample and whether the treatment effect owes its existence to (uncontrolled for) sample composition. For instance, an intervention to improve social cohesion may appear not to work only because the self-selected participants are more pro-social than typical (which is unlikely to be correlated with standard demographic characteristics) and therefore have less need of such help.

These results are important not just for laboratory experiments; similar issues apply to field experiments. We consider two types of field experiments here, using the Harrison and List (2004) taxonomy: "artifactual" or "framed" field experiments, and "natural" field experiments (or, for that matter, any observational data collection). First, with respect to subjects who are cognizant of being "researched" (artifactual or framed), the results here are potentially just as applicable in the field as in the laboratory: no method of solicitation in our experiment generated a representative sample frame. Typical recruitment methods in the field could have similar issues: those who are more social, those who are more likely to think the games could lead to NGO handouts, those who are more curious, and so forth are all more likely to participate, and the method of solicitation could exacerbate any of these issues. (7) Regarding "natural" field experiments or any surveying process, the issues raised here are also relevant, particularly when subjects are asked to select into a novel product or institution. The fundamental idea is not new: external validity. This paper sheds light on how the solicitation method can generate more (or, ideally, less) selection, which in turn influences the external validity of a study.

doi: 10.1111/j.1465-7295.2011.00383.x

APPENDIX A. PHASE 1 EXPERIMENT INSTRUCTIONS

In this experiment you will be asked to make a series of choices. In each scenario, there are two urns. Both will always contain 100 balls, each ball being either red or black. In each scenario, you will know the exact number of black and red balls in one urn, but you will not know the number of each color in the second urn, only that there are 100 balls in the second urn and every ball is either red or black. The balls are well mixed so that each individual ball is as likely to be drawn as any other.

After all questionnaires have been completed, the experimenter will select at random one-third of all questionnaires. For each questionnaire selected, the experimenter will randomly select one of the five scenarios, with each scenario as likely to be drawn as any other. The experimenter will then randomly select one of the two questions within the selected scenario, with each question as likely to be selected as the other. Finally, if your questionnaire is selected, a ball will be drawn on your behalf, with each ball as likely to be drawn as any other.

Finally, please note that there are no tricks in this experiment. Although in each scenario there is an urn for which you do not know the number of black and red balls, the number of unknown balls of each type already has been selected at random and is on file with the Williams College Department of Economics. Likewise, the pull of a ball from a chosen urn will truly be performed at random via a process overseen by the Department of Economics.

[We present five of the following questions, with M = 1,2,3,4,5.]
Decision # {M}

Urn A: 100 balls: {50-(M - 1)*10} red,
{50+(M - 1)*10} black
Urn C: 100 balls: ? red, ? black

If I were to give you $10 if you pulled a red ball on your
first try, from which urn would you choose to draw?

[] Urn A [] Urn C

If I were to give you $10 if you pulled a black ball on
your first try, from which urn would you choose to draw?

[] Urn A [] Urn C

TABLE A1
Verification of Orthogonality of Assignment to Treatments, Using Data from the First Experiment (Means)

Columns: (1) full sample; (2) received ambiguous payout e-mail solicitation; (3) received detailed payout e-mail solicitation; (4) chi-square statistic (p-value), (2) vs. (3); (5) received ambiguous task e-mail solicitation; (6) received detailed task e-mail solicitation; (7) chi-square statistic (p-value), (5) vs. (6).

                                            (1)    (2)    (3)    (4)           (5)    (6)    (7)
More ambiguity averse in first experiment   0.29   0.29   0.29   0.00 (0.96)   0.29   0.29   0.00 (0.96)
Female                                      0.48   0.44   0.50   0.61 (0.44)   0.40   0.54   3.70 (0.06)
First experiment conducted in library       0.52   0.57   0.48   1.46 (0.23)   0.46   0.58   2.70 (0.10)
Freshman                                    0.38   0.36   0.39   0.12 (0.73)   0.40   0.35   0.68 (0.41)
Sophomore                                   0.28   0.27   0.29   0.04 (0.84)   0.28   0.28   0.01 (0.91)
Junior                                      0.19   0.20   0.17   0.26 (0.61)   0.20   0.17   0.26 (0.61)
Economics major                             0.12   0.12   0.11   0.04 (0.85)   0.16   0.07   3.89 (0.05)
Psychology major                            0.02   0.02   0.02   0.00 (0.99)   0.03   0.01   1.00 (0.32)
Number of observations                      197    99     98     --            99     98     --

Note: The definition of "more ambiguity averse" is given in detail in Table 1.


APPENDIX B. TEXT OF INVITATION E-MAILS

Ambiguous E-mail

I am writing to inform you of an opportunity to participate in an Economics Department experiment on Tuesday, February 20th from 7:00 p.m. until 10:00 p.m. in Hopkins 108. You will earn either $10 or $20 by participating in this experiment and the session will last about 30 minutes.

Detailed Task, Ambiguous Payment E-mail

I am writing to inform you of an opportunity to participate in an Economics Department experiment on Tuesday, February 20th from 7:00 p.m. until 10:00 p.m. in Hopkins 108. You will earn either $10 or $20 by participating in this experiment, and the session will last about 30 minutes. The experiment will consist of two sections.

In the first section, you will have the opportunity, in private, to donate part of your show-up fee to a charity. The amount you give to the charity will be matched by the experimenter.

In the second section, you will be asked to make a series of decisions. For each decision, you will be asked to choose one of two options where the outcome of each option is uncertain. In some decisions, you will know the probability of each outcome within each option. In other decisions, you will not know the probability of each outcome for one of the options. After you have made your decisions, we will randomly select some of your decisions and you will be paid according to your choices.

Ambiguous Task, Detailed Payment E-mail

I am writing to inform you of an opportunity to participate in an Economics Department experiment on Tuesday, February 20th from 7:00 p.m. until 10:00 p.m. in Hopkins 108. You will earn either $10 or $20 by participating in this experiment. This experiment is designed so that you have a 50% chance of earning $10 and a 50% chance of earning $20. The session will last about 30 minutes.

Detailed Task, Detailed Payment E-mail

I am writing to inform you of an opportunity to participate in an Economics Department experiment on Tuesday, February 20th from 7:00 p.m. until 10:00 p.m. in Hopkins 108. You will earn either $10 or $20 by participating in this experiment. This experiment is designed so that you have a 50% chance of earning $10 and a 50% chance of earning $20. The session will last about 30 minutes. The experiment will consist of two sections.

In the first section, you will have the opportunity, in private, to donate part of your show-up fee to a charity. The amount you give to the charity will be matched by the experimenter.

In the second section, you will be asked to make a series of decisions. For each decision, you will be asked to choose one of two options where the outcome of each option is uncertain. In some decisions, you will know the probability of each outcome within each option. In other decisions, you will not know the probability of each outcome for one of the options. After you have made your decisions, we will randomly select some of your decisions and you will be paid according to your choices.

All Four E-mails

To sign up to participate in this experiment, please click on the link below.

APPENDIX C. PHASE 2 EXPERIMENT INSTRUCTIONS

[Italicized text in brackets details how subject instructions vary. Text in braces identifies the alternative text.]

Welcome to this experiment on decision making and thank you for being here. You will be compensated for your participation in this experiment, although the exact amount you will receive will depend on the choices you make, and on random chance. Even though you will make 20 decisions, only one of these will end up being used to determine your payment. Please pay careful attention to these instructions, as a significant amount of money is at stake.

Information about the choices that you make during the experiment will be kept strictly confidential. To maintain privacy and confidentiality, please do not speak to anyone during the experiment and please do not discuss your choices with anyone even after the conclusion of the experiment.

This experiment has four parts. First, you will be asked to make a series of decisions regarding charitable donations. In the second and third sections, you will be asked to make a series of choices between options, where the outcome of each option is not known with certainty. Finally, you will be asked a series of questions which you will either agree or disagree with along a scale. More detailed instructions will follow in each section.

Part 1

Today you received four envelopes: a "Start" envelope, a "Me" envelope, a "1" envelope, and a "2" envelope. In the start envelope you will find 10 $1 bills and 10 dollar-size pieces of blank paper. You will now have the opportunity to share part or all of the $10 with one or both of two charities. Any money that you donate to either charity will be matched, meaning every dollar you donate will result in the charity receiving two dollars. You may donate as much or as little of the $10 to each of these charities as you wish by placing dollar bills in the corresponding envelopes. At the end of the experiment, you will keep the "Me" envelope and any dollar bills you place in that envelope.

[A subject receives one of four versions. In half, Oxfam is the known charity while Habitat for Humanity is ambiguously described, whereas in the other half, Habitat for Humanity is the known charity while Oxfam is ambiguously described. We controlled for order effects. The ambiguous description is in braces.]

(1) Envelope 1: Habitat for Humanity, a nonprofit organization that builds homes for those in need and which has been instrumental in Hurricane Katrina relief efforts in the United States. {A nonprofit organization that works to help victims of natural disasters in the United States.}

(2) Envelope 2: Oxfam, a nonprofit organization that works to minimize poverty through relief and development work in Africa, committed to creating lasting solutions to global poverty, hunger, and social injustice. {A nonprofit organization that works to alleviate poverty in Africa.}

Part II

You will be making 10 choices between two lotteries, such as those represented as "Option A" and "Option B" below. The money prizes are determined by the computer equivalent of rolling a 10-sided die. Each outcome, 1-10, is equally likely. A computer generated "roll" for that decision will be made and you will be paid based on your decision.

Finally, please note that there are no tricks in this experiment. The roll will truly be performed at random via a process overseen by the Department of Economics.

[We present 10 of the following questions, with N = 1,2, ..., 9,10.]
Decision {N}

If you choose Option A in the row shown below, you will
have a {N} in 10 chance of earning $5.50 and a {10-N} in
10 chance of earning $4.40. Similarly, Option B offers a
{N} in 10 chance of earning $10.60 and a {10-N} in 10
chance of earning $0.28.

 Option A Option B

$5.50 if the die is 1 $10.60 if the die is 1
$4.40 if the die is 2 - 10 $0.28 if the die is 2 - 10

[] Option A [] Option B


Part III

In this section you will be asked to make a series of choices. In each scenario, there are two urns. Both will always contain 100 balls, each ball being either red or black. In each scenario, you will know the exact number of black and red balls in one urn, but you will not know the number of each color in the second urn, only that there are 100 balls in the second urn and every ball is either red or black. The balls are well mixed so that each individual ball is as likely to be drawn as any other.

For each questionnaire selected, the experimenter will randomly select one of the five scenarios, with each scenario as likely to be drawn as any other. The experimenter will then randomly select one of the two questions within the selected scenario, with each question as likely to be selected as the other. Finally, if your questionnaire is selected, a ball will be drawn on your behalf, with each ball as likely to be drawn as any other.

Finally, please note that there are no tricks in this experiment. Although in each scenario there is an urn for which you do not know the number of black and red balls, the number of unknown balls of each type already has been selected at random and is on file with the Williams College Department of Economics. Likewise, the pull of a ball from a chosen urn will truly be performed at random via a process overseen by the Department of Economics.

[We present five of the following questions, with M = 1,2,3,4,5.]
Decision # {M}

Urn A: 100 balls: {50-(M-1)*10} red, {50+(M - 1)*10}
black
Urn C: 100 balls: ? red, ? black

If I were to give you $10 if you pulled a red ball on your
first try, from which urn would you choose to draw?

[] Urn A [] Urn C

If I were to give you $10 if you pulled a black ball on
your first try, from which urn would you choose to draw?

[] Urn A [] Urn C


Part IV

Please answer the questions below according to your own feelings, rather than how you think "most people" would answer. Please be as honest and accurate as you can throughout; there is no right or wrong answer. In addition, please try not to let your response to one statement influence your responses to other statements. Think about each statement on its own. Please circle your response in the table.

A = I agree a lot

B = I agree a little

C = I neither agree nor disagree

D = I disagree a little

E = I disagree a lot

In uncertain times, I usually expect the best.                 A B C D E
It's easy for me to relax.                                     A B C D E
If something can go wrong, it will.                            A B C D E
I'm always optimistic about the future.                        A B C D E
I enjoy my friends a lot.                                      A B C D E
It's important for me to keep busy.                            A B C D E
I hardly ever expect things to go my way.                      A B C D E
I don't get upset too easily.                                  A B C D E
I rarely count on good things happening to me.                 A B C D E
Overall, I expect more good things to happen to me than bad.   A B C D E


REFERENCES

Ahn, D., S. Choi, D. Gale, and S. Kariv. "Estimating Ambiguity Aversion in a Portfolio Choice Experiment." ELSE Working Papers 294, ESRC Centre for Economic Learning and Social Evolution, University College London, 2007.

Camerer, C., and D. Lovallo. "Overconfidence and Excess Entry: An Experimental Approach." American Economic Review, 89(1), 1999, 306-18.

Camerer, C., and M. Weber. "Recent Developments in Modeling Preferences: Uncertainty and Ambiguity." Journal of Risk and Uncertainty, 5(4), 1992, 325-70.

Chen, Y., P. Katuscak, and E. Ozdenoren. "Why Can't a Woman Bid More Like a Man?" Working Paper, University of Michigan, 2009.

Davis, D.D., and C. A. Holt. Experimental Economics. Princeton, NJ: Princeton University Press, 1992.

Ellsberg, D. "Risk, Ambiguity, and the Savage Axioms." Quarterly Journal of Economics, 75(4), 1961, 643-69.

Eriksson, T., S. Teyssier, and M.-C. Villeval. "Self-Selection and the Efficiency of Tournaments." Economic Inquiry, 47(3), 2009, 530-48.

Falk, A., and T. J. Dohmen. "Performance Pay and Multi-Dimensional Sorting: Productivity, Preferences and Gender." IZA Discussion Paper No. 2001, Institute for the Study of Labor, 2006.

Gaudecker, H.-M., A. Van Soest, and E. Wengstrom. "Selection and Mode Effects in Risk Preference Elicitation Experiments." CentER Discussion Paper No. 2008-11, CentER for Economic Research, Tilburg University, 2008.

Gronau, R. "Wage Comparisons--A Selectivity Bias." Journal of Political Economy, 82(6), 1974, 1119-43.

Harrison, G. W., M. I. Lau, and E. E. Rutstrom. "Risk Attitudes, Randomization to Treatment, and Self-Selection into Experiments." Journal of Economic Behavior and Organization, 70(3), 2009, 498-507.

Harrison, G. W., and J. A. List. "Field Experiments." Journal of Economic Literature, 42(4), 2004, 1009-55.

Jamison, J., D. Karlan, and L. Schechter. "To Deceive or Not to Deceive: The Effect of Deception on Future Behavior in Laboratory Experiments." Journal of Economic Behavior and Organization, 68(3-4), 2008, 477-88.

Kagel, J. H., R. C. Battalio, and J. M. Walker. "Volunteer Artifacts in Experiments in Economics; Specification of the Problem and Some Initial Data from a Small-Scale Field Experiment," in Research in Experimental Economics, edited by V. L. Smith. Greenwich, CT: JAI Press, 1979, 169-97.

Lazear, E. P., U. Malmendier, and R. A. Weber. "Sorting in Experiments with Application to Social Preferences." National Bureau of Economic Research Working Paper No. W12041, 2006.

Malani, A. "Patient Enrollment in Medical Trials: Selection Bias in a Randomized Experiment." Journal of Econometrics, 144(2), 2008, 341-51.

Orne, M. T. "On the Social Psychological Experiment: With Particular Reference to Demand Characteristics and Their Implications." American Psychologist, 17(11), 1962, 776-83.

Palfrey, T., and S. Pevnitskaya. "Endogenous Entry and Self-Selection in Private Value Auctions: An Experimental Study." Journal of Economic Behavior and Organization, 66(3-4), 2008, 731-47.

Reiley, D. "Experimental Evidence on the Endogenous Entry of Bidders in Internet Auctions," in Experimental Business Research, Vol. II, edited by A. Rapoport and R. Zwick. Dordrecht, The Netherlands: Springer, 2005, 103-21.

Rosenthal, R., and R. L. Rosnow. The Volunteer Subject. New York: John Wiley and Sons, 1973.

(1.) Ellsberg (1961) provides the canonical thought experiment, suggesting that individuals, if forced to choose between two lotteries with different amounts of information available, will prefer to bet on one with a known but unfavorable probability of winning rather than on one with an unknown probability. Camerer and Weber (1992) review the early experimental evidence generally confirming this intuition. In a more recent study, Ahn et al. (2007) compare various empirical measures.

(2.) Some examples include Camerer and Lovallo (1999), Lazear, Malmendier, and Weber (2006), Eriksson, Teyssier, and Villeval (2009), Falk and Dohmen (2006), and Gaudecker, Van Soest, and Wengstrom (2008).

(3.) We subsequently dropped six subjects from our sample frame. Four were not on campus when we conducted Phase 2, and two learned of the study's research objectives and revealed this through informal communication with one of the researchers.

(4.) We inform subjects that the distribution of red and black balls in the ambiguous urn is constant for all decisions. We do not refer to urns as ambiguous or known.

(5.) Recall that the first-known urn contains 50 red and 50 black balls, and each subsequent known urn contains 10 fewer red balls and 10 more black balls.

(6.) Although this is more information than is usually provided about expected earnings, unlike the detailed pay e-mails, the exact distribution of payouts is left unknown.

(7.) Recall Malani (2008), which finds exactly such a problem in the context of medical trials.

ROBERT GAZZALE, JULIAN JAMISON, ALEXANDER KARLAN and DEAN KARLAN *

* The authors thank Williams College for funding.

Gazzale: Assistant Professor of Economics, Williams College, 24 Hopkins Hall Drive, Williamstown, MA 01267. Phone 413-597 4375, Fax 413-597-4045, E-mail [email protected]

Jamison: Senior Economist, Research Center for Behavioral Economics, Federal Reserve Bank of Boston, 600 Atlantic Avenue, Boston, MA 02210. Phone 617-973-3017, Fax 617-973-3957, E-mail: [email protected]

Karlan: Williams College, 24 Hopkins Hall Drive, Williamstown, MA 01267. E-mail [email protected]

Karlan: Professor of Economics, Yale University, P.O. Box 208269, New Haven, CT 06520. Phone 203-432-4479, Fax 203-432-5591, E-mail [email protected]

TABLE 1
Full Distribution of Ambiguity Urn Decisions from Phase 1

                                                        Frequency       Coding of binary variable
                                                        (percent)       "More Ambiguity Averse"
Chose ambiguous urn for both red and black at 40/60       5 (2.5%)      0
Chose ambiguous urn for both red and black at 50/50       6 (3.0%)      0
Chose ambiguous urn for either red or black at 50/50      3 (1.5%)      0
Switched to ambiguous urn at 40/60                       122 (61.9%)    0
Switched to ambiguous urn at 30/70                        46 (23.4%)    1
Switched to ambiguous urn at 20/80                         3 (1.5%)     1
Switched to ambiguous urn at 10/90                         0 (0.0%)     1
Never chose ambiguous urn                                  5 (2.5%)     1
Unable to classify (switched back and forth)               7 (3.6%)     --
Total                                                    197

TABLE 2
Determinants of Ambiguity Classification

Dependent variable: more ambiguity averse in first experiment.

                                                Probit           Ordinary Least Squares
                                                (1)              (2)
Female                                          0.038 (0.068)    0.039 (0.068)
First experiment conducted in library           0.086 (0.075)    0.087 (0.077)
First experiment conducted in economics class   Omitted          Omitted
Freshman                                        0.193 (0.122)    0.174 (0.111)
Sophomore                                       0.192 (0.125)    0.172 (0.111)
Junior                                          0.117 (0.133)    0.100 (0.116)
Senior + graduate                               Omitted          Omitted
Economics major                                 0.093 (0.115)    0.089 (0.109)
Psychology major                                -0.073 (0.208)   -0.073 (0.238)
R-squared                                       0.022            0.025
Observations                                    190              190

Note: Marginal effects are reported for the probit specification. Robust standard errors are in parentheses. Had anything been statistically significant, * would have indicated significance at 10%, ** at 5%, and *** at 1%. Information on major is unavailable for freshmen and sophomores.

TABLE 3
Analysis of Who Participated in Second Experiment: Means and Standard Errors

Columns (2)-(7) refer to the solicitation treatment in the second experiment: (1) full sample in first experiment; (2) ambiguous task solicitation; (3) detailed task solicitation; (4) chi-square p-value, (2) = (3); (5) ambiguous payment solicitation; (6) detailed payment solicitation; (7) chi-square p-value, (5) = (6). Standard errors in parentheses.

                                              (1)        (2)        (3)        (4)      (5)        (6)        (7)
Number of observations in first experiment    190        95         95         --       94         96         --
Number participated in second experiment      34         22         12         --       21         13         --
Percent participated in second experiment     18%        23%        13%        0.058    22%        14%        0.114
                                              (0.028)    (0.044)    (0.034)             (0.043)    (0.035)
(A) Proportion more ambiguity averse in       0.284      0.284      0.284      1.000    0.287      0.281      0.927
    first experiment                          (0.033)    (0.047)    (0.047)             (0.047)    (0.046)
(B) More ambiguity averse in first            0.294      0.136      0.583      --       0.333      0.231      --
    experiment AND participated in            (0.079)    (0.075)    (0.149)             (0.105)    (0.122)
    second experiment
(C) More ambiguity averse in first            0.282      0.329      0.241      --       0.274      0.289      --
    experiment AND did NOT participate        (0.036)    (0.055)    (0.047)             (0.053)    (0.050)
    in second experiment
Chi-square test, p-value (B) = (C)            0.888      0.079      0.014      --       0.596      0.663      --

Note: Observations include only those subjects whose Phase 1 ambiguity preferences we were able to classify.

TABLE 4
Determinants of Whether a Subject Participated in Second Experiment: Probit

Binary dependent variable: participated in second experiment.

                                                        (1)        (2)        (3)           (4)
More ambiguity averse in first experiment               0.012      0.011      0.208         -0.166***
                                                        (0.060)    (0.060)    (0.140)       (0.083)
E-mail with ambiguous task                              --         0.069      0.164***      --
                                                                   (0.054)    (0.063)
E-mail with ambiguous payout                            --         0.094*     0.080         --
                                                                   (0.054)    (0.062)
E-mail with ambiguous task AND recipient "more          --         --         -0.184***     --
  ambiguity averse" in first experiment                                        (0.039)
E-mail with ambiguous payout AND recipient "more        --         --         0.012         --
  ambiguity averse" in first experiment                                        (0.121)
Controls for gender, year in school, major, and         Yes        Yes        Yes           Yes
  location of first experiment
Pseudo-R-squared                                        0.098      0.125      0.174         0.393
Number of observations                                  190        190        190           45

Note: We exclude those subjects whose Phase 1 ambiguity we were unable to classify. Model (4) observations are those subjects in the ambiguous task/ambiguous pay treatment. Marginal effects are reported. Robust standard errors are in parentheses. Significance: * at 10%, ** at 5%, *** at 1%.

