Inducing risk-aversion in economics experiments

Last registered on July 08, 2022

Pre-Trial

Trial Information

General Information

Title
Inducing risk-aversion in economics experiments
RCT ID
AEARCTR-0009336
Initial registration date
July 05, 2022

First published
July 08, 2022, 9:28 AM EDT

Locations

Primary Investigator

Affiliation
UiB

Other Primary Investigator(s)

PI Affiliation
UC Berkeley

Additional Trial Information

Status
Ongoing
Start date
2022-05-01
End date
2023-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The proverb "easy come, easy go" tells us that the regret from losing something depends upon how hard we worked to get it. Normative economic theory assumes, however, that liquid wealth is fungible irrespective of its source; how a dollar is obtained should not affect what we buy with that dollar or the risk we are willing to take investing that dollar. Thaler and Johnson (1990) dispute that claim with a series of experiments demonstrating that people make different choices with money that has been easily or unexpectedly obtained. Thaler argues that people behave as if income and expenses are assigned to separate mental accounts with limited fungibility between accounts (Thaler 1999; Shefrin and Thaler 1988). Money easily gained is likely to end up in a mental account from which money is easily spent and readily wagered. Hard-earned money is likely to land in a mental account from which money is more carefully spent and less readily wagered. For most people, most money is of the earned variety.

In economics laboratory experiments, participants are typically given an endowment equivalent to a couple of hours’ wages. Such endowments encourage participants to pay attention, exert more effort, and try to make choices that lead to higher earnings within the design of the experiment. However, Thaler and Johnson (1990) argue that when people lose money they consider to be a windfall gain, the loss is likely to be coded as a reduction in the gain which “doesn’t hurt as much as losing one’s own cash” (p. 657). Thus laboratory participants who mentally code money given to them in a laboratory as a windfall gain, distinct and separate from their earned income and savings, may display much less risk aversion in the laboratory than they do in their daily lives.

Our purpose is to demonstrate that, in an experimental laboratory setting, subjects asked to make risky choices take more risk with money they were given by the experimenter than with money they earned. These differences in risk taking are dramatic and raise questions about inferences drawn from prior risk-taking experiments with endowed money.
External Link(s)

Registration Citation

Citation
Hvide, Hans and Terrance Odean. 2022. "Inducing risk-aversion in economics experiments." AEA RCT Registry. July 08. https://doi.org/10.1257/rct.9336-1.0
Experimental Details

Interventions

Intervention(s)
Subjects in the XLab subject pool at UC Berkeley are invited to participate on a first-come, first-served basis. There are no other inclusion/exclusion criteria. We plan to run approximately 20 sessions, with approximately 20 subjects per session, for a total of 400 subjects.

Subjects from the XLab subject pool are invited via email by XLab staff to participate in a single session, with no follow-up sessions. The experiment should take less than one hour to complete. Data collection is through a computer.

Each session will follow one of two treatment protocols:
Protocol for Treatment 1:
1. Subjects join the experiment via Zoom and Sona Systems.
2. The experimenter has no interaction with subjects until the beginning of the experiment.
3. Sona Systems directs subjects to the experiment program hosted on Heroku. Sona Systems passes an identification number to the experiment program. At the end of the experiment, the identification number and payment information are sent to the XLab to process subject payments.
4. Subjects read and, if they choose to, accept a consent form laying out the procedures for their session in detail. There will be time to clarify any and all questions at this point, and subjects will explicitly be instructed that they can ask questions at any time, and also stop their participation in the study at any time without any detrimental consequences.
5. All subjects agreeing to participate will sign the consent form. All subjects will be told that they will be paid a $5 “show-up” fee. Subjects may drop out of the study at any stage. Subjects who drop out will be given the show-up fee.
6. The experimenter will read out selected instruction screens aloud and explicitly ask if there are any questions concerning the instructions.
7. Subjects will be given a task completion goal of 200 CAPTCHAs during a period of 27 to 30 minutes (e.g., McMahon, 2015). A minimum of 8 seconds and a maximum of 30 seconds is spent on each CAPTCHA. Subjects will be told that if they achieve the goal of 200 CAPTCHAs they will earn $11 (in addition to the "show-up" fee). The goal will be chosen such that all or most subjects can reach it in the allotted time with moderate effort. We want subjects to achieve the goal but to exert effort to do so.
8. Subjects who do not achieve the work goal will receive pay and drawing payoffs proportional to the number of CAPTCHAs completed, and will continue the experiment. For example, a subject who completes 190 CAPTCHAs will be paid $10.45. This completes the first stage of the experiment.
9. The second stage of the experiment is a modified version of the procedure in Holt & Laury (2002). Subjects face 11 decision problems, represented by 11 rows in a table. For each row, subjects can pick the certain option of keeping the $11.00 payment from the first stage of the experiment, or pick a lottery option which pays $22 with probability x or $0.50 with probability 1-x. The probability x equals 0% in Row 1 and increases in increments of 10% to 100% in Row 11.
10. After subjects make their selections for each of the 11 decision problems (rows), the computer randomly selects one of the rows. If, for that row, the subject chose the lottery over the certain amount, the outcome ($22 or $0.50) will be randomly generated by the computer, and the subject will be paid that amount in addition to the show-up fee. If the subject chose the safe option of keeping $11 for that row, the subject will be paid $11 in addition to the show-up fee.
11. The experiment ends with subjects being asked to answer, on the computer, questions about their gender, age, and field of study. Answering these questions is voluntary and not a requirement of payment.
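The choice list and random-row payment rule in steps 9 and 10 can be sketched in a short simulation. This is illustrative only; the names `build_rows` and `pay_subject` are ours, not the experiment program's.

```python
import random

CERTAIN = 11.00          # first-stage earnings kept with certainty
HIGH, LOW = 22.00, 0.50  # lottery payoffs

def build_rows():
    # Row 1 has a 0% chance of the high payoff; Row 11 has 100%.
    return [i / 10 for i in range(11)]  # win probabilities 0.0 ... 1.0

def pay_subject(choices):
    """choices[i] is 'lottery' or 'certain' for row i (0-indexed).
    One row is drawn at random and paid for real, per step 10."""
    probs = build_rows()
    row = random.randrange(11)
    if choices[row] == "certain":
        return CERTAIN
    return HIGH if random.random() < probs[row] else LOW

# Example: a subject who takes the lottery only when the win chance is >= 70%
choices = ["lottery" if p >= 0.7 else "certain" for p in build_rows()]
```

Paying out a single randomly drawn row, rather than all 11, is the standard way to keep each row's choice incentive-compatible.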

The Protocol for Treatment 2 matches that of Treatment 1 in all respects except:
1. Subjects are asked to complete 10 CAPTCHAs within 5 minutes. After the five-minute session of completing CAPTCHAs, subjects proceed to the second stage of the experiment, even if they do not complete the 10 CAPTCHAs. In the second stage subjects are given (endowed with) $11.00, irrespective of whether they completed the 10 CAPTCHAs. They then proceed as in step 9 above.
We will look at the row at which each subject switches from the certain option to the lottery option (if the subject switches). Our hypothesis is that subjects in Treatment 2 will, on average, switch to the lottery sooner. We will drop subjects who make inconsistent choices as recommended in Charness, Gneezy, and Imas (2013).
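The switching-row outcome and the screen for inconsistent choices can be sketched as follows. These are hypothetical helpers in the spirit of Charness, Gneezy, and Imas (2013), not the registered analysis code.

```python
def is_consistent(choices):
    """choices: list of 11 entries, 'certain' or 'lottery', ordered by row
    (win probability rising from 0% to 100%). A consistent subject switches
    from 'certain' to 'lottery' at most once and never switches back."""
    switched = False
    for c in choices:
        if c == "lottery":
            switched = True
        elif switched:  # chose 'certain' again after picking the lottery
            return False
    return True

def switch_row(choices):
    """1-indexed row of the first lottery choice, or None if the subject
    never switches."""
    for i, c in enumerate(choices, start=1):
        if c == "lottery":
            return i
    return None
```

Subjects for whom `is_consistent` is False would be dropped, and the treatment comparison would be a test of mean `switch_row` across arms.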

References:
Augenblick, Ned, Muriel Niederle, and Charles Sprenger. 2015. "Working over Time: Dynamic Inconsistency in Real Effort Tasks." Quarterly Journal of Economics 130(3): 1067-1115.
Charness, Gary, Uri Gneezy, and Alex Imas. 2013. "Experimental Methods: Eliciting Risk Preferences." Journal of Economic Behavior & Organization 87: 43-51.
Holt, Charles A., and Susan K. Laury. 2002. "Risk Aversion and Incentive Effects." American Economic Review 92(5): 1644-1655.
McMahon, Matthew. 2015. "Better Lucky than Good: The Role of Information in Other-Regarding Preferences." Working paper.
Intervention Start Date
2022-07-20
Intervention End Date
2022-10-31

Primary Outcomes

Primary Outcomes (end points)
Our key outcome variable is the mean row at which subjects switch from picking the safe option to picking the risky option, compared across Treatment 1 and Treatment 2.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Explained under "Intervention".
Experimental Design Details
Randomization Method
By computer.
Randomization Unit
We randomize at three levels:
(i) which subjects are assigned to Treatment 1 or Treatment 2
(ii) the row picked in the second stage
(iii) whether a subject receives the high ($22) or the low ($0.50) payoff
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We intend to have 400 subjects, randomly assigned to one of two treatments. Subjects are not clustered.
Sample size: planned number of observations
We have about 400 subjects, each of whom provides one observation (i.e., the row at which they switch from the certain outcome to the lottery outcome). Each subject makes 11 decisions, so in all we observe about 4,400 decisions. We intend to drop subjects who make inconsistent choices, e.g., choose the lottery when the chance of winning $22 is 40%, choose the certain $11 when the chance of winning $22 is 50%, and switch back to the lottery when the chance of winning $22 is 60%.
Sample size (or number of clusters) by treatment arms
200 individuals for each of two treatments
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our primary analysis will be to regress the row at which a subject switched from the certain choice to the lottery on an indicator variable for treatment. This is equivalent to a t-test of the difference in mean switching points between the treated and control groups. The standard error of the test statistic equals sqrt(s1^2/n1 + s2^2/n2), where s1^2 and s2^2 are the sample variances and n1 and n2 are the sample sizes for the two groups.

Assuming, conservatively, that the switching point is uniformly distributed on [1, 11], the population variances are about 8.2. Assuming further that n1 = n2 = n, which is what we plan for the experiment, the standard error of our test statistic is approximately SE = sqrt(8.2/n + 8.2/n) = 4/sqrt(n). With n = 200, SE(200) = 4/sqrt(200) ≈ 0.29. With n = 100, SE(100) = 4/sqrt(100) = 0.4.

We plan a sample size of 400, i.e., n1 = n2 = 200. The ensuing SE of about 0.29 will allow us to detect relatively small differences in behavior between the treated and control groups. For example, a difference in mean switching point of 0.6 will yield a t-statistic of about 2 (in previous experiments, we obtained a difference in mean switching point of about 1.5). The actual power of the experiment will be higher because subjects' choices will be much less dispersed than uniform on [1, 11]. In the regressions, we include a dummy for gender in addition to a dummy for treated status; this will reduce power, but only by a small amount.
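As a check on the arithmetic above, the standard errors can be reproduced in a few lines. This is a sketch: `se_diff` is our name, and the variance of 8.2 is the figure assumed in the text.

```python
import math

def se_diff(var1, var2, n1, n2):
    # standard error of a difference between two independent sample means
    return math.sqrt(var1 / n1 + var2 / n2)

VAR = 8.2  # assumed variance of a switching point uniform on [1, 11]

print(round(se_diff(VAR, VAR, 200, 200), 2))       # 0.29 with n = 200 per arm
print(round(se_diff(VAR, VAR, 100, 100), 2))       # 0.4  with n = 100 per arm
print(round(0.6 / se_diff(VAR, VAR, 200, 200), 1)) # 2.1: t-stat for a 0.6-row difference
```

A 0.6-row difference in mean switching point thus yields a t-statistic just above 2 at the planned n = 200 per arm, consistent with the calculation above.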
IRB

Institutional Review Boards (IRBs)

IRB Name
UC Berkeley Committee for the Protection of Human Subjects
IRB Approval Date
2020-10-26
IRB Approval Number
2020-09-13681

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials