The American Economic Association's registry for randomized controlled trials
Projection bias in an aversive environment
Last registered on October 26, 2020
Initial registration date
October 22, 2020
October 26, 2020 8:19 AM EDT
University of Zurich
Other Primary Investigator(s)
Additional Trial Information
This project aims to demonstrate the existence of projection bias (Loewenstein et al., 2003), the behavioral tendency to project current preferences onto future preferences, in an aversive environment. The main objective is to collect first-stage results in order to calculate the effect size needed for further studies.
Zhang, Sili. 2020. "Projection bias in an aversive environment." AEA RCT Registry. October 26.
Sponsors & Partners
Bad state treatment: participants will have a loud background sound played constantly in the headset while making evaluation choices.
Good state treatment: participants will not have the background sound played in the headset while making evaluation choices.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Willingness-to-pay to reduce the volume of the background sound in a 20-minute transcription task
Primary Outcomes (explanation)
Secondary Outcomes (end points)
Predicted self-reported unpleasantness of the task with and without volume reduction.
Secondary Outcomes (explanation)
The experiment aims to detect state dependence as predicted by projection bias (Loewenstein et al., 2003). The state is varied by the presence of an aversive event: annoying noises. Specifically, participants will be required to complete a few transcription tasks (cover-up tasks) and sit through 20 minutes with an annoying background sound that comes and goes, in varying intervals, for about half of the time (less than 80 dB; around 5 seconds each). The main outcome variable of interest is participants' willingness-to-pay (WTP) to lower the volume from loud to moderate, which will be elicited using an incentive-compatible Becker-DeGroot-Marschak (BDM) procedure.
Depending on the treatment, the willingness-to-pay choices will be elicited in one of two decision states: with the loud noise played constantly in the headset (bad state treatment) or without it (good state treatment). Participants will have fully experienced the sound at both full and reduced volume before making any decisions, so any treatment effect cannot be attributed to rational learning.
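The BDM elicitation described above can be sketched as follows. This is a minimal illustration only: the uniform price draw and the maximum price are assumptions, as the registration does not specify the price distribution or currency.

```python
import random

def bdm_elicitation(stated_wtp, max_price=5.00, rng=random):
    """One round of a Becker-DeGroot-Marschak (BDM) mechanism.

    The participant states a willingness-to-pay (WTP) to reduce the
    volume. A random price is drawn (here: uniform on [0, max_price],
    an assumption). If the stated WTP is at least the drawn price, the
    volume reduction is purchased at the drawn price, not at the stated
    WTP; otherwise no purchase occurs. Because the price is independent
    of the report, truthfully reporting WTP is a dominant strategy,
    which makes the procedure incentive compatible.
    """
    price = round(rng.uniform(0, max_price), 2)
    if stated_wtp >= price:
        return {"volume_reduced": True, "price_paid": price}
    return {"volume_reduced": False, "price_paid": 0.0}
```

Note that the participant never pays their stated WTP itself; this separation of report and price is what removes the incentive to shade bids.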
The remaining part of the experiment collects a variety of measures in order to pin down mechanisms. For instance, it includes projection bias measured by a real-effort task allocation in the spirit of Augenblick and Rabin (2019). It also includes two self-reported measures based on the ideas of shopping on an empty stomach and buyer's remorse, each of which reflects the nature of earlier studies on projection bias (e.g., Read and van Leeuwen, 1998; Conlin et al., 2007; Busse et al., 2014). For the sake of statistical power, I will only conduct heterogeneity analysis if I obtain a sufficiently significant first-stage result.
General noise sensitivity will also be included as a control variable. One concern with the current design is that people may have very subjective perceptions of aversiveness, and this heterogeneity may dominate the effect induced by the between-subject treatment. If a plain comparison of means across treatment groups does not detect the effect, as feared, I will separately analyze subgroups with different levels of noise sensitivity or rely on a regression framework with controls.
Experimental Design Details
Randomization done by a computer
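Computer randomization into the two arms can be sketched as follows. This is a hypothetical illustration assuming a balanced individual-level split; the registration does not specify the actual assignment procedure.

```python
import random

def randomize_participants(ids, seed=2020):
    """Randomly split participant IDs into two balanced treatment arms.

    Shuffles the IDs with a seeded generator (for reproducibility) and
    assigns the first half to the bad state treatment and the second
    half to the good state treatment.
    """
    rng = random.Random(seed)
    shuffled = list(ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"bad_state": shuffled[:half], "good_state": shuffled[half:]}
```

With the planned 72 participants, this yields the registered 36-per-arm split.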
Was the treatment clustered?
Sample size: planned number of clusters
4 experimental sessions with 18 subjects in each session (assuming full attendance)
Note that this number is not based on a power calculation. It is merely a feasible number estimated under the current COVID-19 situation, which is constantly evolving. It has been an extremely difficult period in which to plan any in-person experiment, given the rapidly rising number of COVID-19 cases, and a new COVID-19 policy could be implemented at any moment, which may unavoidably require changes to this plan.
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
36 in bad state treatment and 36 in good state treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
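A minimum detectable effect can be approximated from the planned arm sizes. The sketch below assumes a two-sided, two-sample comparison of means with equal arms, alpha = 0.05, and 80% power; these are conventional values, not parameters stated in the registration.

```python
from math import sqrt
from statistics import NormalDist

def mde_cohens_d(n_per_arm, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size in Cohen's d units.

    Uses the standard normal-approximation formula for a two-sided,
    two-sample test with equal arm sizes:
        d = (z_{1-alpha/2} + z_{power}) * sqrt(2 / n_per_arm)
    """
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_arm)
```

With 36 participants per arm, this gives a minimum detectable effect of roughly d = 0.66 standard deviations, i.e., only fairly large treatment effects would be detectable, consistent with the stated purpose of using this pilot to calibrate effect sizes for further studies.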
Supporting Documents and Materials
INSTITUTIONAL REVIEW BOARDS (IRBs)
Human Subjects Committee of the Faculty of Economics, Business Administration, and Information Technology
IRB Approval Date
IRB Approval Number
OEC IRB # 2020-070
Post Trial Information
Is the intervention completed?
Is data collection complete?
Is public data available?
Reports, Papers & Other Materials
REPORTS & OTHER MATERIALS