Projection bias in an aversive environment

Last registered on October 26, 2020

Pre-Trial

Trial Information

General Information

Title
Projection bias in an aversive environment
RCT ID
AEARCTR-0006659
Initial registration date
October 22, 2020

First published
October 26, 2020, 8:19 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Zurich

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-10-23
End date
2020-11-30
Secondary IDs
Abstract
This project aims to demonstrate the existence of projection bias (Loewenstein et al., 2003), the behavioral tendency to project current preferences into the future, in an aversive environment. The main objective is to collect first-stage results in order to estimate the effect size needed to power further studies.
External Link(s)

Registration Citation

Citation
Zhang, Sili. 2020. "Projection bias in an aversive environment." AEA RCT Registry. October 26. https://doi.org/10.1257/rct.6659-1.0
Experimental Details

Interventions

Intervention(s)
Bad state treatment: a loud background sound will play constantly through participants' headsets while they make evaluation choices.
Good state treatment: no background sound will play through participants' headsets while they make evaluation choices.
Intervention Start Date
2020-10-23
Intervention End Date
2020-11-30

Primary Outcomes

Primary Outcomes (end points)
Willingness-to-pay to reduce the volume of the background sound in a 20-minute transcription task
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Predicted self-reported unpleasantness of the task with and without volume reduction.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment aims to detect state dependence as predicted by projection bias (Loewenstein et al., 2003). The state is varied by the presence of an aversive event: annoying noises. Specifically, participants will be required to complete a few transcription tasks (cover-up tasks) and sit through 20 minutes during which an annoying background sound (less than 80 dB; bursts of around 5 seconds each) comes and goes at irregular intervals for about half of the time. The main outcome variable of interest is participants' willingness-to-pay (WTP) to lower the volume from loud to moderate, which will be elicited using an incentive-compatible Becker–DeGroot–Marschak (BDM) procedure.
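The BDM elicitation described above can be sketched as follows. The price range and payoff structure here are illustrative assumptions, not details taken from the registration: the participant states a WTP, a price is drawn at random, and the volume reduction is purchased at the drawn price if and only if the stated WTP is at least that price (which makes truthful reporting optimal).

```python
import random

def bdm_purchase(wtp, price_min=0.0, price_max=5.0):
    """Becker-DeGroot-Marschak mechanism (sketch; hypothetical price range).
    Draw a random price; the participant buys the volume reduction iff
    their stated willingness-to-pay is at least the drawn price,
    paying the drawn price (not their stated WTP)."""
    price = random.uniform(price_min, price_max)
    if wtp >= price:
        return {"reduced_volume": True, "paid": price}
    return {"reduced_volume": False, "paid": 0.0}
```

Because the price paid never depends on the stated WTP itself, overstating or understating WTP can only lead to buying at a price above one's true value or missing a profitable purchase.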

Depending on the treatment, the willingness-to-pay choices will be elicited in one of two decision states: with the loud noise played constantly through the headset (bad state treatment) or without it (good state treatment). Participants will experience the sound at both full and reduced volume before making any decisions; therefore, any treatment effect cannot be attributed to rational learning.

The remaining part of the experiment aims to collect a variety of measures in order to pin down mechanisms. For instance, it includes projection bias measured by a real-effort task allocation in the spirit of Augenblick and Rabin (2019). It also includes two self-reported measures based on the ideas of shopping on an empty stomach and buyer's remorse, each of which reflects the nature of earlier studies on projection bias (e.g., Read and van Leeuwen, 1998; Conlin et al., 2007; Busse et al., 2014). For the sake of statistical power, I will only conduct heterogeneity analysis if I obtain a sufficiently significant first-stage result.

General noise sensitivity will also be included as a control variable. One concern with the current design is that perceptions of aversiveness may be highly subjective, and this subject-level heterogeneity may dominate the effect induced by the between-subject treatment. If, as feared, a plain comparison of means across treatment groups proves insufficient, I will separately analyze subgroups with different levels of noise sensitivity or rely on a regression framework with controls.
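The primary analysis described above is a comparison of mean WTP across the two treatment arms. A minimal sketch of that comparison, using a Welch two-sample t statistic and purely illustrative data (not study data), could look like this:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch two-sample t statistic for comparing mean WTP across arms.
    Allows unequal variances; a and b are lists of per-subject WTPs."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Illustrative data only: WTPs in the good- and bad-state arms.
good_state = [1.0, 2.0, 3.0, 4.0]
bad_state = [2.0, 3.0, 4.0, 5.0]
t = welch_t(good_state, bad_state)
```

In practice the subgroup analysis by noise sensitivity, or the regression with controls, would replace this plain comparison if heterogeneity dominates.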
Experimental Design Details
Randomization Method
randomization done by a computer
Randomization Unit
individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
4 experimental sessions with 18 subjects each (assuming full attendance)

Note that this number is not based on a power calculation. It is merely a feasible number estimated under the current COVID-19 situation, which is constantly evolving. Unfortunately, it has been an extremely difficult period in which to plan any offline experiment due to the exponentially rising number of COVID-19 cases. A new COVID-19 policy may be implemented at any moment, which may unavoidably force changes to this plan.
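Although the registration reports no power calculation, a back-of-the-envelope minimum detectable effect for the planned allocation can be sketched as follows. The 5% significance level and 80% power are conventional assumptions of mine, not figures from the registration, and the normal approximation ignores the small-sample t correction:

```python
from statistics import NormalDist

def mde(n_per_arm, sd, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-sample comparison of means
    (sketch; normal approximation, equal variances and equal arm sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for 80% power
    return (z_alpha + z_power) * sd * (2 / n_per_arm) ** 0.5

# With 36 subjects per arm, the detectable effect in outcome-SD units:
print(round(mde(36, sd=1.0), 2))  # -> 0.66
```

That is, with 36 subjects per arm the design can detect only fairly large effects (around two-thirds of a standard deviation under these assumptions), consistent with the registration's framing of this trial as a first-stage pilot for sizing later studies.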
Sample size: planned number of observations
72
Sample size (or number of clusters) by treatment arms
36 in bad state treatment and 36 in good state treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee of the Faculty of Economics, Business Administration, and Information Technology
IRB Approval Date
2020-10-19
IRB Approval Number
OEC IRB # 2020-070

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials