Losses make you think

Last registered on June 18, 2021

Pre-Trial

Trial Information

General Information

Title
Losses make you think
RCT ID
AEARCTR-0007839
Initial registration date
June 18, 2021


First published
June 18, 2021, 12:53 PM EDT


Locations

Region

Primary Investigator

Affiliation
Middlebury College

Other Primary Investigator(s)

PI Affiliation
Middlebury College

Additional Trial Information

Status
In development
Start date
2021-06-18
End date
2021-07-31
Secondary IDs
Abstract
A foundational result in behavioral economics is that decision-makers treat losses differently than gains. Until now, this difference has largely been hypothesized to be driven by an underlying “loss aversion” value function (Kahneman and Tversky, 1979). We hypothesize that losses may also focus decision-makers more effectively than gains, causing them to expend more cognitive effort.
External Link(s)

Registration Citation

Citation
Carpenter, Jeffrey and David Munro. 2021. "Losses make you think." AEA RCT Registry. June 18. https://doi.org/10.1257/rct.7839-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2021-06-21
Intervention End Date
2021-07-31

Primary Outcomes

Primary Outcomes (end points)
Primary Outcomes:
• Time spent deliberating:
o Qualtrics unobtrusively records how long participants spend on each of the seven CRT questions and the total time spent on the entire survey. Our first primary outcome is the total time spent on the seven questions.
o We may also control for the total time spent on the entire survey.
o We will winsorize the time spent data if it is sufficiently noisy (e.g., some people may just click through, and others might get distracted or walk away from their computer briefly).

• Number of correct versus intuitive choices:
o We will count the number of correct responses.
o We will also count the number of intuitively wrong CRT answers a participant submits as a measure of System 1 versus System 2 usage.
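The primary outcomes above could be computed from a Qualtrics export roughly as follows. This is a sketch only: the function names, data layout, and the 5th/95th winsorization percentiles are illustrative assumptions, not part of the registration.

```python
import numpy as np

def summarize_crt(times, answers, correct_key, intuitive_key):
    """Per-participant CRT outcomes.

    times: per-question response times in seconds (7 CRT items).
    answers: the participant's submitted answers.
    correct_key / intuitive_key: the correct answers and the
    intuitive-but-wrong answers, respectively.
    """
    total_time = float(np.sum(times))                          # primary time outcome
    n_correct = sum(a == c for a, c in zip(answers, correct_key))
    n_intuitive = sum(a == i for a, i in zip(answers, intuitive_key))
    return total_time, n_correct, n_intuitive

def winsorize(x, lo=5, hi=95):
    """Clip total-time values at the lo-th and hi-th sample percentiles,
    to handle click-through and walk-away outliers."""
    lo_v, hi_v = np.percentile(x, [lo, hi])
    return np.clip(x, lo_v, hi_v)
```

The winsorization step would be applied across the sample of total times, not within a participant, and only if the time data turn out to be sufficiently noisy.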
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes:
• NASA responses:
o Responses to the five NASA cognitive load/effort questions are recorded on a scale between 0 and 10, which we will treat cardinally.
o We will focus on responses to three prompts: “How mentally demanding were these questions?”, “How hard did you have to work to accomplish your level of performance?”, and “How much attention were you paying to the questions as you answered them?”
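Treating the 0–10 ratings cardinally amounts to simple averaging over items. A minimal sketch (the item labels and the choice of an unweighted mean are my assumptions; the registration only specifies the scale and the three focal prompts):

```python
def nasa_score(responses):
    """Unweighted mean of NASA-TLX-style effort/load items.

    responses: dict mapping an item label to its 0-10 rating,
    treated cardinally as pre-specified.
    """
    if not responses:
        raise ValueError("no responses supplied")
    if not all(0 <= v <= 10 for v in responses.values()):
        raise ValueError("ratings must lie in [0, 10]")
    return sum(responses.values()) / len(responses)
```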
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We examine how financial incentives affect cognitive effort and outcomes.
Experimental Design Details
Control treatment: We will conduct a short (<10 minute) Qualtrics survey in which we collect data on people’s basic demographic characteristics (e.g., education, gender, age, income) and then ask them to answer 7 cognitive reflection test (CRT) questions (Frederick, 2005) for a fixed bonus of one additional dollar, regardless of how they perform. After answering the CRT questions, participants answer 5 questions about the difficulty of the task and how much effort they expended, using the NASA-TLX protocol (Colligan et al., 2015).

Gain treatment: The gain treatment is identical to the control except for how participants are compensated for the CRT questions. Here, participants earn a $0.25 bonus for each question answered correctly. The maximum total bonus is, therefore, $1.75 beyond the additional dollar all participants get.

Loss treatment: The loss treatment is identical to the control except for how participants are compensated for the CRT questions. Here, participants start with an endowment of $1.75 and lose $0.25 for each question answered incorrectly. The maximum total bonus is, therefore, still $1.75 beyond the additional dollar all participants get.
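The three payment schemes can be written out explicitly. A sketch (the function name and parameterization are mine): because the loss-arm endowment equals the piece rate times the number of questions, the gain and loss arms pay identically for any given score, so only the framing differs.

```python
def total_bonus(treatment, n_correct, n_questions=7, piece_rate=0.25, endowment=1.75):
    """Total bonus in dollars, including the additional dollar paid in every arm."""
    base = 1.00  # the additional dollar all participants get
    if treatment == "control":
        return base                                   # fixed, performance-independent
    if treatment == "gain":
        return base + piece_rate * n_correct          # earn per correct answer
    if treatment == "loss":                           # lose per incorrect answer
        return base + endowment - piece_rate * (n_questions - n_correct)
    raise ValueError(f"unknown treatment: {treatment}")
```

For example, a participant with 5 correct answers receives $2.25 under either the gain or the loss framing.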
Randomization Method
By Qualtrics
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
600 Individuals
Sample size: planned number of observations
600 Individuals
Sample size (or number of clusters) by treatment arms
200 per arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
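The MDE field is left blank in the registry. Purely as an illustration (not part of the registration), a standard two-sided, two-sample normal-approximation calculation with 200 participants per arm, 5% significance, and 80% power gives an MDE of roughly 0.28 standard deviations:

```python
from statistics import NormalDist

def mde_two_sample(n_per_arm, alpha=0.05, power=0.80):
    """Minimum detectable effect, in standard-deviation units, for a
    two-sided comparison of two equal-sized arms (normal approximation,
    equal variances)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_power = z.inv_cdf(power)           # quantile corresponding to desired power
    return (z_alpha + z_power) * (2 / n_per_arm) ** 0.5
```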
IRB

Institutional Review Boards (IRBs)

IRB Name
Middlebury College IRB
IRB Approval Date
2021-05-11
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials