Salience, Risk, and Effort
Last registered on October 28, 2020

Pre-Trial

Trial Information
General Information
Title
Salience, Risk, and Effort
RCT ID
AEARCTR-0006654
Initial registration date
October 27, 2020
Last updated
October 28, 2020 10:38 AM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Harvard University
Other Primary Investigator(s)
Additional Trial Information
Status
In development
Start date
2020-10-28
End date
2021-05-01
Secondary IDs
Abstract
I investigate how changing incentive schemes can affect worker effort by manipulating salience. I test the predictions of a salience model in an online experiment in which I vary the salience of different aspects of an incentive scheme while holding the objective expected incentives constant. I then test whether this manipulation leads workers to exert more effort than the Bayesian benchmark predicts.
External Link(s)
Registration Citation
Citation
Conlon, John. 2020. "Salience, Risk, and Effort." AEA RCT Registry. October 28. https://doi.org/10.1257/rct.6654-2.0.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2020-10-28
Intervention End Date
2020-11-02
Primary Outcomes
Primary Outcomes (end points)
How often respondents choose to do more tasks rather than fewer.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Participants on Amazon Mechanical Turk will make a series of binary effort choices in which they can complete either 1 or 6 "tasks"; each task consists of transcribing blurry Greek letters. Each option comes with a lottery, and the lottery for doing 6 tasks weakly dominates the lottery for doing 1 task. Holding the expected benefit of completing extra tasks constant, I vary the riskiness of exerting effort so that the upside of doing so becomes more or less salient.
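The design described above can be sketched as a pair of lotteries with the same expected benefit to effort but different riskiness. All payoff amounts and probabilities below are invented for illustration; the registration does not disclose the actual parameters, and the function name is mine.

```python
# Hypothetical illustration: two lottery pairs with the same expected
# benefit to doing 6 tasks instead of 1, but different riskiness.
# A lottery is a list of (probability, payoff-in-dollars) pairs.

def expected_value(lottery):
    """Expected payoff of a lottery given as [(probability, payoff), ...]."""
    return sum(p * x for p, x in lottery)

# Safe pair: doing 6 tasks adds a certain $0.50.
one_task_safe = [(1.0, 0.50)]
six_tasks_safe = [(1.0, 1.00)]

# Risky pair: doing 6 tasks adds a 10% chance of a large bonus,
# with the same expected benefit of $0.50.
one_task_risky = [(1.0, 0.50)]
six_tasks_risky = [(0.9, 0.50), (0.1, 5.50)]

benefit_safe = expected_value(six_tasks_safe) - expected_value(one_task_safe)
benefit_risky = expected_value(six_tasks_risky) - expected_value(one_task_risky)
assert abs(benefit_safe - benefit_risky) < 1e-9  # same objective incentive
```

In the risky pair, the large upside payment is meant to make the benefit of effort more salient even though the objective expected benefit is unchanged.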
Experimental Design Details
Not available
Randomization Method
I use a timer to implement the lotteries. Respondents stop a timer that counts up from zero at a moment of their choosing, and the last two digits of the timer (which includes milliseconds) determine their payoffs. In piloting, I have found both that respondents are unable to control these last two digits (which appear uniformly distributed, as we would expect) and that they do not appear to believe they can do so (e.g., they do not systematically pick risky lotteries that would be especially attractive if they could control the value of the timer better than they in fact can).

To randomly assign respondents to treatments, I use the randomizer function in Qualtrics.
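A minimal sketch of the timer-based lottery described above, assuming the last two digits of the stopped timer act as a uniform draw on 0–99 that is compared against the lottery's win probability. The function name and the mapping convention are mine, not taken from the registration.

```python
def lottery_outcome(stop_time_ms: int, win_probability: float) -> bool:
    """Resolve a lottery using the last two digits of the stopped timer.

    stop_time_ms: the timer value at the moment the respondent stops it,
                  in milliseconds (hypothetical encoding).
    win_probability: the lottery's chance of paying out, in [0, 1].
    """
    draw = stop_time_ms % 100          # last two digits: 0..99, ~uniform
    return draw < win_probability * 100

# e.g., a timer stopped at 12.347 seconds yields a draw of 47
assert lottery_outcome(12347, 0.90)      # 47 < 90: win
assert not lottery_outcome(12347, 0.10)  # 47 >= 10: lose
```

Because respondents cannot control the trailing digits, the draw behaves like a uniform random number even though the respondent physically triggers it.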
Randomization Unit
Across individuals, I randomize whether p (the parameter described above) is 0.10, 0.50, or 0.90. Within individuals, I randomize the order in which the binary effort choices are presented to them.

Randomization also occurs within individuals: each participant sees all twelve combinations of expected incentive strength in a random order. To account for correlation across choices within individuals, I will cluster my standard errors at the individual level where relevant.
Was the treatment clustered?
Yes
Experiment Characteristics
Sample size: planned number of clusters
I plan to include 400 individuals per treatment arm, or 1,200 overall.
Sample size: planned number of observations
Each respondent makes 12 effort decisions, so the effective number of observations is 14,400 (1,200 respondents × 12 decisions).
Sample size (or number of clusters) by treatment arms
400 people per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The main outcome is binary: whether or not a respondent chooses to exert effort. Call the mean of this variable a_bar; its variance is therefore a_bar*(1-a_bar). Using simulations, I estimate that the planned sample size will be sufficient to detect differences between incentive shapes of 5 percentage points at p < 0.05 with power 0.80. I use simulation because I will control in the regression for individual fixed effects and expected strength of incentives, and cluster at the individual level.
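A simplified version of the power simulation described above. All parameters here (a 50% baseline effort rate, a 5 pp treatment effect, the degree of individual heterogeneity) are hypothetical, and clustering is handled by collapsing to one within-person difference per respondent and running a paired t-test, which is a coarser approach than the planned fixed-effects regression with clustered standard errors.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_power(n_people=1200, choices_per_arm=6, effect=0.05,
                   base=0.50, sd_person=0.10, n_sims=500):
    """Fraction of simulated experiments that reject H0 at the 5% level."""
    rejections = 0
    for _ in range(n_sims):
        person = rng.normal(0, sd_person, n_people)       # individual heterogeneity
        p0 = np.clip(base + person, 0.01, 0.99)           # control choice prob.
        p1 = np.clip(base + effect + person, 0.01, 0.99)  # treated choice prob.
        # Each person's share of effort choices in each condition:
        y0 = rng.binomial(choices_per_arm, p0) / choices_per_arm
        y1 = rng.binomial(choices_per_arm, p1) / choices_per_arm
        d = y1 - y0                                       # one difference per person
        t = d.mean() / (d.std(ddof=1) / np.sqrt(n_people))
        rejections += abs(t) > 1.96
    return rejections / n_sims

print(simulate_power())  # estimated power at the assumed parameters
```

Collapsing to one difference per person makes the test robust to within-person correlation by construction, which is the same concern that clustering at the individual level addresses in the regression.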
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2020-05-13
IRB Approval Number
IRB20-0772