Salience, Risk, and Effort

Last registered on October 28, 2020

Pre-Trial

Trial Information

General Information

Title
Salience, Risk, and Effort
RCT ID
AEARCTR-0006654
Initial registration date
October 27, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 28, 2020, 9:13 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
October 28, 2020, 10:38 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-10-28
End date
2021-05-01
Secondary IDs
Abstract
I investigate how changing incentive schemes can affect worker effort by manipulating salience. I test the predictions of a salience model in an online experiment, where I vary the salience of different aspects of an incentive scheme while holding constant the objective expected incentives. I test whether doing so can induce workers to exert more effort than the Bayesian benchmark predicts.
External Link(s)

Registration Citation

Citation
Conlon, John. 2020. "Salience, Risk, and Effort." AEA RCT Registry. October 28. https://doi.org/10.1257/rct.6654-2.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-10-28
Intervention End Date
2020-11-02

Primary Outcomes

Primary Outcomes (end points)
How often respondents choose to do more tasks rather than fewer.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants on Amazon Mechanical Turk will make a series of binary effort choices, in which they choose to complete either 1 or 6 "tasks". Each task consists of transcribing blurry Greek letters. Each option comes with a lottery, and the lottery for doing 6 tasks weakly dominates the lottery for doing 1 task. Holding the expected benefit of completing extra tasks constant, I vary the riskiness of exerting effort so that the upside of doing so becomes more or less salient.
Experimental Design Details
Each participant will make 12 binary choices, as described above. A random one of these choices will be implemented, making all choices incentive compatible. Within individual, I vary both the objective expected benefit from doing more tasks (4 different incentive levels) and the "shape" of the incentives, which can be linear (earn $X extra no matter what), convex (earn a big bonus with a small probability), or concave (avoid missing out on a big bonus with a small probability).

The respondents' payoff is (implicitly: it is not described to them this way) a function of their effort, which in my model allows them to produce a unit of output, and of whether they, by chance, produce an additional unit of output (regardless of their effort decision). A key parameter is how likely they are to produce this additional output, which I denote p. This parameter is important because when p is low (in my experiment, 0.10), convex incentives should have a positive effect on effort, and when p is intermediate (0.50) or high (0.90), concave incentives should do so. I therefore have three treatments corresponding to these three cases.
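To make this structure concrete, the following sketch enumerates one possible parameterization of the twelve choices (four expected-benefit levels crossed with three incentive shapes) and checks that, for a given p, all three shapes deliver the same expected benefit from exerting effort. The dollar amounts and the exact mapping from output to pay are illustrative assumptions, not the registered payment parameters.

# Illustrative sketch: one way to construct incentive shapes that hold the
# expected benefit of effort constant. Model: effort adds one unit of output;
# a second "chance" unit arrives with probability p regardless of effort.
from itertools import product

def expected_benefit(shape, bonus, p):
    """Expected payoff gain from exerting effort, by incentive shape."""
    if shape == "linear":       # earn `bonus` extra no matter what
        return bonus
    if shape == "convex":       # big bonus only if BOTH units are produced
        return p * bonus        # effort raises P(both units) from 0 to p
    if shape == "concave":      # bonus unless NO unit is produced
        return (1 - p) * bonus  # effort raises P(at least one unit) from p to 1
    raise ValueError(shape)

def bonus_for_target(shape, target, p):
    """Bonus size that equalizes the expected benefit of effort across shapes."""
    if shape == "linear":
        return target
    if shape == "convex":
        return target / p        # small p -> large bonus paid with small probability
    if shape == "concave":
        return target / (1 - p)  # large p -> large bonus missed with small probability
    raise ValueError(shape)

p = 0.10                              # one of the three treatments (0.10, 0.50, 0.90)
targets = [0.25, 0.50, 0.75, 1.00]    # hypothetical expected-benefit levels, in dollars
shapes = ["linear", "convex", "concave"]

# The twelve within-individual choices: 4 benefit levels x 3 shapes.
for target, shape in product(targets, shapes):
    bonus = bonus_for_target(shape, target, p)
    assert abs(expected_benefit(shape, bonus, p) - target) < 1e-12
    print(f"target ${target:.2f}  {shape:8s}  bonus ${bonus:.2f}")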
Randomization Method
I use a timer to implement the lotteries. Respondents get to stop a timer that counts up from zero at a time of their choosing. The last two digits of the timer (which counts in milliseconds) are the numbers used to determine their payoffs. In piloting, I have found both that respondents are unable to control these last two digits (which appear uniformly distributed, as expected) and that they do not appear to believe they can do so (e.g., they do not systematically pick risky lotteries that would be especially attractive if they could control the value of the timer better than they in fact can).
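The sketch below illustrates how the last two digits of a stopped timer could resolve such a lottery. The specific threshold rule and function names are illustrative assumptions, not the exact implementation used in the survey.

# Illustrative sketch of resolving a lottery from the last two digits of the
# stopped timer. The threshold rule shown here is an assumption for illustration.

def last_two_digits(timer_ms: int) -> int:
    """Last two digits of the millisecond timer, approximately uniform on 0..99."""
    return timer_ms % 100

def lottery_wins(timer_ms: int, win_probability: float) -> bool:
    """Win if the draw falls below win_probability * 100 (e.g., 0.10 -> digits 00-09)."""
    return last_two_digits(timer_ms) < round(win_probability * 100)

# Example: a respondent stops the timer at 7,342 ms; a 10% lottery then loses.
print(lottery_wins(7342, 0.10))  # last two digits are 42, so this prints False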

To randomly assign respondents to treatments, I use the randomizer function in Qualtrics.
Randomization Unit
Across individuals, I randomize whether p (the parameter described above) is 0.10, 0.50, or 0.90. Within individuals, I randomize the order in which the binary effort choices are presented to them.

Randomization also occurs within individual. In particular, each participant sees all twelve combinations of expected incentive strength and incentive shape in a random order. To account for correlation across choices within individual, I will cluster my standard errors at the individual level where relevant.
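A minimal sketch of this randomization structure follows. The study itself implements it with the Qualtrics randomizer, so the code below is only an analogue, with hypothetical labels for the twelve choices.

# Analogue of the randomization structure (the actual study uses the Qualtrics
# randomizer). Choice labels are hypothetical.
import random

P_TREATMENTS = [0.10, 0.50, 0.90]          # assigned across individuals
CHOICES = [(level, shape)                  # the 12 within-individual choices
           for level in range(1, 5)
           for shape in ("linear", "convex", "concave")]

def assign_respondent(rng):
    p = rng.choice(P_TREATMENTS)           # between-individual: value of p
    order = CHOICES[:]                     # within-individual: order of the 12 choices
    rng.shuffle(order)
    return p, order

rng = random.Random(6654)                  # arbitrary seed for reproducibility
p, order = assign_respondent(rng)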
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
I plan to include 400 individuals per treatment arm, or 1,200 overall.
Sample size: planned number of observations
Each respondent makes 12 effort decisions, so the effective number of observations is 14,400.
Sample size (or number of clusters) by treatment arms
400 people per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The main outcome is binary: whether or not a respondent chooses to exert effort. Call the mean of this variable a_bar. The variance is therefore a_bar*(1-a_bar). Using simulations, I estimate that the planned sample size will be sufficient to detect differences between incentive shapes of 5 percentage points at p < 0.05 with power 0.80. I use simulation because the regression will control for individual fixed effects and the expected strength of incentives, with standard errors clustered at the individual level.
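The sketch below illustrates this kind of simulation-based power check. The baseline probability of choosing the larger number of tasks, the size of individual heterogeneity, and the number of simulation draws are assumptions rather than registered values, and the within-individual demeaning is only an approximation of the registered fixed-effects specification.

# Simulation-style power check for a 5 percentage point difference between
# incentive shapes, with individual fixed effects (absorbed by within-individual
# demeaning) and standard errors clustered by individual. Baseline take-up,
# the individual heterogeneity SD, and the number of draws are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

N_IND, N_SIMS = 1200, 200
SHAPES = ["linear", "convex", "concave"]
LEVELS = [1, 2, 3, 4]
EFFECT = 0.05        # assumed true effect of convex incentives (5 pp)
BASELINE = 0.50      # assumed baseline probability of choosing 6 tasks
SIGMA_IND = 0.15     # assumed SD of individual-level propensity shifts

def simulate_once():
    ind = np.repeat(np.arange(N_IND), len(LEVELS) * len(SHAPES))
    level = np.tile(np.repeat(LEVELS, len(SHAPES)), N_IND)
    shape = np.tile(np.array(SHAPES * len(LEVELS)), N_IND)
    alpha = rng.normal(0, SIGMA_IND, N_IND)[ind]                 # individual effects
    prob = np.clip(BASELINE + alpha + EFFECT * (shape == "convex"), 0, 1)
    y = (rng.random(ind.size) < prob).astype(float)
    df = pd.DataFrame({"ind": ind, "level": level, "shape": shape, "y": y})
    # Demeaning y within individual absorbs the fixed effects; the shape and
    # level dummies are balanced within every individual, so slopes are unchanged.
    df["y_dm"] = df["y"] - df.groupby("ind")["y"].transform("mean")
    fit = smf.ols("y_dm ~ C(shape) + C(level)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["ind"]})
    return fit.pvalues["C(shape)[T.convex]"]

power = np.mean([simulate_once() < 0.05 for _ in range(N_SIMS)])
print(f"Estimated power to detect a {EFFECT:.0%} shape effect: {power:.2f}")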
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2020-05-13
IRB Approval Number
IRB20-0772

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials