The targeted assignment of incentive schemes

Last registered on September 12, 2021

Pre-Trial

Trial Information

General Information

Title
The targeted assignment of incentive schemes
RCT ID
AEARCTR-0008212
Initial registration date
September 10, 2021
Last updated
September 12, 2021, 11:29 PM EDT

Locations

There are documents in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Cologne

Other Primary Investigator(s)

PI Affiliation
University of Cologne
PI Affiliation
Frankfurt School of Finance & Management
PI Affiliation
University of Cologne

Additional Trial Information

Status
In development
Start date
2021-09-13
End date
2022-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A central question in designing optimal policies concerns the assignment of individuals with different observable characteristics to different treatments (policies). We study this question in the context of increasing workers' performance through appropriate incentives. Concretely, we study whether and to what extent the impact of incentive schemes on performance can be improved by targeting the implemented scheme to the characteristics of the respective worker. To do so, we will run a set of large-scale real-effort experiments with approximately 9,000 workers on Amazon MTurk.
External Link(s)

Registration Citation

Citation
Opitz, Saskia et al. 2021. "The targeted assignment of incentive schemes." AEA RCT Registry. September 12. https://doi.org/10.1257/rct.8212-1.0
Experimental Details

Interventions

Intervention(s)
The interventions are seven different incentive schemes, some adapted from DellaVigna and Pope (2018) and some new to this study.
Intervention Start Date
2021-09-13
Intervention End Date
2021-12-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variable is the average effort provided in the different treatments, i.e., the number of points scored in 10 minutes.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Experiment 1:
Subjects have to provide demographic information (age, gender, education). Furthermore, we elicit information on subjects' preferences through the global preference module (Falk et al. 2018), Big-5 personality traits, and other scales. Specifically, we elicit subjects' preferences regarding risk, altruism, reciprocity, social comparison, loss aversion, and competition.
Next, subjects have to work for 10 minutes on a real-effort task similar to DellaVigna and Pope (2018). In this task, subjects press the buttons "a" and "b" alternately on their keyboard; for each correct alternation of button presses, they receive one point. Subjects are randomly assigned to a control group or to one of six different incentive schemes: (1) a piece rate, (2) a social incentive, (3) a goal, (4) a gift, (5) a bonus loss, or (6) real-time feedback. The data for the real-time feedback treatment were obtained from a pilot experiment (n = 209) that was conducted to ensure the clarity of the instructions and to test the code quality of the experimental software.
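The alternating-keypress scoring rule can be sketched as follows. The function name, input format, and the assumption that incorrect presses are simply ignored (rather than penalized) are ours, not taken from the study's software.

```python
def score_keypresses(presses):
    """Count points for correct a/b alternations, as in the real-effort task.

    A point is awarded each time the subject presses the currently
    expected key; any other press is ignored (an assumption for this
    sketch) and the expected key does not advance.
    """
    points = 0
    expected = "a"  # assume the sequence starts with "a"
    for key in presses:
        if key == expected:
            points += 1
            expected = "b" if expected == "a" else "a"
    return points
```

For example, `score_keypresses(list("ababab"))` yields 6 points, while a sequence with stray presses only counts the correct alternations.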

Experiment 2:
In the second step, we will run a second round of experiments on MTurk with a different set of subjects where we first again elicit the respective workers’ characteristics (the same characteristics as in the first experiment). In a control group, all workers will work under the scheme that generated the highest average performance in the experiment of the first round. In the two treatment groups, workers will be exposed to the scheme that is predicted to yield the highest performance conditional on the specific characteristics of each individual worker. The treatment groups differ in the algorithm used for the prediction. The key expected insights of the experiment are thus (i) whether and (ii) to what extent algorithmic assignment of the specific incentive scheme adopted can improve performance. Details regarding the algorithms used will be preregistered before the start of the second experiment.

General Experimental Design:
Before participating, subjects will be provided with a brief description of the task (complete a survey and a working task) as well as with the technical requirements (a physical keyboard) and guaranteed payment upon successful submission ($1 flat-pay + $1.50 guaranteed minimum bonus). Furthermore, they will be asked for their consent to participate in the study from which they know they can withdraw at any time.
The final sample will exclude subjects who:
(1) do not complete the MTurk task within 90 minutes of starting;
(2) are not approved;
(3) do not score at least one point;
(4) score 4,000 or more points (since this would indicate cheating);
(5) score 400 or more points in one minute (since this would indicate cheating).

Restrictions (2)-(4) are the same as in DellaVigna and Pope (2018). Restriction (1) is similar to the corresponding restriction in DellaVigna and Pope (2018), but the maximum completion time is longer because our study includes a survey. Restriction (5) is restriction (4) applied at the level of individual minutes, for which we will also collect data.
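Applied to tabular data, the five exclusion rules could look as follows. The column names and the DataFrame layout are illustrative assumptions, not the study's actual data schema.

```python
import pandas as pd


def apply_exclusions(df):
    """Keep only subjects who pass exclusion rules (1)-(5)."""
    keep = (
        (df["minutes_to_complete"] <= 90)      # (1) finished within 90 minutes
        & df["approved"]                       # (2) submission approved on MTurk
        & (df["total_points"] >= 1)            # (3) scored at least one point
        & (df["total_points"] < 4000)          # (4) fewer than 4,000 points overall
        & (df["max_points_per_minute"] < 400)  # (5) fewer than 400 points in any minute
    )
    return df[keep]
```

Each rule is a boolean mask, so the filter is easy to audit rule by rule before combining them.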
Experimental Design Details
Not available
Randomization Method
In experiment 1, the assignment to the treatments is determined as follows. We construct strata based on subjects' entry time into the study, i.e., the first seven subjects to click on the link and thus enter the study form one stratum, the next seven subjects form another stratum, and so on. Within each stratum, treatments 1 to 7 are assigned in random order such that each treatment is assigned exactly once.
In experiment 2, the same process determines the assignment to the treatments. As experiment 2 consists of three different treatments, the strata also consist of three subjects each in this part of the study.
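The stratified assignment described above amounts to block randomization with block size equal to the number of treatments. A minimal sketch, with function and treatment names of our own choosing:

```python
import random


def assign_treatments(n_subjects, treatments, seed=None):
    """Assign subjects (in arrival order) to treatments via strata.

    Subjects are grouped into consecutive strata of size len(treatments);
    within each stratum, the treatments appear once each, in random order.
    """
    rng = random.Random(seed)
    assignment = []
    for start in range(0, n_subjects, len(treatments)):
        block = list(treatments)
        rng.shuffle(block)
        # The final stratum may be incomplete if n_subjects is not a
        # multiple of the block size.
        assignment.extend(block[: n_subjects - start])
    return assignment


# Experiment 1: control plus six incentive schemes, strata of seven.
labels = ["control", "piece_rate", "social", "goal", "gift",
          "bonus_loss", "feedback"]
```

For experiment 2, the same function would be called with a three-element treatment list, yielding strata of three subjects each.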
Randomization Unit
Individual subject
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The number of clusters is the same as the number of observations (please see below).
Sample size: planned number of observations
In our first experiment, ideally 6,300 individual subjects (900 per treatment) complete the survey and the task. Since we have to account for cases in which subjects must be excluded from the analysis (see above), we plan to advertise the task to 6,600 subjects. We conclude sampling when we reach 6,600 subjects; if this takes longer than three weeks, we stop as soon as we reach at least 4,200 subjects (600 per treatment). For the second experiment, we will advertise for at least 2,000 subjects. The exact number may be larger depending on the power analysis we will conduct after the first experiment and will be preregistered before the start of the second experiment.
Sample size (or number of clusters) by treatment arms
First experiment: ideally 900 individual subjects in each treatment (at least 600 individual subjects; see above)
Second experiment: at least 600 individual subjects in each treatment (exact number will be preregistered before the start of the second experiment; see above)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Cologne Ethics Board
IRB Approval Date
2021-07-06
IRB Approval Number
210022SO