The targeted assignment of incentive schemes – Part 2

Last registered on November 01, 2021

Pre-Trial

Trial Information

General Information

Title
The targeted assignment of incentive schemes – Part 2
RCT ID
AEARCTR-0008440
Initial registration date
October 29, 2021

First published
November 01, 2021, 12:45 PM EDT

Locations

Region

Primary Investigator

Affiliation
University of Cologne

Other Primary Investigator(s)

PI Affiliation
University of Cologne
PI Affiliation
Frankfurt School of Finance & Management
PI Affiliation
University of Cologne

Additional Trial Information

Status
In development
Start date
2021-09-13
End date
2022-12-31
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
A central question in designing optimal policies concerns the assignment of individuals with different observable characteristics to different treatments (policies). We study this question in the context of increasing workers’ performance through appropriate incentives. Concretely, we study whether and to what extent the impact of incentive schemes on performance can be improved by targeting the assignment of the implemented scheme to the characteristics of the respective worker. To do so, we will run a set of large-scale real-effort experiments with approximately 6,000 workers on Amazon MTurk (see AEARCTR-0008212 for the first part of the study).
External Link(s)

Registration Citation

Citation
Opitz, Saskia et al. 2021. "The targeted assignment of incentive schemes – Part 2." AEA RCT Registry. November 01. https://doi.org/10.1257/rct.8440-1.0
Experimental Details

Interventions

Intervention(s)
The interventions are three different incentive schemes, some borrowed from DellaVigna and Pope (2018) and some newly introduced in this study.
Intervention Start Date
2021-11-02
Intervention End Date
2022-01-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variable is the average effort provided in the different treatments, i.e., the number of points scored in the 10-minute real-effort task.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
For this experiment, we draw on data from our previous RCT (see AEARCTR-0008212). As preregistered there (Experiment 1), we ran an experiment on MTurk in which we not only asked subjects for demographic information (age, gender, education) but also elicited Big-5 personality traits as well as preferences regarding risk, altruism, reciprocity, social comparison, loss aversion, and competition. After this elicitation, subjects had to work for 10 minutes on a button-pressing task similar to DellaVigna and Pope (2018). Subjects were randomly assigned to a control group or to one of six different incentive schemes: (1) a piece rate, (2) a social incentive, (3) a goal, (4) a gift, (5) a bonus loss, and (6) real-time feedback.

Based on the data of this first experiment, we will run a second round of experiments with a different set of subjects on MTurk. We already mentioned this second experiment in the preregistration of the first experiment and will now preregister it in detail.

In the second experiment, we again first elicit the subjects’ characteristics (the same characteristics as in the first experiment). We will randomly allocate the subjects to one of three groups: one of two control groups or the treatment group. The first control group is the same as in the first experiment, i.e., no incentive will be provided. In the second control group, all workers will work under the scheme that generated the highest average performance in the first-round experiment, i.e., bonus loss. In the treatment group, each worker will be exposed to the scheme that is predicted to yield the highest performance conditional on that worker’s specific characteristics. For the predictions, we will use an indirect estimation approach based on random forests trained on the data of the first experiment. We will restrict the set of candidate schemes to three of the six schemes from experiment 1: (1) bonus loss, (2) social incentive, and (3) real-time feedback. The key expected insights of the experiment are (i) whether and (ii) to what extent algorithmically assigning incentive schemes to individual workers can improve performance.
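As an illustration of the assignment step (not the authors' actual estimation code), the sketch below fits one random forest per candidate scheme on the experiment-1 data and assigns each experiment-2 worker the scheme with the highest predicted score. All column names, the feature list, and the hyperparameters are placeholder assumptions.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Candidate schemes and worker characteristics; names are hypothetical placeholders.
CANDIDATE_SCHEMES = ["bonus_loss", "social_incentive", "real_time_feedback"]
FEATURES = ["age", "gender", "education",
            "big5_o", "big5_c", "big5_e", "big5_a", "big5_n",
            "risk", "altruism", "reciprocity",
            "social_comparison", "loss_aversion", "competitiveness"]

def fit_scheme_models(exp1: pd.DataFrame) -> dict:
    # Fit one random forest per candidate scheme on experiment-1 outcomes.
    models = {}
    for scheme in CANDIDATE_SCHEMES:
        subset = exp1[exp1["scheme"] == scheme]
        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        rf.fit(subset[FEATURES], subset["points"])  # points scored in 10 minutes
        models[scheme] = rf
    return models

def assign_schemes(models: dict, workers: pd.DataFrame) -> pd.Series:
    # For each experiment-2 worker, pick the scheme with the highest predicted score.
    predictions = pd.DataFrame(
        {scheme: rf.predict(workers[FEATURES]) for scheme, rf in models.items()},
        index=workers.index,
    )
    return predictions.idxmax(axis=1)

The registration describes the method only as an indirect estimation approach based on random forests trained on the first experiment's data, so the per-scheme regression structure above is just one plausible reading of that approach.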

GENERAL EXPERIMENTAL DESIGN
Before participating, subjects will be provided with a brief description of the task (completing a survey and a work task), the technical requirements (a physical keyboard), and the payment guaranteed upon successful submission ($1 flat pay + $1.50 guaranteed minimum bonus). Furthermore, they will be asked for their consent to participate in the study, from which they can withdraw at any time.
The final sample will exclude subjects who:
(1) do not complete the MTurk task within 90 minutes of starting;
(2) are not approved;
(3) do not score at least one point;
(4) score 4,000 or more points (since this would indicate cheating);
(5) score 400 or more points within a single minute (since this would indicate cheating).

Restrictions (2)-(4) are the same as in DellaVigna and Pope (2018). Restriction (1) is similar to theirs; however, the maximum completion time is longer because our study includes a survey. Restriction (5) applies the logic of restriction (4) to individual minutes, for which we will also collect data.
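For concreteness, the score-based exclusions (3)-(5) could be applied as in the sketch below; the column names and the per-minute data layout are assumptions, not taken from the study materials.

import pandas as pd

def apply_score_exclusions(results: pd.DataFrame) -> pd.DataFrame:
    # Assumes total scores in "points" and per-minute scores in "points_min_1" ... "points_min_10".
    minute_cols = [f"points_min_{m}" for m in range(1, 11)]
    keep = (
        (results["points"] >= 1)                    # (3) scored at least one point
        & (results["points"] < 4000)                # (4) total below the cheating threshold
        & (results[minute_cols].max(axis=1) < 400)  # (5) every single minute below the threshold
    )
    return results[keep]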
Experimental Design Details
Randomization Method
The assignment to the treatments is determined as follows. First, we randomly assign subjects either to the first control group (i.e., no incentive scheme) or to receiving an incentive scheme. For the subjects who receive an incentive scheme, we construct strata based on their entry time into the study: the first two subjects to click on the link and thus enter the study form one stratum, the next two subjects form the next stratum, and so on. Within each stratum, we randomly assign one individual to the treatment that performed best on average in experiment 1 and the other to the treatment suggested by the algorithm.
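A minimal sketch of this pairwise, entry-time-stratified step, assuming subjects are indexed by entry order; the arm labels are placeholders.

import random

def assign_within_pairs(n_subjects: int, seed: int = 0) -> list:
    # Pair consecutive entrants and randomly split each pair between the two arms.
    rng = random.Random(seed)
    arms = []
    for stratum_start in range(0, n_subjects, 2):
        pair = ["control_bonus_loss", "algorithmic_assignment"]
        rng.shuffle(pair)
        arms.extend(pair[: n_subjects - stratum_start])  # truncate if the last stratum has a single entrant
    return arms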
Randomization Unit
Individual subject
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The number of clusters is the same as the number of observations (please see below).
Sample size: planned number of observations
We aim for 6,200 subjects (200 in the control group without incentive and 3,000 each in the control group with incentive and in the treatment group) to complete the survey and the task. As we have to account for cases in which subjects must be excluded from the analysis (see above), we plan to advertise the task to 6,800 subjects.
Sample size (or number of clusters) by treatment arms
We aim for 200 subjects in the control group without incentive, 3,000 subjects in the control group with incentive, and 3,000 subjects in the treatment group (see above).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
A power analysis based on the data of the first experiment shows that, given the planned sample size stated above, the experiment is sufficiently powered.
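Purely for illustration, a power check of this kind could be run as below; the standardized effect size is a placeholder and is not an estimate from the experiment-1 data.

from statsmodels.stats.power import TTestIndPower

# Two-sample comparison of mean scores between the incentive control arm and the
# algorithmic-assignment arm, with 3,000 subjects planned per arm.
analysis = TTestIndPower()
power = analysis.power(effect_size=0.1,  # placeholder Cohen's d, NOT from the data
                       nobs1=3000,
                       ratio=1.0,
                       alpha=0.05)
print(f"Power for d = 0.1 with 3,000 subjects per arm: {power:.2f}")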
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Cologne Ethics Board
IRB Approval Date
2021-07-06
IRB Approval Number
210022SO

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials