The Selection and Incentive Effects of Money and Recognition for Problem Solvers

Last registered on October 04, 2017

Pre-Trial

Trial Information

General Information

Title
The Selection and Incentive Effects of Money and Recognition for Problem Solvers
RCT ID
AEARCTR-0002303
Initial registration date
October 04, 2017

First published
October 04, 2017, 12:51 PM EDT

Locations

Primary Investigator

Affiliation
Technical University of Munich

Other Primary Investigator(s)

PI Affiliation
Harvard University
PI Affiliation
UCLA Anderson School of Management

Additional Trial Information

Status
In development
Start date
2017-10-05
End date
2018-08-17
Secondary IDs
Abstract
This field experiment employs a two-stage design to study the selection and treatment effects of recognition benchmarked against money. The context is a large online crowdsourcing platform where knowledge workers engage in coding challenges. The incentives consist of a fixed wage and a recognition (award) treatment, both of which are conditional on completion but not on performance.
External Link(s)

Registration Citation

Citation
Gallus, Jana, Karim Lakhani and Eric Zimmerling. 2017. "The Selection and Incentive Effects of Money and Recognition for Problem Solvers." AEA RCT Registry. October 04. https://doi.org/10.1257/rct.2303-1.0
Former Citation
Gallus, Jana, Karim Lakhani and Eric Zimmerling. 2017. "The Selection and Incentive Effects of Money and Recognition for Problem Solvers." AEA RCT Registry. October 04. https://www.socialscienceregistry.org/trials/2303/history/22021
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The treatment groups will be:
(1) a standard invitation (baseline);
(2) a $5 reward, which the platform providers and a subsample of frequent users have identified as a fair wage in the context of the algorithm competitions in question;
(3) recognition in the form of an award, designed to yield generalizable results.

Intervention Start Date
2017-10-12
Intervention End Date
2018-08-17

Primary Outcomes

Primary Outcomes (end points)
(1) Participation probability and performance in target algorithm competition
(2) Platform engagement after target algorithm competition
Primary Outcomes (explanation)
(1) Participation probability: sign-up for, and completion of, the target algorithm competition
(2) Platform engagement / retention (main DV): any interaction with the platform, including opening emails and logging in on the platform
Effort:
- Time on platform [if organizational partner sets this up as planned]
- Number of “Single Round Matches” and for-profit competitions completed
- Coordination work and cooperation: e.g., forum posts, survey participation
Performance:
- Change in Topcoder rating
- Quality of contributions [if organizational partner can provide measurements for given challenges]

We will explore whether subjects become more likely to engage in new segments, but this will not be a major outcome variable of interest for this analysis.
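
As an illustration of how the engagement and effort outcomes listed above might be constructed from platform activity logs, a minimal sketch follows; the event types, column names, and dates are assumptions for exposition, not the partner's actual data schema.

```python
import pandas as pd

# Hypothetical activity log; the schema is an assumption for illustration only.
events = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 3],
    "event_type": ["email_open", "login", "srm_completed", "forum_post", "login"],
    "timestamp":  pd.to_datetime(["2017-10-20", "2017-10-21", "2017-10-22",
                                  "2017-11-01", "2017-12-05"]),
})

TARGET_DATE = pd.Timestamp("2017-10-19")  # placeholder date of the target competition

# Keep only activity after the target competition and aggregate per subject.
post = events[events["timestamp"] > TARGET_DATE]
engagement = post.groupby("user_id").agg(
    any_interaction=("event_type", "size"),                              # any contact with the platform
    srm_completed=("event_type", lambda s: (s == "srm_completed").sum()),
    forum_posts=("event_type", lambda s: (s == "forum_post").sum()),
)
engagement["any_interaction"] = (engagement["any_interaction"] > 0).astype(int)
print(engagement)
```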

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This field experiment employs a two-stage randomized design that involves monetary and non-monetary treatments both as ex ante incentives and ex post rewards.
Experimental Design Details
The experiment has two stages: in the first stage, subjects receive a recruitment message announcing either a monetary or a non-monetary (award) incentive for participating in an algorithm competition; the control group receives only the invitation. In the second stage, after the target activity is completed, subjects receive the previously announced incentive, and a random subset receives a surprise reward. The design is similar to that of Karlan & Zinman (2009), who used randomization along three dimensions to distinguish between hidden-information and hidden-action frictions in a consumer credit field experiment (see also, e.g., Ashraf, Berry & Shapiro 2010). Randomization is stratified on prior participation in algorithm competitions (binary) and the last login date (binary: before or after January 1, 2016).
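
As a purely illustrative sketch of the stratified stage-1 assignment described above (not the study team's actual procedure), the code below randomizes subjects to the three arms within the four strata formed by the two binary variables; the column names, example data, and seed are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2303)  # arbitrary seed chosen for reproducibility

# Hypothetical subject table; all column names and values are illustrative assumptions.
subjects = pd.DataFrame({
    "user_id": range(12),
    "prior_algo_participation": [0, 1] * 6,     # binary stratum: prior participation
    "last_login_after_2016": [0, 0, 1, 1] * 3,  # binary stratum: last login before/after Jan 1, 2016
})

ARMS = ["control", "money", "award"]

def assign_stage1(df):
    """Assign each subject to one of the three arms, randomizing within the
    four strata defined by the two binary stratification variables."""
    df = df.copy()
    df["arm"] = None
    strata = df.groupby(["prior_algo_participation", "last_login_after_2016"]).groups
    for _, idx in strata.items():
        shuffled = rng.permutation(np.asarray(idx))
        for i, subject_idx in enumerate(shuffled):
            df.loc[subject_idx, "arm"] = ARMS[i % len(ARMS)]
    return df

print(assign_stage1(subjects).groupby("arm").size())
```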

Our experimental context is an online innovation crowdsourcing platform that hosts algorithm competitions. These challenges are timed competitions in which all contestants compete online and are given the same problems to solve under the same time constraints. The activity of participants and the processes on the platform will not be altered.

Operationally, the recruitment email will be sent out one week prior to the target algorithm competition. The recruitment email will allow participants to sign up for the target algorithm competition through an online link. Additionally, participants can sign up for the target algorithm competition during the regular registration phase, which opens three hours before the beginning of the target algorithm competition. As is standard practice on the platform, reminder messages will be sent out 24 hours prior to the target algorithm competition; these will restate the treatment to which the subject was initially assigned.

Subjects who complete the target algorithm competition by submitting a solution will proceed to the second-stage randomization. In the second stage, we will randomly subdivide the groups of participants in the control, monetary and non-monetary incentive arms, respectively. Half of the subjects in the monetary incentive arm will additionally be provided with the non-monetary reward. Similarly, half of the subjects in the non-monetary incentive arm will be randomly selected to additionally receive the monetary reward. Half of the subjects in the stage-1 control arm will receive the monetary and non-monetary rewards by surprise. The remaining half of the stage-1 control arm will receive the standard email message thanking them for their participation.
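
Continuing the previous sketch under the same assumptions, the stage-2 split of completers could look roughly as follows; the `completed` indicator and the stage-2 labels are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2303)

def assign_stage2(df):
    """Within each stage-1 arm, randomly split completers in half:
    money arm -> half additionally receive the award; award arm -> half
    additionally receive money; control arm -> half receive both rewards
    by surprise, the rest a thank-you message.
    Expects columns 'arm' (stage-1 assignment) and 'completed' (0/1)."""
    df = df.copy()
    df["stage2"] = None
    completers = df[df["completed"] == 1]
    extras = {"money": "plus_award", "award": "plus_money", "control": "surprise_both"}
    for arm, idx in completers.groupby("arm").groups.items():
        shuffled = rng.permutation(np.asarray(idx))
        half = len(shuffled) // 2
        df.loc[shuffled[:half], "stage2"] = extras[arm]
        df.loc[shuffled[half:], "stage2"] = "as_announced" if arm != "control" else "thank_you_only"
    return df
```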

We will run these target algorithm competitions approximately every other week, up to the point where we reach the minimum required sample size at stage 2 of the experiment. The schedule of competitions will be kept as constant as possible (e.g., day of the week and time of day, with similar intervals between consecutive competitions), but will need to accommodate the field partner’s constraints. We will host at least one more algorithm competition after the necessary sample size has been reached, without sending out treatment invitations, in order to give subjects in the last treatment round the same chance of participating in algorithm competitions. The platform will naturally continue to provide algorithm competitions after that. Problem difficulty will be held constant across all the algorithm competitions.

We expect recruitment into the Control arms to be the most difficult. Hence, if we do not reach a minimum of 100 subjects in each of the two Control arms who sign up for the first target algorithm competition, we will merge both Control arms. We will then retain only the treatment arm in which subjects receive the surprise monetary and non-monetary rewards upon completion of the algorithm competition. Current algorithm competition participation rates can be assumed to provide a reasonable baseline estimate.

Randomization Method
Randomization done in office by a computer
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
2400 individual participants who receive the stage 2 treatment
Sample size: planned number of observations
2400 individual participants
Sample size (or number of clusters) by treatment arms
Control: ≥800 subjects, split into 2 subgroups in stage 2
Money: ≥800 subjects, split into 2 subgroups in stage 2
Award: ≥800 subjects, split into 2 subgroups in stage 2
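
For orientation only, the planned arm sizes can be translated into a rough minimum detectable difference in completion rates with a standard two-proportion approximation; the 20% baseline rate used below is a placeholder assumption, not a figure from this registration.

```python
from scipy.stats import norm

def mde_two_proportions(n_per_arm, p_baseline, alpha=0.05, power=0.80):
    """Approximate minimum detectable difference in proportions for a
    two-sided test comparing two equally sized arms."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * (2 * p_baseline * (1 - p_baseline) / n_per_arm) ** 0.5

# 800 subjects per arm; the 20% baseline completion rate is a placeholder assumption.
print(round(mde_two_proportions(800, 0.20), 3))  # ~0.056, i.e. about 5.6 percentage points
```
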
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Institutional Review Board (IRB)
IRB Approval Date
2017-08-14
IRB Approval Number
IRB17-1042

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials