The Selection and Incentive Effects of Money and Recognition for Problem Solvers
Last registered on October 04, 2017


Trial Information
General Information
The Selection and Incentive Effects of Money and Recognition for Problem Solvers
Initial registration date
October 04, 2017
Last updated
October 04, 2017 12:51 PM EDT
Primary Investigator
Technical University of Munich
Other Primary Investigator(s)
PI Affiliation
Harvard University
PI Affiliation
UCLA Anderson School of Management
Additional Trial Information
In development
Start date
End date
Secondary IDs
This field experiment employs a two-stage design to study the selection and treatment effects of recognition benchmarked against money. The setting is a large online crowdsourcing platform where knowledge workers compete in coding challenges. The incentives consist of a fixed wage and a recognition (award) treatment, both of which are conditional on completion rather than on performance.
External Link(s)
Registration Citation
Gallus, Jana, Karim Lakhani and Eric Zimmerling. 2017. "The Selection and Incentive Effects of Money and Recognition for Problem Solvers." AEA RCT Registry. October 04.
Former Citation
Gallus, Jana et al. 2017. "The Selection and Incentive Effects of Money and Recognition for Problem Solvers." AEA RCT Registry. October 04.
Sponsors & Partners

There are documents in this trial that are unavailable to the public.
Experimental Details
The treatment groups will be:
(1) a standard invitation (baseline)
(2) a $5 reward, which the platform providers and a subsample of frequent users have identified as a fair wage in the context of the algorithm competitions in question
(3) recognition in the form of an award. The award was designed to yield generalizable results.

Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
(1) Participation probability and performance in target algorithm competition
(2) Platform engagement after target algorithm competition
Primary Outcomes (explanation)
(1) Participation probability: sign-up, completion of target algorithm competition
(2) Platform engagement / retention main DV: any interaction with platform, including opening of emails and logging in on platform
Time on platform [if organizational partner sets this up as planned]
Number of “Single Round Matches” and for-profit competitions completed
Coordination work and cooperation: e.g., forum posts, survey participation
Change in Topcoder rating
Quality of contributions [if organizational partner can provide measurements for given challenges]

We will explore whether subjects become more likely to engage in new segments, but this will not be a major outcome variable of interest for this analysis.
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
This field experiment employs a two-stage randomized design that involves monetary and non-monetary treatments both as ex ante incentives and ex post rewards.
Experimental Design Details
The experiment has two stages: in the first stage, subjects receive a recruitment message announcing either a monetary or non-monetary (award) incentive for participating in an algorithm competition; the control group receives only the invitation. In the second stage, after the target activity is completed, subjects receive the previously announced incentive, and a random subset will receive a surprise reward. The design is similar to Karlan & Zinman (2009), who used randomization along three dimensions to distinguish between hidden information and hidden action frictions in a consumer credit field experiment (see also, e.g., Ashraf, Berry & Shapiro 2010). Randomization is stratified on prior participation in algorithm competitions (binary) and last login date (binary: before or after January 1, 2016).
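The stage-1 stratified randomization described above can be sketched as follows. This is a minimal illustration, not the study's actual assignment code; the field names (`prior_participation`, `login_after_2016`), the seed, and the within-stratum balancing rule are assumptions.

```python
import random

# Hypothetical sketch of the stage-1 stratified randomization.
# Strata: prior participation (binary) x last login before/after Jan 1, 2016.
ARMS = ["control", "money", "award"]

def assign_stage1(subjects, seed=2017):
    """Randomly assign subjects to the three stage-1 arms within strata."""
    rng = random.Random(seed)
    strata = {}
    for s in subjects:
        key = (s["prior_participation"], s["login_after_2016"])
        strata.setdefault(key, []).append(s)
    for members in strata.values():
        rng.shuffle(members)
        for i, s in enumerate(members):
            s["arm"] = ARMS[i % len(ARMS)]  # balanced assignment within stratum
    return subjects
```

Stratifying before assignment guarantees that each arm contains a near-equal share of experienced and recently active users, rather than relying on chance balance.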

Our experimental context is an online innovation crowdsourcing platform that hosts algorithm competitions. These challenges are timed competitions in which all contestants compete online and are given the same problems to solve under the same time constraints. The activity of participants and the processes on the platform will not be altered.

Operationally, the recruitment email will be sent out one week prior to the target algorithm competition. The recruitment email will allow participants to sign up for the target algorithm competition through an online link. Additionally, participants can sign up during the regular registration phase, which opens three hours before the competition begins. Following the platform provider's standard practice, reminder messages will be sent out 24 hours prior to the target algorithm competition; these will contain the treatment to which the subject was initially assigned.

Subjects who complete the target algorithm competition by submitting a solution will proceed to the second-stage randomization. In the second stage, we will randomly subdivide the groups of participants in the control, monetary and non-monetary incentive arms, respectively. Half of the subjects in the monetary incentive arm will additionally be provided with the non-monetary reward. Similarly, half of the subjects in the non-monetary incentive arm will be randomly selected to additionally receive the monetary reward. Half of the subjects in the stage-1 control arm will receive the monetary and non-monetary rewards by surprise. The remaining half of the stage-1 control arm will receive the standard email message thanking them for their participation.
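The stage-2 split described above can be sketched in the same way. Again this is an illustrative assumption, not the study's code: within each stage-1 arm, half of the completers are randomly flagged for the additional (or, in the control arm, surprise) reward.

```python
import random

# Hypothetical sketch of the stage-2 randomization among completers.
# control arm:  half receive surprise money + award, half a thank-you email;
# money arm:    half additionally receive the award;
# award arm:    half additionally receive the money.
def assign_stage2(completers, seed=42):
    """Within each stage-1 arm, randomly select half for the extra reward."""
    rng = random.Random(seed)
    by_arm = {}
    for s in completers:
        by_arm.setdefault(s["arm"], []).append(s)
    for members in by_arm.values():
        rng.shuffle(members)
        half = len(members) // 2
        for i, s in enumerate(members):
            s["surprise_reward"] = i < half
    return completers
```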

We will run these target algorithm competitions approximately every other week until we reach the minimum required sample size at stage 2 of the experiment. The schedule of competitions will be kept as constant as possible (e.g., day of week and time of day, with similar intervals between competitions), but will need to accommodate the field partner's constraints. We will host at least one more algorithm competition after the necessary sample size has been reached, without sending out treatment invitations, in order to give subjects in the last treatment round the same chance of participating in algorithm competitions. The platform will naturally continue to provide algorithm competitions after that. Problem difficulty will be held constant across all the algorithm competitions.

We expect recruitment into the Control arms to be the most difficult. Hence, if fewer than 100 subjects in each of the two Control arms sign up for the first target algorithm competition, we will merge both Control arms. We will then retain only the treatment arm in which subjects receive the surprise monetary and non-monetary rewards upon completion of the algorithm competition. Current algorithm competition participation rates provide a reasonable baseline estimate for these projections.

Randomization Method
Randomization done in office by a computer
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
2400 individual participants who receive the stage 2 treatment
Sample size: planned number of observations
2400 individual participants
Sample size (or number of clusters) by treatment arms
Control: ≥800 subjects, split into 2 subgroups in stage 2
Money: ≥800 subjects, split into 2 subgroups in stage 2
Award: ≥800 subjects, split into 2 subgroups in stage 2
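As a rough illustration of what the ≥800-subjects-per-arm figures above imply, a standard normal-approximation formula gives the minimum detectable effect (MDE) for comparing participation rates between two equal-sized arms. The parameter values (two-sided α = 0.05, power = 0.80, worst-case baseline rate 0.5) are assumptions for illustration, not the registered power calculation.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_proportions(n_per_arm, p=0.5, alpha=0.05, power=0.80):
    """Normal-approximation MDE for a two-arm comparison of proportions:
    (z_{alpha/2} + z_{power}) * sqrt(2 * p * (1 - p) / n)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return (z_alpha + z_beta) * sqrt(2 * p * (1 - p) / n_per_arm)

# With n = 800 per arm and these assumptions, the detectable difference in
# participation rates is roughly 7 percentage points.
print(round(mde_two_proportions(800), 3))
```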
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
The Institutional Review Board (IRB) of the Harvard University-Area
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)