Experimental Design Details
The experiment has two stages. In the first stage, subjects receive a recruitment message announcing either a monetary or a non-monetary (award) incentive for participating in an algorithm competition; the control group receives only an invitation. In the second stage, after the target activity is completed, subjects receive the previously announced incentive, and a random subset additionally receives a surprise reward. The design is similar to that of Karlan & Zinman (2009), who used randomization along three dimensions to distinguish between hidden-information and hidden-action frictions in a consumer credit field experiment (see also, e.g., Ashraf, Berry & Shapiro 2010). Randomization is stratified on prior participation in algorithm competitions (binary) and last login date (binary: before or after January 1, 2016).
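The stage-1 stratified assignment described above can be sketched as follows. This is an illustrative sketch only, not the registered randomization code; the variable names, the `subject` record fields, and the three-arm labels are assumptions.

```python
import random

# Illustrative arm labels (assumed, not taken from the registration).
ARMS = ["control", "monetary", "non_monetary"]

def stratum(subject):
    # Strata: prior participation in algorithm competitions (binary)
    # crossed with last login before/after Jan 1, 2016 (binary).
    return (subject["prior_participation"], subject["login_after_cutoff"])

def assign_arms(subjects, seed=0):
    """Randomly assign subjects to arms, balanced within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for s in subjects:
        strata.setdefault(stratum(s), []).append(s)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        # Cycle through the arms so each stratum is split as evenly as possible.
        for i, s in enumerate(members):
            assignment[s["id"]] = ARMS[i % len(ARMS)]
    return assignment
```

Assigning within strata (rather than over the full pool) guarantees that each combination of prior participation and login recency is represented nearly equally across the three arms.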
Our experimental context is an online innovation crowdsourcing platform that hosts algorithm competitions. These are timed competitions in which all contestants compete online and are given the same problems to solve under the same time constraints. Participants' activity and the platform's processes will not otherwise be altered.
Operationally, the recruitment email will be sent out one week prior to the target algorithm competition and will allow participants to sign up for it through an online link. Participants can also sign up during the regular registration phase, which opens three hours before the competition begins. As is standard practice for the platform provider, reminder messages will be sent out 24 hours prior to the competition; these will restate the treatment to which the subject was initially assigned.
Subjects who complete the target algorithm competition by submitting a solution proceed to the second-stage randomization. In the second stage, we will randomly subdivide the participants in the control, monetary, and non-monetary incentive arms, respectively. Half of the subjects in the monetary incentive arm will additionally receive the non-monetary reward. Similarly, half of the subjects in the non-monetary incentive arm will be randomly selected to additionally receive the monetary reward. Half of the subjects in the stage-1 control arm will receive both the monetary and the non-monetary rewards by surprise; the remaining half will receive the standard email message thanking them for their participation.
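The stage-2 splits described above can be sketched as below. Again this is only an illustration under assumed names: the arm labels and reward labels are hypothetical, and the actual registered procedure may differ operationally.

```python
import random

def stage2_assign(completer_ids, stage1_arm, seed=0):
    """Split completers of one stage-1 arm in half at random.

    Returns a dict mapping subject id -> set of rewards received in stage 2.
    Arm labels ("monetary", "non_monetary", "control") are assumptions.
    """
    rng = random.Random(seed)
    ids = list(completer_ids)
    rng.shuffle(ids)
    surprise = set(ids[: len(ids) // 2])  # randomly chosen half

    rewards = {}
    for sid in ids:
        if stage1_arm == "monetary":
            # Announced monetary reward; half also get the award by surprise.
            rewards[sid] = {"monetary"} | ({"award"} if sid in surprise else set())
        elif stage1_arm == "non_monetary":
            # Announced award; half also get the monetary reward by surprise.
            rewards[sid] = {"award"} | ({"monetary"} if sid in surprise else set())
        else:
            # Control: half get both rewards by surprise, half a thank-you only.
            rewards[sid] = {"monetary", "award"} if sid in surprise else set()
    return rewards
```

Note that the split is applied within each stage-1 arm separately, so every announced incentive is always delivered and only the surprise component is randomized.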
We will run these target algorithm competitions approximately every other week until we reach the minimum required sample size at stage 2 of the experiment. The schedule will be kept as constant as possible (e.g., day of week, time of day, and similar intervals between competitions), subject to the field partner's constraints. After the necessary sample size has been reached, we will host at least one more algorithm competition without sending out treatment invitations, so that subjects in the last treatment round have the same chance of participating in subsequent competitions; the platform will naturally continue to offer algorithm competitions thereafter. Problem difficulty will be held constant across all the algorithm competitions.
We expect recruitment into the two Control arms to be the most difficult. Hence, if fewer than 100 subjects in each of the two Control arms sign up for the first target algorithm competition, we will merge the Control arms, retaining only the arm in which subjects receive the surprise monetary and non-monetary rewards upon completing the algorithm competition. Current participation rates in algorithm competitions provide a reasonable baseline for this estimate.