
Fields Changed

Registration

Field Before After
Trial Status
Before: in_development
After: completed
Abstract
Before: This project examines how preferences for rewards for innovation vary with individuals' perceptions of their relative capabilities, and how innovators' performance is affected by the match between reward structures and preferences. We study this by randomly varying reward structures among individuals who have indicated a desire to participate in an innovative task, and by randomly altering beliefs about their relative quality across reward structures.
After: Successful innovation is essential for the survival and growth of organizations, but how best to incentivize innovation is poorly understood. We compare how two common incentive schemes affect innovative performance in a field experiment run in partnership with a large life sciences company. We find that a winner-takes-all compensation scheme generates significantly more novel innovation relative to a compensation scheme that offers the same total compensation, but shared across the ten best innovations. Moreover, we find that the elasticity of creativity with respect to compensation schemes is much larger for teams than for individual innovators.
Last Published
Before: March 25, 2019 10:55 AM
After: January 28, 2020 01:51 PM
Study Withdrawn No
Intervention Completion Date April 29, 2019
Data Collection Complete Yes
Final Sample Size: Number of Clusters (Unit of Randomization) Total number of participants: 132
Was attrition correlated with treatment status? No
Final Sample Size: Total Number of Observations 132 participants (93 individual participants, 39 teams)
Final Sample Size (or Number of Clusters) by Treatment Arms 66 in multiple prize arm, 66 in single prize arm
Data Collection Completion Date May 30, 2019
Intervention (Public) Our study implements two treatments within a digital hackathon to test the innovative performance implications of a winner-takes-all relative to a multiple prize reward structure. To do this, we randomly assign reward structures within a digital hackathon. Our treatment groups are as follows: 1) Winner-takes-all reward structure 2) Multiple prizes reward structure
Primary Outcomes (End Points)
Before: 1) whether or not a participant submits a project to the contest; 2) the quality of projects conditional on submitting
After: 1) whether or not a participant submits a project to the contest; 2) the quality of projects conditional on submitting. Quality will be measured by the combined ranking of the five categories being judged, and by the individual categories, with a particular focus on novelty given the difference in risk implied by the two prize structures.
Primary Outcomes (Explanation)
Before: Quality of project submissions will be measured primarily by the scores judges assign to submissions. Judge scores are based on 5 categories: Functionality, User Friendliness, Wide Scope of Use Cases, Novelty, Addresses Contest Problem. Each category is scored on a scale of 1-5 according to a rubric provided to all judges. We will generate normalized aggregate scores and rankings using judge evaluations. We will also measure whether participants subsequently commercialize or sell their submissions for commercialization as a proxy for submission quality.
After: Quality of project submissions will be measured primarily by the scores judges assign to submissions. Judge scores are based on 5 categories: Functionality, User Friendliness, Wide Scope of Use Cases, Novelty, Addresses Contest Problem. Each category is scored on a scale of 1-5 according to a rubric provided to all judges. We will generate normalized aggregate scores and rankings using judge evaluations. Novelty is judged relative to what is currently available on the market and is thus a market-based measure of novelty.
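The registration describes normalized aggregate scores built from judge evaluations but does not pin down the formula. The sketch below illustrates one standard way to do this, assuming judge-level z-scoring (a simple stand-in for judge fixed effects) followed by averaging across the judges assigned to each submission; the column names are hypothetical, not taken from the study's data.

```python
# Minimal sketch of normalized aggregate scores and rankings from judge
# evaluations. Judge-level z-scoring is an assumption; column names are
# hypothetical.
import pandas as pd

CATEGORIES = ["functionality", "user_friendliness", "scope_of_use_cases",
              "novelty", "addresses_problem"]  # each scored 1-5

def aggregate_scores(raw: pd.DataFrame) -> pd.DataFrame:
    """raw: one row per (judge_id, submission_id) with the five category scores."""
    df = raw.copy()
    # Equally weighted categories: sum the five 1-5 scores per evaluation.
    df["raw_total"] = df[CATEGORIES].sum(axis=1)
    # Normalize within judge to absorb differences in judge leniency.
    df["z_total"] = df.groupby("judge_id")["raw_total"].transform(
        lambda s: (s - s.mean()) / s.std(ddof=0)
    )
    # Average normalized scores across the judges who saw each submission,
    # then rank from best to worst.
    agg = df.groupby("submission_id")["z_total"].mean().rename("score").to_frame()
    agg["rank"] = agg["score"].rank(ascending=False, method="min")
    return agg.sort_values("rank")
```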
Experimental Design (Public)
We are running an RCT within an innovation contest that is being run for the purposes of this research, in partnership with Thermo Fisher's office in Tijuana, Mexico. The contest is open to all Baja California, Mexico residents over the age of 18. It is a digital hackathon in which participants work remotely on a specific problem that requires a software-based solution, and submit their projects digitally. The contest is being promoted as a hackathon that is part of a research study because we are not permitting participants to request removal from the data once they sign up for the contest. Furthermore, the contest is advertised as having up to $15,000 in prizes to be won, but the specific structure of rewards is not disclosed.

Everyone interested in signing up for the contest must first complete a survey that asks about their demographics, educational and work experience backgrounds, their programming knowledge, their experience in other innovation contests, their beliefs about their relative capabilities, and their risk preferences. They will also be asked to consent to have their survey and contest performance data used for research purposes. The sign-up deadline is approximately 48 hours before the start of the contest.

Following the sign-up deadline, participants will be randomly assigned to one of two reward structures: an all-or-nothing structure in which the first-place winner receives the full $15,000, and a multiple prize reward structure in which there are prizes for the top ten winners. In the multiple prize reward structure, first place is awarded $6,000, second place $3,000, third place $1,500, and fourth place $900, and those who place fifth through tenth each receive $600. Contest participants will be notified about their reward structure by email at the start of the contest. The email will also provide information about the contest problem they are being asked to solve (information about the specific contest problem is not given until the contest start time, to avoid people who signed up earlier having a mechanical advantage over those who signed up later). Importantly, participants in each of the two reward structures will be ranked relative to those in their own reward structure. However, the same set of judges will judge both sets of projects to allow for judge fixed effects. The contest runs for 54 hours from the start time. Contest submissions will be judged by academic and industry leaders in computer science and healthcare who will not be aware of submitter treatment status. We will use data on gender, innovator team composition (size of team and team member differences), and prior experience in innovative activities to examine heterogeneous treatment effects.

From the working paper draft: In order to test how prize structure impacts the quantity and quality of innovation, we ran a randomized controlled trial (RCT) within an innovation contest that we hosted in partnership with Thermo Fisher Scientific, a large biotechnology company with a market cap in excess of $100 billion US. The innovation contest was hosted by their Mexico office in Baja California and was open to all non-management employees of the firm as well as employees at other technology firms in the region. To increase participation and help foster Thermo Fisher's recruitment interests, it was also promoted to STEM students at local universities. The contest was advertised over a 45-day period.
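As a quick check on the two prize menus described in the registered design above, the snippet below verifies that both pay out the same $15,000 total, so with equally sized arms the expected payout per entrant is identical and only the dispersion of prizes differs. The per-arm count of 66 is the realized sample size reported above; treat this as illustrative arithmetic rather than study code.

```python
# Both prize menus sum to $15,000, so with equally sized arms the expected
# payout per entrant is the same; only the spread (risk) differs.
winner_takes_all = [15_000]
multiple_prizes = [6_000, 3_000, 1_500, 900] + [600] * 6  # ranks 1-4, then 5th-10th

assert sum(winner_takes_all) == sum(multiple_prizes) == 15_000

n_per_arm = 66  # realized sample size per arm (see final sample sizes above)
for name, menu in [("winner-takes-all", winner_takes_all),
                   ("multiple prizes", multiple_prizes)]:
    print(f"{name}: expected payout per entrant = ${sum(menu) / n_per_arm:,.2f}")
```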
Promotion materials included information about the general topic area of the innovation challenge, the competition dates, and the total prize purse available to participants. The promotion materials also informed potential participants that the contest was being co-hosted by UC San Diego and Thermo Fisher, and that it was part of a research study on motivations for innovation. UC San Diego's Institutional Review Board required us to disclose that the contest was part of a research study. We opted to disclose during recruitment rather than after the competition was complete because ex post disclosure would require that participants be given the option to remove themselves from the study, and we were concerned that this could lead to selective attrition based on competition outcomes.

Participation was open to individuals or teams of up to three people. At the start of the competition, the innovation challenge details were announced and participants were given 54 hours (from 6 pm on a Friday until midnight the following Sunday) to submit their entries. Submissions were made through DevPost, a popular commercial platform for hosting software innovation contests. The challenge was focused on addressing local health technology needs, with the specifics determined through a consultative process between the study authors and research managers at Thermo Fisher to ensure commercial relevance to the industry. The contest problem was chosen to ensure that reasonable progress could be made during the time allotted for the competition. In particular, participants were provided with the following text at the opening of the competition window: "Mexico has many small health care providers and research and clinical laboratories that, on their own, cannot afford expensive equipment that would allow them to provide the highest quality care possible. We believe that the proliferation of digital and cloud technologies can help to solve this problem. We are asking you to show us how you think these technologies can be used to support access to high-quality medical equipment even for these small health care providers and labs."

To generate random variation in the prize structure, we randomly assigned participants to one of two prize menus, both with a total of 15,000 USD available to contest winners. The first prize structure was a winner-takes-all design in which a single prize of 15,000 USD would be given to the highest-ranked submission. The second prize structure provided awards to the ten highest-ranked submissions: submissions ranked first, second, third, and fourth received $6,000, $3,000, $1,500, and $900, respectively, and submissions ranked fifth to tenth received $600. Given an equal number of competitors in both study arms, the expected return for would-be innovators is identical across the two arms, but competitors under the winner-takes-all arm faced a higher risk of failure.

Randomization was performed following the enrollment deadline and stratified by team and individual participants. Participants were given information about the prize structure they would face at the same time they were provided details on the innovation challenge. Judges were told about the different prize structures at the same time as the participants, to ensure they did not disclose the prize structures to participants beforehand. The exception to this was one of the Thermo Fisher judges who was involved in the planning of the contest and was aware there would be two contest arms.
However, she was not told who would be placed in which arm, and we have no evidence that she disclosed any information about the contest prizes to participants. To avoid concerns that participants would feel betrayed if they only learned about the alternative prize structures through incidental conversations with other competitors, we disclosed the design upfront. Participants were told that the contest organizers had disagreed over the optimal prize structure and, as a result, had decided to randomly divide participants into two separate and equally sized groups with distinct prize structures. They were also assured that they would only be judged relative to others facing the same prize structure and therefore would only be competing with half of the total participant pool. Participants were instructed to turn in their complete or incomplete computer scripts, written explanations, and any other non-script output by the end of the competition deadline in order to be eligible for a prize.

Contest submissions were judged by six industry experts, including high-level managers at Thermo Fisher and Teradata (a software company headquartered in San Diego, California) and computer science faculty who actively consult with technology companies in the Baja region. Submissions were judged on a 5-point scale across five equally weighted categories: novelty relative to existing products on the market, functionality, user friendliness, the scope of use cases, and the degree to which the submission addresses the innovation challenge. Each submission was reviewed by 3 of the 6 judges, to whom it was randomly assigned. To ensure comparability of judge rankings across prize structures, all submissions were pooled before being randomly assigned to judges. All judges were blinded to all information about the incentive structure under which proposals were submitted. As advertised to participants, awards were determined by rank within each study arm.

Our experimental design allows us to control for selection into contest participation based on prize structure. In addition to deciding whether to enroll in the competition prior to prize structure randomization, all participants were required to decide whether they would like to compete as a team or as an individual before prize structures were allocated. They also completed a pre-contest survey under the same conditions. This timing ensures the following three features in our empirical analysis: 1) we are able to observe differences in effort and performance across prize structures among statistically identical populations; 2) our measures of participant characteristics are not biased by the experimental treatment; and 3) selection into teams is not affected by the prize structures.
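The two random assignments described above (stratified randomization of entrants into prize arms, and pooled assignment of each submission to three of the six judges) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' actual code; the function names, data layout, and fixed seeds are invented for the example.

```python
# Illustrative sketch (not the study's code) of the two random assignments:
# (1) entrants randomized to a prize arm, stratified by team vs. individual
#     enrollment, and
# (2) pooled submissions each randomly assigned to 3 of the 6 judges.
import random

def assign_arms(entrants, seed=0):
    """entrants: list of dicts with 'id' and 'is_team' (True/False)."""
    rng = random.Random(seed)
    arms = {}
    for is_team in (True, False):  # stratify by team vs. individual entry
        stratum = [e["id"] for e in entrants if e["is_team"] == is_team]
        rng.shuffle(stratum)
        half = len(stratum) // 2
        for eid in stratum[:half]:
            arms[eid] = "winner_takes_all"
        for eid in stratum[half:]:
            arms[eid] = "multiple_prizes"
    return arms

def assign_judges(submission_ids, judges, per_submission=3, seed=1):
    """Pool all submissions (both arms) and give each to 3 randomly drawn judges."""
    rng = random.Random(seed)
    return {sid: rng.sample(judges, per_submission) for sid in submission_ids}
```

Because submissions are pooled before judge assignment, every judge sees entries from both arms, which is what makes judge-level normalization of scores comparable across the two prize structures.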
Randomization Unit
Before: Randomization into both treatments is at the individual level. Individuals will first be randomized into reward structure, and, within each reward structure, individuals will be randomized into the information treatment.
After: Randomization into both treatments is at the participant level; randomization is stratified by team and individual participants.
Planned Number of Clusters
Before: 400 individuals
After: 200 participants
Planned Number of Observations
Before: 400 individuals
After: 200 participants
Sample size (or number of clusters) by treatment arms
Before: 100 individuals in winner-takes-all reward structure, no information group; 100 individuals in winner-takes-all reward structure, information group; 100 individuals in multiple prizes reward structure, no information group; 100 individuals in multiple prizes reward structure, information group
After: 100 participants in the winner-takes-all reward structure; 100 participants in the multiple prizes reward structure
Keyword(s)
Before: Firms And Productivity, Labor, Other
After: Education, Firms And Productivity, Labor, Other
Intervention (Hidden)
Before: Our study implements two treatments within a digital hackathon to test whether observable characteristics can predict what types of people innovate better under a winner-takes-all versus a multiple prize reward structure, and whether providing innovators information about their competitors changes the relationship between innovator performance and reward structure. To do this, we randomly assign reward structures and information within a digital hackathon. Our four treatment groups are as follows: 1) Winner-takes-all reward structure, no information about average competitor; 2) Winner-takes-all reward structure, information about average competitor; 3) Multiple prizes reward structure, no information about average competitor; 4) Multiple prizes reward structure, information about average competitor
After: Our study implements two treatments within a digital hackathon to test the innovative performance implications of a winner-takes-all relative to a multiple prize reward structure. To do this, we randomly assign reward structures within a digital hackathon. Our treatment groups are as follows: 1) Winner-takes-all reward structure; 2) Multiple prizes reward structure
Secondary Outcomes (End Points) 1) short- to medium-run labor market outcomes of participants
Secondary Outcomes (Explanation) We will measure the labor market outcomes of participants using follow-up surveys and LinkedIn. We will measure whether they made any changes to their employment, and if so, whether they changed job title, industry, or moved out of the labor market.