Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence

Last registered on January 29, 2020

Pre-Trial

Trial Information

General Information

Title
Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence
RCT ID
AEARCTR-0004026
Initial registration date
March 18, 2019

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 23, 2019, 8:15 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
January 29, 2020, 11:14 AM EST

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

PI Affiliation
UC San Diego & NBER

Additional Trial Information

Status
Completed
Start date
2019-03-19
End date
2019-08-31
Secondary IDs
Abstract
Successful innovation is essential for the survival and growth of organizations, but how best to incentivize innovation is poorly understood. We compare how two common incentive schemes affect innovative performance in a field experiment run in partnership with a large life sciences company. We find that a winner-takes-all compensation scheme generates significantly more novel innovation relative to a compensation scheme that offers the same total compensation, but shared across the ten best innovations. Moreover, we find that the elasticity of creativity with respect to compensation schemes is much larger for teams than for individual innovators.
External Link(s)

Registration Citation

Citation
Graff Zivin, Joshua and Elizabeth Lyons. 2020. "Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence." AEA RCT Registry. January 29. https://doi.org/10.1257/rct.4026-2.2
Former Citation
Graff Zivin, Joshua and Elizabeth Lyons. 2020. "Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence." AEA RCT Registry. January 29. https://www.socialscienceregistry.org/trials/4026/history/61628
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Our study tests the innovative performance implications of a winner-takes-all reward structure relative to a multiple-prize reward structure by randomly assigning reward structures within a digital hackathon. Our treatment groups are as follows:

1) Winner-takes-all reward structure
2) Multiple prizes reward structure

As highlighted in our trial history, we initially planned to include an information treatment but did not reach the sample size required to do so.
Intervention Start Date
2019-04-25
Intervention End Date
2019-05-24

Primary Outcomes

Primary Outcomes (end points)
1) whether or not a participant submits a project to the contest
2) the quality of projects conditional on submitting. Quality will be measured by the combined ranking across the five judged categories and by the individual category scores, with a particular focus on novelty given the difference in risk implied by the two prize structures.
Primary Outcomes (explanation)
Quality of project submissions will be measured primarily by the scores judges assign submissions. Judge scores are based on 5 categories: Functionality, User Friendliness, Wide Scope of Use Cases, Novelty, Addresses Contest Problem. Each category is scored on a scale of 1-5 according to a rubric provided to all judges. We will generate normalized aggregate scores and rankings using judge evaluations.


Novelty is judged relative to what is currently available on the market and is thus a market-based measure of novelty.
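As a rough illustration, the aggregation described above could be implemented along the following lines. This is a minimal sketch, not the authors' code: the column names, the example scores, and the within-judge z-score normalization are assumptions made for demonstration only.

```python
# Illustrative sketch: build normalized aggregate scores and rankings from
# judge-level category scores (1-5 on each of the five rubric categories).
# Column names, example values, and the z-score normalization are assumed.
import pandas as pd

# One row per (submission, judge) pair.
scores = pd.DataFrame({
    "submission":        ["A", "A", "B", "B"],
    "judge":             ["J1", "J2", "J1", "J2"],
    "functionality":     [4, 5, 3, 2],
    "user_friendliness": [3, 4, 4, 3],
    "scope_of_use":      [5, 4, 2, 3],
    "novelty":           [4, 5, 3, 2],
    "addresses_problem": [5, 5, 4, 3],
})

categories = ["functionality", "user_friendliness", "scope_of_use",
              "novelty", "addresses_problem"]

# Equal-weight average of the five categories for each judge's review,
# then normalize within judge to remove differences in judge leniency.
scores["raw"] = scores[categories].mean(axis=1)
scores["z"] = scores.groupby("judge")["raw"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

# Average across the judges assigned to each submission, then rank.
ranking = scores.groupby("submission")["z"].mean().rank(ascending=False)
print(ranking)
```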


Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
From the working paper draft:

In order to test how prize structure impacts the quantity and quality of innovation, we ran a randomized controlled trial (RCT) within an innovation contest that we hosted in partnership with Thermo Fisher Scientific, a large biotechnology company with a market capitalization in excess of US$100 billion. The innovation contest was hosted by their Mexico office in Baja California and was open to all non-management employees of the firm as well as employees at other technology firms in the region. To increase participation and help foster Thermo Fisher's recruitment interests, it was also promoted to STEM students at local universities.

The contest was advertised over a 45-day period. Promotion materials included information about the general topic area of the innovation challenge, the competition dates, and the total prize purse available to participants. The promotion materials also informed potential participants that the contest was being co-hosted by UC San Diego and Thermo Fisher, and that it was part of a research study on motivations for innovation. UC San Diego's Institutional Review Board required us to disclose that the contest was part of a research study. We opted to disclose during recruitment rather than after the competition was complete because ex post disclosure would require that participants be given the option to remove themselves from the study, and we were concerned that this could lead to selective attrition based on competition outcomes. Participation was open to individuals or teams of up to three people.

At the start of the competition, the innovation challenge details were announced and participants were given 54 hours (from 6 pm on a Friday until midnight the following Sunday) to submit their entries. Submissions were made through DevPost, a popular commercial platform for hosting software innovation contests. The challenge was focused on addressing local health technology needs, with the specifics determined through a consultative process between the study authors and research managers at Thermo Fisher to ensure commercial relevance to the industry. The contest problem was chosen to ensure that reasonable progress could be made during the time allotted for the competition.

In particular, participants were provided with the following text at the opening of the competition window: "Mexico has many small health care providers and research and clinical laboratories that, on their own, cannot afford expensive equipment that would allow them to provide the highest quality care possible. We believe that the proliferation of digital and cloud technologies can help to solve this problem. We are asking you to show us how you think these technologies can be used to support access to high-quality medical equipment even for these small health care providers and labs."

To generate random variation in the prize structure, we randomly assigned participants to one of two prize menus, both with a total of US$15,000 available to contest winners. The first prize structure was a winner-takes-all design in which a single prize of US$15,000 would be given to the highest-ranked submission. The second prize structure provided awards to the ten highest-ranked submissions. Submissions ranked first, second, third, and fourth received $6,000, $3,000, $1,500, and $900, respectively, and submissions ranked fifth to tenth each received $600. Given an equal number of competitors in both study arms, the expected return for would-be innovators is identical across the two arms, but competitors under the winner-takes-all arm faced a higher risk of failure.
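A quick arithmetic check of the two prize menus, shown below, confirms that both pay out the same US$15,000 total, so expected prize money per entrant is equal whenever the two arms are the same size. The per-arm count of 66 is taken from the final sample reported in this registration; any equal split gives the same conclusion.

```python
# Both prize menus total US$15,000, so with equally sized arms the expected
# prize per entrant is identical; only the dispersion of payoffs differs.
winner_takes_all = [15_000]
multiple_prizes = [6_000, 3_000, 1_500, 900] + [600] * 6  # ranks 1 through 10

assert sum(winner_takes_all) == sum(multiple_prizes) == 15_000

n_per_arm = 66  # final per-arm count from this registration
print(sum(winner_takes_all) / n_per_arm)  # expected prize per entrant, winner-takes-all arm
print(sum(multiple_prizes) / n_per_arm)   # expected prize per entrant, multiple-prize arm
```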

Randomization was performed following the enrollment deadline and stratified by team and individual participants. Participants were given information about the prize structure they would face at the same time they were provided details on the innovation challenge. Judges were told about the different prize structures at the same time as the participants, to ensure they did not disclose the prize structures to participants beforehand. The exception was one of the Thermo Fisher judges who was involved in planning the contest and was aware there would be two contest arms. However, she was not told who would be placed in which arm, and we have no evidence that she disclosed any information about the contest prizes to participants. To avoid concerns that participants would feel betrayed if they only learned about the alternative prize structures through incidental conversations with other competitors, we disclosed the design upfront. Participants were told that the contest organizers had disagreed over the optimal prize structure and, as a result, had decided to randomly divide participants into two separate and equally sized groups with distinct prize structures. They were also assured that they would be judged only relative to others facing the same prize structure and would therefore be competing with only half of the total participant pool.

Participants were instructed to turn in their complete or incomplete computer scripts, written explanations, and any other non-script output by the competition deadline in order to be eligible for a prize. Contest submissions were judged by six industry experts, including high-level managers at Thermo Fisher and Teradata (a software company headquartered in San Diego, California), and computer science faculty who actively consult with technology companies in the Baja region. Submissions were judged on a 5-point scale across five equally weighted categories: novelty relative to existing products on the market, functionality, user friendliness, the scope of use cases, and the degree to which they address the innovation challenge.

Each submission was reviewed by 3 of the 6 judges, to whom it was randomly assigned. To ensure comparability of judge rankings across prize structures, all submissions were pooled before being randomly assigned to judges. All judges were blinded to all information about the incentive structure under which proposals were submitted. As advertised to participants, awards were determined by rank within each study arm.
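The pooled judge assignment described above could look roughly like the following sketch. Judge labels, submission IDs, and the seed are placeholders, and the sketch does not attempt to balance the number of reviews per judge.

```python
# Illustrative sketch: pool all submissions (from both arms) and randomly
# assign each one to 3 of the 6 judges. All identifiers are placeholders.
import random

random.seed(4026)  # arbitrary seed chosen for reproducibility of the example

judges = ["J1", "J2", "J3", "J4", "J5", "J6"]
submissions = [f"S{i}" for i in range(1, 11)]  # placeholder submission IDs

assignment = {s: random.sample(judges, k=3) for s in submissions}
for s, panel in assignment.items():
    print(s, panel)
```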

Our experimental design allows us to control for selection into contest participation based on prize structure. In addition to deciding whether to enroll in the competition prior to prize structure randomization, all participants were required to decide whether they would like to compete as a team or as an individual before prize structures were allocated. They also completed a pre-contest survey under the same conditions. This timing ensures the following three features in our empirical analysis: 1) we are able to observe differences in effort and performance across prize structures among statistically identical populations; 2) our measures of participant characteristics are not biased by the experimental treatment; and 3) selection into teams is not affected by the prize structures.
Experimental Design Details
Randomization Method
Randomization was performed using Stata's gen command with the uniform() function and an arbitrarily set and recorded seed.
Randomization Unit
Randomization into both treatments is at the participant level; randomization is stratified by whether participants entered as teams or as individuals.
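A Python analogue of the stratified assignment (not the authors' Stata code) is sketched below: entrants are split into team and individual strata, shuffled with a recorded seed, and each stratum is divided evenly between the two prize structures. The entrant list and seed are hypothetical.

```python
# Illustrative Python analogue of the described procedure: randomize at the
# participant level, stratifying by team versus individual entry.
import random

random.seed(20190325)  # arbitrary, recorded seed (hypothetical value)

# Placeholder enrollment list: (entrant_id, entered_as_team)
entrants = [("E1", False), ("E2", False), ("E3", True),
            ("E4", True), ("E5", False), ("E6", True)]

assignments = {}
for is_team in (True, False):
    stratum = [e for e, t in entrants if t == is_team]
    random.shuffle(stratum)
    half = len(stratum) // 2
    for e in stratum[:half]:
        assignments[e] = "winner_takes_all"
    for e in stratum[half:]:
        assignments[e] = "multiple_prizes"

print(assignments)
```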
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
200 participants
Sample size: planned number of observations
200 participants
Sample size (or number of clusters) by treatment arms
100 participants in the winner-takes-all reward structure
100 participants in the multiple prizes reward structure
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Using the baseline submission rate of 10% (s.d. 0.03) from a previously run innovation contest, we would need a difference of 19% to detect a difference in output quantity between the two treatment groups at the 95% confidence level 80% of the time.
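For reference, a standard two-sided, two-sample power calculation for proportions with the planned 100 entrants per arm can be sketched as below. The assumptions behind the registered 19% figure (sidedness, how the reported s.d. enters) are not spelled out here, so this sketch is not expected to reproduce that number exactly.

```python
# Illustrative power calculation: minimum detectable effect for a two-sided
# two-sample test of proportions with equal arm sizes of 100. The choices of
# a two-sided test and the arcsine effect size are this sketch's assumptions,
# not necessarily those behind the registered figure.
import numpy as np
from scipy.stats import norm

baseline = 0.10      # baseline submission rate from a prior contest
n_per_arm = 100      # planned sample per arm
alpha, power = 0.05, 0.80

# Minimum detectable effect in Cohen's h units.
h = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * np.sqrt(2 / n_per_arm)

# Invert the arcsine transform to express h as a submission rate in the other arm.
p2 = np.sin(np.arcsin(np.sqrt(baseline)) + h / 2) ** 2
print(f"Cohen's h = {h:.3f}; detectable submission rate in the other arm ≈ {p2:.3f}")
```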
IRB

Institutional Review Boards (IRBs)

IRB Name
UC San Diego Human Research Protections Program
IRB Approval Date
2019-02-05
IRB Approval Number
180938

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
April 29, 2019, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
May 30, 2019, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Total number of participants: 132
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
132 participants (93 individual participants, 39 teams)
Final Sample Size (or Number of Clusters) by Treatment Arms
66 in multiple prize arm, 66 in single prize arm
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials