
Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence

Last registered on March 25, 2019

Pre-Trial

Trial Information

General Information

Title
Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence
RCT ID
AEARCTR-0004026
Initial registration date
March 18, 2019

First published
March 23, 2019, 8:15 PM EDT

Last updated
March 25, 2019, 10:55 AM EDT

Locations

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

PI Affiliation
UC San Diego & NBER

Additional Trial Information

Status
In development
Start date
2019-03-19
End date
2019-08-31
Secondary IDs
Abstract
This project examines how preferences over rewards for innovation vary with individuals' perceptions of their relative capabilities, and how innovators' performance is affected by the match between reward structures and preferences. We study this by randomly varying reward structures among individuals who have indicated a desire to participate in an innovative task, and by randomly altering their beliefs about their relative quality across reward structures.
External Link(s)

Registration Citation

Citation
Graff Zivin, Joshua and Elizabeth Lyons. 2019. "Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence." AEA RCT Registry. March 25. https://doi.org/10.1257/rct.4026-2.0
Former Citation
Graff Zivin, Joshua and Elizabeth Lyons. 2019. "Confidence and Preferences over Rewards for Innovating: Field Experimental Evidence." AEA RCT Registry. March 25. https://www.socialscienceregistry.org/trials/4026/history/44117
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2019-04-25
Intervention End Date
2019-05-24

Primary Outcomes

Primary Outcomes (end points)
1) whether or not a participant submits a project to the contest
2) the quality of projects conditional on submitting
Primary Outcomes (explanation)
Quality of project submissions will be measured primarily by the scores judges assign to submissions. Judge scores are based on five categories: Functionality, User Friendliness, Wide Scope of Use Cases, Novelty, and Addresses Contest Problem. Each category is scored on a scale of 1-5 according to a rubric provided to all judges. We will generate normalized aggregate scores and rankings from the judge evaluations.
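
A minimal sketch of one such normalization, assuming one row per submission-judge pair; the variable names (score, judge, submission) are our assumptions, not taken from the registration, with score standing for a judge's total across the five rubric categories:

* Sketch only: variable names are assumptions, not from the registration.
bysort judge: egen j_mean = mean(score)    // judge-specific mean
bysort judge: egen j_sd = sd(score)        // judge-specific spread
gen z_score = (score - j_mean) / j_sd      // standardize within judge
collapse (mean) agg_score = z_score, by(submission)  // average across judges
egen ranking = rank(agg_score), field      // rank 1 = highest normalized score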

We will also measure whether participants subsequently commercialize or sell their submissions for commercialization as a proxy for submission quality.


Secondary Outcomes

Secondary Outcomes (end points)
1) short- to medium-run labor market outcomes of participants
Secondary Outcomes (explanation)
We will measure the labor market outcomes of participants using follow-up surveys and LinkedIn. We will measure whether they made any changes to their employment, and if so, whether they changed job title, industry, or moved out of the labor market.

Experimental Design

Experimental Design
We are running an RCT within an innovation contest that is being run for the purposes of this research, in partnership with Thermo Fisher’s office in Tijuana, Mexico. The contest is open to all residents of Baja California, Mexico over the age of 18. It is a digital hackathon in which participants work remotely on a specific problem that requires a software-based solution and submit their projects digitally. The contest is being promoted as a hackathon that is part of a research study because participants will not be permitted to request removal of their data once they sign up for the contest. Furthermore, the contest is advertised as offering up to $15,000 in prizes, but the specific structure of rewards is not disclosed.

Everyone interested in signing up for the contest must first complete a survey that asks about their demographics, educational background and work experience, programming knowledge, experience in other innovation contests, beliefs about their relative capabilities, and risk preferences. They will also be asked to consent to having their survey and contest performance data used for research purposes. The sign-up deadline is approximately 48 hours before the start of the contest.

Following the sign-up deadline, participants will be randomly assigned to one of two reward structures: an all-or-nothing structure in which the first-place winner receives the full $15,000, and a multiple-prize structure with prizes for the top ten finishers. In the multiple-prize structure, first place is awarded $6,000, second place $3,000, third place $1,500, fourth place $900, and fifth through tenth place $600 each, so that the prizes again sum to the advertised $15,000. Contest participants will be notified about their reward structure by email at the start of the contest. The email will also provide information about the contest problem they are being asked to solve (information about the specific contest problem is not given until the contest start time so that people who signed up earlier do not have a mechanical advantage over those who signed up later). Importantly, participants in each of the two reward structures will be ranked relative to others in their reward structure. However, the same set of judges will judge both sets of projects to allow for judge fixed effects.

The contest runs for 54 hours from its start time. Contest submissions will be judged by academic and industry leaders in computer science and healthcare who will not be aware of submitters' treatment status.

We will use data on gender, innovator team composition (team size and team member differences), and prior experience in innovative activities to examine heterogeneous treatment effects.
Experimental Design Details
Within each reward structure, half of the participants will be randomly selected to receive additional information about the skill sets of their competitors. This information will be based on average survey responses to our programming-skills and innovation-contest-experience questions, and will also be included in the email announcing the contest problem. Because we have individual responses to the same questions, we will have a measure of whether the information provided shows a participant to be above or below the capabilities of her average competitor.
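
A minimal sketch of this above/below-average measure, assuming a composite variable skill built from the programming-skills and contest-experience items (both the variable name and the leave-one-out benchmark are our assumptions):

* Sketch only: skill is an assumed composite of the survey items.
egen skill_total = total(skill)
gen comp_mean = (skill_total - skill) / (_N - 1)  // mean over the other participants
gen byte above_avg = (skill > comp_mean)          // 1 if above the average competitor
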
Randomization Method
Randomization using Stata's gen uniform() command with an arbitrarily set and recorded seed.
Randomization Unit
Randomization into both treatments is at the individual level. Individuals will first be randomized into reward structure, and, within each reward structure, individuals will be randomized into information treatment.
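
A minimal sketch of this two-stage assignment in Stata (the seed value is a placeholder, not the recorded seed, and the variable names are ours):

* Sketch only: placeholder seed; the actual seed is set and recorded separately.
set seed 12345
gen double u1 = uniform()                  // first-stage draw
sort u1
gen byte multi_prize = (_n > _N/2)         // reward-structure arm
gen double u2 = uniform()                  // second-stage draw
bysort multi_prize (u2): gen byte info = (_n > _N/2)  // information arm within reward arm
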
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
400 individuals
Sample size: planned number of observations
400 individuals
Sample size (or number of clusters) by treatment arms
100 individuals in winner-takes-all reward structure, no information group
100 individuals in winner-takes-all reward structure, information group
100 individuals in multiple prizes reward structure, no information group
100 individuals in multiple prizes reward structure, information group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Using the baseline submission rate of 10% (s.d. 0.03) from a previously run innovation contest, we would need a difference of 19% between two treatment groups' output quantity to detect an effect at the 95% confidence level 80% of the time.
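
A hedged sketch of this style of power calculation in Stata (the even 100-per-arm split and a two-sided 5% test are our assumptions about the setup):

* Sketch only: solve for the detectable experimental-arm submission rate,
* given a 10% baseline, 200 participants split evenly across the two
* compared arms, alpha = 0.05 (two-sided), and 80% power.
power twoproportions 0.10, n(200) power(0.8)
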
IRB

Institutional Review Boards (IRBs)

IRB Name
UC San Diego Human Research Protections Program
IRB Approval Date
2019-02-05
IRB Approval Number
180938

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials