What Motivates Teams?: Evidence from Experts and Workers
Last registered on December 09, 2020


Trial Information
General Information
What Motivates Teams?: Evidence from Experts and Workers
Initial registration date
December 07, 2020
Last updated
December 09, 2020 10:54 AM EST

This section is unavailable to the public.
Primary Investigator
Max Planck Institute
Other Primary Investigator(s)
PI Affiliation
Max Planck Institute
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
We are interested in incentives and motivation for individuals and teams. We launch a large-scale online experiment to measure worker motivation across 20 treatments, contributing to the literature on individual motivation and performance in real effort tasks and extending relevant findings to teams. We then run a prediction competition between lay experts (managers in US firms) and academic experts (researchers and professors at universities) in management and economics. Do intuitive managerial theories of what motivates teams align with worker performance in a real effort paradigm? And can team membership, what we call "team-based incentives," increase the power of incentives at a stake size similar to individual incentives? Our results are among the first to document the role of small but meaningful incentives for teams compared to individuals in a large-scale experiment. Beyond the literature on incentives and gift exchange, we contribute to the literature on team behavior and framing effects in psychology and economics.

External Link(s)
Registration Citation
Maddix, Nathaniel and Matthias Sutter. 2020. "What Motivates Teams?: Evidence from Experts and Workers." AEA RCT Registry. December 09. https://doi.org/10.1257/rct.6861-1.0.
Experimental Details
We run a large-scale experiment in an online labor market with a real effort task. We implement monetary incentives, non-monetary incentives, gift exchange incentives, tournament incentives, and managerial incentives.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Part A: Real Effort Experiment

Primary outcome variable: Total points scored (correct CAPTCHA items)

We will estimate Average Treatment Effects relative to the baseline control groups using ANOVA comparisons across all treatments. We will also analyze pairwise comparisons between individual and team conditions at each incentive level (for example, the piece rate team condition vs. the piece rate individual condition).
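
The planned comparisons can be sketched in code. This is a minimal illustration, not the registered analysis script: the column names ("arm", "points"), the arm labels, and the simulated Poisson scores are all hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated long-format data: one row per worker, with treatment arm
# and total points scored (correct CAPTCHA items). Values are placeholders.
rng = np.random.default_rng(0)
arms = ["control_ind", "control_team", "piece_rate_ind", "piece_rate_team"]
df = pd.DataFrame({
    "arm": np.repeat(arms, 250),
    "points": rng.poisson(30, 1000),
})

# One-way ANOVA across all arms, against the null of equal mean effort
groups = [g["points"].to_numpy() for _, g in df.groupby("arm")]
f_stat, p_anova = stats.f_oneway(*groups)

# Pairwise comparison at one incentive level: team vs. individual piece rate
team = df.loc[df["arm"] == "piece_rate_team", "points"]
ind = df.loc[df["arm"] == "piece_rate_ind", "points"]
t_stat, p_pair = stats.ttest_ind(team, ind)
```

The same pairwise call would be repeated for each of the other incentive levels.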

Part B: Prediction Contest

Primary outcome variable: Estimate of change in effort for each condition relative to control group

In the prediction competition, we present each pair of individual and team incentives side by side to experts, who then use a slider to predict how much effort will change relative to the baseline value.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
- Biased beliefs about performance
- Number of questions attempted
- Team membership (closeness)
Secondary Outcomes (explanation)
Biased Beliefs - We ask participants to report their expected score and the scores of others, to test whether beliefs about oneself or others are biased across incentive conditions. Personal beliefs are the self-reported number of items the participant believes they answered correctly after the task; social beliefs are the number of items the participant believes others answered correctly. We measure bias as the difference between these beliefs and actual performance in items completed (points earned).
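
The bias measure amounts to a simple difference, sketched below with hypothetical column names and made-up values; the stand-in for others' actual performance (the sample mean) is an assumption, since the registry does not specify the benchmark.

```python
import pandas as pd

# Hypothetical post-task data: actual score, expected own score,
# and expected score of other workers.
df = pd.DataFrame({
    "actual_points": [28, 31, 25],
    "belief_self":   [30, 30, 27],
    "belief_others": [26, 33, 25],
})

# Personal bias: own belief minus own actual performance
df["bias_self"] = df["belief_self"] - df["actual_points"]

# Social bias: belief about others minus others' actual performance,
# proxied here by the sample mean score (an illustrative choice)
others_mean = df["actual_points"].mean()
df["bias_others"] = df["belief_others"] - others_mean
```

Positive values indicate overestimation; comparing these differences across incentive conditions yields the biased-beliefs outcome.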

We will analyze the number of questions attempted, not only the number answered correctly, to understand whether treatments increased respondents' propensity to skip items.

For team membership, we will use a 7-point Likert scale borrowed from the Self in Other Scale to evaluate how the team treatments changed perceived closeness between team members and their teams.
Experimental Design
Experimental Design
We randomly assign participants to one of twenty treatment arms and implement incentives. Then participants complete a real effort task for a duration of time.
Experimental Design Details
Not available
Randomization Method
Randomization is performed by a computer program in Qualtrics.
Randomization Unit
Individual workers
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
5,000 MTurk Online Workers
Sample size: planned number of observations
5,000 MTurk Online Workers
Sample size (or number of clusters) by treatment arms
250 workers for individual control group; 250 workers for team control group; 250 workers per treatment arm (x18)

See attached document for language presented to participants in each treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
To detect a small effect size at power = .99 and alpha = .05 in a between-groups ANOVA, the study requires 3,960 participants. Based on previous research (DellaVigna and Pope 2018), large-scale online studies may lose a nontrivial share of observations to server errors and worker attrition. We therefore overpower the study and recruit 250 participants per treatment arm, reserving roughly 20% of the sample as a buffer against such issues.
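
A power calculation of this kind can be reproduced with statsmodels. This is a hedged reconstruction: the registry does not state the exact effect size used, so Cohen's f = 0.1 (a conventional "small" effect) is assumed here, and the resulting total N is illustrative rather than a confirmation of the registered figure of 3,960.

```python
from statsmodels.stats.power import FTestAnovaPower

# Required total sample size for a one-way ANOVA with 20 groups,
# assuming a small effect (Cohen's f = 0.1), alpha = .05, power = .99
n_total = FTestAnovaPower().solve_power(
    effect_size=0.1, alpha=0.05, power=0.99, k_groups=20
)
```

Dividing the result by 20 gives the implied per-arm sample, which the registered 250 workers per arm comfortably exceeds.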
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB Name
Max Planck Advisory Board
IRB Approval Date
IRB Approval Number