What Motivates Teams?: Evidence from Experts and Workers

Last registered on December 09, 2020

Pre-Trial

Trial Information

General Information

Title
What Motivates Teams?: Evidence from Experts and Workers
RCT ID
AEARCTR-0006861
Initial registration date
December 07, 2020

First published
December 09, 2020, 10:54 AM EST


Locations

Region

Primary Investigator

Affiliation
Max Planck Institute

Other Primary Investigator(s)

PI Affiliation
Max Planck Institute

Additional Trial Information

Status
Ongoing
Start date
2020-12-06
End date
2021-08-31
Secondary IDs
Abstract
We are interested in incentives and motivation for individuals and teams. We launch a large-scale online experiment to measure worker motivation across 20 treatments, contributing to the literature on individual motivation and performance in real effort tasks and extending relevant findings to teams. We then run a prediction competition between lay experts (managers in US firms) and academic experts (researchers and professors in management and economics at universities). Do intuitive managerial theories of what motivates team behavior align with worker performance in a real effort paradigm? And what can we learn about the role of team membership, what we call "team-based incentives," in increasing the power of incentives of similar stake size relative to individual incentives? Our results are among the first to document the role of small but meaningful incentives for teams compared to individuals in a large-scale experiment. In addition to the literature on incentives and gift exchange, we contribute to the literature on team behavior and framing effects in psychology and economics.

External Link(s)

Registration Citation

Citation
Maddix, Nathaniel and Matthias Sutter. 2020. "What Motivates Teams?: Evidence from Experts and Workers." AEA RCT Registry. December 09. https://doi.org/10.1257/rct.6861-1.0
Experimental Details

Interventions

Intervention(s)
We run a large-scale experiment in an online labor market with a real effort task. We implement monetary incentives, non-monetary incentives, gift exchange incentives, tournament incentives, and managerial incentives.
Intervention Start Date
2020-12-06
Intervention End Date
2020-12-24

Primary Outcomes

Primary Outcomes (end points)
Part A: Real Effort Experiment

Primary outcome variable: Total points scored (correct CAPTCHA items)

We will run ANOVA comparisons between all treatments and the control groups to estimate average treatment effects relative to the baseline control groups. We will also analyze pairwise comparisons between individual and team conditions at each incentive level (for example, the piece rate team condition vs. the piece rate individual condition).
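
For illustration only, the Python sketch below shows how these comparisons could be computed; the data file, column names, and arm labels are hypothetical placeholders and are not part of the registration.

    # Sketch of the Part A analysis, assuming hypothetical columns `points`
    # (correct CAPTCHA items) and `arm` (treatment label) in the worker data.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("worker_data.csv")  # hypothetical file name

    # One-way ANOVA across all twenty arms.
    groups = [g["points"].to_numpy() for _, g in df.groupby("arm")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Example pairwise comparison: piece rate team vs. piece rate individual.
    team = df.loc[df["arm"] == "piece_rate_team", "points"]
    indiv = df.loc[df["arm"] == "piece_rate_individual", "points"]
    t_stat, p_pair = stats.ttest_ind(team, indiv, equal_var=False)
    print(f"Piece rate, team vs. individual: t = {t_stat:.2f}, p = {p_pair:.4f}")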

Part B: Prediction Contest

Primary outcome variable: Estimate of change in effort for each condition relative to control group

In the prediction competition, we present the individual and team versions of each incentive side by side to experts. Respondents then use a slider to predict how much effort will change relative to the baseline value.
Primary Outcomes (explanation)
N/A

Secondary Outcomes

Secondary Outcomes (end points)
- Biased beliefs about performance
- Number of questions attempted
- Team membership (closeness)
Secondary Outcomes (explanation)
Biased Beliefs - We ask participants to report their expected score and the scores of others. We are interested in whether beliefs about one's own or others' performance are biased across incentive conditions. Personal beliefs are the self-reported number of questions a participant believes they answered correctly after the task. Social beliefs are the number of items a participant believes others answered correctly. We will measure bias as the difference between these beliefs and actual performance in items completed (points earned).
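
A minimal sketch of this construction, assuming hypothetical columns `points`, `own_belief`, and `others_belief`, and using the mean score within a participant's own arm as the benchmark for social beliefs (that benchmark is our assumption, not specified in the registration):

    import pandas as pd

    df = pd.read_csv("worker_data.csv")  # hypothetical file name
    # Personal bias: own belief minus own actual score.
    df["personal_bias"] = df["own_belief"] - df["points"]
    # Social bias: belief about others minus the mean score in the same arm
    # (this benchmark is an assumption made for illustration).
    df["social_bias"] = df["others_belief"] - df.groupby("arm")["points"].transform("mean")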

We will analyze the number of questions attempted, not only the number answered correctly, in order to understand whether treatments increased respondents' propensity to skip question items.

For team membership, we will use a 7-point Likert scale borrowed from the Self in Other Scale to evaluate how the team treatments changed perceptions of closeness between team members and their teams.

Experimental Design

Experimental Design
We randomly assign participants to one of twenty treatment arms and implement incentives. Then participants complete a real effort task for a duration of time.
Experimental Design Details
We recruit 5,000 participants on Amazon's Mechanical Turk (MTurk). We randomly assign participants to one of twenty treatment arms and implement monetary, non-monetary, and behavioral incentives. Participants work on a real effort task for 15 minutes and complete as many CAPTCHA tasks as they can within the time limit. They earn 1 point for each correct response. Items are randomly presented to participants from a pool of 200+ CAPTCHA items. Participants are allowed to skip items so that they can answer as many as possible within the time limit.
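
The randomization itself is performed in Qualtrics (see Randomization Method below); purely as an illustration, the sketch below shows an equivalent balanced allocation of 5,000 workers to twenty arms of 250, using placeholder arm names.

    import random

    # Two control groups plus 18 treatment arms (placeholder labels).
    arms = ["individual_control", "team_control"] + [f"treatment_{i}" for i in range(1, 19)]
    slots = arms * 250            # 250 slots per arm = 5,000 slots in total
    random.seed(2020)             # arbitrary seed, only to make the illustration reproducible
    random.shuffle(slots)
    # slots[k] is the arm assigned to the k-th recruited worker.
    assert len(slots) == 5000
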
Randomization Method
Randomization is performed by a computer program in Qualtrics
Randomization Unit
Individual workers
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
5,000 MTurk Online Workers
Sample size: planned number of observations
5,000 MTurk Online Workers
Sample size (or number of clusters) by treatment arms
250 workers for individual control group; 250 workers for team control group; 250 workers per treatment arm (x18)

See attached document for language presented to participants in each treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
To detect a small effect size at power = .99 and alpha = .05 in a between-groups ANOVA, the study requires 3,960 participants. Based on previous research (DellaVigna and Pope 2018), large-scale online studies may have a nontrivial share of observations with poor data quality due to server errors and attrition among online workers. For this reason, we overpower the study and recruit 250 participants per treatment arm, reserving roughly 20% of the participant data as a buffer for any issues that may arise.
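
As a rough sketch of how a requirement like 3,960 participants can be obtained, assuming Cohen's f = 0.10 as the conventional "small" effect size for a one-way ANOVA with twenty groups (the registration does not state the exact effect size or software used):

    from statsmodels.stats.power import FTestAnovaPower

    n_total = FTestAnovaPower().solve_power(
        effect_size=0.10,  # assumed "small" effect (Cohen's f); our assumption
        k_groups=20,       # 2 control groups + 18 treatment arms
        alpha=0.05,
        power=0.99,
    )
    print(round(n_total))  # total sample size, close to the 3,960 cited above
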
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Max Planck Advisory Board
IRB Approval Date
2020-11-16
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials