Cognitive Biases and Team Incentives

Last registered on November 30, 2021

Pre-Trial

Trial Information

General Information

Title
Cognitive Biases and Team Incentives
RCT ID
AEARCTR-0006596
Initial registration date
November 28, 2021

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 30, 2021, 6:01 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
November 30, 2021, 6:21 PM EST

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation
Max Planck Institute

Other Primary Investigator(s)

PI Affiliation
Max Planck Institute for Research on Collective Goods

Additional Trial Information

Status
Ongoing
Start date
2021-10-13
End date
2022-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In this experiment, we are interested in the effects of individual and group factors on decision-making. Participants are recruited in a lab experiment to take part in a 2-hour session to measure how cognitive factors and incentives influence decisions.
External Link(s)

Registration Citation

Citation
Maddix, Nathaniel and Matthias Sutter. 2021. "Cognitive Biases and Team Incentives." AEA RCT Registry. November 30. https://doi.org/10.1257/rct.6596
Experimental Details

Interventions

Intervention(s)
In this experiment, we are interested in the effects of individual and group factors on decision-making. Participants are recruited in a lab experiment to take part in a 2-hour session to measure how cognitive factors and incentives influence decisions.
Intervention Start Date
2021-10-13
Intervention End Date
2021-12-30

Primary Outcomes

Primary Outcomes (end points)
Number of decision problems answered correctly (out of 24)
By cognitive bias, number correct (out of 3)
Primary Outcomes (explanation)
* Cognitive biases, scored by category: 24 questions address 8 major cognitive biases/processes. Due to the nature of each decision problem, each cognitive bias has a separate scoring rule that is explained to participants: an answer is scored correct either by an exact match with the solution or, for numeric responses (on a 0-100 scale), by falling within +/- 2 of the correct value.
* Total correct (out of 24) for all decision problems

Note: The score in each of the cognitive biases will be analyzed by cognitive bias type, yielding 7 results for cognitive biases and 1 result for the cognitive reflection test. For more information, please see the attached protocol and study overview in supporting materials.
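The registry itself includes no code; as an illustrative sketch only (the function name and the tolerance parameterization are ours, not the authors'), the per-item scoring rule described above could look like:

```python
def score_answer(response, correct, numeric=False, tolerance=2):
    """Score one decision problem (1 = correct, 0 = incorrect).

    Categorical items require an exact match with the solution;
    numeric items (0-100 scale) count as correct when the response
    falls within +/- `tolerance` of the correct value.
    """
    if numeric:
        return int(abs(float(response) - float(correct)) <= tolerance)
    return int(response == correct)
```

Summing these scores over all 24 items gives the primary outcome, and summing within each category of three items gives the by-bias scores.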


Secondary Outcomes

Secondary Outcomes (end points)
We have the following secondary outcomes of interest in our study:
1) Reaction times: The time participants take to submit an answer for each cognitive bias category (set of items) and for each individual item. We will also analyze the reaction times in the cognitive measures, such as the Raven's matrices, to understand how effort affects accuracy and performance.
2) Cooperation: The amount shared with a random partner (out of 8 tokens) in a prisoner's dilemma cooperation game.
3) Competitiveness: Choice to compete in a math competition or instead choose a piece rate (binary outcome).
4) Cognitive Reflection Test (non-incentivized) as part of the Abbreviated Numeracy Scale. We will test for learning effects and perform a robustness test of the cognitive reflection items across our experimental conditions (within subjects differences).
Secondary Outcomes (explanation)
See details above and attached protocol.

Experimental Design

Experimental Design
Participants are recruited to take part in decision-making tasks in the Economic Science Laboratory at the University of Arizona. Over a period of 2 hours, students answer various decision-making questions and play economic games with other participants.
Experimental Design Details
To investigate the effects of incentives on performance in cognitive biases, the present study measures whether teams and/or incentives may improve decision-making. We conduct a laboratory experiment with approximately 500 participants to test whether the use of teams and incentives decreases the effect of well-known cognitive biases and processes in statistical and cognitive reasoning, including: the anchoring effect, the availability heuristic, base rate neglect, the cognitive reflection test, the conjunction fallacy, contingent thinking, the gambler's fallacy, and the Monty Hall problem.
For each cognitive bias category (out of 8), participants are given three items of varying difficulty for a total of 24 decisions. The order of all decisions is randomized before the experiment with the constraint that no two similar items may appear next to one another, in order to avoid learning effects. The same randomization is then implemented for all experimental conditions. In the TEAM-INCENTIVE treatment, teams earn $1.00 for each answer they score correctly, whereas the TEAM-CONTROL treatment does not earn any payment for correct answers. Similarly, in the INDIVIDUAL-INCENTIVE treatment, individuals earn $1.00 for each answer they score correctly, yet no payment is given in the INDIVIDUAL-CONTROL condition. After completing the main decision-making questions, participants play economic games to measure cooperation and competition. They then complete a memory task (incentivized at $0.50 per item out of 12), Raven's 2 matrices (incentivized at $0.50 per item out of 24), risk, ambiguity, and time preference tasks (incentivized), and scales and demographic items (including the BFI-44 personality inventory, the Financial Management Behavior Scale, the Abbreviated Numeracy Scale, Lay Rationalism, and an index of COVID-19 compliance risk behaviors).
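The adjacency constraint on item order described above (no two items from the same cognitive bias category back-to-back) can be satisfied by a simple rejection-sampling shuffle. This is a minimal sketch under our own assumptions, not the authors' implementation; category labels and counts are illustrative:

```python
import random

def constrained_order(categories, items_per_cat=3, seed=None):
    """Shuffle (category, item) pairs until no two adjacent items
    share a cognitive-bias category (rejection sampling).

    With 8 categories of 3 items each, a uniform shuffle satisfies
    the constraint often enough that this terminates quickly.
    """
    items = [(cat, i) for cat in categories for i in range(items_per_cat)]
    rng = random.Random(seed)
    while True:
        rng.shuffle(items)
        if all(a[0] != b[0] for a, b in zip(items, items[1:])):
            return items
```

Per the design, one such order would be drawn once before the experiment and reused across all four conditions.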
Randomization Method
Eight versions of the experimental treatments are implemented at the session level with computer software.
Randomization Unit
Session
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
40-55 session clusters with 10-20 participants per session are expected. Given the variable show-up rate among the student population after COVID-19, more sessions may be added to increase the number of undergraduate participants in the case of sessions with fewer participants.
Sample size: planned number of observations
500 undergraduate students
Sample size (or number of clusters) by treatment arms
Teams are treated as a single decision unit in our study; each individual is likewise a single decision unit.

Condition 1 (84 students)
Condition 2 (84 students)
Condition 3 (166 students; 83 groups)
Condition 4 (166 students; 83 groups)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With power = .80 and α = .05, our design with four experimental conditions is able to detect a small-to-medium effect size (f = .20) with 277 units of analysis. We have chosen to increase our sample size to 334 units of analysis, improving our power to detect a difference to .87.
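As a rough, pure-Python sanity check of this power figure (not the authors' calculation), one can Monte Carlo simulate a one-way ANOVA with four equal arms at Cohen's f = .20. The critical value 2.63 for F(3, 332) at α = .05 is assumed rather than computed, and 84 units per arm approximates the registered 334 total:

```python
import random
import statistics

def one_way_f(groups):
    """F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def simulated_power(n_per_arm=84, sims=1000, f_crit=2.63, seed=1):
    """Monte Carlo power for detecting Cohen's f = .20 across 4 arms.

    Arm means of +/- 0.20 with sigma = 1 give sd(means)/sigma = 0.20;
    f_crit approximates the alpha = .05 critical value of F(3, 332).
    """
    means = [0.20, 0.20, -0.20, -0.20]
    rng = random.Random(seed)
    hits = sum(
        one_way_f([[rng.gauss(m, 1.0) for _ in range(n_per_arm)] for m in means]) > f_crit
        for _ in range(sims)
    )
    return hits / sims
```

With these assumptions the simulated rejection rate lands near the registered power of .87; note the registration randomizes at the session level, so a clustering-aware analysis would need a wider margin than this independent-observations sketch.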
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Arizona IRB
IRB Approval Date
2021-09-08
IRB Approval Number
21-08-ECON

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials