Discouraging Competition

Last registered on March 22, 2024

Pre-Trial

Trial Information

General Information

Title
Discouraging Competition
RCT ID
AEARCTR-0013204
Initial registration date
March 19, 2024


First published
March 19, 2024, 5:37 PM EDT


Last updated
March 22, 2024, 3:45 AM EDT


Locations

Region

Primary Investigator

Affiliation
University of Gothenburg - Department of Economics

Other Primary Investigator(s)

PI Affiliation
University of Gothenburg - Department of Economics
PI Affiliation
University of Gothenburg - Department of Economics

Additional Trial Information

Status
In development
Start date
2024-03-19
End date
2024-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study whether increased competition leads to decreased effort provision.
External Link(s)

Registration Citation

Citation
Behler, Timm, Jens Ewald and Patrik Reichert. 2024. "Discouraging Competition." AEA RCT Registry. March 22. https://doi.org/10.1257/rct.13204-1.1
Experimental Details

Interventions

Intervention(s)
We randomly vary (1) prize inequality across contests (three treatments) and (2) contest scale across contests (three treatments).
Intervention (Hidden)
We run an online experiment on Amazon Mechanical Turk ("MTurk"), in which participants are given 30 minutes to classify as many images of animals as they can. Participants are allocated to groups (contests) of different sizes and then ranked within their group by the number of correctly classified images. The groups vary in how the total prize budget is split among the relative performance ranks (prize inequality treatments) or in how many participants are in a given group (contest scale treatments).

In the inequality treatments we hold the number of competitors constant and vary prize inequality. We cover the two extremes: in treatment HI (“High Inequality”) we concentrate the entire budget of $24 in a single prize awarded to the first-ranked player. In treatment LI (“Low Inequality”) all players except the one with the lowest score receive a prize of $3. We also include a treatment with intermediate prize inequality (treatment II), where the top three players earn a prize of $8 each.

In the scale treatments, we fix the size of the individual prizes at $8 and vary both the number of prizes and the number of competitors. Treatment LS (“Low Scale”) is the contest with the smallest scale: there is one prize and three competitors. The single prize of $8 is awarded to the top performer in the contest. In treatment IS (“Intermediate Scale”), there are two identical prizes and six competitors (double those of LS). The contest with the largest scale is treatment HS (“High Scale”), with three identical $8 prizes for the top three performers and nine contestants.

Note that treatment HS is identical to treatment II among the inequality treatments. We therefore have a total of five distinct treatments, which nevertheless provide three distinct levels of treatment intensity along both the prize inequality and the contest scale dimensions.
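The five treatment configurations described above can be summarized in a small sketch. This is purely illustrative (not the trial's software); the nine-player group size for the inequality treatments is inferred from the stated equivalence of treatments HS and II.

```python
# Illustrative sketch of the five treatments described in the registration.
# Group size n = 9 for the inequality treatments is inferred from HS == II.
TREATMENTS = {
    "HI": {"n": 9, "prizes": [24]},        # winner-takes-all: full $24 budget
    "II": {"n": 9, "prizes": [8, 8, 8]},   # top three earn $8 each
    "LI": {"n": 9, "prizes": [3] * 8},     # all but the lowest scorer earn $3
    "LS": {"n": 3, "prizes": [8]},         # one $8 prize, three competitors
    "IS": {"n": 6, "prizes": [8, 8]},      # two $8 prizes, six competitors
    "HS": {"n": 9, "prizes": [8, 8, 8]},   # identical to treatment II
}

def payout(treatment: str, rank: int) -> int:
    """Prize for a contestant at 1-indexed performance rank `rank`."""
    prizes = TREATMENTS[treatment]["prizes"]
    return prizes[rank - 1] if rank <= len(prizes) else 0
```

Note that the total budget is $24 in every treatment except LS ($8) and IS ($16), since the scale treatments fix the individual prize at $8 rather than the budget.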
Intervention Start Date
2024-03-19
Intervention End Date
2024-04-30

Primary Outcomes

Primary Outcomes (end points)
Our main outcome variable of interest is (unobservable) individual effort. We have three proxies:
(1) the number of correctly classified images during the time allocated to complete the task,
(2) time elapsed until a participant exits the task,
(3) a dummy variable for early exit.
Primary Outcomes (explanation)
Note on (2) and (3): participants may exit the task before the maximum of 30 minutes allocated for the task is over.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We recruit workers on MTurk to complete a task often requested on the platform: image classification. Workers are randomized into five treatments, which vary (1) prize inequality and (2) contest scale. Workers compete in groups: prize allocation depends on a worker's within-group rank by the number of correctly classified images. The experiment ends with a questionnaire.
Experimental Design Details
See attached analysis plan for more details.
Randomization Method
We randomize each individual among all five treatments, conditional on maintaining an even allocation among treatments. The randomization is performed by a computer.
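The procedure described here, random assignment conditional on even allocation across arms, can be sketched as a shuffled balanced block. This is an illustrative implementation only; the registration does not specify the software actually used.

```python
import random

def balanced_assignment(n_participants, treatments, seed=None):
    """Randomly assign participants to treatments with equal arm sizes.

    Builds one slot per participant (equally many per treatment) and
    shuffles, so assignment is random conditional on even allocation.
    """
    assert n_participants % len(treatments) == 0, "arms must divide evenly"
    rng = random.Random(seed)
    slots = treatments * (n_participants // len(treatments))
    rng.shuffle(slots)
    return slots

# e.g. 1,530 participants across the five treatments -> 306 per arm
arms = balanced_assignment(1530, ["HI", "II/HS", "LI", "LS", "IS"], seed=1)
```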
Randomization Unit
Individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1530
Sample size: planned number of observations
1530
Sample size (or number of clusters) by treatment arms
306 participants per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See analysis plan.
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethical Advisory Board at the Department of Economics (University of Gothenburg)
IRB Approval Date
2022-05-30
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials