Competition, Cheating and Feedback

Last registered on September 24, 2024

Pre-Trial

Trial Information

General Information

Title
Competition, Cheating and Feedback
RCT ID
AEARCTR-0013949
Initial registration date
September 18, 2024

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
September 24, 2024, 2:47 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
IMT School for Advanced Studies Lucca

Additional Trial Information

Status
In development
Start date
2024-09-19
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We propose an online experiment to study the role of feedback on dishonesty in a competitive setting. The effects of competition and feedback on dishonest reporting behaviour have been widely studied in the literature. While some studies suggest that competition can increase unethical conduct, others have found that competitive environments can also promote integrity and fairness. Similarly, the impact of performance feedback on dishonest behaviour is mixed, with studies showing either an increase or a decrease in cheating. The experiment aims to shed light on the interplay between competition and feedback and their joint influence on individuals' propensity to misreport outcomes. In the experiment, subjects have an incentive to misreport a random outcome in their favour. We study how misreporting such an outcome is affected by the payment scheme (individual piece rate vs winner-takes-all contest) and by the information about others' choices (no information vs past reports). Moreover, we study whether the effect of competition carries over and affects individuals' misreporting in a subsequent round with a piece-rate payment scheme.
External Link(s)

Registration Citation

Citation
Albertazzi, Andrea and Elke Weidenholzer. 2024. "Competition, Cheating and Feedback." AEA RCT Registry. September 24. https://doi.org/10.1257/rct.13949-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
There are two treatment dimensions:

* Incentive scheme: piece-rate vs winner-takes-all contest
* Available information about another participant/opponent: no information vs information about their past behaviour in the experiment
Intervention Start Date
2024-09-19
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
Average points obtained in each part of the experiment.
Primary Outcomes (explanation)
Average points are measured as the mean of obtained points for each subject in each part. In part 2 of treatments with competition, we halve the points corresponding to each outcome to ease comparisons between parts.

Secondary Outcomes

Secondary Outcomes (end points)
Binary variable equal to one if the average points obtained in one part are greater than 2.
Beliefs about others' behaviour.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We propose an online experiment studying the role of competition and feedback on dishonesty. The experiment consists of three parts. In each part, participants can roll a virtual four-sided die 20 times and report each outcome. Each outcome is associated with a number of points ranging from 0 to 3.

In Part 1, all subjects receive a piece-rate payment corresponding to the total sum of points associated with their reported outcomes.

In Part 2, participants are assigned to one of the following treatments:

1) Ind No F: same as part 1

2) Ind F: same as part 1, but additionally, before rolling the die, the subjects receive information about the distribution of reports made by another random participant in part 1.

3) Comp No F: same as part 1, but subjects are competing in a winner-takes-all contest and will only be paid if they earn more points than another random participant they are matched with.

4) Comp F: same as (3), but additionally, before rolling the die, subjects receive information about the distribution of reports in part 1 made by the random participant they have been matched with in part 2.

Part 3 is identical to Part 1.

In treatments (3) and (4) of Part 2, the number of points associated with each die-roll outcome is doubled in order to keep monetary incentives comparable across parts (assuming a 50% probability of winning).
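The comparability claim can be checked with a minimal simulation sketch (not part of the registration), assuming honest reporting of a uniform four-sided die and an exogenous 50% chance of winning the contest:

```python
import random

random.seed(1)
N = 200_000  # simulated die rolls

# Piece rate: each roll of a four-sided die yields 0-3 points, all equally likely.
piece_rate = [random.randint(0, 3) for _ in range(N)]

# Contest: points are doubled, but paid out only on a win (assumed 50% probability).
contest = [2 * random.randint(0, 3) * (random.random() < 0.5) for _ in range(N)]

print(sum(piece_rate) / N)  # ~1.5 expected points per roll
print(sum(contest) / N)     # ~1.5 expected points per roll
```

Both schemes yield the same expected points per roll, which is why the doubling keeps monetary incentives comparable under the stated 50%-win assumption.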

At the end of the experiment, we elicit beliefs about others' behaviour, risk preferences, willingness to compete, and participants' sex at birth.

The experiment will be conducted online via Qualtrics, with real-time interaction implemented through the SMARTRIQS software. We will recruit participants on Prolific from a US subject pool. A typical session will last approximately 15 minutes, and participants will receive a fixed fee of £1.50 plus a performance-based bonus payment, resulting in an expected total compensation of around £18 per hour.

Additional treatments: upon funding availability, we will run a further experiment to elicit third-party beliefs on behaviour in our main experiment.
Experimental Design Details
Not available
Randomization Method
Randomisation is performed by the online platform: participants enter the study without prior knowledge of the treatments and are blindly allocated to one of the four experimental conditions.
Randomization Unit
Randomization is at the individual participant level: each participant is independently assigned to one of the experimental conditions.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
600 human subjects
Sample size: planned number of observations
Participants failing an attention test at the beginning of the experiment will be dropped. Because of the interactive nature of the experiment, some participants may drop out during the experiment due to time constraints; if a participant drops out, their matched partner must also drop out, making the observations of the whole pair unusable. For this reason, we plan to collect at least 200 "usable" participants per experimental condition and will stop collecting data once this number has been reached for each treatment, giving a total of at least 600 subjects.
Sample size (or number of clusters) by treatment arms
200 subjects per treatment.
We will recruit participants on Prolific, restricted to the US sample, with an approval rate of at least 99% and fluency in English.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
In a pilot of our baseline treatment (individual, without feedback), the average number of points in Part 2 was 1.721429, with a standard deviation of 0.4714887. A two-sample t-test with 200 subjects per treatment and equal variances allows us to detect an effect size of 0.1324 (experimental-group mean of 1.8538) with a power of 0.8 at alpha = 0.05.
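The reported minimum detectable effect can be reproduced, up to a normal approximation of the t distribution, with a short stdlib-only Python sketch; the inputs (200 subjects per arm, alpha = 0.05, power = 0.8, and the pilot standard deviation) are the figures stated above:

```python
from statistics import NormalDist

sd = 0.4714887        # pilot standard deviation of points
n = 200               # subjects per treatment arm
alpha, power = 0.05, 0.80

z = NormalDist().inv_cdf
# Normal approximation to the two-sample, equal-variance test:
# MDE (in Cohen's d units) = (z_{1-alpha/2} + z_{power}) * sqrt(2/n)
d = (z(1 - alpha / 2) + z(power)) * (2 / n) ** 0.5  # ~0.280 standard deviations
mde = d * sd                                        # ~0.132 points

print(round(mde, 4))  # prints 0.1321
```

The small gap to the registered 0.1324 comes from approximating the noncentral t distribution with a normal; an exact t-based power calculation (e.g. statsmodels' `TTestIndPower`) closes it.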
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Essex, ERAMS
IRB Approval Date
2023-02-13
IRB Approval Number
ETH2223-0884
Analysis Plan

There is information in this trial unavailable to the public.