The American Economic Association's registry for randomized controlled trials
Moral transgression: the impact of competition
Last registered on December 02, 2020
Initial registration date
December 01, 2020
December 02, 2020 11:24 AM EST
United Kingdom of Great Britain and Northern Ireland
United States of America
Karlsruhe Institute of Technology
Other Primary Investigator(s)
Victoria University of Wellington
University of Bonn
Additional Trial Information
Firms & Productivity
We investigate the impact of competition on the susceptibility to moral transgressions. We use contests to model competition and consider lying as a prominent form of moral transgression. Contests are likely to influence the frequency of moral transgressions for two reasons: First, the expected monetary benefit from lying may differ between settings with and without competition. Second, contests may reduce moral concerns by triggering a “desire to win”. We design treatments where the expected financial benefit from lying is the same in treatments with and without a contest structure. This allows us to isolate the pure behavioral impact of competing against others. Overall, our study consists of four treatments. We design three treatments with identical financial benefits from lying. In all three treatments, we employ the die-under-the-cup paradigm introduced by Fischbacher & Föllmi-Heusi (2013) where subjects roll a die in private and then report the outcome. We vary the external consequences of lying and the competition with others across treatments. In the fourth treatment, we elicit treatment-specific social norms by following the methodology developed in Krupka and Weber (2013). All treatments will be executed on a large online platform (Prolific).
Dato, Simon, Eberhard Feess, and Petra Nieken. 2020. "Moral transgression: the impact of competition." AEA RCT Registry. December 02.
Sponsors & Partners
Our intervention encompasses four treatments: In all treatments, subjects have to fill out a survey on personal attitudes and social preferences. They receive a fixed compensation for this survey. In three treatments, subjects roll a die in private and can report either a high or a low outcome. High reports increase the probability of receiving a high instead of a low bonus payment. The treatments differ as follows:
Individual treatment I: There is no matching with other subjects. The probability of receiving the high bonus depends on the own report and the outcome of a random draw. The report has no impact on other subjects in the treatment.
Negative externality treatment N: In contrast to treatment I, we match two subjects into one group in treatment N. One subject is in the active role (active subject) and has to roll a die and report the outcome. The other subject is in the passive role (a bystander) and does not roll a die. As in treatment I, a high report increases the probability that the active subject receives the high bonus. In contrast to treatment I, the active subject knows that the bystander will receive the low bonus if the active subject gets the high bonus, and vice versa: if the active subject receives the low bonus, the bystander will receive the high bonus. The active subjects know that the bystanders will learn the report and the resulting payment. Before being informed that they are in the role of a bystander, the bystanders are asked to state their belief about the behavior of active subjects in this experiment: 1/3 of the bystanders state their belief about the reports in the I treatment, 1/3 about the reports in the N treatment, and 1/3 about the reports in the C treatment.
Contest treatment C: Two subjects are matched in one group and both report the outcome of their die roll. The reports are compared in the contest and determine the payment of both subjects.
Social norm treatment: In addition to our three main treatments, we elicit social norms for each of the three main treatments by adjusting the methodology from Krupka and Weber (2013) to our setting. Social norms are elicited from subjects who do not participate in our main treatments. Each subject will be asked to evaluate the possible decisions from one treatment (I, N, and C) only.
Target group: The study will be conducted on Prolific, a large online platform for surveys and market research. We restrict the sample to participants who are fluent in English, whose country of residence is the USA or the UK, and whose prior submissions have been approved in at least 95% of cases. We impose these restrictions to ensure high data quality.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
The main variable of interest is the number of high reports submitted by the subjects. Subjects roll a die in private and are then asked to report the outcome. A die roll of 1 to 4 translates into “Low”; a roll of 5 or 6 translates into “High.”
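The binary coding of the die roll can be sketched as follows (a minimal illustration; the function name and the honesty simulation are ours, not part of the registered design):

```python
import random

def code_report(die_roll: int) -> str:
    """Map a die outcome (1-6) to the binary report category used in the study."""
    if not 1 <= die_roll <= 6:
        raise ValueError("die outcome must be between 1 and 6")
    return "High" if die_roll >= 5 else "Low"

# Under fully honest reporting, "High" occurs with probability 2/6 = 1/3;
# any excess share of "High" reports in the data indicates lying.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(60_000)]
share_high = sum(code_report(r) == "High" for r in rolls) / len(rolls)
```

The honest benchmark of 1/3 is the reference point against which the treatment-specific shares of high reports are compared.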
Primary Outcomes (explanation)
We compare the share of high reports across treatments. Treatment C differs from treatment I in two respects: there is competition, and, as a consequence of competition, a higher expected own payoff reduces the payoff of the other subject. Comparing the frequencies of high reports in treatments C and I hence measures the overall impact of competition. In treatment N, the impact on the other subject is identical to the impact in treatment C. Comparing the frequencies of high reports in treatments C and N hence measures the isolated behavioral impact of competition, net of the impact on other subjects.
Secondary Outcomes (end points)
We also elicit social norms for each of the three main treatments and the beliefs of bystanders in treatment N about the behavior of other subjects. In particular, we ask how many subjects report “high” if the true outcome was “low”.
Secondary Outcomes (explanation)
We run our experiment on Prolific, a large online platform for surveys and market research. Our experiment consists of four treatments. We aim to balance the financial incentives across the three treatments I, N, and C. In treatment C, the financial incentives, i.e., the probability of receiving the high bonus given a report, depend on the other contestant’s report. To implement balanced financial incentives in treatments I and N, we will run one session with 100 subjects in treatment C before executing the remaining sessions. The random draws in treatments I and N then replicate the distribution of reports from the session with 100 subjects in treatment C. This balances the average financial incentives across treatments. We will run the remaining sessions of treatment C and treatments I and N simultaneously by randomly allocating subjects to one of the treatments.
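The balancing mechanism can be sketched as a draw from the empirical distribution of first-session reports (a sketch under our own naming; the 61/39 split below is a hypothetical placeholder, not actual session data):

```python
import random
from collections import Counter

def empirical_draw(first_session_reports, rng=random):
    """Draw a simulated opponent report with the empirical frequencies
    observed in the first contest (C) session of 100 subjects."""
    return rng.choice(first_session_reports)

# Hypothetical first-session outcome: 61 "High" and 39 "Low" reports.
first_session = ["High"] * 61 + ["Low"] * 39
random.seed(1)
draws = Counter(empirical_draw(first_session) for _ in range(10_000))
```

Because each subject in treatments I and N faces a draw with the same report frequencies as a real contestant in treatment C, the expected financial benefit from lying is held constant across treatments.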
Subjects in the Social Norm treatment will be randomly allocated to one of the three possible scenarios (resembling treatments I, N, and C) and the same holds for the subjects in the passive role of treatment N. To test the functionality of the experimental software and to detect possible comprehension problems in our instructions, we plan to run a small pilot session with 30 subjects in the C treatment.
Experimental Design Details
We restrict the sample to participants who are fluent in English, whose country of residence is the USA or the UK, and whose prior submissions have been approved in at least 95% of cases. We impose these restrictions to ensure high data quality. To avoid confounding effects from subjects who cheat or do not read the instructions properly, we use instructional manipulation checks (IMC) and also record the time subjects spend working on the study. In particular, we ask 4 comprehension questions about the design of the treatments and implement one attention check question in our survey about personal characteristics (the question is: “It’s important that you pay attention to this study, please tick ‘strongly disagree’”). If a subject fails all IMC questions, the dataset will be excluded from the analysis. In addition, we will exclude subjects who spend less than 60 seconds on the study.
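The preregistered exclusion rules amount to a simple filter (a sketch; the field names `imc_correct` and `seconds_on_study` are our own labels for the recorded data, not registered variable names):

```python
def should_exclude(subject: dict) -> bool:
    """Apply the preregistered exclusion rules: drop a subject who answers
    none of the four comprehension (IMC) questions correctly, or who spends
    less than 60 seconds on the study."""
    fails_all_imc = subject["imc_correct"] == 0
    too_fast = subject["seconds_on_study"] < 60
    return fails_all_imc or too_fast

kept = should_exclude({"imc_correct": 3, "seconds_on_study": 240})     # retained
dropped = should_exclude({"imc_correct": 0, "seconds_on_study": 240})  # excluded
```

Note that a subject who passes at least one IMC question is retained on that criterion, but can still be excluded on the 60-second speed criterion.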
Randomization is done by the survey software (Qualtrics).
The randomization is done at the individual level. For details, see the experimental description.
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
We plan to collect 1,530 observations in total. One observation corresponds to one subject participating in one of the above-mentioned treatments.
Sample size (or number of clusters) by treatment arms
We plan to collect 300 observations (subjects) for each treatment with the exception of the N treatment where we plan to collect 600 observations (300 active subjects and 300 bystander subjects). In addition, we will execute a small pilot session for the C treatment with 30 subjects.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our main variable of interest is the share of high reports, which varies between zero and 100%. From the meta-analysis in Abeler et al., we expect 28% of subjects who see a low outcome to lie and report high in the I treatment. This would result in a baseline of 61% high reports. A power calculation with a total sample size of 600 (we compare the outcome between two treatments), a power of 0.8, and an alpha of 0.05 leads to a minimum detectable effect size of 0.1128 (two-sample chi-square test).
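As a rough check, the minimum detectable effect can be approximated with the standard two-proportion normal-approximation formula (a sketch only; the registered figure of 0.1128 comes from a two-sample chi-square calculation with a slightly different variance term, so the numbers agree only approximately):

```python
from statistics import NormalDist

def mde_two_proportions(p_base: float, n_per_arm: int,
                        alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable difference in proportions between two equal arms,
    using the normal approximation with the baseline variance p(1-p)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)          # two-sided critical value
    z_beta = z(power)                   # power term
    se = (2 * p_base * (1 - p_base) / n_per_arm) ** 0.5
    return (z_alpha + z_beta) * se

# Baseline share of 61% high reports, 300 subjects per treatment arm.
mde = mde_two_proportions(p_base=0.61, n_per_arm=300)
```

With these inputs the approximation yields an MDE of roughly 0.11, close to the registered value of 0.1128.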
Supporting Documents and Materials
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Approval Date
IRB Approval Number
Post Trial Information
Is the intervention completed?
Is data collection complete?
Is public data available?
Reports, Papers & Other Materials
REPORTS & OTHER MATERIALS