
Content Matters: The Effects of Commitment Requests on Truth-Telling

Last registered on November 25, 2020

Pre-Trial

Trial Information

General Information

Title
Content Matters: The Effects of Commitment Requests on Truth-Telling
RCT ID
AEARCTR-0006700
Initial registration date
November 24, 2020


First published
November 25, 2020, 10:32 AM EST


Locations

Region

Primary Investigator

Affiliation
University of Erlangen-Nuremberg

Other Primary Investigator(s)

PI Affiliation
Bundesbank
PI Affiliation
University of Linz

Additional Trial Information

Status
Ongoing
Start date
2020-11-20
End date
2021-03-01
Secondary IDs
Abstract
This RCT is a replication exercise regarding the cheating game considered in the working paper version of Cagala, Tobias; Glogowsky, Ulrich; Rincke, Johannes (2019), "Content Matters: The Effects of Commitment Requests on Truth-Telling" (for a PDF of the working paper, see Docs & Materials). The original study uses a laboratory experiment to test how the request to sign a no-cheating declaration affects truth-telling. Its main finding is that the effects strongly depend on the declaration's content: signing a no-cheating declaration increases truth-telling if the declaration is morally charged, does not affect behavior if it is morally neutral, and reduces truth-telling if it is morally neutral and threatens punishment. The latter effect is driven by subjects with particularly high values on Hong's Psychological Reactance Scale, that is, subjects with a tendency to push back if their freedom of choice is restricted. We aim to replicate the original design with three main changes. First, we aim to increase the statistical power of the analysis; specifically, we plan to collect 600 individual observations. Second, we plan to embed the survey questions measuring subjects' reactance score in a larger survey that includes the Big 5 personality traits. Third, we plan to collect the experimental data online; this change relative to the original design is due to the fact that laboratories in Germany are currently closed because of the COVID-19 pandemic.
External Link(s)

Registration Citation

Citation
Cagala, Tobias, Ulrich Glogowsky and Johannes Rincke. 2020. "Content Matters: The Effects of Commitment Requests on Truth-Telling." AEA RCT Registry. November 25. https://doi.org/10.1257/rct.6700-1.0
Experimental Details

Interventions

Intervention(s)
Subjects play a standard cheating game. A random draw determines a number between 1 and 6. Subjects receive an additional payoff of 5 Euro if they report a 5, and no additional payoff if they report any other number. At the beginning of the experimental sessions, subjects in the treatment groups sign a no-cheating declaration (control group: no declaration, no signature). The content of the declaration varies between a version referring to the standards of ethically sound behavior (ETHICAL STANDARD), a morally neutral version that does not refer to any ethically loaded norm (NEUTRAL), and a version combining the neutral no-cheating declaration with a threat that non-compliance will be sanctioned (SANCTION). Note that in the replication, we plan to collect the data online.
Intervention Start Date
2020-11-20
Intervention End Date
2021-03-01

Primary Outcomes

Primary Outcomes (end points)
- Indicator for subjects who cheat by reporting a 5, conditional on having drawn a 1, 2, 3, 4, or a 6.
Primary Outcomes (explanation)
- See the working paper under Docs & Materials for details.

Secondary Outcomes

Secondary Outcomes (end points)
- Subjects' values on Hong's Psychological Reactance Scale. We will study treatment effect heterogeneity with respect to this measure.
Secondary Outcomes (explanation)
We focus on an index measuring reactant behavior. To calculate the score, we follow the factor analysis of De Las Cuevas (2014) and average over the subjects' answers to eight of the fourteen statements that are part of the survey. See the working paper under Docs & Materials for details. Specifically, the Appendix of the working paper includes a list of all fourteen statements and indicates which statements we used to calculate the reactance score.
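
As an illustration of how the reactance index described above could be computed from the survey data, the short sketch below averages each subject's answers over a selected subset of Likert items. The function and column names are hypothetical placeholders; the actual eight items follow De Las Cuevas (2014) and are listed in the working paper's Appendix.

    # Illustrative sketch (not the registered implementation): the reactance
    # index as the mean of selected HPRS items per subject. Column names are
    # hypothetical placeholders, not the actual item identifiers.
    import pandas as pd

    def reactance_score(survey: pd.DataFrame, items: list) -> pd.Series:
        # Each item is a 5-point Likert response (1 = strong disagreement,
        # 5 = strong agreement); the score is the subject-level mean.
        return survey[items].mean(axis=1)

    # Hypothetical usage with placeholder column names:
    # scores = reactance_score(survey_df, items=["hprs_q01", "hprs_q04", "hprs_q07"])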

Experimental Design

Experimental Design
Subjects play a standard cheating game. A random draw determines a number between 1 and 6. Subjects receive an additional payoff of 5 Euro if they report a 5, and no additional payoff if they report any other number. At the beginning of the experimental sessions, subjects in the treatment groups sign a no-cheating declaration (control group: no declaration, no signature). The content of the declaration varies between a version referring to the standards of ethically sound behavior (ETHICAL STANDARD), a morally neutral version that does not refer to any ethically loaded norm (NEUTRAL), and a version combining the neutral no-cheating declaration with a threat that non-compliance will be sanctioned (SANCTION). We study the heterogeneity of cheating with respect to the subjects' psychological reactance score. The reactance score is constructed using survey data collected before the experiment. Note that in the replication, we plan to collect all data online.
Experimental Design Details
We implement a simple cheating game (a) to examine whether commitment requests can increase honesty and (b) to test whether freedom-restricting forms of commitment requests backfire. The data collection takes place online. Subjects registered at the laboratory are invited to an online experiment. After logging in to the website hosting the experiment, subjects are informed that the session consists of two parts: a survey and a short experiment. In the first part, subjects receive a payoff of 3 Euro for answering a 15-minute survey on the German inheritance tax schedule. We add this part to the experiment for two reasons. First, by placing other elements before the cheating decision, we follow the standard experimental protocol in the literature. Second, and more importantly, we include this survey to introduce our commitment requests more naturally and to mitigate experimenter demand effects. Specifically, directly after the welcome page, the website redirects subjects in the treatment groups to a page where they are asked to sign the no-cheating declaration right at the beginning of the session. Subjects sign the declaration by typing their full name (first and last name) into a text field. This design element connects the commitment to the entire session rather than to the cheating experiment.

At the beginning of the session's second part, the participants read instructions presented on the computer screen. The instructions inform subjects that the experiment will start with a computerized random draw of a number between one and six that they will be asked to self-report. Subjects also learn from the instructions that their additional payoff (i.e., the payoff in addition to the fixed payment for participating in the survey) will be 5 Euro if they report a 5 and zero if they report a number from the set {1, 2, 3, 4, 6}.

The computerized random draw simulates the process of drawing a chip from an envelope. Subjects first see an envelope containing six chips numbered between one and six on their screen. Subjects then see the chips being shuffled for a few seconds, after which one randomly selected chip falls out of the envelope. On the next screen, subjects are asked to report their draw by entering the number into a field on the screen. Before reporting their draw, subjects can click a button to show the instructions and the payoff structure again; they can also click a button to display the result of the random draw again. After subjects have reported their number, they are asked to answer a few survey questions on individual characteristics (e.g., age and gender). Finally, subjects are informed about their payoff and that it will be paid to them in the form of an Amazon voucher. The invitation to the experiment already informs subjects that they will be paid via Amazon voucher.

The fact that the random draw is computerized makes cheating observable to the researchers at the individual level. This design element comes with the benefit of much higher statistical power compared to approaches that identify cheating by evaluating the empirical distribution of self-reports against the expected distribution under truthful reporting. Nevertheless, if individuals believe that the instructions correctly describe the experimental conditions, the expected (immediate) monetary sanction from cheating is zero: we neither include a monetary punishment for cheating, nor do we communicate a positive probability of such a punishment. Instead, the instructions highlight that a subject's payoff depends exclusively on the reported number.

We complement the experimental data with survey data to elicit the subjects' psychological reactance. The survey-based standard measure of a subject's reactance type used in this paper is Hong's Psychological Reactance Scale. The original scale consists of 14 statements that approximate the degree to which a person shows reactance. For instance, one statement is "regulations trigger a sense of resistance in me", and another reads "when someone forces me to do something, I feel like doing the opposite". To record the answers, we use a 5-point Likert scale, with higher (lower) values indicating stronger agreement (disagreement).

Importantly, to avoid spillovers between survey responses and behavior, we collect the survey data two weeks before the experiment. The procedure for the survey data collection will be as follows. Several days before the collection of the experimental data, subjects who have registered for the experiment will receive an invitation to take part in an online survey. Participants will have 48 hours to answer the questionnaire, and we will remind subjects who have not completed the survey a few hours before the deadline. Answering the online survey will take about five minutes. Participants receive a fixed payoff of 2 Euro for taking part.
Randomization Method
Randomization done in office by a computer. The randomization will be stratified by the subjects' reactance score. Specifically, after collecting the survey data on psychological reactance, we will determine the terciles of the reactance scores and use these terciles as strata. This is meant to ensure that the distribution of reactance is balanced across treatment conditions.
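
For concreteness, the sketch below shows one way such a stratified assignment could be implemented: the terciles of the reactance score serve as strata, and subjects are dealt into the four conditions within each stratum. The function names, seed, and allocation routine are illustrative assumptions, not the procedure actually used.

    # Sketch of stratified randomization by reactance-score terciles
    # (illustrative only; names and seed are assumptions).
    import numpy as np
    import pandas as pd

    ARMS = ["Control", "Ethical Standard", "Neutral", "Sanction"]

    def stratified_assignment(subjects: pd.DataFrame, score_col: str = "reactance",
                              seed: int = 1234) -> pd.Series:
        rng = np.random.default_rng(seed)
        # Strata: terciles of the reactance score
        terciles = pd.qcut(subjects[score_col], q=3, labels=[1, 2, 3])
        assignment = pd.Series(index=subjects.index, dtype=object)
        for _, stratum in subjects.groupby(terciles):
            idx = stratum.index.to_list()
            rng.shuffle(idx)
            # Deal shuffled subjects into the four arms in (near-)equal shares
            for i, subject_id in enumerate(idx):
                assignment.loc[subject_id] = ARMS[i % len(ARMS)]
        return assignment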
Randomization Unit
Individual/subject
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Randomization is at the individual level (no clustering); we aim to collect observations on 600 individuals/subjects.
Sample size: planned number of observations
We aim to collect observations on 600 individuals/subjects.
Sample size (or number of clusters) by treatment arms
150 individuals in control and 150 individuals per treatment arm (three treatments)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
In the working paper, we find the following proportions of cheaters by treatment condition:
- Control: 0.277
- Ethical Standard: 0.116
- Neutral: 0.365
- Sanction: 0.465

As stated above, we plan to collect 150 observations per treatment condition.
- The minimum detectable effect size for a negative effect (relative to Control) of Ethical Standard at a power of 80 percent is -0.132. Put differently, the power to detect an effect of the same size as the one identified in the working paper is 94.3%.
- The minimum detectable effect size for a positive effect (relative to Control) of Neutral at a power of 80 percent is 0.154. In the working paper, we did not identify any significant effect of the neutral condition.
- The minimum detectable effect size for a positive effect (relative to Control) of Sanction at a power of 80 percent is 0.154. Put differently, the power to detect an effect of the same size as the one identified in the working paper is 92.5%.

In the heterogeneity analysis, we will study the treatment effect by tercile of the reactance score. When comparing two treatment groups, these comparisons will be based on 100 observations. In the working paper, we find the following proportions of cheaters by tercile:
- Tercile 1: Control 0.278; Sanction 0.263
- Tercile 2: Control 0.154; Sanction 0.417
- Tercile 3: Control 0.222; Sanction 0.778

In the first tercile, the minimum detectable effect size for a positive effect (relative to Control) of Sanction at a power of 80 percent is 0.273; in the data from the original experiment, this difference in means is small and not significantly different from zero. In the second tercile, the minimum detectable effect size is 0.248; the power to detect an effect of the same size as the one identified in the working paper is 84%. In the third tercile, the minimum detectable effect size is 0.265; the power to detect an effect of the same size as the one identified in the working paper exceeds 99%.

We also plan to run difference-in-differences (DiD) estimations between treatment condition and reactance tercile (i.e., we will test whether the difference between, say, the second and the third reactance-score tercile differs between, say, Sanction and Control). Each of these DiD estimations will use 200 observations.
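
The power figures and minimum detectable effect sizes above can be approximated with a standard two-sample test of proportions. The sketch below assumes a normal approximation, a two-sided test at alpha = 0.05, and the group sizes stated above; it is an illustrative reimplementation under these assumptions and may not reproduce the registered numbers exactly.

    # Approximate power / MDE calculations for a two-sample test of proportions
    # (normal approximation, two-sided, alpha = 0.05). Illustrative sketch only;
    # the registered numbers may come from a different routine or test.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def power_two_prop(p0, p1, n, alpha=0.05):
        # Power to detect the difference p1 - p0 with n subjects per group
        se = np.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
        return norm.cdf(abs(p1 - p0) / se - norm.ppf(1 - alpha / 2))

    def mde_two_prop(p0, n, alpha=0.05, power=0.80, direction=+1):
        # Smallest detectable difference from p0 (in the given direction)
        upper = (1 - p0 if direction > 0 else p0) - 1e-6
        f = lambda d: power_two_prop(p0, p0 + direction * d, n, alpha) - power
        return brentq(f, 1e-6, upper)

    # Examples with the proportions reported above and 150 subjects per arm:
    print(power_two_prop(0.277, 0.465, n=150))        # Control vs. Sanction
    print(mde_two_prop(0.277, n=150, direction=-1))   # MDE for Ethical Standard
    # Tercile-level comparisons use roughly 50 subjects per cell (100 in total):
    print(power_two_prop(0.154, 0.417, n=50))         # Tercile 2, Control vs. Sanction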
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials