Integrity on the Internet
Last registered on December 04, 2017


Trial Information
General Information
Integrity on the Internet
Initial registration date
December 04, 2017
Last updated
December 04, 2017 2:28 PM EST
Primary Investigator
Harvard University
Other Primary Investigator(s)
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
Additional Trial Information
In development
Start date
End date
Secondary IDs
While the internet has eased the exchange of information, the veracity of information obtained online can be difficult to determine. As many transactions move online, organizations face challenges in ensuring the integrity of the data they acquire. For example, online review platforms want truthful reviews that provide an informative signal to consumers, insurance companies benefit from truthful claims that do not falsely raise their costs, and social networks can improve their network quality with accurate information. In this study, we explore how being online influences individuals’ propensity to be honest, as well as the conditions under which individuals are more likely to tell the truth online.
External Link(s)
Registration Citation
Bazerman, Max et al. 2017. "Integrity on the Internet." AEA RCT Registry. December 04. https://doi.org/10.1257/rct.2609-1.0.
Former Citation
Bazerman, Max et al. 2017. "Integrity on the Internet." AEA RCT Registry. December 04. http://www.socialscienceregistry.org/trials/2609/history/23705.
Experimental Details
The main interventions will vary how participants are asked to report answers to a task, as well as features of the reporting form.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The key outcome variable of interest is the degree to which participants are honest.
Primary Outcomes (explanation)
In study 1, we will analyze the reported sum of the first two die rolls, which are incentivized (higher reported sums increase the probability of winning $50 prizes). In study 2, we will analyze the number of unsolvable matrices solved (also incentivized).
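Because each fair twelve-sided die roll has an expected value of 6.5, the honest benchmark for the sum of the first two rolls is 13; aggregate over-reporting can therefore be detected by comparing the mean reported sum against this benchmark. A minimal simulation sketch of that benchmark (illustrative only, not the authors' analysis code):

```python
import random

def honest_benchmark(n_sims=100_000, seed=0):
    """Simulate the mean sum of the first two rolls of a fair d12
    under fully honest reporting."""
    rng = random.Random(seed)
    total = sum(rng.randint(1, 12) + rng.randint(1, 12) for _ in range(n_sims))
    return total / n_sims

# The honest mean sum is 13; a mean reported sum significantly above 13
# would indicate over-reporting in aggregate.
```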
Secondary Outcomes
Secondary Outcomes (end points)
In study 2, we will also analyze a binary indicator for any cheating.
Secondary Outcomes (explanation)
This will be measured by assigning 1 to the indicator variable if any unsolvable matrix was claimed to be solved.
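Since half of the twenty matrices are unsolvable, any claim to have solved one of them is unambiguous evidence of cheating. A sketch of how the binary indicator could be computed (variable names are illustrative, not from the study materials):

```python
def any_cheating(claimed_solved, unsolvable):
    """Return 1 if the participant claimed to solve any unsolvable matrix.

    claimed_solved: set of matrix indices the participant marked as solved.
    unsolvable: set of indices of the unsolvable matrices (10 of 20 in Study 2).
    """
    return int(bool(claimed_solved & unsolvable))
```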
Experimental Design
Experimental Design
In Study 1, we run a lab experiment in which individuals are asked to complete a task. Each individual is randomly assigned to one of 6 conditions, varying how the reporting form is filled out.

In Study 2, we will run an experiment on Amazon's Mechanical Turk. We will ask participants to complete a task, assigning each individual randomly to one of 3 conditions that vary the display of the task.
Experimental Design Details
Study 1: We will recruit participants who are part of the Harvard Computer Lab for Experimental Research (CLER) subject pool. Using two rounds of lab bundles, we will ask participants to roll a twelve-sided die and report the sum of the first two rolls. The reported sum equals the number of entries they receive in a raffle to win one of three $50 prizes. There will be six conditions (3x2):
• Online vs. offline reporting form
• Signature on reporting form:
  o Signature at top of reporting form
  o Signature at bottom of reporting form
  o No signature on reporting form

Study 2: We will recruit participants through Amazon Mechanical Turk and limit the experiment to US-based English-speaking participants. Participants will be given a matrix task and will be asked to mark whether or not they solved each of twenty matrices. Half of the matrices will be unsolvable. There will be three conditions:
• Banner throughout the task showing the participant's name
• Banner throughout the task showing an honesty prompt and the participant's name
• No banner
Randomization Method
Randomization done in office by a computer
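Computer randomization with equal-sized arms can be implemented by shuffling a balanced list of condition labels. A sketch of one such assignment procedure (the condition names and seed are illustrative assumptions, not the study's actual code):

```python
import random

def assign_conditions(participant_ids, conditions, seed=42):
    """Randomly assign each participant to a condition, keeping
    arm sizes as equal as possible by shuffling a balanced label list."""
    rng = random.Random(seed)
    n = len(participant_ids)
    # Repeat the condition list enough times, truncate to n, then shuffle.
    arms = (conditions * (n // len(conditions) + 1))[:n]
    rng.shuffle(arms)
    return dict(zip(participant_ids, arms))

# Hypothetical labels for Study 2's three banner conditions:
study2_conditions = ["name_banner", "honesty_banner", "no_banner"]
```

With 300 participants and three conditions, this yields exactly 100 participants per arm, matching the planned allocation.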
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Study 1: 450 lab participants. Study 2: 300 Mechanical Turkers.
Sample size: planned number of observations
Study 1: 450 lab participants. Study 2: 300 Mechanical Turkers.
Sample size (or number of clusters) by treatment arms
Study 1: 75 participants per condition. Study 2: 100 participants per condition.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
Harvard University
IRB Approval Date
IRB Approval Number
Analysis Plan

Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Is public data available?
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)