Integrity on the Internet
Last registered on December 04, 2017


Trial Information
General Information
Integrity on the Internet
Initial registration date
December 04, 2017
Last updated
December 04, 2017 2:28 PM EST

Primary Investigator
Harvard University
Other Primary Investigator(s)
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
Additional Trial Information
In development
Start date
End date
Secondary IDs
While the internet has made it easier to exchange information, the veracity of information obtained online can be difficult to verify. As more transactions move online, organizations face new challenges in ensuring the integrity of the data they acquire. For example, online review platforms want truthful reviews that provide an informative signal to consumers, insurance companies benefit from truthful claims that do not inflate their costs, and social networks can improve their network quality with accurate information. In this study, we explore how being online influences individuals’ propensity to be honest, as well as the conditions under which individuals are more likely to tell the truth online.
External Link(s)
Registration Citation
Bazerman, Max et al. 2017. "Integrity on the Internet." AEA RCT Registry. December 04.
Experimental Details
The main interventions will vary how participants are asked to report answers to a task, as well as features of the reporting form itself.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The key outcome variable of interest is the degree to which participants are honest.
Primary Outcomes (explanation)
In study 1, we will analyze the reported sum of the first two die rolls, which are incentivized (higher reported sums increase the probability of winning $50 prizes). In study 2, we will analyze the number of unsolvable matrices solved (also incentivized).
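As a hypothetical sketch of this analysis (the registration includes no analysis code, and all names below are illustrative assumptions), the reported die-roll sums could be compared against the honest benchmark for two fair six-sided dice:

```python
# Illustrative sketch only, not the study's actual analysis code.

def honest_mean_of_two_dice() -> float:
    """Expected sum of two fair six-sided dice under truthful reporting."""
    faces = range(1, 7)
    return sum(a + b for a in faces for b in faces) / 36  # 7.0

def excess_reporting(reported_sums: list[int]) -> float:
    """Mean reported sum minus the honest benchmark; a positive value
    suggests over-reporting (dishonesty) on average."""
    return sum(reported_sums) / len(reported_sums) - honest_mean_of_two_dice()
```

Under truthful reporting, reported sums should average 7; because higher sums raise the chance of winning a prize, systematic deviations above 7 would indicate inflated reports.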
Secondary Outcomes
Secondary Outcomes (end points)
In study 2, we will also analyze a binary indicator for any cheating.
Secondary Outcomes (explanation)
This will be measured by assigning 1 to the indicator variable if any unsolvable matrix was claimed to be solved.
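A minimal sketch of how this indicator could be coded (hypothetical; the function and input names are assumptions, not the study's actual code):

```python
def any_cheating(claimed_solved: list[bool], solvable: list[bool]) -> int:
    """Return 1 if the participant claimed to have solved at least one
    unsolvable matrix, else 0. Inputs are parallel per-matrix lists:
    claimed_solved[i] is the participant's claim for matrix i, and
    solvable[i] is whether matrix i was actually solvable."""
    return int(any(claimed and not ok
                   for claimed, ok in zip(claimed_solved, solvable)))
```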
Experimental Design
Experimental Design
In Study 1, we run a lab experiment in which individuals are asked to complete a task. Each individual is randomly assigned to one of 6 conditions, which vary how the reporting form is filled out.

In Study 2, we will run an experiment on Amazon's Mechanical Turk. We will ask participants to complete a task, randomly assigning each individual to one of 3 conditions that vary how the task is displayed.
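A computer-based random assignment with equal arm sizes (e.g., 450 lab participants across 6 conditions, 75 per arm) could be sketched as follows; this is an illustrative assumption, not the registry's actual randomization procedure:

```python
import random

def assign_conditions(participant_ids: list[int], n_conditions: int,
                      seed: int = 0) -> dict[int, int]:
    """Assign each participant to one of n_conditions with equal arm
    sizes (assumes the sample divides evenly across conditions), then
    shuffle with a fixed seed so the assignment is reproducible."""
    rng = random.Random(seed)
    arms = [i % n_conditions for i in range(len(participant_ids))]
    rng.shuffle(arms)
    return dict(zip(participant_ids, arms))
```

Shuffling a pre-balanced list of condition labels guarantees exactly equal arm sizes, unlike drawing each participant's condition independently.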
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Study 1: 450 lab participants. Study 2: 300 Mechanical Turkers.
Sample size: planned number of observations
Study 1: 450 lab participants. Study 2: 300 Mechanical Turkers.
Sample size (or number of clusters) by treatment arms
Study 1: 75 participants per condition. Study 2: 100 participants per condition.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
Harvard University
IRB Approval Date
IRB Approval Number