Integrity on the Internet

Last registered on December 04, 2017

Pre-Trial

Trial Information

General Information

Title
Integrity on the Internet
RCT ID
AEARCTR-0002609
Initial registration date
December 04, 2017

The initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
December 04, 2017, 2:28 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University
PI Affiliation
Harvard University

Additional Trial Information

Status
In development
Start date
2017-12-04
End date
2019-01-31
Secondary IDs
Abstract
While the internet has eased the ability to exchange information, the veracity of information obtained online can be difficult to determine. As many transactions move online, organizations now face challenges in ensuring the integrity of the data that they acquire. For example, online review platforms want truthful reviews that provide an informative signal to consumers, insurance companies benefit from truthful claims that do not falsely raise their costs, and social networks can improve their network quality with accurate information. In this study, we explore how being online influences individuals' propensity to be honest, as well as the conditions under which individuals are more likely to tell the truth online.
External Link(s)

Registration Citation

Citation
Bazerman, Max et al. 2017. "Integrity on the Internet." AEA RCT Registry. December 04. https://doi.org/10.1257/rct.2609-1.0
Former Citation
Bazerman, Max et al. 2017. "Integrity on the Internet." AEA RCT Registry. December 04. https://www.socialscienceregistry.org/trials/2609/history/23705
Experimental Details

Interventions

Intervention(s)
The main interventions will vary how participants are asked to report answers to a task, as well as different features of the reporting form.
Intervention (Hidden)
This project consists of two studies. In the first study, which we will conduct in the lab, we vary whether participants are asked to report answers to a task on paper or on an online form, as well as whether they are asked to sign at the top or bottom of the reporting form (or not at all). In the second study, which we will conduct online (on Amazon's Mechanical Turk), we vary whether participants are shown their name or an honesty prompt that they sign along with their name in a banner throughout the task (compared to the control condition, where nothing is shown).
Intervention Start Date
2017-12-04
Intervention End Date
2018-01-31

Primary Outcomes

Primary Outcomes (end points)
The key outcome variable of interest is the degree to which participants are honest.
Primary Outcomes (explanation)
In study 1, we will analyze the reported sum of the first two die rolls, which are incentivized (higher reported sums increase the probability of winning $50 prizes). In study 2, we will analyze the number of unsolvable matrices solved (also incentivized).

Secondary Outcomes

Secondary Outcomes (end points)
In study 2, we will also analyze a binary indicator for any cheating.
Secondary Outcomes (explanation)
This will be measured by assigning 1 to the indicator variable if any unsolvable matrix was claimed to be solved.
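As a minimal sketch, the binary cheating indicator described above could be coded as follows (all variable and function names are illustrative, not taken from the registry; the registry specifies only that the indicator equals 1 if any unsolvable matrix was claimed to be solved):

```python
def any_cheating(reported_solved, is_solvable):
    """Return 1 if the participant claimed to solve any unsolvable matrix, else 0.

    reported_solved: list of booleans, True if the participant marked matrix i as solved
    is_solvable:     list of booleans, True if matrix i is actually solvable
    """
    return int(any(claimed and not solvable
                   for claimed, solvable in zip(reported_solved, is_solvable)))


# A participant who claims an unsolvable matrix is flagged; one who only
# claims solvable matrices is not.
flagged = any_cheating([True, True], [True, False])    # claims an unsolvable matrix -> 1
honest = any_cheating([True, False], [True, False])    # claims only the solvable one -> 0
```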

Experimental Design

Experimental Design
In Study 1, we run a lab experiment, where individuals are asked to complete a task. Each individual is randomly assigned to one of 6 conditions, varying how the reporting form is filled out.

In Study 2, we will run an experiment on Amazon's Mechanical Turk. We will ask participants to complete a task, assigning each individual randomly to one of 3 conditions that vary the display of the task.
Experimental Design Details
Study 1: We will recruit participants from the Harvard Computer Lab for Experimental Research (CLER) subject pool. Using two rounds of lab bundles, we will ask participants to roll a twelve-sided die and report the sum of the first two rolls. The sum equals the number of entries they will receive in a raffle to win one of three $50 prizes. There will be six conditions (3x2):
• Online vs. offline reporting form
• Signature on reporting form:
  • Signature at top of reporting form
  • Signature at bottom of reporting form
  • No signature on reporting form

Study 2: We will recruit participants through Amazon Mechanical Turk and limit the experiment to US-based English-speaking participants. Participants will be given a Matrix task and will be asked to mark whether or not they solved each of twenty matrices. Half of the matrices will be unsolvable. There will be three conditions:
• Banner throughout the task showing the participant's name
• Banner throughout the task showing an honesty prompt and the participant's name
• No banner
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual
Was the treatment clustered?
No
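The individual-level computer randomization described above (no clustering, balanced arms) could be sketched as follows. This is an illustrative implementation, not the investigators' actual code; the condition labels follow the Study 1 design (online/offline form x signature placement), and the fixed seed stands in for whatever reproducible procedure was used in the office.

```python
import random


def assign_conditions(participant_ids, conditions, seed=0):
    """Randomly assign each individual to one condition.

    Shuffles participants, then deals them round-robin across conditions,
    so arm sizes stay balanced. The seed makes the assignment reproducible.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}


# Study 1: 3x2 design -- reporting-form medium crossed with signature placement.
study1_conditions = [
    (medium, signature)
    for medium in ("online", "offline")
    for signature in ("sign_top", "sign_bottom", "no_signature")
]

# 450 planned participants across 6 conditions -> 75 per condition.
assignments = assign_conditions(range(450), study1_conditions)
```

The same helper covers Study 2 by passing its three banner conditions and 300 participant IDs, yielding 100 per arm.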

Experiment Characteristics

Sample size: planned number of clusters
Study 1: 450 lab participants. Study 2: 300 Mechanical Turkers.
Sample size: planned number of observations
Study 1: 450 lab participants. Study 2: 300 Mechanical Turkers.
Sample size (or number of clusters) by treatment arms
Study 1: 75 participants per condition. Study 2: 100 participants per condition.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
2017-11-20
IRB Approval Number
IRB17-1742
Analysis Plan

There is information in this trial that is unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials