
Ethics

Last registered on March 24, 2021

Pre-Trial

Trial Information

General Information

Title
Ethics
RCT ID
AEARCTR-0007243
Initial registration date
February 22, 2021

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 23, 2021, 6:19 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
March 24, 2021, 11:42 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
Northwestern University
PI Affiliation
University of Michigan

Additional Trial Information

Status
In development
Start date
2021-02-01
End date
2021-04-01
Secondary IDs
Abstract
Misinformation abounds, particularly on the internet and social media. Previous studies have focused on the demand side of misinformation, but much less is known about the supply side: who is willing, and under what circumstances, to manipulate or fabricate data that can rapidly spread on social media? Our study will address these questions by studying workers’ willingness to create misleading graphs about the current Covid-19 pandemic. Anecdotal evidence suggests misleading graphs about the Covid-19 pandemic are proliferating on social media, contributing to hostile protests and further jeopardizing public health. Leveraging an online field experiment, we plan to hire workers on MTurk and ask them to create misleading graphs about Covid-19 death rates. We will randomly assign workers to conditions that vary whether they are asked to manipulate data. Our main outcome variables are (i) whether workers complete the job, (ii) how strongly they manipulate the data, and (iii) how misleading the public perceives the manipulated graphs to be. By examining the supply side of misinformation, this research contributes to stopping the spread of misinformation broadly, and health misinformation specifically, on social media and social technology platforms.
External Link(s)

Registration Citation

Citation
Cohn, Alain, Hatim Rahman and Jan Stoop. 2021. "Ethics." AEA RCT Registry. March 24. https://doi.org/10.1257/rct.7243-1.2000000000000002
Experimental Details

Interventions

Intervention(s)
Field Experiment

Workers will complete two “human intelligence tasks” (or HITs), requiring them to create graphs using Covid19 data. After completing the first task, the same workers will be invited to complete a follow-up HIT that will again require them to create Covid19 graphs. For the follow-up HIT, we will randomly assign subjects to one of three conditions:
• Control treatment: Workers will be paid $1.20 to complete this HIT.
• Low-pay treatment: Workers will be paid $0.60 to complete this HIT.
• Unethical treatment: Workers will be paid $1.20 to complete this HIT and they will be asked to manipulate the data so that the Covid19 graphs look less worrying than the actual data suggest.
Intervention Start Date
2021-02-24
Intervention End Date
2021-03-11

Primary Outcomes

Primary Outcomes (end points)
Field Experiment
The aim of the field experiment is to measure workers’ willingness to perform unethical tasks (i.e., manipulate Covid19 graphs).

Primary outcomes:
• [Accept] Do workers accept the follow-up HIT? (yes/no)
• [Falsification] To what extent do the Covid19 death numbers in the graph deviate from the original data? Falsification = manipulated data – actual data (a computation sketch follows below).
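
A minimal computation sketch (not part of the registration) of the falsification score defined above, written in Python; the function name, the list representation of the graph data, and the averaging across data points are illustrative assumptions, since the registration only defines the per-point difference:

def falsification_score(manipulated, actual):
    # Mean of (manipulated - actual) across data points; negative values
    # mean the worker understated Covid19 deaths relative to the real data.
    diffs = [m - a for m, a in zip(manipulated, actual)]
    return sum(diffs) / len(diffs)

# Hypothetical example: a worker understates deaths on three dates.
actual = [120, 135, 150]
manipulated = [100, 110, 120]
print(falsification_score(manipulated, actual))  # -25.0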

Manipulation Check
The aim of the manipulation check survey is to measure the perceived moral acceptability of the follow-up HIT. We will randomly assign participants to one of two conditions. Participants will read about either the Control HIT or the Treatment/Unethical HIT.

Primary outcome:
• [Moral] “Do you personally believe that working on this follow-up HIT is morally acceptable, morally unacceptable, or is it not a moral issue?” (Morally acceptable, Morally unacceptable, Not a moral issue)

Downstream Consequences Survey

The aim of this survey is to measure the downstream consequences of the manipulated Covid19 graphs. We will randomly assign participants to one of two conditions. Participants will see a representative graph from either the Control HIT or the Treatment/Unethical HIT.

Primary outcomes:
• [Actions] Given the current situation with the Covid19 outbreak, would you feel comfortable or uncomfortable doing each of the following in [state X]? Going out to the grocery store, Eating out in a restaurant, Attending an indoor sporting event or concert, Visiting with a close friend or family member inside their home, Supporting mandatory mask wearing in public places in [state X], Traveling to [state X] for a pre-paid trip in the next month. (1-7)
• [Risk perceptions 1] How worried are you about the health consequences of Covid19 for you? (1-7)
• [Risk perceptions 2] How worried are you that the Covid19 mutation will lead to a new wave of infections? (1-7)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Field Experiment

Treatment effect heterogeneity:
• Are workers who politically identify as Republicans more likely to accept the unethical follow-up HIT than workers who politically identify as Democrats?
• Among those who accept the job, do Republican workers manipulate the Covid19 data more strongly (i.e., reduce the Covid19 death rate) than Democrat workers?

Manipulation Check
• [Self-report Accept] “How likely would you be to work on this follow-up HIT?” (1-7)
• [Others Accept] “Out of 100 MTurkers, how many do you think would accept and work on the follow-up HIT?” (0-100)
• [Pay] “What is the lowest pay you would accept to work on this follow-up HIT?” ($0-2)
• [Responsibility] “To what extent would you feel personally responsible for how it affects other people?” (1-7)


Downstream Consequences Survey
• [Sharing] “How likely would you be to share this graph with your friends and/or followers on social media?” (1-7)
• [Trust media] “How would you rate your trust in the mainstream media's reporting of Covid19?” (1-7)
• [Trust science] “How would you rate your trust in the job Public health officials, such as those at the CDC (Centers for Disease Control and Prevention), are doing responding to the Covid19 outbreak?” (1-7)
• [Trust government 1] “How would you rate the Trump/Biden administration’s handling of the Covid19 outbreak?” (1-7)
• [Trust government 2] “To what extent do you think that the Trump/Biden administration downplayed or exaggerated the severity of the Covid19 outbreak?” (1-7)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The basic setup of this experiment is to hire workers on MTurk and ask them to create misleading graphs about the current Covid-19 pandemic. We will then measure whether they are willing to perform this task and to what extent they manipulate the data.

We plan to recruit 1,200 workers on MTurk (400 for each of the three conditions). When recruiting subjects on MTurk, we will require that workers are US-based, have a HIT approval rate of at least 98%, and have successfully completed at least 1,000 HITs (standard criteria to ensure high-quality data).

The experiment consists of three steps:
1. We will approach MTurkers and ask whether they are interested in creating professional-looking graphs (i.e., plotting data into graphs) showing the rate of Covid19 infections for a specific US state. We will recruit 50% Democrats and 50% Republicans. Workers will be paid $1.20 for the HIT, which takes about six minutes to complete. This first step is the same for all workers.
2. All workers who complete the first task will be randomly assigned to one of three conditions (Baseline, Low-Pay, and Unethical).
3. We will then contact those workers and ask them to create similar professional-looking graphs, this time about Covid19 deaths. Workers will be paid $1.20 (or $0.60 in the Low-Pay treatment) for the HIT, which again takes about six minutes to complete.

After all responses have been collected, we will debrief the workers.

In addition to the main field experiment, we will also conduct two types of surveys. First, we will conduct a manipulation check (using a between-subjects design) with a new group of MTurkers. The primary goal of the manipulation check is to gauge workers’ perceptions of the moral acceptability of the follow-up HIT.

We will also conduct a survey (using a between-subjects design) with a representative sample from the U.S. to measure the downstream consequences of the (manipulated) Covid19 graphs. To this end, we plan to recruit participants via Prolific. We will present participants with representative graphs created by the MTurkers in the different conditions and then ask them several questions about how the graphs would affect their attitudes and behavior related to the Covid19 pandemic.
Experimental Design Details
Randomization Method
Field Experiment

We will place a job recruitment advertisement on MTurk for the first HIT, balancing the number of Democrats and Republicans we invite. There is no randomization at this stage; jobs are filled on a “first come, first served” basis.

After completion of the first HIT, we will randomly assign workers to one of the three treatments. The randomization will be done in Excel, with each worker having the same probability of ending up in each of the three treatments. We will use block randomization with respect to political party affiliation.
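
The registration specifies that the randomization will be done in Excel. Purely to make the procedure concrete, the Python sketch below shows one way to implement block randomization by party affiliation with equal assignment probabilities; the worker records, field names, and block size are illustrative assumptions:

import random

TREATMENTS = ["Baseline", "Low-Pay", "Unethical"]

def block_randomize(workers, seed=2021):
    # Within each party, shuffle workers and hand out the three treatments
    # in shuffled blocks of three, so arms stay balanced within party.
    rng = random.Random(seed)
    assignments = {}
    for party in sorted({w["party"] for w in workers}):
        members = [w for w in workers if w["party"] == party]
        rng.shuffle(members)
        for i in range(0, len(members), len(TREATMENTS)):
            block = TREATMENTS[:]
            rng.shuffle(block)
            for worker, treatment in zip(members[i:i + len(TREATMENTS)], block):
                assignments[worker["id"]] = treatment
    return assignments

# Hypothetical worker list with equal numbers of Democrats and Republicans.
workers = [{"id": n, "party": p}
           for n, p in enumerate(["Democrat", "Republican"] * 6)]
print(block_randomize(workers))
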
Randomization Unit
Randomization will be done at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Each observation is independent of the other observations. Thus, there is no need for clustering.
Sample size: planned number of observations
Field Experiment: We plan to recruit 1,200 workers on MTurk (400 for each of the three treatments). According to a power calculation, we need about 388 observations per treatment to detect a small effect in job acceptance (i.e., Cohen’s d = 0.2) with a power of 0.80 and an alpha of 0.05 (two-sided).

Manipulation Check: We plan to recruit 800 workers on MTurk (400 for each of the two main treatments, Control and Unethical). According to a power calculation, we need about 394 observations per treatment to detect a small effect in how ethical workers perceive the follow-up HIT (i.e., Cohen’s d = 0.2) with a power of 0.80 and an alpha of 0.05 (two-sided).

Downstream Consequences Survey: We plan to recruit 800 subjects via Prolific (400 for each of the two main treatments, Control and Unethical). According to a power calculation, we need about 394 observations per treatment to detect a small effect in Covid19-related behavior (i.e., Cohen’s d = 0.2) with a power of 0.80 and an alpha of 0.05 (two-sided).
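
Numbers of this magnitude follow from a standard two-sample power calculation for Cohen’s d = 0.2. As an illustration only (statsmodels is an assumed tool choice, not the authors’ software):

import math
from statsmodels.stats.power import TTestIndPower

# Per-arm sample size to detect Cohen's d = 0.2 with power 0.80 and
# two-sided alpha = 0.05 in a two-sample comparison.
n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05,
                                        power=0.80, alternative="two-sided")
print(math.ceil(n_per_arm))  # about 394 observations per treatment arm
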
Sample size (or number of clusters) by treatment arms
See above.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Field Experiment: Given a type I error (α) of 5 percent and a power (1−β) of 80 percent, a minimum detectable effect size (δ) of 10 percentage points in job acceptance between two treatments requires 388 observations per treatment (chi-squared tests).

Manipulation Check: Given a type I error (α) of 5 percent and a power (1−β) of 80 percent, a minimum detectable effect size (δ) of 10 percentage points in perceived moral acceptability (binary variable) between two treatments requires 388 observations per treatment (Mann-Whitney U test).

Downstream Consequences Survey: Given a type I error (α) of 5 percent and a power (1−β) of 80 percent, a minimum detectable effect size (δ) of 0.4 units in actions and risk perceptions (7-point Likert scales) between two treatments requires 394 observations per treatment (Mann-Whitney U tests).
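
For the 10-percentage-point MDE in a binary outcome such as job acceptance, an analogous two-proportion calculation is sketched below. The baseline acceptance rate is not stated in the registration, so 50% versus 60% is assumed purely for illustration; under that assumption the result lands near the 388 observations per arm quoted above:

import math
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Cohen's h for an assumed 50% vs. 60% acceptance rate (a 10 pp difference).
h = proportion_effectsize(0.60, 0.50)
n_per_arm = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                         power=0.80, alternative="two-sided")
print(math.ceil(n_per_arm))  # about 388 observations per treatment arm
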
IRB

Institutional Review Boards (IRBs)

IRB Name
Internal Review Board Experimental Research of the Erasmus Research Institute of Management, Erasmus University Rotterdam
IRB Approval Date
2020-01-14
IRB Approval Number
IRB-E Approval 2019-06
IRB Name
The Northwestern University IRB
IRB Approval Date
2020-07-17
IRB Approval Number
STU00212827
IRB Name
Health Sciences and Behavioral Sciences Institutional Review Board
IRB Approval Date
2020-05-26
IRB Approval Number
HUM00179761
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials