
Evaluating Policy Interventions to Prevent the Spread of Health Misinformation on Social Media: Experimental evidence from the Global South

Last registered on November 23, 2022

Pre-Trial

Trial Information

General Information

Title
Evaluating Policy Interventions to Prevent the Spread of Health Misinformation on Social Media: Experimental evidence from the Global South
RCT ID
AEARCTR-0010432
Initial registration date
November 17, 2022


First published
November 18, 2022, 12:31 PM EST


Last updated
November 23, 2022, 7:53 PM EST


Locations

Region

Primary Investigator

Affiliation
University of Oxford

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-11-25
End date
2023-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Belief in misinformation causes confusion, reduces trust in authorities and encourages risky behaviours that can cause significant harm to health, as exemplified by the COVID-19 pandemic. Social media platforms have taken several policy measures to address this challenge: working with independent fact-checking companies to label inaccurate content, promoting verified information through prompts of fact-checked articles, and tailoring their algorithms to demote false posts in the newsfeed. But how effective are these measures? I aim to address this question with a focus on Facebook and its policies to combat health-related misinformation in the context of the Global South. My study has three key goals. First, using an online survey experiment, I will evaluate the effectiveness of a specific label currently used by Facebook to debunk misinformation. Second, I will examine design tweaks informed by behavioural science to improve the effectiveness of the existing label. Finally, I will examine whether introducing a very low-cost and scalable digital media literacy intervention increases discernment between true and false content, as well as the effectiveness of the label in debunking misinformation. I will collect data in two waves, spaced two weeks apart, to measure whether the effects endure over time.
External Link(s)

Registration Citation

Citation
Chandra, Gauri. 2022. "Evaluating Policy Interventions to Prevent the Spread of Health Misinformation on Social Media: Experimental evidence from the Global South." AEA RCT Registry. November 23. https://doi.org/10.1257/rct.10432-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
I am testing the effects of a specific label that Facebook uses to flag inaccurate posts.

Interventions:
1. Label attached to inaccurate posts by third-party fact-checking organisations working with Meta
2. A slightly modified label that is more salient than the original one.
3. Original label + Digital media literacy intervention (a simple low-cost and scalable intervention)

Intervention Start Date
2022-11-25
Intervention End Date
2023-02-28

Primary Outcomes

Primary Outcomes (end points)
Perceived accuracy of the claims contained in the posts
Intention to share the posts
Intention to read an article related to the post (demand for facts)
Primary Outcomes (explanation)
I will measure perceived accuracy of all five posts using the following question, where ** is a placeholder for a description of the claim relevant to the post being viewed:

"Insert claim"
To the best of your knowledge, is this claim true?

It's definitely true
It's likely to be true
It's likely to be false
It's definitely false

I will measure the intention to like / share the five posts, or to read an article related to the post by giving respondents options such as:

"Click here to like this post"
"Click here to share this post"
"Click here to read a related article"

Note that there will be no link provided to actually like or share the posts on Facebook as only screenshots of the posts will be used.
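As a hypothetical coding sketch (the variable names and numeric codes below are my own, not taken from the registration), the primary outcomes above could be recorded as a 4-point perceived-accuracy score plus binary engagement indicators:

```python
# Illustrative coding of the primary outcomes: the 4-point accuracy item
# is coded so higher values mean "perceived as more accurate", and each
# engagement option ("like", "share", "article") becomes a 0/1 indicator.
ACCURACY_CODES = {
    "It's definitely true": 3,
    "It's likely to be true": 2,
    "It's likely to be false": 1,
    "It's definitely false": 0,
}

def code_response(accuracy: str, clicked: set) -> dict:
    """Return coded primary outcomes for one post viewed by a respondent."""
    return {
        "perceived_accuracy": ACCURACY_CODES[accuracy],
        "liked": int("like" in clicked),
        "shared": int("share" in clicked),
        "read_article": int("article" in clicked),
    }
```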

Secondary Outcomes

Secondary Outcomes (end points)
Performance on the cognitive reflection test
Trust in science, experts and authorities;
Conspiracist ideation
Secondary Outcomes (explanation)
How much do you trust each of the following to give you correct health-related news and information?
Scientists, doctors and other health experts (1)
The Indian Government (2)
TV News Channels (3)
Pharmaceutical and biotechnology companies such as Serum Institute of India, AstraZeneca, Pfizer or Moderna (4)
Social Media / Tech giants like Facebook (5)
Fact-checker organisations that verify the accuracy of online viral posts (6)

Response options:
I trust them a lot (2)
I somewhat trust them (1)
I don't trust them (0)

Conspiracist ideation:
There is often debate about whether or not the public is told the whole truth about various important issues. The following questions are designed to assess your beliefs about some of these subjects.
Please indicate the degree to which you believe each statement is likely to be true. (5 point scale)

"The spread of certain viruses and/or diseases is the result of the deliberate, secret efforts of some organisation."
"Mind-controlling technology is used manipulatively on people without their knowledge."
"A lot of important information about diseases and treatments is deliberately kept secret from the public."
"Some viruses and/ or diseases which many people are infected with were created in a lab as bio-weapons."

Response options:
It's definitely true (4)
It's likely to be true (3)
Not sure (2)
It's likely to be false (1)
It's definitely false (0)
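As an illustration, assuming the item codes shown in parentheses above, a simple per-respondent summary of conspiracist ideation is the mean of the four coded responses (the function name and the choice of a mean as the aggregate are my own):

```python
# Illustrative scoring sketch: the four conspiracist-ideation statements
# are each coded 0-4, so a per-respondent summary can be the mean code.
RESPONSE_CODES = {
    "It's definitely true": 4,
    "It's likely to be true": 3,
    "Not sure": 2,
    "It's likely to be false": 1,
    "It's definitely false": 0,
}

def conspiracy_score(answers: list) -> float:
    """Mean of the coded responses across the four statements (range 0-4)."""
    codes = [RESPONSE_CODES[a] for a in answers]
    return sum(codes) / len(codes)
```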

Experimental Design

Experimental Design
I will use a between-subjects design.
Experimental Design Details
Participants will be randomly divided into one of the following arms:

Treatment 1: "missing context" label
Treatment 2: "missing context" label + Red warning sign
Treatment 3: "missing context" label + digital media literacy intervention
Control: No label

I have chosen two inaccurate posts as stimuli for this study, and each participant will be randomly exposed to one of them. There are therefore eight arms in total (four per stimulus), and respondents will be randomly assigned to one of the eight.

Participants will be recontacted after 10 days to return for the second part of the survey, wherein their beliefs (perceived accuracy) about the same claims will be measured again on the same scale as in part one. This is to examine whether the effects of the label persist over time or are ephemeral.
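A minimal sketch of the balanced 2 × 4 assignment described above (the arm names and the use of Python's random module are illustrative; the actual assignment is performed by the Qualtrics algorithm):

```python
import itertools
import random

# Illustrative arm labels: two stimulus posts crossed with four
# conditions (control, original label, more-salient label,
# label + digital media literacy intervention).
STIMULI = ["post_1", "post_2"]
CONDITIONS = ["control", "label", "label_salient", "label_literacy"]
PER_ARM = 600  # planned 600 respondents per treatment arm, per stimulus

# Eight arms in total: every (stimulus, condition) pair.
ARMS = list(itertools.product(STIMULI, CONDITIONS))

# Balanced schedule of 4,800 slots, shuffled so assignment order is random.
schedule = [arm for arm in ARMS for _ in range(PER_ARM)]
random.Random(42).shuffle(schedule)
```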
Randomization Method
Online randomisation by the Qualtrics algorithm
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
4800
Sample size: planned number of observations
4800
Sample size (or number of clusters) by treatment arms
600 per treatment arm, per stimulus
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Blavatnik School of Government, University of Oxford
IRB Approval Date
2022-07-25
IRB Approval Number
SSH/BSG_C1A-22-13

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials