
Evaluating Meta’s Policy Interventions to Prevent the Spread of Health Misinformation: Experimental evidence from the Global South

Last registered on November 18, 2022

Pre-Trial

Trial Information

General Information

Title
Evaluating Meta’s Policy Interventions to Prevent the Spread of Health Misinformation: Experimental evidence from the Global South
RCT ID
AEARCTR-0010432
Initial registration date
November 17, 2022


First published
November 18, 2022, 12:31 PM EST


Locations

Region

Primary Investigator

Affiliation
University of Oxford

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-11-21
End date
2023-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Belief in misinformation causes confusion, reduces trust in authorities and encourages risky behaviours that can cause significant harm to health, as exemplified by the COVID-19 pandemic. Social media platforms have taken several policy measures to address this challenge: working with independent fact-checking companies to label inaccurate content, promoting verified information through prompts of fact-checked articles, and tailoring the algorithm to demote false posts in the newsfeed. But how effective are these measures? I aim to address this question with a focus on Facebook and its policies to combat health-related misinformation in the context of the Global South. My study has three key goals. First, I will evaluate the effectiveness of specific policies currently used by Facebook to debunk misinformation using an online experiment. Second, I aim to examine design tweaks informed by behavioural science to improve the effectiveness of these existing policies. Finally, I will examine how core aspects of users’ identities and prior beliefs interact with the content of inaccurate posts to affect the efficacy of the labelling policies, accounting for potential demand-side factors that contribute to the spread of misinformation. I will collect data in two waves, spaced two to three weeks apart, to measure whether the effects endure over time.
External Link(s)

Registration Citation

Citation
Chandra, Gauri. 2022. "Evaluating Meta’s Policy Interventions to Prevent the Spread of Health Misinformation: Experimental evidence from the Global South." AEA RCT Registry. November 18. https://doi.org/10.1257/rct.10432-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Interventions:
1. "Missing context" and "Partly-false information" labels attached to inaccurate Facebook posts by the third-party fact-checking organisations that Meta collaborates with.
2. Promotion of links to related fact-checked articles.

As part of an online survey experiment, I will show all participants screenshots of five real Facebook posts: four that contain true information, and one that has been labelled as either 'missing context' or 'partly false'. Participants in the no-label condition will see the post without a label, and those in the labelled conditions will see the post with the original label. In one treatment, I will show participants a slightly modified 'missing context' label that is more salient than the original one.
Intervention Start Date
2022-11-21
Intervention End Date
2023-02-28

Primary Outcomes

Primary Outcomes (end points)
Perceived accuracy of the claims contained in the posts
Intention to share the posts
Intention to read an article related to the post (demand for facts)
Primary Outcomes (explanation)
I will measure the perceived accuracy of all five posts using the following question, where "[Insert claim]" is a placeholder for a description of the claim in the post being viewed:

"[Insert claim]"
To the best of your knowledge, is this claim true?

It's definitely true
It's likely to be true
It's likely to be false
It's definitely false

I will measure the intention to like or share each of the five posts, and the intention to read an article related to the post, by giving respondents options such as:

"Click here to like this post"
"Click here to share this post"
"Click here to read a related article"

Note that there will be no link provided to actually like or share the posts on Facebook as only screenshots of the posts will be used.
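
As a purely hypothetical illustration of how these responses might later be coded for analysis (the numeric scheme below is an assumption for illustration, not part of the registration), each post-level response could be mapped to analysis variables as follows:

```python
# Hypothetical coding of the primary-outcome survey responses; the numeric
# scheme is an assumption for illustration, not part of the registration.
ACCURACY_SCALE = {
    "It's definitely true": 4,
    "It's likely to be true": 3,
    "It's likely to be false": 2,
    "It's definitely false": 1,
}

def code_responses(accuracy_answer: str, clicked_options: set[str]) -> dict:
    """Turn one participant's answers for a single post into analysis variables."""
    return {
        "perceived_accuracy": ACCURACY_SCALE[accuracy_answer],
        "liked": int("like" in clicked_options),
        "shared": int("share" in clicked_options),
        "read_article": int("read" in clicked_options),  # demand for facts
    }

# Example: a participant who thinks the claim is likely true and clicks "share".
print(code_responses("It's likely to be true", {"share"}))
```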

Secondary Outcomes

Secondary Outcomes (end points)
Trust in science, experts and authorities;
Conspiracist ideation
Secondary Outcomes (explanation)
How much do you trust each of the following to give you correct health-related news and information?

Scientists, doctors and other health experts (1)
The Indian Government (2)
TV News Channels (3)
Pharmaceutical and biotechnology companies such as Serum Institute of India, AstraZeneca, Pfizer or Moderna (4)
Social Media / Tech giants like Facebook (5)
Fact-checker organisations that verify the accuracy of online viral posts (6)

Response options:
I trust them a lot (2)
I somewhat trust them (1)
I don't trust them (0)

Experimental Design

Experimental Design
I will use a between-subjects design.
Experimental Design Details
Participants will be randomly assigned to one of five groups:

1. Treatment A1 ("missing context" label)
2. Treatment A2 ("missing context" label + Red warning sign)
3. Control A (no label)

4. Treatment B ("partly false" label)
5. Control B (no label)
Randomization Method
Online randomisation by the Qualtrics algorithm
Randomization Unit
Individual
Was the treatment clustered?
No
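
As a purely illustrative sketch (the actual assignment is handled by the Qualtrics randomiser, and equal allocation probabilities are an assumption here), individual-level randomisation into the five groups listed under Experimental Design Details could look as follows:

```python
import random

# Hypothetical sketch of individual-level randomisation into the five arms
# described above. This is not the Qualtrics randomiser itself; equal
# allocation probabilities are an assumption here.
ARMS = [
    "A1_missing_context_label",
    "A2_missing_context_label_red_warning",
    "A_control_no_label",
    "B_partly_false_label",
    "B_control_no_label",
]

def assign_arm(rng: random.Random) -> str:
    """Assign one participant to an arm with equal probability."""
    return rng.choice(ARMS)

if __name__ == "__main__":
    rng = random.Random(2022)  # seed chosen arbitrarily for reproducibility
    assignments = [assign_arm(rng) for _ in range(4500)]  # planned sample size
    # Quick check that arms are roughly balanced (~900 each).
    for arm in ARMS:
        print(arm, assignments.count(arm))
```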

Experiment Characteristics

Sample size: planned number of clusters
4500
Sample size: planned number of observations
4500
Sample size (or number of clusters) by treatment arms
900 per treatment arm (five arms of 900 each)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
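
The minimum detectable effect size is left blank above. Purely as an illustration of how one might be computed for a binary primary outcome with roughly 900 participants per arm, the sketch below uses statsmodels; the 50% baseline rate, 80% power, and 5% two-sided significance level are assumptions, not values from this registration.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative only: the registration leaves the MDE blank. Assumed inputs:
# a binary primary outcome (e.g. judging the post to be true), a 50% baseline
# rate, 80% power, and a 5% two-sided test with ~900 participants per arm.
analysis = NormalIndPower()
effect_size = analysis.solve_power(nobs1=900, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="two-sided")

# Convert Cohen's h back to a detectable difference from the assumed 50% baseline.
baseline = 0.50
for diff in (x / 1000 for x in range(1, 200)):
    if abs(proportion_effectsize(baseline, baseline + diff)) >= effect_size:
        print(f"Cohen's h ≈ {effect_size:.3f}, "
              f"detectable difference ≈ {diff:.3f} from a {baseline:.0%} baseline")
        break
```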
IRB

Institutional Review Boards (IRBs)

IRB Name
Blavatnik School of Government, University of Oxford
IRB Approval Date
2022-07-25
IRB Approval Number
SSH/BSG_C1A-22-13

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials