Adaptive RCT on Misinformation

Last registered on March 03, 2025

Pre-Trial

Trial Information

General Information

Title
Adaptive RCT on Misinformation
RCT ID
AEARCTR-0015486
Initial registration date
March 03, 2025


First published
March 03, 2025, 9:38 AM EST


Locations

Primary Investigator

Affiliation
Ofcom

Other Primary Investigator(s)

PI Affiliation
Ofcom
PI Affiliation
Ofcom
PI Affiliation
Ofcom

Additional Trial Information

Status
In development
Start date
2025-03-03
End date
2025-03-23
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Randomised Controlled Trials (RCTs) are the gold standard for testing hypotheses about competing interventions. Adaptive RCTs, which use real-time data for decision-making, have gained popularity in industry and are now attracting attention in academia and regulation for their efficiency in experimental studies. This project applies an Adaptive RCT to explore interventions that enhance social media users' ability to identify misinformation and disinformation. Given the prevalence of algorithmic bias on social media, misinformation can spread rapidly. This experiment will uniquely bring together many behavioural interventions that have so far only been tested separately, including fact-checker and AI labelling, community notes, prompts, and media literacy/inoculation techniques. By reallocating resources to better-performing interventions in real time, this Adaptive RCT will provide robust evidence to identify the most effective strategies to combat misinformation.
External Link(s)

Registration Citation

Citation
Cibik, Ceren Bengu et al. 2025. "Adaptive RCT on Misinformation." AEA RCT Registry. March 03. https://doi.org/10.1257/rct.15486-1.0
Experimental Details

Interventions

Intervention(s)
We used an existing classification of interventions against online misinformation (based on 81 scientific papers) to develop intervention ideas.^1 We then prioritised the ideas based on feasibility, relevance, and potential impact. We will test the effect of AI and fact-checker labelling, community notes, prompts, and media literacy/inoculation techniques (together with some of their combinations, as described below) on correctly identifying true information and misinformation.

AI labelling will add a label to the social media post stating "AI has flagged this information as fake".

Fact-checker labelling will add a label to the social media post stating "Independent fact-checkers have flagged this information as fake".

Community notes will add a note under the social media post stating "Our community members added context. Members of our community have flagged that this information might be fake.", with an additional post-specific explanation of why it was flagged.

The inoculation quiz will take place at the beginning of the experiment: participants will be shown 3 different social media posts one by one and asked whether each is true or false. After each answer, they will receive immediate feedback with post-specific tips on how to detect misinformation.

Prompts will consist of a quick summary of what participants were taught in the inoculation quiz, shown after 50% of the feed. (By definition, prompts will be tested only alongside the inoculation quiz.)


^1 Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U. K., Lewandowsky, S., Hertwig, R., ... & Wineburg, S. (2024). Toolbox of individual-level interventions against online misinformation. Nature Human Behaviour, 1-9.
Intervention (Hidden)
The full list of trial arms:

Treatment Group 1: Control
Treatment Group 2: Inoculation quiz
Treatment Group 3: Inoculation quiz + Reminder prompt
Treatment Group 4: Fact-checkers label
Treatment Group 5: AI label
Treatment Group 6: Community notes
Treatment Group 7: Inoculation quiz + AI label
Treatment Group 8: Inoculation quiz + Fact-checkers label
Treatment Group 9: Inoculation quiz + Community notes
Treatment Group 10: Inoculation quiz + AI label + Reminder prompt
Treatment Group 11: Inoculation quiz + Fact-checkers label + Reminder prompt
Treatment Group 12: Inoculation quiz + Community notes + Reminder prompt

Intervention Start Date
2025-03-03
Intervention End Date
2025-03-23

Primary Outcomes

Primary Outcomes (end points)
The overall ability to discern true information from misinformation: overall accuracy

See attached Technical Annex for formal definition.
Primary Outcomes (explanation)
For each post, the participant is asked whether the post contains misinformation (a binary-choice question) and for their level of confidence in their answer (between 50 and 100). If the answer to the binary-choice question is correct, the score for that post equals the level of confidence expressed; if the answer is incorrect, the score equals 100 minus the level of confidence expressed.
The total accuracy score for a participant will be calculated as the arithmetic mean of the scores across all posts (15) shown.

See attached Technical Annex for formal definitions.
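The scoring rule above can be sketched as follows (a minimal illustration only; the formal definition is in the attached Technical Annex, and the function names here are hypothetical):

```python
def post_score(answer_correct: bool, confidence: int) -> int:
    """Score one post: the confidence if correct, 100 minus it if not."""
    assert 50 <= confidence <= 100
    return confidence if answer_correct else 100 - confidence

def total_accuracy(responses) -> float:
    """Arithmetic mean of post scores over all (here 15) posts shown.

    responses: list of (answer_correct, confidence) pairs, one per post.
    """
    scores = [post_score(ok, conf) for ok, conf in responses]
    return sum(scores) / len(scores)

# Example: two correct answers (confidence 80 and 90) and one
# incorrect answer (confidence 70, scored as 100 - 70 = 30).
print(total_accuracy([(True, 80), (True, 90), (False, 70)]))  # → 66.67 approx.
```

Note that an incorrect answer held with high confidence is penalised more heavily than one held with low confidence, which is what makes this a measure of discernment rather than of guessing.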

Secondary Outcomes

Secondary Outcomes (end points)
2a- The overall accuracy for true posts
2b- The overall accuracy for false posts
3- Overall correct identification of posts (ignoring confidence levels)
4- The overall accuracy for individual posts containing misinformation by subgroups of people
5- Correct identification for individual posts containing misinformation

Secondary Outcomes (explanation)
2- The overall accuracy (as described above in the primary outcome measure) for a) true posts only, and b) false posts only
3- The ability to correctly identify true information from misinformation (ignoring the confidence levels)
4- The overall accuracy for individual posts containing misinformation by subgroups (relevant subgroups are defined in the Technical Annex)
5- The overall ability to correctly identify true information from misinformation (ignoring confidence levels) for individual posts containing misinformation by subgroups (relevant subgroups are defined in the Technical Annex).

See attached Technical Annex for formal definitions.

Experimental Design

Experimental Design
We will present each participant with a set of social media posts, each containing different pieces of information (news, commercial advertisements, etc.). There will be fifteen posts in total, presented in a randomised order. Two thirds of the posts will contain correct information, whereas one third will contain misinformation*. Participants will be randomised** into one of the twelve trial arms involving different interventions (or combinations thereof, as described above). For each post presented, the participant will be asked to assess whether the post contains misinformation and how confident they are in their response. After participants complete their assessment of the fifteen social media posts, there will be a post-survey questionnaire covering media consumption habits, social media attitudes and use, general conspiracy theory beliefs, and demographic information (age, gender, education, ethnicity, non-native English speaker status and political leaning). We will also employ 2 attention checks throughout the experiment. The experiment will conclude by debunking the misinformation that participants saw during the experiment.

*Ratio 2:1 for true vs mis/disinformation posts. This ratio was chosen to a) ensure there are enough mis/disinformation posts to obtain primary outcome data without making the experiment too long, and b) avoid the feed being completely unrealistic, or easy to guess, as a 1:1 ratio would be.
**Details about the randomisation can be found below in the Randomisation Section and also in the Technical Annex.
Experimental Design Details
Randomization Method
The randomisation will be carried out in 5* rounds. In the first 4 rounds (Stage 1), we will test all 12 arms. At the end of each round in Stage 1, we will update assignment probabilities for randomisation in the next round via an algorithm designed for adaptive randomisation (see below for details about the algorithm).
At the end of the fourth round, we will select the best treatment arm (denoted b^*) based on the data collected up to that point.
In the 5th round (Stage 2), we will only test the control arm and the best arm b^* from Stage 1 with equal probability of assignment.
The number of participants for each round is as follows:
Round 1: 1050 participants
Round 2: 1050 participants
Round 3: 1050 participants
Round 4: 1050 participants
Round 5: 1800 participants
Each round will be carried out on a different day, roughly at the same time of the day to ensure comparability of experimental conditions across rounds.
*This staged structure is specific to the Adaptive RCT method. More details on how the interim probabilities will be calculated are provided in the Technical Annex (attached).
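The registered update algorithm is specified in the Technical Annex; purely as an illustration, a Thompson-sampling-style interim update with a probability floor (the simulations reported below use floor=0.01) could look like the sketch below. The normal posterior approximation and the outcome standard deviation of 25 are assumptions for this sketch, not registered values.

```python
import numpy as np

def update_assignment_probs(mean_scores, n_obs, floor=0.01,
                            n_draws=10_000, rng=None):
    """Illustrative Thompson-style assignment probabilities with a floor.

    mean_scores: observed mean accuracy score per arm (0-100 scale).
    n_obs: number of participants observed per arm so far.
    Returns assignment probabilities for the next round.
    """
    if rng is None:
        rng = np.random.default_rng()
    mean_scores = np.asarray(mean_scores, dtype=float)
    n_obs = np.asarray(n_obs, dtype=float)
    # Approximate each arm's posterior mean as normal; 25.0 is an
    # assumed outcome standard deviation, not a registered parameter.
    draws = rng.normal(mean_scores, 25.0 / np.sqrt(n_obs),
                       size=(n_draws, len(mean_scores)))
    # Probability each arm is best = share of draws in which it wins.
    probs = np.bincount(draws.argmax(axis=1),
                        minlength=len(mean_scores)) / n_draws
    # Keep every arm above the floor, then renormalise.
    probs = np.maximum(probs, floor)
    return probs / probs.sum()

# Example: after a round, arm 12 looks best; it gets most of the next
# round's assignments while the other arms keep a floor probability.
probs = update_assignment_probs([60] * 11 + [70], [88] * 12,
                                rng=np.random.default_rng(0))
print(probs.round(3))
```

The floor guarantees that no arm is abandoned entirely during Stage 1, so data keeps accruing on every arm until the best arm b^* is selected for Stage 2.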
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
6000 individuals
Sample size: planned number of observations
6000
Sample size (or number of clusters) by treatment arms
The number of participants for each round is as follows (as mentioned in the randomisation section):
Round 1: 1050 participants
Round 2: 1050 participants
Round 3: 1050 participants
Round 4: 1050 participants
Round 5: 1800 participants

More details on how the interim probabilities will be calculated for each round, determining how many observations are allocated to each treatment, are provided in the Technical Annex (attached).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The power calculations for Adaptive RCTs are not done in the traditional way used for standard RCTs. Our simulations with 12 arms, N=6000 and floor=0.01 give: average policy reward = 0.7868, average regret = 0.0032, correct identification = 90.9%, and average in-sample regret = 0.0448. Our simulations with 12 arms, N=4200 and floor=0.01 give: average policy reward = 0.7849, average regret = 0.0051, correct identification = 85.8%, and average in-sample regret = 0.0448.
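As an illustration of how metrics such as average regret and the correct-identification rate are produced by simulation, a minimal Monte Carlo sketch is shown below. The arm means, outcome noise, and sample split here are hypothetical; the registered simulation design and parameters are in the Technical Annex.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-arm mean rewards on a [0, 1] scale (12 arms);
# the values used in the registered simulations are not these.
true_means = np.array([0.70] + [0.72 + 0.01 * i for i in range(11)])
best_arm = int(np.argmax(true_means))

n_sims = 2_000
n_per_arm = 350  # 12 arms x 350 = 4200 Stage 1 participants (illustrative)
correct = 0
regrets = []
for _ in range(n_sims):
    # Noisy estimate of each arm's mean, then pick the empirical best.
    est = rng.normal(true_means, 0.2 / np.sqrt(n_per_arm))
    pick = int(np.argmax(est))
    correct += (pick == best_arm)
    # Regret: gap between the true best arm and the arm selected.
    regrets.append(true_means[best_arm] - true_means[pick])

print(f"Correct identification: {correct / n_sims:.1%}")
print(f"Average regret: {np.mean(regrets):.4f}")
```

Correct identification is the share of simulated trials in which the truly best arm is selected as b^*, and average regret is the expected reward gap from choosing a suboptimal arm; both improve as N grows, which mirrors the reported N=4200 vs N=6000 comparison.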
IRB

Institutional Review Boards (IRBs)

IRB Name
Ofcom Data Protection and Impact Assessment
IRB Approval Date
2025-01-30
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials