Prebunking for Democracy: Experimental Evidence on Modes of Misinformation Resilience Training

Last registered on February 19, 2026

Trial Information

General Information

Title
Prebunking for Democracy: Experimental Evidence on Modes of Misinformation Resilience Training
RCT ID
AEARCTR-0017884
Initial registration date
February 16, 2026

First published
February 19, 2026, 7:29 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Glasgow

Other Primary Investigator(s)

PI Affiliation
University of Glasgow
PI Affiliation
London School of Economics
PI Affiliation
University of Glasgow
PI Affiliation
Middlesex University London

Additional Trial Information

Status
In development
Start date
2026-02-17
End date
2026-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In an age in which information is abundant but verification is scarce, democratic resilience increasingly depends on citizens’ ability to evaluate online content critically. Strengthening this capacity requires interventions that build misinformation resilience - the capacity to be aware of, recognise, assess, resist, and counter false or misleading information online - without constraining free expression. This study examines how digital literacy training affects citizens’ capacity to identify false social media content and their willingness to share it, and how these effects vary across delivery modes.
The intervention is designed and delivered by a UK civil society organisation specialising in media literacy. We implement a two-stage randomised experiment. First, participants are randomly assigned to one of three delivery modes: an in-person workshop, an interactive live webinar, or a passive video (short or long format). Second, within each mode, participants are randomly assigned to treatment or control, yielding seven experimental arms: four live arms (in-person treatment, in-person control, webinar treatment, webinar control) and three passive-video arms (short-video treatment, long-video treatment, and a shared control that views the short video after outcomes are collected). All formats cover comparable core content but differ in interactivity, cost, and scalability.
This design provides causal evidence on which training formats most effectively - and feasibly - strengthen citizens’ misinformation resilience. We measure this construct across four dimensions: awareness of misinformation risks, accuracy in discerning false content, resistance to sharing misinformation, and confidence in helping others verify information. We use the term misinformation as an umbrella category encompassing both unintentional misinformation and intentional disinformation, recognising that users typically encounter false content without knowing the intent behind it.
External Link(s)

Registration Citation

Citation
Foos, Florian et al. 2026. "Prebunking for Democracy: Experimental Evidence on Modes of Misinformation Resilience Training." AEA RCT Registry. February 19. https://doi.org/10.1257/rct.17884-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
This study evaluates the effectiveness of digital media literacy training delivered through different formats in strengthening citizens’ resilience to misinformation. Participants are randomly assigned to one of three delivery modes: (1) an in-person workshop, (2) an interactive live webinar, or (3) a video-based format (short or long).

Within each delivery mode, participants are further randomly assigned to treatment or control. All formats cover comparable core content designed to improve awareness of misinformation, the ability to assess the credibility of online content, resistance to sharing misleading content, and confidence in helping others verify information.

The key difference between treatment and control groups lies in the timing of outcome measurement. Treatment participants complete outcome measures after receiving the training, while control participants complete outcome measures before receiving the training. This design allows us to estimate the causal effect of digital media literacy training across delivery modes while comparing effectiveness and scalability.
Intervention Start Date
2026-02-17
Intervention End Date
2026-10-31

Primary Outcomes

Primary Outcomes (end points)
The main outcome variables are as follows:

(i) Problem awareness: Awareness of the prevalence and risks of misinformation
(ii) Truth discernment: Ability to accurately identify false content
(iii) Behavioural resistance: Reduced willingness to engage with or share misinformation
(iv) Social empowerment / self-efficacy: Confidence in helping others verify information

Placebo outcomes: (i) truth discernment of factually correct content; and (ii) sharing and engagement intentions for factually correct content.
Primary Outcomes (explanation)
Primary Outcomes
These measures assess understanding of how mis- and disinformation operate and the risks they pose.

(i) Awareness about the threats of mis/disinformation
Awareness is measured using the following items:
“To what extent do you agree or disagree with the following statements? Please choose the option that best reflects your view.”
Scale: 1) Strongly disagree to 5) Strongly agree. The statements are:
False or misleading information is common on social media.
Misleading information online can have serious consequences for society.

(ii) Truth discernment of fake content.
Respondents will be shown three fake posts from social media. The posts contain false claims about the secrecy of the ballot, the reality of free speech in democracies, and the alleged cancellation of a media licence. Respondents are then asked to assess the truthfulness of each post.
“Do you think the content of this post is fact or fake?”
Scale: 0 = Certainly fake … 10 = Certainly fact

(iii) Sharing and engagement intentions for fake content
After each post, respondents are asked:
“If you saw the above on social media, how likely would you be to share this?”
Scale: 0 = Not likely at all … 10 = Extremely likely

(iv) Empowering people to teach others how to verify news.
“How confident would you feel helping a friend or family member check whether an online news story is true or false?”
Scale: 0 = Not at all confident … 10 = Extremely confident
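
The registration does not publish a coding scheme for these items. The following is a purely hypothetical scoring sketch, assuming the three fake-post ratings are reverse-coded so that higher values indicate better discernment, and likewise for sharing intentions; the function names and the averaging rule are our own illustration, not the study’s method.

```python
# Hypothetical scoring sketch; the registration does not publish its
# coding scheme. Assumes the 0-10 "fact or fake" ratings for the three
# fake posts, where low ratings (towards "certainly fake") are correct.

def truth_discernment(fake_post_ratings: list[float]) -> float:
    """Mean reverse-coded rating: 10 = all fake posts judged fake."""
    return sum(10 - r for r in fake_post_ratings) / len(fake_post_ratings)

def sharing_resistance(sharing_intentions: list[float]) -> float:
    """Mean reverse-coded sharing intention for the fake posts."""
    return sum(10 - s for s in sharing_intentions) / len(sharing_intentions)

# Example respondent: rates the three fake posts 2, 4 and 1, and
# reports sharing intentions of 0, 1 and 0.
print(round(truth_discernment([2, 4, 1]), 2))   # 7.67
print(round(sharing_resistance([0, 1, 0]), 2))  # 9.67
```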

Secondary Outcomes

Secondary Outcomes (end points)
Exploratory Outcomes

(i) Truth discernment of factually correct content
Respondents will be shown one factually correct post, containing information from the BBC about health, before assessing its truthfulness.
“Do you think the content of this post is fact or fake?”
Scale: 0 = Certainly fake … 10 = Certainly fact

(ii) Sharing and engagement intentions for factually correct content.
After the post, respondents are asked:
“If you saw the above on social media, how likely would you be to share this?”
Scale: 0 = Not likely at all … 10 = Extremely likely

(iii) Willingness to deepen learning
“How interested are you in learning more about how to spot mis/disinformation online?”
Scale: 0 = Not at all interested … 10 = Extremely interested

Secondary outcomes (open-ended and pilot items)

(i) Resilience strategies (open-ended): “How can we support others to be resilient to mis- and disinformation?” (open text).
(ii) Survey feedback (open-ended): “What do you think of this survey?” (open text).
(iii) Pilot-only perceived usefulness: Extent to which the content helped respondents feel able to identify mis/disinformation online (0 = Not at all; 10 = A great deal).
(iv) Pilot-only improvement suggestions: “Was there anything you would have liked us to include or explain more clearly in the content?” (open text).
Secondary Outcomes (explanation)
Truth discernment of factually correct content and sharing intentions for factually correct content are measured on 0–10 scales and analysed separately.

Willingness to deepen learning is measured on a 0–10 scale indicating interest in learning more about spotting misinformation.

Open-ended responses (resilience strategies and survey feedback) will be analysed descriptively and may be used for qualitative thematic exploration.

Pilot-only measures (perceived usefulness and improvement suggestions) are exploratory and will not be included in primary hypothesis testing.

Experimental Design

Experimental Design
Our study is a combined field and online experiment comprising three parts: a baseline (pre-treatment) questionnaire, an intervention (or treatment), and an outcomes (endline) questionnaire. Participants are recruited into the study using a combination of online ads that we run on social media platforms and promotional material distributed by local partners in the locations where we intend to conduct the study.

Baseline questionnaire. Directly after enrolling in the online survey, participants will answer questions that capture demographics and online behaviour (age, postcode area, gender, education, employment, ethnicity, household income, voting intention, social media use and platforms, and online news engagement) pre-treatment. We will also measure baseline attitudes towards misinformation (concern about misinformation, perceived importance of checking news sources, and confidence in identifying and interpreting misleading online content).

Experiment. We conduct a mixed-mode randomised controlled trial (online and in-person) to test which digital media literacy training mode most effectively strengthens citizens’ resilience to misinformation, while minimising time and costs for participants and implementing organisations. The digital media literacy training evaluated through our experiment will be delivered by our partner organisation. We will have seven experimental arms: four live arms (in-person treatment, in-person control, webinar treatment, webinar control) and three passive-video arms (short-video treatment, long-video treatment, and a shared control that watches the long video after completing the questionnaire).

All participants are invited to attend their assigned digital training session, but the key difference between treatment and control is when outcomes are measured: control participants complete the outcome questionnaire before attending the digital media literacy training session, whereas treatment participants complete it after attending. This design allows us to estimate causal effects of the training while comparing effectiveness, scalability, and effort across delivery modes. Because all participants are invited to attend, the design also accounts for participants’ willingness to attend a training session, guarding against differential attrition between experimental arms within modes.
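
The analysis plan is not publicly posted. Purely as an illustrative sketch - with all notation our own rather than the registered specification - the mode-specific effect could be estimated within each delivery mode by comparing participants whose outcomes were measured after versus before the training:

```latex
% Illustrative only; the registered specification may differ.
% Y_i : outcome for participant i in delivery mode m
% T_i : 1 if outcomes were measured after the training, 0 if before
% X_i : baseline covariates; \tau_m is the mode-specific effect
Y_i = \alpha_m + \tau_m T_i + X_i'\gamma + \varepsilon_i
```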

Outcomes questionnaire: After (treatment groups) or before (control groups) respondents attend the digital training session, they will answer questions measuring (i) awareness of the prevalence and risks of misinformation; (ii) perceived accuracy/credibility of online content; (iii) sharing/engagement intentions, as well as (iv) confidence in teaching others how to verify news. As an attention check, respondents will answer a question about the training content.

The wording and coding of all variables are available in the attached questionnaire.
Experimental Design Details
Not available
Randomization Method
Randomization is done in office by a computer using Qualtrics’ automated randomization module.
Randomization Unit
Individual-level randomization.

After providing consent and completing the baseline questionnaire, participants are assigned to a delivery mode and to treatment or control using a two-stage randomisation procedure implemented in Qualtrics. In the first stage, participants are randomly assigned to one of three delivery modes: an in-person workshop, a live webinar, or a passive video. In the second stage, participants are randomly assigned within each mode to treatment or control. This yields seven experimental arms: four live arms (in-person treatment, in-person control, webinar treatment, webinar control) and three video arms (short-video treatment, long-video treatment, and a shared video control). Randomisation is fully automated in Qualtrics and will be assessed using baseline balance checks across arms, applying standard tests of differences in means and proportions for key pre-treatment variables.
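
For concreteness, the logic of the two-stage assignment can be sketched as follows. This is an illustration only: the actual randomisation runs inside Qualtrics’ automated module, and the equal split across the three video arms shown here is an assumption.

```python
# Illustrative sketch only: the actual assignment runs in Qualtrics'
# automated randomization module. Arm labels follow the registration;
# the equal split across the three video arms is an assumption.
import random

MODES = ["in-person", "webinar", "video"]

def assign_arm(participant_id: str, seed: str = "demo-seed") -> str:
    # Deterministic per-participant stream for reproducibility.
    rng = random.Random(f"{seed}:{participant_id}")
    # Stage 1: assign a delivery mode with probability 1/3 each.
    mode = rng.choice(MODES)
    # Stage 2: assign treatment vs control within the mode.
    if mode == "video":
        return rng.choice(["short-video treatment",
                           "long-video treatment",
                           "shared video control"])
    return f"{mode} {rng.choice(['treatment', 'control'])}"

# Example: assign the first five enrolled participants.
for pid in ["p001", "p002", "p003", "p004", "p005"]:
    print(pid, assign_arm(pid))
```
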
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
In total, we will recruit 6,625 participants (1,000*4 = 4,000 for the live modes; 875*3 = 2,625 for the passive modes), or approximately 950 participants per location.
Sample size (or number of clusters) by treatment arms
A priori, we aim to achieve at least 950 sign-ups in each locality and assign subjects with a probability of 1/3 to the in-person workshop, with 1/3 to the webinar, and with 1/3 to the video group. In each locality, we will run at least two live webinars and two in-person training sessions. We assume an attendance rate of approximately 30% for both live formats. For each live delivery mode, we expect around 50 participants in the treatment group and 50 in the control group. This implies that each session (in-person or webinar) will include up to 50 participants.

Where capacity allows - in coordination with our delivery partner and local hosts - we aim to offer additional in-person sessions to reduce group size and ensure effective interaction. Treatment and control participants will attend the same live sessions. The key difference lies in the timing of outcome measurement: treatment participants will complete the outcome survey immediately after the session, whereas control participants will complete the outcome survey before receiving the training.

For the passive videos, we assume an attrition rate of 20% and target an MDE of 0.15 SDs; this leads to 700*1.25 = 875 participants per passive-video arm. For the live modes, we assume attrition of 70% and an MDE of 0.25 SDs; this leads to 300*3.33 = 1,000 participants per arm.

In total, we will recruit 6,625 participants (1,000*4 = 4,000 for the live modes; 875*3 = 2,625 for the passive modes), or approximately 950 participants per location.
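
The recruitment arithmetic above can be reproduced directly. In this minimal sketch, the per-arm completer targets (700 passive, 300 live) are taken from the registration and only the attrition inflation is computed:

```python
# Minimal sketch of the recruitment arithmetic reported above. The
# per-arm completer targets (700 passive, 300 live) are taken from the
# registration; only the attrition inflation 1 / (1 - attrition) is
# computed here.

def recruits_per_arm(completers: int, attrition: float) -> int:
    """Sign-ups needed so that `completers` remain after attrition."""
    return round(completers / (1 - attrition))

video_arm = recruits_per_arm(700, 0.20)  # 875 per passive-video arm
live_arm = recruits_per_arm(300, 0.70)   # 1,000 per live arm
total = 3 * video_arm + 4 * live_arm     # 2,625 + 4,000 = 6,625
print(video_arm, live_arm, total)
```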

We will update these calculations based on pilot attendance and attrition data and will file an amendment to the PAP if attendance and attrition rates diverge markedly from our assumptions. We will also drop the long-video treatment and shift subjects to the short video in future areas if data from the two pilot localities show that the long-video arm incurs much larger attrition than the short-video arm and the video control.

Stopping rule

The survey link will be deactivated once we reach the required number of respondents (in total and per treatment condition, as indicated in the power calculations above).

Recruitment to the in-person and webinar treatment arms will stop at least two days before the training.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For the passive videos, we assume an attrition rate of 20% and target an MDE of 0.15 SDs; this leads to 700*1.25 = 875 participants per passive-video arm. For the live modes, we assume attrition of 70% and an MDE of 0.25 SDs; this leads to 300*3.33 = 1,000 participants per arm.
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.