Testing Digital Literacy Interventions for Improving the Ability to Spot Misinformation Online: Evidence from a Large-Scale, Multi-Arm RCT in India

Last registered on January 03, 2023

Pre-Trial

Trial Information

General Information

Title
Testing Digital Literacy Interventions for Improving the Ability to Spot Misinformation Online: Evidence from a Large-Scale, Multi-Arm RCT in India
RCT ID
AEARCTR-0010652
Initial registration date
December 16, 2022

The initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
January 03, 2023, 4:33 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Not available to the public; access can be requested through the Registry.

Primary Investigator

Affiliation
Freeman Spogli Institute for International Studies, Stanford University

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2022-11-07
End date
2024-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
While the digital revolution has greatly enhanced access to information for millions, it has also allowed unscrupulous state and non-state actors to spread misinformation online. Youth are particularly vulnerable to online misinformation, or ‘fake news’: even though they are digital natives, recent research shows that a majority of them cannot distinguish between “sponsored content” and real news. This lack of ability to evaluate information and news online (digital literacy skills) puts youth at risk of being misled, with potentially adverse impacts on societies worldwide. Previous digital literacy interventions have mostly made students skeptical of all news rather than turning them into discerning news consumers. Given this global threat, and the limitations of previous efforts, we explore strategies for improving digital literacy skills. Specifically, we test the effectiveness of several programs designed to improve digital literacy by conducting a large-scale randomized controlled trial with approximately 5,000 low- and middle-income youth in 400 classrooms in India. The study thus hopes to shed light on how to help youth navigate information in the digital age and, in the process, strengthen society by ensuring that future citizens can identify trustworthy information about social and political issues.
External Link(s)

Registration Citation

Citation
Loyalka, Prashant. 2023. "Testing Digital Literacy Interventions for Improving the Ability to Spot Misinformation Online: Evidence from a Large-Scale, Multi-Arm RCT in India." AEA RCT Registry. January 03. https://doi.org/10.1257/rct.10652-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-11-28
Intervention End Date
2022-12-06

Primary Outcomes

Primary Outcomes (end points)
Data on the primary outcome measure (digital literacy score, z-scored using the mean and standard deviation of the control group at endline) will be collected at endline.
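The standardization described above can be sketched as follows (the function name and the choice of population standard deviation, ddof=0, are our assumptions; the registration does not specify these details):

```python
import numpy as np

def z_score_outcome(scores, is_control):
    """Standardize endline digital literacy scores using the mean and
    standard deviation of the control group at endline.

    Sketch only: ddof=0 (population SD) is assumed, not stated in the
    registration."""
    scores = np.asarray(scores, dtype=float)
    ctrl = scores[np.asarray(is_control, dtype=bool)]
    return (scores - ctrl.mean()) / ctrl.std()
```

By construction, the control group's standardized scores have mean 0 and standard deviation 1, so treatment-group means are read directly in control-group SD units.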
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
For the purposes of this study, we focus on approximately 5,000 students in 400 classrooms. Classrooms will be randomly assigned to one of four treatment conditions in a 2x2 factorial design in which a skills-based training intervention is cross-cut with a values-based intervention.

The experiment takes place in four stages. First, students complete a baseline survey and take a digital literacy test with scenario-based items that assess their ability to evaluate the veracity of online information. Second, after the baseline survey, the 400 classrooms are randomized with equal probability to one of four conditions: (a) a control group; (b) the skills-based intervention; (c) the values-based intervention; (d) both the skills-based and values-based interventions. Third, the assigned interventions are delivered in treated classrooms. Fourth, approximately 10 days after the interventions are completed, students take an endline survey and, as at baseline, a digital literacy test with scenario-based items.
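The classroom-level assignment, consistent with the stratification described under the power analysis below (strata of 8 classrooms by baseline score, 2 classrooms per arm per stratum), can be sketched like this (function and arm names are ours; the authors' exact procedure is not published):

```python
import random

def stratified_assignment(classroom_scores,
                          arms=("control", "skills", "values", "both"),
                          seed=0):
    """Sort classrooms by average baseline digital literacy score, form
    strata of 2 * len(arms) = 8 consecutive classrooms, and randomize
    2 classrooms in each stratum to each arm. Sketch only."""
    rng = random.Random(seed)
    order = sorted(classroom_scores, key=classroom_scores.get)
    stratum_size = 2 * len(arms)  # 8 classrooms per stratum
    assignment = {}
    for i in range(0, len(order), stratum_size):
        stratum = order[i:i + stratum_size]
        labels = list(arms) * 2   # 2 slots per arm
        rng.shuffle(labels)
        for classroom, arm in zip(stratum, labels):
            assignment[classroom] = arm
    return assignment
```

With 400 classrooms this yields 50 strata and exactly 100 classrooms per arm, balancing baseline digital literacy across arms by design.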
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer.
Randomization Unit
classrooms
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
400
Sample size: planned number of observations
5000
Sample size (or number of clusters) by treatment arms
100 per arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For the purposes of the power analysis, we set power at 0.8, alpha at 0.05, and R-squared conservatively at 0.4 (capturing the relationship between the outcome measure and baseline controls including the digital literacy pre-test score). After stratifying classrooms into groups of 8 based on their average digital literacy score at baseline (and randomizing 2 classes to each treatment arm within each stratum), we estimate that the intraclass correlation coefficient in the digital literacy test is 0.000. For each pairwise treatment comparison, the minimum detectable effect size (MDES) is therefore approximately 0.11 SDs.
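The standard normal-approximation MDES formula for a two-arm comparison with clustered assignment shows how these inputs enter (a sketch; the exact degrees-of-freedom and any multiple-comparison adjustments the authors used are not stated, so the result varies with those choices):

```python
from statistics import NormalDist

def mdes(n_clusters_per_arm, cluster_size, icc=0.0, r2=0.0,
         alpha=0.05, power=0.8):
    """Minimum detectable effect size (in SD units) for a two-arm
    comparison, normal approximation. Sketch only; assumed inputs."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)   # ~2.80 at defaults
    n_per_arm = n_clusters_per_arm * cluster_size
    deff = 1 + (cluster_size - 1) * icc        # design effect
    return multiplier * ((1 - r2) * deff * 2 / n_per_arm) ** 0.5
```

With 100 clusters of roughly 12.5 students per arm, ICC of 0, and R-squared of 0.4, this formula returns about 0.087 SDs; the registered figure of 0.11 SDs presumably incorporates additional conservatism (for example, degrees-of-freedom or multiple pairwise-comparison adjustments) not detailed in the registration.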
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford University IRB
IRB Approval Date
2022-10-25
IRB Approval Number
51527
Analysis Plan

Not available to the public; access can be requested through the Registry.