The diffusion of online misinformation in India

Last registered on April 28, 2022


Trial Information

General Information

The diffusion of online misinformation in India
Initial registration date
April 24, 2022

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 28, 2022, 6:06 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

UC Berkeley

Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
JPAL CVI Grant GR-1192, Weiss Fund Berkeley’s Agreement No. 048745
Prior work
This trial does not extend or rely on any prior RCTs.
We conduct a set of lab-in-field experiments in India to understand how misinformation can spread through peer-to-peer messaging. We examine how sharers make sharing choices based on a story's (perceived) accuracy and the costs and benefits of sharing it, and how receivers form an opinion about the story based on its content and their assessment of the sharer's discernment and motivations.

The experiments also provide estimates of Indian citizens' ability to discern misinformation on various topics (health, politics and identity, finance), and of the types of stories they prefer to share. (N = 2,000 participants expected)
External Link(s)

Registration Citation

Narang, Jimmy. 2022. "The diffusion of online misinformation in India." AEA RCT Registry. April 28.
Sponsors & Partners


Experimental Details


(No intervention)
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
- Beliefs in the accuracy of various news stories / WhatsApp forwards, elicited as one of: percentages, Likert scale, binary
- Decisions to share these stories with fellow participants (binary)
- Beliefs about discernment and motivation: participants' beliefs about how good their partners are at guessing whether a story is true, or how likely they are to share a specific type of story in real life
- Recall of the accuracy of stories 1-2 weeks after the experiment
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We recruit participants in pairs, such that members of a pair know each other in real life. One partner in each pair is (randomly) assigned to be the "sharer", and the other, the "receiver". Participation is sequential: receivers begin when sharers finish their part of the study.

During the study, the sharer scrolls through a series of news stories -- formatted as they would appear in a Facebook newsfeed or a WhatsApp group -- and chooses to share as many (or as few) of these stories with their partner as they like. (There is no special incentive to do so: sharers are asked to match their real-life behavior as closely as possible.) Additionally, sharers are asked to report how likely they think each story is to be true, whether they have seen it outside the lab, and a couple of other (randomized) questions.

Receivers also see a series of news stories on their screens, but do not have the option to share. Instead, they report how likely they think each story is true. Then, they are shown a "signal" about each story. The signal can be: (i) a prompt informing them whether the story was shared by their partner (the sharer); (ii) the sharer's belief in the story (but not the sharing choice); or (iii) a computer-generated clue about whether the story is true. In each case, receivers report their posteriors after seeing the signal.

Both sharers and receivers are also asked for beliefs _about_ their partner's discernment or reasoning. For instance, receivers may be asked to guess how likely the _sharer_ thought the story was true. Sharers may be asked to guess whether the receiver has seen the story before.

At the end of the study, participants are shown which stories were true and which were false, along with links to the actual news story or a third-party fact-checking website debunking the claim.

1-2 weeks later, participants answer an (optional) follow-up questionnaire that measures their recall of which stories were true (or not) and collects some feedback on their experience of the study.
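The receiver-side flow described above (elicit a prior, show one of three randomized signal types, elicit a posterior) can be sketched in a few lines. This is purely illustrative -- the signal labels, field names, and `receiver_round` function are hypothetical placeholders, not the study's actual instrument:

```python
import random

# The three signal types described in the design; names are illustrative.
SIGNAL_TYPES = [
    "partner_share",   # (i) whether the sharer shared the story
    "partner_belief",  # (ii) the sharer's stated belief (not the sharing choice)
    "computer_clue",   # (iii) a computer-generated clue about veracity
]

def receiver_round(stories, rng=random):
    """For each story: record a prior, draw one signal type, record a posterior."""
    rounds = []
    for story in stories:
        rounds.append({
            "story_id": story["id"],
            "prior": None,       # filled in by the survey instrument
            "signal": rng.choice(SIGNAL_TYPES),
            "posterior": None,   # filled in after the signal is shown
        })
    return rounds
```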
Experimental Design Details

Randomization Method
Randomization of participants into groups is done by a computer program. The sequence of news stories and questions is also randomized programmatically.
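A minimal sketch of the programmatic randomization described above -- assigning roles within a pair and shuffling the story sequence. The `randomize_pair` function and its arguments are hypothetical, shown only to illustrate the procedure:

```python
import random

def randomize_pair(pair, stories, seed=None):
    """Randomly assign sharer/receiver roles within a pair and shuffle story order."""
    rng = random.Random(seed)
    members = list(pair)
    rng.shuffle(members)            # random role assignment within the pair
    sharer, receiver = members
    order = stories[:]
    rng.shuffle(order)              # randomize the sequence of news stories
    return {"sharer": sharer, "receiver": receiver, "story_order": order}
```

Seeding the generator (as above) makes the assignment reproducible for replication purposes.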
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
40,000 (20 observations x 2000 participants)
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.

Institutional Review Boards (IRBs)

IRB Name
Institute for Financial Management and Research (IFMR) India
IRB Approval Date
IRB Approval Number
IRB Name
University of California, Berkeley
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials