Fake News and the Problem of Disregarding True Messages: Theory and Experimental Evidence

Last registered on June 23, 2025

Pre-Trial

Trial Information

General Information

Title
Fake News and the Problem of Disregarding True Messages: Theory and Experimental Evidence
RCT ID
AEARCTR-0010984
Initial registration date
February 27, 2023

First published
March 20, 2023, 5:21 PM EDT

Last updated
June 23, 2025, 1:32 PM EDT

Locations

Region

Primary Investigator

Affiliation
WHU - Otto Beisheim School of Management

Other Primary Investigator(s)

PI Affiliation
Victoria University of Wellington
PI Affiliation
WHU - Otto Beisheim School of Management

Additional Trial Information

Status
In development
Start date
2025-06-26
End date
2025-08-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Fake news is a global phenomenon that continues to shape our political reality in significant ways. We design an experiment accounting for three prominent features of fake news: (i) senders have superior knowledge compared to receivers, (ii) senders have incentives to push receivers in a specific direction, and receivers know this direction, and (iii) receivers’ decisions on whether to follow the sender’s messages depend on their prior beliefs and their ideological worldview.

In the role of receivers, participants are first asked whether they believe that unemployment and crime rates in a given U.S. state were higher under the Trump (T) or Obama (O) administration. They are then presented with a message from a sender, taken from a previous study, that suggests which answer is correct. Participants are informed that senders knew the correct answer, were allowed to send either T or O, and received a bonus if the receiver they were matched with in the previous study answered T (in treatment TRUMP) or O (in treatment OBAMA). Participants themselves receive a bonus for answering correctly.

This setup allows us to examine how receivers respond to messages, depending on the senders’ incentives, their own prior beliefs, and their political attitudes. In additional treatments, we either elicit participants’ estimates of how frequently senders lie or provide them with explicit information on this frequency. This enables us to investigate how the (perceived or actual) lying frequency influences the likelihood that participants accept or reject the senders’ messages.
External Link(s)

Registration Citation

Citation
Feess, Eberhard, Peter-J. Jost, and Anna Ressi. 2025. "Fake News and the Problem of Disregarding True Messages: Theory and Experimental Evidence." AEA RCT Registry. June 23. https://doi.org/10.1257/rct.10984-3.0
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2025-06-26
Intervention End Date
2025-08-31

Primary Outcomes

Primary Outcomes (end points)
We test the predictions S1-S2 for senders and R1-R4 for receivers as stated in the previous version of our pre-registration:

- Hypotheses for senders
S1. Incentives: The probability of lying is lower if the truth corresponds to the senders’ incentives than if the truth does not correspond to the senders’ incentives.
S2. Lying costs: The probability of lying increases as the senders’ lying costs decrease.

- Hypotheses for receivers
R1. Partisanship I: The probability that the initial answer is "Trump administration" ("Obama administration") is higher for Republicans (Democrats).
R2. Partisanship II: The probability that receivers change their initial answer from “Trump administration” to “Obama administration” (“Obama administration” to “Trump administration”) is higher for Democrats (Republicans).
R3. Senders’ incentives: The probability that receivers change their initial answer is lower if the message corresponds to the senders’ incentives than when it does not correspond to the senders’ incentives.
R4. Belief: The probability that receivers change their initial answer decreases with the strength of their belief in stage 1.


These predictions are derived from our (updated) game-theoretic model. Additionally, we now test our previously defined secondary outcome S3 as a main outcome, as it also follows from the model. We no longer test R5 from the previous version of our pre-registration because we dropped the associated treatment manipulation (receivers’ first-stage answers incentivized vs. not incentivized) from our new design.

Based on the data collected in the earlier version of the study, we additionally test the following hypotheses:

- H1: Across all treatments (A1/A2, B, C, D), we predict that the frequency of type-II errors exceeds that of type-I errors. This pattern is expected to hold regardless of whether the message is aligned or misaligned with the sender’s incentives, and regardless of whether the message is aligned or misaligned with the receiver’s partisanship.
- H2: The difference between type-II and type-I errors is greatest in treatments A1/A2, followed by treatment B and then treatment C.
- H3: The difference between type-II and type-I errors is smaller in treatment D than in treatments A1/A2.
- H4: In treatments A1, A2 and D, the probability that a receiver switches increases with their belief about the senders’ lying costs.

For calculating the frequencies of type-I and type-II errors, we derive each receiver’s subjectively optimal decision from their ex-ante belief that their answer is correct and, depending on the treatment and the receiver’s information, from the estimated or actual lying frequencies of the senders.
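
As an illustration, the following minimal sketch shows how such a subjectively optimal decision can be derived via Bayesian updating. It is not the registered analysis code; the variable names, the symmetric-bonus assumption, and the 1/2 decision threshold are our illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the registered analysis code).
# A message contradicts the receiver's initial answer; with equal bonuses for
# either correct answer, switching is subjectively optimal iff the posterior
# probability that the message is correct exceeds 1/2.

def posterior_message_true(prior_own_correct, p_truth, p_lie):
    """Posterior probability that a contradicting message is correct.

    prior_own_correct: receiver's stated belief that the initial answer is correct
    p_truth: probability that senders send this message when it is true
    p_lie:   probability that senders send this message when it is false
    """
    q = 1.0 - prior_own_correct            # prior that the message's answer is correct
    num = q * p_truth                      # message sent and true
    den = num + prior_own_correct * p_lie  # ... plus message sent and false
    return num / den if den > 0 else 0.0

def subjectively_optimal_switch(prior_own_correct, p_truth, p_lie):
    return posterior_message_true(prior_own_correct, p_truth, p_lie) > 0.5

# Example: prior of 0.6 in the own answer; senders report this message
# truthfully 90% of the time and falsely 55% of the time
# -> posterior ~0.52, so switching is subjectively optimal.
print(subjectively_optimal_switch(0.6, 0.9, 0.55))  # True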

We test these hypotheses using both non-parametric and parametric methods (regression analysis). As in the previous version of the pre-registration (and consistent with the predictions of our game-theoretic model), our analyses will focus on observations for which the sender’s message is not aligned with the receiver’s initial answer.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
There are two kinds of additional secondary outcomes for which our study is exploratory:

1. We examine whether receivers earn higher payoffs when switching from their initial answer, depending on whether the sender’s message is or is not in line with the sender’s incentives.
2. We investigate the influence of our control variables (for demographics, mainly age and gender) as well as of personal and political attitudes. We also explore the impact of our two question domains (crime and unemployment).

Because our previously collected data did not reveal statistically significant differences between treatments in which the receiver’s stage-1 answers were and were not incentivized, the new design omits this treatment variation. Therefore, we no longer include the related secondary outcomes listed in the earlier version of the pre-registration.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants in the role of receivers know that there are two main stages, of which only one will be randomly chosen to determine their bonus.
In stage 1 of the experiment, we randomly select two out of four questions. For half of the questions, the correct answer is “Obama”, and for the other half, the correct answer is “Trump”. Specifically, participants are asked whether they believe that the unemployment rate in a particular US state, and the crime rate in another US state, were lower during the Obama or the Trump administration. The two questions are presented in a random order. Participants know that, if stage 1 is selected for payment, exactly one of the two questions will be randomly chosen to determine their payoff and that they will receive a bonus if and only if their answer to this question is correct. For each question, we also ask them how certain they are that their belief is correct.
In stage 2, receivers can revise their previous answers based on a message from a sender; for these messages, we use data from a previous study. Hence, all participants in our new experiment are receivers. In this previous study, we informed senders about the correct answer. For each question, they could then send one of two possible messages, suggesting that the correct answer is “Obama” or “Trump”. Each of the two messages was sent to a receiver in the previous study. Senders in treatment TRUMP (OBAMA) knew that they would receive a bonus if and only if the receiver they were matched with answered “Trump” (“Obama”).
Receivers are informed that senders knew the correct answers but did not need to report truthfully. We also inform them about the senders’ incentives. After they receive the senders’ messages, we again ask them whether they believe that “Obama” or “Trump” is the correct answer. Receivers know that, if stage 2 is selected for payment, they will get a bonus if and only if their answer is correct.
To elicit the receivers’ responses to the senders’ messages, we use the strategy method: for each of the two questions, receivers choose between “Obama” and “Trump” for each of the two possible messages of the sender. Payoffs are calculated based on the message the sender actually sent, as the sketch below illustrates.
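
A stylized sketch of this payoff rule follows (an assumed implementation for one question, not the actual survey software):

```python
# Stylized strategy-method payoff rule (assumed implementation).
# For one question, the receiver pre-commits to a final answer for BOTH
# possible messages; only the branch matching the actually sent message counts.

plan = {"Obama": "Obama", "Trump": "Obama"}   # receiver's answer per possible message
actual_message = "Trump"                       # drawn from the earlier study's sender data
final_answer = plan[actual_message]            # -> "Obama"
correct_answer = "Obama"                       # known to the experimenters
bonus_paid = final_answer == correct_answer    # paid only if stage 2 is the paid stage
print(final_answer, bonus_paid)                # Obama True
```
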
In the post-experimental questionnaire, we elicit basic demographic information and personal attitudes, such as their political affiliation.
Overall, our first treatment manipulation varies the senders’ incentives:
o In treatment OBAMA, receivers know that the senders got a bonus if and only if the receiver they were matched with answered that “Obama” is the correct answer.
o In treatment TRUMP, receivers know that the senders got a bonus if and only if the receiver they were matched with answered that “Trump” is the correct answer.

We also have the following additional treatment variations:
o Treatment A1 & A2: We ask receivers the following incentivized questions before (treatment A1) or after (treatment A2) they provide their final answers:
• If (TRUMP / OBAMA) is the correct answer, how many out of 100 senders sent the true message (TRUMP / OBAMA)?
• If (OBAMA / TRUMP) is the correct answer, how many out of 100 senders sent the false message (TRUMP / OBAMA)?
For each of the two questions, they will get an additional bonus of £0.10 if their guess is less than 5 percentage points away from the true share (see the scoring sketch after this list).
o Treatment B: We give receivers the following information before they provide their final answers:
• If (TRUMP / OBAMA) is the correct answer, (95 / 96) out of 100 senders sent the true message (TRUMP / OBAMA).
• If (OBAMA / TRUMP) is the correct answer, (52 / 63) out of 100 senders sent the false message (TRUMP / OBAMA).
o Treatment C: We give receivers the following information before they provide their final answers:
• Out of 100 messages (TRUMP / OBAMA), (66 / 62) were true.
• Out of 100 messages (OBAMA / TRUMP), (89 / 90) were true.
o Treatment D: Before receivers provide their final answers, we ask them to imagine that they are in the role of the sender and ask them the following two questions:
• Suppose (TRUMP / OBAMA) is correct. Which message would you send?
• Suppose (OBAMA / TRUMP) is correct. Which message would you send?
As in treatment A2, we also elicit their estimated lying frequencies after they provide their final answers.
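
The scoring of these incentivized frequency estimates can be illustrated as follows (a minimal sketch under the stated £0.10 / 5-percentage-point rule; the function name is ours):

```python
# Illustrative scoring of the frequency estimates in treatments A1/A2 and D.
def estimate_bonus(guess, true_share, bonus=0.10):
    """Pay the bonus (in GBP) if the guess (out of 100 senders) is strictly
    less than 5 percentage points away from the true share."""
    return bonus if abs(guess - true_share) < 5 else 0.0

print(estimate_bonus(guess=60, true_share=63))  # 0.1  (within 5 points)
print(estimate_bonus(guess=50, true_share=63))  # 0.0  (too far off)
```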

We use a between-subject design: each subject participates in only one treatment. The experiment will be conducted online using the survey software Qualtrics, and participants will be recruited via Prolific. We will require participants to reside in the US and to have an approval rate of at least 95%. Participants who answer the majority of our comprehension questions incorrectly will not be allowed to continue and will hence not be included in our sample. Moreover, in the post-experimental questionnaire, participants are asked whether they looked up the answers to the estimation questions on the internet. Those who respond “Yes” will be excluded from the statistical analysis.
Experimental Design Details
Preamble

In a previous version of the paper, we conducted a sender-receiver experiment similar to the current design. The collected data revealed that participants made substantially more “type-II” than “type-I” errors; that is, they were more likely to disregard a true message than to follow a false one. Based on these findings, we identified a weakness in our earlier design: it does not enable us to uncover the underlying reasons for this pattern.

Therefore, we decided to first replicate the original findings while including an essential additional question, namely participants’ estimates of the lying frequency in different sender situations, to address this shortcoming (treatments A1/A2; see below). In addition, we introduce three new treatments to explore whether and how our findings are linked to (mis)perceptions of the senders’ lying frequency (treatment B) and/or an inaccurate mapping of lying frequencies onto expected payoffs from following or ignoring the messages (treatment C). Finally, in treatment D, we examine whether nudging participants to adopt the sender’s perspective reduces type-II errors.

As our primary interest lies in receivers’ behavior, we do not collect new data on the senders’ messages. Instead, we inform receivers about the sender behavior observed in the previous study. This also has the advantage of providing a consistent benchmark for evaluating and comparing receiver behavior across treatments.
Randomization Method
All randomization procedures during the study—whether assigning participants to the TRUMP or OBAMA treatments or randomizing the order of questions and answer options—are implemented by the software.
Randomization Unit
Individual level randomization
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We cluster on the participant level (1,200 participants planned).
Sample size: planned number of observations
We plan to collect 2,400 observations (usable for our statistical analysis) in total – 600 observations in each of our four treatments. To obtain our planned number of observations, we need to recruit 300 receivers for each treatment (150 in treatment TRUMP and 150 in treatment OBAMA), i.e., 1,200 participants in total. This is because our use of the strategy method for eliciting receivers’ beliefs implies that we obtain 4 observations per participant (two messages for each of two questions), i.e., 4,800 observations in total. Since our analyses will focus on those receivers for whom the initial answer is not in line with the message they receive, only half of the observations, i.e., 2,400, will actually be used.
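
These counts follow from simple arithmetic, as the following back-of-the-envelope check illustrates:

```python
# Back-of-the-envelope check of the planned observation counts.
receivers_per_treatment = 300        # 150 TRUMP + 150 OBAMA
treatments = 4                       # A1/A2, B, C, D
obs_per_receiver = 4                 # 2 questions x 2 possible messages (strategy method)

total_receivers = receivers_per_treatment * treatments   # 1,200
total_obs = total_receivers * obs_per_receiver           # 4,800
usable_obs = total_obs // 2  # analyses use only message-contradicts-answer cases
print(total_receivers, total_obs, usable_obs)            # 1200 4800 2400
```
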
Sample size (or number of clusters) by treatment arms
Because we elicit each participant’s decisions for both possible messages in both domains, we need to recruit a total of 1,200 participants – 300 in each of our four treatments A1/A2, B, C, and D (150 each in the OBAMA and TRUMP variants).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethical Review Board (WHU - Otto Beisheim School of Management)
IRB Approval Date
2025-06-23
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials