Fake News: An online experiment

Last registered on March 20, 2023

Pre-Trial

Trial Information

General Information

Title
Fake News: An online experiment
RCT ID
AEARCTR-0010984
Initial registration date
February 27, 2023

First published
March 20, 2023, 5:21 PM EDT

Locations

Region

Primary Investigator

Affiliation
WHU – Otto Beisheim School of Management

Other Primary Investigator(s)

PI Affiliation
Victoria University of Wellington
PI Affiliation
WHU – Otto Beisheim School of Management

Additional Trial Information

Status
In development
Start date
2023-03-01
End date
2023-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Fake news is a global phenomenon that has been decisively influencing our political reality. Two important features of fake news are that (i) the person who acts as the sender of messages has superior knowledge compared to the receivers who are susceptible to fake news, and (ii) the sender has incentives to push the receivers’ beliefs in specific directions. Moreover, the success of fake news is likely to depend on (i) the strength of the receivers’ initial beliefs, and (ii) whether the fake news is in line with the receivers’ (ideological) worldview. We develop an experiment that accounts for these features of fake news. Subjects in the role of receivers are initially asked for their beliefs about unemployment and crime rates in a US state during periods with either Republican or Democratic governments. They can then revise their beliefs based on a message from a subject in the role of the sender suggesting the correct answer. We inform receivers that senders know the correct answer, that they may or may not tell the truth, and that they benefit monetarily from receivers’ updates in a pre-determined direction. This allows us to analyze how receivers respond to the senders’ messages depending on the senders’ incentives, the receivers’ initial beliefs, and the receivers’ political attitudes.
External Link(s)

Registration Citation

Citation
Feess, Eberhard, Peter-J. Jost and Anna Ressi. 2023. "Fake News: An online experiment." AEA RCT Registry. March 20. https://doi.org/10.1257/rct.10984-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2023-03-01
Intervention End Date
2023-04-30

Primary Outcomes

Primary Outcomes (end points)
We test the hypotheses listed below (although our pilots suggest that most of them will be rejected even with a large number of subjects). All hypotheses are primary outcomes. We test all of them both with non-parametric and with parametric methods (regression analysis); an illustrative analysis sketch follows the hypotheses.

Hypotheses for senders
S1: Senders respond to incentives: they more often send the message that benefits them monetarily, given that the receiver believes their message.

Hypotheses for receivers
R1: Receivers update their beliefs more strongly in the direction of the senders’ messages when these messages run against the senders’ monetary incentives.
R2: Receivers update their beliefs more strongly in the direction of the senders’ messages when these messages are in line with their own political attitude.
R3 (Efficiency): If the senders’ messages run against the senders’ monetary incentives, receivers earn higher payoffs when updating their beliefs in line with these messages than when not updating their beliefs.
R4a (R4b): If the senders’ messages are in line with the senders’ monetary incentives, receivers earn higher (lower) payoffs when updating their beliefs in line with the senders’ messages rather than not updating their beliefs.
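
To illustrate the pre-registered analysis strategy (non-parametric tests alongside regressions with participant-level clustering), here is a minimal Python sketch for R1. All file and column names are placeholders we introduce for illustration; they are not part of the registration.

import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per elicited belief revision.
# Placeholder columns:
#   update         - belief revision in the direction of the sender's message
#   against_incent - 1 if the message runs against the sender's incentive, else 0
#   participant_id - receiver identifier (the clustering unit)
df = pd.read_csv("receiver_beliefs.csv")  # placeholder file name

# Non-parametric test: R1 predicts stronger updating when the message
# runs against the sender's incentive.
u, p = stats.mannwhitneyu(
    df.loc[df["against_incent"] == 1, "update"],
    df.loc[df["against_incent"] == 0, "update"],
    alternative="greater",
)
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")

# Parametric counterpart: OLS with standard errors clustered at the
# participant level, mirroring the clustering stated in this registration.
model = smf.ols("update ~ against_incent", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["participant_id"]}
)
print(model.summary())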
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We obtain three kinds of secondary outcomes, for all of which our study is exploratory:

1. We analyze whether receivers behave (in any respect) differently when they know that their final answer does, rather than does not, influence the payoff of the sender they are matched with.
2. We analyze the impact of our control variables (for demographics, mainly age and gender) and of our measures of personal and political attitudes. We also consider whether the impact of these variables differs between the two domains.
3. We compare two samples: the first consists of all decisions by receivers; the second consists only of the receivers’ decisions in the domain presented to them first (i.e., if a receiver first faces the crime domain and then the unemployment domain, we only consider the decisions in the crime domain). Comparing the two samples allows us to identify whether receivers exhibit preferences for consistent behavior (a data-construction sketch follows this list).
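
A minimal sketch of how the two comparison samples in point 3 could be constructed, assuming a long-format dataset with placeholder columns of our own choosing:

import pandas as pd

# Hypothetical data: one row per receiver decision; 'domain_order' marks
# whether the domain was presented first (1) or second (2).
df = pd.read_csv("receiver_decisions.csv")  # placeholder file name

full_sample = df                                 # decisions in both domains
first_domain_only = df[df["domain_order"] == 1]  # first-domain decisions only

# Behavioral differences between the samples would indicate that second-domain
# decisions are anchored on the first, i.e., a preference for consistency.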
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our experiment matches two subjects who take the role of either the sender or the receiver. Receivers are first asked for their beliefs on whether the unemployment rate in one US state and the crime rate in another US state were lower during the Obama or the Trump administration. The two questions are presented in random order. Participants know that one question will be randomly chosen to be payoff-relevant. For each question, we also ask them how certain they are that their belief is correct. Receivers know that they maximize their expected payoff by indicating their actual degree of certainty.

We then match each receiver with one sender. Senders know the correct answers. For each domain, they can send one of two possible messages to the receiver they are matched with, suggesting that the correct answer is “Obama administration” or “Trump administration”. We inform receivers that senders know the correct answers but do not need to report truthfully. Receivers can then revise their beliefs in response to the sender’s message, still knowing that announcing their actual degree of certainty maximizes their expected payoff.

To elicit the receivers’ responses to the senders’ messages, we use the strategy method: receivers have to indicate whether and by how much they change their initial beliefs for each of the two possible messages of the sender. Payoffs are calculated based on the message the sender actually sent.
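
The registration states that truthfully reporting one’s degree of certainty maximizes the receiver’s expected payoff but does not spell out the underlying mechanism. The following Python sketch shows one standard incentive-compatible possibility, a quadratic scoring rule; it is an assumption for illustration only, not necessarily the rule used in the experiment.

def quadratic_score(reported_certainty: float, correct: bool,
                    max_payoff: float = 1.0) -> float:
    """Payoff for reporting a certainty in [0.5, 1.0] that a given
    administration is the correct answer (quadratic scoring rule)."""
    p = reported_certainty if correct else 1.0 - reported_certainty
    return max_payoff * (1.0 - (1.0 - p) ** 2)

# A risk-neutral receiver who is 70% certain of the answer maximizes
# expected payoff by reporting exactly 0.7:
for report in (0.6, 0.7, 0.8):
    expected = (0.7 * quadratic_score(report, True)
                + 0.3 * quadratic_score(report, False))
    print(report, round(expected, 3))  # 0.78, 0.79, 0.78 -> peak at 0.7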

In the post-experimental questionnaire, we elicit basic demographic information and personal attitudes, such as participants’ political affiliation.

We have two main treatments that differ in the senders’ incentives:
• In treatment OBAMA, the sender gets a bonus if and only if the receiver, after having received the message, announces that “Obama administration” is the correct answer.
• In treatment TRUMP, the sender gets a bonus if and only if the receiver, after having received the message, announces that “Trump administration” is the correct answer.

We have two additional treatments in which we inform receivers of the message that the sender they are matched with sent in a previous experiment. The crucial difference is that receivers know that the sender’s payoff is independent of what they announce. This prevents the receivers’ behavior from being confounded by concerns about the senders’ payoffs.

We use a between-subject design: each subject participates in only one treatment and plays only one role. The experiment will be conducted online, and participants will be recruited via Amazon Mechanical Turk. We will require that participants are located in the US and have a HIT approval rate of at least 95%. Participants who answer the majority of our comprehension questions incorrectly will not be allowed to continue and will hence be excluded from our sample.
Experimental Design Details
Randomization Method
We use a stratified randomization approach based on the point in time at which individuals choose to participate in our study. We first collect the senders’ data: the first 10 individuals to participate in our study are assigned to treatment OBAMA, the next 10 participants to treatment TRUMP, and this alternation continues until we have collected the total number of observations we strive for. Afterwards, we proceed in the same manner when collecting the receivers’ data in our four treatments (a minimal sketch follows).
During the study, the randomization of the questions is done by the software.
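
A minimal sketch of the arrival-order block assignment described above; the block size and treatment labels follow the text, while function and variable names are ours:

BLOCK_SIZE = 10
SENDER_TREATMENTS = ["OBAMA", "TRUMP"]  # alternating blocks of 10 arrivals

def assign_treatment(arrival_index: int, treatments: list[str]) -> str:
    """Treatment for the participant arriving at 0-based position
    arrival_index: blocks of BLOCK_SIZE cycle through the treatments."""
    block = arrival_index // BLOCK_SIZE
    return treatments[block % len(treatments)]

# First 10 arrivals -> OBAMA, next 10 -> TRUMP, then OBAMA again, etc.
print([assign_treatment(i, SENDER_TREATMENTS) for i in range(25)])

# Receiver data collection would cycle through the four receiver treatments
# in the same manner, e.g. assign_treatment(i, RECEIVER_TREATMENTS) with a
# hypothetical four-element list of treatment labels.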

By comparing the two main treatments, we predominantly identify 1) whether senders choose to send different messages when they profit from receivers indicating that “Trump administration” is the correct answer compared to when they profit from receivers indicating that “Obama administration” is the correct answer, and 2) whether and how receivers account for the senders’ different incentives when giving their answers. Hence, instead of having a neutral control group, we use participants’ decisions in the OBAMA treatment as the baseline for comparison.
Randomization Unit
Individual level randomization
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We cluster at the participant level; given the planned recruitment below, this corresponds to 2430 clusters.
Sample size: planned number of observations
We plan to collect 8100 observations. We elicit 3240 beliefs from receivers who do not influence the sender’s payoff and 3240 beliefs from receivers who do. Our use of the strategy method for eliciting the receivers’ beliefs implies that we need to elicit 1620 messages from senders.
Sample size (or number of clusters) by treatment arms
Because we elicit each participant’s decisions in both domains, we need to recruit a total of 2430 participants (1620 in the role of receivers and 810 in the role of senders) to gather our planned number of observations.
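
As a consistency check, the planned counts reconcile as follows, given two domains per participant and two strategy-method responses per domain:

\begin{align*}
\text{Receiver beliefs} &= 1620 \text{ receivers} \times 2 \text{ domains} \times 2 \text{ messages} = 6480 = 3240 + 3240,\\
\text{Sender messages} &= 810 \text{ senders} \times 2 \text{ domains} = 1620,\\
\text{Total observations} &= 6480 + 1620 = 8100.
\end{align*}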
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethical Review Board (WHU – Otto Beisheim School of Management)
IRB Approval Date
2023-02-23
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials