Fake News: An online experiment

Last registered on October 11, 2023

Pre-Trial

Trial Information

General Information

Title
Fake News: An online experiment
RCT ID
AEARCTR-0010984
Initial registration date
February 27, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 20, 2023, 5:21 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
October 11, 2023, 3:53 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
WHU - Otto Beisheim School of Management

Other Primary Investigator(s)

PI Affiliation
Victoria University of Wellington
PI Affiliation
WHU – Otto Beisheim School of Management

Additional Trial Information

Status
In development
Start date
2023-03-01
End date
2024-02-29
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Fake news is a global phenomenon that has decisively influenced our political reality. Two important features of fake news are that (i) the person acting as the sender of a message has superior knowledge compared to the receivers, and (ii) the sender has incentives to push the receivers’ beliefs in specific directions. Moreover, the success of fake news is likely to depend on (i) the receivers’ confidence in their initial beliefs and (ii) whether the fake news is in line with the receivers’ (ideological) worldview (partisan behavior). We develop an experiment that accounts for these features. Subjects in the role of receivers are initially asked whether they believe that the unemployment and crime rates in a US state were higher under the Trump (T) or Obama (O) administration. They can then revise their beliefs based on a message from a sender suggesting the correct answer. Receivers are informed that senders know the correct answer, that senders are nevertheless allowed to send either T or O, and that senders get a bonus if the receiver’s final answer is T (O). Receivers get a bonus if their answer is correct. This allows us to analyze how receivers respond to the senders’ messages depending on the senders’ incentives, the receivers’ initial beliefs, and the receivers’ political attitudes.
External Link(s)

Registration Citation

Citation
Feess, Eberhard, Peter-J. Jost, and Anna Ressi. 2023. "Fake News: An online experiment." AEA RCT Registry. October 11. https://doi.org/10.1257/rct.10984-2.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2023-10-16
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
We test the hypotheses listed below. All hypotheses that are based on our behavioral game theoretical model are primary outcomes. We test these hypotheses both with non-parametric and parametric methods (regression analysis). The main ingredients are senders’ lying costs and receivers’ partisanship. In our analyses, we focus on observations for which the senders’ messages are not in line with the receivers’ initial answers.

Hypotheses for senders
S1. Incentives: The probability of lying is lower if the truth corresponds to the senders’ incentives than if the truth does not correspond to the senders’ incentives.
S2. Lying costs: The probability of lying is higher the lower the senders’ lying costs.

Hypotheses for receivers
R1. Partisanship I: The probability that the initial answer is "Trump administration" ("Obama administration") is higher for Republicans (Democrats).
R2. Partisanship II: The probability that receivers change their initial answer from “Trump administration” to “Obama administration” (“Obama administration” to “Trump administration”) is higher for Democrats (Republicans).
R3. Senders’ incentives: The probability that receivers change their initial answer is lower if the message corresponds to the senders’ incentives than when it does not correspond to the senders’ incentives.
R4. Belief: The probability that receivers change their initial answer decreases with the strength of their belief in stage 1.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The following hypotheses are secondary outcomes because they are not based on our behavioral game theoretical model:
S3. Partisanship: The probability that senders send the message “Obama administration” (“Trump administration”) is higher for Democrats (Republicans).
R5. Own incentives: The probability that receivers switch after getting a message that does not correspond to the senders’ incentives is higher for receivers whose answers in stage 1 are not incentivized than for those whose answers in stage 1 are incentivized.
There are three kinds of additional secondary outcomes for which our study is explorative:

1. We analyze whether or not receivers earn higher payoffs when switching their initial answer if the senders’ messages are in line with the senders’ incentives.
2. We analyze the impact of our control variables (for demographics mainly age and gender) and our measures of personal and political attitudes. We also consider if the impact of these variables differs between the two questions (crime and unemployment).
3. We compare two samples: The first sample consists of the decisions of all receivers. The second sample consists only of the receivers’ decisions for the first question (that is, if a receiver is first asked about crime and then for unemployment, we only consider the decisions for crime and vice versa). Comparing the two samples then allows us to identify whether receivers exhibit preferences for consistent behavior.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our experiment matches two subjects, one in the role of the sender and one in the role of the receiver. Receivers know that there are two main stages, of which only one will be randomly chosen to determine their bonus.

In stage 1 of the experiment, receivers are asked whether they believe that the unemployment rate in one US state and the crime rate in another US state were lower during the Obama or the Trump administration. The two questions are presented in random order. Participants know that, if stage 1 is paid out, exactly one of the two questions will be randomly chosen to determine their payoff and that they will receive a bonus if and only if their answer to this question is correct. For each question, we also ask them how certain they are that their answer is correct.

In stage 2, receivers can revise their previous answers based on a message from a sender. To elicit these messages, we randomly choose two out of four questions for each sender. For half of the questions, the correct answer is “Obama administration”; for the other half, it is “Trump administration”. We inform senders of the correct answers to their questions. For each question, they can then send one of two possible messages, suggesting that the correct answer is either “Obama administration” or “Trump administration”. Each of the sender’s two messages is then sent to a different receiver. To determine a sender’s payoff, one of these receivers is chosen at random. Senders in treatment TRUMP (OBAMA) know that they will receive a bonus if and only if that receiver’s answer is “Trump” (“Obama”). We inform receivers of the senders’ incentives and that senders know the correct answers but are not required to report truthfully. After receivers have observed the sender’s message, we again ask them whether they believe that “Obama administration” or “Trump administration” is the correct answer. Receivers know that, if stage 2 is paid out, they will receive a bonus if and only if their answer is correct.

To elicit the receivers’ responses to the senders’ messages, we use the strategy method: for each of the two questions, receivers choose between “Obama administration” and “Trump administration” for each of the two possible sender messages. Payoffs are calculated based on the message the sender actually sent.
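
As an illustration of the strategy-method payoff rule described above (a minimal sketch with hypothetical variable names, not the actual experimental software): the receiver pre-commits a final answer for each possible sender message, and only the answer matching the message actually sent is payoff-relevant.

```python
# Sketch of strategy-method payoff determination (hypothetical names;
# the bonus amount is illustrative, not taken from the registration).
BONUS = 1.0

def receiver_payoff(contingent_answers, actual_message, correct_answer):
    """contingent_answers maps each possible message ('O' or 'T') to the
    receiver's final answer given that message; only the entry for the
    message actually sent determines the payoff."""
    final_answer = contingent_answers[actual_message]
    return BONUS if final_answer == correct_answer else 0.0

# Example: a receiver who always follows the sender's message.
answers = {"O": "O", "T": "T"}
payoff = receiver_payoff(answers, actual_message="T", correct_answer="O")
```

In this example the sender's message ("T") contradicts the truth ("O"), so the message-following receiver earns nothing; had the sender reported truthfully, the same contingent strategy would have earned the bonus.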

In the post-experimental questionnaire, we elicit basic demographic information and personal attitudes, including participants’ political affiliation.

Overall, we have two main treatments that differ in the senders’ incentives:
• In treatment OBAMA, the sender will get a bonus if and only if the receiver, after having received the message, announces that “Obama administration” is the correct answer.
• In treatment TRUMP, the sender will get a bonus if and only if the receiver, after having received the message, announces that “Trump administration” is the correct answer.

We also run an additional treatment in which receivers are not incentivized for their answers in the first stage (treatment STAGE1_NI). For these participants, the bonus is always determined by their answers in stage 2. As before (treatment STAGE1_I), they receive a bonus if and only if their answer is correct.

We use a between-subject design; each subject hence participates in only one treatment and plays only one role. The experiment will be conducted online, and participants will be recruited via Prolific. We will require that participants are located in the US and have an approval rate of at least 95%. Participants who answer the majority of our comprehension questions incorrectly will not be allowed to continue and will hence not be included in our sample.
Experimental Design Details
Preamble
We ran several pretests with our original design, in which the payoff of subjects in the role of receivers depended on their announced degree of certainty. Despite various attempts to rewrite the instructions, the percentage of subjects who did not understand the design remained too high. We therefore simplified the design (see the instructions above). Furthermore, we developed a behavioral game-theoretical model from which we derive the hypotheses tested in our experiment.
Randomization Method
We first collect the senders’ data and collect the receivers’ data only after we have obtained the desired number of sender observations. For both stages, we use a randomization approach based on the point in time at which individuals choose to participate in our study.
During the study, any randomization is done by the software.
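
The individual-level randomization described above can be sketched as follows (a minimal illustration with hypothetical helper names; the registration does not specify the software's internal procedure):

```python
import random

# Sketch of individual-level treatment assignment (hypothetical helpers).
TREATMENTS = ["OBAMA", "TRUMP"]

def assign_treatment(rng=random):
    """Assign each arriving participant to one treatment arm at random."""
    return rng.choice(TREATMENTS)

def question_order(rng=random):
    """Present the two questions in random order for each receiver."""
    order = ["crime", "unemployment"]
    rng.shuffle(order)
    return order
```

Because assignment happens independently per participant, treatment is randomized at the individual level, consistent with the "Randomization Unit" field below.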

By comparing the two main treatments (OBAMA vs TRUMP), we predominantly identify 1) whether senders choose to send different messages when profiting from receivers indicating that “Trump administration” is the correct answer compared to profiting from receivers believing that “Obama administration” is the correct answer, and 2) whether and how receivers account for the different incentives of senders when giving their answers. Hence, instead of having a neutral control group, we use participants’ decisions in the OBAMA treatment as the baseline for comparison.
Randomization Unit
Individual level randomization
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We cluster on the participant level.
Sample size: planned number of observations
We plan to collect 2000 observations for senders and 4000 observations for receivers. To obtain our planned number of observations, we need to recruit 500 receivers whose answers are already incentivized in stage 1 and an additional 500 receivers whose answers in stage 1 are not incentivized (in each case, 250 for treatment TRUMP and 250 for treatment OBAMA). In addition, we need to recruit 1000 senders, yielding a total of 2000 messages. Our use of the strategy method for eliciting receivers’ beliefs implies that we obtain 4000 receiver observations in total (1000 receivers times two questions times two messages). Because our analyses focus on those receivers whose initial answer is not in line with the message they receive, only half of the observations, i.e. 2000, will actually be used.
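
The planned counts above follow from simple arithmetic, sketched here with the numbers taken from the registration:

```python
# Arithmetic behind the planned observation counts (figures from the
# registration; variable names are illustrative).
receivers = 500 + 500           # stage-1 incentivized + non-incentivized
senders = 1000
questions_per_subject = 2
messages_per_question = 2       # strategy method: one answer per possible message

sender_observations = senders * questions_per_subject       # 2000 messages
receiver_observations = (receivers * questions_per_subject
                         * messages_per_question)           # 4000 observations
analyzed = receiver_observations // 2  # message contradicts initial answer
```
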
Sample size (or number of clusters) by treatment arms
Because we elicit each participant’s decisions in both domains, we need to recruit a total of 2000 participants. We recruit 1000 participants in the role of receivers. For 500 receivers, their answers are incentivized already in stage 1 and for the remaining 500 receivers, their answers in stage 1 are not incentivized (in each case, 250 for treatment TRUMP and 250 for treatment OBAMA). Moreover, we recruit 1000 participants in the role of senders.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethical Review Board (WHU - Otto Beisheim School of Management)
IRB Approval Date
2023-10-05
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials