The "Fake News" Effect: Experimentally Identifying Motivated Reasoning Using Trust in News
Initial registration date
May 25, 2020
Last updated: May 26, 2020 5:00 PM EDT
Other Primary Investigator(s)
Additional Trial Information
Motivated reasoning posits that people distort how they process information in the direction of beliefs they find attractive. This paper creates a novel experimental design to distinguish motivated reasoning from Bayesian updating when people have preconceived beliefs. It analyzes how subjects assess the veracity of information sources that tell them the median of their belief distribution is too high or too low. Bayesians would infer nothing about source veracity, but motivated beliefs are evoked. Results support politically-motivated reasoning about immigration, income mobility, crime, racial discrimination, gender, climate change, and gun laws. Motivated reasoning helps explain belief biases, polarization, and overprecision.
Thaler, Michael. 2020. "The 'Fake News' Effect: Experimentally Identifying Motivated Reasoning Using Trust in News." AEA RCT Registry. May 26.
The theory of motivated reasoning posits that people distort how they process information in the direction of beliefs they find attractive. This experiment uses a new design that aims to disentangle motivated reasoning from Bayesian inference when people enter into the experiment with preconceived beliefs. It analyzes how subjects assess the veracity of information sources that tell them the median of their belief distribution is too high or too low. In this environment, Bayesians would infer nothing about the source veracity, but motivated beliefs are evoked. I test motivated reasoning on ten different topics, and relate the bias to current belief biases, belief polarization, trust in "Fake News," overprecision, and overconfidence.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Motivated reasoning: Assessment of the veracity of a news source.
Primary Outcomes (explanation)
Motivated reasoning is measured as a directional deviation from Bayes' rule. Subjects are asked to assess the probability that a message comes from a news source that tells the truth (or lies). The design is constructed such that Bayesians will not update about the veracity of a news source regardless of the message. However, motivated reasoners will give higher veracity assessments to a source when its message tells them something they are more motivated to believe. Motivated reasoning is measured by regressing veracity assessments on message type (pro-motivated belief / anti-motivated belief).
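As a minimal sketch of this primary regression, note that with a single binary regressor the OLS slope equals the difference in mean veracity assessments between the two message types. The data below are hypothetical and only illustrate the estimand; under Bayesian updating the coefficient should be zero.

```python
# Sketch (hypothetical data): regress veracity assessment on message type.
# With one binary regressor, the OLS coefficient is the difference in means.
from statistics import mean

# Hypothetical assessments on the 0-1 scale (elicited as 0/10 .. 10/10).
pro_motive = [0.7, 0.6, 0.8, 0.5]   # assessments after pro-motivated-belief news
anti_motive = [0.4, 0.5, 0.3, 0.6]  # assessments after anti-motivated-belief news

# Coefficient on a Pro-Motive dummy = mean(pro) - mean(anti);
# a positive value indicates motivated reasoning, zero is Bayesian.
motivated_reasoning_effect = mean(pro_motive) - mean(anti_motive)
print(round(motivated_reasoning_effect, 3))  # 0.2: Pro-Motive news trusted more
```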
Secondary Outcomes (end points)
1. Dummy for the direction in which subjects change their beliefs
2. Polarization: whether subjects' beliefs move away from or towards the mean belief
3. Overprecision: 1/2 minus the probability that the 50% confidence interval contains the answer
4. Confidence: prediction of performance relative to 100 others
5. Performance: subjects' performance on news assessments
6. Willingness-to-pay for a message (tertiary)
Secondary Outcomes (explanation)
1. Whether subjects change their beliefs in the direction of the message statement; e.g., if the message says "The answer is greater than 57," this takes value 1 if and only if the subject's second guess is greater than 57.
2. Whether subjects are more likely to change their beliefs in the direction of the message statement when the message tells them to move away from the mean population belief.
3. Subjects' median, 25th-percentile, and 75th-percentile beliefs are elicited on each question. Overprecision is a dummy that takes value 0.5 if the correct answer is not within the 50% CI and -0.5 if it is. That is, overprecision equals 0.5 - P(answer within 50% CI), which is positive (negative) when the CI contains the true answer less (more) than 50% of the time.
4. Subjects' prediction of where they rank in performance on the study, elicited towards the end of the experiment.
5. On every news assessment, subjects earn a score proportional to the probability that they win a bonus prize. Performance on a given question equals this score.
6. A Becker-DeGroot-Marschak elicitation determines whether the subject receives a message at all.
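The overprecision outcome described in item 3 can be sketched as a simple function of the elicited quartiles; the quartile values below are hypothetical.

```python
# Sketch of the overprecision dummy: +0.5 if the correct answer falls
# outside the subject's elicited 50% CI (25th-75th percentile), -0.5 if
# inside. Averaging across questions gives 0.5 - P(answer within 50% CI).
def overprecision(p25, p75, answer):
    """Per-question overprecision dummy from elicited quartiles."""
    inside = p25 <= answer <= p75
    return -0.5 if inside else 0.5

# Hypothetical question where the subject's interquartile range is [30, 50]:
print(overprecision(30, 50, 42))  # -0.5: answer inside the CI
print(overprecision(30, 50, 57))  # 0.5: answer outside the CI
```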
Identifying Motivated Reasoning:
The main test in the experiment involves three steps (see the analysis plan for more details):
1. Beliefs: Subjects are asked to guess the answers to questions like the refugee one above. Importantly, they are asked and incentivized to guess their median belief (i.e., such that they find it equally likely for the answer to be above or below their guess). They are also asked and incentivized to state their interquartile range.
2. News: Subjects receive a binary message from one of two news sources: True News and Fake News. The message from True News is always correct, and the message from Fake News is always incorrect. The probability of either source is 1/2, i.i.d. across questions. This is the main (within-subject) treatment variation. The message says either "The answer is greater than your previous guess of [previous guess]." or "The answer is less than your previous guess of [previous guess]." Note that the message space differs across subjects since subjects have different priors; these customized messages are designed to have the same subjective likelihood of occurring.
3. Assessment: After receiving the message, subjects assess the probability that the source was True News on a scale from 0/10 to 10/10 and are incentivized to state their true belief. This is the main outcome measure. The page is identical to the beliefs page, but the guess boxes are replaced with assessment choices.
The effect of variation in news on veracity assessments is the primary outcome variable for identifying motivated reasoning. The general point of this setup is that subjects receive messages that compare the answer to their median belief, so they should not rationally update their assessment of source veracity based on the message.
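The Bayesian benchmark behind this design can be made explicit with Bayes' rule. Because the message compares the answer to the subject's own median, each message is subjectively equally likely under either source, so the posterior on True News stays at the 1/2 prior. A sketch with these values:

```python
# Why the assessment step is uninformative for a Bayesian: the message
# "The answer is greater than your previous guess" is subjectively 1/2
# likely under True News (answer above the median) and 1/2 likely under
# Fake News (answer below the median, so Fake News reports "greater").
def posterior_true_news(p_msg_given_true, p_msg_given_fake, prior=0.5):
    """Bayes' rule for P(True News | message)."""
    num = p_msg_given_true * prior
    return num / (num + p_msg_given_fake * (1 - prior))

# Both likelihoods are 1/2 by construction, so there is no update:
print(posterior_true_news(0.5, 0.5))  # 0.5
```

Any systematic deviation of stated assessments from 1/2 in the direction of attractive beliefs is therefore attributed to motivated reasoning rather than Bayesian inference.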
Directionally different assessments are difficult to reconcile with Bayesian updating. They are also difficult to reconcile with general misweighting of priors (since the prior on the source is fixed at 1/2) or of likelihoods (each message is equally likely, so the message is uninformative about source veracity). However, these deviations can be explained by motivated reasoning. The most direct test is to hypothesize what people are motivated to believe and compare their assessments of "Pro-Motive" news and "Anti-Motive" news. If Pro-Motive news is trusted more than Anti-Motive news, this indicates that motivated reasoning with the hypothesized motives is likely at play. Interacting this assessment gap with a treatment provides an estimate of the treatment's effect on the degree of the motivated-reasoning bias: if the gap is positive and the interaction of news type (Pro/Anti-Motive) and treatment is negative, then the treatment is likely effective in debiasing subjects. Subjects see 14 rounds of questions; on politicized and performance-related topics, the random binary messages are coded as Pro-Motive or Anti-Motive using a hypothesized table of motives.
Between-subject randomizations:
Given prior / not given prior:
1/3 of subjects will be told that there is a 50% chance of seeing True News and a 50% chance of seeing Fake News, while the other 2/3 will not be given this prior.
Willingness-to-pay (WTP) / Second guessing (SG):
WTP: Half of subjects will have their WTP for a message elicited using a Becker-DeGroot-Marschak procedure in Round 12. They will either see a message as in previous rounds (if their WTP is above the random number), or a black bar over the message.
SG: The other half of subjects will be asked to guess the answer to the original question again after seeing the message.
Experimental Design Details
Computer randomizes news source within subject; computer randomizes between-subject treatments.
News source: Question
Was the treatment clustered?
Sample size: planned number of clusters
1000 individuals (based on comprehension-check failure rates from the pilot), 90% of whom will be partisan and included in the main analysis.
Sample size: planned number of observations
12800 news assessments (based on comprehension-check failure rates from the pilot and willingness-to-pay estimates)
Sample size (or number of clusters) by treatment arms
News source: 6400 True News, 6400 Fake News. 4000 Pro-Party News, 4000 Anti-Party News. 500 Pro-Performance News, 500 Anti-Performance News.
WTP treatment: 500 individuals
Second-guess treatment: 500 individuals
Given prior treatment: 333 individuals
Not given prior treatment: 667 individuals
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)
Committee on the Use of Human Subjects: University-Area Institutional Review Board at Harvard
IRB Approval Date
IRB Approval Number
Post Trial Information
Is the intervention completed?
Is data collection complete?