Selective Exposure and Trust in News

Last registered on December 02, 2024

Pre-Trial

Trial Information

General Information

Title
Selective Exposure and Trust in News
RCT ID
AEARCTR-0014741
Initial registration date
November 01, 2024

First published
December 02, 2024, 11:08 AM EST

Locations

Region

Primary Investigator

Affiliation
UCL

Other Primary Investigator(s)

PI Affiliation
Stanford
PI Affiliation
Stanford

Additional Trial Information

Status
In development
Start date
2024-11-01
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In the United States, Democrats and Republicans tend to consume different types of news, as partisans are more likely to seek out information that supports their party. This paper seeks to better understand why people tend to consume more co-partisan news. In particular, we consider two main types of explanations: people believe that co-partisan sources are more accurate and so value them more instrumentally, and people have a preference for reading co-partisan sources for reasons besides their instrumental value. To disentangle these hypotheses, we run an experiment in which people choose news sources when they are asked to make predictions about a salient political event, and we vary the incentives they have for making accurate predictions. When incentives increase, people may turn to sources they expect to be more instrumentally valuable, and potentially away from sources they consume for other reasons. We test this effect using real news sources like Fox News and CNN, as well as artificially-constructed sources that exogenously vary in accuracy and slant. To isolate the role that the accuracy of news-source predictions plays, we analyze behavior both when participants use news sources to improve their own predictions and when news sources make predictions on their behalf.
External Link(s)

Registration Citation

Citation
Gentzkow, Matthew, Peter Robertson and Michael Thaler. 2024. "Selective Exposure and Trust in News." AEA RCT Registry. December 02. https://doi.org/10.1257/rct.14741-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We have copied text from our pre-analysis plan in the "Hidden" field below.
Intervention (Hidden)
We run an experiment in which people choose news sources when they are asked to make predictions about a salient political event: the outcome of the 2024 US Presidential election. Participants make choices over which sources they would like to access to help them make a better prediction. To disentangle preferences for like-minded news from beliefs about accuracy, we vary the incentives for correct predictions. The idea is that higher incentives lead people to turn to sources they expect to be more instrumentally valuable, while leaving other, psychological-utility factors unaffected.

Our experiment focuses on a salient political context: the outcome of the 2024 United States presidential election. Participants predict the winner in one of seven states: Arizona, Georgia, Michigan, Nevada, North Carolina, Pennsylvania, and Wisconsin. (These states are classified by many sources as the seven states most likely to tip the overall election; e.g. Silver and McKown-Dawson (2024) gives these states a 90% chance of tipping the election, as of October 31.)

We run two versions of this experiment. In Study 1, we consider the case in which people choose among real media outlets that plausibly vary in their party alignment. In Study 2, we further unpack mechanisms by constructing artificial information sources whose accuracy and slant we exogenously vary. This is important, since participants are likely to arrive at Study 1 with well-formed views about the accuracies and slants of the sources we use.

There are many factors that influence instrumental utility. To isolate the channel of perceived accuracy, it is not enough to simply let participants access news sources and update their beliefs. Instead, much of our analysis focuses on a treatment in which participants are asked to delegate their prediction to the source; that is, the source makes their prediction for them. In this treatment, the instrumental value of a source is exactly proportional to its accuracy. This also enables us to control for instrumental value and unpack the other factors at play.
Intervention Start Date
2024-11-01
Intervention End Date
2024-11-05

Primary Outcomes

Primary Outcomes (end points)
Co-partisan choice: Indicator variable for whether the participant ranks the co-partisan source higher than the other two available sources.
Primary Outcomes (explanation)
We provide further information and detail our regression specification in our pre-analysis plan.

Secondary Outcomes

Secondary Outcomes (end points)
Same source: Indicator variable for whether the participant ranks the same source highest (among all three available sources) across elicitation with and without incentives.

Isolation index: Within the experiment, this measures how much selective exposure there is between Democrats and Republicans. We estimate the index using the method of Gentzkow and Shapiro (2011).
Secondary Outcomes (explanation)
We provide further information, detail our regression specification, and discuss auxiliary outcomes, in our pre-analysis plan.
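As a rough illustration, the Gentzkow-Shapiro isolation index can be computed from source-choice counts as in the sketch below. This is a minimal sketch assuming the standard definition from Gentzkow and Shapiro (2011) (partisan exposure of each group to each outlet's Republican audience share); the outlet names and counts are hypothetical, not from our data.

```python
def isolation_index(rep_visits, dem_visits):
    """Gentzkow-Shapiro (2011) isolation index from outlet visit counts.

    rep_visits / dem_visits: dicts mapping outlet -> number of choices by
    Republicans / Democrats. Returns Republicans' average exposure to the
    Republican audience share minus Democrats' average exposure to it.
    """
    outlets = set(rep_visits) | set(dem_visits)
    # Republican share of each outlet's audience
    share = {m: rep_visits.get(m, 0) / (rep_visits.get(m, 0) + dem_visits.get(m, 0))
             for m in outlets}
    total_r = sum(rep_visits.values())
    total_d = sum(dem_visits.values())
    exp_r = sum(rep_visits.get(m, 0) / total_r * share[m] for m in outlets)
    exp_d = sum(dem_visits.get(m, 0) / total_d * share[m] for m in outlets)
    return exp_r - exp_d

# Hypothetical choice counts: perfect segregation would give an index of 1,
# identical choice patterns an index of 0.
rep = {"Fox News": 80, "CNN": 20}
dem = {"Fox News": 20, "CNN": 80}
print(round(isolation_index(rep, dem), 2))  # prints 0.36
```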

Experimental Design

Experimental Design
We have copied text from our pre-analysis plan in the "Hidden" field below.
Experimental Design Details
Data

We plan to recruit 4,000 participants from the online platform Prolific (prolific.co): 2,500 participants in Study 1 and 1,500 participants in Study 2, over the course of November 1-4. In each study, we screen participants to require that they be United States nationals who are eligible to vote in the 2024 elections, that they have previously completed 100 submissions on Prolific, that they have an approval rate of at least 90%, and that they did not take our previous studies related to this project. 2,000 participants will have reported to Prolific that they are Democrats, and the other 2,000 that they are Republicans. We will also use samples that are representative on gender and race/ethnicity within each party, in order to have a more broadly representative population of interest.

Participants are given an information sheet and consent form. Only participants who give consent to participate will enter into the study. Participants are paid $2.50 for completing the study, plus any bonuses they receive, and the study is advertised as taking 10 minutes.

Some participants may report their party differently in our survey than they do on Prolific. As such, we include our own question on party identification, and in all our analyses we classify participants as Republicans if they say they are either Republicans or Independents who prefer the Republican Party, and as Democrats if they say they are either Democrats or Independents who prefer the Democratic Party.


Initial Beliefs

Participants are randomly assigned to one of seven swing states and are asked questions about whether they believe that Donald Trump or Kamala Harris will win in that state. They are asked to make a binary prediction (e.g. Trump is more likely to win in Arizona) and a probabilistic prediction (e.g. Trump has a 67% chance to win in Arizona).

After they state their initial beliefs, participants enter into either Study 1 or Study 2.


Study 1: Real news sources

In Study 1, participants are asked to choose among four options. One option is the website of Fox News, the most popular news outlet for Republicans according to YouGov (2024). One option is the website of CNN, the most popular news outlet for Democrats according to YouGov (2024). The third option is a local source in the specific state that participants predict. (Specifically, we use the following local sources. Arizona: The Arizona Republic; Georgia: The Atlanta Journal-Constitution; Michigan: MLive Media Group; Nevada: The Las Vegas Review-Journal; North Carolina: The News and Observer; Pennsylvania: The Philadelphia Inquirer; Wisconsin: The Milwaukee Journal Sentinel.)
The fourth option is to make the guess themselves, without any additional news.

Participants rank these four options in various conditions. The higher they rank an option, the more likely it is that they will be assigned to that option. We define the "co-partisan source" as Fox News for Republicans and CNN for Democrats, and the "counter-partisan source" as CNN for Republicans and Fox News for Democrats.

We consider four types of randomization.

First, we randomly vary the framing of the initial prediction question between participants. Participants are roughly equally likely to be given a partisan frame that suggests that Republicans will be favored in the state, that Democrats will be favored in the state, or that there are reasons to think both parties could be favored.

Second, a major treatment arm involves randomly varying, between participants, how the sources are used. To ensure incentive compatibility of initial guesses, about 1% of participants will skip the source pages and proceed directly to the end of the study. Of the remaining participants, about 40% will be assigned to the SecondGuess treatment, in which they have the option to revise their guess about the winning candidate after seeing the source's content. The remaining 60% will be assigned to the Delegate treatment, in which they delegate their decision to the news source's prediction of the winner. To create a measure of news-source predictions, we asked a separate group of workers on Prolific to categorize each source as predicting Trump or predicting Harris. This binary prediction is then used in place of the participant's prediction to determine whether the participant is deemed correct or incorrect. To test whether participants' preferences for sources are affected by seeing good or bad news from a source, roughly half of the participants in the Delegate treatment see the source's prediction after their choice is locked in ("DelegateSeen"), and the other half do not see the source's prediction at any stage ("DelegateUnseen").

Third, our main within-person treatment arm involves incentives for correct predictions. We first ask most participants to make their ranking choice without incentives. Then, to ensure incentive compatibility, we implement this choice for 1% of participants, who either make a new prediction (in SecondGuess) or skip to the end of the study (in Delegate). Among the remaining 99%, 70% of participants are asked to make their ranking a second time, now with monetary incentives for getting their binary election prediction correct. To test the effects of large versus small incentives, we vary between participants whether these incentives are $2 or $20. We additionally have 10% of participants face incentives on the initial prior-elicitation screen. This allows us to test for partisan cheerleading and for how attention is affected by the incentives.

After this decision is made, the ranking is implemented as follows: 99.4% of participants face a price elicitation to change their preferred option, in order to determine the strength of preference for a given source. For these participants, we first select what will be considered each participant's top-ranked choice: their first option is chosen with 99% chance and their second option with 0.4% chance. The remaining 0.6% of participants do not face a new price elicitation; they are simply assigned their first choice with 0.3% chance, their second with 0.2% chance, and their third with 0.1% chance.

For these 99.4% of participants, we offer a small bonus ($0.20 in the $2 incentive condition and $2 in the $20 incentive condition) to give up the opportunity to look at or delegate a source (or if they rank their own prediction higher, a small bonus to use their top-ranked source). Choice in this question is a binary measure of strength of preference. To ensure incentive compatibility of their initial choices, either this option will be implemented (with probability 1/2) or participants will receive their top-ranked option and the small bonus (with probability 1/2).
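The ranking-implementation lottery described above can be sketched as follows. This is an illustrative sketch using the stated unconditional probabilities (99.4% price-elicitation branch with a 99%/0.4% top-choice split, and a 0.6% direct-assignment branch with 0.3%/0.2%/0.1% odds); the function name and option labels are hypothetical, not from the survey code.

```python
import random

def implement_ranking(ranking, rng):
    """Given a best-to-worst ranking of four options, select the branch and
    the option treated as the participant's top choice."""
    if rng.random() < 0.994:
        # 99.4% branch: a price elicitation follows. Top choice is rank 1
        # with unconditional prob. 99% (0.99/0.994 conditional), else rank 2.
        top = ranking[0] if rng.random() < 0.99 / 0.994 else ranking[1]
        return ("price_elicitation", top)
    # 0.6% branch: direct assignment with unconditional probs 0.3%/0.2%/0.1%,
    # i.e. conditional probs 1/2, 1/3, 1/6 for ranks 1, 2, 3.
    r = rng.random()
    if r < 1 / 2:
        return ("direct", ranking[0])
    elif r < 1 / 2 + 1 / 3:
        return ("direct", ranking[1])
    return ("direct", ranking[2])

rng = random.Random(0)
print(implement_ranking(["Fox News", "CNN", "Local paper", "Own guess"], rng))
```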

We then have participants rate the sources from 0 to 100 on the likelihood that the source will make an accurate prediction and on the probability that the source will predict that Harris (vs. Trump) wins in that state.

Participants are then directed to the selected website in the SecondGuess treatment, informed of the website's prediction in the DelegateSeen treatment, or simply told that the website's prediction has been submitted on their behalf in the DelegateUnseen treatment. Participants in the SecondGuess treatment have their binary beliefs re-elicited (in addition to their probabilistic beliefs).

Finally, all participants receive a demographics survey, and a short debriefing page where they are briefly told about the purpose of the study.


Study 2: Artificial news sources

Study 2 extends Study 1 to further unpack mechanisms. In Study 2, we create artificial sources that exogenously vary in their accuracy and slant. Participants choose among a Red Model, Blue Model, Gray Model, and their previous guess.

Models are constructed by adding noise to Nate Silver's Silver Bulletin predictions (Silver and McKown-Dawson 2024). Specifically, we take the binary forecasts and randomly switch some of the predictions from Harris to Trump, or from Trump to Harris. Models vary in their accuracy (few switches = "high accuracy" and many switches = "medium accuracy") and their slant (the Red Model has more switches to Trump, and the Blue Model has more switches to Harris). Each participant chooses between variants of each model, and each of the three models is independently equally likely to have high or medium accuracy.
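The model construction above can be sketched as follows. This is a minimal sketch: the switch probabilities, the baseline forecasts, and the function name are hypothetical placeholders; the actual rates and the Silver Bulletin inputs are specified in the pre-analysis plan.

```python
import random

def make_model(forecasts, p_to_trump, p_to_harris, rng):
    """Add noise to binary state forecasts ('Trump'/'Harris') by randomly
    switching predictions; asymmetric switch rates create slant, and higher
    overall rates create lower accuracy."""
    noisy = []
    for pred in forecasts:
        if pred == "Harris" and rng.random() < p_to_trump:
            noisy.append("Trump")   # switched toward Trump
        elif pred == "Trump" and rng.random() < p_to_harris:
            noisy.append("Harris")  # switched toward Harris
        else:
            noisy.append(pred)      # left unchanged
    return noisy

rng = random.Random(0)
baseline = ["Harris", "Trump", "Harris", "Trump", "Harris", "Trump", "Harris"]
# Hypothetical rates: the Red Model switches more toward Trump, the Blue
# Model more toward Harris; "high accuracy" variants use lower overall
# switch rates than "medium accuracy" variants.
red_high  = make_model(baseline, p_to_trump=0.20, p_to_harris=0.05, rng=rng)
blue_high = make_model(baseline, p_to_trump=0.05, p_to_harris=0.20, rng=rng)
gray_med  = make_model(baseline, p_to_trump=0.30, p_to_harris=0.30, rng=rng)
```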

In Study 2, we use a streamlined set of treatments. Participants only have the option to earn the $2 incentive, and are only randomized between the DelegateSeen and DelegateUnseen treatments (and not the SecondGuess treatment). No participants see the prior elicitation as incentivized, and all participants face both the unincentivized source preference elicitation and the incentivized version.

Further details of the survey flow and the other randomizations (frames, state) are as in Study 1.
Randomization Method
Randomization done by a computer
Randomization Unit
We provide this information in our attached pre-analysis plan.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
4,000 participants
Sample size: planned number of observations
4,000 participants
Sample size (or number of clusters) by treatment arms
We provide this information in our attached pre-analysis plan, as well as in the experimental design section above.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
UCL Research Ethics Committee
IRB Approval Date
2024-10-24
IRB Approval Number
12439/001
IRB Name
Stanford University Institutional Review Board
IRB Approval Date
2024-11-01
IRB Approval Number
77656
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials