Experimental Design Details
Data
We plan to recruit 4000 participants from the online platform Prolific (prolific.co): 2500 in Study 1 and 1500 in Study 2, over the course of November 1-4. In each study, we screen participants to require that they be United States nationals who are eligible to vote in the 2024 elections, that they have previously completed at least 100 submissions on Prolific, that they have an approval rate of at least 90%, and that they did not take our previous studies related to this project. 2000 participants will have reported to Prolific that they are Democrats, and the other 2000 will have reported that they are Republicans. Within each party, we will also use samples that are representative with respect to gender and race/ethnicity, in order to obtain a more broadly representative population of interest.
Participants are given an information sheet and consent form. Only participants who give consent to participate enter the study. Participants are paid $2.50 for completing the study, plus any bonuses they receive, and the study is advertised as taking 10 minutes.
Some participants' party identification in our survey may differ from what they reported to Prolific. As such, we include our own question of party identification, and in all our analyses we classify participants as Republicans if they either say they are Republicans or are Independents who prefer the Republican Party, and as Democrats if they either say they are Democrats or are Independents who prefer the Democratic Party.
Initial Beliefs
Participants are randomly assigned to one of seven swing states and are asked questions about whether they believe that Donald Trump or Kamala Harris will win in that state. They are asked to make a binary prediction (e.g. Trump is more likely to win in Arizona) and a probabilistic prediction (e.g. Trump has a 67% chance to win in Arizona).
After they state their initial beliefs, participants enter into either Study 1 or Study 2.
Study 1: Real news sources
In Study 1, participants are asked to choose among four options. One option is the website of Fox News, the most popular news outlet among Republicans according to YouGov (2024). One option is the website of CNN, the most popular news outlet among Democrats according to YouGov (2024). The third option is the website of a local news source in the specific state that participants are asked to predict. (Specifically, we use the following local sources. Arizona: The Arizona Republic; Georgia: The Atlanta Journal-Constitution; Michigan: MLive Media Group; Nevada: The Las Vegas Review-Journal; North Carolina: The News and Observer; Pennsylvania: The Philadelphia Inquirer; Wisconsin: The Milwaukee Journal Sentinel.)
The fourth option is to make the guess themselves, without any additional news.
Participants rank these four options in various conditions. The higher they rank an option, the more likely it is that they will be assigned to that option. We define the "co-partisan source" as Fox News for Republicans and CNN for Democrats, and the "counter-partisan source" as CNN for Republicans and Fox News for Democrats.
We consider four types of randomization.
First, we randomly vary the framing of the initial prediction question between participants. Participants are roughly equally likely to receive a frame suggesting that Republicans will be favored in the state, a frame suggesting that Democrats will be favored, or a neutral frame noting that there are reasons to think either party could be favored.
Second, a major treatment arm involves randomly varying, between participants, how the sources are used. To ensure incentive compatibility of initial guesses, about 1% of participants will bypass the source pages and proceed directly to the end of the study. Of the remaining participants, about 40% will be assigned to the SecondGuess treatment, in which they have the option to revise their guess about the winning candidate after seeing the source's content. The remaining 60% will be assigned to the Delegate treatment, in which they delegate their decision to the news source's prediction of the winner. To create a measure of news source predictions, we asked a separate group of workers on Prolific to categorize each source as predicting either Trump or Harris. This binary prediction is then used in place of the participant's prediction to determine whether the participant is deemed correct or incorrect. To test whether participants' preferences for sources are affected by seeing good or bad news for a source, roughly half of the participants in the Delegate treatment see the source's prediction after their choice is locked in ("DelegateSeen"), and the other half do not see the source's prediction at any stage ("DelegateUnseen").
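As a sketch of the between-participant assignment just described, the arms and their unconditional probabilities can be simulated as follows (the `assign_treatment` helper and arm labels are our own illustrative names, not part of the survey software):

```python
import random

def assign_treatment(rng=random):
    """Draw one participant's treatment arm.

    Probabilities follow the design text: ~1% skip the source pages,
    and of the rest, 40% get SecondGuess and 60% get Delegate, with
    Delegate split roughly in half on seeing the source's prediction.
    """
    if rng.random() < 0.01:
        return "Skip"  # skip source pages, proceed to end of study
    if rng.random() < 0.40:
        return "SecondGuess"  # may revise guess after seeing content
    # Delegate participants: half see the prediction, half never do
    return "DelegateSeen" if rng.random() < 0.5 else "DelegateUnseen"
```

The unconditional arm shares implied by this sketch are roughly 1% Skip, 39.6% SecondGuess, and 29.7% each for DelegateSeen and DelegateUnseen.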
Third, our main within-person treatment arm involves incentives for correct predictions. We first ask most participants to make their ranking choice without incentives. Then, to ensure incentive compatibility, for 1% of participants we implement this unincentivized choice and have them either make a new prediction (in SecondGuess) or skip to the end of the study (in Delegate). Among the remaining 99%, 70% of participants are asked to make their ranking a second time, now with monetary incentives for getting their binary election prediction correct. To test the effects of large versus small incentives, we vary between participants whether these incentives are $2 or $20. We additionally have 10% of participants face incentives on the initial prior-elicitation screen, which allows us to test for partisan cheerleading and for how attention is affected by the incentives.
After this decision is made, the ranking is implemented as follows. 99.4% of participants face a price elicitation to change their preferred option, in order to measure the strength of preference for a given source. For these participants, we first select what will be treated as their top-ranked choice: their first option with 99% chance and their second option with 0.4% chance (both unconditional probabilities). The remaining 0.6% of participants do not face a price elicitation; they are simply assigned their first choice with 0.3% chance, their second with 0.2% chance, and their third with 0.1% chance.
For these 99.4% of participants, we offer a small bonus ($0.20 in the $2 incentive condition and $2 in the $20 incentive condition) to give up the opportunity to look at or delegate to a source (or, if they rank their own prediction higher, a small bonus to use their top-ranked source instead). The choice in this question provides a binary measure of strength of preference. To ensure incentive compatibility of their initial choices, either this option will be implemented (with probability 1/2) or participants will receive their top-ranked option and the small bonus (with probability 1/2).
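The implementation of the ranking can be sketched with the unconditional probabilities given above (the `implement_ranking` helper and its return convention are illustrative names of ours, under the assumption that `ranked_options` lists the participant's options best-first):

```python
import random

def implement_ranking(ranked_options, rng=random):
    """Select which ranked option is carried forward.

    Returns (faces_price_elicitation, option). Unconditional
    probabilities follow the design text: 99% / 0.4% enter the price
    elicitation over the first / second option, and 0.3% / 0.2% / 0.1%
    are directly assigned their first / second / third option.
    """
    u = rng.random()
    if u < 0.990:
        return True, ranked_options[0]   # price elicitation, 1st choice
    elif u < 0.994:
        return True, ranked_options[1]   # price elicitation, 2nd choice
    elif u < 0.997:
        return False, ranked_options[0]  # direct assignment, 1st choice
    elif u < 0.999:
        return False, ranked_options[1]  # direct assignment, 2nd choice
    else:
        return False, ranked_options[2]  # direct assignment, 3rd choice
```

Note that, conditional on being in the directly-assigned 0.6% group, the first, second, and third choices are implemented with probabilities 1/2, 1/3, and 1/6 respectively.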
We then have participants rate each source on a 0-100 scale for the likelihood that the source will make an accurate prediction and for the probability that the source will predict that Harris (vs. Trump) wins in that state.
Participants are then directed to the selected website (in the SecondGuess treatment), informed of the website's prediction (in the DelegateSeen treatment), or simply told that the website's prediction has been submitted on their behalf (in the DelegateUnseen treatment). Participants in the SecondGuess treatment have their binary beliefs re-elicited (in addition to their probabilistic beliefs).
Finally, all participants complete a demographics survey and see a short debriefing page that briefly explains the purpose of the study.
Study 2: Artificial news sources
Study 2 extends Study 1 to further unpack mechanisms. In Study 2, we create artificial sources that exogenously vary in their accuracy and slant. Participants choose among a Red Model, Blue Model, Gray Model, and their previous guess.
Models are constructed by adding noise to Nate Silver's Silver Bulletin predictions (Silver and McKown-Dawson 2024). Specifically, we take the binary forecasts and randomly switch some of the predictions from Harris to Trump or from Trump to Harris. Models vary in their accuracy (few switches yield a "high accuracy" model; many switches yield a "medium accuracy" model) and in their slant (the Red Model has more switches toward Trump, and the Blue Model has more switches toward Harris). Each participant chooses among variants of each model, and each of the three models is independently equally likely to have high or medium accuracy.
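A minimal sketch of this noise construction, assuming the base forecasts are binary state-level winner calls; the `make_model` helper and the switch-probability parameters are our own illustrative names (a Red variant would set the Harris-to-Trump probability higher than the reverse, and a high-accuracy variant would use small probabilities overall):

```python
import random

def make_model(base_forecasts, p_to_trump, p_to_harris, rng=random):
    """Construct an artificial source from binary base forecasts.

    Each "Harris" call is switched to "Trump" with probability
    p_to_trump, and each "Trump" call is switched to "Harris" with
    probability p_to_harris; other calls are kept as-is. Slant comes
    from the asymmetry of the two probabilities; accuracy from their
    overall size.
    """
    noisy = []
    for call in base_forecasts:
        if call == "Harris" and rng.random() < p_to_trump:
            noisy.append("Trump")   # switch toward Trump (Red slant)
        elif call == "Trump" and rng.random() < p_to_harris:
            noisy.append("Harris")  # switch toward Harris (Blue slant)
        else:
            noisy.append(call)      # keep the base prediction
    return noisy
```

With both switch probabilities at zero the model reproduces the base forecasts exactly; a Gray Model would use equal switch probabilities in both directions.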
In Study 2, we use a streamlined set of treatments. Participants can only earn the $2 incentive, and are randomized only between the DelegateSeen and DelegateUnseen treatments (not the SecondGuess treatment). No participants see the prior elicitation as incentivized, and all participants face both the unincentivized and the incentivized versions of the source-preference elicitation.
All further details of the survey flow and the other randomizations (frames, states) are as in Study 1.