
Fields Changed

Registration

Field: Abstract

Before:
Fake news is a global phenomenon that has been decisively influencing our political reality to date. Important features of fake news are that (i) the person who acts as the sender of messages has superior knowledge compared to the receivers, who are susceptible to fake news, and that (ii) the sender has incentives to push the receivers’ beliefs in specific directions. Also, the success of fake news is likely to depend on (i) the strength of the receivers’ initial beliefs, and on (ii) whether the fake news is in line with the receiver’s (ideological) worldview. We develop an experiment that accounts for these features of fake news. Subjects in the role of receivers are initially asked for their beliefs about unemployment and crime rates in a US state during periods with either Republican or Democratic governments. They can then revise their beliefs based on a message from a subject in the role of the sender, suggesting the correct answer. We inform receivers that senders know the correct answer, that they may or may not tell the truth, and that they monetarily benefit from receivers’ updates in a pre-determined direction. This allows us to analyze how receivers respond to the senders’ messages depending on the senders’ incentives, the receivers’ initial beliefs, and the receivers’ political attitudes.

After:
Fake news is a global phenomenon that has been decisively influencing our political reality to date. Important features of fake news are that (i) the person who acts as the sender of messages has superior knowledge compared to receivers, and that (ii) the sender has incentives to push the receivers’ beliefs in specific directions. Also, the success of fake news is likely to depend on (i) the receivers’ confidence in their initial beliefs, and on (ii) whether the fake news is in line with the receiver’s (ideological) worldview (partisan behavior). We develop an experiment that accounts for these features of fake news. Subjects in the role of receivers are initially asked whether they believe that the unemployment and crime rates in a US state were higher under the Trump (T) or Obama (O) administration. They can then revise their beliefs based on a message from a sender, suggesting the correct answer. Receivers are informed that senders know the correct answer, that senders may nonetheless send either T or O, and that, depending on the treatment, senders will get a bonus if the receiver’s final answer is T (O). Receivers will get a bonus if their answer is correct. This allows us to analyze how receivers respond to the senders’ messages depending on the senders’ incentives, the receivers’ initial beliefs, and the receivers’ political attitudes.
Field: Trial End Date
Before: June 30, 2023
After: February 29, 2024

Field: Last Published
Before: March 20, 2023 05:21 PM
After: October 11, 2023 03:53 PM

Field: Intervention Start Date
Before: March 01, 2023
After: October 16, 2023

Field: Intervention End Date
Before: April 30, 2023
After: December 31, 2023

Field: Primary Outcomes (End Points)

Before:
We test the hypotheses listed below (notwithstanding that our pilots suggest that most of them will be rejected even with a large number of subjects). All hypotheses are primary outcomes. We test all these hypotheses with both non-parametric and parametric methods (regression analysis).

Hypotheses for senders
S1: Senders respond to incentives: They more often send the message that monetarily benefits them, given that the receiver believes their message.

Hypotheses for receivers
R1: Receivers update their beliefs more strongly in the direction of the senders’ messages when these messages are against the senders’ monetary incentives.
R2: Receivers update their beliefs more strongly in the direction of the senders’ messages when these messages are in line with their own political attitude.
R3 (Efficiency): If the senders’ messages violate the senders’ monetary incentives, receivers earn higher payoffs when updating their beliefs in line with these messages rather than not updating their beliefs.
R4a (R4b): If the senders’ messages are in line with the senders’ monetary incentives, receivers earn higher (lower) payoffs when updating their beliefs in line with the senders’ messages rather than not updating their beliefs.

After:
We test the hypotheses listed below. All hypotheses that are based on our behavioral game-theoretical model are primary outcomes. We test these hypotheses with both non-parametric and parametric methods (regression analysis). The main ingredients are the senders’ lying costs and the receivers’ partisanship. In our analyses, we focus on observations for which the senders’ messages are not in line with the receivers’ initial answers.

Hypotheses for senders
S1. Incentives: The probability of lying is lower if the truth corresponds to the senders’ incentives than if it does not.
S2. Lying costs: The probability of lying is higher the lower the senders’ lying costs.

Hypotheses for receivers
R1. Partisanship I: The probability that the initial answer is “Trump administration” (“Obama administration”) is higher for Republicans (Democrats).
R2. Partisanship II: The probability that receivers change their initial answer from “Trump administration” to “Obama administration” (from “Obama administration” to “Trump administration”) is higher for Democrats (Republicans).
R3. Senders’ incentives: The probability that receivers change their initial answer is lower if the message corresponds to the senders’ incentives than if it does not.
R4. Belief: The probability that receivers change their initial answer decreases with the strength of their belief in stage 1.
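As an illustration of the parametric approach mentioned above, the following sketch shows one way hypotheses R3 and R4 could be tested with a logistic regression. The variable names and the synthetic data are our assumptions for illustration only; the registration does not prescribe an estimation script.

```python
# A minimal sketch of one possible parametric test of R3/R4 (assumed
# variable names; the registration does not prescribe a specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical number of receiver-question observations

# Synthetic stand-in data: whether the receiver switches her initial
# answer, whether the message matches the sender's incentive, and the
# stated strength of the stage-1 belief.
df = pd.DataFrame({
    "switch": rng.integers(0, 2, n),
    "msg_matches_incentive": rng.integers(0, 2, n),
    "belief_strength": rng.integers(1, 11, n),
    "receiver_id": np.repeat(np.arange(n // 4), 4),
})

# R3 predicts a negative coefficient on msg_matches_incentive;
# R4 predicts a negative coefficient on belief_strength.
model = smf.logit("switch ~ msg_matches_incentive + belief_strength",
                  data=df).fit(disp=False,
                               cov_type="cluster",
                               cov_kwds={"groups": df["receiver_id"]})
print(model.summary())
```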

Field: Experimental Design (Public)

Before:
Our experiment matches two subjects who take the role of either the sender or the receiver. Receivers are first asked for their belief on whether the unemployment rate in a US state, and the crime rate in another US state, was lower during the Obama or the Trump administration. The two questions are presented in a random order. Participants know that one question will be randomly chosen to be payoff-relevant. For each question, we also ask them how certain they are that their belief is correct. Receivers know that they maximize their expected payoff by indicating their actual degree of certainty. We then match each receiver with one sender. Senders know the correct answers. For each domain, they can send one of two possible messages, suggesting that the correct answer is “Obama administration” or “Trump administration”, to the receiver they are matched with. We inform receivers that senders know the correct answers but do not need to report truthfully. After receivers have observed the sender’s message, they can change their beliefs, still knowing that announcing their actual degree of certainty maximizes their expected payoff. To elicit the receivers’ responses to the senders’ messages, we use the strategy method: Receivers have to indicate whether and by how much to change their initial beliefs for each of the two possible messages of the sender. Payoffs are calculated based on the message the sender actually sent. In the post-experimental questionnaire, we elicit basic demographic information and personal attitudes, such as political affiliation.

We have two main treatments that differ in the senders’ incentives:
• In treatment OBAMA, the sender gets a bonus if and only if the receiver, after having received the message, announces that “Obama administration” is the correct answer.
• In treatment TRUMP, the sender gets a bonus if and only if the receiver, after having received the message, announces that “Trump administration” is the correct answer.

We have two additional treatments in which we inform receivers about the message that the sender they are matched with sent in a previous experiment. The crucial difference is that receivers know that the sender’s payoff is independent of what they announce. This is done to prevent the receivers’ behavior from being confounded by concern for the senders’ payoffs. We use a between-subject design. Each subject hence participates in only one treatment and plays only one role. The experiment will be conducted online, and participants will be recruited using Amazon Mechanical Turk. We will require that participants are located in the US and have a HIT approval rate of at least 95%. Participants who answer the majority of our comprehension questions incorrectly will not be allowed to continue and will hence be excluded from our sample.

After:
Our experiment matches two subjects who take the role of either the sender or the receiver. Receivers know that there are two main stages, of which only one will be randomly chosen to determine their bonus. In stage 1 of the experiment, receivers are asked for their belief on whether the unemployment rate in a US state, and the crime rate in another US state, was lower during the Obama or the Trump administration. The two questions are presented in a random order. Participants know that, if stage 1 is paid out, exactly one of the two questions will be randomly chosen to determine their payoff and that they will receive a bonus if and only if their answer to this question is correct. For each question, we also ask them how certain they are that their belief is correct. In stage 2, receivers can revise their previous answers based on a message from a sender. To elicit those messages, we randomly choose two out of four questions. For half of the questions, the correct answer is “Obama administration”, and for the other half, the correct answer is “Trump administration”. We inform senders about the correct answer to their questions. For each question, they can then send one of two possible messages, suggesting that the correct answer is “Obama administration” or “Trump administration”. Each of the two messages will then be sent to a different receiver. To determine the senders’ payoffs, one receiver will be chosen at random. Senders in treatment TRUMP (OBAMA) know that they will receive a bonus if and only if the receiver’s answer is “Trump” (“Obama”). We inform receivers that senders know the correct answers but do not need to report truthfully, and we also inform them of the senders’ incentives. After receivers have observed the sender’s message, we again ask them whether they believe that “Obama administration” or “Trump administration” is the correct answer. Receivers know that, if stage 2 is paid out, they will get a bonus if and only if their answer is correct. To elicit the receivers’ responses to the senders’ messages, we use the strategy method: For each of the two questions, receivers have to choose between “Obama administration” and “Trump administration” for each of the two possible messages of the sender. Payoffs are calculated based on the message the sender actually sent. In the post-experimental questionnaire, we elicit basic demographic information and personal attitudes, such as political affiliation.

Overall, we have two main treatments that differ in the senders’ incentives:
• In treatment OBAMA, the sender will get a bonus if and only if the receiver, after having received the message, announces that “Obama administration” is the correct answer.
• In treatment TRUMP, the sender will get a bonus if and only if the receiver, after having received the message, announces that “Trump administration” is the correct answer.

We also have an additional treatment in which the receivers are not incentivized for their answers in the first stage (treatment STAGE1_NI). For those participants, the bonus is always determined by their answers in stage 2. As before (treatment STAGE1_I), they will receive a bonus if and only if their answer is correct. We use a between-subject design. Each subject hence participates in only one treatment and plays only one role. The experiment will be conducted online, and participants will be recruited using Prolific. We will require that participants are located in the US and have an approval rate of at least 95%. Participants who answer the majority of our comprehension questions incorrectly will not be allowed to continue and will hence not be included in our sample.
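To make the payoff rules in the “After” design concrete, here is a minimal simulation sketch. The function names, the bonus size, and the reduction to a single question per stage are our own simplifications for illustration; only the payoff rules themselves are taken from the text above.

```python
import random

BONUS = 1.0  # hypothetical bonus size; the registration does not state it

def receiver_bonus(stage1_answer, stage2_answer, truth, stage1_incentivized):
    """Receiver bonus under the 'After' rules, simplified to one question:
    one of the two stages is paid out (always stage 2 in treatment
    STAGE1_NI, a randomly chosen stage in STAGE1_I), and the bonus is
    paid iff the answer in the paid stage is correct."""
    paid_stage = random.choice([1, 2]) if stage1_incentivized else 2
    answer = stage1_answer if paid_stage == 1 else stage2_answer
    return BONUS if answer == truth else 0.0

def sender_bonus(treatment, receiver_final_answer):
    """Sender bonus: paid iff the randomly chosen receiver's final
    answer matches the treatment's target answer."""
    target = ("Trump administration" if treatment == "TRUMP"
              else "Obama administration")
    return BONUS if receiver_final_answer == target else 0.0

# Example: a receiver who follows the sender's (truthful) stage-2 message.
print(receiver_bonus("Obama administration", "Trump administration",
                     truth="Trump administration", stage1_incentivized=True))
print(sender_bonus("TRUMP", "Trump administration"))  # 1.0
```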

Field: Randomization Method

Before:
We use a stratified randomization approach based on the point in time at which individuals choose to participate in our study. We first collect the principals’ data: The first 10 individuals to participate in our study are assigned to treatment OBAMA, the next 10 participants to treatment TRUMP, and this order is continued until we have collected the total number of observations we strive for. Afterwards, we proceed in the same manner for collecting the agents’ data in our 4 treatments. During the study, the randomization of questions is done by the software. By comparing the two main treatments, we predominantly identify 1) whether principals choose to send different messages when profiting from agents indicating that TRUMP is the correct answer compared to profiting from agents believing that OBAMA is the correct answer, and 2) whether and how agents account for the different incentives of principals when giving their answers. Hence, instead of having a neutral control group, we use participants’ decisions in the OBAMA treatment as the baseline for comparison.

After:
We first collect the senders’ data, and only after we have obtained the desired number of observations do we collect the receivers’ data. For both stages, we use a randomization approach based on the point in time at which individuals choose to participate in our study. During the study, any randomization is done by the software. By comparing the two main treatments (OBAMA vs. TRUMP), we predominantly identify 1) whether senders choose to send different messages when profiting from receivers indicating that “Trump administration” is the correct answer compared to profiting from receivers believing that “Obama administration” is the correct answer, and 2) whether and how receivers account for the different incentives of senders when giving their answers. Hence, instead of having a neutral control group, we use participants’ decisions in the OBAMA treatment as the baseline for comparison.
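The “Before” text spells out the block structure of the arrival-time assignment (blocks of 10, alternating between the two main treatments). A minimal sketch of that rule follows; the function itself is our illustration, not the study’s software.

```python
def treatment_for_arrival(arrival_index, block_size=10,
                          treatments=("OBAMA", "TRUMP")):
    """Arrival-time block assignment: the first `block_size` arrivals go
    to the first treatment, the next `block_size` to the second, and the
    pattern repeats until the target sample size is reached."""
    return treatments[(arrival_index // block_size) % len(treatments)]

# Arrivals 0-9 -> OBAMA, 10-19 -> TRUMP, 20-29 -> OBAMA, ...
assert treatment_for_arrival(0) == "OBAMA"
assert treatment_for_arrival(10) == "TRUMP"
assert treatment_for_arrival(25) == "OBAMA"
```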

Field: Was the treatment clustered?
Before: Yes
After: No

Field: Planned Number of Observations

Before:
We plan to collect 8100 observations. We elicit 3240 beliefs of receivers who do not influence the sender’s payoff and 3240 beliefs of receivers who influence the sender’s payoff. Our use of the strategy method for eliciting the receivers’ beliefs implies that we need to elicit 1620 messages of senders.

After:
We plan to collect 2000 observations for senders and 4000 observations for receivers. To obtain our planned number of observations, we need to recruit 500 receivers whose answers are incentivized already in stage 1 and an additional 500 receivers whose answers in stage 1 are not incentivized (in each case, 250 for treatment TRUMP and 250 for treatment OBAMA). Hence, we need to recruit 1000 senders, yielding a total of 2000 messages. Our use of the strategy method for eliciting the receivers’ beliefs implies that we obtain 4000 observations of receivers in total (1000 receivers times two questions times two messages). Because our analyses will focus on those receivers whose initial answer is not in line with the message they receive, only half of the observations, i.e., 2000, will actually be used.

Field: Sample size (or number of clusters) by treatment arms

Before:
Because we elicit each participant’s decisions in both domains, we need to recruit a total of 2430 participants (1620 in the role of receivers and 810 in the role of senders) to gather our planned number of observations.

After:
Because we elicit each participant’s decisions in both domains, we need to recruit a total of 2000 participants. We recruit 1000 participants in the role of receivers. For 500 receivers, answers are incentivized already in stage 1; for the remaining 500 receivers, answers in stage 1 are not incentivized (in each case, 250 for treatment TRUMP and 250 for treatment OBAMA). Moreover, we recruit 1000 participants in the role of senders.
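The counts in the two fields above follow from simple multiplication; this short check reproduces the “After” numbers.

```python
# Receiver side: 500 incentivized + 500 non-incentivized receivers,
# each answering 2 questions under each of 2 counterfactual messages
# (strategy method).
receivers = 500 + 500
receiver_obs = receivers * 2 * 2   # 4000 observations
used_obs = receiver_obs // 2       # 2000: only messages that contradict
                                   # the initial answer enter the analyses

# Sender side: 1000 senders, each sending a message for 2 questions.
senders = 1000
sender_obs = senders * 2           # 2000 messages

assert (receivers + senders, receiver_obs, used_obs, sender_obs) \
       == (2000, 4000, 2000, 2000)
```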

Field: Secondary Outcomes (End Points)

Before:
We obtain three kinds of secondary outcomes. For all of these secondary outcomes, our study is explorative:
1. We analyze whether receivers behave (in any respect) differently when knowing that their final answer influences rather than does not influence the payoff of the sender they are matched with.
2. We analyze the impact of our control variables (for demographics, mainly age and gender) and our measures of personal and political attitudes. We also consider whether the impact of these variables differs between the two domains.
3. We compare two samples: The first sample consists of all decisions by the agents. The second sample consists only of the agents’ decisions in the first domain (i.e., if an agent is first presented with the crime domain and then with the unemployment domain, we only consider the decisions in the crime domain). Comparing the two samples then allows us to identify whether agents exhibit preferences for consistent behavior.

After:
The following hypotheses are secondary outcomes because they are not based on our behavioral game-theoretical model:
S3. Partisanship: The probability that senders send the message “Obama administration” (“Trump administration”) is higher for Democrats (Republicans).
R5. Own incentives: The probability that receivers switch after getting a message that does not correspond to the senders’ incentives is higher for receivers whose answers in stage 1 are not incentivized than for those whose answers in stage 1 are incentivized.

There are three kinds of additional secondary outcomes for which our study is explorative:
1. We analyze whether or not receivers earn higher payoffs when switching their initial answer if the senders’ messages are in line with the senders’ incentives.
2. We analyze the impact of our control variables (for demographics, mainly age and gender) and our measures of personal and political attitudes. We also consider whether the impact of these variables differs between the two questions (crime and unemployment).
3. We compare two samples: The first sample consists of the decisions of all receivers. The second sample consists only of the receivers’ decisions for the first question (that is, if a receiver is first asked about crime and then about unemployment, we only consider the decisions about crime, and vice versa). Comparing the two samples then allows us to identify whether receivers exhibit preferences for consistent behavior.

IRBs

Field: IRB Approval Date
Before: February 23, 2023
After: October 05, 2023