Good Ad, Bad Ad. Effect of Positive and Negative Political Content of Online Advertising
Last registered on November 02, 2018


Trial Information
General Information
Good Ad, Bad Ad. Effect of Positive and Negative Political Content of Online Advertising
Initial registration date
October 31, 2018
Last updated
November 02, 2018 5:48 PM EDT
Primary Investigator
University of Warwick
Other Primary Investigator(s)
PI Affiliation
University of Warwick
PI Affiliation
ETH Zurich
Additional Trial Information
In development
Start date
End date
Secondary IDs
There is a growing interest in the role that the internet and social media (such as Facebook) play in political campaigns and elections. The aim of our project is to describe how voters respond to positive and negative advertising regarding politically relevant topics. We also aim to estimate the effect of (negative or positive) advertising on voting outcomes (turnout and candidate choice). We plan to collect data from the United States, in the weeks encompassing the 2018 US Mid-Term Elections, through an online experiment measuring individual responses to positive and negative advertising on socially relevant topics. We shall then combine these data with online advertising prices collected on Facebook. As we have shown in a previous paper (Cuevas et al., 2018), these prices can be used as a proxy for the intensity with which political campaigns target individual users on social media.
External Link(s)
Registration Citation
Liberini, Federica, and Michela Redoano. 2018. "Good Ad, Bad Ad. Effect of Positive and Negative Political Content of Online Advertising." AEA RCT Registry. November 02.
Experimental Details
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
See attached analysis plan
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
See the analysis plan

Experimental Design Details
1. Pre-election survey (randomly assigned over the week preceding the US elections). The survey consists of two sets of questions. The first set includes demographic questions (e.g., gender, age, marital status, location, Internet/Facebook use). The second set includes political questions (vote participation and candidate choice during recent presidential elections). The pre-election survey has two objectives. First, we shall use it to build the respondents' demographic and ideological profile (to be matched with a Facebook Audience). Second, we shall use it to assess the respondents' intentions over political choices (vote participation and orientation for the 2018 midterm elections).
2. Task. At the end of the pre-election survey, each respondent will be shown two mock Facebook-style posts, each regarding one politically relevant topic. Each post will open with a first sentence containing one objective and neutral factual statement. A second, uninformative statement will frame the post in a neutral, positive, or negative manner. This setup defines one control group (respondents receiving the neutrally framed text) and two treated groups (respondents receiving the positively and negatively framed texts). Respondents will be randomly and uniformly assigned to one of the three groups (i.e., versions of the post). Respondents will then be distracted with a simple 10-second real-effort task. After completing the task, respondents will be asked one question about the factual information contained in the first part of the post. The task's objective is to assess how the framing of a newsfeed post affects respondents' ability to recall the factual information contained in the text.
3. Post-election survey (21 days after the pre-election survey). This survey consists only of political questions (vote participation and candidate choice in the midterm elections). Its objective is to collect information on the respondents' actual voting behavior.
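The uniform three-arm assignment described above can be sketched as follows. This is not part of the registration; it is a minimal illustration, assuming a simple per-respondent draw (the registration states only that Amazon's Mechanical Turk handles randomization), and the seed and names are hypothetical.

```python
import random

# The three framing conditions: one control (neutral) and two treatments.
FRAMES = ["neutral", "positive", "negative"]

def assign_frame(rng: random.Random) -> str:
    """Uniformly assign a respondent to one of the three framing conditions."""
    return rng.choice(FRAMES)

# Simulate assignment for the planned sample of about 1,000 respondents.
rng = random.Random(2018)  # hypothetical seed, for reproducibility only
assignments = [assign_frame(rng) for _ in range(1000)]
counts = {frame: assignments.count(frame) for frame in FRAMES}
# Each arm should receive roughly a third of respondents.
```

In practice the actual assignment is performed by the survey platform rather than by the researchers' own code; the sketch only shows the intended uniform allocation across the three versions of the post.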
Randomization Method
The randomization will be done by Amazon's Mechanical Turk web service
Randomization Unit
The randomization units are individuals
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
The design is not clustered
Sample size: planned number of observations
We plan to have about 1,000 participants
Sample size (or number of clusters) by treatment arms
Approximately 80-100 respondents per treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
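This field is left blank in the registration. As a rough illustration only (not part of the registration), for an unclustered two-sided comparison of one treated arm against control with roughly 100 respondents per arm, the standard formula gives

```latex
\mathrm{MDE} \;=\; \left(z_{1-\alpha/2} + z_{1-\beta}\right)\,
\sigma\,\sqrt{\frac{1}{n_T} + \frac{1}{n_C}}
\;\approx\; (1.96 + 0.84)\,\sigma\,\sqrt{\frac{2}{100}}
\;\approx\; 0.40\,\sigma,
```

i.e., about 0.4 standard deviations of the outcome at $\alpha = 0.05$ and 80\% power, under these assumed arm sizes.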
IRB Name
Humanities and Social Sciences Research Ethics Committee, University of Warwick
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers