Turning Up, Tuning In, Turning Out: Experimental Evidence from Liberia
Initial registration date
October 24, 2017
October 26, 2017 12:59 AM EDT
Other Primary Investigator(s)
Additional Trial Information
Understanding the role of information is at the core of democratic accountability and the often-broken representative-constituent link. Representatives face strong incentives to limit the dissemination of information about their policy promises, both to avoid tying their hands and to avoid being held accountable for failing to meet them, especially when the private returns to public office are high. Where the media sector is underdeveloped, this produces a low-quality, low-information equilibrium in which democratic accountability suffers. We evaluate the impact of an initiative designed to simultaneously shock the supply of programmatic information by candidates and the credibility of the media sector. We do this by leveraging experimental evidence from a nationwide debate initiative, ahead of Liberia's 2017 elections for the House of Representatives, designed to solicit concrete policy promises from candidates. With random variation in the participation of political candidates and in the intensity of debate broadcasting through community radio stations, we aim to parse how variation in exposure to candidates' policy platforms affects levels of political information, voting behavior, electoral returns, and the role of the media in intermediating these effects. Ultimately, we want to assess whether the intervention was successful at breaking this low-quality, low-information equilibrium.
Bowles, Jeremy and Horacio Larreguy. 2017. "Turning Up, Tuning In, Turning Out: Experimental Evidence from Liberia." AEA RCT Registry. October 26.
We randomize several elements of an initiative to hold debates between all 984 candidates for 73 House of Representatives seats ahead of the Liberian election in October 2017. The 129 standardized debates across all districts were designed to solicit the policy promises of different candidates in a setting where votes are most often won through vote buying. In partnership with Internews Liberia and USAID, we cross-randomize two elements of the debates initiative at the between-district level, beyond the within-district sources of quasi-random variation discussed in the PAP (the splitting of candidates across debates, the size of debates, and the ordering of candidates within a debate). First, we generate random variation in the attendance of political candidates across debates by varying whether debates are assigned to receive more intensive effort in persuading political candidates to attend. Second, we generate random variation in the share of a given district likely to hear the debates at least once by varying the intensity of debate rebroadcasting. The interventions were designed to build off parts of the debates initiative without depriving candidates or voters of opportunities they would have received absent an evaluation of the intervention: rather than experimentally varying the extensive margin of exposure, both interventions ramp up the intensity of activities already planned, so as to facilitate their evaluation.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
With these two interventions, as well as several other sources of randomized variation in the administration of the debates, we evaluate a series of hypotheses. First, these focus on whether and how the initiative affected levels of political knowledge about the policy promises and competence of different candidates, as well as general information about policy. Second, on how learning about candidates affected candidate selection and the extent to which citizens vote in line with their preferences. Third, on the electoral returns to candidates and consequences for how candidates campaign. And fourth, on how debate exposure affects attitudes towards the media and the electoral process more broadly.
To construct outcomes, we are conducting a panel survey of around 4,000 citizens across all 73 electoral districts. The key outcome variables relate to exposure to the broadcast debates, respondent knowledge about candidates, beliefs about candidate competence, voting behavior, voter coordination, political campaigning, and attitudes towards the media and electoral processes more generally.
Primary Outcomes (explanation)
Within each family of outcome variables we will construct composite z-score variables, as well as using individual survey items with corrections for multiple testing as detailed in the pre-analysis plan (section 5).
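As an illustration of the index construction described above, the following sketch standardizes each survey item against the control group's mean and standard deviation and then averages the standardized items per respondent (the Kling-Liebman-Katz approach commonly used for such composites). This is a minimal illustration of the general technique, not the authors' actual code; the function and variable names are hypothetical.

```python
import statistics

def z_score_index(outcomes, control_ids):
    """Combine several survey items into one standardized composite index.

    outcomes: dict mapping item name -> {respondent_id: value}
    control_ids: set of respondent ids in the control group

    Each item is z-scored against the control-group mean and standard
    deviation; the index is the per-respondent average of the z-scores,
    so the control group has mean zero by construction on each item.
    """
    z = {}
    for item, values in outcomes.items():
        ctrl = [v for rid, v in values.items() if rid in control_ids]
        mu = statistics.mean(ctrl)
        sd = statistics.stdev(ctrl)
        for rid, v in values.items():
            z.setdefault(rid, []).append((v - mu) / sd)
    return {rid: sum(zs) / len(zs) for rid, zs in z.items()}
```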
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
First, we randomize the encouragement to participate in the debates by varying the intensity of efforts to attract candidates to attend. These debates, as discussed, were explicitly designed to solicit the policy priorities of different candidates for office. The decision to participate in a candidate debate is clearly a strategic one, and particularly so in clientelistic settings. Candidates who 'win' a debate may enjoy greater publicity and net electoral gains, but they risk losing the debate, or restricting their ability to deviate from policy promises on the campaign trail or once in office. Providing policy platforms through broadly-disseminated debates represents a shift from locally-disseminated cheap talk by candidates (promising to build schools, hospitals and roads everywhere) to a more costly signal of policy promises. Especially for leading candidates, the expected returns from debate participation are limited: they risk providing a platform for their challengers to attack them and gain publicity.

Second, we randomize the intensity of radio coverage of the debates. Each debate is broadcast live by a community radio station, and in treatment districts debates are intensively rebroadcast ten times, at peak hours, at the height of the campaigning season. By broadcasting through 43 community radio stations, with community radio easily the dominant means of acquiring political information in Liberia, we thus generate variation in the share of individuals exposed to candidate promises. Aside from affecting citizen information about participating candidates, this randomization also specifically affects the relative share of radio news focusing on programmatic policy in a context where candidates frequently turn radio stations into their own mouthpieces. As such, we consider that it may also affect perceptions of media bias and credibility.
Experimental Design Details
Electoral districts are assigned to treatment conditions through block randomization using the R package 'blockTools'.
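A minimal Python sketch of the blocking logic, assuming (as the registry does not specify) that districts are sorted on a single matching covariate and paired into blocks of two, with treatment assigned by coin flip within each block. This is an illustration of blocked randomization in general, not the actual blockTools call; with 73 districts the final singleton block yields the 36/37 split across arms reported below.

```python
import random

def block_randomize(covariate, seed=2017):
    """Assign districts to treatment/control within matched pairs.

    covariate: dict mapping district name -> matching variable
               (e.g. district population), a hypothetical input.

    Districts are sorted on the covariate and paired into blocks of
    two; within each pair, one district is randomly treated. With an
    odd number of districts, the final singleton is assigned by a
    coin flip.
    """
    rng = random.Random(seed)
    ordered = sorted(covariate, key=covariate.get)
    assignment = {}
    for i in range(0, len(ordered), 2):
        block = ordered[i:i + 2]
        arms = ["treatment", "control"]
        rng.shuffle(arms)
        for district, arm in zip(block, arms):
            assignment[district] = arm
    return assignment
```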
Electoral districts are the unit of randomization.
Was the treatment clustered?
Sample size: planned number of clusters
73 electoral districts.
Sample size: planned number of observations
4000 survey respondents.
Sample size (or number of clusters) by treatment arms
36/37 units per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)
NORC at the University of Chicago
IRB Approval Date
IRB Approval Number
Harvard University Committee on the Use of Human Subjects
IRB Approval Date
IRB Approval Number
Analysis Plan Documents
October 24, 2017
Post Trial Information
Is the intervention completed?
Is data collection complete?