Turning Up, Tuning In, Turning Out: Experimental Evidence from Liberia

Last registered on October 26, 2017

Pre-Trial

Trial Information

General Information

Title
Turning Up, Tuning In, Turning Out: Experimental Evidence from Liberia
RCT ID
AEARCTR-0002553
Initial registration date
October 24, 2017

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 26, 2017, 12:59 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
Harvard University

Additional Trial Information

Status
Ongoing
Start date
2017-08-01
End date
2017-12-31
Secondary IDs
Abstract
Understanding the role of information is at the core of democratic accountability and the often-broken representative-constituent link. Representatives face strong incentives to limit the dissemination of information about their policy promises, both to avoid tying their hands and to avoid being held accountable for failing to meet those promises, especially when the private returns to public office are high. When the media sector is underdeveloped, this results in a low-quality, low-information equilibrium in which democratic accountability suffers. We evaluate the impact of an initiative designed to simultaneously shock the supply of programmatic information by candidates and the credibility of the media sector. We do this by leveraging experimental evidence from a nationwide debate initiative, designed to solicit concrete policy promises from candidates, ahead of Liberia’s 2017 House of Representatives elections. With random variation in the participation of political candidates and in the intensity of debate broadcasting through community radio stations, we aim to parse how exposure to candidates’ policy platforms affects levels of political information, voting behavior, electoral returns, and the role of the media in intermediating these effects. Ultimately, we want to assess whether the intervention was successful at breaking this low-quality, low-information equilibrium.
External Link(s)

Registration Citation

Citation
Bowles, Jeremy and Horacio Larreguy. 2017. "Turning Up, Tuning In, Turning Out: Experimental Evidence from Liberia." AEA RCT Registry. October 26. https://doi.org/10.1257/rct.2553-1.0
Former Citation
Bowles, Jeremy and Horacio Larreguy. 2017. "Turning Up, Tuning In, Turning Out: Experimental Evidence from Liberia." AEA RCT Registry. October 26. https://www.socialscienceregistry.org/trials/2553/history/22725
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We randomize several elements of an initiative to hold debates between all 984 candidates for 73 House of Representatives seats ahead of the Liberian election in October 2017. The 129 standardized debates across all districts were designed to solicit the policy promises of different candidates in a setting where votes are most often won through vote buying. In partnership with Internews Liberia and USAID, we cross-randomize two elements of the debates initiative at the between-district level, beyond the within-district sources of quasi-random variation discussed in the pre-analysis plan (the splitting of candidates across debates, the size of debates, and the ordering of candidates within a debate). First, we generate random variation in the attendance of political candidates across debates by varying whether debates are assigned to receive more intensive effort in persuading political candidates to attend. Second, we generate random variation in the share of a given district likely to hear the debates at least once by varying the intensity of debate rebroadcasting. The interventions were designed to build off parts of the debates initiative without depriving candidates or voters of opportunities they would have received absent an evaluation of the intervention: rather than experimentally varying the extensive margin of exposure, both interventions ramp up the intensity of activities already planned, so as to facilitate their evaluation.
Intervention Start Date
2017-09-01
Intervention End Date
2017-10-10

Primary Outcomes

Primary Outcomes (end points)
With these two interventions, as well as several other sources of randomized variation in the administration of the debates, we evaluate four families of hypotheses. First, whether and how the initiative affected levels of political knowledge about the policy promises and competence of different candidates, as well as general information about policy. Second, how learning about candidates affected candidate selection and the extent to which citizens vote in line with their preferences. Third, the electoral returns to candidates and the consequences for how candidates campaign. Fourth, how debate exposure affects attitudes towards the media and the electoral process more broadly.

To construct outcomes, we are conducting a panel survey of around 4,000 citizens across all 73 electoral districts. The key outcome variables relate to exposure to the broadcast debates, respondent knowledge about candidates, beliefs about candidate competence, voting behavior, voter coordination, political campaigning, and attitudes towards the media and electoral processes more generally.
Primary Outcomes (explanation)
Within each family of outcome variables we will construct composite z-score indices, as well as use individual survey items with corrections for multiple testing, as detailed in the pre-analysis plan (section 5).
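
As a minimal sketch of this construction (the item names and p-values below are hypothetical, and the exact standardization and correction procedures are those detailed in the pre-analysis plan, not necessarily these), a composite z-score index standardizes each item in a family and averages the results, while p-values for the individual items are adjusted for multiple testing, here with a Benjamini-Hochberg correction as one illustrative choice:

## Hypothetical data: three survey items belonging to one outcome family.
set.seed(1)
items <- data.frame(item1 = rnorm(200), item2 = rnorm(200), item3 = rnorm(200))

## Composite z-score index: standardize each item, then average across items.
## (In practice, standardization is often done relative to the control group.)
z <- scale(items)
index <- rowMeans(z, na.rm = TRUE)

## Multiple-testing correction across the individual items in the family.
pvals <- c(item1 = 0.012, item2 = 0.041, item3 = 0.200)  # hypothetical p-values
p.adjust(pvals, method = "BH")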

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
First, we randomize the encouragement to participate in the debates by varying the intensity of efforts to attract candidates to attend. These debates, as discussed, were explicitly designed to solicit the policy priorities of different candidates for office. The decision to participate in a candidate debate is clearly a strategic one, particularly in clientelistic settings. Candidates who ‘win’ a debate may enjoy greater publicity and net electoral gains, but they risk either losing the debate or restricting their ability to deviate from policy promises on the campaign trail or once in office. Providing policy platforms through broadly-disseminated debates represents a shift from locally-disseminated cheap talk by candidates – promising to build schools, hospitals and roads everywhere – to a more costly signal of policy promises. Especially for leading candidates, the expected returns from debate participation are limited: they risk providing a platform for their challengers to attack them and gain publicity.

Second, we randomize the intensity of radio coverage of the debates. Each debate is broadcast live by a community radio station, and in treatment districts debates are intensively rebroadcast ten times, at peak hours, at the height of the campaigning season. Because community radio, spanning 43 stations here, is easily the dominant means of acquiring political information in Liberia, we thus generate variation in the share of individuals exposed to candidate promises. Aside from affecting citizen information about participating candidates, this randomization also specifically affects the relative share of radio news focusing on programmatic policy in a context where candidates frequently turn radio stations into their own mouthpieces. As such, we consider that it may also affect perceptions of media bias and credibility.
Experimental Design Details
Randomization Method
Electoral districts are assigned to treatment conditions through block randomization using the R package 'blockTools'.
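
As a minimal sketch of such an assignment (the district covariates and seeds below are hypothetical; the actual blocking variables are those specified in the pre-analysis plan), blockTools pairs similar districts and then randomizes treatment within each pair:

library(blockTools)

## Hypothetical district-level data: 73 electoral districts with
## illustrative blocking covariates (not the study's actual variables).
set.seed(2553)
districts <- data.frame(
  district_id       = 1:73,
  registered_voters = round(rnorm(73, mean = 30000, sd = 8000)),
  n_candidates      = rpois(73, lambda = 13)
)

## Block districts into pairs that are most similar on the covariates
## (Mahalanobis distance, optimal-greedy algorithm). With an odd number
## of districts, one unit is left unpaired, consistent with the
## registered 36/37 split across arms.
blocked <- block(districts,
                 n.tr       = 2,
                 id.vars    = "district_id",
                 block.vars = c("registered_voters", "n_candidates"),
                 algorithm  = "optGreedy",
                 distance   = "mahalanobis")

## Randomly assign one district within each block to treatment.
assigned <- assignment(blocked, seed = 20170901)
assigned$assg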
Randomization Unit
Electoral districts are the unit of randomization.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
73 electoral districts.
Sample size: planned number of observations
4,000 survey respondents.
Sample size (or number of clusters) by treatment arms
36 or 37 electoral districts per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
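This field is left blank in the registration. Purely as an illustrative back-of-the-envelope calculation (the intracluster correlation, significance level, and power below are assumptions, not registered values), the MDE in standard-deviation units for a two-arm comparison across 73 districts with roughly 55 respondents each can be approximated as:

## Illustrative MDE for a two-arm, cluster-randomized comparison.
## Only the cluster count and sample size come from this registration;
## rho, alpha, and power are assumptions for the sketch.
J     <- 73          # electoral districts (clusters), split 36/37 by arm
m     <- 4000 / J    # ~55 survey respondents per district
rho   <- 0.05        # assumed intracluster correlation
alpha <- 0.05
power <- 0.80

deff   <- 1 + (m - 1) * rho               # design effect for clustering
n_eff  <- (J / 2) * m / deff              # effective sample size per arm
mde_sd <- (qnorm(1 - alpha / 2) + qnorm(power)) * sqrt(2 / n_eff)
mde_sd  # roughly 0.17 standard deviations under these assumptions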
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
NORC at the University of Chicago
IRB Approval Date
2017-08-11
IRB Approval Number
17.08.03
IRB Name
Harvard University Committee on the Use of Human Subjects
IRB Approval Date
2017-08-07
IRB Approval Number
IRB17-1178
Analysis Plan

Analysis Plan Documents

Pre-Analysis Plan

MD5: cb1ea1ae8e82f8603f31312160450d1b

SHA1: cc1b281e2bb774dae472d5eb58bb2b62a33d1dbb

Uploaded At: October 24, 2017

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials