Experimental Design
Together with our local NGO partner, we constructed report cards on incumbent councilor performance during the fiscal year. The report cards describe how the councilor allocated his discretionary funds across broad development categories (water, sanitation, etc.), his committee attendance record, and his participation in public meetings and committees. Readers might conclude that their councilor is performing poorly if funds allocated to their ward go unspent, if the councilor skips a large proportion of public meetings and committee assignments, or if the councilor does not participate in the meetings he attends. Voters may also choose to vote out councilors whom they deem to spend primarily on frivolous categories (e.g., fountains). Furthermore, if official expenditures are out of sync with voters' experience of the public works in a ward, voters might conclude that the councilor is siphoning off or misallocating funds.
Our intervention covers 257 wards, randomly assigned to one of four categories. The control sample of 53 wards receives no additional information about councilor performance. These councilors are informed that the NGO will not produce report cards on them until at least 2014. We contrast outcomes in the control wards with three treatment samples. Ward councilors in Treatment 1 are informed in May 2010 that report cards on their performance will be disseminated via a leading Indian newspaper in the run-up to the election in 2012. They are told which performance indicators will feature in the report card. This treatment examines the pure incentive effect – do councilors improve their performance when they anticipate future (pre-election) improvements in voter information?
In the remaining treatment wards, report cards are published in a newspaper in June 2010, and councilors are informed that these will again be published in the run-up to the elections in February 2012. Here, we examine whether politician responsiveness is magnified when voters receive information early. In all cases, councilors are informed in 2010 about the performance indicators that will feature in the 2012 report card.
These wards are randomized into two further treatment samples. For Treatment 2 wards, report cards are published in the newspaper but no further action is taken. One concern with simply providing information is that voters who are not used to receiving it may not take note of it. Thus, in Treatment 3 wards, we combine the information with an active mobilization campaign, as Bjorkman and Svensson (2009) have shown that community mobilization can be an effective accountability tool. To do so, we identify the slums in each Treatment 3 ward and randomly select half of them for a voter mobilization campaign. In July and August 2010, local civil society organizations conduct door-to-door visits and public meetings in these areas to provide voters with newspaper report cards and to educate them about the roles and responsibilities of the ward councilor's office. From November 2010 until November 2011, continued mobilization encourages slum dwellers to use this information and to approach sitting councilors with any inquiries or complaints.
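As a sketch, the ward-level random assignment described above can be implemented as a simple permutation. The arm sizes for Treatments 1-3 are our illustrative assumption (the source specifies only the 257 total wards and the 53 control wards); only the control-group size is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# 257 wards total; 53 control wards (from the design).
# The equal split of the remaining 204 wards across the three
# treatment arms is an illustrative assumption.
wards = np.arange(257)
shuffled = rng.permutation(wards)
control = shuffled[:53]
treat1 = shuffled[53:121]   # Treatment 1: report cards announced, published in 2012
treat2 = shuffled[121:189]  # Treatment 2: report cards published in 2010 and 2012
treat3 = shuffled[189:257]  # Treatment 3: report cards plus mobilization campaign

print(len(control), len(treat1), len(treat2), len(treat3))
```

Because the assignment is a permutation of ward identifiers, the four groups are disjoint by construction and exhaust the sample.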
Finally, we undertake a within-ward randomization among wards in Treatment 1 in February 2012. Specifically, when the final report cards are published we undertake an intensive door-to-door newspaper delivery in a random sample of polling stations within Treatment 1 wards (following the method in Banerjee et al. (2009)). Differences in voting outcomes across the two subsamples of Treatment 1 provide a measure of the selection effect.
A baseline survey was conducted in February and March 2010. It covers over five thousand households in 103 slums spread across treatment and control wards. We surveyed voters regarding their beliefs about legislator ability and performance, and also collected data on whether and how councilors respond to slum dwellers’ needs. This survey provides baseline data on councilor performance, access to social services in slum areas, knowledge of the local government system, and political preferences in each area. Also, by linking these data with our endline surveys (see below) we can examine the link between voter beliefs and politician actions. To evaluate the impact of these interventions, we combine these data with several other types of data:
1. Subjective Measures of Performance and Voter Behavior: In December 2011, we carry out a first endline household survey. This focuses on voters' perceptions of the quality of public services as well as legislator ability and performance. We use these subjective data to determine whether the treatments changed perceptions of representatives' behavior. Since we interview the same households as in the baseline survey, we are able to chart the evolution of voter beliefs in each of the treatment areas. We are also able to examine whether voters believe that politician activism was greater in areas that featured on the report card.
Immediately after the February 2012 elections, we conduct a second endline survey in all wards. This focuses on voters' perceptions of the election, covering reported voting behavior, vote-buying, intimidation, and other aspects of the campaigns. Individual-level voting data let us examine whether treatment effects differ across demographic groups (e.g. the less educated, those receiving benefits from the state, etc.). Changes in caste-based voting are another area of interest, as past studies demonstrate its pervasiveness (Banerjee and Pande, 2009). Finally, data on electoral malpractice help us determine whether politicians find these methods less costly than performance improvement, leading to adverse unintended consequences of report card campaigns.
2. Objective Measures of Performance: We obtain information on councilor committee attendance and discretionary spending for 2010 and 2011. This allows us to examine whether councilors in treatment wards improve performance along dimensions featured in the report card. In all wards, we commission an audit of the public toilet facilities, recording the state of the infrastructure and general cleanliness. The results of this audit are not featured in the report cards. Rather, we use these data to test whether politicians in treatment groups respond to these campaigns by diverting effort and resources from categories that are not explicitly reported on, i.e. whether public information mainly alters legislator behavior by "teaching to the test."
3. Electoral Outcomes: We collect ward-level data on turnout and candidate vote shares.
4. Party and Candidate Outcomes: We collect ward-level data on parties' choice of candidates and on candidate characteristics, including assets, education, and criminal record (obtained from affidavits filed by candidates).
Randomization of treatment status implies that the empirical design is straightforward. We examine the differential effect of incentives and activism on politician performance outcome Y_w (measured between July 2010 and February 2012) in ward w via
Y_w = α_0 + α_1*T_1 + α_2*T_2 + α_3*T_3 + ε_w
where T_1 to T_3 refer to the three treatment arms, with control wards as the omitted category. α_1 captures the pure incentive effect of information. The difference between α_1 and α_2 captures the additional effect of informing voters early on, while the difference between α_3 and α_2 tells us whether activism magnifies the politician response.
For the selection effect, we use polling-station-level data on incumbent vote share. Here, we also make use of the within-ward randomization in Treatment 1. The regression of interest is
Y_sw = β_0 + β_11*T_11 + β_12*T_12 + β_2*T_2 + β_3*T_3 + ε_sw
where T_11 indicates polling stations in Treatment 1 wards where report cards are published in 2012 but no distribution occurs, and T_12 indicates the subsample of polling stations (within Treatment 1) where voters are provided free copies of the newspaper. The difference between β_12 and β_11 is a measure of the selection effect.
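The polling-station regression can be estimated in the same way. The sketch below simulates illustrative polling-station data (sample sizes, vote-share levels, and the 0.05 "selection effect" of door-to-door delivery are our assumptions) and recovers that effect as the difference between the two Treatment 1 coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical polling-station data; ward arms: 0 = control,
# 1 = Treatment 1, 2 = Treatment 2, 3 = Treatment 3
n = 2000
ward_arm = rng.choice([0, 1, 2, 3], size=n)

# Within Treatment 1 wards, a random half of polling stations
# receive door-to-door newspaper delivery (T_12); the rest are T_11
delivered = (ward_arm == 1) & (rng.random(n) < 0.5)
T11 = ((ward_arm == 1) & ~delivered).astype(float)
T12 = delivered.astype(float)
T2 = (ward_arm == 2).astype(float)
T3 = (ward_arm == 3).astype(float)

# Simulated incumbent vote share; delivery adds an assumed
# selection effect of 0.05 on top of the incentive effect
y = (0.40 + 0.02 * T11 + 0.07 * T12 + 0.03 * T2 + 0.04 * T3
     + rng.normal(0.0, 0.05, n))

# OLS: Y_sw = b0 + b11*T11 + b12*T12 + b2*T2 + b3*T3 + e_sw
X = np.column_stack([np.ones(n), T11, T12, T2, T3])
b0, b11, b12, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

print(b12 - b11)  # selection effect of direct report-card delivery
```

Because both subsamples of Treatment 1 face the same incentive effect on the councilor, the within-ward contrast b12 - b11 isolates the effect of voters actually receiving the report card. (In practice one would cluster standard errors at the ward level, which this sketch omits.)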