Motivating bureaucrats with non-monetary incentives when state capacity is weak: Evidence from large-scale field experiments in Peru
Last registered on December 24, 2018

Pre-Trial

Trial Information
General Information
Title
Motivating bureaucrats with non-monetary incentives when state capacity is weak: Evidence from large-scale field experiments in Peru
RCT ID
AEARCTR-0003722
Initial registration date
December 21, 2018
Last updated
December 24, 2018 7:14 AM EST
Location(s)
Region
Primary Investigator
Affiliation
University of Rosario
Other Primary Investigator(s)
PI Affiliation
Innovations for Poverty Action
PI Affiliation
Vanderbilt University
Additional Trial Information
Status
Completed
Start date
2015-04-01
End date
2017-02-01
Secondary IDs
Abstract
We study how non-monetary incentives, motivated by recent advances in behavioral economics, affect civil servant performance in a context where state capacity is weak. We collaborated with a government agency in Peru to experimentally vary the content of text messages targeted to civil servants in charge of a school maintenance program. These messages incorporate behavioral insights in dimensions related to information provision, social norms, and loss aversion as well as some weak forms of monitoring and auditing. We find that these messages are a very cost-effective strategy to enforce compliance with national policies among civil servants. We further study the role of social norms and the salience of social benefits in a follow-up experiment and explore the external validity of our original results by implementing a related experiment with civil servants from a different national program. The findings of these new experiments support our original results and provide additional insights regarding the context in which these incentives may work. Our results highlight the importance of carefully designed non-monetary incentives as a tool to improve civil servant performance when the state lacks institutional mechanisms to enforce compliance.
External Link(s)
Registration Citation
Citation
Dustan, Andrew, Juan Hernandez-Agramonte and Stanislao Maldonado. 2018. "Motivating bureaucrats with non-monetary incentives when state capacity is weak: Evidence from large-scale field experiments in Peru." AEA RCT Registry. December 24. https://doi.org/10.1257/rct.3722-2.0
Former Citation
Dustan, Andrew, Juan Hernandez-Agramonte and Stanislao Maldonado. 2018. "Motivating bureaucrats with non-monetary incentives when state capacity is weak: Evidence from large-scale field experiments in Peru." AEA RCT Registry. December 24. https://www.socialscienceregistry.org/trials/3722/history/39571
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
We implement an SMS campaign to increase civil servants' compliance. Specifically, we run three nationwide field experiments. In the first large-scale experiment (which we call "Benchmark Experiment"), text messages were crafted to incorporate behavioral insights related to information provision, social norms, and loss aversion, as well as some weak forms of monitoring and auditing. This intervention was implemented with civil servants from a school maintenance program (run by the Ministry of Education) across the country.

We run a second experiment in 2016 (called "Follow-Up Experiment") to further explore the role of social norms and the salience of social benefits, along with other implementation details that might be relevant for turning this campaign into a public policy. Regarding social norms, we introduce the distinction between descriptive and injunctive norms (Cialdini et al. 2004), taking into account evidence suggesting that they may have differential effects (Cialdini 2007). We also vary the reference group (school district versus the country as a whole) and the use of quantitative or qualitative norms. This approach allows us to learn which types of social norms are more effective in inducing compliance.

We also run a third experiment (which we call "External Validity Experiment") with a different national program in order to explore external validity issues. Given the large-scale nature of our intervention, our design does not suffer from the standard external validity problem in which only a small subset of a population takes part in an experiment. However, we want to learn whether the basic results of our Benchmark Experiment can be replicated in a population of civil servants with different characteristics. We test the role of social norms and monitoring, the most promising interventions from the Benchmark Experiment, in a monthly intervention implemented from September 2016 to January 2017 with all the civil servants of CUNA MAS, an early childhood development program.

We describe the details of these experiments in this section.

1. Benchmark Field Experiment
Each SMS contains a fixed and a variable component. The fixed component includes the bureaucrat's first name and the deadline for task compliance. These fixed elements are based on behavioral insights: personalized messages have been shown to be an effective strategy, while exogenous deadlines have been proven to be most useful when agents suffer from procrastination. The variable component is the main behavioral lever that we use to induce a change in bureaucrats' behavior. We describe this component below.

Maintenance bureaucrats were assigned to one of six groups. Bureaucrats in the control group receive no SMS. The remaining bureaucrats receive an SMS with behavioral content at fixed points during the intervention cycle. In total, each bureaucrat in any of the treatment groups receives up to five SMS. These SMS shared the same behavioral insight over the cycle but varied in terms of the type of maintenance activity that was emphasized. For instance, near the beginning of the intervention cycle, bureaucrats receive SMS emphasizing the withdrawal of maintenance funds, whereas near the end of the cycle SMS emphasize the filing of expense reports. Bureaucrats only receive a particular SMS if they have not complied with the activity being emphasized in that SMS.

Bureaucrats in the reminder/warning treatment receive SMS with an alert and the URL of the PRONIED website where the bureaucrat can obtain more information. Reminders are one of the most popular tools used in behavioral science to influence behavior; they are motivated by the existence of limited attention problems and can potentially change the inter-temporal allocation of mental resources to enforce compliance. The alert is included to prime a sense of urgency to comply with maintenance activities.

Bureaucrats in the monitoring treatment receive SMS with information regarding the amount of transfers not yet withdrawn from the bank or not yet declared on the expense report, depending on the timing of the message in the intervention cycle. It is expected that this information creates the impression among bureaucrats that their actions are being observed and, as a consequence, induces them to comply with maintenance policies. This treatment should not be surprising for a fully rational agent, since it is common knowledge that the program is able to observe funds withdrawal and expense reporting. Therefore, by making salient a fact that is common knowledge among civil servants, it is possible to re-create some critical dimensions of monitoring systems in a cost-effective way.

Bureaucrats in the social norm treatment receive SMS with a message emphasizing that most bureaucrats are complying in their reference group (UGEL). Social norms are understood in this paper as a set of informal rules and unwritten codes that establish what we expect of others and what others expect from us. Following Cialdini et al. 2004, it is possible to establish a useful distinction between norms that inform us about what is typically done (descriptive norms) and norms that inform us about what is typically approved or disapproved (injunctive norms). We used a qualitative descriptive norm to minimize the risk of backlash effects, considering a body of evidence that suggests that providing actual levels of conformity with a social norm can induce more people to deviate from it if their baseline expectations regarding conformity with the norm were higher. In the follow-up experiment we further explore variants of social norms, including quantitative norms and alternative reference groups.

Bureaucrats in the shaming treatment receive SMS with information regarding the possible publication of a list with the names of those bureaucrats who fail to comply with the reporting of expenses. The goal is to induce concern regarding potential reputational loss in order to motivate compliance, especially when baseline non-compliance behavior is deeply rooted. This treatment arm is based on a large body of evidence indicating that people are more likely to comply when their behaviors are observed.

Finally, bureaucrats in the auditing treatment receive SMS with a soft threat of auditing. Specifically, they are told that they will be visited for supervision of their maintenance activities. Schools are already visited on a regular basis by UGEL representatives for several matters, including (of course) the development of maintenance activities. In that sense, the intervention simply makes salient an event that civil servants will face over the course of the academic year. However, given the scale of the intervention, the probability of facing a visit at a given moment in time is low. We take advantage of this fact to induce compliance among civil servants by reminding them that they will be visited by UGEL officials.

2. Follow-Up Field Experiment
We implemented a new large-scale field experiment in 2016 with the goal of further exploring the role of non-monetary incentives. Taking as a starting point the results of the Benchmark Experiment in 2015, we designed a new large-scale intervention to address the following questions:
a) What type of social norms are more relevant?
b) Is making salient the social benefits of investing in school infrastructure an alternative way to enforce compliance among civil servants?
c) Are the effects persistent over time?
d) Does the duration of the SMS campaign matter?

Regarding a), we extend our analysis of social norms by incorporating treatments targeted to address the distinction between descriptive and injunctive social norms. We further break down the descriptive social norm treatment into quantitative and qualitative versions as well as modifying the reference group. We proceed in the same way to break down the injunctive social norm into two reference groups: parents and principals.

With respect to b), we vary the dimension of social benefit to consider messages that emphasize the importance of a good quality infrastructure for students' health (well-being social benefit treatment), for the pride of the school community (pride social benefit treatment) and for contributing to the students' learning process (learning social benefit treatment).

c) addresses whether civil servants can be induced to comply with the maintenance policies beyond a one-shot SMS.

Finally, d) relates to a critical component of SMS campaign design: treatment duration. We experimentally vary the number of SMS delivered to civil servants. One group receives a short-duration campaign of four SMS delivered in a given period; a second group receives a long-duration campaign of seven SMS.

3. External Validity Field Experiment
We run an additional field experiment in a different population of civil servants to shed light on the applicability of our intervention to other settings. Although we recognize that running additional experiments does not exhaust all relevant dimensions of external validity, we believe this exercise sheds light on relevant issues to consider when it comes to understanding the applicability of our results to other settings. In addition, by implementing the Follow-Up Experiment in 2016 with the same population of civil servants, we also address the external validity of our results by controlling for aggregate time-specific shocks (Rosenzweig et al. 2016).
Intervention Start Date
2015-06-01
Intervention End Date
2017-02-01
Primary Outcomes
Primary Outcomes (end points)
For the Benchmark and Follow-Up Experiments, we construct a set of dummy variables for compliance with each step of the maintenance cycle. The most important variable is a dummy for whether the maintenance civil servant complied with the submission of the expense report, which is the step for which maintenance civil servants are accountable. We also consider the completion of the oversight report and the approval of the expense report. These two latter outcomes are not directly under the control of the maintenance civil servants, but they provide some measure of the quality of their performance, since the likelihood of approval is expected to be higher when maintenance activities are correctly performed. This is an imperfect measure of quality, however, since the lack of compliance by UGEL monitors in reviewing these reports on time can affect their approval. We also create dummy variables for different levels of compliance with the withdrawal of maintenance funds at the National Bank: withdrawal of any positive amount, as well as withdrawal of at least 50%, 80%, 90%, and 95% of the transferred funds.
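The withdrawal-compliance dummies described above can be sketched in a few lines. This is an illustrative reconstruction only; the function and field names are assumptions, not the study's actual variable names.

```python
def withdrawal_dummies(withdrawn, transferred):
    """Dummies for withdrawing any positive amount and at least
    50/80/90/95% of the transferred maintenance funds.
    Illustrative sketch; names are assumptions, not the study's code."""
    share = withdrawn / transferred if transferred > 0 else 0.0
    dummies = {"withdrew_any": int(withdrawn > 0)}
    for p in (50, 80, 90, 95):
        dummies[f"withdrew_{p}pct"] = int(share >= p / 100)
    return dummies
```

For a school that withdrew 85% of its transfer, this yields 1 for the any-positive, 50%, and 80% thresholds and 0 for the 90% and 95% thresholds.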

Regarding the External Validity Experiment, the outcome of interest is compliance with the reporting of service delivery. CUNA MAS requires updated information about the delivery of services, as well as beneficiaries' progress, on a monthly basis. This information is used to plan service delivery for the next period and to update the list of beneficiaries to incorporate new families. Lack of compliance with the submission of this information makes it harder for the program to respond to its beneficiaries' needs.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
1. Benchmark Experiment
Assignment to treatment was randomized at the school level, using a simple randomized design. Because we use administrative records for the program's full population of civil servants (around 24,000 across the country), the sample size of our experiment makes the choice of randomization strategy largely irrelevant (Bruhn et al. 2009).

2. Follow-Up Experiment
Assignment to treatment was randomized at the school level. We implemented a factorial block design. The factorial components include the behavioral treatments (9 in total) and the duration treatment (2 in total). The experiment was stratified on three dimensions: treatment status in the Benchmark Experiment, whether the maintenance civil servant is new, and the region.

3. External Validity Experiment
Assignment to treatment was randomized at the civil servant level. We implemented a block randomized design. The experiment was stratified on gender, experience, and the region.
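A block (stratified) randomization like the ones described above can be sketched as follows. This is a generic illustration under assumed inputs, not the study's actual assignment code: units are shuffled within each stratum and dealt to arms in round-robin order so arms stay balanced within every stratum.

```python
import random
from collections import defaultdict

def block_randomize(units, strata, arms, seed=0):
    """Assign units to arms within each stratum (block randomization).
    Generic sketch; the study's actual procedure may differ."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for unit, stratum in zip(units, strata):
        by_stratum[stratum].append(unit)
    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)  # random order within the stratum
        for i, unit in enumerate(members):
            assignment[unit] = arms[i % len(arms)]  # round-robin dealing
    return assignment
```

For the Follow-Up Experiment, the strata would combine prior treatment status, whether the civil servant is new, and region; for the External Validity Experiment, gender, experience, and region.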
Experimental Design Details
Randomization Method
Randomization was done in office by a computer.
Randomization Unit
School was the unit of randomization for the Benchmark and Follow-Up Experiments. Civil servant was the unit for the External Validity Experiment.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
The universe of civil servants with a cellphone registered in the programs' administrative records participated in these experiments. 24,268 schools were part of the Benchmark Experiment. 31,947 schools participated in the Follow-Up Experiment. 1,116 civil servants participated in the External Validity Experiment.
Sample size: planned number of observations
24,268 schools were part of the Benchmark Experiment. 31,947 schools participated in the Follow-Up Experiment. 1,116 civil servants participated in the External Validity Experiment.
Sample size (or number of clusters) by treatment arms
1. Benchmark Experiment
3,500 schools per treatment arm x 5 treatment arms. Control group has 6,700 schools.

2. Follow-Up Experiment
Descriptive social norms treatment has 11,800 schools. Injunctive social norm treatment has 5,900 schools. Social benefit treatment has 8,900 schools. Control group has 5,300 schools.

3. External Validity Experiment
380 civil servants per treatment arm, including the control group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations were not required, as the interventions were nationwide experiments.
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
Yes
Intervention Completion Date
February 01, 2017, 12:00 AM +00:00
Is data collection complete?
Yes
Data Collection Completion Date
February 01, 2017, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
24,268 schools were part of the Benchmark Experiment. 31,947 schools participated in the Follow-Up Experiment. 1,116 civil servants participated in the External Validity Experiment.
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
24,268 schools were part of the Benchmark Experiment. 31,947 schools participated in the Follow-Up Experiment. 1,116 civil servants participated in the External Validity Experiment.
Final Sample Size (or Number of Clusters) by Treatment Arms
1. Benchmark Experiment: 3,500 schools per treatment arm x 5 treatment arms. Control group has 6,700 schools.
2. Follow-Up Experiment: Descriptive social norms treatment has 11,800 schools. Injunctive social norm treatment has 5,900 schools. Social benefit treatment has 8,900 schools. Control group has 5,300 schools.
3. External Validity Experiment: 380 civil servants per treatment arm, including the control group.
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
No
Reports and Papers
Preliminary Reports
Relevant Papers