
Performance Scorecards and Government Service Delivery: Experimental Evidence from Land Record Changes in Bangladesh

Last registered on August 13, 2018

Pre-Trial

Trial Information

General Information

Title
Delays, Corruption and Monitoring in government service provision
RCT ID
AEARCTR-0003232
Initial registration date
August 13, 2018

First published
August 13, 2018, 3:34 AM EDT

Locations

Region

Primary Investigator

Affiliation
National University of Singapore

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2018-09-01
End date
2019-08-01
Secondary IDs
Abstract
Countries with high levels of corruption also have slower government service delivery, but it is not known whether corruption causes delays, delays cause corruption, or whether the two are causally related at all. In this project I propose a model of how bureaucrats choose processing times and bribe demands for government service applicants. The model shows how, under certain information settings, the possibility of demanding bribes gives bureaucrats an incentive to impose inefficiently long processing times on some applications.

I will test the predictions of this model using an experiment on a specific government service in a specific setting: changes to land records in Bangladesh. The experiment will test a management information system in the form of a monthly performance scorecard that makes delays in the processing of applications for land record changes visible to bureaucrats’ managers. The first question the experiment will answer is whether the scorecard actually reduces delays in this government service. The second question is whether the scorecard, having created an incentive to reduce the number of delayed applications, also reduces the amount of bribes paid by applicants. My model generates different predictions for how bribe payments change under different information settings. Hence the experiment will not only test the model but also identify the information setting under which bureaucrats operate.
External Link(s)

Registration Citation

Citation
Mattsson, Martin. 2018. "Delays, Corruption and Monitoring in government service provision." AEA RCT Registry. August 13. https://doi.org/10.1257/rct.3232-1.0
Former Citation
Mattsson, Martin. 2018. "Delays, Corruption and Monitoring in government service provision." AEA RCT Registry. August 13. https://www.socialscienceregistry.org/trials/3232/history/33007
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The intervention of our experiment is to generate a monthly “Performance Scorecard” for bureaucrats in charge of making changes to land records in Bangladesh. We will share these scorecards with the bureaucrats and their superiors. The scorecard will report the number of applications processed within 45 working days (the maximum processing time allowed under current regulation) in the past month, as well as the number of applications that have been pending for more than 45 working days at the end of each month. In addition to these figures, the scorecard will include a percentile ranking among the bureaucrats, so that anyone receiving the scorecard can assess a bureaucrat’s relative performance.
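
As a purely illustrative sketch (not the actual eMutation queries), the monthly scorecard figures could be computed from application-level records in Stata roughly as follows, assuming one row per application with variables office, decided (1 if a decision has been issued) and work_days (working days elapsed since submission); all variable names are assumptions:

* Sketch only: variable names are illustrative, not actual eMutation fields.
* (Restrict the data to the relevant month before collapsing.)
* Applications processed within the 45-working-day regulatory limit
gen byte done_on_time = decided == 1 & work_days <= 45
* Applications still pending after more than 45 working days
gen byte pending_late = decided == 0 & work_days > 45
* One row per office with the two scorecard counts
collapse (sum) done_on_time pending_late, by(office)
* Percentile ranking of offices by applications processed on time
egen rank_on_time = rank(done_on_time)
gen pctile_on_time = 100 * (rank_on_time - 1) / (_N - 1)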
Intervention Start Date
2018-09-01
Intervention End Date
2019-03-01

Primary Outcomes

Primary Outcomes (end points)
1. Effect of Performance Scorecard on bureaucrat performance as measured by the scorecards
2. Benefits of Performance Scorecard to applicants
3. Testing predictions of the model outlined in Pre-Analysis Plan
4. Spillover effect on applications not entering the scorecards

All of these outcomes are described in detail in the Pre-Analysis Plan.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The main identification strategy of the study will be the randomization of which offices the scorecard system is implemented in. Out of the 114 Upazila Land Offices currently connected to the eMutation system, 57 will have scorecards generated for them. Currently, 18 of these offices have fully implemented the eMutation system, so that all of their applications are processed through the digital system. The randomization will be done separately for the full implementation group and for the partial implementation group. After these two groups have been separated, the randomization will be stratified using the following strata:

1. Above/below the median number of applications processed within 45 working days in June and July 2018 (full implementation group)
2. First, second, or third tertile of the number of applications processed within 45 working days in June and July 2018 (partial implementation group)
3. Above/below the median number of applications pending for more than 45 working days (full implementation group)
4. First, second, or third tertile of the number of applications pending for more than 45 working days (partial implementation group)

Crossing these variables gives me 13 strata: 2 × 2 = 4 in the full implementation group and 3 × 3 = 9 in the partial implementation group. Within each stratum, half of the ULOs (Upazila Land Offices) are assigned to treatment, as sketched below.
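
A minimal sketch of this assignment step in Stata, assuming a dataset with one row per ULO and a variable stratum encoding the 13 strata (the variable name and seed are illustrative, not those used in the actual randomization):

* Sketch only: assumes one row per ULO and a stratum variable with 13 values.
set seed 3232
gen double u = runiform()
sort stratum u
* Within each stratum, assign the first half (by the random draw) to treatment
by stratum: gen byte treat = _n <= _N/2
label define arm 0 "Control" 1 "Treatment"
label values treat arm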
Experimental Design Details
Randomization Method
Stratified (Block) randomization using Stata.
Randomization Unit
Upazila Land Office
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
114 Upazila Land Offices
Sample size: planned number of observations
Approximately 40,000 applications
Sample size (or number of clusters) by treatment arms
57 Treatment Upazila Land Offices, 57 Control Upazila Land Offices
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee - Yale University
IRB Approval Date
2018-06-20
IRB Approval Number
2000021565
Analysis Plan

Analysis Plan Documents

Pre Analysis Plan

MD5: 9087d7037a521efbb9e75fc7d4264ce6

SHA1: 1b55d1f36847e3f21c4da9e77bd9f98dafed7818

Uploaded At: August 13, 2018

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials