Kenyan Judicial Performance Improvement

Last registered on November 10, 2015

Pre-Trial

Trial Information

General Information

Title
Kenyan Judicial Performance Improvement
RCT ID
AEARCTR-0000941
Initial registration date
November 10, 2015

First published
November 10, 2015, 2:10 PM EST

Last updated
November 10, 2015, 2:15 PM EST

Locations

Region

Primary Investigator

Affiliation
The World Bank

Other Primary Investigator(s)

PI Affiliation
The World Bank Group
PI Affiliation
The World Bank Group
PI Affiliation
Center for Global Development

Additional Trial Information

Status
Ongoing
Start date
2015-10-01
End date
2018-02-01
Secondary IDs
Abstract
The primary goal of this evaluation is to test alternative implementation strategies for the performance contracts in the Kenyan Judiciary. This impact evaluation will test three such measures: (i) information that shows performance contract signatories how they are performing against their targets and relative to their peers; (ii) complementing that information with calls from management to reinforce supervisory accountability; and (iii) sharing the information with court user committees as a means of local accountability. Variations in these complementary measures will be the heart of the impact evaluation.
External Link(s)

Registration Citation

Citation
Maro, Vincenzo et al. 2015. "Kenyan Judicial Performance Improvement." AEA RCT Registry. November 10. https://doi.org/10.1257/rct.941-2.0
Former Citation
Maro, Vincenzo et al. 2015. "Kenyan Judicial Performance Improvement." AEA RCT Registry. November 10. https://www.socialscienceregistry.org/trials/941/history/5975
Sponsors & Partners

There is information in this trial that is unavailable to the public.

Experimental Details

Interventions

Intervention(s)
The impact evaluation will test three treatments: (1) simplified information on performance relative to peers; (2) a top-down accountability feedback structure; and (3) a demand-side accountability feedback structure.

A randomized controlled trial will be used to implement these interventions and to evaluate whether the feedback and increased engagement result in a reduction of the case backlog, improvements in case processing times, and improved quality of judicial services. Treatment will be allocated randomly to all courts nationwide.
Intervention Start Date
2016-02-01
Intervention End Date
2017-02-01

Primary Outcomes

Primary Outcomes (end points)
Court administrative data collected through a template called the Daily Court Returns Template (DCRT). This template will primarily be used to measure the timeliness and efficiency of cases, namely: time to disposition (the average and median time from the filing of a case, or entry of plea, to its conclusion); the percentage of cases concluded within the time limit (360 days); and the case clearance rate (the number of resolved cases as a percentage of initiated cases over a specific period).

Court user satisfaction and employee engagement surveys will provide a means to determine whether efforts to improve timeliness and efficiency have negative side effects on quality. The surveys will be conducted on an annual basis and will ask court users and employees about their perceptions of service quality, facilities, judicial officers, and complaints handling.
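As an illustration of how the DCRT-based endpoints could be computed from case-level records, the sketch below (Python/pandas) derives the three indicators. The input table and its column names (filing_date, conclusion_date) are hypothetical placeholders; only the indicator definitions and the 360-day limit come from this registration.

```python
# Illustrative computation of the three DCRT-based indicators.
# The input DataFrame `cases` is hypothetical: one row per case, with
# datetime columns filing_date and conclusion_date (NaT if still pending).
import pandas as pd

def dcrt_indicators(cases: pd.DataFrame, period_start, period_end, time_limit_days=360):
    concluded = cases.dropna(subset=["conclusion_date"]).copy()
    concluded["days_to_disposition"] = (
        concluded["conclusion_date"] - concluded["filing_date"]
    ).dt.days

    # Cases initiated and cases resolved within the reporting period
    filed = cases[cases["filing_date"].between(period_start, period_end)]
    resolved = concluded[concluded["conclusion_date"].between(period_start, period_end)]

    return {
        # Time to disposition: average and median days from filing to conclusion
        "mean_days_to_disposition": concluded["days_to_disposition"].mean(),
        "median_days_to_disposition": concluded["days_to_disposition"].median(),
        # Share of concluded cases resolved within the 360-day limit
        "pct_within_limit": (concluded["days_to_disposition"] <= time_limit_days).mean(),
        # Case clearance rate: resolved cases as a share of initiated cases in the period
        "clearance_rate": len(resolved) / len(filed) if len(filed) else float("nan"),
    }
```

In practice these would be aggregated per court and per month to match the DCRT reporting waves; the sketch only shows the definitions.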
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The intervention will be implemented using a fractional factorial design that tests three binary treatments across four treatment arms (clusters). To avoid any one arm receiving two or more of the courts located in the largest urban areas, randomization will be stratified by city size; in particular, courts located within the largest urban areas will not be placed in the same treatment group.
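A minimal sketch of this kind of stratified assignment is shown below, assuming a hypothetical list of courts tagged with a city-size stratum. The court identifiers, stratum labels, and the round-robin allocation are illustrative assumptions; only the four arms and the stratification by city size are taken from this registration.

```python
import random

ARMS = ["control", "feedback", "feedback_topdown", "feedback_demand"]

def assign_arms(courts, seed=0):
    """Stratified random assignment of courts to the four arms.

    `courts` is a list of (court_id, stratum) pairs, where the stratum
    encodes city size (e.g. "largest_urban", "other"). Within each
    stratum, courts are shuffled and arms handed out round-robin, so
    courts in the largest urban areas are spread across arms rather
    than concentrated in one treatment group.
    """
    rng = random.Random(seed)
    strata, assignment = {}, {}
    for court_id, stratum in courts:
        strata.setdefault(stratum, []).append(court_id)
    for members in strata.values():
        rng.shuffle(members)
        for i, court_id in enumerate(members):
            assignment[court_id] = ARMS[i % len(ARMS)]
    return assignment

# Hypothetical usage: the four large-city courts land in four different arms.
example = [("Court_01", "largest_urban"), ("Court_02", "largest_urban"),
           ("Court_03", "largest_urban"), ("Court_04", "largest_urban"),
           ("Court_05", "other"), ("Court_06", "other")]
print(assign_arms(example))
```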
Experimental Design Details
Randomization Method
A randomized controlled trial methodology will be used to implement the identified interventions and to evaluate whether the feedback and increased engagement result in a reduction of the case backlog, improvements in case processing times, and improved quality of judicial services.
Randomization Unit
4 clusters with 58 courts randomly assigned to each cluster
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
4 clusters
Sample size: planned number of observations
218 courts
Sample size (or number of clusters) by treatment arms
58 courts control, 58 simple feedback form, 58 simple feedback form + top-down accountability, 58 simple feedback form + demand-side accountability
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Indicators based on the Daily Court Returns Template: Power calculations indicate that 58 courts per treatment arm are sufficient to achieve 80% power. Intra-cluster correlation is not applicable here, as there is one observation per court. We report two scenarios for the correlation between waves, which we expect to be quite high given the monthly nature of the data. The benchmark indicator is the case clearance rate. Other inputs for the power calculation are: 5% significance level, 80% power, a baseline probability of 60% (based on data from the September 2015 DCRT), 6 baseline waves, and 12 follow-up waves. The calculations confirm that 58 courts per treatment arm should ensure an acceptable level of statistical power.

Indicators based on the court user satisfaction and employee engagement survey: Power calculations indicate that 58 courts per treatment arm are sufficient to achieve 80% power under different levels of intra-cluster correlation (rho). The benchmark indicator is the proportion of court users who report being satisfied with the court service. Other inputs for the power calculation are: 5% significance level, 80% power, a baseline probability of 66% (based on the baseline data), 1 baseline wave, 2 follow-up waves, a correlation between waves of 0.6, and 24 respondents interviewed in each court (as was the case for the baseline survey already conducted and as planned for the follow-ups).
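As a rough, illustrative cross-check of the clearance-rate calculation described above, the sketch below computes an approximate minimum detectable effect using a standard two-proportion formula, collapsing the monthly follow-up waves into an effective sample size via a design-effect adjustment for within-court correlation across waves. The number of courts, follow-up waves, baseline probability, significance level, and power come from this registration; the correlation values and the pooling approach are simplifying assumptions (the sketch ignores the gain from the 6 baseline waves), so it is more conservative than, and not a reproduction of, the authors' calculation.

```python
from scipy.stats import norm

def mde_two_proportions(n_courts, n_waves, rho, p0=0.60, alpha=0.05, power=0.80):
    """Approximate MDE (difference in proportions) for one treatment arm vs. control.

    Repeated waves per court are collapsed into an effective sample size
    using the design effect 1 + (n_waves - 1) * rho, where rho is the
    assumed correlation of the outcome across waves within a court.
    """
    n_eff = n_courts * n_waves / (1 + (n_waves - 1) * rho)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Two-sample comparison of proportions, equal arm sizes, variance evaluated at p0
    return z * (2 * p0 * (1 - p0) / n_eff) ** 0.5

# 58 courts per arm, 12 follow-up waves, two assumed between-wave correlations
for rho in (0.6, 0.8):
    print(f"rho={rho}: MDE ~ {mde_two_proportions(58, 12, rho):.3f}")
```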
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials