Information Provision and Court Performance: Experimental Evidence from Chile

Last registered on June 17, 2021

Trial Information

General Information

Title
Information Provision and Court Performance: Experimental Evidence from Chile
RCT ID
AEARCTR-0005512
Initial registration date
December 06, 2020

First published
December 07, 2020, 10:54 AM EST

Last updated
June 17, 2021, 4:42 AM EDT

Locations

Region

Primary Investigator

Affiliation
Toulouse School of Economics

Other Primary Investigator(s)

PI Affiliation
IAST/ Toulouse School of Economics
PI Affiliation
University of California, Los Angeles
PI Affiliation
World Bank

Additional Trial Information

Status
In development
Start date
2020-01-01
End date
2021-09-30
Secondary IDs
H43, D63
Abstract
Previous studies have shown that behavioral nudges can be a cost-effective tool for changing people's behavior. In this study, we test whether nudging court managers by informing them about how their court performs, in absolute and relative terms, can improve court productivity. Moreover, we test whether it matters if the information about court performance is presented relative to the court's own past performance or relative to other courts' performance.
External Link(s)

Registration Citation

Citation
Carrillo, Paloma et al. 2021. "Information Provision and Court Performance: Experimental Evidence from Chile." AEA RCT Registry. June 17. https://doi.org/10.1257/rct.5512-1.1
Sponsors & Partners

Some information in this trial is unavailable to the public.
Experimental Details

Interventions

Intervention(s)
In this study, we test whether nudging court managers by informing them about how their court performs, in absolute and relative terms, can improve court productivity.
Intervention (Hidden)
The Department of Institutional Development (DDI) of the Chilean judicial system launched an electronic platform called Quantum in 2018. Quantum displays comprehensive information on court indicators, such as the number of cases filed, the case clearance rate, the average duration of cases closed in a month, and the percentage of hearings actually held. It also allows users to compare their courts to other courts in the same jurisdiction. Since its launch, however, take-up has been limited: 20% of court managers have never logged in, and court managers average only 20 logins over a period of 1 year and 2 months. The platform is technologically well developed and rich in information, yet it is unclear whether it has any impact on the management of the courts.

Our project evaluates an intervention through a randomized controlled trial (RCT) with two main branches. First, we will (randomly) promote the Quantum platform in multiple ways, such as sending court managers a survey that implicitly markets the platform, making phone calls, and sending them emails. Second, we will (also randomly) provide court managers with a new dashboard that summarizes the main statistics displayed in Quantum and compares them to the court's own past performance or to a reference group of courts. There will be a total of six distinct treatments:

Treatment 0: Control. Courts receive no change to their Quantum dashboards and no Quantum promotion.

Treatment 1: Quantum Promotion
The tribunals randomized into Treatment 1 will have their court staff receive emails with a Quantum link, to increase the accessibility and salience of Quantum, as well as short baseline and post-intervention surveys that include Quantum promotion. The baseline survey given to the court managers at the beginning of the RCT will ask about their beliefs regarding some productivity metrics and how much these metrics affect their decisions at work, inform them that these metrics can be seen in Quantum, and describe the effect of Quantum usage on productivity through the results of an event study using historical data. A sample survey is provided in the appendix.

Treatment 2: No Quantum Promotion + New Dashboard
The tribunals randomized into Treatment 2 will not receive any promotion, but their home page in Quantum, which we call the dashboard, will present various statistics at the tribunal level.

Treatment 3: Quantum Promotion + New Dashboard
The tribunals randomized into Treatment 3 will receive the same promotion as in Treatment 1 and the same new dashboard as in Treatment 2.

Treatment 4: No Quantum Promotion + New Dashboard + Comparative to others
The tribunals in Treatment 4 will receive the new dashboard plus an additional tab or pop-up window that highlights the tribunal's best-performing and worst-performing dimensions from the previous month in comparison to the performance of peer tribunals (same competence) in the same month. This comparison leverages social-comparison motivation.

Treatment 5: Quantum Promotion + New Dashboard + Comparative to others
The tribunals in Treatment 5 will receive the same promotion as in Treatment 1, plus the new dashboard and the comparative tab or pop-up window described in Treatment 4.

Court managers' job satisfaction will be measured with pre- and post-intervention surveys that capture their perceptions of their tribunals and their satisfaction with their positions. Informing court managers about their court's standing through the new dashboard and the comparison to others could change how empowered or satisfied a court manager feels in his or her position and role.
Overall, the objectives of the survey for court staff are threefold. First, it will measure their knowledge of the Quantum statistics: how close or far their perception of their court's performance is from the truth. We can later use this information (prior beliefs) to understand whether greater access to Quantum brings beliefs closer to the truth when we measure their posteriors (survey at the end of the intervention). Note that the measurement of beliefs and opinions is something unique to the survey that the other interventions cannot provide. Second, the survey will allow us to understand whether the court staff find the statistics important and in what order of importance. This is useful for Quantum to know which variables matter to users and to make them more salient in the dashboard or in the rest of the Quantum pages. That is, the results from the survey can help tailor the intervention to make it more effective. Third, the survey will promote Quantum as a source of accurate and useful information through the event-study results. This may help persuade those who are skeptical of Quantum to give it a chance.

Intervention Start Date
2020-12-14
Intervention End Date
2021-09-30

Primary Outcomes

Primary Outcomes (end points)
Case clearance rate, average duration of filed cases (days), average duration of ended cases, the average time the court needs to provide a written submission during the consultation period, percentage of written submissions resolved within 3-5 days, average number of days to schedule a hearing, percentage of hearings that started with a delay of 15 minutes, percentage of cases pending for more than 1-2 years, appeal rate, and number of cases appealed.

For the promotion intervention, the main primary outcome is the number of logins per court manager on the Quantum platform.
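Among these outcomes, the case clearance rate has a conventional definition: cases resolved in a period divided by cases filed in the same period. A minimal sketch of that computation (the function name and monthly figures below are illustrative, not taken from the registration):

```python
import math

def clearance_rate(resolved: int, filed: int) -> float:
    """Case clearance rate: cases resolved divided by cases filed in the
    same period. A rate above 1.0 means the stock of pending cases shrank."""
    if filed == 0:
        return math.nan  # undefined when no cases were filed
    return resolved / filed

# Hypothetical monthly figures for one court
rate = clearance_rate(resolved=95, filed=100)  # 0.95
```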
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Court managers job satisfaction
Secondary Outcomes (explanation)
Court managers' job satisfaction will be taken from the pre- and post-intervention surveys that measure their perceptions of their tribunals and their satisfaction with their positions. Informing court managers about their court's standing could change how empowered or satisfied a court manager feels in his or her position and role.

Experimental Design

Experimental Design
The program will have six distinct treatments. The treatments combine promoting the use of an electronic platform that contains information on court performance with providing distinct home pages on this platform that summarize the court's performance, stressing its weaknesses and strengths in comparison to a reference group.
Experimental Design Details
First, we will (randomly) promote the Quantum platform in multiple ways, such as sending court managers a survey that implicitly markets the platform, making phone calls, and sending them emails. Second, we will (also randomly) provide court managers with a new dashboard that summarizes the main statistics displayed in Quantum and compares them to the court's own past performance or to a reference group of courts. There will be a total of six treatments: (0) no Quantum promotion and no new dashboard (control); (1) Quantum promotion; (2) no Quantum promotion and the new Quantum dashboard; (3) Quantum promotion and the new dashboard; (4) no Quantum promotion, the new dashboard, and a comparative that emphasizes the strongest and weakest indicators for that month relative to a similar group of courts in the same month; and (5) Quantum promotion, the new dashboard, and the same comparative as in (4).

Given that the information in the Quantum platform is updated daily and our dashboards are updated with monthly data, we will have multiple pre-treatment observations and many post-treatment observations.
Randomization Method
Randomization done in office by a computer.
Randomization Unit
The unit of randomization is the court. The randomization was stratified by size (small and big) and court type (7 distinct ones).
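The registration states only that randomization was computerized and stratified by size (small/big) and court type (7 types). As an illustration of how such a stratified assignment into six arms can be implemented, here is a minimal sketch; the court identifiers, the seed, and the shuffle-then-cycle scheme are assumptions for illustration, not the authors' actual procedure:

```python
import random

def assign_treatments(courts, n_arms=6, seed=2020):
    """Assign each court to one of n_arms, balancing arms within each
    stratum defined by (size, court type).

    courts: iterable of (court_id, size, court_type) tuples.
    Returns a dict mapping court_id -> arm index (0 = control).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    # Group courts into strata
    strata = {}
    for court_id, size, ctype in courts:
        strata.setdefault((size, ctype), []).append(court_id)
    # Within each stratum, shuffle and cycle through the arms
    assignment = {}
    for stratum_courts in strata.values():
        rng.shuffle(stratum_courts)
        for i, court_id in enumerate(stratum_courts):
            assignment[court_id] = i % n_arms
    return assignment

# Hypothetical roster: 346 courts, 2 sizes x 7 court types
courts = [(f"court_{i}", "small" if i % 2 else "big", f"type_{i % 7}")
          for i in range(346)]
arms = assign_treatments(courts)
```

Cycling through arms after shuffling guarantees near-equal arm sizes within every stratum (counts differ by at most one per stratum), which is one common way to obtain the roughly 57-58 courts per arm reported below.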
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
346 courts
Sample size: planned number of observations
346 courts
Sample size (or number of clusters) by treatment arms
57 courts stay as control, 57 courts receive T1, 58 courts receive T2, 58 courts receive T3, 58 courts receive T4, and 58 courts receive T5.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Toulouse School of Economics
IRB Approval Date
2020-01-18
IRB Approval Number
N/A

Post-Trial

Study Withdrawal

Some information in this trial is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials