Fighting Corruption in Peru: Detecting bias in prioritizing whistleblower complaints

Last registered on August 20, 2021

Pre-Trial

Trial Information

General Information

Title
Fighting Corruption in Peru: Detecting bias in prioritizing whistleblower complaints
RCT ID
AEARCTR-0007997
Initial registration date
July 23, 2021

First published
July 27, 2021, 2:29 AM EDT

Last updated
August 20, 2021, 4:05 PM EDT

Locations

Region

Primary Investigator

Affiliation
Columbia University

Other Primary Investigator(s)

PI Affiliation
Columbia University
PI Affiliation
IADB

Additional Trial Information

Status
Ongoing
Start date
2021-07-19
End date
2021-08-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
As part of its battle against corruption, the Peruvian government's Contraloría General de la República (CGR), the Comptroller General, invites and receives a large number of anonymous complaints from citizens about misconduct by government officials. However, CGR receives a very large volume of whistleblower complaints and has only a small team tasked with receiving them. As a result, CGR is forced to make triage decisions about which complaints to prioritize and which to discard. These triage decisions are made by analysts and, as in any setting that relies on human judgement, are subject to inefficiency and potential bias. In the Peruvian context, the main candidate for potential bias is bias against indigenous people. This study seeks to measure this bias and discern its causes by differentiating between competing theories of discrimination.

This study is a lab-in-the-field experiment. We will hold sessions in which participants complete a range of tasks under laboratory conditions, taking part virtually from their home or office rather than coming physically to a laboratory. The participants will perform two types of tasks. In the first part of the study, we aim to simulate, under laboratory conditions, the triage decisions the analysts make. They will read examples of whistleblower complaints and assign each complaint a priority score reflecting their impression of how likely it is that the case is worthy of deeper investigation. The complaints they read will be drawn from real historical whistleblower complaints from the last three years. For a random subset of the complaints, the name of the accused individual will be switched from a Spanish-sounding name to an indigenous-sounding (Quechua or Aymara) name, or vice versa. This technique will allow us to measure the extent of ethnic bias in triage decisions among respondents. Combining it with administrative data on the outcomes of the investigations of the cases that were actually investigated will allow us to study the sources of this bias.

In the second part of the study, we elicit a series of behavioral traits of the subjects that we can then correlate with the biases measured in the first part. The subjects will be grouped into sessions of 20 participants and asked to play a series of behavioral games designed to measure prosocial outcomes, such as their level of trust and cooperation. After finishing the tasks that require interaction between players, they will be presented with individual surveys about their personality and cognitive skills.
External Link(s)

Registration Citation

Citation
Best, Michael, Jonas Hjort and Gastón Pierri. 2021. "Fighting Corruption in Peru: Detecting bias in prioritizing whistleblower complaints." AEA RCT Registry. August 20. https://doi.org/10.1257/rct.7997-2.0
Experimental Details

Interventions

Intervention(s)
The experimental sessions will generate data of two types. In the first part of the study, we aim to simulate, under laboratory conditions, the triage decisions the analysts make. They will read examples of whistleblower complaints and assign each complaint a priority score reflecting their impression of how likely it is that the case is worthy of deeper investigation. The complaints they read will be drawn from real historical whistleblower complaints from the last three years. For a random subset of the complaints, the name of the accused individual will be switched from a Spanish-sounding name to an indigenous-sounding (Quechua or Aymara) name, or vice versa. This technique will allow us to measure the extent of ethnic bias in triage decisions among respondents. Combining it with administrative data on the outcomes of the investigations of the cases that were actually investigated will allow us to study the sources of this bias.

In the second part of the study, we elicit a series of behavioral traits of the subjects that we can then correlate with the biases measured in the first part. The subjects will be grouped into sessions of 20 participants and asked to play a series of behavioral games designed to measure prosocial outcomes, such as their level of trust and cooperation. After finishing the tasks that require interaction between players, they will be presented with individual surveys about their personality and cognitive skills.
Intervention Start Date
2021-07-19
Intervention End Date
2021-08-03

Primary Outcomes

Primary Outcomes (end points)
Triage decisions: (1) the priority score given to each case; (2) the probability the analyst assigns to the case leading to the triggering of an alarm.
Primary Outcomes (explanation)
Analysts are asked to assign each case a priority score between 1 and 100. They are also asked what they believe is the probability that the case, if investigated further, would lead the CGR to trigger an alarm and recommend prosecution for corruption.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experimental sessions will generate data of two types. In the first part of the study, we aim to simulate, under laboratory conditions, the triage decisions the analysts make. They will read examples of whistleblower complaints and assign each complaint a priority score reflecting their impression of how likely it is that the case is worthy of deeper investigation. The complaints they read will be drawn from real historical whistleblower complaints from the last three years. For a random subset of the complaints, the name of the accused individual will be switched from a Spanish-sounding name to an indigenous-sounding (Quechua or Aymara) name, or vice versa. This technique will allow us to measure the extent of ethnic bias in triage decisions among respondents. Combining it with administrative data on the outcomes of the investigations of the cases that were actually investigated will allow us to study the sources of this bias.

In the second part of the study, we elicit a series of behavioral traits of the subjects that we can then correlate with the biases measured in the first part. The subjects will be grouped into sessions of 20 participants and asked to play a series of behavioral games designed to measure prosocial outcomes, such as their level of trust and cooperation. After finishing the tasks that require interaction between players, they will be presented with individual surveys about their personality and cognitive skills.
Experimental Design Details
Randomization Method
Randomization done in office by a computer.
We first randomize the order in which each analyst sees the cases in our library, stratifying by case type. We stratify by how far the case made it through the complaints analysis process (first-stage rejection, second-stage rejection, second-stage alarm) crossed with whether the case accused an official with an indigenous name. To do this, we form blocks of 31 randomly selected cases such that the proportion of cases in each stratum in each block matches the overall proportions, and we randomly order the cases within each block. Then, within each block, we randomly switch the name of the accused in an average of 7.5 of the 31 cases (alternating between switching 7 and switching 8 names), stratifying by whether the original case had an indigenous or a Hispanic name.
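
For concreteness, a minimal Python sketch of this two-step randomization follows. It is an illustration of the procedure described above, not the study's actual code; the case fields ("case_id", "stage", "indigenous_name"), function names, and seed are hypothetical, while the block size of 31, the stage-by-name strata, and the 7/8 alternation follow the description above.

    # Illustrative sketch only, not the study's code.
    import random
    from collections import Counter, defaultdict

    BLOCK_SIZE = 31  # cases per block

    def stratum(case):
        # Stratum = stage reached in the complaints process ("reject1",
        # "reject2", "alarm") x whether the accused has an indigenous name.
        return (case["stage"], case["indigenous_name"])

    def make_blocks(cases, n_blocks, rng):
        """Form randomly ordered blocks whose stratum shares mirror the full library."""
        shares = {k: v / len(cases) for k, v in Counter(map(stratum, cases)).items()}
        pools = defaultdict(list)
        for c in cases:
            pools[stratum(c)].append(c)
        for pool in pools.values():
            rng.shuffle(pool)
        blocks = []
        for _ in range(n_blocks):
            block = []
            for key, pool in pools.items():
                k = round(BLOCK_SIZE * shares[key])  # match overall proportions
                block.extend(pool[:k])
                del pool[:k]                         # draw without replacement
            rng.shuffle(block)                       # random order within the block
            blocks.append(block)
        return blocks

    def assign_name_switches(blocks, rng):
        """Switch the accused's name in 7 or 8 cases per block (alternating),
        stratifying by whether the original name was indigenous or Hispanic."""
        out = []
        for i, block in enumerate(blocks):
            n_switch = 7 if i % 2 == 0 else 8
            indig = [c for c in block if c["indigenous_name"]]
            hisp = [c for c in block if not c["indigenous_name"]]
            n_indig = round(n_switch * len(indig) / len(block))
            switched = {c["case_id"] for c in rng.sample(indig, min(n_indig, len(indig)))}
            switched |= {c["case_id"] for c in rng.sample(hisp, min(n_switch - n_indig, len(hisp)))}
            out.extend({**c, "name_switched": c["case_id"] in switched} for c in block)
        return out

    # Usage with a hypothetical library of 155 cases per analyst (5 blocks of 31):
    # rng = random.Random(2021)
    # blocks = make_blocks(cases, n_blocks=5, rng=rng)
    # assignments = assign_name_switches(blocks, rng=rng)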
Randomization Unit
Case
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We prepared 155 cases for each analyst and invited 203 analysts to participate. We expect each analyst to review an average of 90 cases in the time allowed and 190 analysts to attend the sessions, for a total of 90 * 190 = 17,100 case reviews.
Sample size: planned number of observations
17,100
Sample size (or number of clusters) by treatment arms
The randomization described above implies that the expected proportion of fake cases analysts see is 7.5 / 31 = 0.2419. So, if analysts indeed review 90 cases each and 190 analysts attend the sessions, we will have 7.5 / 31 * 17,100 ≈ 4,137 fake cases and 12,963 real cases.
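
A short sketch of the arithmetic behind these figures, using the expectations stated above (variable names are for illustration only):

    # Expected sample sizes implied by the design (figures from the registration).
    cases_per_analyst = 90       # expected cases reviewed in the time allowed
    analysts = 190               # expected attendance out of 203 invited
    total_cases = cases_per_analyst * analysts    # 17,100 case reviews

    p_fake = 7.5 / 31                             # ~0.2419 share with switched names
    fake_cases = round(p_fake * total_cases)      # ~4,137 name-switched ("fake") cases
    real_cases = total_cases - fake_cases         # ~12,963 unaltered cases
    print(total_cases, fake_cases, real_cases)    # 17100 4137 12963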
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Columbia University MS IRB
IRB Approval Date
2021-07-16
IRB Approval Number
IRB-AAAT7775
Analysis Plan

There is information in this trial that is not available to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is not available to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials