
Fields Changed

Registration

Field: Abstract
Before: This project investigates gender bias in small business microfinance lending through a framed experiment conducted in Egypt. Loan officers evaluate previously approved loans with randomized applicant names. By comparing whether the same portfolios are rejected more frequently under names suggesting different genders, the study aims to identify whether bias exists. Additionally, the project explores the origins of this bias, examining whether it stems from inclusion errors or exclusion errors. It also investigates whether the degree of bias varies with the incentive structure that loan officers face. The project seeks to determine how such bias might be mitigated. For example, it tests whether sensitivity training, AI-assisted decision-making, or higher incentives and penalties for incorrect decisions have a positive impact.
After: This project investigates gender bias in small business microfinance lending through a framed experiment conducted in Egypt. Loan officers evaluate previously approved loans with randomized applicant names. By comparing whether the same portfolios are rejected more frequently under names suggesting different genders, the study aims to identify whether bias exists. Additionally, the project explores the origins of this bias, examining whether it stems from inclusion errors or exclusion errors. It also investigates whether the degree of bias varies with the incentive structure that loan officers face. The project aims to explore how to mitigate bias and improve accuracy in loan decisions. For example, it tests whether sensitivity training, AI-assisted decision-making, or higher incentives and penalties for incorrect decisions have a positive impact.
Field: Last Published
Before: December 20, 2024 04:13 PM
After: April 08, 2025 07:03 PM
Field: Primary Outcomes (End Points)
Before: Existence of bias, whether officers follow the AI's decisions, and the types of questions asked of the AI.
After: Existence of bias (exclusion or inclusion errors), accuracy of loan decisions, whether officers follow the AI's decisions, and the types of questions asked of the AI.
Field: Primary Outcomes (Explanation)
With feedback on IAT scores, higher incentives/penalties, and the help of ChatGPT, bias is expected to be smaller than in the control group, so the existence of bias is the main outcome. In addition, whether participants follow the AI's decisions helps explain how AI-assisted decision-making can reduce bias.
Field: Experimental Design (Public)
Before: To investigate gender bias in loan decisions and assess interventions to mitigate it, I am conducting a framed field experiment in Egypt. This study involves approximately 400 experienced loan officers. Loan officers make hypothetical decisions about 10 previously approved small business loan portfolios, with five male and five female applicant names randomly assigned. They decide whether to approve or reject each loan. Correct decisions, such as approving performing loans or rejecting non-performing loans, will be incentivized, while incorrect decisions will incur penalties. The experiment consists of two stages. In the first stage, loan officers are randomly assigned to one of four groups. The control group makes decisions without additional interventions. In Treatment Group 1, loan officers receive feedback on their IAT scores before making decisions. Treatment Group 2 imposes higher incentives and penalties for their decisions, while Treatment Group 3 combines IAT feedback with the higher incentive and penalty structure. In the second stage, loan officers evaluate a new set of 10 loan portfolios with assistance from ChatGPT. Loan officers are further randomized into three subgroups: the first group does not receive any help from AI; the second group uses non-interactive AI, where recommendations are viewed without communication; and the last group uses interactive AI, allowing officers to ask follow-up questions. This setup enables an analysis of how AI interaction influences decision-making and whether AI-assisted tools mitigate bias.
After: To investigate gender bias in loan decisions and assess interventions to mitigate it, I am conducting a framed field experiment in Egypt. This study involves approximately 750 experienced loan officers. Loan officers make hypothetical decisions about 10 previously approved small business loan portfolios, with five male and five female applicant names randomly assigned. They decide whether to approve or reject each loan. Correct decisions, such as approving performing loans or rejecting non-performing loans, will be incentivized, while incorrect decisions will incur penalties. The experiment consists of two stages. In the first stage, loan officers are randomly assigned to one of four groups. The control group makes decisions without additional interventions. In Treatment Group 1, loan officers receive feedback on their IAT scores before making decisions. Treatment Group 2 imposes higher incentives and penalties for their decisions, while Treatment Group 3 combines IAT feedback with the higher incentive and penalty structure. In the second stage, loan officers evaluate a new set of 10 loan portfolios with assistance from ChatGPT. Loan officers are further randomized into three subgroups: the first group does not receive any help from AI; the second group uses non-interactive AI, where recommendations are viewed without communication; and the last group uses interactive AI, allowing officers to ask follow-up questions.
A schematic sketch of this two-stage assignment appears after the list of changed fields.
Field: Planned Number of Clusters
Before: 400 loan officers
After: 750 loan officers
Field: Planned Number of Observations
Before: 400 loan officers
After: 750 loan officers
Field: Sample size (or number of clusters) by treatment arms
Before: 100 for each arm.
After: In the first stage, approximately 190 loan officers will be assigned to each treatment arm (one quarter of the sample per arm). In the second stage, about 250 loan officers will be assigned to each treatment arm (one third per arm).
Field: Secondary Outcomes (Explanation)
Loan officers may exhibit different degrees of bias depending on whether the loan portfolios look good or bad, and their gender bias may differ by their own gender, IAT score, and risk attitude. With higher incentives and penalties, loan officers may also spend more time on their decisions, and their certainty about their decisions can differ by treatment, especially with higher incentives and penalties and with the help of AI.
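For reference, the sketch below illustrates the two-stage assignment described in the Experimental Design field: officers are split evenly across the four stage-1 arms (roughly one quarter of 750 per arm) and the three stage-2 AI subgroups (about 250 per arm). This is a minimal illustration only, assuming the stage-2 randomization is independent of stage 1; the arm labels, function names, and seed are hypothetical and are not taken from the registration.

```python
import random

# Illustrative two-stage assignment sketch, assuming equal allocation
# (quarters in stage 1, thirds in stage 2). Arm labels, function names,
# and the seed are hypothetical, not taken from the registration.

STAGE1_ARMS = ["control", "iat_feedback", "high_stakes", "iat_plus_high_stakes"]
STAGE2_ARMS = ["no_ai", "non_interactive_ai", "interactive_ai"]

def assign(officers, arms, rng):
    """Shuffle officers, then deal them round-robin so arms are as even as possible."""
    shuffled = officers[:]
    rng.shuffle(shuffled)
    return {officer: arms[i % len(arms)] for i, officer in enumerate(shuffled)}

if __name__ == "__main__":
    rng = random.Random(2025)                      # fixed seed for reproducibility
    officers = [f"officer_{i:03d}" for i in range(750)]
    stage1 = assign(officers, STAGE1_ARMS, rng)    # 187-188 officers per arm (about one quarter)
    stage2 = assign(officers, STAGE2_ARMS, rng)    # 250 officers per arm (one third)
    print(stage1["officer_000"], stage2["officer_000"])
```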