
Fields Changed

Registration

Field Before After
Abstract
Before: This project investigates gender bias in small-business microfinance lending through a framed experiment conducted in Egypt. Loan officers evaluate previously approved loans with randomized applicant names. By comparing whether the same portfolios are rejected more frequently under names suggesting different genders, the study aims to identify the existence of bias. The project also explores the origins of this bias, examining whether it is statistical or taste-based and whether it reflects inclusionary or exclusionary errors, and investigates whether the extent of bias varies with the incentive structure loan officers face. Finally, it asks how such bias might be mitigated, testing whether sensitivity training, AI-assisted decision-making, or higher incentives and penalties for incorrect decisions have a positive impact.
After: This project investigates gender bias in small-business microfinance lending through a framed experiment conducted in Egypt. Loan officers evaluate previously approved loans with randomized applicant names. By comparing whether the same portfolios are rejected more frequently under names suggesting different genders, the study aims to identify the existence of bias. The project also explores the origins of this bias, examining whether it reflects inclusionary or exclusionary errors, and investigates whether the extent of bias varies with the incentive structure loan officers face. Finally, it asks how such bias might be mitigated, testing whether sensitivity training, AI-assisted decision-making, or higher incentives and penalties for incorrect decisions have a positive impact.
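The identification strategy in the abstract rests on a simple comparison: identical portfolios evaluated under randomly assigned male versus female applicant names. A minimal sketch of that difference-in-rejection-rates estimate (the data layout and key names here are illustrative assumptions, not part of the registration):

```python
def rejection_gap(decisions):
    """Difference in rejection rates between female-named and male-named
    versions of the same portfolios.

    `decisions` is a list of dicts with illustrative keys:
      "name_gender": "F" or "M" (randomly assigned to the portfolio)
      "rejected":    bool (the loan officer's decision)
    """
    by_gender = {"F": [], "M": []}
    for d in decisions:
        by_gender[d["name_gender"]].append(d["rejected"])
    rate = lambda xs: sum(xs) / len(xs)
    # Positive gap: female-named portfolios are rejected more often.
    return rate(by_gender["F"]) - rate(by_gender["M"])
```

Because names are randomized while portfolio content is held fixed, a nonzero gap is attributable to the gender signal rather than to portfolio quality.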
Last Published
Before: December 20, 2024, 01:03 PM
After: December 20, 2024, 02:08 PM
Primary Outcomes (End Points)
Before: Approval and accuracy of decisions, whether the officer follows ChatGPT's decision (binary), and questions asked of ChatGPT, by the gender of the loan portfolio.
After: Existence of bias and whether officers follow the AI's decisions.
Intervention (Hidden)
The experiment consists of two stages. In the first stage, loan officers are randomly assigned to one of four groups. The control group makes decisions without additional interventions. Treatment Group 1 receives feedback on IAT scores before making decisions. Treatment Group 2 faces higher penalties for incorrect decisions, and Treatment Group 3 combines IAT feedback with the higher penalty structure.

In the second stage, loan officers evaluate a new set of 10 loan portfolios with assistance from ChatGPT. Because AI can itself exhibit bias, ChatGPT's accuracy is set at 80%, with outcomes balanced by gender, and its recommendations include reasoning to guide decision-making. Loan officers are further randomized into two subgroups: one uses non-interactive AI, viewing recommendations without communication, while the other uses interactive AI, which allows officers to ask follow-up questions. This setup enables an analysis of how AI interaction influences decision-making and whether AI-assisted tools mitigate bias.

First, comparing how loan officers respond to the fictitious male names relative to the female names allows me to estimate a baseline level of gender bias. Second, the effects of the interventions are assessed by comparing the treatment groups to the control group. Third, I analyze the impact of AI-assisted decision-making and the role of AI communication by comparing the interactive and non-interactive AI groups in the second stage to the control group.
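The two-stage random assignment and the 80%-accurate AI recommendation described above can be sketched as follows. The group labels, function names, and the portfolio-quality flag are illustrative assumptions; the actual design may also balance AI accuracy exactly by gender (e.g., by stratifying draws) rather than only in expectation, as this sketch does.

```python
import random

# Illustrative labels for the stage-1 arms described in the intervention.
STAGE1_GROUPS = ["control", "iat_feedback", "high_penalty", "iat_feedback_plus_penalty"]
# Stage-2 AI subgroups.
STAGE2_GROUPS = ["noninteractive_ai", "interactive_ai"]

def assign_officer(rng):
    """Randomly assign a loan officer to a stage-1 arm and a stage-2 AI subgroup."""
    return rng.choice(STAGE1_GROUPS), rng.choice(STAGE2_GROUPS)

def ai_recommendation(portfolio_is_good, rng, accuracy=0.80):
    """Return an approve/reject recommendation that matches the portfolio's
    true quality with probability `accuracy` (80% in the registered design)."""
    correct = rng.random() < accuracy
    return portfolio_is_good if correct else not portfolio_is_good
```

Holding the AI's accuracy fixed across arms lets the design separate the effect of interacting with the AI from the quality of its recommendations.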