Can exploratory algorithms reduce discrimination in hiring?

Last registered on September 01, 2025

Pre-Trial

Trial Information

General Information

Title
Can exploratory algorithms reduce discrimination in hiring?
RCT ID
AEARCTR-0016496
Initial registration date
August 25, 2025

First published
September 01, 2025, 3:05 PM EDT

Locations

Some information in this trial is not available to the public.

Primary Investigator

Affiliation
Humboldt-Universität zu Berlin, WZB

Other Primary Investigator(s)

PI Affiliation
WZB

Additional Trial Information

Status
In development
Start date
2025-08-26
End date
2026-08-26
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines managers' willingness to follow different hiring algorithms and these algorithms' impact on reducing discrimination against minority groups. Focusing on the exploration-exploitation trade-off, we compare explorative algorithms, which encourage hiring from less-known applicant groups, with exploitative algorithms, which favor familiar candidate profiles. We investigate which algorithms managers prefer to follow, how they affect the hiring of gender minorities, and how these tendencies relate to managers' internalized gender stereotypes. Our findings aim to inform the design of fairer algorithmic hiring practices.
External Link(s)

Registration Citation

Citation
Baumann, Julia and Dorothea Kübler. 2025. "Can exploratory algorithms reduce discrimination in hiring?" AEA RCT Registry. September 01. https://doi.org/10.1257/rct.16496-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2025-08-26
Intervention End Date
2026-08-26

Primary Outcomes

Primary Outcomes (end points)
- Number of hires from the male versus the female applicant pool
- Willingness to follow the algorithm's recommendations (number of times the hiring choice matched the algorithm's recommendation)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
- Beliefs about the ability of men and women in the quiz
- Weight given to early negative/positive experiences in subsequent hiring decisions
- Managers' bonus earnings
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct a hiring experiment to study how managers respond to different algorithmic recommendations in recruitment decisions. Managers choose between male and female applicants who complete a sports knowledge quiz. While men typically perform better than women on sports trivia quizzes (Bordalo et al., 2019, AER), our quiz is designed so that men do not perform better than women on average. If managers hire women despite any initial stereotype they may hold against them in this task, they can learn the true performance distribution and should therefore not discriminate against women. Participants are assigned to one of three treatments: no algorithm (control), an explorative algorithm that encourages hiring from underexplored groups, or an exploitative algorithm that favors groups with higher observed productivity. Managers receive feedback on hired applicants and state beliefs about group performance over multiple rounds. We measure which algorithm managers follow more willingly and how this affects gender-biased hiring discrimination, also taking into account managers' own gender stereotypes.
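The explorative and exploitative recommendation rules themselves are not published here (the experimental design details are not available), so the following is only a minimal sketch of what such rules could look like for two applicant groups: a greedy rule that recommends the group with the higher observed success rate, and a UCB-style rule that adds an exploration bonus for rarely-hired groups. All names, parameters, and numbers are hypothetical and do not describe the authors' actual algorithms.

# Illustrative sketch only; not the algorithms used in the experiment.
import math
import random

GROUPS = ["male", "female"]


def exploitative_recommendation(successes, trials):
    """Greedy rule: recommend the group with the higher observed success rate."""
    def rate(g):
        return successes[g] / trials[g] if trials[g] > 0 else 0.0
    return max(GROUPS, key=rate)


def explorative_recommendation(successes, trials, round_number):
    """UCB-style rule: add an exploration bonus that is large for groups the
    manager has hired from only rarely, encouraging exploration."""
    def ucb(g):
        if trials[g] == 0:
            return float("inf")  # always try an unexplored group first
        mean = successes[g] / trials[g]
        bonus = math.sqrt(2 * math.log(round_number) / trials[g])
        return mean + bonus
    return max(GROUPS, key=ucb)


# Hypothetical usage: both groups succeed on the quiz at the same rate (0.6),
# mirroring the design in which men do not outperform women on average.
random.seed(1)
successes = {g: 0 for g in GROUPS}
trials = {g: 0 for g in GROUPS}
for t in range(1, 21):
    group = explorative_recommendation(successes, trials, t)
    outcome = random.random() < 0.6  # quiz success, identical rate for both groups
    trials[group] += 1
    successes[group] += int(outcome)
print(trials)  # the explorative rule keeps sampling both groups

Under equal true performance, a greedy rule can lock in on whichever group happens to do well in early rounds, while an exploration bonus keeps both groups in the choice set long enough for beliefs to converge; this is one mechanism through which an explorative algorithm could reduce stereotype-driven discrimination.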
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
For the pilot study, we collect 120 observations. Based on the effect size observed in this pilot, we will calculate the sample size required to reach at least 80% power in the main data collection.
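The registration does not specify the test or software for this power calculation; as a minimal sketch, and assuming the comparison of interest is a difference in the share of female hires between two arms tested as a two-sided comparison of proportions, such a calculation could look as follows. All input numbers are placeholders, not pilot results.

# Illustrative power calculation; the effect size, test, and software are assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical shares of female hires in two treatment arms (placeholders).
p_control, p_treatment = 0.35, 0.50

effect_size = proportion_effectsize(p_treatment, p_control)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size,
    power=0.80,              # 80% power target from the registration
    alpha=0.05,              # conventional significance level (an assumption)
    ratio=1.0,               # equal arm sizes
    alternative="two-sided",
)
print(round(n_per_arm))      # required observations per treatment arm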
Sample size: planned number of observations
120 in pilot study
Sample size (or number of clusters) by treatment arms
40 for each of the three treatment arms in pilot study
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
WZB Research Ethics Committee
IRB Approval Date
2024-11-21
IRB Approval Number
2024/11/275