AI or human? Applicants’ decisions in discrimination settings

Last registered on November 17, 2023

Pre-Trial

Trial Information

General Information

Title
AI or human? Applicants’ decisions in discrimination settings
RCT ID
AEARCTR-0012308
Initial registration date
November 07, 2023

First published
November 17, 2023, 7:32 AM EST

Locations

Location information is not available to the public.

Primary Investigator

Affiliation
University of Cologne

Other Primary Investigator(s)

PI Affiliation
University of Cologne
PI Affiliation
University of Cologne

Additional Trial Information

Status
In development
Start date
2023-11-23
End date
2024-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
While there is agreement on the great potential for efficiencies and savings that the use of artificial intelligence (AI) can bring to recruitment, there is also an ongoing debate about the ethical and legal implications of hiring algorithms. The existing literature on the perception of hiring algorithms is ambiguous. We therefore investigate whether anticipated discrimination influences applicants’ preferences for a hiring algorithm. We conduct an online experiment based on the design of Dargnies et al. (2022) in which applicants are asked to decide whether a human manager or an AI should make their hiring decision. The novelty of our approach is the simulation of discrimination settings. We distinguish between taste-based and statistical discrimination. Using artificially formed groups, we can address a wide range of characteristics on the basis of which candidates can be discriminated against. Our findings therefore extend to a broad group of applicants. The results of our study contribute to a better understanding of the potential of AI-assisted hiring processes, which can help to attract more diverse groups of applicants.
External Link(s)

Registration Citation

Citation
Santiago Wolf, Luisa, David Stommel and Paula Thevißen. 2023. "AI or human? Applicants’ decisions in discrimination settings." AEA RCT Registry. November 17. https://doi.org/10.1257/rct.12308-1.0
Experimental Details

Interventions

Intervention(s)
The online experiment investigates whether the potential for discrimination influences applicants’ preferences for a hiring algorithm.
To this end, applicants can choose to be hired by a human manager or by an artificial intelligence in different hiring situations, each of which entails the potential for discrimination.
Intervention Start Date
2023-11-23
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variable is the applicant’s decision on whether the hiring algorithm or the human manager makes their hiring decision.

The main analysis will test whether the choices between the two decision makers differ depending on the potential for discrimination.
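The registration does not specify a statistical test for this comparison. As a purely illustrative sketch, assuming the analysis compares the shares of applicants choosing the AI across the three treatment arms, a contingency-table test could look as follows in Python; all counts are hypothetical placeholders, not data from the study.

# Hedged sketch: compare the share of applicants choosing the AI across arms.
# The choice of test and every count below are hypothetical assumptions.
from scipy.stats import chi2_contingency

# Rows: baseline, taste-based, statistical; columns: [chose AI, chose manager]
table = [
    [140, 110],  # baseline (n = 250, hypothetical split)
    [310, 190],  # taste-based discrimination (n = 500)
    [280, 220],  # statistical discrimination (n = 500)
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")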
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes are the applicants’ beliefs about the decision makers’ choices as well as their beliefs about performance in the treatment groups. These outcomes give insight into the perceived potential for discrimination.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment will be conducted via Prolific.

We conduct an online experiment based on the design of Dargnies et al. (2022). In the experiment, participants are assigned either the role of applicant or the role of manager. Applicants perform two work sample tasks and one job task. Managers and the algorithm each receive the work sample task performances of a pair of applicants with similar work sample task performance and decide whom to hire on that basis. Before the hiring decision, both the manager and the algorithm receive a random subset of applicants’ work sample and job task performance data as a training set. Applicants are informed about this training opportunity and about how the algorithm and the manager make the hiring decision. Applicants are then asked to decide whether a human manager or an artificial intelligence should make their hiring decision.
The novelty of our study is the simulation of anticipated discrimination. Following standard approaches to labor market discrimination, we distinguish between taste-based and statistical discrimination. The perceived potential for discrimination is induced through minimal groups based on colors as well as through training opportunities for applicants.
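The implementation of the hiring algorithm is not disclosed in the registration (see the design details below). As a minimal sketch of the hiring rule described above, assuming a simple linear model is fit on the training set, the decision step could look as follows in Python; the function name, model class, and all data are illustrative assumptions.

# Illustrative only: the actual algorithm is not specified in the registration.
# The sketch predicts job task performance from work sample scores and hires
# the applicant in the pair with the higher predicted performance.
from sklearn.linear_model import LinearRegression

def algorithmic_hire(train_work_samples, train_job_scores, pair_work_samples):
    model = LinearRegression()
    model.fit(train_work_samples, train_job_scores)  # random training subset
    predicted = model.predict(pair_work_samples)     # the matched pair
    return int(predicted[1] > predicted[0])          # index of the hired applicant

# Hypothetical data: two work sample scores per applicant
train_X = [[12, 9], [7, 11], [15, 14], [6, 5]]
train_y = [10, 9, 16, 4]
print(algorithmic_hire(train_X, train_y, [[10, 10], [11, 9]]))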

Further survey evidence as well as beliefs about other participants’ actions and performances are elicited at the end of the experiment.
Experimental Design Details
Not available
Randomization Method
The assignment to roles (applicant, manager) as well as to the treatments (baseline, taste-based discrimination, statistical discrimination) is done via Prolific.
The assignment to the groups within each treatment (in- or outgroup; training or non-training group) is done in oTree.
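Since the within-treatment assignment runs in oTree, a minimal sketch of how such balanced random group assignment could be implemented in an oTree app follows; the app, field, and session-config names are hypothetical, not taken from the study’s code.

# Hypothetical oTree 5 app snippet; all names are illustrative assumptions.
from otree.api import BaseConstants, BaseSubsession, BaseGroup, BasePlayer, models
import random

class C(BaseConstants):
    NAME_IN_URL = 'hiring'
    PLAYERS_PER_GROUP = None
    NUM_ROUNDS = 1

class Subsession(BaseSubsession):
    pass

class Group(BaseGroup):
    pass

class Player(BasePlayer):
    # e.g. 'green'/'purple' in the taste-based arm,
    # 'training'/'non_training' in the statistical arm
    subgroup = models.StringField()

def creating_session(subsession: Subsession):
    # Balanced random assignment within a treatment arm: half of the
    # players receive each label, shuffled across the session.
    treatment = subsession.session.config.get('treatment', 'taste_based')
    labels = (['green', 'purple'] if treatment == 'taste_based'
              else ['training', 'non_training'])
    players = subsession.get_players()
    assignments = (labels * (len(players) // 2 + 1))[:len(players)]
    random.shuffle(assignments)
    for player, label in zip(players, assignments):
        player.subgroup = label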
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The number of clusters is the same as the number of observations.
Sample size: planned number of observations
1350 participants on Prolific
Sample size (or number of clusters) by treatment arms
We plan to have 1250 applicants and 100 managers.

Of the 1250 applicants, 250 are assigned to the baseline condition. 500 belong to the taste-based discrimination treatment, with 250 in the green group and 250 in the purple group. The remaining 500 are assigned to the statistical discrimination treatment, with 250 in the training group and 250 in the non-training group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee of the Faculty of Management, Economics and Social Sciences (ERC-FMES), University of Cologne
IRB Approval Date
2023-09-05
IRB Approval Number
2300050LW