The Impact of AI-assistance on Recruitment Judgment

Last registered on July 21, 2025


Trial Information

General Information

Title
The Impact of AI-assistance on Recruitment Judgment
RCT ID
AEARCTR-0013527
Initial registration date
April 29, 2024


First published
May 09, 2024, 2:09 PM EDT


Last updated
July 21, 2025, 12:30 PM EDT


Locations

There is information in this trial that is unavailable to the public (access available on request).

Primary Investigator

Affiliation
University of Michigan

Other Primary Investigator(s)

PI Affiliation
University of Michigan

Additional Trial Information

Status
In development
Start date
2024-04-29
End date
2026-05-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The increasing prevalence of artificial intelligence (AI) in the job application process raises questions about its influence on recruiters' judgments and decisions. This study investigates how AI assistance in application materials affects recruiters' judgments of job applicants, focusing on both the perceived and actual use of AI. To address this, we will conduct a field experiment involving a campus-wide job opening for research assistants. Recruiters, as the primary participants, will evaluate applicants after reviewing their resumes and cover letters. The study employs a 2x2 experimental design, manipulating both the perceived and actual use of AI in generating the cover letters. By examining how recruiters evaluate applicants under these different conditions, we seek to understand the impact of AI assistance on their recruitment judgments. Our findings will contribute to the growing body of literature on the role of AI in hiring processes and its potential implications for both applicants and organizations.
External Link(s)

Registration Citation

Citation
Qiu, Jingyi and Qingyi Wang. 2025. "The Impact of AI-assistance on Recruitment Judgment." AEA RCT Registry. July 21. https://doi.org/10.1257/rct.13527-1.3
Experimental Details

Interventions

Intervention(s)
We employ a 2×2 experimental design that manipulates both the perceived and actual use of AI-generated cover letters.
Intervention Start Date
2024-04-29
Intervention End Date
2025-08-24

Primary Outcomes

Primary Outcomes (end points)
A recruiter's overall evaluation of an applicant profile
Primary Outcomes (explanation)
Recruiters' overall evaluations are provided as a Willingness-to-Hire (WTH) score on a scale of 1 to 10. The ranking implied by the WTH scores is incentivized: payoffs depend on how the applicants actually rank in a mini replication task completed during the internship period.

Secondary Outcomes

Secondary Outcomes (end points)
A recruiter's overall evaluation of an applicant's technical ability and willingness to exert effort
Secondary Outcomes (explanation)
Recruiters' evaluations of technical ability and willingness to exert effort are elicited as beliefs about the applicant’s performance in the assessment tasks and are incentivized.

Experimental Design

Experimental Design
We will conduct a field experiment in three steps. First, we will create a research assistant (RA) position and post the job advertisement campus-wide at the University of Michigan. The application materials submitted will serve as the basis for evaluation. Second, we will recruit two groups of recruiters to evaluate the applicants and make hiring decisions. Our study involves two types of evaluators: PhD students with direct experience hiring RAs for similar roles, and HR professionals recruited via Prolific. Third, each recruiter will review a series of applicant profiles, each including a resume and a cover letter. Here we employ a 2×2 experimental design that manipulates both the perceived and actual use of AI in generating the cover letters.
Experimental Design Details
Not available
Randomization Method
Randomization done by a Qualtrics embedded randomizer
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
200 Prolific recruiters, each reviewing 6 profiles; 50 PhD student recruiters, each reviewing 6 profiles (1,500 evaluations in total)
Sample size (or number of clusters) by treatment arms
The randomization will be done at the individual evaluation level. Given our randomization method, the expected numbers of observations falling into each condition are:
(1) 562 AI-generated and identified as AI-generated
(2) 188 AI-generated but not identified as AI-generated
(3) 562 human-generated and recognized as human-generated
(4) 188 human-generated but mistaken for AI-generated
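As a sanity check, the expected cell counts can be reconciled with the planned number of evaluations (a minimal arithmetic sketch; the roughly 75/25 split within each actual-source condition is inferred from the registered counts, not stated explicitly in the registration):

```python
# Planned evaluations: 200 Prolific + 50 PhD student recruiters, 6 profiles each.
recruiters = {"prolific": 200, "phd": 50}
profiles_per_recruiter = 6
total = sum(recruiters.values()) * profiles_per_recruiter
print(total)  # 1500

# Registered expected counts by (actual source, perceived source) cell.
arms = {
    "ai_identified": 562,
    "ai_not_identified": 188,
    "human_recognized": 562,
    "human_mistaken_for_ai": 188,
}
assert sum(arms.values()) == total  # the four cells exhaust all 1,500 evaluations

# Within each actual-source condition (750 evaluations), the perceived split
# is roughly 75/25 (562.5 / 187.5 before rounding).
print(562 / 750)  # ≈ 0.75
```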
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our pilot study tested the effect of perceived AI use. Given a baseline (human-generated) overall evaluation with mean 6.14 and standard deviation 1.74, detecting a 0.2 SD treatment effect at the 5% significance level with 80% power requires at least 389 observations per group. With each recruiter evaluating 6 profiles, we need at least 130 recruiters.
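The required sample size can be reproduced with the standard two-sample normal-approximation formula (a minimal sketch using only the Python standard library; the registered figure of 389 per group likely comes from a slightly different approximation or software default, so the result here is close but not identical):

```python
from math import ceil
from statistics import NormalDist

# Inputs from the registration: baseline mean 6.14, SD 1.74,
# target effect 0.2 SD, two-sided alpha = 0.05, power = 0.80.
sd = 1.74
effect = 0.2 * sd            # minimum detectable difference in raw score units
alpha, power = 0.05, 0.80

z = NormalDist().inv_cdf
# Two-sample formula: n per group = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2,
# where d is the standardized effect size (Cohen's d).
d = effect / sd              # = 0.2
n_per_group = ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)
print(n_per_group)           # 393 under the normal approximation

# With two groups and 6 evaluations per recruiter:
recruiters_needed = ceil(2 * n_per_group / 6)
print(recruiters_needed)     # 131, close to the registered figure of 130
```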
Supporting Documents and Materials

There is information in this trial that is unavailable to the public (access available on request).
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Michigan
IRB Approval Date
2024-04-05
IRB Approval Number
HUM00252659
Analysis Plan

There is information in this trial that is unavailable to the public (access available on request).