
Fields Changed

Registration

Abstract

Before: The increasing prevalence of artificial intelligence (AI) in the job application process raises questions about its influence on recruiters' perceptions and decisions. This study aims to investigate the effects of perceived and actual AI assistance in application materials on recruiters' judgments of job applicants. To address this question, we will conduct an experiment involving a simulated job opening for research assistants. Lab participants, such as students, will be asked to submit their resumes, write a cover letter based on the job position, and complete a technical test. The primary participants in this study will be recruiters, who will evaluate candidates after reviewing their resumes and cover letters. We will employ a 2×2 experimental design, manipulating both the perceived use of AI assistance and the actual use of AI assistance in the cover letters. By analyzing the differences in how recruiters evaluate applicants under different conditions of perceived and actual AI assistance, we seek to understand their impact on recruiters' judgments. The findings of this study will contribute to the growing body of research on the role of AI in hiring processes and its potential implications for applicants and organizations.

After: The increasing prevalence of artificial intelligence (AI) in the job application process raises questions about its influence on recruiters' judgments and decisions. This study investigates how AI assistance in application materials affects recruiters' judgments of job applicants, focusing on both the perceived and actual use of AI. To address this, we will conduct a field experiment involving a campus-wide job opening for research assistants. Recruiters, as the primary participants, will evaluate applicants after reviewing their resumes and cover letters. The study employs a 2×2 experimental design, manipulating both the perceived and actual use of AI in generating the cover letters. By examining how recruiters evaluate applicants under these different conditions, we seek to understand the impact of AI assistance on their recruitment judgments. Our findings will contribute to the growing body of literature on the role of AI in hiring processes and its potential implications for both applicants and organizations.

Last Published

Before: May 09, 2024 02:09 PM
After: May 08, 2025 02:27 AM

Intervention (Public)

Before: Please refer to the pre-analysis plan.
After: We employ a 2×2 experimental design that manipulates both the perceived and actual use of AI-generated cover letters.

Intervention End Date

Before: September 30, 2024
After: August 24, 2025

Primary Outcomes (End Points)

Before: Please refer to the pre-analysis plan.
After: A recruiter's overall evaluation of an applicant profile.

Primary Outcomes (Explanation)

After: Recruiters' overall evaluations are elicited as a Willingness-to-Hire (WTH) score on a scale of 1 to 10. The ranking implied by the WTH scores will be incentivized against applicants' rankings in the mini replication task completed during the internship period.

Experimental Design (Public)

Before: Please refer to the pre-analysis plan.
After: We will conduct a field experiment in three steps. First, we will create a research assistant (RA) position and post the job advertisement campus-wide at the University of Michigan. The application materials submitted will serve as the basis for evaluation. Second, we will recruit two groups of recruiters to evaluate the applicants and make hiring decisions: PhD students with direct experience hiring RAs for similar roles, and HR professionals from Prolific. Third, each recruiter will review a series of applicant profiles, each including a resume and a cover letter. Here we employ a 2×2 experimental design.

Randomization Method

Before: Randomization done in office by a computer.
After: Randomization done by a Qualtrics embedded randomizer.

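For illustration only, a minimal Python sketch of the per-evaluation assignment logic (the actual randomization runs inside Qualtrics; the 50/50 split on actual AI use and the 75% signal accuracy are assumptions consistent with the expected cell counts listed below, and all names are illustrative):

```python
import random

def assign_conditions(n_evaluations: int, seed: int = 0) -> list[dict]:
    """Illustrative 2x2 assignment: actual AI use randomized 50/50; the
    perceived-use signal matches the truth 75% of the time (assumed)."""
    rng = random.Random(seed)
    assignments = []
    for i in range(n_evaluations):
        actual_ai = rng.random() < 0.5        # actual use of AI in the cover letter
        signal_correct = rng.random() < 0.75  # assumed accuracy of the noisy signal
        perceived_ai = actual_ai if signal_correct else not actual_ai
        assignments.append({"evaluation": i,
                            "actual_ai": actual_ai,
                            "perceived_ai": perceived_ai})
    return assignments
```
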
Planned Number of Observations

Before: 200 participants
After: 200 Prolific recruiters, each reviewing 8 profiles; 50 PhD student recruiters, each reviewing 8 profiles.

Sample size (or number of clusters) by treatment arms

Before: 100 in the high group, 100 in the low group
After: Randomization will be done at the individual evaluation level. Given our randomization method, the expected numbers of observations in the four conditions are: (1) 750 AI-generated and identified as such; (2) 250 AI-generated but not detected; (3) 750 human-generated and recognized as such; (4) 250 human-generated but mistaken as AI-generated.

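These counts follow from the planned sample by simple arithmetic; a quick check, under the same assumed 50/50 split on actual AI use and 75% signal accuracy (both implied by the stated counts):

```python
# (200 Prolific + 50 PhD) recruiters x 8 profiles each = 2,000 evaluations.
n_evals = (200 + 50) * 8
p_ai, p_correct = 0.5, 0.75   # assumed shares, chosen to match the stated counts

print(n_evals * p_ai * p_correct)              # 750.0 AI-generated, identified
print(n_evals * p_ai * (1 - p_correct))        # 250.0 AI-generated, not detected
print(n_evals * (1 - p_ai) * p_correct)        # 750.0 human-generated, recognized
print(n_evals * (1 - p_ai) * (1 - p_correct))  # 250.0 human-generated, mistaken
```
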
Power calculation: Minimum Detectable Effect Size for Main Outcomes

After: Based on our pilot study, which tested the effect of perceived AI use, the baseline overall evaluation of human-generated cover letters has a mean of 6.14 and a standard deviation of 1.74. To detect a 0.2 SD treatment effect at the 5% significance level with 80% power, we need at least 389 observations per group. If each recruiter evaluates 8 profiles, we need at least 98 recruiters.

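A sketch of this power calculation using statsmodels (two-sided two-sample t-test at Cohen's d = 0.2); the t-based result lands slightly above the 389 reported, which likely reflects a different approximation:

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# Two-sample t-test, two-sided: Cohen's d = 0.2, alpha = 0.05, power = 0.80.
n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(ceil(n_per_group))          # ~394 per group; the registration reports 389
print(ceil(2 * n_per_group / 8))  # ~99 recruiters at 8 profiles each (registration: 98)
```
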
Intervention (Hidden)

After: Each applicant has two cover letters: one written independently without AI (under our supervision), and one generated by us using AI with standardized prompts based on the applicant's resume. To manipulate the actual use of AI, either the human- or the AI-generated cover letter will be randomly assigned to each profile. To manipulate recruiters' perceptions, we will provide a noisy signal indicating whether AI assistance is likely to have been used in the cover letter.

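A hypothetical sketch of what a standardized prompt template could look like (the actual prompts are specified in the pre-analysis plan; the wording and function names here are illustrative only, showing the idea of holding the prompt fixed and varying only the resume):

```python
# Hypothetical template: the prompt text is constant across applicants,
# so the only varying input to the AI is the applicant's own resume.
PROMPT_TEMPLATE = (
    "You are applying for a research assistant position at a university lab. "
    "Using only the resume below, write a one-page cover letter for the role.\n\n"
    "Resume:\n{resume_text}"
)

def build_prompt(resume_text: str) -> str:
    """Fill the fixed template with one applicant's resume."""
    return PROMPT_TEMPLATE.format(resume_text=resume_text)
```
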
Secondary Outcomes (End Points)

After: A recruiter's overall evaluation of an applicant's technical ability and willingness to exert effort.

Secondary Outcomes (Explanation)

After: Recruiters' evaluations of technical ability and willingness to exert effort are elicited as beliefs about the applicant's performance in the assessment tasks and are incentivized.