Experimental Study on the Impact of Generative Artificial Intelligence on Individual Career Preferences

Last registered on April 01, 2026

Pre-Trial

Trial Information

General Information

Title
Experimental Study on the Impact of Generative Artificial Intelligence on Individual Career Preferences
RCT ID
AEARCTR-0017719
Initial registration date
March 27, 2026


First published
April 01, 2026, 10:19 AM EDT


Locations

Primary Investigator

Affiliation
Harbin Institute of Technology (Shenzhen)

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2026-03-21
End date
2026-03-29
Secondary IDs
N/A
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates whether and how generative artificial intelligence (AI) influences individuals’ career choice preferences. Using a 2 (AI intervention: with vs. without) × 3 (decision context: subjective ideal, expert guess, peer guess) mixed factorial experimental design, we randomly assigned 93 university students to either a control group or an AI intervention group. Participants role-played as job seekers and rated six jobs across three decision contexts. Outcomes include skill-matched job ratings, rankings, and top-1 choice probabilities. We hypothesize that generative AI increases preference for skill-matched jobs across all three contexts and that job familiarity moderates this effect. Primary analyses include independent t-tests, linear mixed models, logistic regression, and moderation analysis with simple slopes.
External Link(s)

Registration Citation

Citation
Wang, Xinyue. 2026. "Experimental Study on the Impact of Generative Artificial Intelligence on Individual Career Preferences." AEA RCT Registry. April 01. https://doi.org/10.1257/rct.17719-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The intervention consists of providing static, pre-generated Artificial Intelligence (AI) advice during a simulated career decision-making task. Participants role-play as anonymous job seekers and are randomly assigned to one of two conditions:
1. Control Group (Independent Evaluation): Participants review the personal statements of anonymous job seekers—including education, internships, skills, and hobbies—and independently rate the person-job fit for six distinct job offers on a 1-to-10 scale. No external assistance or reference information is provided.
2. Treatment Group (AI-Assisted Evaluation): Participants complete the same evaluation task but receive an exogenous information shock: static AI-generated advice is displayed alongside each job offer. The advice, pre-generated by a large language model acting as a “senior career development consultant,” offers tailored analyses of the matching degree, highlighting professional alignments and identifying potential mismatches (e.g., distinguishing core competencies from hobby-based interests).
The AI advice is presented statically to eliminate confounding factors related to participants’ prompt-engineering skills or variation in human–AI interaction frequency, ensuring uniform treatment administration across all participants in the treatment group.
Intervention (Hidden)
No
Intervention Start Date
2026-03-24
Intervention End Date
2026-03-29

Primary Outcomes

Primary Outcomes (end points)
1. Skill-matched job rating: The average rating (on a 1-to-10 scale) assigned to three pre-defined skill-matched jobs. Skill-matched jobs are identified based on alignment between the job seeker’s profile and job requirements, as validated by the AI advice and expert judgment.
2. Skill-matched job rank: The rank order of skill-matched jobs among the six job offers, with lower ranks indicating higher preference (rank 1 = most preferred). This is derived from the sorting task in each experimental phase.
3. Top-1 choice indicator: A binary variable equal to 1 if the participant selects a skill-matched job as their most preferred option in the subjective ideal context, and 0 otherwise.
These outcomes are measured separately for each of the three decision contexts: subjective ideal, expert guess, and peer guess.
Primary Outcomes (explanation)
1. Skill-matched job rating: For each participant, we compute the mean rating across the three skill-matched jobs. Ratings are measured on a 1–10 scale, where higher scores indicate greater perceived person-job fit. Skill-matched jobs are predetermined based on the job seeker’s profile (e.g., for Job Seeker A: Analyst, Consultant, Industry Researcher; for Job Seeker B: Product Manager, Business Analyst, Risk Strategist).
2. Skill-matched job rank: Participants rank the six job offers from most to least preferred. We extract the rank positions (1–6) of the three skill-matched jobs and take the minimum rank (i.e., the highest-preference skill-matched job) as the primary rank outcome. Lower values indicate stronger preference.
3. Top-1 choice indicator: In the subjective ideal context, participants select their single most preferred job. This variable is coded as 1 if the chosen job belongs to the skill-matched set, and 0 otherwise. This binary outcome captures the discrete choice effect of AI intervention on the most consequential decision point.
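The three primary outcomes above can be sketched for a single participant as follows. This is an illustrative sketch, not the registered analysis code; the skill-matched job titles follow Job Seeker A's profile, while the three interest-matched job names are hypothetical placeholders (the registration does not name them).

```python
# Hypothetical per-participant outcome computation for Job Seeker A.
# Skill-matched set as stated in the registration; other job names are placeholders.
SKILL_MATCHED = {"Analyst", "Consultant", "Industry Researcher"}

def primary_outcomes(ratings, ranking):
    """ratings: {job: 1-10 fit score}; ranking: jobs ordered most- to least-preferred."""
    # 1. Mean rating across the three skill-matched jobs
    mean_rating = sum(ratings[j] for j in SKILL_MATCHED) / len(SKILL_MATCHED)
    # 2. Minimum (best) rank position among skill-matched jobs (1 = most preferred)
    best_rank = min(ranking.index(j) + 1 for j in SKILL_MATCHED)
    # 3. Top-1 indicator: 1 if the single most-preferred job is skill-matched
    top1 = int(ranking[0] in SKILL_MATCHED)
    return mean_rating, best_rank, top1
```

These per-participant values would then be computed separately within each of the three decision contexts.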

Secondary Outcomes

Secondary Outcomes (end points)
1. Decision confidence: Measured on a 7-point Likert scale (1 = not at all confident, 7 = very confident) assessing participants’ confidence in their person-job fit ratings and final rankings.
2. Perceived task difficulty: Measured on a 7-point Likert scale (1 = very easy, 7 = very difficult) capturing participants’ subjective assessment of the complexity of the evaluation task.
3. AI trust and adoption (treatment group only): Three items measured on 7-point Likert scales assessing (a) trust in AI advice, (b) use of AI advice as a reference, and (c) perceived influence of AI advice on final ratings.
4. Effort: Measured on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree) capturing self-reported effort investment in the ranking task.
5. Rationality of decision: Measured on a 7-point Likert scale assessing participants’ perception that their final ratings and rankings constitute a reasonable career decision.
Secondary Outcomes (explanation)
1. Decision confidence: Derived from the confidence item on the final ranking.
2. Perceived task difficulty: Single-item measure assessed after the full experiment.
3. AI trust and adoption: Composite of the three 7-point items listed above, administered to the treatment group only in the post-experiment questionnaire.
4. Effort: Single self-report item from the post-experiment questionnaire.

Experimental Design

Experimental Design
This study employs a 2 × 3 mixed factorial experimental design. The between-subjects factor is AI intervention (control vs. treatment), and the within-subjects factor is decision context (subjective ideal, expert guess, peer guess).
Participants are randomly assigned to either the control group (no AI advice) or the treatment group (AI advice provided). All participants complete three decision-making phases in a fixed order: (1) subjective ideal context, (2) expert guess context, and (3) peer guess context. In each phase, participants evaluate six job offers on a 1-to-10 scale and rank them by preference.
The primary outcomes are: (a) average rating for skill-matched jobs, (b) rank of skill-matched jobs, and (c) an indicator for whether a skill-matched job is selected as the top choice. Data are collected via an online platform. The study is pre-registered prior to data analysis.
Experimental Design Details
This study employs a 2 (AI intervention: control vs. treatment) × 3 (decision context: subjective ideal, expert guess, peer guess) mixed factorial design. AI intervention is a between-subjects factor; decision context is a within-subjects factor with fixed order.
1. Randomization and Sample: Participants are randomly assigned to the control or the treatment group using a computer-generated random number sequence. A total of 93 participants were recruited from Harbin Institute of Technology (Shenzhen). Eligibility criteria include current enrollment as an undergraduate or graduate student. All participants completed the full experiment.
2. Experimental Procedure: All participants complete three phases in a fixed order:
(1) Subjective Ideal Context: Participants evaluate six job offers on a 1–10 scale based on personal preference. Treatment group participants receive static AI-generated advice for each job.
(2) Expert Guess Context: Participants guess the ratings a senior HR expert would assign. Financial incentives are introduced: a bonus (up to 10 CNY) is awarded based on rating accuracy (absolute difference ≤1 = 10 CNY; ≤3 = 5 CNY; >3 = 0 CNY) for a randomly selected job. Treatment group participants receive the same AI advice.
(3) Peer Guess Context: Participants guess the average rating of other participants. Incentive: 5 CNY bonus if the rating error is ≤1 for a randomly selected job. Treatment group participants receive the same AI advice.
3. Role-Playing and Comprehension
Participants are randomly assigned to role-play as one of two anonymous job seekers (A or B). Each profile includes education, internships, skills, and hobbies. Before the rating tasks, participants must correctly answer five comprehension questions to proceed, ensuring understanding of the role.
4. Job Offers
Six job offers are presented for each job seeker. Three are pre-defined as skill-matched jobs (aligned with the seeker's qualifications) and three as interest-matched jobs (aligned with hobbies but with lower skill alignment). Classification is validated by AI advice and expert judgment.
5. AI Advice (Treatment Group Only)
Static AI-generated advice is displayed below each job description. Pre-generated by Deepseek acting as a “senior career development consultant,” the advice provides tailored person-job fit analysis, highlighting professional alignments and explicitly identifying mismatches for interest-matched jobs. Static presentation eliminates confounding from prompt-engineering skills or interaction variation.
6. Post-Experiment Survey
Participants complete a survey measuring decision confidence, perceived task difficulty, AI trust and adoption (treatment group only), effort, and perceived decision rationality.
7. Analysis Plan
We plan to conduct independent t-tests and Mann–Whitney U tests comparing the control and treatment groups on skill-matched job ratings and ranks.
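The accuracy bonuses in the expert-guess and peer-guess phases (item 2 above) follow simple threshold rules, which can be sketched as follows. This is an illustrative sketch of the registered payoff rules, not the platform's actual payment code.

```python
def expert_guess_bonus(participant_rating, expert_rating):
    """Expert-guess phase bonus (CNY) for the randomly selected job:
    |difference| <= 1 -> 10 CNY; <= 3 -> 5 CNY; otherwise 0 CNY."""
    err = abs(participant_rating - expert_rating)
    if err <= 1:
        return 10
    if err <= 3:
        return 5
    return 0

def peer_guess_bonus(participant_rating, peer_mean_rating):
    """Peer-guess phase bonus: 5 CNY if the guess is within 1 point
    of the other participants' average rating; otherwise 0 CNY."""
    return 5 if abs(participant_rating - peer_mean_rating) <= 1 else 0
```

Because the bonus is paid on a single randomly selected job, the expected payoff rewards accuracy on every rating rather than on any one job in particular.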
Randomization Method
Computer-generated randomization. The randomization was implemented automatically upon participant enrollment and before any experimental tasks were presented.
Randomization Unit
The randomization was conducted at the individual participant level. Each participant was independently assigned to either the control group or the treatment group. No clustering or group-level randomization was employed, as the experimental design involves a between-subjects manipulation of AI intervention with individual-level assignment.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable (non-clustered design).
Sample size: planned number of observations
100 to 120 individual participants.
Sample size (or number of clusters) by treatment arms
Control group: 50 to 60 individuals; Treatment group (AI-advice): 50 to 60 individuals.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
A priori power analysis is conducted to determine the minimum detectable effect size (MDE) for the primary outcome (skill-matched job rating) given the planned sample size. Calculations assume a two-sample independent t-test comparing the control and treatment groups.
Parameters:
- Planned total sample size: N = 100 (n₁ = 50, n₂ = 50)
- Test: two-sided independent t-test
- Significance level: α = 0.05
- Power: 1 − β = 0.80
- Standard deviation: assumed pooled SD = 1.20 (on a 1–10 scale), based on prior pilot data and similar experimental studies in the literature
Minimum detectable effect size:
- Cohen's d: 0.57
- Raw difference (on the 1–10 scale): 0.68
Interpretation: With the planned sample size of 100 participants (50 per group), the study is powered to detect an effect of Cohen's d = 0.57 or larger between the control and treatment groups on the primary outcome. This corresponds to a raw difference of approximately 0.68 points on the 10-point skill-matched job rating scale.
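As a sanity check, the standard two-sample MDE formula can be evaluated with the registered parameters. This sketch uses the normal approximation and Python's standard library, not the original power-analysis software; the t-distribution correction raises the values slightly, to the d = 0.57 and 0.68-point figures quoted above.

```python
# Back-of-envelope MDE under the registered assumptions:
# alpha = 0.05 (two-sided), power = 0.80, n = 50 per group, pooled SD = 1.20.
from statistics import NormalDist

def mde_two_sample(n_per_group, sd, alpha=0.05, power=0.80):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = 0.80
    d = (z_alpha + z_beta) * (2 / n_per_group) ** 0.5  # Cohen's d
    return d, d * sd                    # standardized MDE and raw-scale MDE

d, raw = mde_two_sample(50, 1.20)
# d ~ 0.56 and raw ~ 0.67 under the normal approximation;
# the exact t-based calculation gives ~0.57 and ~0.68.
```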
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents

AEA——数据分析计划.pdf

MD5: 996967e11de37f3840fb47726b23816a

SHA1: aa1bcdd033879b2f14fd56c5585269d15133ffff

Uploaded At: March 27, 2026

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials