Experimental Study on the Impact of Generative Artificial Intelligence on the Quality of Individual Career Decision-making

Last registered on January 22, 2026

Pre-Trial

Trial Information

General Information

Title
Experimental Study on the Impact of Generative Artificial Intelligence on the Quality of Individual Career Decision-making
RCT ID
AEARCTR-0017680
Initial registration date
January 18, 2026

First published
January 22, 2026, 1:45 PM EST

Locations

Primary Investigator

Affiliation
Harbin Institute of Technology (Shenzhen)

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-11-03
End date
2026-03-14
Secondary IDs
N/A
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This randomized controlled experiment studies how generative artificial intelligence affects college students' career decision-making. Participants are randomly assigned to rank job opportunities with or without the assistance of generative AI. Decision quality is measured objectively by comparing participants' rankings with benchmark rankings derived from a BERT-based semantic matching model. The study tests whether AI assistance improves the accuracy of job matching and enhances decision confidence, and whether these effects vary with individual characteristics (such as gender, education level, and major) or with trust in artificial intelligence. The results aim to inform the application of AI in career counseling and to provide objective indicators for evaluating decision quality.
External Link(s)

Registration Citation

Citation
Wang, Xinyue. 2026. "Experimental Study on the Impact of Generative Artificial Intelligence on the Quality of Individual Career Decision-making." AEA RCT Registry. January 22. https://doi.org/10.1257/rct.17680-1.0
Sponsors & Partners

Sponsors

Experimental Details

Interventions

Intervention(s)
Participants are randomly assigned to either an AI-assisted decision-making condition or a control condition. In the AI-assisted condition, participants are allowed to consult a generative artificial intelligence tool when completing a simulated career decision task. In the control condition, participants complete the same task without access to any external decision-support tools. Aside from the availability of AI assistance, all other experimental procedures are identical across groups.
Intervention Start Date
2026-03-01
Intervention End Date
2026-03-02

Primary Outcomes

Primary Outcomes (end points)
Career decision quality, measured as the rank-order consistency between participants’ job-offer rankings and an algorithm-generated benchmark ranking.
Primary Outcomes (explanation)
Career decision quality is operationalized using Kendall’s Tau rank correlation coefficient. For each participant, the coefficient is calculated between the participant’s ranking of job offers and a benchmark ranking generated by a person–job matching algorithm based on semantic similarity between résumé information and job descriptions. Higher values indicate greater alignment with the benchmark and higher decision quality.
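The primary outcome described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual pipeline: the benchmark is generated here from hypothetical precomputed embedding vectors (standing in for the BERT-based résumé and job-description embeddings), and Kendall's tau is implemented in its tie-free (tau-a) form, which suffices when each participant produces a strict ranking of a small set of job offers.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def benchmark_ranking(resume_emb, job_embs):
    """Rank job offers (1 = best) by semantic similarity to the resume embedding."""
    order = sorted(range(len(job_embs)),
                   key=lambda j: cosine(resume_emb, job_embs[j]),
                   reverse=True)
    ranks = [0] * len(job_embs)
    for rank, job in enumerate(order, start=1):
        ranks[job] = rank
    return ranks

def kendall_tau(x, y):
    """Kendall's tau-a between two rank lists (no ties assumed)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

For example, a participant ranking of `[1, 3, 2, 4, 5]` against a benchmark of `[1, 2, 3, 4, 5]` swaps one adjacent pair, giving `kendall_tau([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])` = 0.8.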

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes include: (1) alternative measures of ranking accuracy such as Spearman’s rho and top-k overlap rates; (2) self-reported decision confidence; (3) perceived decision difficulty; and (4) adoption of AI advice in the treatment group.
Secondary Outcomes (explanation)
Alternative ranking-based outcomes are computed using standard non-parametric measures of rank similarity. Decision confidence and perceived difficulty are measured using Likert-scale survey items administered after the task. AI advice adoption is measured by the degree to which participants’ final rankings align with the AI-generated recommendations.
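The two alternative rank-similarity measures named above can be sketched as follows. The Spearman formula shown assumes no ties (as with strict rankings of a small offer set); the top-k overlap takes rank lists in which 1 denotes the best offer.

```python
def spearman_rho(x, y):
    """Spearman's rho for two rank lists without ties."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def top_k_overlap(x, y, k):
    """Share of the top-k offers (rank <= k) shared by two rank lists."""
    top_x = {i for i, r in enumerate(x) if r <= k}
    top_y = {i for i, r in enumerate(y) if r <= k}
    return len(top_x & top_y) / k
```

For instance, `spearman_rho([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])` = 0.9, and two rankings that agree on which two offers are best (in either order) have a top-2 overlap of 1.0.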

Experimental Design

Experimental Design
The study employs a randomized controlled experimental design. Eligible participants are randomly assigned at the individual level to either an AI-assisted treatment group or a control group. All participants complete the same simulated career decision task, with the only difference being access to a generative AI tool in the treatment condition. Outcomes are measured immediately after task completion.
Experimental Design Details
Not available
Randomization Method
Randomization is conducted by a computer using a pre-generated random assignment algorithm. Participants are assigned to treatment or control groups at the individual level, with stratification by key baseline characteristics.
Randomization Unit
Randomization is performed at the individual level. Prior to randomization, participants are stratified by gender (Sex), education level (Education), and academic major (Major) to ensure balanced distribution of these characteristics across the control and treatment groups. Within each stratum, participants are then randomly assigned to either the control group (no AI assistance) or the treatment group (AI-assisted).
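The stratified individual-level assignment described above can be sketched as follows. This is an illustrative implementation, not the study's actual assignment code; the field names (`id`, `sex`, `education`, `major`) and the fixed seed are placeholders. Within each stratum, participants are shuffled and then alternately assigned, which keeps arm sizes within each stratum balanced to within one person.

```python
import random
from collections import defaultdict

def stratified_assign(participants, seed=12345):
    """Assign participants to 'treatment' or 'control' within strata
    defined by (sex, education, major). `participants` is a list of
    dicts with keys 'id', 'sex', 'education', 'major'."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[(p["sex"], p["education"], p["major"])].append(p)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)  # random order within the stratum
        for i, p in enumerate(members):
            assignment[p["id"]] = "treatment" if i % 2 == 0 else "control"
    return assignment
```

Fixing the seed makes the pre-generated assignment reproducible, as the registration's randomization method requires.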
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable (non-clustered design).
Sample size: planned number of observations
180 to 240 individual participants.
Sample size (or number of clusters) by treatment arms
Control group: 60 to 120 individuals; Treatment group (AI-assisted): 60 to 120 individuals.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
A priori power calculations were conducted for the primary outcome, Kendall’s τ (tau), a measure of ranking consistency. With a total sample size of N = 180 (90 per arm), a two-tailed alpha of 0.05, and an assumed standard deviation of τ of approximately 0.25 (based on pilot data and prior literature), an independent two-sample t-test achieves 80% power to detect a minimum detectable effect (MDE) of approximately 0.10 in Kendall’s τ. This corresponds to a small-to-moderate standardized effect size (Cohen’s d ≈ 0.42). The calculation assumes no clustering. If the achieved sample size reaches the upper target of N = 240 (120 per arm), the MDE shrinks with 1/√n to approximately 0.09.
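The stated design parameters (SD ≈ 0.25 for τ, two-tailed α = 0.05, 80% power, equal arms) can be checked with the standard normal-approximation MDE formula for a two-sample comparison of means, MDE = (z₁₋α/₂ + z₁₋β) · SD · √(2/n). This sketch uses only the standard library; under these assumptions it yields an MDE of roughly 0.10 in τ units at 90 per arm.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_sample(n_per_arm, sd, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-sample mean comparison
    (normal approximation to the two-sample t-test, equal arm sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # quantile for the target power
    return (z_alpha + z_beta) * sd * sqrt(2 / n_per_arm)

print(round(mde_two_sample(90, 0.25), 3))   # N = 180 total -> 0.104
print(round(mde_two_sample(120, 0.25), 3))  # N = 240 total -> 0.09
```

The normal approximation slightly understates the t-test MDE at these sample sizes, but the difference is negligible for n ≥ 90 per arm.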
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number