A Randomized Controlled Trial of an AI-Based Guidance Tool for Post-Secondary Planning

Last registered on November 26, 2025

Trial Information

General Information

Title
A Randomized Controlled Trial of an AI-Based Guidance Tool for Post-Secondary Planning
RCT ID
AEARCTR-0017212
Initial registration date
November 19, 2025

First published
November 25, 2025, 7:35 AM EST

Last updated
November 26, 2025, 3:17 PM EST

Locations

Location information in this trial is not available to the public.

Primary Investigator

Affiliation
KU Leuven

Other Primary Investigator(s)

PI Affiliation
Leuven Economics of Education Research, KU Leuven

Additional Trial Information

Status
In development
Start date
2025-11-30
End date
2026-11-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Persistent gaps in higher education attainment among students from migrant backgrounds may be driven by deficits in non-cognitive skills and by information asymmetries. This study outlines a multi-armed randomized controlled trial (RCT) testing whether an AI chatbot can increase these students' intention to enroll in higher education. We randomly assign students within secondary schools to a control group or to one of three treatment arms that incrementally add engaging information, self-efficacy- and patience-building modules, and relatable role-model stories.
External Link(s)

Registration Citation

Citation
De Witte, Kristof and Jaime Polanco-Jimenez. 2025. "A Randomized Controlled Trial of an AI-Based Guidance Tool for Post-Secondary Planning." AEA RCT Registry. November 26. https://doi.org/10.1257/rct.17212-1.2
Experimental Details

Interventions

Intervention(s)
The intervention consists of a structured interaction with an AI-driven educational chatbot designed to assist secondary school students with migrant backgrounds. Participants are randomly assigned to one of four AI personas to test specific psychological levers regarding higher education enrollment (the nested structure of the arms is sketched after the list):
1. Control (Neutral): A purely factual, reactive information assistant (testing rational persuasion).
2. Treatment 1 (Friendly): An engaging, enthusiastic guide focusing on information and simulated experiences (testing emotional engagement).
3. Treatment 2 (Mentor): Adds self-efficacy building and future-orientation exercises to the Friendly persona (testing self-efficacy).
4. Treatment 3 (Storyteller): Adds simulated peer testimonials and relatable role-modeling to the Mentor persona (testing social identification).
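
The arms are strictly nested: each treatment persona equals the previous one plus additional modules. A minimal sketch of that nesting, with hypothetical module names (the registration describes the arms only in prose):

```python
# Hypothetical module names; only the nesting structure is taken from the registration.
CONTROL     = ["factual_information"]                                       # Neutral
FRIENDLY    = CONTROL + ["engaging_tone", "simulated_experiences"]          # T1
MENTOR      = FRIENDLY + ["self_efficacy_exercises", "future_orientation"]  # T2
STORYTELLER = MENTOR + ["peer_testimonials", "role_model_stories"]          # T3

PERSONAS = {"Control": CONTROL, "Friendly": FRIENDLY,
            "Mentor": MENTOR, "Storyteller": STORYTELLER}
```

This structure means any difference between adjacent arms isolates the marginal contribution of exactly one set of modules.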

Intervention Start Date
2025-11-30
Intervention End Date
2026-11-30

Primary Outcomes

Primary Outcomes (end points)
Intention to Enroll in Higher Education.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Academic Self-Efficacy
2. Revealed Preference (Information Seeking)
3. Future Planning Concreteness
4. Perceived AI Style (Manipulation Check)
Secondary Outcomes (explanation)
1. Academic Self-Efficacy: Composite score of 3 Likert-scale items (e.g., "I believe I have what it takes to succeed...").
2. Revealed Preference: Binary outcome (1/0) indicating if the student requested an optional "University Starter Pack" PDF at the end of the session.
3. Future Planning: Ordinal coding of open-text responses regarding specific next steps.
4. Perceived AI Style: Semantic differential scales to validate if the AI was perceived as warm, objective, supportive, or narrative-driven as intended.
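
As an illustration of how the self-efficacy composite could be scored, here is a minimal sketch assuming the three Likert items sit in a pandas DataFrame; the item names are placeholders, and the registration does not specify the exact aggregation rule:

```python
import pandas as pd

def self_efficacy_score(df: pd.DataFrame,
                        items=("se_item_1", "se_item_2", "se_item_3")) -> pd.Series:
    """Average the three Likert items, then standardize to mean 0, SD 1."""
    raw = df[list(items)].mean(axis=1)           # item mean per student
    return (raw - raw.mean()) / raw.std(ddof=1)  # z-score across the sample
```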

Experimental Design

Experimental Design
This is a multi-armed randomized controlled trial (RCT) with a stratified design. Participants, recruited through schools and NGOs, are individually randomized, stratified by school, to one of four experimental groups; because each student interacts with only one version of the AI chatbot, contamination across arms is limited. The four versions are:

1. Control (Neutral): A purely factual, reactive information assistant (testing rational persuasion).
2. Treatment 1 (Friendly): An engaging, enthusiastic guide focusing on information and simulated experiences (testing emotional engagement).
3. Treatment 2 (Mentor): Adds self-efficacy building and future-orientation exercises to the Friendly persona (testing self-efficacy).
4. Treatment 3 (Storyteller): Adds simulated peer testimonials and role-modeling to the Mentor persona (testing social identification).
Data are collected via pre- and post-interaction surveys embedded directly in the chat application.
Experimental Design Details
Not available
Randomization Method
Computerized randomization (stratified by school).

Randomization Unit
Individual (Stratified by School)
Was the treatment clustered?
No
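
A minimal sketch of the computerized stratified assignment, assuming student records sit in a pandas DataFrame with a school identifier; the column names, in-code arm labels, and seed are illustrative, not taken from the protocol:

```python
import numpy as np
import pandas as pd

ARMS = ["Control", "Friendly", "Mentor", "Storyteller"]

def assign_arms(students: pd.DataFrame, school_col: str = "school_id",
                seed: int = 20251130) -> pd.Series:
    """Balanced random assignment to the four arms within each school (stratum)."""
    rng = np.random.default_rng(seed)
    assignment = pd.Series(index=students.index, dtype=object)
    for _, idx in students.groupby(school_col).groups.items():
        order = rng.permutation(np.asarray(idx))  # random order within the school
        # Cycle the arm labels over the shuffled students, so within each
        # school the arm counts differ by at most one.
        assignment.loc[order] = np.resize(ARMS, len(order))
    return assignment

# Usage: students["arm"] = assign_arms(students)
```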

Experiment Characteristics

Sample size: planned number of clusters
20 Schools
Sample size: planned number of observations
800 Students
Sample size (or number of clusters) by treatment arms
200 students Control, 200 students Friendly, 200 students Mentor, 200 students Storyteller (Stratified across 20 schools).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations were performed for a stratified randomized design (individual randomization blocked by school) using the within estimator (school fixed effects) at a 5% significance level with 80% power.

Primary scenario (target): With a planned sample of 800 students (approx. 40 schools), and assuming that the inclusion of baseline covariates and school fixed effects yields an effective R-squared of 0.30, the design has 80% power to detect a minimum detectable effect (MDE) of 0.235 standard deviations in the primary outcome.

Sensitivity analysis: We recognize that power depends on recruitment success, so we estimated the sample size required to detect a small effect size of 0.179 SD.
- Cluster design (conservative): If we were forced to randomize at the school level (ICC = 0.05), detecting 0.179 SD would require 1,569 students (79 schools).
- Optimistic scenario (higher enrollment): If recruitment exceeds expectations and reaches 920 students across 46 schools, the MDE improves to 0.179 SD under the planned stratified individual-randomization design. This represents the best-case scenario given our design and assumptions.

All power calculations were conducted using standard formulas for clustered and stratified experimental designs (Bloom 2005; McKenzie 2012), incorporating the variance reduction from covariate adjustment and fixed effects.
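
As a transparency check on the primary scenario, the sketch below reproduces the registered MDE under our reading that 0.235 SD refers to a pairwise comparison of one 200-student arm against the 200-student control, with covariates and school fixed effects absorbing R-squared = 0.30 of outcome variance; this interpretation is ours, not stated explicitly above:

```python
from scipy.stats import norm

def mde_sd(n_treat: int, n_control: int, r2: float,
           alpha: float = 0.05, power: float = 0.80) -> float:
    """MDE in SD units for an individually randomized two-group comparison
    with covariate/fixed-effect adjustment R^2 (normal approximation)."""
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # ~2.80 for 5% / 80%
    return m * ((1 - r2) * (1 / n_treat + 1 / n_control)) ** 0.5

print(round(mde_sd(200, 200, 0.30), 3))  # 0.234, in line with the registered 0.235 SD
# (the small gap reflects normal vs. t critical values)
```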
IRB

Institutional Review Boards (IRBs)

IRB Name
Toetsing Privacy en Ethiek (PRET)
IRB Approval Date
2025-11-25
IRB Approval Number
G-2025-9672-R2(MAR)