AI Recommendations for Centralized University Applications

Last registered on April 23, 2026

Pre-Trial

Trial Information

General Information

Title
AI Recommendations for Centralized University Applications
RCT ID
AEARCTR-0018368
Initial registration date
April 14, 2026

First published
April 23, 2026, 9:23 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Yale University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2026-04-20
End date
2026-05-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study uses a web-based platform to provide college-application recommendations to high-school students in Greece based on each student’s own profile (e.g., academic track/grade, interests, location preferences, and self-reported priorities). Students will interact with the platform, receive a list of additional recommended college programs, and complete a short survey about their perceptions of the recommendation and their intended choices. The study will measure how students evaluate recommendations provided by a web-based system and which criteria they consider legitimate, useful, and trustworthy when forming application plans.

Students will:
1. be randomly assigned to control and treatment groups (in a 1:2 ratio),
2. access the platform,
3. enter or confirm basic profile information and preferences relevant to college applications,
4. receive a recommended college-application portfolio generated by the platform based on their profile, and
5. complete a brief questionnaire measuring perceptions of the recommendation and intended adoption.

To study how students weigh different recommendation trade-offs, the platform will use student responses to present recommendations that emphasize different criteria (or make these criteria more/less salient), such as location preferences, fit with personal goals, or admission probability.

We will analyze differences in outcomes before and after the platform experience and by school and student characteristics (e.g., gender, grade, self-reported interests, and academic performance indicators).
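As a purely illustrative sketch of such a comparison (the synthetic data, outcome, and column names below, e.g. y_post, treated, and school_id, are hypothetical and not part of the registered analysis plan), post-interaction outcomes could be compared across arms with baseline covariates and school-clustered standard errors roughly as follows:

# Illustrative only: synthetic data and hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1100
df = pd.DataFrame({
    "treated": rng.choice([0, 1], size=n, p=[1/3, 2/3]),  # 1:2 control/treatment
    "school_id": rng.integers(0, 15, size=n),
    "grade": rng.choice([11, 12], size=n),
})
df["y_post"] = 0.2 * df["treated"] + rng.normal(size=n)   # placeholder outcome

# Post-interaction outcome regressed on treatment with covariates;
# standard errors clustered at the school level.
fit = smf.ols("y_post ~ treated + grade + C(school_id)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(fit.params["treated"])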
External Link(s)

Registration Citation

Citation
Monachou, Faidra. 2026. "AI Recommendations for Centralized University Applications." AEA RCT Registry. April 23. https://doi.org/10.1257/rct.18368-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-04-20
Intervention End Date
2026-05-01

Primary Outcomes

Primary Outcomes (end points)
The study will include the following outcome variables:
A. Recommendation Use and Exploration
• Recommendation Adoption (Binary): Indicates whether the student reports that they would list one or more of the programs recommended by the platform rather than retain only their initial program choices.
• Exploration Intensity (Numerical): Measures the extent of student interaction with recommendations, such as number of recommendations viewed and whether the student chooses to see 5 additional recommendations.
• Exploration Breadth (Numerical): Captures how broadly the student explored across different fields of study, institutions, or geographic locations.
• Recommendation Consideration (Binary / Numerical): For each recommended program, indicates whether the student marks (by clicking) that they would consider listing it.
• Recommendation Rejection (Binary / Numerical): For each recommended program, indicates whether the student marks (by clicking) that they would not consider listing it.
• Choice-Set Update (Binary): Indicates whether the student changes any of the main degree programs selected after interacting with the app.
• Top-Choice Update (Binary): Indicates whether the student’s stated top choice changes after interacting with the app.
• Preference-List Revision Intent (Numerical): Measures how likely the student reports they are to change some of the choices in their final application before submission.
• Additional-Recommendation Click-Through (Binary): Indicates whether the student elects to view the optional 5 additional recommendations.
B. Baseline Preferences and Stated Choices
• Initial Main Choice Set (Categorical): The set of up to five degree programs the student selects before interacting with the app.
• Initial Top Choice (Categorical): The program the student identifies as their current top choice before interacting with the app.
• Post-Interaction Main Choice Set (Categorical): The set of up to five degree programs the student selects after interacting with the app.
• Post-Interaction Top Choice (Categorical): The program the student identifies as their top choice after interacting with the app.

C. Knowledge and Awareness
• Program Knowledge (Numerical): Assesses the student’s self-reported knowledge of available college programs after using the platform.
• Awareness of Available Options (Numerical): Measures whether the student reports being aware of the different degree program options available to them.
• Perceived Information Gain (Numerical): Measures whether the student reports that the app provided information about some programs that they did not already know.
• Program Discovery (Numerical): Measures whether the student reports that the app introduced them to programs they had not previously considered.
D. Trust, Understanding, and Goal Alignment
• Trust in Intent (Numerical): Measures trust that the platform is acting in the student’s best interest.
• Trust in Competence (Numerical): Measures trust that the platform is capable of generating accurate and appropriate recommendations.
• Understanding of Rationale (Numerical): Measures the student’s self-assessed understanding of why the recommendation was generated.
• Compatibility of Goals (Numerical): Measures perceived alignment between the platform’s objectives and the student’s own goals when forming the recommendation.
E. Confidence, Satisfaction, and Motivation
• Application Readiness (Numerical): Measures whether the student feels they have enough information to complete or finalize their application list.
• Admission Confidence (Numerical): Measures how confident the student feels that they will be admitted to a program they would be happy to attend.
• Decision Uncertainty (Numerical): Measures the extent to which the student expects to revise their choices before submitting the final application.
• Satisfaction with Experience (Numerical): Measures overall satisfaction with the platform experience, including ease of use, perceived usefulness, and relevance of recommendations.
• Sense of Ambition (Numerical): Captures the extent to which the platform experience encourages the student to consider more demanding, competitive, or aspirational college programs.
F. Perceived Usefulness and Future Demand
• Future Use Interest (Binary): Indicates whether the student would like to have access to the app again before submitting their final applications.
• Open-Ended Evaluation (Text): Captures what the student found helpful or unhelpful about the app and what changes could make it more useful to future students.
• Application-Process Reflection (Text): Captures students’ open-ended reflections on how they are thinking about choosing which degree programs to list.
Some measures may be based on self-reported responses, while others may rely on anonymized interaction data generated during use of the platform.
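As an illustration of how interaction-based measures could be derived from such data (the event schema and field names below, e.g. student_id and action, are hypothetical and not the platform's actual log format):

# Illustrative only: hypothetical interaction-log schema.
import pandas as pd

events = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "action": ["view", "view", "consider", "view", "reject"],
})

# One row per student: count of recommendations viewed (exploration intensity)
# and whether any recommendation was marked for consideration.
per_student = events.groupby("student_id").agg(
    exploration_intensity=("action", lambda a: int((a == "view").sum())),
    recommendation_consideration=("action", lambda a: bool((a == "consider").any())),
)
print(per_student)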
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Students will:
1. be randomly assigned to control and treatment groups (in a 1:2 ratio),
2. access the platform,
3. enter or confirm basic profile information and preferences relevant to college applications,
4. receive a recommended college-application portfolio generated by the platform based on their profile, and
5. complete a brief questionnaire measuring perceptions of the recommendation and intended adoption.

To study how students weigh different recommendation trade-offs, the platform will use student responses to present recommendations that emphasize different criteria (or make these criteria more/less salient), such as location preferences, fit with personal goals, or admission probability.
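Purely as an illustration of presenting recommendations that emphasize different criteria (the scoring rule, weights, and program attributes below are hypothetical and are not the platform's actual recommendation logic), a weighted ranking might look like this:

# Illustrative only: hypothetical weighted scoring, not the platform's actual algorithm.
from typing import Dict, List

def rank_programs(programs: List[Dict], weights: Dict[str, float]) -> List[Dict]:
    # Rank programs by a weighted sum of per-criterion scores in [0, 1].
    return sorted(programs, key=lambda p: sum(w * p[c] for c, w in weights.items()), reverse=True)

programs = [
    {"name": "Program A", "location_match": 0.9, "goal_fit": 0.4, "admission_prob": 0.7},
    {"name": "Program B", "location_match": 0.3, "goal_fit": 0.9, "admission_prob": 0.5},
]

# Emphasizing location preferences vs. emphasizing fit with personal goals
# changes which program is ranked first.
print(rank_programs(programs, {"location_match": 0.6, "goal_fit": 0.2, "admission_prob": 0.2})[0]["name"])
print(rank_programs(programs, {"location_match": 0.2, "goal_fit": 0.6, "admission_prob": 0.2})[0]["name"])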
Experimental Design Details
Not available
Randomization Method
We use a computer-based random number generator to randomly assign students to control and treatment groups (1:2 ratio).
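A minimal sketch of this kind of computer-based 1:2 assignment is shown below (the seed, placeholder identifiers, and library choice are illustrative, not necessarily the exact procedure used):

# Illustrative sketch of individual-level 1:2 control/treatment assignment.
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed only for reproducibility of this sketch
student_ids = list(range(1, 1101))    # placeholder identifiers for ~1100 students
arms = rng.choice(["control", "treatment"], size=len(student_ids), p=[1/3, 2/3])
assignment = dict(zip(student_ids, arms))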
Randomization Unit
The individual student is the unit of randomization.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
We aim for 1100 students across 14-17 schools.
Sample size (or number of clusters) by treatment arms
We aim for 1100 students across 14-17 schools, split into control and treatment groups in a 1:2 ratio; with 1100 participating students in total, approximately 367 students are expected to be in the control group and 733 in the treatment group.
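As a quick arithmetic check of the expected arm sizes under a 1:2 split of 1100 students:

# Expected arm sizes for a 1:2 control:treatment split of 1100 students.
total = 1100
control = round(total / 3)       # 367
treatment = total - control      # 733
print(control, treatment)        # 367 733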
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Yale University IRB
IRB Approval Date
2026-03-20
IRB Approval Number
2000041838