Intervention(s)
This experiment studies whether a structured AI-assisted advising session, conducted before course registration opens, changes how students assess their own academic readiness and alters the courses they ultimately choose to enrol in. The intervention is built around a single 30-minute session with an AI tool, administered in the week before the registration window opens for the following semester. All students complete a short baseline survey before the session and a post-session survey immediately after. The primary outcome, course enrolment, is obtained from administrative registrar records after the add/drop period closes.
Students are randomly assigned to one of three arms. The control arm receives no structured session. These students proceed through the normal registration process without any intervention from the research team. They complete the baseline and post-session surveys at the same scheduled times as students in the active arms. For the post-session belief question, control students are first informed that a session took place for other students and are then asked to record their current thinking about their course plans. This design ensures that any difference between the control arm and the active arms reflects the session itself rather than the act of completing a survey.
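For concreteness, the following is a minimal sketch of how the assignment procedure could be implemented. The registration text above does not specify an implementation, so the use of Python, the equal allocation across arms, and the fixed seed are illustrative assumptions rather than features of the protocol.

```python
import random

ARMS = ["control", "productivity", "feedback"]  # the three arms described above

def assign_arms(student_ids, seed=2026):
    """Assign students to the three arms in approximately equal proportions.

    Equal allocation and the seed value are illustrative assumptions,
    not part of the registered protocol.
    """
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    # Deal the shuffled students round-robin across arms for a balanced split.
    return {sid: ARMS[i % len(ARMS)] for i, sid in enumerate(ids)}

# Example: assign_arms(["s001", "s002", "s003", "s004"])
```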
The productivity arm attends the 30-minute session and uses an AI tool to work through a set of practice problems drawn from the beginning of the harder course's curriculum. The tool is configured to assist with problem-solving, explain underlying concepts, and check the student's reasoning step by step. It is explicitly not configured to comment on the student's ability, predict her likelihood of success in the harder course, or compare her performance to any benchmark or reference group. The session therefore activates a productivity channel, in which AI assistance raises the student's immediate task performance, while holding the belief channel at zero. Students in this arm leave the session having practised relevant material with AI support, but without having received any personalised assessment of their academic readiness.
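To make the configuration contrast concrete, the sketch below shows one way the restriction could be expressed, assuming an LLM-based tutoring tool governed by a system prompt. The prompt wording is hypothetical and is not taken from the study materials; it simply encodes the constraints listed above.

```python
# Hypothetical system-prompt configuration for the productivity arm,
# assuming an LLM-based tutoring tool. The wording is illustrative.
PRODUCTIVITY_ARM_PROMPT = """
You are a problem-solving tutor for a 30-minute practice session.
- Help the student work through each problem step by step.
- Explain the underlying concepts behind each solution step.
- Check the student's reasoning and flag errors in it.
- Do NOT comment on the student's ability or aptitude.
- Do NOT predict the student's likelihood of success in any course.
- Do NOT compare the student to any benchmark or reference group.
"""
```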
The feedback arm attends an identical 30-minute session using the same AI tool and the same set of practice problems. The session format, the distribution of problem difficulty, and the interface are indistinguishable from those in the productivity arm until the final few minutes of the session. The sole difference is what happens at the end. After completing the problem set, the student receives a personalised readiness assessment generated by the AI on the basis of her observable behaviour during the session. The assessment identifies which categories of problem the student handled confidently and which she struggled with, and it provides a probabilistic estimate of her likely success in the harder course, stated numerically as a percentage chance of earning a passing grade. This assessment is the empirical realisation of the ability signal in the theoretical model. Crucially, its informativeness is endogenous to how the student engaged during the session: a student who attempted harder problems, asked more substantive questions, and iterated on targeted feedback receives a more precise and more reliable assessment than one who attempted only easy problems or interacted with the tool in a surface-level way. This feature distinguishes the feedback treatment from a conventional information intervention that delivers the same message to every student regardless of her behaviour during the session.
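One way to formalise this endogeneity is a standard normal-learning sketch; the notation below ($a$, $e$, $\mu_0$, $\tau_0$, $\tau(e)$) is introduced purely for illustration, and the paper's own model is not reproduced here. Suppose the student's readiness is $a$ with prior belief $a \sim N(\mu_0, 1/\tau_0)$, and the session generates a signal

$$ s = a + \varepsilon, \qquad \varepsilon \sim N\bigl(0, 1/\tau(e)\bigr), $$

where the precision $\tau(e)$ is increasing in the student's engagement $e$ during the session. The posterior mean is then

$$ \mathbb{E}[a \mid s] = \frac{\tau_0 \mu_0 + \tau(e)\, s}{\tau_0 + \tau(e)}, $$

so a more engaged student places more weight on her realised performance, while a conventional information intervention corresponds to the special case in which $\tau$ is constant across students.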
Comparing the feedback arm to the control arm identifies the total effect of the structured session on beliefs and enrolment, combining both the productivity and the belief-updating channels. Comparing the feedback arm to the productivity arm isolates the belief channel alone, because the session format, AI tool, problem set, and duration are held constant across these two arms and the only difference is whether the student receives the personalised readiness assessment at the end. This pairwise comparison is the key identification strategy for the paper's central theoretical claim that AI functions as an endogenous self-knowledge technology rather than a uniform information-delivery device.
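In regression form, a minimal sketch of the implied estimands is the following; the notation $Y_i$, $P_i$, $F_i$ is introduced here for illustration and does not appear in the registration text:

$$ Y_i = \alpha + \beta_P P_i + \beta_F F_i + \varepsilon_i, $$

where $Y_i$ is a post-session belief measure or an indicator for enrolling in the harder course, $P_i$ and $F_i$ indicate assignment to the productivity and feedback arms, and the control arm is the omitted category. Then $\beta_F$ estimates the total effect of the session with feedback, $\beta_P$ the productivity channel alone, and the difference $\beta_F - \beta_P$ the belief channel.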
One important design feature must be stated clearly. Students in the control and productivity arms are not prevented from using AI tools independently in the period leading up to their registration decision. In a contemporary university setting such a restriction would be practically unenforceable, and attempting to impose it would introduce ethical complications and differential attrition that would compromise the internal validity of the study. The comparison being made is therefore not between AI access and no AI access, but between a structured, institutionally designed AI advising session and the unstructured status quo in which students make their registration decision using whatever resources they ordinarily draw on. This framing is both honest and policy-relevant, because the practical question facing university administrators is not whether students should use AI at all, but whether a structured institutional session adds measurable value over and above unguided use.
The study recruits undergraduate students who are approaching a genuine registration decision in which a harder course is available as an alternative to a standard option. The most natural settings are introductory quantitative sequences, such as introductory statistics, introductory microeconomics, or a foundational mathematics course, where the harder follow-on course is clearly distinguishable from the easier one, where students face genuine uncertainty about whether they are ready for the more demanding option, and where AI-assisted practice on relevant problem types is feasible within a 30-minute session. Students are recruited during the first week of the current semester, so that the treatment falls in the period when course selection for the following semester is actively under consideration and before any registration decision has been made. Recruitment, random assignment, and all session activities are completed before the registration window opens, ensuring that the intervention precedes rather than follows the outcome it is designed to influence.
The experiment involves no deception. Students in all three arms are informed at recruitment that the study examines how different types of academic support sessions affect course planning decisions, and that their eventual enrolment and grade records will be obtained from the registrar for research purposes. Students in the feedback arm are told before the session begins that they will receive a personalised summary of their performance at the end. Informed consent is obtained from all participants prior to the baseline survey, and the study has received a favourable ethics opinion from the Research Ethics Committee of the University of Economics and Business, Vietnam National University, Hanoi (Decision No. 2026-REC-UEB, April 2026).