Field: Abstract

Before:
This study evaluates the impact of promoting the use of GPT EconometríaUAI, a conversational AI assistant, on undergraduate students’ learning outcomes in econometrics. The intervention will be implemented during the second semester of 2025 at Universidad Adolfo Ibáñez across seven sections of the Econometrics course. Students will be individually randomized within sections into treatment and control groups. The treatment group will receive a series of emails encouraging the use of GPT EconometríaUAI for study support, while the control group will not receive such promotion (placebo emails may be used).
The primary outcome is student achievement on the first midterm exam. Secondary outcomes include satisfaction with the course and students’ perceptions of their learning and study experience. The study will provide experimental evidence on the role of AI-based study assistants in shaping both academic performance and broader student experiences in higher education.
After:
This study evaluates the impact of promoting the use of GPT EconometríaUAI, a conversational AI assistant, on undergraduate students’ learning outcomes in econometrics. The intervention will be implemented during the second semester of 2025 at Universidad Adolfo Ibáñez across seven sections of the Econometrics course. Students will be individually randomized within sections into treatment and control groups. The treatment group will receive a series of emails encouraging the use of GPT EconometríaUAI for study support, while the control group will not receive such promotion (placebo emails may be used).
The primary outcome is student achievement on the first midterm exam. Secondary outcomes include satisfaction with the course and students’ perceptions of their learning and study experience. The study will provide experimental evidence on the role of AI-based study assistants in shaping both academic performance and broader student experiences in higher education.
After the midterm intervention, I implemented an additional intervention for the final exam. This second intervention targeted how students used the GPT rather than whether they used it. Students assigned to treatment received three emails prior to the final exam encouraging learning-oriented use of the tool. The messages promoted tutor-style interaction with step-by-step reasoning, provided a structured prompt to generate practice questions and feedback, and encouraged verification of answers. Control students did not receive guidance emails but retained full access to the GPT, as in RCT 1. To measure compliance and spillovers, the final exam survey asked whether students received the emails and whether they shared the advice.
I implemented a second randomization prior to the final exam using the same sample as in RCT 1. This randomization was independent of RCT 1 and followed the same stratification scheme based on section, gender, and high school type. Within each stratum, half of the students were assigned to treatment and half to control. As in the first intervention, the primary outcome is student achievement on the final exam, and secondary outcomes include students’ use of the tool and their perceptions of their learning and study experience.
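The stratified assignment described above can be sketched as follows. This is a minimal illustration, not the study's actual randomization code; the field names (`section`, `gender`, `hs_type`, `id`) and the tie-breaking rule for odd-sized strata are assumptions.

```python
import random
from collections import defaultdict

def stratified_assign(students, keys=("section", "gender", "hs_type"), seed=0):
    """Assign half of each stratum to treatment ("T") and half to control ("C").

    `students` is a list of dicts; a stratum is the tuple of values at `keys`
    (hypothetical field names, not the study's actual data schema).
    Returns a dict mapping student id -> "T" or "C".
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    strata = defaultdict(list)
    for s in students:
        strata[tuple(s[k] for k in keys)].append(s["id"])
    assignment = {}
    for ids in strata.values():
        rng.shuffle(ids)
        half = len(ids) // 2  # assumption: in odd strata the extra student goes to control
        for sid in ids[:half]:
            assignment[sid] = "T"
        for sid in ids[half:]:
            assignment[sid] = "C"
    return assignment
```

Because assignment is balanced within every section-by-gender-by-school-type cell, treatment status is orthogonal to those stratifying variables by construction.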