Does AI training increase teacher productivity?

Last registered on February 10, 2026

Pre-Trial

Trial Information

General Information

Title
Does AI training increase teacher productivity?
RCT ID
AEARCTR-0017831
Initial registration date
February 03, 2026


First published
February 10, 2026, 6:01 AM EST


Last updated
February 10, 2026, 7:46 AM EST


Locations


Primary Investigator

Affiliation
Yale University

Other Primary Investigator(s)

PI Affiliation
Monash University
PI Affiliation
Curtin University

Additional Trial Information

Status
In development
Start date
2026-03-09
End date
2026-06-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study is designed as a randomized training and survey experiment embedded in an eight-week professional development program on the use of artificial intelligence (AI) in education. The design combines randomized exposure to training content with repeated survey-based measurements, allowing for causal inference on both the overall effects of AI training and the specific role of instruction focused on AI oversight.

Recruitment targets in-service primary and secondary school teachers nationwide. Participation in the training program is free of charge. To incentivize participation, teachers receive a globally recognized certificate issued by the University of Crete in Greece. Eligible teachers are randomly assigned to training cohorts based on program capacity (T1). Teachers assigned to the initial training cycle begin the program immediately and form the treatment group, while those assigned to a subsequent cycle experience a delay and serve as the control group during the initial evaluation period.

All teachers in the first training cycle receive a common core curriculum during weeks 1–5. This core training focuses on the use of AI tools in educational practice and includes instructional videos, practical exercises, comprehension checks, and applied assignments such as lesson planning and the development of educational materials. The objective of this phase is to establish baseline competence in AI-assisted educational tasks.

In week 5 of the program, participants in the treatment group are randomly and evenly assigned to one of two subgroups. The first subgroup (T2A) receives specialized instruction in AI oversight during weeks 6–8. This module emphasizes evaluating the quality and reliability of AI-generated outputs, identifying bias and errors, cross-checking sources, and applying human judgment when using AI recommendations. The second subgroup (T2B) receives training in alternative digital educational tools, such as online quizzes and other non-AI instructional technologies. This subgroup serves as a comparison group for isolating the causal effect of AI oversight instruction.

Randomization into the T2A and T2B subgroups is conducted at the individual level and implemented by the training platform prior to the start of week 6. Assignment is independent of teacher characteristics and baseline outcomes, ensuring comparability across groups.

Outcomes are measured through three digital survey waves: a baseline survey before the start of training (week 0), a midline survey at the end of the core curriculum (week 5), and an endline survey following completion of the full program (week 8). Each survey includes randomized task-based modules and vignette-style grading scenarios. In productivity and creativity tasks, teachers are randomly assigned to complete tasks either with or without AI assistance, allowing identification of within-person effects of AI support. In grading scenarios, teachers are randomly exposed to AI recommendations of varying strictness, enabling measurement of reliance on, and oversight of, algorithmic advice.
External Link(s)

Registration Citation

Citation
Goulas, Sofoklis, Rigissa Megalokonomou and Panagiotis Sotirakopoulos. 2026. "Does AI training increase teacher productivity?" AEA RCT Registry. February 10. https://doi.org/10.1257/rct.17831-1.1
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-03-09
Intervention End Date
2026-06-01

Primary Outcomes

Primary Outcomes (end points)
Outcomes are measured through three digital survey waves: a baseline survey before the start of training (week 0), a midline survey at the end of the core curriculum (week 5), and an endline survey following completion of the full program (week 8). Each survey includes randomized task-based modules and vignette-style grading scenarios. In productivity and creativity tasks, teachers are randomly assigned to complete tasks either with or without AI assistance, allowing identification of within-person effects of AI support. In grading scenarios, teachers are randomly exposed to AI recommendations of varying strictness, enabling measurement of reliance on, and oversight of, algorithmic advice.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study is implemented as a randomized training and survey experiment embedded in an eight-week professional development program on the use of artificial intelligence (AI) in education. The experimental design consists of two stages of random assignment, allowing for the evaluation of both the overall impact of participation in AI training and the specific contribution of instruction focused on AI oversight.

In the first stage (T1), eligible in-service primary and secondary school teachers are randomly assigned to training cohorts based on program capacity. Teachers assigned to the first cohort begin the training program immediately and constitute the treatment group, while those assigned to a later cohort experience a delayed start and serve as a control group during the initial evaluation period.

All teachers in the first training cohort receive a common core curriculum during weeks 1–5. This curriculum covers practical uses of AI tools in educational settings and includes instructional videos, applied exercises, and assignments such as lesson planning and educational material development. This phase ensures that all treated participants receive a shared baseline level of exposure to AI-assisted educational tasks.

In the second stage of randomization (T2), conducted at the end of week 5, teachers in the treatment group are randomly and evenly assigned at the individual level to one of two instructional tracks during weeks 6–8. One group receives training focused on AI oversight, including evaluating AI-generated outputs, identifying errors and bias, and applying human judgment when using AI recommendations. The second group receives training in alternative digital educational tools that do not involve AI, serving as a comparison condition for isolating the effect of AI oversight instruction.
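The two-stage assignment described above can be sketched in code. This is a minimal illustration, not the platform's actual implementation; the function names, seeds, and teacher identifiers are hypothetical, and the arm sizes follow the planned sample sizes stated later in this registration (600 immediate-start, 600 delayed-start, with the immediate-start cohort split evenly into T2A and T2B).

```python
import random

def assign_cohorts(teacher_ids, n_treatment, seed=1):
    """Stage T1: randomly split eligible teachers into an immediate-start
    treatment cohort (size n_treatment) and a delayed-start control cohort."""
    rng = random.Random(seed)
    ids = list(teacher_ids)
    rng.shuffle(ids)
    return ids[:n_treatment], ids[n_treatment:]

def assign_tracks(treatment_ids, seed=2):
    """Stage T2 (end of week 5): evenly split the treatment cohort, at the
    individual level, into the AI-oversight track (T2A) and the
    alternative-digital-tools track (T2B)."""
    rng = random.Random(seed)
    ids = list(treatment_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

# Hypothetical roster matching the planned 1,200 observations
teachers = [f"T{i:04d}" for i in range(1200)]
treatment, control = assign_cohorts(teachers, n_treatment=600)
t2a, t2b = assign_tracks(treatment)
print(len(treatment), len(control), len(t2a), len(t2b))  # 600 600 300 300
```

Because the stage-two shuffle uses only the list of treatment IDs, assignment to T2A or T2B is independent of teacher characteristics and baseline outcomes, as the design requires.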
Experimental Design Details
Not available
Randomization Method
All randomization is conducted by a computer using automated random assignment implemented through the training platform.
Randomization Unit
The primary unit of randomization is the individual teacher.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
1,200 teachers
Sample size (or number of clusters) by treatment arms
Delayed-start control (T1 control): 600
Immediate start + AI oversight (T1 treatment, T2A): 300
Immediate start + alternative tools (T1 treatment, T2B): 300
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
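The registration leaves this field blank. As an illustration only, the minimum detectable effect implied by the stated arm sizes can be sketched with a standard normal-approximation power formula. The conventional parameters below (5% two-sided significance, 80% power) are assumptions, not values taken from the registration, and the calculation ignores covariate adjustment and attrition.

```python
from statistics import NormalDist

def mde_two_arms(n1, n2, alpha=0.05, power=0.80):
    """Minimum detectable effect in standard-deviation units (Cohen's d)
    for a two-sided comparison of two independent means,
    using the normal approximation: d = (z_{a/2} + z_b) * sqrt(1/n1 + 1/n2)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return (z_alpha + z_beta) * (1 / n1 + 1 / n2) ** 0.5

# Stated arm sizes from this registration
print(round(mde_two_arms(600, 600), 3))  # T1: immediate vs delayed start -> 0.162
print(round(mde_two_arms(300, 300), 3))  # T2: oversight vs alternative tools -> 0.229
```

Under these assumptions, the design can detect roughly a 0.16 SD effect of training overall and a 0.23 SD effect of the oversight module relative to the alternative-tools track.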
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Crete
IRB Approval Date
2025-10-21
IRB Approval Number
177/21.10.2025