System-Based Approaches to Improving Teaching Effectiveness

Last registered on September 08, 2025

Pre-Trial

Trial Information

General Information

Title
System-Based Approaches to Improving Teaching Effectiveness
RCT ID
AEARCTR-0014870
Initial registration date
September 02, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
September 08, 2025, 7:30 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Chicago

Other Primary Investigator(s)

PI Affiliation
Associate Professor, Terry College of Business, University of Georgia
PI Affiliation
Associate Professor, Department of Agricultural, Environmental, and Development Economics, The Ohio State University
PI Affiliation
Professor, Alfred Lerner College of Business & Economics, University of Delaware
PI Affiliation
Associate Professor, Alfred Lerner College of Business & Economics, University of Delaware

Additional Trial Information

Status
Ongoing
Start date
2024-06-01
End date
2026-03-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Teaching quality is a key input into student learning but is typically low in most low-income settings. One intervention championed by the World Bank is the Teach tool, a classroom observation form that captures 30 elements of teaching practice, each associated with, but not yet rigorously causally linked to, student learning. Although this tool, together with observer feedback, has been implemented in 30 countries to monitor and improve teaching quality, whether it in fact does improve teaching quality is unknown. This study would be the first to test the effectiveness of this intervention as it is rolled out in 20,000 schools in Andhra Pradesh. We seek to address two main gaps in the evidence:
1. Does being observed using the Teach tool and receiving general feedback improve teacher effort and student outcomes?
2. How does improving in a given dimension of the rubric affect student learning?
External Link(s)

Registration Citation

Citation
Beg, Sabrin et al. 2025. "System-Based Approaches to Improving Teaching Effectiveness." AEA RCT Registry. September 08. https://doi.org/10.1257/rct.14870-1.0
Experimental Details

Interventions

Intervention(s)
The Teach tool was developed to provide principals, observers, or other stakeholders a window into what teaching practices are happening in a classroom. It is a 30-item World Bank observation form that highlights teaching elements associated with better student outcomes, grouped into four broad categories: 1) Time on Learning, 2) Classroom Culture, 3) Instruction, and 4) Socioemotional Skills.

We will evaluate two different implementations of the Teach tool: the status quo implementation and a variant that may improve its efficacy. In both implementations, all observers are trained to use the Teach tool through a cascade model: Learning for Equity, our implementation partner, trains master trainers who in turn train the observers. Once trained, observers are instructed to begin observations using the Teach tool. The observation is an in-depth undertaking, requiring each observer to sit in and observe a class for 30 minutes and enter feedback into the Teach tool tablet app.

In the first treatment (T1), teachers will be observed, told their scores, and provided with a video on how to improve one of the teaching aspects that is deficient in the state; the content of this video may not reflect a particular teacher's own deficiency. In the second treatment (T2), teachers will receive the same intervention as T1, but will additionally receive ongoing coaching and mentoring from the observer on how to improve in the specific areas in which they are lacking. Control schools will not receive any changes. Observers were trained in June-July 2024 and began conducting classroom observations in September 2024.

We will randomize the videos that teachers receive, which will allow us to separate the effect of being observed (which should affect all competencies equally) from the effect of the randomly assigned targeted follow-up. In treatment 1, the video will be randomly selected from the entire video library. In treatment 2, the video will be randomly selected from among a given teacher's three lowest-scoring competencies.
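As an illustration only, the video-assignment logic could be sketched as follows. The function name, data structures, and seed (assign_video, teach_scores, video_library) are hypothetical; the actual draws are implemented separately (see Randomization Method).

    import random

    def assign_video(arm, teach_scores, video_library, rng=None):
        """Illustrative sketch: pick a coaching video for one teacher.

        arm           -- "T1" or "T2"
        teach_scores  -- dict: Teach competency name -> teacher's observed score
        video_library -- dict: Teach competency name -> list of video IDs
        """
        rng = rng or random.Random(0)
        if arm == "T1":
            # T1: video drawn at random from the entire library,
            # irrespective of the teacher's own scores.
            competency = rng.choice(list(video_library))
        else:
            # T2: video targets one of the teacher's three
            # lowest-scoring competencies.
            lowest_three = sorted(teach_scores, key=teach_scores.get)[:3]
            competency = rng.choice(lowest_three)
        return competency, rng.choice(video_library[competency])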
Intervention Start Date
2024-06-01
Intervention End Date
2026-03-01

Primary Outcomes

Primary Outcomes (end points)
1. Student-level outcomes: learning and socio-emotional development.
2. Teacher-level outcomes: classroom practices as measured by the Teach tool overall and subdomain scores.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct a cluster-randomized controlled trial with two treatment groups and one control group. The randomization was conducted at the school level stratified by district.

In the first treatment (T1), teachers will be observed using the Teach tool, told their scores, and provided with a video on how to improve one of the teaching aspects that is deficient in the state; the content of this video may not reflect a particular teacher's own deficiency. In the second treatment (T2), teachers will receive the same intervention as T1, but will additionally receive ongoing coaching and mentoring from the observer on how to improve in the specific areas in which they are lacking. Control schools will not receive any changes.

As the Teach tool is also being implemented in all schools across the state (Andhra Pradesh), we will further use state-wide data on students' test scores to assess the effectiveness of the Teach tool relative to our control group.

School-level randomization:
Among the 20,000 schools in Andhra Pradesh that are due to receive the program, we first randomly selected 501 schools and then randomly assigned 167 of them to each of the three arms. Following randomization, we discovered that several schools were closed. To increase statistical power, we then expanded the sample by adding an additional 100 schools to the Teach arm, bringing the final sample to 601 schools.
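For illustration, a stratified school-level randomization of this kind could be drawn as in the sketch below. The file name, column names, and seed are assumptions; the actual assignment was drawn in Stata (see Randomization Method).

    import numpy as np
    import pandas as pd

    # Hypothetical list of eligible schools with a district (stratum) column.
    schools = pd.read_csv("sampled_schools.csv")   # assumed columns: school_id, district

    rng = np.random.default_rng(14870)             # fixed seed for reproducibility
    arms = ["Teach", "Teach+", "Control"]

    def assign_within_district(block):
        # Shuffle schools within each district, then deal them out
        # in roughly equal shares across the three arms.
        shuffled = block.sample(frac=1, random_state=int(rng.integers(1_000_000))).copy()
        shuffled["arm"] = [arms[i % 3] for i in range(len(shuffled))]
        return shuffled

    assignment = (
        schools.groupby("district", group_keys=False)
               .apply(assign_within_district)
    )
    assignment.to_csv("school_assignment.csv", index=False)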


Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer using Stata.
Randomization Unit
The unit of randomization is the school.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
601 schools.
Sample size: planned number of observations
1. Student survey: 20 students per school, or 12,200 students.
2. Teacher survey: up to 2 teachers per school, or 1,202 teachers. Not all schools will have at least 2 teachers.
3. Classroom observation: up to 1,202 observation events per treatment arm, observing two teachers from each school twice. Some schools might not be in session or might not have two teachers.
4. Headmaster survey: up to 601 head teachers. Some headmasters might also be teachers.

Besides these surveys, we will also use state-wide administrative student exams administered three times a year in Telugu (the local language), English, Math, and EVS (i.e., science); sample size = 240,000 students. These data are at the student level and contain individual question responses. Further, we will conduct an observer survey collecting detailed information on school observers and their attitudes (sample size = up to 300 observers; some observers might also be headmasters).
Sample size (or number of clusters) by treatment arms
Survey sample:
267 schools in Teach
167 schools in Teach+
167 schools in control

Administrative data sample should include the whole state.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IFMR
IRB Approval Date
2025-06-25
IRB Approval Number
N/A