
Improving pedagogy at-scale in low-income countries

Last registered on April 12, 2022

Pre-Trial

Trial Information

General Information

Title
Improving pedagogy at-scale in low-income countries
RCT ID
AEARCTR-0009211
Initial registration date
April 11, 2022

First published
April 12, 2022, 8:26 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Virginia

Other Primary Investigator(s)

PI Affiliation
NewGlobe
PI Affiliation
NewGlobe
PI Affiliation
NewGlobe
PI Affiliation
NewGlobe

Additional Trial Information

Status
In development
Start date
2022-04-04
End date
2022-09-30
Secondary IDs
This work builds on previous internal RCTs testing the effectiveness of similar pedagogical changes to our lessons. However, those trials were conducted in a different country, and the data collected for them will not be used in any way in the analysis for the current study. Those studies are unpublished as of the time of pre-registration of the current RCT.
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Strong pedagogy is a key component of rich classroom environments and, as such, a determinant of students’ learning outcomes. This is particularly important in low- and middle-income countries (LMICs), where teachers’ content knowledge, preparation, and educational background are typically weak. Strengthening pedagogical practices in these contexts is therefore a promising avenue for improving education: such interventions can yield large returns to learning while remaining nearly fiscally neutral, making them a “low-hanging fruit”. However, improving pedagogy at scale in LMICs has been relatively under-studied, partly because of the difficulty of successfully implementing these reforms in many classrooms at once.

In this study, we will explore whether two pedagogical changes rooted in the “science of learning”, namely “spaced retrieval” and “interleaving”, can deliver higher learning outcomes at scale in 312 Liberian public schools, via two separate randomized trials. Given the centralized and scripted nature of lesson delivery in Bridge Liberia, the implementing agency of these trials, we have the opportunity to seamlessly incorporate these changes into the lessons and to ensure that, as long as lessons are delivered, take-up of the pedagogical changes follows.
External Link(s)

Registration Citation

Citation
de Maricourt, Clotilde et al. 2022. "Improving pedagogy at-scale in low-income countries." AEA RCT Registry. April 12. https://doi.org/10.1257/rct.9211-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.

Experimental Details

Interventions

Intervention(s)
This study consists of two RCTs modifying pedagogical practices: introducing “spaced retrieval” in grade 3 science lessons, and “interleaving” in math lessons for grades 4-6. Schools will be randomly assigned to either the treatment group or the comparison group, and will remain in that experimental group for both trials. Below, we describe the specific interventions in greater detail:

Spaced Retrieval in Grade 3 Science
This intervention aims to give students in Grade 3 Science a daily dose of “mixed review” questions, ensuring that they first attempt each question without any support from the book, then have a chance to find the answer in their text, after which the teacher shares the correct answer. “Mixed review” activities will vary between a full lesson of 45 minutes (12 lessons), 15 minutes (14 lessons), and 5 minutes (35 lessons), with the number of questions varying proportionally with the time allotted for the review.
Interleaving in Upper Primary Mathematics
Currently, each math lesson has 20 practice problems, all of which are blocked practice (i.e., aligned to the content of that lesson). This intervention changes those practice problems from blocked to interleaved: 40% of the practice problems remain focused on the topic taught that day, while 60% are distributed among the past four “levels” of content. This technique creates an opportunity to retrieve the interleaved content, but it also creates what cognitive scientists call “desirable difficulty”: the brain retains new information better, and for longer, when learning meets a certain threshold of rigor (the difficulties are palpable, but learners have the means to overcome them).
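As an illustration of this rule, the sketch below (in Stata, the software named elsewhere in this registration) tags the 20 slots of a practice set by the lesson “level” each problem would come from. The even three-problem split across the four past levels, and all names, are assumptions made here for illustration rather than part of the intervention’s specification.

    * Illustrative sketch only: tagging a 20-problem interleaved practice set
    * under the 40%/60% rule (the even split across past levels is an assumption).
    clear
    set obs 20
    gen slot = _n
    gen str20 source = "current topic" if slot <= 8        // 40% of 20 problems
    replace source = "1 level back"  if inrange(slot, 9, 11)
    replace source = "2 levels back" if inrange(slot, 12, 14)
    replace source = "3 levels back" if inrange(slot, 15, 17)
    replace source = "4 levels back" if inrange(slot, 18, 20)
    tab source                                              // shows the 8/3/3/3/3 split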
Intervention Start Date
2022-04-11
Intervention End Date
2022-09-30

Primary Outcomes

Primary Outcomes (end points)
Standardized school grades for Math and Science at the end of the term (~18 weeks after the expected start)
Primary Outcomes (explanation)
The outcomes for test scores and enrollment come from our administrative data. The internal data collection process is largely standardized within our network, allowing us to make direct comparisons for most outcomes and indicators across contexts. Similarly, within each context, all children take the same test within a testing round (i.e., during marking-period tests and semester exams), allowing for comparisons across schools.

For the trial on “spaced retrieval”, we plan to use Science outcomes as the main set of outcomes. For the interleaving trial, we plan to use Math scores, both standardized by grade and pooled within a single regression with grade fixed effects.

There is a possibility that we will get access to item-level data for one or both trials. If that is the case for all schools and pupils, we will present results using two types of outcomes: 1) the percentage of questions answered correctly, and 2) a two-parameter IRT score for each subject*pupil combination (models run by grade). We will also report the correlation between these two types of scores and, if it exceeds 0.95, show only one set of results. If we do not have access to item-level data, we will simply use the percentage of questions answered correctly as the outcome.
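As a sketch of how these two outcome types could be constructed if item-level data become available, assuming the items arrive as binary indicators named item1-item30 (the count and names are placeholders), Stata’s built-in irt command fits the two-parameter model:

    * Sketch only; item1-item30 is a hypothetical item list. Run separately by grade.
    egen pct_correct = rowmean(item1-item30)   // share of questions answered correctly
    irt 2pl item1-item30                       // two-parameter logistic IRT model
    predict irt_score, latent                  // pupil-level latent ability score
    corr pct_correct irt_score                 // if this exceeds 0.95, report one set of results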

To estimate intent-to-treat (ITT) effects, we will run a linear model regressing each outcome on an indicator for whether the school was treated, with fixed effects for each randomization stratum and standard errors clustered by school. Beyond the ITT, we also plan to study the treatment-on-the-treated (TOT) effect. To do so, we will classify each school as a “complier” or “non-complier”, defining “complier” in two ways: 1) average lesson completion rate for the targeted subject in each trial above 50%, 75%, or 90%; and 2) average lesson completion rate above the sample median, 75th percentile, or 90th percentile. The treated group will then be the set of schools in the treatment group that were compliers under the definitions above.
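A minimal Stata sketch of these specifications, using hypothetical variable names (score_std, treat, stratum, school_id, completion_rate); the registration does not specify the exact TOT estimator, so the instrumental-variables step below is one common way to implement it:

    * ITT: outcome on treatment indicator, stratum fixed effects, SEs clustered by school
    areg score_std i.treat, absorb(stratum) vce(cluster school_id)

    * One complier definition: average lesson completion in the targeted subject above 75%
    gen complier = completion_rate > 0.75 if !missing(completion_rate)
    gen treat_complier = treat * complier

    * TOT-style estimate, instrumenting complier treatment status with random assignment
    ivregress 2sls score_std i.stratum (treat_complier = treat), vce(cluster school_id)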

Finally, we will also carry out sub-group analyses by class size, school enrollment, baseline student performance, school location, lesson completion rates, and teacher and pupil attendance rates.
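As an illustration, one of these sub-group analyses (class size, split here into terciles; variable names assumed) could be run as an interaction:

    * Heterogeneity by class-size tercile (illustrative only)
    xtile size_tercile = class_size, nq(3)
    areg score_std i.treat##i.size_tercile, absorb(stratum) vce(cluster school_id)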

Secondary Outcomes

Secondary Outcomes (end points)
- Standardized school grades for other subjects not in the primary set of outcomes at the end of the term (~18 weeks after the expected start)
- Pupil and teacher attendance
- Student dropout and enrollment
- Lesson completion rates
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To minimize potential spillovers of the treatment, we randomly allocate treatment at the school level. To increase statistical power, we create randomization strata that absorb some of the variation in outcomes. In particular, we create 8 strata based on previous academic performance for the schools for which we have that data. For schools without baseline assessment data, we stratify into 4 groups based on pupil attendance. Schools with neither set of baseline data are pooled into a single stratum. Finally, we place in a single stratum 14 schools that we know have had operational issues at the start of the semester and, as such, may not be able to implement the intervention; we reserve the right to exclude them from the analysis if qualitative reports from the field indicate that implementation was not successful in that stratum. In total, we are left with 14 strata, within which treatment was randomly assigned to approximately half of all schools.
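A sketch of how the 14 strata could be constructed at the school level (all variable names are assumptions; the actual construction code is not part of this registration):

    * Strata 1-8: octiles of baseline academic performance, where available
    xtile stratum = baseline_score, nq(8)
    * Strata 9-12: quartiles of pupil attendance for schools without baseline scores
    xtile attend_grp = pupil_attendance if missing(baseline_score), nq(4)
    replace stratum = 8 + attend_grp if missing(stratum) & !missing(attend_grp)
    * Stratum 13: schools with neither source of baseline data
    replace stratum = 13 if missing(stratum)
    * Stratum 14: the 14 schools flagged for operational issues at the start of the semester
    replace stratum = 14 if operational_issue == 1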
Experimental Design Details
Randomization Method
Randomization was performed by the researchers using Stata 17.
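A minimal sketch of stratified assignment of approximately half of the schools in each stratum (the seed and variable names are placeholders, not the researchers’ actual code):

    set seed 12345                        // placeholder seed
    gen u = runiform()                    // random ordering within each stratum
    sort stratum u
    by stratum: gen treat = (_n <= _N/2)  // first half of each stratum assigned to treatment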
Randomization Unit
Schools
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
312 schools
Sample size: planned number of observations
Given that the average classroom across the 312 schools has 27.4 pupils, and that there is on average one stream per grade, we expect to have approximately 25,646 pupils for the interleaving trial (27.4 × 3 grades × 312 schools) and 8,549 for the spaced retrieval trial (27.4 × 1 grade × 312 schools).
Sample size (or number of clusters) by treatment arms
Half of all schools are assigned to treatment, so we expect to have 156 schools in the treatment group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For the spaced retrieval trial, with an average of 27 students per school across all 312 schools in the sample, 50% of schools assigned to treatment, an R2 explained by individual- and school-level covariates of 0.45, a no-show (or missingness) rate of 30%, and an intra-class correlation of 0.20, all consistent with our previous internal analytical work, we expect a minimum detectable effect (MDE) of 0.16 standard deviations (SD). We use the same parameters for the interleaving trial, except that the number of students per school is 81, as that trial covers three grades. In this case, the MDE is 0.15 SD.
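For reference, a standard Bloom-style approximation for the MDE of a cluster-randomized design is shown below; the exact parameterization behind the 0.16 and 0.15 SD figures is not specified in this registration, so this is only an illustrative sketch.

\mathrm{MDE} \approx M_{J-2}\,\sqrt{\frac{\rho\,(1-R^2_{B})}{P(1-P)\,J} + \frac{(1-\rho)\,(1-R^2_{W})}{P(1-P)\,J\,\bar{n}}}

where J is the number of schools, \bar{n} the number of tested pupils per school after missingness, P the share of schools treated (0.5), \rho the intra-class correlation (0.20), R^2_{B} and R^2_{W} the shares of school- and pupil-level variance explained by covariates, and M_{J-2} \approx 2.8 for 80% power with a 5% two-sided test.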
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials