
Individualized Academic Support for Hard-to-Reach Students in the Time of Coronavirus: Experimental Evidence from Kenya

Last registered on December 21, 2020


Trial Information

General Information

Initial registration date
December 19, 2020


First published
December 21, 2020, 11:25 AM EST




Primary Investigator

University of Virginia

Other Primary Investigator(s)

PI Affiliation
University of Virginia

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Schools serving over 1.5 billion children globally have been temporarily closed due to coronavirus. The COVID-19 pause on in-person schooling threatens to exacerbate learning poverty around the world. Much ongoing scholarship on minimizing coronavirus learning loss understandably focuses on online learning. However, many young people have no access to the internet at home or at school. Given these access challenges, how can schools support the hard-to-reach learners most likely to fall behind? This study examines the effectiveness of academic support delivered through a relatively basic technology: teacher-student phone calls. We will examine the causal effect of phone calls in Kenya on student engagement, school connectedness, and academic achievement. We will test two types of calls with different design features to understand the optimality of different types of remote support. In particular, this study has two treatment arms: the “accountability” arm, which will test short calls that encourage children to keep engaging with other remote resources provided to them, and the “tutoring” arm, which will test longer calls that include all the features of the accountability arm plus a short lesson on a grade-appropriate topic.
External Link(s)

Registration Citation

Rodriguez Segura, Daniel and Beth Schueler. 2020. "Individualized Academic Support for Hard-to-Reach Students in the Time of Coronavirus: Experimental Evidence from Kenya." AEA RCT Registry. December 21.
Experimental Details


The RCT will take place in Bridge schools across Kenya, in a total of 105 eligible primary schools. At baseline, each school had students in grades 1-8, with an average of 253 students across all grades. This intervention focuses on grades 3, 5, and 6, where schools had on average 75 students in total, across one stream per grade. In total, the sample will consist of approximately 7,000-8,000 students, pooled across all three grades. Randomization is performed at the school level: 70 schools are assigned to treatment, and 35 schools serve as the control group. Among the treatment schools, 35 will be in treatment group 1 (“accountability”) and 35 in treatment group 2 (“tutoring”). For more details about the design of the calls, please see the appendix (“Appendix A: Detailed intervention overview”). Finally, the intervention is expected to start in fall 2020 and to last approximately six weeks, although in practice it will conclude as soon as the funding for airtime has been exhausted.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Math test scores.
Primary Outcomes (explanation)
We will rely on pre-lockdown assessment data (school grades from July and October 2019, and February 2020) as a baseline measurement, and post-lockdown assessment data (January and February 2021) as an endline measurement. We will use individual subject scores separately, as well as a composite score consisting of scores in multiple subjects.

We will also conduct an endline phone-based assessment of pupil learning. Parents will be randomly sampled from the treatment and control groups. The calls will be made by our partner's staff during the week after the conclusion of the project. The focus of the survey will primarily be the child’s learning outcomes in math. There will be two types of questions in the phone-based assessment: “core numeracy” questions, which reflect students’ ability to do basic operations and are constant across grades, and “curriculum-aligned” questions, which reflect more closely what teachers discussed during the phone calls and vary by grade. We will use this assessment data as an outcome, first treating it as a single test, and then treating it as two separate tests for the “core numeracy” and “curriculum-aligned” questions.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
There are 105 schools in the experimental sample of eligible schools. For a school to qualify as “eligible,” it had to have its exact latitude and longitude available in the data system, and baseline data for our target grades. All eligible schools have one stream in grades 3, 5, and 6. Since the intervention is assigned at the school level, the students in the experimental sample will be all students enrolled in grades 3, 5, and 6 across all 105 schools. Column 1 in Table 1 shows some descriptive statistics about the communities, schools, and classes where the experiment will take place. Figure 1 below also provides a sense of the context where these schools are located.
Experimental Design Details
Although we expect to have over 7,000 students in the experiment, logistical constraints do not allow for class- or individual-level randomization to maximize power. Even if this were logistically feasible, we worry that this intervention is particularly susceptible to frictions between called and uncalled students, or between teachers assigned to make calls and those who were not. We are aware that the disadvantage of randomizing at the school level is that power is severely reduced, and we try to maximize power by double-blocking on covariates which we hypothesize explain some of the variation in the outcome. First, we create three bins within the state based on the population within a 5 km radius surrounding each school, as a proxy for urban/peri-urban/rural location. These bins span from ~6,000 to ~55,000 people for the rural category, from ~55,000 to ~170,000 for the peri-urban category, and greater than ~170,000 (up to ~1,850,000) for the urban category. The GIS population data come from Bosco et al. (2017), downloaded at a resolution of 1-km grids at the equator. Then, we split each bin into quintiles of baseline exam scores. For each school, a weighted average z-score across the target grades was calculated based on the school’s math scores. Given our partner’s approach to testing and data collection, these scores are comparable across the state.
This blocking procedure leaves 15 randomization blocks, each defined by type of location and baseline achievement level. We randomly assign treatment to schools within each of these 15 blocks. All blocks contain 7 schools. Ten of the blocks have 5 treated schools, and five blocks have 4 treated schools, for a total of 70 treated schools. Among the treated schools in each block, the treatment arm (“accountability” or “tutoring”) was then randomly assigned. In the five blocks with 4 treated schools, exactly half of the treated schools (two) are assigned to each treatment arm. Among the ten blocks with 5 treated schools, half of these blocks assign 3 schools to one treatment and 2 to the other, and the other half have the inverse assignment, for a highly balanced distribution of treatment assignment.
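The within-block assignment described above can be sketched in code. The following is a minimal illustration in Python with hypothetical school identifiers and block memberships (the actual randomization was performed in Stata on real school data, with blocks formed from the population bins and score quintiles):

```python
import random

random.seed(42)

# 105 hypothetical schools in 15 blocks of 7 each
# (3 population bins x 5 baseline-score quintiles)
schools = [{"id": i, "block": i % 15} for i in range(105)]

# Per the design: 10 blocks treat 5 schools (3/2 split across arms,
# alternating which arm gets 3), and 5 blocks treat 4 schools (2/2 split).
assignment = {}
five_treated_blocks = set(random.sample(range(15), 10))
accountability_gets_three = True
for b in range(15):
    members = [s["id"] for s in schools if s["block"] == b]
    random.shuffle(members)
    if b in five_treated_blocks:
        n_accountability = 3 if accountability_gets_three else 2
        accountability_gets_three = not accountability_gets_three
        treated = members[:5]
    else:
        n_accountability = 2
        treated = members[:4]
    for sid in treated[:n_accountability]:
        assignment[sid] = "accountability"
    for sid in treated[n_accountability:]:
        assignment[sid] = "tutoring"
    for sid in members[len(treated):]:
        assignment[sid] = "control"

counts = {arm: list(assignment.values()).count(arm)
          for arm in ("accountability", "tutoring", "control")}
print(counts)  # 35 schools per arm
```

By construction, this reproduces the 35/35/35 split across accountability, tutoring, and control schools while keeping treatment balanced within blocks.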
Randomization Method
The randomization was performed by the researchers using Stata and the list of schools in the treatment and control groups was then shared with our partner organization for the implementation of the treatment.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
105 schools, with three grades per school (grades 3, 5, and 6), for a total of about 7,000 students.
Sample size: planned number of observations
About 7,000 students.
Sample size (or number of clusters) by treatment arms
105 schools (35 in each experimental branch). Three grades per school (grades 3, 5, and 6), for a total of about 7,000 students.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For power calculations, we rely on school-administered baseline assessments from the two months prior to school shutdowns, which provide a valuable starting point for quantifying COVID learning loss (relative to historical data) and a covariate for increasing the precision of experimental estimates. Assuming that covariates and blocking explain 30% of the variation in outcomes, an intra-class correlation (by school*grade) of 0.30, and teachers reaching only 80% of students, we calculate 0.80 power to detect a minimum treatment/control contrast of 0.07 SD between treatment arms, and 0.06 SD between treatment and control.
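As an illustration of how these assumptions enter a power calculation, the sketch below applies the standard two-level cluster-MDE formula (multiplier of roughly 2.8 for a two-sided 5% test at 80% power), with imperfect reach handled by ITT scaling. The cluster counts and sizes are illustrative assumptions, and this simplified formula will not reproduce the registered 0.06-0.07 SD figures, which reflect the full blocked design:

```python
import math

def cluster_mde(n_clusters, cluster_size, icc, r2, p_treat=0.5,
                compliance=1.0, m=2.8):
    """Approximate minimum detectable effect (SD units) for a
    cluster-randomized trial. Covariates are assumed to absorb a share
    r2 of variance at both levels; the MDE is inflated by 1/compliance
    to reflect intention-to-treat dilution from imperfect reach."""
    denom = p_treat * (1 - p_treat) * n_clusters
    var = (icc * (1 - r2) / denom
           + (1 - icc) * (1 - r2) / (denom * cluster_size))
    return m * math.sqrt(var) / compliance

# Registered assumptions: R^2 = 0.30, ICC = 0.30, 80% of students
# reached, 2/3 of schools treated; cluster counts/sizes below are
# illustrative (school*grade clusters), not the registered inputs.
mde = cluster_mde(n_clusters=315, cluster_size=22, icc=0.30,
                  r2=0.30, p_treat=2/3, compliance=0.80)
print(round(mde, 3))
```

The formula makes the comparative statics transparent: a higher intra-class correlation or lower reach inflates the MDE, while stronger baseline covariates shrink it.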

Institutional Review Boards (IRBs)

IRB Name
University of Virginia
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials