Ed versus EdTech
Last registered on May 17, 2019


Trial Information
General Information
Ed versus EdTech
Initial registration date
June 14, 2018
Last updated
May 17, 2019 12:52 PM EDT

This section is unavailable to the public.
Primary Investigator
Freeman Spogli Institute for International Studies, Stanford University
Other Primary Investigator(s)
PI Affiliation
Stanford University
PI Affiliation
Stanford University
PI Affiliation
University of California, Santa Cruz
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
Computer assisted learning (CAL), online courses, MOOCs, and other forms of educational technology have been touted as revolutionizing the way in which students are educated. Of particular importance is the question of whether technology-based interventions can improve education for disadvantaged populations. Although EdTech is being rapidly deployed throughout the developed and developing world, there is limited evidence on whether and how it affects academic outcomes. Ultimately, it is important to determine whether EdTech has net effects relative to other, more traditional forms of instruction and learning. The purpose of our study is to examine whether and why CAL has a positive impact on student achievement, both in general and relative to more traditional forms of supplemental learning (such as pencil and paper workbooks). To fulfill this purpose, we conduct an RCT involving more than 4,000 fourth to sixth grade schoolchildren who board in 130 schools in rural China. The RCT includes three treatment arms: a supplemental CAL arm, a traditional supplemental learning (pencil and paper workbooks) arm, and a control arm. The supplemental learning offered by the first two treatment arms is the same in terms of time and content. The traditional supplemental learning arm serves as an additional comparison group (in addition to the control arm) for the supplemental CAL arm. With this design, we can estimate the effect of supplemental learning (CAL or workbook) in addition to the net effect of CAL (versus traditional learning) on student outcomes. Thus, we can test whether it is the "Ed" or the "Tech" in "EdTech" that makes a difference.
External Link(s)
Registration Citation
Fairlie, Robert et al. 2019. "Ed versus EdTech." AEA RCT Registry. May 17. https://www.socialscienceregistry.org/trials/3086/history/46678
Experimental Details
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Student math achievement on a 35-minute exam administered during the endline survey
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Please see the pre-analysis plan document
Secondary Outcomes (explanation)
Please see the pre-analysis plan document
Experimental Design
Experimental Design
We designated each of 27 county-grades (9 counties and primary school grades 4-6) in our sample as strata or blocks. We then randomly allocated classes (one class per school-grade) within these strata to one of three different treatment conditions (T1 = Supplemental CAL, T2 = Supplemental Workbook, or C = Control):
A. Supplemental CAL (T1) 116 classes (in 88 schools)
B. Supplemental Workbook (T2) 118 classes (in 86 schools)
C. Control (C) 118 classes (in 85 schools)
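The block randomization described above can be sketched as follows. This is an illustrative example only, not the study's actual code; the function name, arm labels, and data shapes are assumptions.

```python
import random

# Hypothetical arm labels mirroring the three treatment conditions
ARMS = ("T1_CAL", "T2_Workbook", "Control")

def stratified_assign(classes, arms=ARMS, seed=2018):
    """classes: list of (class_id, stratum) pairs, where a stratum is a
    county-grade block. Returns {class_id: arm}, with arm counts within
    each stratum differing by at most one."""
    rng = random.Random(seed)
    by_stratum = {}
    for class_id, stratum in classes:
        by_stratum.setdefault(stratum, []).append(class_id)
    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)  # random order within the stratum
        for i, class_id in enumerate(members):
            # Round-robin over the shuffled list: which classes land in
            # which arm is random, and arm sizes stay balanced per stratum
            assignment[class_id] = arms[i % len(arms)]
    return assignment
```

Because the shuffle happens within each county-grade stratum before assignment, treatment is balanced across counties and grades by construction.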
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
352 classes
Sample size: planned number of observations
Approximately 4,000 students
Sample size (or number of clusters) by treatment arms
A. Supplemental CAL (T1) 116 classes (in 88 schools)
B. Supplemental Workbook (T2) 118 classes (in 86 schools)
C. Control (C) 118 classes (in 85 schools)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations were conducted before the beginning of the trial using Optimal Design software (Spybrook et al., 2009). We conservatively used the following parameters to estimate the sample size for the study:
• Intraclass correlation coefficient (adjusted for county-grade fixed effects): 0.10
• Average number of boarding students per class: 11
• R-squared: 0.40
With alpha = 0.05 and power = 0.8, we estimated that we would need 115 classes per treatment arm for a minimum detectable effect size (MDES) of 0.14 SDs. We expect to lose only a small amount of statistical power to (low rates of) student attrition from the start to the end of the school year. Based on information from past large-scale surveys in rural primary schools in western China, we assume an attrition rate of approximately 5%.
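How these parameters translate into the stated MDES can be sketched with the standard large-sample formula for a two-arm cluster-randomized comparison. This is a back-of-the-envelope check, not the Optimal Design routine (which uses t distributions and so differs slightly); it assumes the R-squared absorbs cluster-level variance, and `cluster_mdes` is an illustrative helper name.

```python
from statistics import NormalDist

def cluster_mdes(j_per_arm, n_per_cluster, icc, r2_cluster,
                 alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size (in SDs) for a two-arm
    cluster-randomized trial, with a covariate R-squared reducing the
    cluster-level variance component. Large-sample z approximation."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    # Design-adjusted variance per arm: between-cluster plus within-cluster
    var_term = icc * (1 - r2_cluster) + (1 - icc) / n_per_cluster
    return multiplier * (2 * var_term / j_per_arm) ** 0.5

# Registered parameters: 115 classes per arm, 11 boarding students per
# class, ICC = 0.10, R-squared = 0.40
print(round(cluster_mdes(115, 11, 0.10, 0.40), 2))  # ~0.14 SDs
```

With these inputs the approximation reproduces the registered MDES of roughly 0.14 SDs.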
IRB Name
Stanford University
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial unavailable to the public.