Do Students Benefit from Personalized Learning? Experimental Evidence from India
Last registered on September 27, 2017


Trial Information
General Information
Do Students Benefit from Personalized Learning? Experimental Evidence from India
Initial registration date
September 27, 2017
Last updated
September 27, 2017 4:27 PM EDT
Primary Investigator
New York University
Other Primary Investigator(s)
PI Affiliation
Harvard University
PI Affiliation
University of California, San Diego
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
There is mounting evidence that schoolchildren in many developing countries lag far behind their expected grade-level performance. Remedial education can help these low-performing students, but it does not address the needs of their high-performing peers. Ability-based grouping can benefit both types of students, but it may be too coarse to address students' individual learning needs. We will conduct an experiment in 14 public "model" schools in Rajasthan, India to evaluate the impact of personalized instruction, delivered through computer-assisted learning software, for 3,331 students in grades 6 to 8. We will compare a version of the software that provides students with only grade-appropriate activities (the typical approach used by most software products) with fully and partially customized versions of the program, as well as with a remedial version. We plan to use these comparisons to understand what type of personalization is most (cost-)effective in improving student learning.
External Link(s)
Registration Citation
de Barros, Andreas, Alejandro Ganimian and Karthik Muralidharan. 2017. "Do Students Benefit from Personalized Learning? Experimental Evidence from India." AEA RCT Registry. September 27.
Sponsors & Partners

There are documents in this trial that are unavailable to the public.
Experimental Details
We will use computer-assisted learning (CAL) software to randomly assign students to either a "control" (i.e., business-as-usual) condition or one of three "treatment" (i.e., intervention) conditions. The software, called Mindspark, was developed by Educational Initiatives (EI), a leading Indian education firm, over a 10-year period. It is currently used by over 400,000 students, has a database of over 45,000 questions, and administers over a million questions across its users every day. It can be delivered in school during the school day, before/after school at stand-alone centers, and through a self-guided online platform. It is also platform-agnostic: it can be deployed through computers, tablets, or smartphones, and used both online and offline. A randomized evaluation of the before/after-school version of the program in 2015 found that it had a large impact on the math and language achievement of middle school students in Delhi (see Muralidharan et al. 2016). We will use the in-school version of the software, which is targeted at Indian government schools. We do not evaluate the impact of the software by itself; instead, we exploit the existing software to study the impact of personalization on student learning.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
We will administer paper-based, multiple-choice math assessments before (pre-test) and after (post-test) students' interaction with Mindspark. This is our primary outcome of interest.
Primary Outcomes (explanation)
Following Muralidharan et al. (2016), we plan to: (a) develop an item map, ensuring that we cover a broad range of topics and domains; (b) draw on publicly-released items from domestic assessments (e.g., Student Learning Survey, Quality Education Study, and the Andhra Pradesh Randomized Studies in Education) and international assessments administered in India and other developing countries (e.g., the Program for International Student Assessment, the Trends in International Mathematics and Science Study, and Young Lives); (c) create different test booklets for adjacent grade combinations; (d) pilot the assessments with a small sample of students who are comparable to those who will participate in this study; and (e) scale and link the results across grades using Item Response Theory. We will also link our assessments to those of the impact evaluation in New Delhi to allow for comparisons across both studies.
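As an illustration of step (e), once item difficulties have been estimated separately for each booklet under a Rasch model, booklets that share common items can be placed on a single scale with a mean/mean linking constant. The sketch below is a minimal Python illustration with hypothetical item ids and difficulty values; the study's actual IRT scaling and linking may use a different (e.g., concurrent-calibration) approach.

```python
def link_constant(difficulties_a, difficulties_b, common_items):
    """Mean/mean linking under the Rasch model: the average shift that
    places form-B item difficulties on form A's scale, estimated from
    items appearing in both forms.  Inputs map item id -> difficulty."""
    shifts = [difficulties_a[i] - difficulties_b[i] for i in common_items]
    return sum(shifts) / len(shifts)

def rescale(difficulties_b, constant):
    """Place all form-B difficulties on the form-A scale."""
    return {item: b + constant for item, b in difficulties_b.items()}

# Hypothetical example: two adjacent-grade booklets sharing items q1 and q2.
form_a = {"q1": 0.5, "q2": 1.0}
form_b = {"q1": 0.0, "q2": 0.5, "q3": 1.0}
c = link_constant(form_a, form_b, ["q1", "q2"])
form_b_linked = rescale(form_b, c)
```

After linking, the unique form-B item (here, `q3`) is expressed on form A's difficulty scale, so student scores from adjacent-grade booklets become comparable.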
Secondary Outcomes
Secondary Outcomes (end points)
To assess mechanisms, we will also collect data on all the interactions that students have with the Mindspark platform (both its current and adapted versions). These data include: (a) students' initial preparation, as diagnosed by the software in the first session; (b) the set of questions (and their grade-level difficulty) presented to each student by the software; (c) students' response times to each question; and (d) students' answers to each question.
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We will randomly assign students within each section to one of four experimental groups: (a) a control group ("not customized"), in which students are provided with activities on the Mindspark platform that match their enrolled grade level; (b) a "fully customized" treatment group (T1), in which students complete activities that fully adjust to their "diagnosed" grade level (i.e., the way the Mindspark platform currently works); (c) a "partially customized" treatment group (T2), in which students complete activities that adjust to their diagnosed grade level, but in which the degree of customization is limited to one grade level below or above that level; or (d) a "remedial" treatment group (T3), in which students complete activities that adjust to their diagnosed grade level, but in which the degree of customization is limited to grades at or below that level.

We will randomly assign students in equal shares to one of the above experimental groups, at the individual level, within each section stratum. Group sizes may differ slightly when the number of students in a stratum is not divisible by four. In this case, we randomly assign "left-over" observations (or "misfits") to one of the four groups, within each stratum.
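The within-stratum assignment with "misfit" handling described above can be sketched as follows. This is a minimal Python illustration only; the actual assignment uses Stata's -randtreat- (see "Randomization Method" below), and the function and variable names here are hypothetical.

```python
import random

def assign_treatments(students_by_stratum,
                      arms=("control", "T1", "T2", "T3"), seed=42):
    """Randomly assign students to arms in equal shares within each stratum.

    'Left-over' students (when a stratum's size is not divisible by the
    number of arms) are each independently assigned to a random arm.
    """
    rng = random.Random(seed)
    assignment = {}
    for stratum, students in students_by_stratum.items():
        students = list(students)
        rng.shuffle(students)
        n_full = (len(students) // len(arms)) * len(arms)
        # Full blocks: cycle through the arms, giving equal shares.
        for i, student in enumerate(students[:n_full]):
            assignment[student] = arms[i % len(arms)]
        # Misfits: assign each remaining student to a random arm.
        for student in students[n_full:]:
            assignment[student] = rng.choice(arms)
    return assignment

# Hypothetical example: one section of 10 students yields 2 misfits.
section = {"section_6A": [f"student_{i}" for i in range(10)]}
result = assign_treatments(section)
```

With 10 students and 4 arms, 8 students are split 2-2-2-2 and the 2 misfits land in randomly chosen arms, so every arm receives at least 2 students.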
Experimental Design Details
Randomization Method
Randomization done in office by a computer (using the user-written -randtreat- command in Stata).
Randomization Unit
Individual. See "Experimental Design (Public)" for information on stratification and handling of "misfits" (in case the number of observations in a given stratum is not divisible by four).
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
3,331 students in 14 schools (note that treatment is assigned at the individual level, i.e., the design is not clustered)
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Equal split of students across four arms (three treatments and one control), with random allocation of "left-over" observations within sections (if the number of students in a given section is not divisible by four)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.121 standard deviations
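As a rough check on this figure, the minimum detectable effect for a pairwise comparison of two equally sized arms can be approximated with the standard two-sample power formula. The sketch below is a simplified Python illustration (alpha = 0.05, power = 0.80, roughly 3,331/4 students per arm); it ignores stratification gains, so the hypothetical `r2` parameter stands in for the share of outcome variance absorbed by baseline covariates, which presumably explains why the registered figure of 0.121 SD is somewhat smaller than the unadjusted approximation.

```python
from statistics import NormalDist

def mde_two_arm(n_per_arm, alpha=0.05, power=0.80, r2=0.0):
    """Approximate minimum detectable effect (in SD units) for a
    pairwise comparison of two equally sized arms.  r2 is the share of
    outcome variance explained by baseline covariates (0 = none)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return z * ((1 - r2) * 2 / n_per_arm) ** 0.5

# Roughly 3,331 / 4 students per arm, no covariate adjustment:
mde_unadjusted = mde_two_arm(833)   # about 0.14 SD
```

Covariate adjustment shrinks the MDE by a factor of sqrt(1 - r2), which brings the unadjusted approximation of roughly 0.14 SD down toward the registered 0.121 SD.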
IRB Name
Institute for Financial Management and Research (IFMR)
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers