Last registered on September 25, 2017

Trial Information

Name

Affiliation

Harvard University

PI Name

PI Affiliation

NYU Steinhardt School of Culture, Education, and Human Development

PI Name

PI Affiliation

The Abdul Latif Jameel Poverty Action Lab (J-PAL) South Asia

PI Name

PI Affiliation

University of California, San Diego

Status

Ongoing

Start date

2017-09-11

End date

2018-12-31

Keywords

Additional Keywords

Secondary IDs

Abstract

In mathematics instruction, it is quite common for teachers to explain a topic and then assign their students a number of practice exercises - either as classwork or homework - to build "procedural knowledge" (i.e., knowledge of the algorithms to be followed to solve a specific type of problem) and "fluency" (i.e., the capacity to solve these problems rapidly). Although there is general agreement among educators that this strategy is beneficial, prior studies have not determined the extent to which students actually benefit from it. We will use a computer-assisted learning (CAL) software called "Mindspark" to randomly assign 5,756 students in grades 4-7 in 9 private schools in India to receive or not receive practice exercises after they learn a new mathematical concept, and we will assess the impact on their procedural knowledge and fluency.

External Link(s)

Citation

Barros, Andreas et al. 2017. "How Much Do Students Benefit from Practice Exercises? Experimental Evidence from India." AEA RCT Registry. September 25. https://www.socialscienceregistry.org/trials/2455/history/21789

Experimental Details

Intervention(s)

We will use a computer-assisted learning (CAL) software to randomly assign students to either a "control" (i.e., business-as-usual) or a "treatment" (i.e., intervention) condition. The software, called Mindspark, was developed by Educational Initiatives (EI), a leading Indian education firm, over a 10-year period. It is currently used by over 400,000 students, has a database of over 45,000 questions, and administers over a million questions across its users every day. It can be delivered in school during the school day, before/after school at stand-alone centers, or through a self-guided online platform. It is also platform-agnostic: it can be deployed on computers, tablets, or smartphones, and it can be used both online and offline. A randomized evaluation of the before/after-school version of the program in 2015 found that it had a large impact on the math and language achievement of middle school students in Delhi (see Muralidharan et al. 2016).

We will use the in-school version of the software, which is currently accessed by over 86,000 students in 187 private schools in India and abroad. We do not evaluate the impact of the software by itself. Instead, we exploit the existing software and user database to study the impact of practice exercises on student learning. Students at private schools can interact with Mindspark during school hours and after school hours. They typically interact with the software in one 45-minute session per week in school and one 42-minute session per week at home.

Intervention Start Date

2017-09-18

Intervention End Date

2018-03-02

Primary Outcomes (end points)

We will administer paper-based, multiple-choice student assessments of math before (pre-test) and after (post-test) students' interaction with Mindspark. This is our primary outcome of interest.

To assess mechanisms, we will also collect the data from all the interactions that students have with the Mindspark platform (both its current and adapted versions). These data include: (a) students' initial preparation, as diagnosed by the software on the first session; (b) the set of questions (and grade-based level difficulty) presented to each student by the software; (c) students' response times to each question; and (d) students' answers to each question.

Primary Outcomes (explanation)

Following Muralidharan et al. (2016), we plan to: (a) develop an item map, ensuring that we cover a broad range of topics and domains; (b) draw on publicly-released items from domestic assessments (e.g., Student Learning Survey, Quality Education Study, and the Andhra Pradesh Randomized Studies in Education) and international assessments administered in India and other developing countries (e.g., the Program for International Student Assessment, the Trends in International Mathematics and Science Study, and Young Lives); (c) create different test booklets for adjacent grade combinations; (d) pilot the assessments with a small sample of students who are comparable to those who will participate in this study; and (e) scale and link the results across grades using Item Response Theory. We will also link our assessments to those of the impact evaluation in New Delhi to allow for comparisons across both studies.

Secondary Outcomes (end points)

Secondary Outcomes (explanation)

Experimental Design

We will randomly assign students to either: (a) a "treatment" group, which will receive a set of practice exercises during their interaction with the math content of the Mindspark software; or (b) a "control" group, which will not receive those exercises (see "Intervention (Public)" for the learning trajectories of control and treatment groups).

We will randomly assign students to either experimental group within each section-by-performance level stratum. We will first categorize students in each grade based on their performance on the Mindspark platform before the study begins (below or at/above the median for their section) and then run two separate lotteries: one for students performing below the median and one for students performing at or above the median.

Group sizes may differ slightly when the number of students in a stratum is odd. In this case, we randomly assign these left-over observations (or "misfits") to one of the two groups, within each stratum.
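The stratified lottery with misfit handling described above can be sketched as follows. This is an illustrative Python sketch, not the registry's actual code (which uses the -randtreat- Stata package); the data layout (student id, section, above-median indicator) is an assumption.

```python
import random

def stratified_assign(students, seed=2455):
    """Split students into treatment/control within each
    section-by-performance stratum. `students` is a list of
    (student_id, section, above_median) tuples (hypothetical layout).
    An odd leftover student in a stratum (a "misfit") is assigned
    to one of the two groups at random."""
    rng = random.Random(seed)
    # Build strata: (section, above_median) -> list of student ids
    strata = {}
    for sid, section, above_median in students:
        strata.setdefault((section, above_median), []).append(sid)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for sid in members[:half]:
            assignment[sid] = "treatment"
        for sid in members[half:2 * half]:
            assignment[sid] = "control"
        if len(members) % 2:  # misfit: random arm within the stratum
            assignment[members[-1]] = rng.choice(["treatment", "control"])
    return assignment
```

By construction, group sizes within any stratum differ by at most one student, which mirrors the "equal split with random left-overs" design stated in the registry.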

Experimental Design Details

Randomization Method

Randomization done in office by a computer, using the -randtreat- package in Stata.

Randomization Unit

Individual. See "Experimental Design (Public)" for information on stratification and handling of "misfits" (in case the number of observations in a given stratum is odd).

Was the treatment clustered?

No

Sample size: planned number of clusters

5,756 students in 9 schools (note that the treatment is assigned at the individual level - i.e., the design is not clustered)

Sample size: planned number of observations

5,756 students

Sample size (or number of clusters) by treatment arms

Equal split of students across the two experimental arms (with random allocation of "left-over" observations within strata, if the number of students in a given stratum is odd)

Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

0.06 standard deviations
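The registry does not state the assumptions behind the 0.06 SD figure. As a rough check, the standard MDE formula for an individually randomized two-arm trial is (z_{1-a/2} + z_power) * sqrt((1-R^2) / (p(1-p)N)). The sketch below, with assumed values of alpha = 0.05, power = 0.80, an equal split, and a covariate R-squared of roughly 0.35 (plausibly absorbed by the pre-test and strata, but an assumption here), reproduces an MDE of about 0.06 SD for N = 5,756.

```python
from math import sqrt
from statistics import NormalDist

def mde(n, p_treat=0.5, alpha=0.05, power=0.80, r2=0.0):
    """Minimum detectable effect (in SD units) for an individually
    randomized two-arm trial. r2 is the share of outcome variance
    absorbed by baseline covariates/strata (an assumption here)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * sqrt((1 - r2) / (p_treat * (1 - p_treat) * n))

# With 5,756 students, an equal split, and an assumed covariate
# R-squared of about 0.35:
print(round(mde(5756, r2=0.35), 2))  # ≈ 0.06
```

Without any covariate adjustment (r2=0), the same formula gives roughly 0.07 SD, so the registered 0.06 SD is consistent with some variance absorbed by stratification and baseline controls.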

IRB

INSTITUTIONAL REVIEW BOARDS (IRBs)

IRB Name

Institute for Financial Management and Research (IFMR)

IRB Approval Date

2017-09-10

IRB Approval Number

n/a

Post Trial Information

Is the intervention completed?

No

Is data collection complete?

Data Publication

Is public data available?

No

Program Files