
Promoting Early Grade Reading & Numeracy in Tanzania

Last registered on March 25, 2020

Pre-Trial

Trial Information

General Information

Title
Promoting Early Grade Reading & Numeracy in Tanzania
RCT ID
AEARCTR-0000291
Initial registration date
February 09, 2015


First published
February 09, 2015, 5:52 PM EST


Last updated
March 25, 2020, 4:02 PM EDT


Locations

Primary Investigator

Affiliation
University of California, San Diego

Other Primary Investigator(s)

PI Affiliation
University of Virginia

Additional Trial Information

Status
Completed
Start date
2013-02-06
End date
2015-12-31
Secondary IDs
Abstract
Overall student learning levels remain extremely low across East Africa, despite more than a decade of major reforms and significant new investments in public education. In Kenya, Tanzania and Uganda, recent nationwide surveys show that large majorities of children are unable to read or do arithmetic at the required level (Uwezo at Twaweza, 2011). While the challenges facing the education sector are well known, existing reforms and aid instruments have largely failed to improve the situation.

At present, the two main approaches used by governments to improve the quality of education in East Africa are to strengthen teacher training and to disburse a capitation grant for books and related activities to schools. On the former, several studies show formal levels of teacher qualification to be only weakly correlated with performance. On the latter, it is difficult to establish the impact of capitation grants on learning outcomes, both because the full amount of the grant has not consistently reached schools and because the grants have not been rigorously evaluated. At the same time, even if funds were to flow well, under current arrangements no one is held accountable or incentivized to achieve learning. The lack of adequate attention to accountability and incentives may in part explain why increased budgets for education have not resulted in improved learning outcomes. So while government programs have largely focused on providing educational inputs, recent evidence suggests that it may be more effective to incentivize the delivery of learning outcomes, particularly at the local level (see Glewwe and Kremer 2005, and Kremer and Holla 2009).

In short, we see three key challenges to improving education in East Africa: (1) how to consistently get resources to the lowest (school/community) levels, (2) how to effectively invest (government and donor) resources by gearing the education process to emphasize learning outcomes rather than educational inputs, and (3) how to generate rigorous evidence of what works and have it inform national education debate, policy and practice.

In this project we seek to evaluate the impact of two separate approaches to improving early grade learning outcomes: providing schools with extra resources in the form of capitation grants (a grant per child enrolled), and providing teachers with incentives to achieve foundational learning outcomes in early grades (teachers receive a small bonus for each student who achieves an appropriate learning outcome). In addition, we will evaluate the combination of the two approaches (i.e. capitation grants and teacher incentives).

The evaluation is being implemented in 350 government primary schools in 10 districts in Tanzania between 2013 and 2014.

Registration Citation

Citation
Mbiti, Isaac and Karthik Muralidharan. 2020. "Promoting Early Grade Reading & Numeracy in Tanzania." AEA RCT Registry. March 25. https://doi.org/10.1257/rct.291-1.1
Former Citation
Mbiti, Isaac and Karthik Muralidharan. 2020. "Promoting Early Grade Reading & Numeracy in Tanzania." AEA RCT Registry. March 25. https://www.socialscienceregistry.org/trials/291/history/64985
Experimental Details

Interventions

Intervention(s)
Treatment Group 1, Capitation Grant (70 schools): In this group the capitation grant (CG) program, as dictated by the Tanzanian Ministry of Education and Vocational Training, is implemented at each school. Each school receives a grant of 10,000 Tsh per student, delivered in two tranches, one in March and one in July. This treatment focuses on making the existing policy work by channeling funds in full and more effectively to primary schools, and on testing the effects of basic information provision about the grants. The evaluation will seek to measure the extent to which the funds reach schools, the level of citizen engagement on the use of funds, and ultimately the impact of funds and information on improving learning outcomes. In 2013, the average CG distributed to schools was 7,646,429 Tsh.

Treatment Group 2, Cash on Delivery (70 schools): In this group, teachers of Kiswahili, English and Math in Grades 1, 2, and 3 are eligible to receive 5,000 Tsh per student who passes a given subject test at the end of the year, for a total of 15,000 Tsh possible per student. Teachers are not penalized for students who do not pass.

Treatment Group 3, Combination (70 schools): Schools in this treatment group receive both the Capitation Grant and the Cash on Delivery interventions.

All interventions were implemented directly by Twaweza and its District Partners, with the funds disbursed through the CG and COD interventions also provided by Twaweza. Within each intervention, information describing the intervention was distributed to schools and communities via school and community meetings in early 2013. The District Partners then followed up with additional school visits in July and August to answer any questions regarding the program. All students in Grades 1, 2, and 3 in schools in Treatment Groups 2 and 3 were tested in Kiswahili, English and Math at the end of the school year to determine teacher incentive payments. Tanzanian education professionals, following a structure similar to the Uwezo annual learning assessment, developed the subject tests for Grades 1, 2, and 3. The same schedule will be followed in 2014.
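To make the COD payment rule concrete, the sketch below shows one way the teacher bonuses could be tallied from end-of-year test results. This is a hypothetical illustration, not the program's actual payment code; the variable names (teacher_id, passed) and data layout are assumptions.

```stata
* Hypothetical sketch of the COD bonus rule (not the program's actual payment code).
* Assumes one row per student-subject pair, with teacher_id and passed (1 = passed the end-of-year test).
gen pay = 5000 * passed                      // 5,000 Tsh per passing student-subject pair; no penalty for failures
collapse (sum) bonus = pay, by(teacher_id)   // total bonus owed to each teacher
                                             // with three subjects, the maximum is 15,000 Tsh per student
list teacher_id bonus in 1/5
```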
Intervention Start Date
2013-03-07
Intervention End Date
2015-02-28

Primary Outcomes

Primary Outcomes (end points)
The main outcome variables are students' test scores, as a proxy for student learning. Assuming we are able to establish that the treatments had an impact on students' test scores, secondary outcome variables would be used to establish spillover effects, the mechanisms behind the main treatment effects, and how the treatment effects vary across student, household, teacher and school characteristics.

A sample of 30 students per school (10 students from each of Grades 1, 2, and 3) is to be tested in all treatment schools and in 140 control schools. Students will be tested before the intervention begins, at the end of the first year and at the end of the second year. The information from these tests will be used to calculate standardized test scores and compare achievement for children across treatment groups. We also collect detailed student information (e.g. age and gender); detailed school information (e.g. facilities, management practices and head teacher characteristics); detailed teacher information (e.g. education, age, experience and self-reported time use); and detailed household information (e.g. parents' engagement in the child's education, parents' own education, household composition, and assets owned by the household).

The information from our household, teacher and school surveys can be used to identify the mechanisms through which the treatments affect test scores. For example, we can look at the change in learning outcomes in non-incentivized subjects; how teachers spend their time in school; how schools spend the Capitation Grant funds (e.g. textbooks, scholarships, meals, etc.); whether schools increased the hours taught in the incentivized subjects; and whether households become more or less engaged in the child's education after the intervention. Using the baseline survey we can study how the treatment effects differ across student, household, teacher and school characteristics.
Primary Outcomes (explanation)
The main outcome variables will be standardized test scores. First, we will construct a standardized test score for each subject in each grade by subtracting the mean and dividing by the standard deviation of the test scores in the control group. Once we have subject-grade standardized test scores, we will aggregate them across grades and re-normalize (again dividing by the standard deviation of the test scores in the control group); this yields a subject-level standardized test score.

For some analyses we will also aggregate test scores across subjects by summing them and then re-normalizing (dividing by the standard deviation of the test scores in the control group).
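As an illustration of this procedure, the sketch below constructs the subject-grade z-scores and the cross-subject aggregate. It is a minimal sketch, not the authors' code; the variable names (score, subject, grade, control, student_id) and the long one-row-per-student-subject layout are assumptions.

```stata
* Hypothetical sketch of the standardization (illustrative variable names, not the authors' code).
* Assumes long data: one row per student-subject, with score, subject, grade, control (1 = control school), student_id.

* Subject-grade z-scores, normalized on the control group's mean and standard deviation.
bysort subject grade: egen mu = mean(cond(control == 1, score, .))
bysort subject grade: egen sd = sd(cond(control == 1, score, .))
gen z = (score - mu) / sd

* Aggregate across subjects: sum each student's subject z-scores, then re-normalize on the control group.
bysort student_id: egen z_sum = total(z)                 // constant within student
egen mu_all = mean(cond(control == 1, z_sum, .))
egen sd_all = sd(cond(control == 1, z_sum, .))
gen z_total = (z_sum - mu_all) / sd_all
```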

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This evaluation was conducted in 350 schools in 10 randomly selected districts across Tanzania. All government primary schools in each of the 10 districts were eligible, and 35 schools were randomly selected from each district to be part of the evaluation, with the probability that a school was chosen proportional to the number of students enrolled in the school. In each sampled district, 14 schools were assigned to the control group and 7 schools were assigned to each of the treatment groups (CG, COD and Combination).
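A minimal sketch of this district-stratified assignment is below. It is a hypothetical reconstruction, not the study's actual randomization code (which, as noted under Randomization Method, was run in Stata), and it assumes one observation per sampled school with a district identifier.

```stata
* Hypothetical sketch of the district-stratified assignment (not the study's actual code).
* Assumes one observation per sampled school (35 per district) with a district identifier.
set seed 20130201                     // placeholder seed
gen u = runiform()
sort district u
by district: gen rank = _n            // random order of schools within each district
gen str12 arm = "Control"             // ranks 1-14: control
replace arm = "CG"          if inrange(rank, 15, 21)
replace arm = "COD"         if inrange(rank, 22, 28)
replace arm = "Combination" if inrange(rank, 29, 35)
tab district arm                      // 14 / 7 / 7 / 7 schools per district
```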

The aims of the three treatment groups are as follows:
1. Capitation Grant (CG): Test the impact of providing capitation grants (while providing the community with information on these grants) on improving basic literacy and numeracy.
2. Cash on Delivery (COD): Test the impact of incentivizing teachers to achieve previously set, absolute levels of learning among students in Grades 1, 2, and 3.
3. Combination (both CG and COD): Test the impact of COD "on top of" existing programs and budgets, from the CG intervention, in creating an incentive to make better use of those resources.
Experimental Design Details
Randomization Method
Randomization done in office using Stata.
Randomization Unit
Random sampling: District level. Random treatment assignment: School level
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
350 schools in 10 districts (35 schools per district)
Sample size: planned number of observations
350 schools; 14,000 students (40 per school); 3,300 teachers (8-12 per school); 5,250 households (15 per school)
Sample size (or number of clusters) by treatment arms
70 schools Treatment 1 (Capitation Grant)
70 schools Treatment 2 (Cash on Delivery)
70 schools Treatment 3 (Combination, capitation grant and cash on delivery)
140 schools Control (no treatment)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We use Optimal Design software for all power calculations. We originally assumed an intra-cluster correlation of 0.1 (that is, the intra-cluster correlation for value added) and that 30% of the variation can be explained by baseline test scores, other covariates (such as age, gender, and school and teacher characteristics) and district fixed effects. Our main outcomes (the effect of capitation grants and cash on delivery) have a total of 280 clusters (140 control schools and 140 treatment schools: 70 with the pure treatment and 70 with the combination treatment). In other words, to estimate the effect of the capitation grant we have 140 control schools and 140 schools that receive the capitation grant, 70 from Treatment 1 and 70 from Treatment 3. Similarly, we have 140 schools that receive Cash on Delivery. To estimate the interaction between the two main treatments we only have 70 treatment schools (Treatment 3) and 140 control schools.

With 280 clusters (i.e. to estimate the effect of the Capitation Grant or of Cash on Delivery) and a significance level of 5%, we have: a minimum detectable effect size of 0.2 with power of 99.9%, a minimum detectable effect size of 0.15 with power of 97%, and a minimum detectable effect size of 0.1 with power of 75%. With 210 clusters (i.e. to estimate the interaction between the Capitation Grant and Cash on Delivery) and a significance level of 5%, we have: a minimum detectable effect size of 0.2 with power of 99.5%, a minimum detectable effect size of 0.15 with power of 92%, and a minimum detectable effect size of 0.1 with power of 62.5%. We are aware that the numbers for the interaction assume symmetry, so the true numbers will be slightly lower, but the difference won't be first order.

In practice, based on the first-year data, the intra-cluster correlation is 0.15 for Kiswahili, 0.06 for English and 0.14 for Math, and the proportion of the variation that can be explained by baseline test scores, other covariates (such as age, gender, and school and teacher characteristics) and district fixed effects is 40% for Kiswahili, 36% for English and 37% for Math. Using the most conservative estimates (an intra-cluster correlation of 0.15 and 36% of the variance explained by baseline characteristics), we have the following numbers. With 280 clusters and a significance level of 5%: a minimum detectable effect size of 0.2 with power of 99%, a minimum detectable effect size of 0.15 with power of 94%, and a minimum detectable effect size of 0.1 with power of 65%. With 210 clusters and a significance level of 5%: a minimum detectable effect size of 0.2 with power of 98.5%, a minimum detectable effect size of 0.15 with power of 86%, and a minimum detectable effect size of 0.1 with power of 53%.
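The figures above come from Optimal Design. As a rough cross-check only, the sketch below uses a simple design-effect approximation with the conservative first-year parameters; the formula and the assumed 30 tested students per school are our simplifications, not the Optimal Design calculation, so it only approximates the registered numbers.

```stata
* Rough back-of-the-envelope power check using a simple design-effect approximation
* (not the Optimal Design calculation reported above).
local rho   = 0.15          // conservative intra-cluster correlation from first-year data
local r2    = 0.36          // conservative share of variance explained by baseline covariates
local m     = 30            // assumed tested students per school
local J     = 140           // clusters per arm
local delta = 0.15          // candidate minimum detectable effect, in SD units

local deff  = 1 + (`m' - 1) * `rho'                          // design effect
local se    = sqrt((1 - `r2') * 2 * `deff' / (`J' * `m'))    // approximate SE of the treatment effect
local power = normal(`delta' / `se' - invnormal(0.975))
display "Approximate power: " %5.3f `power'                  // roughly 0.96, in the ballpark of the 94% reported above
```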
Supporting Documents and Materials

There is information in this trial that is unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
SMU Institutional Review Board (for Human Subjects Research)
IRB Approval Date
2013-01-01
IRB Approval Number
2013-008-MBII
Analysis Plan

Analysis Plan Documents

Pre-Analysis Plan

MD5: 145400c0b12251d4b0d1909862fe05d8

SHA1: cdb499e576521434087d9334d2579d42cf590659

Uploaded At: February 09, 2015

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Yes
Data Collection Completion Date
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
We present results from a large-scale randomized experiment across 350 schools in Tanzania that studied the impact of providing schools with (i) unconditional grants, (ii) teacher incentives based on student performance, and (iii) both of the above. After two years, we find (i) no impact on student test scores from providing school grants, (ii) some evidence of positive effects from teacher incentives, and (iii) significant positive effects from providing both programs. Most important, we find strong evidence of complementarities between the programs, with the effect of joint provision being significantly greater than the sum of the individual effects. Our results suggest that combining spending on school inputs (the default policy) with improved teacher incentives could substantially increase the cost-effectiveness of public spending on education.
Citation
Isaac Mbiti, Karthik Muralidharan, Mauricio Romero, Youdi Schipper, Constantine Manda, Rakesh Rajani, Inputs, Incentives, and Complementarities in Education: Experimental Evidence from Tanzania, The Quarterly Journal of Economics, Volume 134, Issue 3, August 2019, Pages 1627–1673, https://doi.org/10.1093/qje/qjz010

Reports & Other Materials