Teaching with the test: Experimental Evidence on Diagnostic Feedback and Capacity-Building for Public Schools in Argentina
Last registered on September 19, 2017

Pre-Trial

Trial Information
General Information
Title
Teaching with the test: Experimental Evidence on Diagnostic Feedback and Capacity-Building for Public Schools in Argentina
RCT ID
AEARCTR-0002434
Initial registration date
September 19, 2017
Last updated
September 19, 2017 3:28 PM EDT
Location(s)
Region
Primary Investigator
Affiliation
New York University
Other Primary Investigator(s)
PI Affiliation
The World Bank
PI Affiliation
The World Bank
Additional Trial Information
Status
Completed
Start date
2013-06-03
End date
2016-06-30
Secondary IDs
Abstract
Large-scale assessments have allowed policy-makers, researchers, and the general public to compare learning outcomes across (national and sub-national) school systems and over time. This study examines whether they can achieve another, equally important goal: to provide useful information to improve school management and/or classroom instruction. We present experimental evidence on the impact of the use of large-scale assessments for diagnostic feedback and capacity-building. We randomly assigned 105 public primary schools in the province of La Rioja, Argentina to: (a) a diagnostic feedback group, in which we administered standardized tests in math and Spanish at baseline and two follow-ups and made their results available to the schools through user-friendly reports; (b) a capacity-building group, in which we did the same and also provided schools with professional development workshops for supervisors, principals, and teachers; or (c) a control group, in which we administered standardized tests only at the second follow-up.
External Link(s)
Registration Citation
Citation
Ganimian, Alejandro, Peter Holland and Rafael Hoyos. 2017. "Teaching with the test: Experimental Evidence on Diagnostic Feedback and Capacity-Building for Public Schools in Argentina." AEA RCT Registry. September 19. https://www.socialscienceregistry.org/trials/2434/history/21579
Experimental Details
Interventions
Intervention(s)
We administered: (a) student assessments in math and Spanish; and (b) surveys of students, teachers, and principals. Treatment schools were asked to participate in all rounds of data collection; control schools participated only in the 2015 round. The diagnostic feedback component consisted of user-friendly province- and school-level results reports made available to principals and supervisors. The capacity-building component consisted of five workshops for supervisors, principals, and teachers, plus two school visits. The workshops covered the student assessment results, school improvement plans, quality assurance, and geometry instruction. Each school visit included a meeting with the principal and his/her leadership team, a classroom observation, and a meeting with the teaching staff.
Intervention Start Date
2013-10-02
Intervention End Date
2015-11-03
Primary Outcomes
Primary Outcomes (end points)
We measure the impact of the interventions on learning outcomes, captured by standardized tests in math and Spanish.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We randomly assigned 105 public primary schools in the province of La Rioja, Argentina to three experimental groups: (a) Treatment 1: a diagnostic feedback group in which we administered standardized tests in math and Spanish at baseline and two follow-ups and made their results available to the schools through user-friendly reports; (b) Treatment 2: a capacity-building group in which we did the same and also provided schools with professional development workshops for supervisors, principals, and teachers; or (c) a control group, in which we administered standardized tests only at the second follow-up.
Experimental Design Details
Randomization Method
The randomization was done by running Stata code on an office computer.
Randomization Unit
We randomly assigned the 105 sampled schools to one of three experimental groups, stratifying our randomization by school size. First, we grouped all sampled schools into three strata by size: small (199 students or fewer), medium (between 200 and 350 students), and large schools (351 students or more). Then, we randomly assigned schools within each stratum to one of the three experimental groups.
Was the treatment clustered?
Yes
Experiment Characteristics
Sample size: planned number of clusters
105 schools
Sample size: planned number of observations
Baseline (2013): 10,050 students, 310 teachers
First follow-up (2014): 15,800 students, 454 teachers, 77 principals
Second follow-up (2015): 16,750 students, 443 teachers, 129 principals
Sample size (or number of clusters) by treatment arms
30 schools in Treatment 1, 30 schools in Treatment 2, 45 schools in Control group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
Yes
Intervention Completion Date
November 03, 2015, 12:00 AM +00:00
Is data collection complete?
Yes
Data Collection Completion Date
November 03, 2015, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
104 schools
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
Baseline (2013): 10,050 students, 310 teachers
First follow-up (2014): 15,800 students, 454 teachers, 77 principals
Second follow-up (2015): 16,750 students, 443 teachers, 129 principals
Final Sample Size (or Number of Clusters) by Treatment Arms
30 schools in Treatment 1, 30 schools in Treatment 2, 44 schools in Control group.
Data Publication
Data Publication
Is public data available?
Yes
Program Files
Program Files
No
Reports and Papers
Preliminary Reports
Relevant Papers