Financial Incentives for Students, Parents, and Teachers in Houston, Texas

Last registered on March 28, 2017

Pre-Trial

Trial Information

General Information

Title
Financial Incentives for Students, Parents, and Teachers in Houston, Texas
RCT ID
AEARCTR-0001942
Initial registration date
March 27, 2017

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 28, 2017, 3:09 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
UNSW Australia Business School

Additional Trial Information

Status
Completed
Start date
2010-09-02
End date
2013-12-14
Secondary IDs
Abstract
Researchers conducted a randomized field experiment in fifty public schools in which students, parents, and teachers were rewarded with financial incentives for mastering mathematics objectives. On outcomes for which researchers provided direct incentives, there were large and statistically significant treatment effects. These incentivized behaviors translated into increases in math achievement and decreases in reading achievement. Two full years after the incentives were removed, students with high baseline test scores show statistically significant positive treatment effects in math and no deleterious impact on reading achievement. In stark contrast, students with low baseline test scores show no impacts in math and statistically significant negative effects in reading. To better understand these findings, researchers develop and calibrate a multi-period, multitask principal-agent model in which neither the principal nor the agent knows the mapping from actions to outputs, and in which there can be learning and dynamic complementarities through cumulative knowledge.
External Link(s)

Registration Citation

Citation
Fryer, Roland and Richard Holden. 2017. "Financial Incentives for Students, Parents, and Teachers in Houston, Texas." AEA RCT Registry. March 28. https://doi.org/10.1257/rct.1942-1.0
Former Citation
Fryer, Roland and Richard Holden. 2017. "Financial Incentives for Students, Parents, and Teachers in Houston, Texas." AEA RCT Registry. March 28. https://www.socialscienceregistry.org/trials/1942/history/15516
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
During the 2010-2011 school year, researchers partnered with fifty low-performing schools within the Houston Independent School District to evaluate the impact of providing financial incentives to fifth-grade students, their parents, and their teachers on student learning outcomes. The incentive program was implemented in 25 randomly chosen schools; the other 25 schools did not receive the incentive program and formed the control group.

Students in treatment schools received US$2 for every math objective they mastered in the Accelerated Math software program. Students who mastered 200 objectives received a $100 bonus. Parents also received $20 for every parent-teacher conference they attended to discuss their child’s math performance and $2 for each objective their child mastered if they had attended at least one conference. Teachers earned $6 for each parent-teacher conference held and up to $10,100 in performance bonuses tied to student achievement on standardized tests.
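As an illustrative (hypothetical) calculation based on these parameters: a student who mastered 200 objectives would have earned 200 × $2 = $400 plus the $100 bonus, or $500 in total, and a parent who attended all eight conferences while that child mastered 200 objectives would have earned 8 × $20 + 200 × $2 = $560.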

In total, during the 2010-2011 school year, researchers distributed $51,358 to teachers, $430,986 to parents, and $393,038 to students across the 25 treatment schools.
Intervention Start Date
2010-09-20
Intervention End Date
2011-06-01

Primary Outcomes

Primary Outcomes (end points)
Direct outcomes: Accelerated Math (AM) objectives mastered and parent-teacher conferences attended
Indirect outcomes: Student Achievement (State Math, State ELA, Aligned State Math, Unaligned State Math, Stanford 10 Math, Stanford 10 ELA); Survey Outcomes (Parents check HW more, Student prefers Math to Reading, Parent asks about Math more than Rdg.); and Attendance and Motivation (Attendance 2010-2011, Intrinsic Motivation Index)
Primary Outcomes (explanation)
Test scores assessed as part of the indirect outcomes are based on the Texas state-mandated standardized test and the Stanford 10.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Following approval from the district superintendent and other key district personnel, a letter was sent to the 71 elementary school principals whose schools had the lowest math performance in the district in the previous year. In August 2010, researchers met with interested principals to discuss the experiment and provided a five-day window for schools to opt into the randomization. The schools that signed up to participate served as the basis for the matched-pair randomization, and all randomization was done at the school level. Prior to the randomization, all teachers in the experimental group signed a (non-binding) commitment form pledging to use the Accelerated Math (AM) curriculum to supplement and complement their regular math instruction and indicating their intention to give all students a chance to master AM objectives on a regular basis, regardless of their treatment assignment. Students and parents were automatically enrolled in the program; parents could decline to participate by returning a signed opt-out form at any point during the school year, and students and parents could participate only jointly. Students and parents received their first incentive payments on October 20, 2010 and their last on June 1, 2011; teachers received incentives with their regular paychecks.

Before starting the program, students take an initial diagnostic assessment to measure mastery of math concepts, after which AM creates customized practice assignments that focus specifically on areas of weakness. Students then take the assignments home to work on (with or without their parents). After students scan their completed assignments into AM, the assignments are graded electronically; students must answer at least five of the six questions correctly to receive credit. Teachers then administer an AM test that serves as the basis for potential rewards; students are given credit for official mastery by answering at least four out of five questions correctly. Students earned $2 for every objective mastered in this way. Students who mastered 200 objectives were declared “Math Stars” and received a $100 completion bonus along with a special certificate.

Parents at treatment schools earned up to $160 for attending eight parent-teacher review sessions ($20 per session) in which teachers presented student progress using Accelerated Math Progress Monitoring dashboards. Parents and teachers were both required to sign the student progress dashboards and submit them to their school's program coordinator to receive credit. Additionally, parents earned $2 for their child's mastery of each AM curriculum objective, so long as they attended at least one conference with their child's teacher; this requirement applied retroactively, so objectives mastered before a parent's first conference also counted. Parents were not instructed on how to help their children complete math worksheets.

Fifth-grade math teachers at treatment schools received $6 for each academic conference held with a parent, in addition to being eligible for monetary bonuses through the HISD ASPIRE program, which rewards teachers and principals for improved student achievement. Each treatment school also appointed a Math Stars coordinator responsible for collecting parent-teacher conference verification forms and organizing the distribution of student reward certificates, among other duties. Coordinators received an individual stipend of $500, which was not tied to performance.

The administrative data include first and last name, date of birth, address, race, gender, free lunch eligibility, behavioral incidents, attendance, special education status, limited English proficiency (LEP) status, and measures of student achievement from state assessments and from a nationally normed assessment. State assessments are administered in April of each year, and the Stanford 10 is administered in May. Researchers use administrative data from 2008-09 and 2009-10 (pre-treatment) to construct baseline controls, and they use 2010-11 (treatment) and 2012-13 (post-treatment) data for the outcome measures. The direct outcomes for which incentives are provided are mastering math objectives via AM and attending parent-teacher conferences.

As controls, researchers use reading and math state test scores from the previous two years and their squares; an indicator for whether a student is missing a test score from a previous year; a set of race dummies; and indicators for free lunch eligibility, special education status, and limited English proficiency. Researchers also construct three school-level control variables: the percent of the student body that is black, the percent that is Hispanic, and the percent that is free lunch eligible. Researchers construct demographic variables for every 5th grade student in the district enrollment file in the experimental year and then take the mean value of these variables for each school. They assign each student who was present in an experimental school before October 1 to the first school they are registered with in the AM database. Outside the experimental group, researchers assign each student to the first school they attend according to the HISD attendance files and construct the school-level variables based on these school assignments.
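A minimal sketch of this school-level aggregation, using a hypothetical enrollment table with 0/1 indicator columns (the registration does not specify the actual data layout or variable names), might look like the following in Python:

    import pandas as pd

    # Toy enrollment file: one row per 5th grade student (hypothetical columns).
    df = pd.DataFrame({
        "student_id": [1, 2, 3, 4],
        "school_id":  ["A", "A", "B", "B"],
        "black":      [1, 0, 0, 1],
        "hispanic":   [0, 1, 1, 0],
        "free_lunch": [1, 1, 0, 1],
    })

    # The mean of each 0/1 indicator within a school is the school-level share.
    school_controls = (
        df.groupby("school_id")[["black", "hispanic", "free_lunch"]]
          .mean()
          .rename(columns={"black": "sch_pct_black",
                           "hispanic": "sch_pct_hispanic",
                           "free_lunch": "sch_pct_free_lunch"})
          .reset_index()
    )

    # Attach the school-level controls back to each student's record.
    students = df.merge(school_controls, on="school_id", how="left")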

Researchers also administered a survey to all parents and students in treatment and control schools (available in English and Spanish). The student survey includes information on time use, spending habits, parental involvement, attitudes toward learning, perceptions about the value of education, behavior in school, and an Intrinsic Motivation Inventory. The parent survey includes basic demographics such as parental education and family structure, as well as questions about time use, parental involvement, and expectations. Teachers in treatment and control schools were eligible to receive rewards for survey completion according to the number of students they taught: teachers with 1-20 students could earn $250, while teachers with 100 or more students could earn $500 (with $50 gradations in between). Teachers received their rewards only if at least 90 percent of the student surveys and at least 75 percent of the parent surveys were completed.

Researchers used a matched-pair randomization procedure. Researchers invited 71 schools to sign up for the randomization, 60 of which signed up. To conserve costs, researchers eliminated the 10 schools with the largest enrollment, leaving fifty schools from which to construct 25 matched pairs. To increase the likelihood that the control and treatment groups were balanced on a variable correlated with the outcomes of interest, researchers used past standardized test scores to construct the matched pairs. First, researchers ordered the full set of 50 schools by the sum of their mean reading and math test scores in the previous year. Then they designated every two schools from this ordered list as a “matched pair” and randomly drew one member of each matched pair into the treatment group and one into the control group.
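A minimal sketch of this matched-pair assignment, using hypothetical field names and Python's standard random module (the actual draw was done with a random number generator, as noted under Randomization Method below), is:

    import random

    def matched_pair_assignment(schools, seed=0):
        """Order schools by the sum of mean reading and math scores, pair
        consecutive schools, and randomly send one school in each pair to
        treatment and the other to control. Assumes an even number of schools."""
        rng = random.Random(seed)
        ordered = sorted(schools, key=lambda s: s["mean_reading"] + s["mean_math"])
        treatment, control = [], []
        for i in range(0, len(ordered), 2):
            pair = list(ordered[i:i + 2])
            rng.shuffle(pair)                  # coin flip within the matched pair
            treatment.append(pair[0]["school_id"])
            control.append(pair[1]["school_id"])
        return treatment, control

    # Toy usage with four hypothetical schools:
    schools = [
        {"school_id": "A", "mean_reading": 1450, "mean_math": 1500},
        {"school_id": "B", "mean_reading": 1480, "mean_math": 1510},
        {"school_id": "C", "mean_reading": 1400, "mean_math": 1420},
        {"school_id": "D", "mean_reading": 1465, "mean_math": 1490},
    ]
    treated, controls = matched_pair_assignment(schools)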

Researchers estimate intent-to-treat (ITT) effects using a regression that includes baseline covariates measured at the individual level, school-level variables (together, the parsimonious set of controls), and a set of matched-pair indicators. The ITT is an estimate of the impact of being offered the chance to participate in the experiment. All student mobility between schools after random assignment is ignored. Researchers only include students who were in treatment and control schools as of October 1 in the year of treatment. In HISD, school began August 23, 2010; the first student payments were distributed October 20, 2010. Results are presented both with and without the parsimonious set of controls.
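As a hedged sketch consistent with this description (the exact specification is not stated in this registration), the estimating equation takes a form such as

    Y_{ips} = \alpha + \beta \, \text{Treat}_s + X_i'\gamma + Z_s'\delta + \mu_p + \varepsilon_{ips}

where Y_ips is the outcome for student i in school s of matched pair p, Treat_s indicates assignment of school s to the incentive program, X_i are the individual-level baseline controls, Z_s are the school-level controls, mu_p are the matched-pair indicators, and beta is the ITT effect of being offered the program; given school-level randomization, standard errors would presumably be clustered at the school level.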

Researchers also examine a set of indirect outcomes that were not directly incentivized, including state assessments, Stanford 10 assessments, and several survey outcomes, such as student and parent engagement.

To measure the impact of the incentive experiment on intrinsic motivation, researchers administered to students the Intrinsic Motivation Inventory developed in Ryan, R. M. (1982), “Control and Information in the Intrapersonal Sphere: An Extension of Cognitive Evaluation Theory,” Journal of Personality and Social Psychology, 63: 397-427. The instrument assesses participants' interest/enjoyment, perceived competence, effort, value/usefulness, pressure and tension, and perceived choice while performing a given activity. Researchers include only the interest/enjoyment subscale in the surveys, which is the self-motivation measure. Only students with valid responses to all statements are included in the analysis of the overall score, as non-response may be confused with low intrinsic motivation. Researchers also report results for student attendance as a proxy for effort.

Finally, researchers investigate treatment effects for a set of predetermined subsamples: gender, race/ethnicity, pre-treatment test score quintiles, and whether a student is eligible for free or reduced-price lunch.
Experimental Design Details
Randomization Method
We ordered the full set of fifty schools by the sum of their mean reading and math test scores in the previous year. Then we designated every two schools from this ordered list as a “matched pair” and randomly drew one member of the matched pair into the treatment group and one into the control group using a random number generator on a computer.
Randomization Unit
School
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
50 schools (25 pairs)
Sample size: planned number of observations
3,428 5th grade students
Sample size (or number of clusters) by treatment arms
25 schools (1,693 students) in treatment and 25 schools (1,735 students) in control groups
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University Committee on the Use of Human Subjects in Research
IRB Approval Date
2010-09-16
IRB Approval Number
F19637

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
June 30, 2011, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
May 10, 2013, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
50 schools
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
3,428 students
Final Sample Size (or Number of Clusters) by Treatment Arms
25 schools in control and 25 schools in treatment groups
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
This is an updated version of the working paper; it includes the same Houston experiment as well as another experiment run in DC.
Citation
Abstract
Researchers conducted a randomized field experiment in fifty public schools in which students, parents, and teachers were rewarded with financial incentives for mastering mathematics objectives. On outcomes for which researchers provided direct incentives, there were large and statistically significant treatment effects. These incentivized behaviors translated into increases in math achievement and decreases in reading achievement. Two full years after the incentives were removed, students with high baseline test scores show statistically significant positive treatment effects in math and no deleterious impact on reading achievement. In stark contrast, students with low baseline test scores show no impacts in math and statistically significant negative effects in reading. To better understand these findings, researchers develop and calibrate a multi-period, multitask principal-agent model in which neither the principal nor the agent knows the mapping from actions to outputs, and in which there can be learning and dynamic complementarities through cumulative knowledge.
Citation
Fryer, Roland G., Jr., and Richard T. Holden. 2013. "Multitasking, Dynamic Complementarities, and Incentives: A Cautionary Tale." Working Paper, December.

Reports & Other Materials