
Improving Community College Effectiveness Through Performance Incentives

Last registered on July 08, 2016

Pre-Trial

Trial Information

General Information

Title
Improving Community College Effectiveness Through Performance Incentives
RCT ID
AEARCTR-0001411
Initial registration date
July 08, 2016


First published
July 08, 2016, 5:19 PM EDT


Locations

Primary Investigator

Affiliation
University of Arkansas

Other Primary Investigator(s)

PI Affiliation
University of California, San Diego

Additional Trial Information

Status
In development
Start date
2016-01-01
End date
2019-05-01
Secondary IDs
Abstract
We will design and test the effect of monetary and non-monetary incentives on the achievement, persistence, and graduation rates of community college students. In partnership with a large community college in Indiana, we will use a randomized controlled trial to test: (1) the effect of incentive pay for instructors, and (2) the effect of combining instructor incentives with performance-based summer scholarships for students. These incentives will be based on student performance on standardized exams given throughout the semester.
External Link(s)

Registration Citation

Citation
Brownback, Andy and Sally Sadoff. 2016. "Improving Community College Effectiveness Through Performance Incentives." AEA RCT Registry. July 08. https://doi.org/10.1257/rct.1411-1.0
Former Citation
Brownback, Andy and Sally Sadoff. 2016. "Improving Community College Effectiveness Through Performance Incentives." AEA RCT Registry. July 08. https://www.socialscienceregistry.org/trials/1411/history/9318
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We will implement 3 treatments:

Treatment 1: Instructor incentives. Instructors will be paid bonuses based on the performance of their students on standardized exams.
Treatment 2: Student incentives. Students will be offered vouchers for free summer enrollment if they exceed certain thresholds on their standardized exams.
Treatment 3: Instructor and Student incentives. Both instructors and students will receive incentives based on the performance of students on standardized exams.
Intervention Start Date
2016-08-22
Intervention End Date
2017-05-26

Primary Outcomes

Primary Outcomes (end points)
Our short-term outcomes of interest are performance on standardized exams, course grades, and course completion.
Our medium-term outcomes of interest are: student retention, credit accumulation, graduation, and transfer to a 4-year college.
Our long-term outcomes of interest are: employment status and income.
Primary Outcomes (explanation)
Since a substantial percentage of students enroll at community colleges part-time, we will measure retention and credit accumulation relative to previous enrollment levels.

Our community college partners are in the process of establishing a framework for collecting data on employment status and income. Our long-term analysis intends to use data collected through this process.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In the fall semester, we will randomize instructors to 1) receive incentives based on their students' performance (Treatment 1) or 2) control. For instructors who teach more than one course or more than one section, incentive payments will be determined by only one of their sections.

In the spring semester, we will add in Treatments 2 and 3. We will continue to offer instructor incentives to all instructors in Treatment 1 from the fall semester but some will receive incentives alone (Treatment 1) and others will receive their incentives in conjunction with incentives for the students in their incentivized section (Treatment 3). Among the sections taught by control instructors, some will now be assigned to receive incentives for the students (Treatment 2), while others will continue as control.
Experimental Design Details
Randomization Method
Randomization will be done on a computer using STATA.
Randomization Unit
We will first pool all instructors of a given course, then block on instructor demographics (gender, age, tenure) and time of day (day or evening course). We will re-randomize until we achieve balance across enrolled student characteristics (gender, race, age, prior GPA), i.e., until no single characteristic differs across arms at a pre-specified p-value threshold. We will record this p-value so that we can use randomization inference to run exact significance tests on our results.
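The re-randomization loop described above can be sketched as follows. This is a minimal illustration in Python rather than Stata; as a stand-in for the study's pre-specified p-value test, the balance rule here uses standardized mean differences, and all names and thresholds are hypothetical:

```python
import random
import statistics

def rerandomize(covariates, n_treat, max_diff=0.2, max_draws=10_000, seed=42):
    """Re-draw a treatment assignment until every covariate is balanced.

    covariates: one dict per unit, e.g. {"age": 21, "gpa": 3.1}.
    Balance criterion (an illustrative stand-in for a balance t-test):
    the absolute standardized mean difference of each covariate between
    treatment and control must fall below max_diff.
    """
    rng = random.Random(seed)
    keys = list(covariates[0].keys())
    units = list(range(len(covariates)))
    for _ in range(max_draws):
        treat = set(rng.sample(units, n_treat))
        balanced = True
        for k in keys:
            all_vals = [c[k] for c in covariates]
            t_mean = statistics.mean(covariates[i][k] for i in treat)
            c_mean = statistics.mean(covariates[i][k]
                                     for i in units if i not in treat)
            pooled_sd = statistics.pstdev(all_vals) or 1.0
            if abs(t_mean - c_mean) / pooled_sd >= max_diff:
                balanced = False
                break
        if balanced:
            return treat
    raise RuntimeError("no balanced assignment found within max_draws")
```

In practice the accepted draw's balance statistic (here, the `max_diff` threshold; in the study, the p-value) is recorded so that randomization inference can condition on the same acceptance rule.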
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We plan to cluster at the instructor level, with approximately 240 instructors participating over the course of two semesters. This figure accounts for 20% attrition among the 200 fall 2016 instructors; attriting instructors will be replaced by new instructors in the spring.
Sample size: planned number of observations
8,000 students per semester.
Sample size (or number of clusters) by treatment arms
On average, instructors teach 2 sections. These sections have, on average, about 20 students each.

Fall semester: We expect our pure control to have 100 instructors with 4,000 students. We will incentivize 1 section each for 100 instructors, yielding 2,000 students in incentivized sections and 2,000 students in non-incentivized sections taught by incentivized instructors. The students in non-incentivized sections will be analyzed as spillover and not pure control students.

Spring semester: We will have 75 instructors with 3,000 students in the pure control. We will also have 25 instructors teaching 500 students who will receive student incentives and 500 spillover students who will not receive incentives. Finally, 100 instructors will receive incentives for one of their sections, and 50 of these sections will also receive student incentives. Thus, we will have 1,000 students in the instructor incentive treatment, 1,000 in the combined incentive treatment, and 2,000 more spillover students.
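The per-arm counts above can be checked against the stated assumptions of roughly 2 sections per instructor and 20 students per section. A short arithmetic sanity check:

```python
# Sanity-check the planned fall and spring counts, assuming
# 2 sections per instructor and 20 students per section.
SECTION = 20

# Fall: 100 control instructors (2 sections each) and 100 incentivized
# instructors (1 incentivized section + 1 "spillover" section each).
fall_control = 100 * 2 * SECTION        # 4,000 pure-control students
fall_incentivized = 100 * 1 * SECTION   # 2,000 in incentivized sections
fall_spillover = 100 * 1 * SECTION      # 2,000 spillover students
assert fall_control + fall_incentivized + fall_spillover == 8_000

# Spring: 75 pure-control instructors, 25 student-incentive instructors,
# and 100 instructor-incentive instructors (50 of whom are combined
# with student incentives).
spring_control = 75 * 2 * SECTION           # 3,000 pure control
spring_student = 25 * 1 * SECTION           # 500 student-incentive
spring_student_spill = 25 * 1 * SECTION     # 500 spillover
spring_instructor = 50 * 1 * SECTION        # 1,000 instructor-incentive only
spring_combined = 50 * 1 * SECTION          # 1,000 combined incentives
spring_spillover = 100 * 1 * SECTION        # 2,000 spillover
assert (spring_control + spring_student + spring_student_spill
        + spring_instructor + spring_combined + spring_spillover) == 8_000
```

Both semesters sum to the 8,000 students per semester stated under planned observations.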
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our minimum detectable effect (with 80% power and 5% significance) for instructor incentives over the course of one semester is 0.19 standard deviations in test scores. Extending this to the full year study, we have a minimum detectable effect of 0.175 standard deviations. Our minimum detectable effect for combined incentives is 0.24 standard deviations.
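MDEs of this magnitude can be approximated with the standard formula for a cluster-randomized design. A minimal sketch; the intracluster correlation and allocation share used below are assumptions for illustration, as the registration does not report them:

```python
from statistics import NormalDist

def mde_clustered(n_clusters, cluster_size, icc, share_treated=0.5,
                  alpha=0.05, power=0.80):
    """Approximate minimum detectable effect (in SD units) for a
    cluster RCT, using the textbook formula
        MDE = (z_{1-alpha/2} + z_{power}) * sqrt(DEFF / (P(1-P) * N)),
    where DEFF = 1 + (m - 1) * icc is the design effect, m the cluster
    size, P the treated share, and N the total number of students.
    """
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)  # ~2.80 for 5% / 80%
    n_total = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * icc
    p = share_treated
    return multiplier * (deff / (p * (1 - p) * n_total)) ** 0.5
```

For example, `mde_clustered(200, 20, icc=0.1)` evaluates the one-semester instructor-incentive comparison under a hypothetical ICC of 0.1; the MDE shrinks with more clusters and grows with the ICC.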
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Arkansas Institutional Review Board
IRB Approval Date
2015-11-11
IRB Approval Number
15-10-238

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials