Improving Community College Effectiveness Through Performance Incentives

Last registered on August 09, 2016

Pre-Trial

Trial Information

General Information

Title
Improving Community College Effectiveness Through Performance Incentives
RCT ID
AEARCTR-0001411
Initial registration date
July 08, 2016

First published
July 08, 2016, 5:19 PM EDT

Last updated
August 09, 2016, 4:04 PM EDT

Locations

Primary Investigator

Andy Brownback
Affiliation
University of Arkansas

Other Primary Investigator(s)

Sally Sadoff
PI Affiliation
University of California, San Diego

Additional Trial Information

Status
In development
Start date
2016-01-01
End date
2019-05-01
Secondary IDs
Abstract
We will design and test the effect of monetary and non-monetary incentives on the achievement, persistence, and graduation rates of community college students. In partnership with a large community college in Indiana, we will use a randomized controlled trial to test: (1) the effects of incentive pay for instructors, and (2) the effects of combining instructor incentives with performance-based summer scholarships for students. These incentives will be based on student performance on standardized exams given throughout the semester.
External Link(s)

Registration Citation

Citation
Brownback, Andy and Sally Sadoff. 2016. "Improving Community College Effectiveness Through Performance Incentives." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.1411-2.0
Former Citation
Brownback, Andy and Sally Sadoff. 2016. "Improving Community College Effectiveness Through Performance Incentives." AEA RCT Registry. August 09. https://www.socialscienceregistry.org/trials/1411/history/10071
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We will implement three treatments:

Treatment 1: Instructor incentives. Instructors will be paid bonuses based on the performance of their students on standardized exams.
Treatment 2: Student incentives. Students will be offered vouchers for free summer enrollment if they exceed certain thresholds on their standardized exams.
Treatment 3: Instructor and Student incentives. Both instructors and students will receive incentives based on the performance of students on standardized exams.
Intervention Start Date
2016-08-22
Intervention End Date
2017-05-26

Primary Outcomes

Primary Outcomes (end points)
Phase I: Covers the semesters in which the intervention takes place (fall 2016 and spring 2017) and enrollment in the semester immediately following the intervention.
Primary outcome: Performance on the standardized assessment in sections assigned to treatment or control
o In order to compare performance across multiple exams, we will standardize exam scores (mean 0, standard deviation 1) within course, using the full population of students taking the exam during that semester. The primary unit of analysis will be standard deviation units on the exam (a sketch of this standardization appears after the Phase I sensitivity analysis list).
Secondary outcomes:
o Credit accumulation during the intervention semester
o Student retention into subsequent semesters (spring 2017 for fall 2016; summer 2017 and fall 2017 for spring 2017), defined as whether a student is enrolled in at least one course as of the census date. Of particular interest is the impact of the student incentives on summer enrollment and subsequent retention into the fall.
o Faculty retention into subsequent semesters
o Faculty preferences for incentives as measured by survey responses
o Faculty time use and job satisfaction as measured by survey responses. We will correct for multiple hypothesis testing within the family of survey measures using the method developed by List et al. (2016).
o Student evaluation scores for treated and control faculty

Sensitivity analysis:
o For exams with both multiple choice and free response sections, test whether results differ on the free response section, to examine the potential influence of “gaming”
o Exclude from the analysis students who are exposed to multiple treatments in different courses during the 2016-2017 school year
o Examine the performance of students in “leftover” sections.
o Compare instructor and student attrition in the treatment and control groups.
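
As referenced above, here is a minimal sketch of the within-course standardization, written in Python for illustration (the registration specifies Stata for its own computations); the data frame and column names are our assumptions, not the registered analysis code.

import pandas as pd

def standardize_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Convert raw exam scores to standard deviation units (mean 0, SD 1)
    within each course-semester cell, using the full population of
    students taking that exam. Column names are hypothetical."""
    by_exam = df.groupby(["course", "semester"])["raw_score"]
    df["z_score"] = (df["raw_score"] - by_exam.transform("mean")) / by_exam.transform("std")
    return df

The resulting z_score column corresponds to the "standard deviation units" used as the primary unit of analysis.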

Phase II: Covers semesters subsequent to the intervention, for up to a total of six years (including the intervention year). We will collect data at the end of the fall and spring semesters each year. As these data become available, they will supersede the previous analyses as the primary outcome.
Primary outcome: Final degree attainment as defined by the National Student Clearinghouse
Secondary outcomes: Progress towards degree attainment
o Enrollment at Ivy Tech (our partner community college), defined as whether a student is enrolled in at least one course as of the census date.
o Credit accumulation at Ivy Tech
o Transfer to a 4-year institution
o Enrollment at a 4-year institution as defined by the National Student Clearinghouse
o Performance of spillover students
Sensitivity analysis:
o Exclude from the analysis students who are exposed to multiple treatments in different courses during the 2016-2017 school year

Phase III: Exploratory work to track the long-term impact of the intervention on labor market outcomes for up to six years (including the intervention year). The exact outcomes will be determined once Ivy Tech has established what data it will be collecting from the Indiana Department of Workforce Development. We would like to measure employment status and income.
Primary Outcomes (explanation)
Since a substantial percentage of students enroll at community colleges part-time, we will measure retention and credit accumulation relative to previous enrollment levels.

Our community college partners are in the process of establishing a framework for collecting data on employment status and income. Our long-term analysis intends to use data collected through this process.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In the fall semester, we will randomize instructors to 1) receive incentives based on their students' performance (Treatment 1) or 2) control. Most instructors teach two or fewer sections; these instructors will have all of their sections incentivized. Instructors teaching more than two sections will have two sections randomly selected to be incentivized. Incentivized sections are capped at 100 in total.

In the spring semester, we will add Treatments 2 and 3. We will continue to offer instructor incentives to all instructors in Treatment 1 from the fall semester, but we will randomly select one of their sections to also receive student incentives (Treatment 3). Among the sections taught by control instructors, 25 sections, each taught by a different instructor, will be assigned to receive incentives for the students (Treatment 2), while the others will continue as control sections.
Experimental Design Details
Randomization Method
Randomization will be done by computer using Stata.
Randomization Unit
We will first pool all instructors of a given course, then block on instructor demographics (gender, age, tenure) and time of day (day or evening course). We will randomize until we achieve balance across enrolled student characteristics (gender, race, age, prior GPA), such that no single characteristic differs at a pre-specified p-value. We will record this p-value so that we can use randomization inference to run exact significance tests on our results. (A sketch of this rerandomization rule appears below.)
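
A minimal Python sketch of the rerandomization rule just described, for illustration only (the actual randomization will be done in Stata); the two-sample t-test as the balance check, the block-level setup, and all names are our assumptions.

import numpy as np
from scipy import stats

def rerandomize(covariates, n_treated, p_threshold, seed=0):
    """Redraw treatment assignments within a block until no enrolled-student
    characteristic (e.g., gender, race, age, prior GPA) differs between
    arms at the pre-specified p-value."""
    rng = np.random.default_rng(seed)
    n_units = len(covariates[0])  # e.g., instructors within the block
    base = np.array([1] * n_treated + [0] * (n_units - n_treated))
    while True:
        treat = rng.permutation(base)
        pvals = [stats.ttest_ind(x[treat == 1], x[treat == 0]).pvalue
                 for x in covariates]
        if min(pvals) > p_threshold:
            # The recorded p_threshold defines the acceptance rule that
            # randomization-inference tests must replicate.
            return treat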
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We plan to cluster at the instructor level, with approximately 144 instructors participating over the course of two semesters. This figure accounts for 20% attrition among the 120 fall 2016 instructors; attriting instructors will be replaced by new instructors in the spring.
Sample size: planned number of observations
Approximately 4,800 students per semester. Some students will be observed multiple times, either across different courses or across different semesters. We will run a sensitivity analysis in which we drop students who appear as multiple observations.
Sample size (or number of clusters) by treatment arms
On average, instructors teach two sections, and each section has about 20 students.

Fall semester: We expect our pure control to have 60 instructors with 2,400 students. We will incentivize 100 total sections for the 60 instructors assigned to treatment, yielding 2,000 students in incentivized sections and 400 students in non-incentivized sections taught by incentivized instructors. The students in non-incentivized sections will be analyzed as spillover students rather than pure controls.

Spring semester: We will have 35 instructors with 1,400 students in the pure control. We will also have 25 instructors teaching 500 students who will receive student incentives, plus 500 spillover students who will not. Finally, from the 60 instructors who received instructor incentives in the fall, we will randomly select 50 sections (1,000 students) to receive combined incentives, with no instructor overseeing more than one of these sections, and 50 sections (1,000 students) to receive instructor incentives only. Thus, we will have 1,000 students in the instructor incentive treatment, 1,000 in the combined incentive treatment, and 400 more spillover students. (A quick arithmetic check of these counts appears below.)
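
As a sanity check on these counts, the following sketch reproduces the spring allocation arithmetic under the stated averages (2 sections per instructor, 20 students per section); it is illustrative only.

# Spring semester allocation, assuming 20 students per section and
# 2 sections per instructor on average.
STUDENTS_PER_SECTION = 20
pure_control   = 35 * 2 * STUDENTS_PER_SECTION          # 1,400 students
student_inc    = 25 * STUDENTS_PER_SECTION              # 500 incentivized
student_spill  = 25 * STUDENTS_PER_SECTION              # 500 spillover
combined       = 50 * STUDENTS_PER_SECTION              # 1,000 combined
instructor_inc = 50 * STUDENTS_PER_SECTION              # 1,000 instructor-only
extra_spill    = (60 * 2 - 100) * STUDENTS_PER_SECTION  # 400 spillover
total = (pure_control + student_inc + student_spill
         + combined + instructor_inc + extra_spill)
print(total)  # 4,800, matching the planned per-semester sample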
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our minimum detectable effect (with 80% power and 5% significance) for instructor incentives over the course of the study is 0.17 standard deviations in test scores. Our minimum detectable effect for combined incentives is 0.2 standard deviations. Our minimum detectable effect for the difference between the incentive arms is 0.125 standard deviations, since this is a more powerful, within-subject test. (A sketch of the underlying power calculation appears below.)
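
For context, here is a hedged sketch of the standard minimum-detectable-effect approximation for cluster-randomized designs; the intraclass correlation of 0.10 and the illustrative cluster counts are our assumptions, since the registration does not report its power-calculation inputs.

from scipy.stats import norm

def mde_clustered(n_clusters, cluster_size, icc, p_treated=0.5,
                  alpha=0.05, power=0.80):
    """Minimum detectable effect in SD units for a cluster-randomized
    design, using the standard Bloom-style approximation."""
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # multiplier, ~2.80
    var = (icc + (1 - icc) / cluster_size) / (p_treated * (1 - p_treated) * n_clusters)
    return m * var ** 0.5

# Example: 120 instructor clusters of ~40 students each with an assumed
# ICC of 0.10 gives an MDE of about 0.18 SD, in the neighborhood of the
# registered 0.17 SD figure for instructor incentives.
print(round(mde_clustered(120, 40, 0.10), 3))  # 0.179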
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Arkansas Institutional Review Board
IRB Approval Date
2015-11-11
IRB Approval Number
15-10-238

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials