Effects of monitoring on higher education instructional quality and effects on student performance

Last registered on February 27, 2022

Pre-Trial

Trial Information

General Information

Title
Effects of monitoring on higher education instructional quality and effects on student performance
RCT ID
AEARCTR-0009011
Initial registration date
February 26, 2022


First published
February 27, 2022, 12:31 PM EST


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
United States Military Academy

Other Primary Investigator(s)

PI Affiliation
United States Military Academy
PI Affiliation
United States Military Academy
PI Affiliation
University of Virginia

Additional Trial Information

Status
Ongoing
Start date
2021-08-09
End date
2024-06-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This experiment is designed to determine whether and how instructors in higher education respond to monitoring, and how those changes affect student outcomes. Substantial research over the last two decades has established the importance of instructor quality in determining the outcomes of students in K-12 education. Studies employing rich longitudinal data have shown that these differences in instructor quality have implications beyond just contemporaneous test scores; they translate into meaningful differences in long-term life outcomes. For a number of reasons, including the lack of standardized assessments and curricula, large-scale causal evidence around the importance of instructor quality in higher education lags that of K-12 education. However, a few key quasi-experimental studies suggest that instructor quality plays a meaningful role in college students' success.

While a strong literature explores which practices lead to greater success in the college classroom, little is known about which policies administrators can implement to improve classroom practice. We attempt to fill this gap by studying the effects of monitoring and feedback on instructor quality and student outcomes. This study is, to our knowledge, the first to develop and assess the quality of an in-class observation rubric for higher education. Furthermore, the randomization of student assignment to instructors and of instructor assignment to treatment provides additional opportunities to test the causal effect of instructors on student performance on standardized tests and in other classes. Finally, the research design directly measures how teaching practices change over the course of a semester as a result of monitoring and feedback. Lesson schedules are standardized across instructors, as are assignments and due dates. This standardization also allows us to assess changes in student performance directly before and after instructor observations. As a result, the design lets us separate what instructors learn from their feedback from how they prepare differently in response to the possibility of an observation.
External Link(s)

Registration Citation

Citation
Gardiner, Ashley et al. 2022. "Effects of monitoring on higher education instructional quality and effects on student performance." AEA RCT Registry. February 27. https://doi.org/10.1257/rct.9011-1.0
Experimental Details

Interventions

Intervention(s)
The intervention is to monitor instructor performance in higher education. Monitoring has the potential to affect instructor effort in the expectation of being observed, as well as their performance after being observed and receiving feedback. The design allows us to parse these effects out and determine whether in-class observations provide a viable tool for administrators to improve instructor quality and student performance in higher education.
Intervention Start Date
2021-08-26
Intervention End Date
2023-05-20

Primary Outcomes

Primary Outcomes (end points)
We will examine how monitoring and instructor quality affect the following outcomes:
1. Student test and assignment performance
2. Student evaluation of the course and of their specific instructor
3. Correlation between evaluation score, student course evaluations, and student performance
4. Individual instructor improvements between courses as assessed by in-class observations and student performance
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Student success in other courses outside the one observed
2. Student success in follow-on courses that rely on the material learned in the class observed
3. Student success following graduation
4. Student preferences for switching majors and the correlation with instructor quality and instructor improvement
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We plan to randomly assign instructors to receive three in-class observations either in the Fall semester or the Spring semester. As a result, we have a randomly selected sample for treatment in both semesters. Randomization will be stratified based on instructor experience, and it will be conducted using a computer random-number generator. We expect a total of about 30 different instructors per year with two semesters of teaching each. Class sizes are standardized, as are tests and lesson pacing.

The research design allows us to identify the effect of in-class observations on instructor practice and student learning in several unique ways. First, given the control group, we can assess the overall effect of observation on student performance on standardized exams. Second, we can identify how instructors improve from one observation to the next. Third, we can assess the inter-rater reliability of our observation rubric. Fourth, we can measure whether and the extent to which in-class observations have cumulative effects on student outcomes.

To measure our outcomes of interest, we will rely on (1) instructor scores from observations to test for instructor improvement, (2) student performance data retrieved from administrative data sources, and (3) student assessments of instructor quality. The administrative data includes individual scores on course assignments and summary information on overall academic performance.

Treatment consists of three in-class observations that occur during three pre-specified windows in the semester. The exact day of each visit is unknown to instructors. Observers will use an observation rubric designed on the basis of the existing observation literature. We will then draw student performance measures from administrative data covering students' performance in all of their current and prior courses.
Experimental Design Details
Not available
Randomization Method
Randomization conducted by computer. Instructors are randomized into a treatment semester, and the timing of observations is randomly determined as well.
Randomization Unit
Randomization occurs at two levels. First, instructors are randomly assigned to be treated or not treated. Treated instructors will receive three unannounced in-class observations throughout their treated semester. For instructors in the treatment group, the timing of the three observations is randomly determined within pre-specified windows of observation.

Treatment is clustered in the sense that all students of a given instructor form a single cluster. Randomization was also blocked on instructor years of experience.
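The two-level randomization described above can be sketched as follows. This is an illustrative sketch, not the registered procedure: instructor identifiers, experience strata, and observation-window bounds are hypothetical placeholders.

```python
import random

def assign_treatment(instructors, seed=0):
    """Block-randomize instructors to a treatment semester ("Fall" or
    "Spring") within experience strata, splitting each stratum as
    evenly as possible between the two semesters."""
    rng = random.Random(seed)
    assignment = {}
    # Group instructors into blocks by years of experience.
    strata = {}
    for name, experience in instructors:
        strata.setdefault(experience, []).append(name)
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for i, name in enumerate(members):
            assignment[name] = "Fall" if i < half else "Spring"
    return assignment

def draw_observation_days(windows, seed=0):
    """Draw one unannounced observation day per pre-specified window.
    Each window is an inclusive (first_day, last_day) pair."""
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for lo, hi in windows]
```

Blocking on experience before drawing assignments guarantees balance on that covariate by construction rather than relying on chance.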
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
30
Sample size: planned number of observations
1200
Sample size (or number of clusters) by treatment arms
600 treatment, 600 control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.18 standard deviations in student test performance
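As a rough back-of-envelope illustration (not the registered power calculation), the standard design-effect formula for a two-arm cluster-randomized comparison of means shows how an MDE of this magnitude can arise with 30 clusters of roughly 40 students each; the intra-cluster correlation used here is an assumed value for illustration, and the actual design's within-instructor variation across semesters would change the arithmetic.

```python
import math
from statistics import NormalDist

def clustered_mde(n_clusters, cluster_size, icc, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect, in standard-deviation
    units, for a two-arm cluster RCT with clusters split evenly
    between arms (normal approximation, unit-variance outcome)."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80
    # Design effect: variance inflation from intra-cluster correlation.
    deff = 1 + (cluster_size - 1) * icc
    total_n = n_clusters * cluster_size
    # Standard error of a difference in means with total_n/2 per arm.
    se = math.sqrt(4 * deff / total_n)
    return multiplier * se

# With 30 clusters of 40 students and a low assumed ICC, the MDE is
# in the neighborhood of the registered 0.18 SD.
mde = clustered_mde(30, 40, icc=0.01)  # ≈ 0.19 SD
```

Because the design effect grows linearly in the ICC, even modest intra-cluster correlation dominates the power calculation when clusters are this large.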
IRB

Institutional Review Boards (IRBs)

IRB Name
Collaborative Academic Institutional Review Board
IRB Approval Date
2021-08-09
IRB Approval Number
CA21-008