Reducing impact of student bias on feedback surveys: A pilot study
Last registered on April 28, 2021


Trial Information
General Information
Reducing impact of student bias on feedback surveys: A pilot study
Initial registration date
December 08, 2020
Last updated
April 28, 2021 2:10 PM EDT

This section is unavailable to the public.
Primary Investigator
The Ohio State University
Other Primary Investigator(s)
PI Affiliation
The Ohio State University
PI Affiliation
The Ohio State University
PI Affiliation
The Ohio State University
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
This project uses a randomized controlled trial to assess the efficacy of modified introductory language in mitigating implicit bias in student evaluations of instruction. So-called "cheap talk" scripts are presented to survey respondents in advance and describe the hypothetical biases that tend to arise in the study context (Cummings and Taylor, 1999). This is potentially a highly cost-effective strategy for improving the quality of information generated by student evaluations of instruction, while also minimizing inequities for under-represented populations.
External Link(s)
Registration Citation
Genetin, Brandon et al. 2021. "Reducing impact of student bias on feedback surveys: A pilot study." AEA RCT Registry. April 28. https://doi.org/10.1257/rct.6868-1.2.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
All interventions related to the study will be administered by the investigators via the standard online platform for student evaluations of instruction. A subset of student participants will be randomly selected to receive additional introductory text related to the research study prior to completing the usual online evaluation of instruction for their enrolled courses. Instructor participants will not be asked to complete any activities for the study, other than to give their consent to participate. Data collected as part of the intervention will also be linked to student records (race, gender, course grade, major, rank) and instructor records (race, gender) to assess whether the effects of the intervention are heterogeneous across groups.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The Student Evaluation of Instruction (SEI) instrument is used to measure the efficacy of the instructor's teaching. The SEI comprises ten questions:

1. The subject matter of this course was well organized
2. The instructor is well prepared
3. The instructor communicated the subject matter clearly
4. The instructor was genuinely interested in teaching
5. The instructor was genuinely interested in helping students
6. The instructor created an atmosphere conducive to learning
7. The course was intellectually stimulating
8. The instructor encouraged students to think for themselves
9. I learned a great deal from the instructor
10. Overall, I would rate this instructor as … [Poor, Fair, Neutral, Good, Excellent]

Responses to questions 1 through 9 range from Strongly Disagree to Strongly Agree.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
This research will utilize a “survey experiment” design, which involves embedding randomized text within a survey given to subjects. While all subjects will see the same questions (the current questions asked on the OSU Student Evaluation of Instruction), the introductory paragraph will vary across respondents.
Experimental Design Details
Not available
Randomization Method
Randomization occurs electronically through Stata.
Randomization Unit
Randomization occurs within each course section.
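The registry states that randomization is performed in Stata within each course section, with roughly 1,500 observations per treatment arm out of roughly 6,000 total (implying about four arms). As a rough illustration of what within-section balanced assignment could look like, here is a minimal Python sketch; the arm names, four-arm count, function name, and seed are all assumptions for illustration, not details from the study.

```python
import random

def assign_arms(section_rosters,
                arms=("control", "script_a", "script_b", "script_c"),
                seed=6868):
    """Illustrative sketch of within-section balanced randomization.

    The study itself randomized in Stata; this Python version, including
    the hypothetical arm labels, is only a demonstration of the design.
    """
    rng = random.Random(seed)
    assignment = {}
    for section, students in section_rosters.items():
        # Shuffle the roster within the section, then cycle through arms
        # so each section contributes (near-)equal counts to every arm.
        shuffled = list(students)
        rng.shuffle(shuffled)
        for i, student in enumerate(shuffled):
            assignment[student] = arms[i % len(arms)]
    return assignment
```

Because assignment cycles through the arms within each shuffled section roster, a section whose enrollment is a multiple of the number of arms contributes exactly equal counts to every arm; remainders are spread across arms at random.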
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Currently one university.
Sample size: planned number of observations
Approximately 6,000 observations.
Sample size (or number of clusters) by treatment arms
Approximately 1,500 observations for each treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
The Ohio State University's Behavioral & Social Sciences IRB
IRB Approval Date
IRB Approval Number
Analysis Plan
Analysis Plan Documents
Pre-Analysis Plan (April 28, 2021)

MD5: 9a9885da2a2dbe93519ddc4120909b9b

SHA1: 264354c8696dca8497011024c49d5fcf05f899a6

Uploaded At: April 28, 2021