Information Bias in Student Ratings

Last registered on August 25, 2025

Pre-Trial

Trial Information

General Information

Title
Information Bias in Student Ratings
RCT ID
AEARCTR-0016611
Initial registration date
August 22, 2025


First published
August 25, 2025, 8:46 AM EDT


Locations

Primary Investigator

Affiliation
Arizona State University

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2024-07-01
End date
2024-07-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This pilot randomized controlled trial tests whether students evaluate identical instructional content differently when they believe it is delivered by a female versus a male instructor, providing direct evidence on gender bias in professor evaluations. Undergraduate students at Arizona State University were randomly assigned at the individual level to view the same short lesson, with the instructor’s gender varied only through the voice-over. After viewing, students rated the instructor on overall quality and difficulty. We also examine heterogeneity by students’ engagement with professor review websites. The experiment is used to document the existence and direction of bias and to motivate subsequent confirmatory analyses and policy counterfactuals in the broader project. This study is retrospectively registered in the AEA RCT Registry.
External Link(s)

Registration Citation

Citation
El Khoury, Stephanie. 2025. "Information Bias in Student Ratings." AEA RCT Registry. August 25. https://doi.org/10.1257/rct.16611-1.0
Experimental Details

Interventions

Intervention(s)
Arms: Female-voice instructor, Male-voice instructor

What varies across arms: Only the perceived gender of the instructor, conveyed by the voice-over. The script, video, and content are identical across arms.

Delivery mode and timing: One-shot exposure during an online survey. Participants view the video and immediately report quality and difficulty of the lesson.

Target population and setting: Undergraduate students at Arizona State University completing the study online.


Intervention (Hidden)
Intervention Start Date
2024-07-01
Intervention End Date
2024-07-15

Primary Outcomes

Primary Outcomes (end points)
Overall quality rating of the instructor.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Overall difficulty rating of the instructor.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study is a two-arm, individual-level randomized controlled trial embedded in an online survey with undergraduate students at Arizona State University. Participants are randomly assigned 1:1 to view an identical short lesson in which only the perceived gender of the instructor varies: a female-voice instructor (treatment) or a male-voice instructor (control). The script and content are held constant across arms; the manipulation is delivered via the voice-over. After viewing, respondents report the quality and difficulty of the lesson.

The primary outcome is the overall quality rating of the instructor measured immediately after the video. Secondary outcomes include perceived difficulty. Randomization is implemented by the survey platform using simple individual assignment with equal probabilities; participants are unaware of the alternative condition, and enumerators have no discretion over assignment.
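The 1:1 individual-level assignment described above can be sketched as follows. This is a minimal illustration of balanced random assignment, not the study's actual implementation (the study used Qualtrics' built-in randomizer); the arm labels and seed are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2024)  # seed chosen for reproducibility of the sketch

# Balanced 1:1 assignment: 150 participants per arm, order shuffled
# so that each individual's arm is random.
n = 300
arms = np.repeat(["male_voice", "female_voice"], n // 2)
assignment = rng.permutation(arms)

print((assignment == "female_voice").sum())  # 150 per arm by construction
```

A permutation of a balanced label vector yields exactly equal arm sizes; a Bernoulli draw per participant would also give equal assignment probabilities but only approximately equal arm counts.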

Analyses are intention-to-treat. The main specification estimates the difference in mean outcomes between the female-labeled and male-labeled arms using OLS. Pre-specified heterogeneity analyses examine differences by students' self-reported engagement with professor review websites. This trial is retrospectively registered and provides exploratory evidence on the existence and direction of bias in evaluations.
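With a single binary treatment and no covariates, the OLS specification above reduces to a difference in arm means. The sketch below illustrates this on simulated ratings; the sample size matches the registration, but the outcome values, effect size, and rating scale are assumptions for illustration only, not study data or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (hypothetical): 300 students, balanced 1:1 assignment.
n = 300
treat = rng.permutation(np.repeat([0, 1], n // 2))      # 1 = female-voice arm
quality = 3.8 - 0.3 * treat + rng.normal(0, 1, n)        # assumed effect, for illustration

# ITT estimate: OLS of outcome on a treatment dummy equals the
# difference in arm means; SE below is the unequal-variance analogue.
diff = quality[treat == 1].mean() - quality[treat == 0].mean()
se = np.sqrt(quality[treat == 1].var(ddof=1) / (treat == 1).sum()
             + quality[treat == 0].var(ddof=1) / (treat == 0).sum())
print(f"ITT estimate: {diff:.3f} (SE {se:.3f})")
```

The heterogeneity analysis would extend this to an interaction between the treatment dummy and an indicator for review-website engagement, estimated in the same OLS framework.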

The identical protocol was re-fielded with a new sample in November 2024 using the same materials and procedures. Results are reported separately and in pooled robustness checks.
Experimental Design Details
Randomization Method
Qualtrics random assignment.
Randomization Unit
Individual. Assignment is 1:1 across the two arms within the survey platform.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
300 students
Sample size: planned number of observations
300 students
Sample size (or number of clusters) by treatment arms
150 students per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
ERA (Enterprise Research Administration) system
IRB Approval Date
Details not available
IRB Approval Number
STUDY00018638

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Yes
Data Collection Completion Date
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials