Determinant of Professor Selection and Mitigating Gender Bias

Last registered on February 14, 2024


Trial Information

General Information

Determinant of Professor Selection and Mitigating Gender Bias
Initial registration date
February 12, 2024

The initial registration date is when the registration was submitted to the Registry to be reviewed for publication.

First published
February 14, 2024, 4:59 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.


Primary Investigator

Arizona State University

Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Using survey data collected at Arizona State University, this paper investigates students' willingness to pay for professor characteristics featured on online review platforms such as RateMyProfessor. Specifically, the study analyzes students' perceptions of the quality and difficulty ratings on such websites and models their expectations of these characteristics after receiving the signal, which in this context is the rating. With the gathered data, the aim is to disentangle gender-based taste-based discrimination, statistical discrimination, and the role model effect, yielding deeper insight into the role gender plays in professor selection.

In the latter part of the survey, participants watch a video that serves as either a control or a treatment. The treatment video educates students about the gender bias prevalent in the ratings: women tend to receive harsher ratings than their male counterparts. This intervention seeks to guide students toward forming expectations of quality and difficulty that account for the biases between female and male professors, thereby mitigating statistical discrimination in their decision-making.
External Link(s)

Registration Citation

El Khoury, Stephanie. 2024. "Determinant of Professor Selection and Mitigating Gender Bias." AEA RCT Registry. February 14.
Experimental Details


The intervention takes place during the first survey administered to students, which randomly sorts them into control and treatment groups. The control group watches a 2.5-minute video describing life on the Tempe campus and the different activities students can enjoy. The treatment group watches a 3-minute video presenting information about gender biases in reviews. Specifically, students are told that female professors are rated more harshly on quality and difficulty on professor review websites (holding female and male professors otherwise identical). They are also told that these harsher reviews are unfair to women, as they hinder their career development and reduce the likelihood of students taking courses with them.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Scores, the choice variable used to compute WTP and utility, study effort, and expected quality and expected difficulty.
Primary Outcomes (explanation)
Expected quality and difficulty are constructed using the quality or difficulty signal given to the student in the scenario, combined with the means of the signal distributions.
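The registration does not specify the updating rule. A standard way to combine a noisy signal with a known mean, consistent with the variances elicited later in the survey, is normal Bayesian shrinkage: the posterior expectation is a precision-weighted average of the prior mean and the observed signal. The function below is an illustrative sketch under that assumption; the variable names are hypothetical, not from the study.

```python
def posterior_mean(signal, mu_prior, var_prior, var_signal):
    """Posterior expectation of quality (or difficulty) after observing a
    noisy rating, under a normal prior and normal signal noise.

    The weight on the signal rises with prior uncertainty (var_prior) and
    falls with signal noise (var_signal)."""
    w = var_prior / (var_prior + var_signal)  # weight placed on the signal
    return w * signal + (1 - w) * mu_prior

# Example: prior mean quality 3.0 with variance 1.0; observed rating 4.0
# with signal variance 1.0 -> expectation shrinks halfway toward the signal.
expected_quality = posterior_mean(4.0, mu_prior=3.0, var_prior=1.0, var_signal=1.0)
```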

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
- Students are recruited to take part in the survey.
- The survey asks students multiple questions about their preferences, then presents scenarios with varying professor characteristics (quality, difficulty, research faculty status, gender). For each scenario, students choose a professor and report the grade and effort they expect with each professor.
- Students also report their beliefs about the means of the quality and difficulty signals as well as their true values. Additional questions allow us to compute the variances of the "true" quality and difficulty distributions along with the signal distributions.
- Toward the end of the survey, students are shown either a treatment or a control video.
- After 3 days, treated students are sent an email with bullet points summarizing the video they were shown.
- After a week, students are given a second survey so we can see whether they have internalized the information about gender biases.
- Using these two surveys, I can construct willingness-to-pay measures for quality, difficulty, research faculty status, and gender. Then, using the mechanisms outlined in my model, I can disentangle statistical and taste-based discrimination as well as the gender matching effect.
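The registration does not state the functional form behind the WTP measures. In a linear random-utility model, a common way to express willingness to pay for one attribute is as a ratio of coefficients, denominated in units of another attribute (here, quality, since no price is observed). The sketch below uses made-up coefficients purely for illustration.

```python
# Hedged sketch: assume utility is linear in attributes,
#   U_j = b_quality*quality_j + b_difficulty*difficulty_j
#         + b_research*research_j + b_female*female_j + eps_j.
# Then WTP for one unit of an attribute, measured in quality units,
# is the ratio of its coefficient to the quality coefficient.

def wtp_in_quality_units(beta_attr, beta_quality):
    """WTP for one unit of an attribute, expressed in quality units."""
    return beta_attr / beta_quality

# Illustrative (made-up) coefficients, not estimates from the study:
betas = {"quality": 0.8, "difficulty": -0.4, "research": 0.2, "female": 0.1}
wtp = {k: wtp_in_quality_units(v, betas["quality"])
       for k, v in betas.items() if k != "quality"}
```

A negative WTP for difficulty, for example, means students would give up quality to avoid a more difficult professor.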
Experimental Design Details
Randomization Method
Qualtrics randomizes participants across the two arms with a balanced 1:1 allocation, so the number of treated students equals the number of control students.
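The balanced 1:1 allocation that Qualtrics performs can be sketched as a complete randomization: build a list with exactly half treatment and half control labels, then shuffle it. This is an illustrative stand-in, not Qualtrics' internal implementation.

```python
import random

def balanced_assign(n, seed=0):
    """Assign n participants to treatment/control with a forced 1:1 split
    (treatment gets the extra slot dropped when n is odd)."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    arms = ["treatment"] * (n // 2) + ["control"] * (n - n // 2)
    rng.shuffle(arms)
    return arms

assignments = balanced_assign(3000)
```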
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
3000 individuals
Sample size: planned number of observations
3000 students
Sample size (or number of clusters) by treatment arms
1500 treated students, 1500 control students
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Knowledge Enterprise Research Integrity and Assurance
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public; access may be requested through the Registry.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials