Disentangling Sources of Bias: Evidence from Advanced Placement Course Recommendations

Last registered on September 20, 2023

Pre-Trial

Trial Information

General Information

Title
Disentangling Sources of Bias: Evidence from Advanced Placement Course Recommendations
RCT ID
AEARCTR-0007927
Initial registration date
November 16, 2021

First published
November 21, 2021, 11:16 AM EST

Last updated
September 20, 2023, 10:25 PM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Massachusetts Boston

Other Primary Investigator(s)

PI Affiliation
University of Massachusetts Amherst
PI Affiliation
University of Massachusetts Amherst

Additional Trial Information

Status
In development
Start date
2023-09-01
End date
2024-09-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In this project, we seek to understand minority and female underrepresentation in advanced STEM courses in high school by investigating whether school counselors exhibit racial or gender bias during the course assignment process. We extend the analysis by attempting to disentangle whether any observed bias can be attributed to taste-based discrimination, statistical discrimination, or implicit bias. Using an adapted audit study, we ask a nationally recruited sample of school counselors to evaluate student transcripts that are identical except for the student names, which are randomized to signal a chosen race and gender combination. To identify the sources of bias, we include three additional experimental conditions. Understanding the underlying sources of racial and gender bias can help stakeholders and policymakers design better-targeted solutions to address the bias.
External Link(s)

Registration Citation

Citation
Francis, Dania, Angela de Oliveira and Carey Dimmitt. 2023. "Disentangling Sources of Bias: Evidence from Advanced Placement Course Recommendations." AEA RCT Registry. September 20. https://doi.org/10.1257/rct.7927-5.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
This study examines whether school counselors are more or less likely to recommend students (represented by their academic transcripts) for Advanced Placement (AP) calculus courses based on the race and gender suggested by the student names, and how prepared the counselor believes each student would be for the AP calculus course. If there is race or gender bias in the likelihood of recommendation or in the preparedness rating, we ask whether the source of the bias (statistical discrimination, implicit bias, or taste-based discrimination) can be identified through experimental interventions and survey-based measurements of those sources.
Intervention Start Date
2023-09-01
Intervention End Date
2024-06-30

Primary Outcomes

Primary Outcomes (end points)
1) recommendation for AP Calculus (yes or no)
2) preparedness for AP Calculus (scale of 1-10)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)

Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To answer the proposed research questions, we conduct a randomized experiment that modifies a correspondence audit study. We ask study participants to evaluate a set of six profiles, each randomly given either a Black-, White-, Hispanic-, or Asian-sounding name, or no name (blind review). Within each racial/ethnic category, names are also typically male- or female-sounding. For the Black- and White-sounding names, we additionally choose names indicative of high, medium, or low socioeconomic status, as demonstrated by sociological research on names.

Experimental Design Details
Not available
Randomization Method
The randomization will be conducted by Qualtrics survey software.
Randomization Unit
Individual. There are two levels of randomization. First, counselors are randomly assigned to one of four conditions: blinded transcripts without PSAT scores (control), blinded transcripts with PSAT scores (Treatment A), race/gender transcripts without PSAT scores (Treatment B), or race/gender transcripts with PSAT scores (Treatment C). Second, once assigned to a condition, the type of transcript presented for each of the four experimental transcripts (strong or borderline) is randomized, and, for those in the race/gender treatments, a randomized race/gender-signifying name is paired with each randomized transcript.
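A minimal sketch of this two-level randomization, using Python in place of the Qualtrics survey logic; the name pool here is a placeholder, as the actual study draws names validated by sociological research on race, gender, and SES connotations:

```python
import random

# Placeholder name pool; the study's real names are chosen to signal
# specific race/gender (and, for some, SES) combinations.
NAMES = ["Emily", "Jamal", "Maria", "Ming"]

ARMS = [
    "control",      # blinded transcripts, no PSAT scores
    "treatment_a",  # blinded transcripts, with PSAT scores
    "treatment_b",  # race/gender names, no PSAT scores
    "treatment_c",  # race/gender names, with PSAT scores
]

def assign_counselor(rng):
    """Level 1: assign a counselor to one of the four arms.
    Level 2: randomize the four experimental transcripts within the arm."""
    arm = rng.choice(ARMS)
    transcripts = []
    for _ in range(4):
        strength = rng.choice(["strong", "borderline"])
        # Names are attached only in the race/gender arms.
        name = rng.choice(NAMES) if arm in ("treatment_b", "treatment_c") else None
        transcripts.append({"strength": strength, "name": name})
    return {"arm": arm, "transcripts": transcripts}

rng = random.Random(0)  # seeded for reproducibility
print(assign_counselor(rng))
```

Note that this sketch draws arms with equal probability; the registered design assigns far more counselors to the race/gender arms than to the blinded arms (589 vs. 65), so the actual allocation would be weighted accordingly.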
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
1,308 School Counselors
Sample size: planned number of observations
5,232 transcript ratings across 1,308 School Counselors
Sample size (or number of clusters) by treatment arms
Blinded transcripts without PSAT scores - 65 counselors (260 transcripts)
Blinded transcripts with PSAT scores - 65 counselors (260 transcripts)
Race/gender transcripts without PSAT scores - 589 counselors (2356 transcripts)
Race/gender transcripts with PSAT scores - 589 counselors (2356 transcripts)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Using participant-level cluster randomization with treatment assigned at level 2, we conducted a power calculation to determine the number of clusters needed for a sufficiently powered study, using a standard alpha of .05. The estimated effect size was drawn directly from the pilot study (Francis, de Oliveira, and Dimmitt, 2019). Given the goals of the present study, we used the effect size for Black female students in assignment to AP coursework, 0.2, with a calculated effect-size variance of 0.32; the r-squared, 0.249, was also taken directly from the pilot study. The sample size is determined by the number of tests within each cluster: each cluster has four experimental evaluations (not counting the two baseline evaluations). Using Optimal Design software, this yields a required sample size of 218 evaluations per hypothesis. With 2 racial groups, 2 genders, 3 SES levels, and 2 statistical-discrimination groupings, we have 24 total hypotheses (2x2x3x2) for our main analysis, requiring 5,232 evaluations. Since each counselor conducts four experimental evaluations, a total of 1,308 counselors is required.
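The sample-size arithmetic above can be checked in a few lines; this is only a bookkeeping sketch, as the 218-evaluations-per-hypothesis figure itself comes from the Optimal Design power calculation, not from this code:

```python
# Factorial hypothesis count: 2 races x 2 genders x 3 SES levels
# x 2 statistical-discrimination groupings.
races, genders, ses_levels, stat_disc_groups = 2, 2, 3, 2
hypotheses = races * genders * ses_levels * stat_disc_groups

evals_per_hypothesis = 218   # from Optimal Design (alpha = .05, d = 0.2)
total_evaluations = hypotheses * evals_per_hypothesis

evals_per_counselor = 4      # experimental evaluations per counselor
counselors_needed = total_evaluations // evals_per_counselor

print(hypotheses, total_evaluations, counselors_needed)  # 24 5232 1308
```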
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Massachusetts Boston
IRB Approval Date
2023-03-01
IRB Approval Number
3292
Analysis Plan

There is information in this trial unavailable to the public.