Hiring Discrimination Against Transgender Job Applicants in the US Labor Market

Last registered on April 17, 2024


Trial Information

General Information

Hiring Discrimination Against Transgender Job Applicants in the US Labor Market
Initial registration date
March 19, 2024

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 19, 2024, 5:42 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
April 17, 2024, 2:54 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.



Primary Investigator

University of Vermont

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We are conducting a correspondence study to measure hiring discrimination against transgender job applicants for entry-level positions in the US labor market. We randomly assign gender identity and race to fictitious resumes. We measure discrimination against both transgender women and men as well as non-binary applicants relative to cisgender men and women, and we compare differences between White and Black applicants. We consider labor markets across the United States in order to examine heterogeneity in discrimination based on local political climates.
External Link(s)

Registration Citation

Beam, Emily and Ivy Stanton. 2024. "Hiring Discrimination Against Transgender Job Applicants in the US Labor Market." AEA RCT Registry. April 17. https://doi.org/10.1257/rct.13199-1.1
Sponsors & Partners

There is information in this trial unavailable to the public. Use the button below to request access.

Request Information
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Callbacks: employer response to an application offering an interview or job to the applicant
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Phase 1: We will use data from Stanton (2023), comprising ~1,000 applications that included both cisgender and transgender White applicants in four markets.

Phase 2: For this correspondence study, we will submit 5,000 applications to 2,500 job postings with unique employers on Craigslist. The correspondence study methodology means that all applicant characteristics will be held constant or randomized between resumes. As a result, we will be able to attribute any differences in the likelihood of receiving a callback to the employer's perception of the traits (gender identity and race) made salient to the employer. This enables causal identification of the impact of gender identity and race on callback rates, which themselves are a strong proxy for employer demand.

We signal whether the applicant is a man or woman by randomly selecting first names from 2003 SSA data on newborn first names that are at least 90% female or 90% male.

We signal transgender identity by including a statement near the top of each resume stating that the applicant’s preferred and legal names differ. For example, “My preferred name is John, but my legal name is Mary.” One name is unambiguously masculine, while the other name is unambiguously feminine to establish a contrast in gender between the applicant’s legal name and preferred name that signals transgender identity.

We signal non-binary identity by listing they/them pronouns next to the names of applicants randomly selected to be non-binary. The first name is either one of the randomly selected male or female names or a gender-ambiguous first name. We randomly select gender-ambiguous first names from 2003 SSA data on newborn first names, retaining only names with a female share between 40% and 60% that are at least 85% White. There are no gender-ambiguous names that effectively signal Black identity.

We signal race by randomly selecting first and last names that are used predominantly by either White or Black individuals. The first names are drawn from 2003 SSA data on newborn first names and are at least 90% White or at least 50% Black. The surnames are drawn from the 2000 US Census and are at least 90% White or at least 40% Black. While there are many first and last names that are at least 90% White, there are few that are above 40% Black, which is why the cutoff percentages are lower for Black first and last names.
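As a rough illustration, the first-name screening thresholds above can be expressed as a single filter. The function and its share variables below are hypothetical; the actual shares come from the 2003 SSA birth-name data and the 2000 Census surname file, and surnames use the separate 90%/40% cutoffs described above.

```python
# Hypothetical sketch of the first-name screening rules. The thresholds
# mirror the registration: gender signal at a 90% share, ambiguous names
# at a 40-60% female share (White names only), race signal at 90% White
# or 50% Black for first names.

def classify_first_name(female_share, white_share, black_share):
    """Apply the registration's thresholds to one first name."""
    gender = None
    if female_share >= 0.90:
        gender = "female"
    elif female_share <= 0.10:
        gender = "male"
    elif 0.40 <= female_share <= 0.60 and white_share >= 0.85:
        gender = "ambiguous"  # usable only for non-binary White profiles
    race = None
    if white_share >= 0.90:
        race = "white"
    elif black_share >= 0.50:  # lower cutoff: few names are >50% Black
        race = "black"
    return gender, race
```

Names that clear neither threshold are simply excluded from the sampling pool.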

Market selection

Craigslist operates in 418 geographically distinct markets within the United States. However, some markets are not used frequently enough by employers to provide sufficient unique job postings for this study. 

We will include the four markets from Phase 1: Houston, Los Angeles, New York, and Phoenix.

We will screen out markets that do not have sufficient posting activity by documenting the number of postings in the key sectors (food and/or retail) in a specific window for each market and removing those with limited activity. This may also exclude some states from the study.

To select states from the remainder, we will generate a ranked list of states by vote share in the 2020 presidential election. We will create paired strata (the first and second most Republican, the third and fourth, etc.) and randomly select one state from each stratum to include in the study, yielding approximately 24 states.

In states with more than two markets, we will then randomly select two markets per state, with sampling probability proportional to population.
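The state-selection step above amounts to pair-stratified random sampling. The sketch below is a hypothetical illustration of that step only (the state labels and vote shares in the test data are invented, and the population-weighted market draw is not shown).

```python
import random

def select_states(states_by_rep_share, seed=0):
    """Pair-stratified sampling: rank states by 2020 Republican vote
    share, pair adjacent ranks (1,2), (3,4), ..., and keep one state
    per pair at random. An odd-length tail is kept as a singleton."""
    rng = random.Random(seed)
    ranked = sorted(states_by_rep_share,
                    key=states_by_rep_share.get, reverse=True)
    return [rng.choice(ranked[i:i + 2]) for i in range(0, len(ranked), 2)]
```

With the full list of eligible states, this yields roughly half of them, consistent with the ~24 states mentioned above.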

Developing Resumes

Each resume will be for the food or retail industry, with two job experiences in that industry (one customer-facing, one non-customer-facing). Job descriptions will be generated using ChatGPT and then edited as needed.

The jobs and companies will be standardized. Then, the locations of these jobs will be selected from the companies’ locations near the applicants’ cities of residence.

Each resume will have a high school degree from the specific city of residence.

Identifying Job Postings

Because of the large scale of this field experiment, we will use a Python script to scrape job postings across the selected markets, restricted to the food service and retail categories.

Included postings will be screened to ensure they (1) are entry-level and require no more than a high school diploma; (2) require only a resume to apply; and (3) provide an email address to apply. We will also screen the data to ensure there is at most one posting per employer.

Postings will be updated at regular intervals during the data collection period, pulling only postings from the past week. If there are more eligible postings than the sample target, positions will be randomly selected.
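The deduplication and weekly subsampling steps might look like the following sketch. The `reply_email` field name and the list-of-dicts layout are assumptions for illustration, not the study's actual pipeline.

```python
import random

def sample_postings(postings, target, seed=0):
    """Keep at most one posting per employer (identified here by reply
    email, case-insensitively), then randomly subsample down to the
    weekly target if more eligible postings remain."""
    rng = random.Random(seed)
    seen, unique = set(), []
    for p in postings:
        email = p["reply_email"].lower()
        if email not in seen:
            seen.add(email)
            unique.append(p)
    if len(unique) <= target:
        return unique
    return rng.sample(unique, target)
```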

Submitting Resumes

We will randomize gender identity (cisgender man, cisgender woman, transgender man, transgender woman, or non-binary) and race (Black or White) for the resumes.

Based on the randomization of applicant gender identity and race, the research assistants will fill the resume templates with the appropriate applicant information (name, signal of transgender or non-binary identity, phone number, and email address).

Research assistants will submit the 5,000 applications to 2,500 job postings, which we anticipate will take approximately two months.
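A minimal sketch of the posting-level randomization, assuming independent draws with probabilities matching the planned arm shares (50% cisgender, 25% transgender, 25% non-binary, with race split evenly); the actual assignment may instead enforce the exact cell counts listed under the sample-size fields.

```python
import random

# Hypothetical sketch: identity probabilities match the planned arm
# shares; two applications are generated per posting.
IDENTITIES = ["cisgender man", "cisgender woman", "transgender man",
              "transgender woman", "non-binary"]
WEIGHTS = [0.25, 0.25, 0.125, 0.125, 0.25]

def assign_profiles(n_postings, seed=0):
    rng = random.Random(seed)
    assignments = []
    for posting in range(n_postings):
        for _ in range(2):  # two applications per posting
            assignments.append({
                "posting": posting,
                "identity": rng.choices(IDENTITIES, weights=WEIGHTS)[0],
                "race": rng.choice(["Black", "White"]),
            })
    return assignments
```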

Data Collection

We will track callbacks (employer responses to an application offering an interview or job to the applicant) by regularly monitoring the applicant phone numbers and email accounts.

We will organize the data to measure the total number of callbacks for each cell to compare the callback rates and rates of hiring discrimination.

We will decline any callback or interview offer with a standardized message stating that the applicant has already accepted another position and thanking the employer for their time.
Experimental Design Details
Not available
Randomization Method
randomization done in office by a computer
Randomization Unit
We randomize at the posting level.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
Phase 1: 1,000 applications to 250 employers
Phase 2: 5,000 applications to 2,500 employers
Sample size (or number of clusters) by treatment arms
Phase 1: 500 cisgender applicants (250 men, 250 women); 500 transgender applicants (250 men, 250 women)

Phase 2: 1,250 White cisgender applications, 1,250 Black cisgender applications, 625 Black transgender applications, 625 White transgender applications, 625 Black non-binary applications, and 625 White non-binary applications. Within each binary group, we submit roughly half as men and half as women.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We calculate minimum detectable effect sizes with 80% power at a significance level of 5%. We conservatively assume that posting characteristics have no predictive power, although in Phase 1 they predicted 10% of outcome variation. We use a benchmark callback rate of 18.9%, which was the rate for cisgender men in Phase 1. To test for differences in callback rates between transgender (3,000) and cisgender applicants (3,000), we have 80% power to detect a 2.9 percentage point difference in callback rates. When conducting arm-by-arm comparisons, such as transgender Black men vs. cisgender Black men or transgender Black women vs. cisgender men, using Phase 2 data only, we have 80% power to detect a 6.6 percentage point difference.
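The reported figures are broadly consistent with a standard two-proportion power approximation (pooled variance at the 18.9% benchmark rate). The sketch below is only a cross-check: the registered numbers were presumably computed with a slightly different exact formula, and the 625-per-arm size for the arm-by-arm comparison is an assumption.

```python
import math

def mde(p, n_per_arm, z_alpha=1.96, z_beta=0.8416):
    """Minimum detectable difference in callback rates for a two-sided
    test at 5% significance with 80% power (pooled-variance
    approximation around benchmark rate p)."""
    return (z_alpha + z_beta) * math.sqrt(2 * p * (1 - p) / n_per_arm)

print(round(mde(0.189, 3000), 4))  # transgender vs. cisgender: ~0.028
print(round(mde(0.189, 625), 4))   # arm-by-arm comparison: ~0.062
```

Both values land close to the registered 2.9 and 6.6 percentage point figures.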

Institutional Review Boards (IRBs)

IRB Name
University of Vermont Institutional Review Board
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public. Use the button below to request access.

Request Information