Measuring Algorithmic Bias in Online Job Recommendation Systems

Last registered on June 19, 2023

Pre-Trial

Trial Information

General Information

Title
Measuring Algorithmic Bias in Online Job Recommendation Systems
RCT ID
AEARCTR-0006101
Initial registration date
June 30, 2020


First published
July 21, 2020, 12:03 PM EDT


Last updated
June 19, 2023, 12:23 PM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of California, Santa Barbara

Other Primary Investigator(s)

PI Affiliation
University of California, Santa Barbara

Additional Trial Information

Status
In development
Start date
2020-07-01
End date
2024-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Virtually all internet job boards now recommend jobs to the workers who use their platforms. These recommendations are generated by algorithms, using criteria that include the worker’s characteristics, the worker’s previous search behaviors, and the match between the worker’s characteristics and the job’s requirements. While job recommendation algorithms have the potential to help workers find their preferred jobs faster and to improve worker-job match quality, they have also sparked deep concern about fairness: even when the designers do not intend it, the recommended jobs may reinforce gender and other stereotypes. We plan to conduct a resume audit study on several leading online job boards in China to learn whether, to what extent, and how job board algorithms systematically treat male and female job seekers differently. Specifically, we will collect the jobs that are recommended to fictitious workers who have identical backgrounds except for gender, compare the job recommendation outcomes, and measure the differences in outcomes between the two gender groups.
External Link(s)

Registration Citation

Citation
Kuhn, Peter and Shuo Zhang. 2023. "Measuring Algorithmic Bias in Online Job Recommendation Systems." AEA RCT Registry. June 19. https://doi.org/10.1257/rct.6101-1.2
Experimental Details

Interventions

Intervention(s)
We will set up fictitious profiles and submit fictitious applications to jobs posted on online job boards.
Intervention Start Date
2020-07-01
Intervention End Date
2020-12-31

Primary Outcomes

Primary Outcomes (end points)
Characteristics of jobs recommended by online job boards.
Primary Outcomes (explanation)
The primary outcome that we are interested in is the job ads that are recommended and displayed to job seekers. We will test whether the recommender systems used by these job boards recommend different jobs to male and female job seekers with identical resumes.

The outcomes we will study include (but are not restricted to):
• Advertised salary and benefits (total pay, base pay, performance pay, annual bonus, insurance, vacation days, paid leave days, travel and meal subsidies, staff dormitory, shuttle service, hukou, professional training, flexible working hours, holiday benefits, etc.)
• Job title (we can use natural language processing tools to compare the extracted words)
• Job description (especially words like manage, coordinate, independently design, business travel, etc.)
• Job requirements (years of work experience, education, requested age, and skills)
• Company characteristics (number of employees; ownership type such as foreign, public, or private)
• Job’s industry, occupation, and location
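The outcome comparison described above can be sketched as a simple gender-group contrast on any one recommended-job attribute. This is an illustrative sketch only; all field names and numbers below are hypothetical, not drawn from the actual study or its data.

```python
# Hypothetical sketch: compare mean advertised salary of jobs recommended
# to male vs. female fictitious profiles. All records are invented.
from statistics import mean

# Each record: one job ad recommended to a fictitious profile.
recommendations = [
    {"profile_gender": "male",   "advertised_salary": 9000},
    {"profile_gender": "male",   "advertised_salary": 11000},
    {"profile_gender": "female", "advertised_salary": 8500},
    {"profile_gender": "female", "advertised_salary": 10500},
]

def mean_outcome_by_gender(records, outcome):
    """Average a numeric outcome separately for male and female profiles."""
    by_gender = {}
    for g in ("male", "female"):
        vals = [r[outcome] for r in records if r["profile_gender"] == g]
        by_gender[g] = mean(vals)
    return by_gender

means = mean_outcome_by_gender(recommendations, "advertised_salary")
gap = means["male"] - means["female"]  # raw male-female difference
```

In the actual analysis the same contrast would be run for each outcome listed above, with standard errors accounting for the paired design.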

Secondary Outcomes

Secondary Outcomes (end points)
Reactions received by the fictitious resumes.
Secondary Outcomes (explanation)
These will consist mostly of automated messages generated by the platform when the HR agent takes particular actions, such as reading or saving (downloading) a worker's resume. Our profiles might also receive ‘call-back’ emails from human recruiters, though we expect this to be rare due to the sparse online worker profiles we will use. As a secondary aspect of our study, we will measure whether there are any gender differences in these reactions from the job boards.
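The secondary comparison reduces to a difference in reaction rates between the two gender groups. A minimal sketch, with invented counts purely for illustration:

```python
# Hypothetical sketch: share of fictitious profiles receiving at least one
# platform reaction (e.g. resume read or saved), compared by gender.
def reaction_rate(reactions_per_profile):
    """Fraction of profiles with at least one recorded reaction."""
    return sum(1 for r in reactions_per_profile if r > 0) / len(reactions_per_profile)

male_reactions = [2, 0, 1, 3]    # invented reaction counts per male profile
female_reactions = [1, 0, 0, 2]  # invented reaction counts per female profile

rate_gap = reaction_rate(male_reactions) - reaction_rate(female_reactions)
```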

Experimental Design

Experimental Design
Our research design is a type of resume audit study, which begins by creating otherwise identical male and female worker profiles on Chinese job boards, then observing which jobs are recommended to the profiles.
Experimental Design Details
Not available
Randomization Method
In our audit study, the resumes come in pairs in which the two workers have identical backgrounds except for gender.
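The pairing described above can be sketched as duplicating a background profile and varying only the gender field. This is a hypothetical illustration; the field names are assumptions, not the study's actual schema.

```python
# Hypothetical sketch: build a male/female pair of fictitious profiles that
# share every background field and differ only in gender.
import copy

base_profile = {
    "education": "bachelor",
    "experience_years": 3,
    "city": "Shanghai",
    "occupation": "accountant",
}

def make_gender_pair(background, pair_id):
    """Return two profiles identical except for the gender field."""
    pair = []
    for gender in ("male", "female"):
        p = copy.deepcopy(background)
        p["gender"] = gender
        p["pair_id"] = pair_id
        pair.append(p)
    return pair

m, f = make_gender_pair(base_profile, pair_id=1)
```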
Randomization Unit
Individual fictitious profile.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No cluster
Sample size: planned number of observations
We plan to create about 140 fictitious profiles on each job board and replicate these profiles and application processes across a few big cities. Each profile will apply to a substantial number of jobs that are recommended to the profile by the platform.
Sample size (or number of clusters) by treatment arms
Half of the fictitious profiles are female, and half are male.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
U California Santa Barbara #1
IRB Approval Date
2020-06-29
IRB Approval Number
17-20-0451
Analysis Plan

There is information in this trial unavailable to the public.