Breaking Down Information and Inequality: Evidence from a Field Experiment in the Job Market

Last registered on March 31, 2022

Pre-Trial

Trial Information

General Information

Title
Breaking Down Information and Inequality: Evidence from a Field Experiment in the Job Market
RCT ID
AEARCTR-0008779
Initial registration date
March 29, 2022

The initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 31, 2022, 3:33 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
Stanford GSB
PI Affiliation
Stanford GSB

Additional Trial Information

Status
In development
Start date
2022-03-30
End date
2022-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Online review platforms, such as Glassdoor or Indeed, generally display the unweighted average of all existing reviews. However, a single unweighted average does not effectively capture heterogeneity in reviewer evaluations: by construction, it is pulled toward the opinion of the numerical majority of reviewers. This can be problematic in many labor market settings, where women systematically face different experiences than men and are numerically in the minority. In this research, we study how this unweighted aggregation affects the behaviors and labor market outcomes of different genders. We do so by designing an experiment on a job review platform geared toward high-paying, high-growth technology careers. We are able to observe the gender of reviewers in our setting and to calculate separate company ratings for each gender. We compare the job search behavior, as well as the subsequent reviewing behavior, of users who were randomly exposed to the unweighted aggregate ratings with that of users who were randomly exposed to ratings from their own gender.
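To make the aggregation issue above concrete, the following minimal sketch uses fabricated ratings (the numbers, the 1-5 scale, and the reviewer mix are purely illustrative and are not platform data) to show how an unweighted average sits close to the numerical majority's opinion while gender-specific averages reveal the gap.

```python
# Minimal sketch with fabricated ratings; not data from the platform studied here.
# It shows how an unweighted average is pulled toward the majority group's opinion.

def average(xs):
    return sum(xs) / len(xs)

# Hypothetical reviews for one company: (rating on a 1-5 scale, reviewer gender).
reviews = [(4.5, "man"), (4.0, "man"), (4.5, "man"), (5.0, "man"),
           (2.5, "woman"), (3.0, "woman")]

overall = average([r for r, _ in reviews])
by_gender = {g: average([r for r, gg in reviews if gg == g]) for g in ("man", "woman")}

print(f"unweighted average: {overall:.2f}")            # ~3.92, close to the men's average
print(f"men's average:      {by_gender['man']:.2f}")   # 4.50
print(f"women's average:    {by_gender['woman']:.2f}") # 2.75
```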
External Link(s)

Registration Citation

Citation
Choi, Jungho, Surya Ierokomos and Adina Sterling. 2022. "Breaking Down Information and Inequality: Evidence from a Field Experiment in the Job Market." AEA RCT Registry. March 31. https://doi.org/10.1257/rct.8779-1.0
Sponsors & Partners

There is information in this trial that is not available to the public.
Experimental Details

Interventions

Intervention(s)
We plan to disaggregate the ranking of employers by gender and to present either the aggregated or the disaggregated employer ranking to randomly selected groups of users.
Intervention Start Date
2022-03-30
Intervention End Date
2022-04-30

Primary Outcomes

Primary Outcomes (end points)
We will measure the click-through rate on the intervention page, as well as the propensity to leave a company review.
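As a rough sketch of how these endpoints could be tabulated from event-level data, assuming a hypothetical log with illustrative column names (arm, clicked, left_review) rather than the platform's actual schema:

```python
import pandas as pd

# Hypothetical event log: one row per user who received the push notification.
# The column names and values are illustrative assumptions, not the platform's schema.
df = pd.DataFrame({
    "user_id":     [1, 2, 3, 4, 5, 6],
    "arm":         ["control", "control", "treat1", "treat1", "treat2", "treat2"],
    "clicked":     [1, 0, 1, 1, 0, 1],   # clicked through to the intervention page
    "left_review": [0, 0, 1, 0, 0, 1],   # left a company review afterwards
})

# Primary endpoints by arm: click-through rate and propensity to leave a review.
endpoints = df.groupby("arm")[["clicked", "left_review"]].mean()
endpoints.columns = ["click_through_rate", "review_propensity"]
print(endpoints)
```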
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our experiment is restricted to mobile users of the application and has two components. The first component will be sent to users who have already left their own company ratings and have self-identified their gender as a man or woman. We assign these users to one of three conditions: one aggregated and two disaggregated ranking conditions. The second component will be sent to existing users of the application who have not left a review for their company. These users will be randomly assigned to conditions similar to those above. They can click on a randomly assigned push notification to be directed to either the aggregate rankings or the gender-based rankings, which allows us to study how the value of the information influences their participation in a subsequent survey.
Experimental Design Details
Our experiment is restricted to mobile users of the application, and has two components.

The first component will be sent to users who have already left their own company ratings and have self-identified their gender as a man or woman. We assign these users to one of three conditions. In the first (control) condition, users receive a push notification directing them to a page that displays the aggregate top 20 company rankings. In the second condition, users receive a push notification directing them to a separate page that, upon landing, displays only the in-group top 20 company rankings. Users can scroll down to a button that, when clicked, replaces the in-group top 20 company rankings with the out-group top 20 company rankings. Users in the last condition are sent to a page where both the in-group and out-group rankings are immediately displayed side by side. Note that the same 20 companies are displayed in all conditions; however, their numeric rankings differ by condition.

The second component will be sent to existing users of the application who have not left a review for their company. These users will be randomly assigned to conditions similar to those above. They can click on a randomly assigned push notification to be directed to either the aggregate rankings or the gender-based rankings. We do not know the gender of users who have not left a review, so all users in the second condition will see the women's top 20 company rankings on the first page.
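The sketch below summarizes the display logic just described; the function name, condition labels, and ranking-list identifiers are illustrative assumptions, not the platform's implementation.

```python
# Illustrative sketch of the three display conditions described above; all names are hypothetical.
def rankings_to_display(condition, user_gender=None):
    """Return which top-20 ranking list(s) the landing page shows.

    condition: "control" (aggregate), "treat1" (in-group first, out-group behind a button),
               or "treat2" (in-group and out-group side by side).
    user_gender: "man" or "woman" for component 1; None for component 2, where the
                 in-group list defaults to the women's rankings.
    """
    in_group = user_gender if user_gender is not None else "woman"
    out_group = "man" if in_group == "woman" else "woman"

    if condition == "control":
        return {"on_landing": ["aggregate_top20"], "after_button_click": []}
    if condition == "treat1":
        return {"on_landing": [f"{in_group}_top20"], "after_button_click": [f"{out_group}_top20"]}
    if condition == "treat2":
        return {"on_landing": [f"{in_group}_top20", f"{out_group}_top20"], "after_button_click": []}
    raise ValueError(f"unknown condition: {condition}")

print(rankings_to_display("treat1", user_gender="woman"))
```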
Randomization Method
Users are randomly assigned by a computer random number generator.
Randomization Unit
We randomize individual users.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We anticipate approximately 55,000 users in the first component and approximately 12,000 users in the second.
Sample size: planned number of observations
We anticipate approximately 55,000 users in the first component and approximately 12,000 users in the second.
Sample size (or number of clusters) by treatment arms
For the first component, we will randomly split each gender into approximate thirds across the arms: roughly one third of the approximately 40,000 men and one third of the approximately 15,000 women will be assigned to each of control, treatment 1, and treatment 2.

We will randomly split the second component sample into thirds across the three analogous treatment arms described above.
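A minimal sketch of this gender-stratified assignment, using the approximate planned counts above; the user identifiers, seed, and helper name are hypothetical.

```python
import random
from collections import Counter

random.seed(0)  # illustrative seed

def assign_in_thirds(user_ids):
    """Shuffle users and split them into approximate thirds, one per arm."""
    ids = list(user_ids)
    random.shuffle(ids)
    arms = ["control", "treat1", "treat2"]
    return {uid: arms[i % 3] for i, uid in enumerate(ids)}

# Hypothetical component-1 user IDs, keyed by self-identified gender
# (roughly 40,000 men and 15,000 women, as planned above).
users_by_gender = {"man": range(0, 40_000), "woman": range(40_000, 55_000)}

assignment = {}
for gender, ids in users_by_gender.items():
    assignment.update(assign_in_thirds(ids))

# Each gender ends up split into approximate thirds across the three arms.
print(Counter(assignment.values()))
```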
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford Institutional Review Board
IRB Approval Date
2021-06-16
IRB Approval Number
61203

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is not available to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials