The Tradeoffs of Transparency: Measuring Inequality When Subjects Are Told They Are in a Study


Pre-Trial

Trial Information

General Information

Title
The Tradeoffs of Transparency: Measuring Inequality When Subjects Are Told They Are in a Study
RCT ID
AEARCTR-0015210
Initial registration date
June 26, 2025


First published
June 27, 2025, 9:17 AM EDT


Last updated
July 31, 2025, 2:29 PM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Tufts University

Other Primary Investigator(s)

PI Affiliation
Cornell University
PI Affiliation
Columbia Business School

Additional Trial Information

Status
Ongoing
Start date
2025-06-26
End date
2026-06-26
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
Correspondence audit studies have sent almost one hundred thousand resumes without informing subjects that they are in a study, which increases realism but falls short of full transparency. We study the potential trade-offs of this lack of transparency by running a hiring field experiment with recruiters in a natural setting.
External Link(s)

Registration Citation

Citation
Agan, Amanda, Bo Cowgill and Laura Gee. 2025. "The Tradeoffs of Transparency: Measuring Inequality When Subjects Are Told They Are in a Study." AEA RCT Registry. July 31. https://doi.org/10.1257/rct.15210-1.5
Experimental Details

Interventions

Intervention(s)
In the interest of preserving the integrity of the experiment, the intervention is described in full in the experimental design sections, which will remain hidden until the experiment is completed.
Intervention Start Date
2025-06-26
Intervention End Date
2026-06-26

Primary Outcomes

Primary Outcomes (end points)
A recruiter (subject) accepting our invitation to work (whether or not the recruiter completes the final task)
A recruiter (subject) accepting our invitation to work and completing the final task
The recruiter suggesting an interview for a job candidate
The recruiter’s rating of how much interest the company should have in extending an interview to the job candidate
The recruiter’s salary offer suggestions for a job candidate
The recruiter’s stated willingness-to-pay for a job candidate
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The recruiter’s time to answer questions about the primary outcomes
Text analysis of the open-ended notes left by the recruiter, and the length of those notes
Single Offer: if the recruiter could make an offer to only a single candidate, which candidate it would be
Outside Option: a salary at which the recruiter thinks the applicant would be just as happy taking as rejecting the offer (i.e., 50% likely to accept)

Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In the interest of preserving the integrity of the experiment, the experimental design is described in full in the experimental design sections, which will remain hidden until the experiment is completed.
Experimental Design Details
Not available
Randomization Method
Randomization done by computer. We follow the re-randomization procedure described in Appendix B of the pre-analysis plan (PAP) for this pre-registration; a hypothetical sketch of a generic re-randomization loop appears below.
Randomization Unit
Recruiter
Was the treatment clustered?
Yes
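
Because the registered procedure is hidden until the experiment is completed, the following is only a minimal, hypothetical sketch of a generic re-randomization loop of the kind referenced above: draw an assignment, check covariate balance, and redraw if balance is unacceptable. The function name, the standardized-mean-difference statistic, the 0.1 threshold, and the covariates are illustrative assumptions, not the registered procedure in Appendix B of the PAP.

```python
# Hypothetical sketch of a generic re-randomization loop (NOT the registered
# procedure in Appendix B; statistic, threshold, and covariates are assumed).
import numpy as np

def rerandomize(covariates, n_treat, threshold=0.1, max_draws=1000, seed=0):
    """Draw recruiter-level assignments until covariate balance is acceptable.

    covariates : (n, k) array of recruiter-level covariates
    n_treat    : number of recruiters assigned to treatment
    threshold  : max acceptable absolute standardized mean difference (assumed)
    """
    rng = np.random.default_rng(seed)
    n = covariates.shape[0]
    for _ in range(max_draws):
        treat = np.zeros(n, dtype=bool)
        treat[rng.choice(n, size=n_treat, replace=False)] = True
        # Standardized mean difference for each covariate between arms
        diff = covariates[treat].mean(axis=0) - covariates[~treat].mean(axis=0)
        smd = np.abs(diff) / covariates.std(axis=0)
        if smd.max() < threshold:
            return treat
    raise RuntimeError("No balanced draw within max_draws; relax the threshold.")
```

The registered procedure may differ in the balance criterion, the number of arms, and how strata are handled; for simplicity this sketch randomizes a single binary treatment.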

Experiment Characteristics

Sample size: planned number of clusters
We plan to hire about 1,440 recruiters. We plan to invite about 16,000 recruiters: about 4,000 each in treatment groups (A) and (B), and about 2,000 each in the remaining four groups (C)-(F). We are uncertain of the acceptance rate for our invites but expect it to be between 12% and 20% based on previous projects using similar procedures. Each recruiter who completes the task will evaluate 12 applicants, so we expect a total of about 17,280 applicant-level observations (the actual number will depend on the true acceptance rate). Due to budgetary constraints, we plan to stop hiring recruiter subjects once we meet our target of 1,440 recruiters, though all invited subjects will be included in analyses of acceptance rates.
Sample size: planned number of observations
We plan to have a total of about 1,440 recruiters (and thus 1,440 × 12 = 17,280 applicant-level observations). The exact total is partially contingent on the acceptance rate. The current sample size is based on funding expectations; if more funding becomes available, we may increase the sample size.
Sample size (or number of clusters) by treatment arms
Randomization A: 4,000 invites to arm A, with a goal of around 360 acceptances. Similar studies have had acceptance rates between 12% and 20%: if the acceptance rate is 12%, we expect approximately 480 recruiters to accept; if it is 20%, around 800. Due to budgetary constraints, we will stop hiring workers once we hit the goal of around 360 acceptances.

Randomization B: 4,000 invites to arm B, with a goal of around 360 acceptances. We will follow a procedure similar to that for Randomization A to do our best to reach this goal.

Randomizations C, D, E, and F: 2,000 invites each, with a goal of around 180 acceptances in each of these four arms. If the acceptance rate is 12%, we expect approximately 240 recruiters per arm to accept; if it is 20%, around 400. Due to budgetary constraints, we will stop hiring workers in each arm once we hit the goal of around 180 acceptances. (The sketch below checks this invitation arithmetic.)
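
As a quick sanity check on the invitation arithmetic above, the snippet below reproduces the expected acceptances per arm at the 12% and 20% bounds, and the planned totals. All figures are taken from this registration; nothing here is new data.

```python
# Sanity check of the invitation arithmetic in this registration.
invites = {"A": 4000, "B": 4000, "C": 2000, "D": 2000, "E": 2000, "F": 2000}
caps = {"A": 360, "B": 360, "C": 180, "D": 180, "E": 180, "F": 180}

for rate in (0.12, 0.20):
    # Expected acceptances per arm at this acceptance rate
    expected = {arm: round(n * rate) for arm, n in invites.items()}
    print(f"rate={rate:.0%}: {expected}")

# Hiring stops at each arm's cap, so the planned totals are fixed:
total_recruiters = sum(caps.values())             # 1,440 recruiters
print(total_recruiters, total_recruiters * 12)    # 17,280 applicant-level obs
```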
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power for selection into the experiment (all treatment information is included in the invitation letters): Based on pilot data, we expect acceptance and participation rates in group (A) to be about 12%-20%. With 4,000 invitations each in groups (A) and (B), alpha = 0.05, and power = 0.80, we are powered to detect a 2.5 percentage point difference in acceptance rates between these two groups if the acceptance rate is 20%, or a 2.1 percentage point difference if it is 12%. For the mechanism groups (C), (D), (E), and (F), with 2,000 invitations each, the comparisons A vs. E, A vs. F, B vs. C, and B vs. D are powered (alpha = 0.05, power = 0.80) to detect a 3.1 percentage point difference in acceptance rates if the true acceptance rate is 20%, or a 2.6 percentage point difference if it is 12%.

Power for applicant-level outcomes: We base our power calculations on the binary callback indicator (0/1) for a candidate. From pilot data, we expect the callback rate to differ by about 10 percentage points between our advantaged and disadvantaged candidates when the contrast is female vs. male (the difference will likely be larger for Black vs. White and Crime vs. NoCrime, given previous audit meta-studies). Our pilot data also suggest an intra-recruiter correlation coefficient of 0.05. We simulated data for our regression of interest with stratified randomization, callback as the dependent variable, treatment and treatment × advantaged-candidate variables, standard errors clustered at the strata level, and recruiter fixed effects. Based on 1,000 simulations, with 360 recruiters in each of treatment arms (A) and (B) we have 98.6% power to detect the 10 percentage point main effect, and with 180 recruiters in each of groups (C)-(F) we have 95% power to detect the 10 percentage point change in callback rates for the mechanism comparisons to (A) or (B).
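
The acceptance-rate MDEs above can be approximately reproduced with the standard two-proportion normal approximation, MDE = (z_{1-alpha/2} + z_{power}) * sqrt(p(1-p)(1/n1 + 1/n2)). The sketch below is a hedged check rather than the registered calculation; small discrepancies (e.g., 2.0 vs. 2.1 percentage points at a 12% base rate) likely reflect a different variance formula or rounding. The applicant-level power figures come from the authors' 1,000-draw simulation and are not reproduced here.

```python
# Approximate check of the registered acceptance-rate MDEs using the
# two-proportion normal approximation (not necessarily the exact formula
# used in the registration).
from scipy.stats import norm

def mde_two_proportions(p, n1, n2, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5

for p in (0.20, 0.12):
    print(f"p={p:.0%}  A vs B (4,000 vs 4,000): {mde_two_proportions(p, 4000, 4000):.3f}")
    print(f"p={p:.0%}  A vs E (4,000 vs 2,000): {mde_two_proportions(p, 4000, 2000):.3f}")
```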
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.