Disclosure and Hiring under Multidimensional Applicant Quality: A Laboratory Experiment

Last registered on April 29, 2026

Trial Information

General Information

Title
Disclosure and Hiring under Multidimensional Applicant Quality: A Laboratory Experiment
RCT ID
AEARCTR-0018500
Initial registration date
April 27, 2026

First published
April 29, 2026, 3:55 PM EDT

Locations

Region

Primary Investigator

Affiliation
BI Handelshøyskolen

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2026-05-16
End date
2026-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In many markets, potential sellers (or applicants) can disclose information that is relevant to buyers' (employers') decisions. Classic unraveling logic asserts that when disclosure is costless and informative in a manner aligned with buyer value, all but the worst types disclose, and in the limit all types do so. We examine this logic in a controlled laboratory hiring game in which applicant quality is two-dimensional, but the test is informative about only one dimension. We contrast an environment with positive correlation, where strong test performance aligns with employer value, and one with negative correlation, where the alignment is only partial. In this way, the experiment provides a clean test of how multidimensional applicant quality shapes disclosure incentives and hiring decisions.
External Link(s)

Registration Citation

Citation
Heggedal, Tom-Reiel. 2026. "Disclosure and Hiring under Multidimensional Applicant Quality: A Laboratory Experiment." AEA RCT Registry. April 29. https://doi.org/10.1257/rct.18500-1.0
Experimental Details

Interventions

Intervention(s)
We contrast an environment with positive correlation (where strong test performance aligns with employer value) and one with negative correlation (where the alignment is partial).
Intervention Start Date
2026-05-16
Intervention End Date
2026-12-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcomes are:
(i) block-level test uptake for types B and C, and
(ii) block-level acceptance rates following Opt-Out.

Primary Outcomes (explanation)
We have two primary hypotheses, corresponding to two distinct behavioral margins: Applicants’ disclosure decisions and Employers’ hiring responses following Opt-Out. Both primary tests use blocks as the unit of observation. Unless otherwise stated, the main analysis window is the last 20 periods of the experiment, that is, games 21–40.

H1a. The primary outcome is block-level test uptake for types B and C in the main analysis window. Because each block contains one Applicant of each type in every period, each block contributes exactly one observation of type B and one observation of type C in each of periods 21–40. For each block, we calculate U_BC as the share of B- and C-type observations in games 21–40 for which the Applicant chooses to take the test. This is the total number of test choices by B and C types in games 21–40, divided by 2 × 20 = 40. We compare the distribution of this block-level measure across treatments using a two-sided Wilcoxon rank-sum test, treating blocks as independent treatment observations.
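As an illustration, the H1a computation and test can be sketched as follows. Only the design features come from the text (8 blocks per arm, 2 types × 20 games = 40 B/C observations per block in the main analysis window); the per-block uptake counts and treatment-level uptake probabilities are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical block-level counts: for each of 8 blocks per treatment arm,
# the number of test choices by B- and C-type Applicants in games 21-40
# (out of 2 types x 20 games = 40 B/C observations per block).
take_pos = rng.binomial(40, 0.80, size=8)  # Positive-correlation treatment
take_neg = rng.binomial(40, 0.55, size=8)  # Negative-correlation treatment

# Block-level test-uptake share U_BC: test choices divided by 40
u_bc_pos = take_pos / 40
u_bc_neg = take_neg / 40

# Two-sided Wilcoxon rank-sum test, treating blocks as independent observations
stat, p = stats.ranksums(u_bc_pos, u_bc_neg)
print(f"mean U_BC: Positive {u_bc_pos.mean():.2f}, "
      f"Negative {u_bc_neg.mean():.2f}, p = {p:.3f}")
```

The rank-sum test is run directly on the 8-vs-8 block-level shares, so no within-block clustering correction is needed beyond the aggregation itself.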

H1b. The primary outcome is the block-level acceptance rate following Opt-Out in the main analysis window. For each block, we calculate E_Opt-Out as the number of cases in games 21–40 where the Employer hires after observing Opt-Out, divided by the total number of Opt-Out observations in that block during games 21–40. We compare the distribution of this block-level measure across treatments using a two-sided Wilcoxon rank-sum test, again treating blocks as independent treatment observations. Blocks with no Opt-Out observations in the main analysis window are excluded from this comparison and reported separately.
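The H1b computation, including the pre-specified exclusion of blocks with no Opt-Out observations, can be sketched in the same way. The per-block tallies below are hypothetical; only the rule (acceptance count divided by Opt-Out count, zero-Opt-Out blocks excluded and reported separately) comes from the text.

```python
import numpy as np

# Hypothetical per-block tallies for games 21-40 in one treatment arm:
# number of Opt-Out observations, and number of those after which the
# Employer hires.
optout_counts = np.array([12, 0, 9, 15, 7, 11, 0, 14])
hires_after   = np.array([ 3, 0, 4,  5, 2,  6, 0,  7])

# Blocks with no Opt-Out observations are excluded from the comparison
# and reported separately.
has_optout = optout_counts > 0
e_optout = hires_after[has_optout] / optout_counts[has_optout]
excluded = int((~has_optout).sum())

print("block-level E_Opt-Out:", e_optout.round(2))
print("blocks excluded (no Opt-Out):", excluded)
```

The resulting block-level E_Opt-Out vectors from the two treatments would then be compared with the same two-sided Wilcoxon rank-sum test as in H1a.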

All-game robustness analysis. As a pre-specified robustness check, we repeat the primary and secondary analyses using all 40 games.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes include overall test uptake across all Applicant types, Employer hiring decisions by signal category (High, Low, Opt-Out), and type-conditional test uptake rates for all four types (A, B, C, D).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We contrast an environment with positive correlation (where strong test performance aligns with employer value) and one with negative correlation (where the alignment is partial).
Experimental Design Details
Not available
Randomization Method
Participants are recruited from the same pool of students; they do not know anything about the experiment before the session starts.
Randomization Unit
Subjects are pre-assigned to blocks of 8 (4 Applicants, 4 Employers). Random rematching occurs only within a block in each game. Roles are fixed at session start. Applicant type order is determined by random permutation within 10-game blocks.
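A minimal sketch of the within-block procedure, under stated assumptions: the text fixes the block composition (4 Applicants, 4 Employers, rematched only within the block each game), but the exact mechanics of the type rotation are not spelled out, so the per-game type permutation below is one illustrative interpretation, and all labels (App1, Emp1, and so on) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2026)

applicants = ["App1", "App2", "App3", "App4"]
employers = ["Emp1", "Emp2", "Emp3", "Emp4"]
types = ["A", "B", "C", "D"]

def play_game(rng):
    """One game within a block: randomly rematch the 4 Applicants to the
    4 Employers, and assign the four Applicant types by random permutation
    (one illustrative reading of the type-order rule)."""
    matched_employers = rng.permutation(employers)
    assigned_types = rng.permutation(types)
    return list(zip(applicants, matched_employers, assigned_types))

# 40 games per session, all matching confined to this one block
games = [play_game(rng) for _ in range(40)]
```

Because rematching never crosses block boundaries, each block of 8 subjects forms one independent cluster, which is what justifies treating blocks as the unit of observation in the primary analyses.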
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We plan to have 8 blocks per treatment for a total of 16 clusters.
Sample size: planned number of observations
Our planned sample of 8 blocks per treatment arm corresponds to 64 participants per arm and 128 participants in total.
Sample size (or number of clusters) by treatment arms
We plan to have 8 blocks per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For H1a, our primary Applicant-side test compares block-level mean test uptake for types B and C across treatments, using the main analysis window (games 21–40), with a two-sided Wilcoxon rank-sum test at the 5 percent significance level. For H1b, power depends on how often Opt-Out occurs, especially in the Positive treatment, where Opt-Out may be rare. We therefore do not provide a separate formal power calculation for H1b, but we will report the number of blocks with Opt-Out observations in each treatment.

The standard deviation used in the power analysis is based on a pilot conducted at BI Norwegian Business School on March 18, 2026. The pilot included both a Positive and a Negative treatment, and we use the variance estimated from these pilot treatments as the basis for the calculations. Based on the pilot data from the final 20 rounds of the experiment, we estimate a pooled standard deviation of about 15.8 percentage points for our primary outcome variable, U_BC.

With our planned sample of 8 independent matching blocks per treatment arm, a standard deviation of about 15.8 percentage points, and a minimum detectable difference of 25 percentage points, the design yields statistical power of approximately 87 percent, using the Pitman asymptotic relative efficiency adjustment for the Wilcoxon rank-sum test.
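The power figure can be approximated with a standard sketch: scale the per-arm sample size by the Pitman asymptotic relative efficiency of the Wilcoxon rank-sum test relative to the t-test under normality (3/π ≈ 0.955), then compute two-sample noncentral-t power with the adjusted size. The inputs (8 blocks per arm, SD 15.8, detectable difference 25, 5 percent two-sided test) come from the text above; the exact power depends on the approximation chosen, so the result need not match the reported 87 percent exactly.

```python
import numpy as np
from scipy import stats

n_per_arm = 8      # independent matching blocks per treatment arm
delta = 25.0       # minimum detectable difference in U_BC
sd = 15.8          # pooled standard deviation from the pilot
alpha = 0.05       # two-sided significance level

# Pitman ARE of the Wilcoxon rank-sum test vs the t-test under normality
are = 3 / np.pi
n_eff = n_per_arm * are

# Two-sample t-test power with the ARE-adjusted sample size
df = 2 * n_eff - 2
ncp = (delta / sd) * np.sqrt(n_eff / 2)          # noncentrality parameter
t_crit = stats.t.ppf(1 - alpha / 2, df)
power = (1 - stats.nct.cdf(t_crit, df, ncp)
         + stats.nct.cdf(-t_crit, df, ncp))
print(f"approximate power: {power:.3f}")
```

With only 8 observations per arm, the normal-theory approximation is rough; an exact or simulation-based power calculation for the rank-sum test would give a sharper figure.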
IRB

Institutional Review Boards (IRBs)

IRB Name
BI’s Ethics Review Board
IRB Approval Date
2025-10-22
IRB Approval Number
SF-056
Analysis Plan

There is information in this trial unavailable to the public.