An Experimental Evaluation of a Matching Market Mechanism

Last registered on October 22, 2020


Trial Information

General Information

Initial registration date
September 17, 2019


First published
September 18, 2019, 9:41 AM EDT


Last updated
October 22, 2020, 4:59 PM EDT



There are documents in this trial unavailable to the public.

Primary Investigator

University of Chicago

Other Primary Investigator(s)

PI Affiliation
University of Chicago
PI Affiliation
University of Oregon
PI Affiliation
United States Military Academy at West Point

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
We propose a randomized controlled trial to measure the impact on match outcomes of replacing a large-scale employer's status quo job assignment process with a system based on insights from matching and market design; in particular, a deferred acceptance algorithm will be used to match employees with positions.
External Link(s)

Registration Citation

Davis, Jonathan et al. 2020. "An Experimental Evaluation of a Matching Market Mechanism." AEA RCT Registry. October 22.
Experimental Details


We propose replacing the current system of manually matching employees to positions with an algorithmic match based on the deferred acceptance algorithm.
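To illustrate the mechanism, here is a minimal sketch of worker-proposing deferred acceptance for a one-to-one market. The agent names and preference lists are hypothetical; the actual intervention matches officers to positions within markets defined by rank and specialty, and may differ in its proposing side and capacity handling.

```python
# Minimal worker-proposing deferred acceptance for a one-to-one market.
# Hypothetical illustration only; not the trial's production matching code.

def deferred_acceptance(worker_prefs, firm_prefs):
    """Match workers to firms; each dict maps an agent to its ranked list."""
    # rank[f][w] = position of worker w in firm f's preference list
    rank = {f: {w: i for i, w in enumerate(p)} for f, p in firm_prefs.items()}
    next_choice = {w: 0 for w in worker_prefs}  # next firm index to propose to
    held = {}                                   # firm -> tentatively held worker
    free = list(worker_prefs)                   # workers still unmatched
    while free:
        w = free.pop()
        if next_choice[w] >= len(worker_prefs[w]):
            continue                            # w has exhausted its list
        f = worker_prefs[w][next_choice[w]]
        next_choice[w] += 1
        if w not in rank[f]:
            free.append(w)                      # f finds w unacceptable
        elif f not in held:
            held[f] = w                         # f tentatively accepts w
        elif rank[f][w] < rank[f][held[f]]:
            free.append(held[f])                # f trades up; old worker freed
            held[f] = w
        else:
            free.append(w)                      # f rejects the proposal
    return {w: f for f, w in held.items()}

workers = {"w1": ["f1", "f2"], "w2": ["f1", "f2"], "w3": ["f2", "f1"]}
firms = {"f1": ["w2", "w1", "w3"], "f2": ["w1", "w3", "w2"]}
match = deferred_acceptance(workers, firms)
# w2 is held by f1 and w1 by f2; w3 is rejected by both and stays unmatched.
```

The resulting match is stable with respect to the submitted preferences: no worker and firm who are not matched to each other both prefer each other to their assigned outcomes.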
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Our main outcomes are (1) self-reported preference manipulation, (2) satisfaction with the match based on preference reports, (3) performance in the match, and (4) retention in the match and with the Army.
Primary Outcomes (explanation)
Our main outcomes are selected to indirectly measure three well-known, and sometimes competing, goals of market design mechanisms: strategyproofness, efficiency, and stability.

Officers will be asked in two surveys whether they strategically manipulated their preferences. They were first asked about strategic manipulation in a survey administered on the platform where they submit or review their preferences, about two weeks before their preferences were due. They will be asked a similar set of questions again when they receive their assigned match in February 2020. We will use these reports to assess whether officers matched with the deferred acceptance algorithm truthfully reported their preferences at a higher rate than officers in the control group.

We will measure the efficiency of the match in two ways. First, following the literature assessing the welfare consequences of centralized assignment in the student-to-school setting (Abdulkadiroğlu, Pathak, and Roth, 2009; Abdulkadiroğlu, Agarwal, and Pathak, 2017), we will measure efficiency using officers’ and units’ rank-ordered preferences over potential matches. This is a measure of ex-ante satisfaction with a match. We will also measure realized job satisfaction via survey questions that will be included in Human Resource Command's existing officer surveys.

Of course, another important dimension of efficiency is whether the match is more productive. We will measure this aspect of efficiency using promotion outcomes and officers’ annual performance evaluations. Our main promotion outcome will be the time to the next promotion. The performance evaluations include a categorical rating (Most qualified, Highly qualified, Qualified, and Unqualified) and a text-based evaluation that can be mapped to a rank-ordered performance rating that is highly predictive of future promotions. In particular, we will use the predicted rank from the text-based evaluation as a quantitative measure of performance.

In addition to strategyproofness, one of the most often cited benefits of the deferred acceptance algorithm is that it yields a stable match: no worker and firm who are not matched to each other would both prefer being matched together over their assigned matches. The theoretical guarantees of stability are based on the preference reports submitted prior to the match. We will measure whether the matches are more stable in the long run by looking at officers’ retention in their assigned matches and with the Army. During the three-year project period, we will measure retention using one- and two-year retention rates. In addition, we will measure retention outcomes for as long as our data agreement allows.

Secondary Outcomes

Secondary Outcomes (end points)
We may also be able to measure additional match-relevant outcomes for officers' families, including fertility decisions, marriage and divorce rates, and measures of spousal volunteerism.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will test the impact of matching via the deferred acceptance algorithm using a randomized controlled trial.
Experimental Design Details
Not available
Randomization Method
We will use a pseudo-random number generator to assign markets to the treatment or control group within pre-determined randomization blocks.

Any markets that are considered unsuitable candidates for the treatment before randomization will be excluded from the randomization and analysis.
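The assignment procedure can be sketched as follows. The balanced 60/60 split is taken from the registry entry; the seed value, block labels, and equal within-block split are illustrative assumptions, not the trial's actual randomization code.

```python
# Sketch of blocked random assignment with a seeded pseudo-random number
# generator. Seed, block labels, and the exact split rule are hypothetical.
import random

def assign_within_blocks(markets, blocks, seed=12345):
    """markets: list of market ids; blocks: dict market -> block label.
    Returns dict market -> 'treatment'/'control', balanced within block."""
    rng = random.Random(seed)         # seeded for a reproducible assignment
    by_block = {}
    for m in markets:
        by_block.setdefault(blocks[m], []).append(m)
    arms = {}
    for block_markets in by_block.values():
        shuffled = block_markets[:]
        rng.shuffle(shuffled)         # random order within the block
        half = len(shuffled) // 2
        for m in shuffled[:half]:
            arms[m] = "treatment"
        for m in shuffled[half:]:
            arms[m] = "control"
    return arms

# Toy example: 8 markets in two pre-determined blocks of four.
markets = [f"m{i}" for i in range(8)]
blocks = {m: ("A" if i < 4 else "B") for i, m in enumerate(markets)}
arms = assign_within_blocks(markets, blocks)
```

Randomizing within blocks guarantees the treatment/control split is balanced within each block, which improves precision when blocks are predictive of outcomes.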
Randomization Unit
Market of employees and positions within the firm, based on rank in the organization and specialty.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
120 markets
Sample size: planned number of observations
11,500 employees
Sample size (or number of clusters) by treatment arms
60 treatment and 60 control markets
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Assuming covariates explain 10 percent of residual variation, the MDE on one-year retention rates is 1.6 percentage points (pp) if the intra-cluster correlation is 0, 5.7pp if it is 0.1, and 7.9pp if it is 0.2. Under the same assumption, the MDE on four-year retention is 2.5pp, 9.0pp, and 12.5pp for intra-cluster correlations of 0, 0.1, and 0.2, respectively.
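A back-of-the-envelope version of this calculation, for a binary outcome in a cluster-randomized design, uses the standard design-effect adjustment DEFF = 1 + (m - 1)ρ. The baseline retention rate, equal cluster sizes, and 80 percent power assumed below are hypothetical, so the output approximates but will not exactly reproduce the registry's figures, which also depend on the true retention rate and the cluster-size distribution.

```python
# Illustrative MDE calculation for a cluster-randomized binary outcome.
# All inputs (p = 0.9, 96 officers per market, 5,750 per arm, 80% power)
# are assumptions for illustration, not the trial's actual parameters.
import math

def mde(p, n_per_arm, cluster_size, icc, r2=0.10, alpha_z=1.96, power_z=0.84):
    z = alpha_z + power_z                # z_{1-alpha/2} + z_{power} (5%, 80%)
    deff = 1 + (cluster_size - 1) * icc  # design effect for clustering
    var = p * (1 - p) * (1 - r2) * deff  # residual outcome variance per obs.
    return z * math.sqrt(var * (2 / n_per_arm))

for icc in (0.0, 0.1, 0.2):
    print(f"ICC={icc}: MDE = {mde(0.9, 5750, 96, icc):.3f}")
```

With these assumed inputs the ICC = 0 case lands near the registered 1.6pp figure; the clustered cases are more sensitive to the assumed cluster sizes, since the design effect scales with the average market size.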

Institutional Review Boards (IRBs)

IRB Name
West Point Human Research Protection Program
IRB Approval Date
IRB Approval Number
IRB Name
University of Oregon Committee for the Protection of Human Subjects
IRB Approval Date
IRB Approval Number
IRB Name
University of Chicago Social and Behavioral Sciences Institutional Review Board
IRB Approval Date
IRB Approval Number
Analysis Plan

The analysis plan documents are unavailable to the public.