An Experimental Evaluation of a Matching Market Mechanism

Last registered on October 22, 2020

Pre-Trial

Trial Information

General Information

Title
An Experimental Evaluation of a Matching Market Mechanism
RCT ID
AEARCTR-0004718
Initial registration date
September 17, 2019


First published
September 18, 2019, 9:41 AM EDT


Last updated
October 22, 2020, 4:59 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Chicago

Other Primary Investigator(s)

PI Affiliation
University of Chicago
PI Affiliation
University of Oregon
PI Affiliation
United States Military Academy at West Point

Additional Trial Information

Status
Ongoing
Start date
2019-10-02
End date
2024-03-01
Secondary IDs
Abstract
We propose a randomized controlled trial to measure the impact on match outcomes of replacing a large-scale employer's status quo job assignment process with a system based on insights from matching and market design; in particular, the new system uses a deferred acceptance algorithm to match employees with positions.
External Link(s)

Registration Citation

Citation
Davis, Jonathan et al. 2020. "An Experimental Evaluation of a Matching Market Mechanism." AEA RCT Registry. October 22. https://doi.org/10.1257/rct.4718-3.0
Experimental Details

Interventions

Intervention(s)
We propose replacing the current system of manually matching employees to positions with an algorithmic match based on the deferred acceptance algorithm.
Intervention Start Date
2019-10-02
Intervention End Date
2020-01-31

Primary Outcomes

Primary Outcomes (end points)
Our main outcomes are (1) self-reported preference manipulation, (2) satisfaction with the match based on preference reports, (3) performance in the match, and (4) retention in the match and with the Army.
Primary Outcomes (explanation)
Our main outcomes are selected to indirectly measure three well-known, and sometimes competing, goals of market design mechanisms: strategyproofness, efficiency, and stability.

Officers will be asked whether they strategically manipulated their preferences in two surveys. They were first asked about strategic manipulation in a survey administered on the platform where they submit or review their preferences, about two weeks before their preferences were due. They will be asked a similar set of questions again when they receive their assigned match in February 2020. We will use these reports to assess whether officers matched with the deferred acceptance algorithm truthfully reported their preferences at a higher rate than officers in the control group.

We will measure the efficiency of the match in two ways. First, following the literature assessing the welfare consequences of centralized assignment in the student-to-school setting (Abdulkadiroğlu, Pathak, and Roth, 2009; Abdulkadiroğlu, Agarwal, and Pathak, 2017), we will measure efficiency using officers’ and units’ rank-ordered preferences over potential matches. This is a measure of ex-ante satisfaction with a match. We will also measure realized job satisfaction via survey questions that will be included in Human Resources Command's existing officer surveys.

Of course, another important dimension of efficiency is whether the match is more productive. We will measure this aspect of efficiency using promotion outcomes and officers’ annual performance evaluations. Our main promotion outcome will be the time to the next promotion. The performance evaluations include a categorical rating (Most qualified, Highly qualified, Qualified, and Unqualified) and a text-based evaluation that can be mapped to a rank-ordered performance rating that is highly predictive of future promotions. In particular, we will use the predicted rank from the text-based evaluation as a quantitative measure of performance.

In addition to strategyproofness, one of the most often cited benefits of the deferred acceptance algorithm is that it yields a stable match: no worker-firm pair would prefer being matched to each other over their assigned matches. The theoretical guarantees of stability are based on the preference reports submitted prior to the match. We will measure whether the matches are more stable in the long run by looking at officers’ retention in their assigned matches and with the Army. During the three-year project period, we will measure retention using one- and two-year retention rates. In addition, we will measure retention outcomes for as long as our data agreement allows.
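The deferred acceptance mechanism underlying the treatment can be sketched as follows. This is a minimal worker-proposing Gale-Shapley variant with illustrative preference lists and capacities; it is not the implementation actually deployed in the experiment:

```python
def deferred_acceptance(worker_prefs, position_prefs, capacities):
    """Worker-proposing deferred acceptance (Gale-Shapley) sketch.

    worker_prefs:   dict worker -> list of positions, most preferred first
    position_prefs: dict position -> list of workers, most preferred first
    capacities:     dict position -> number of slots
    Returns dict worker -> assigned position (unmatched workers omitted).
    """
    # rank[p][w] = how position p ranks worker w (lower is better)
    rank = {p: {w: i for i, w in enumerate(ws)} for p, ws in position_prefs.items()}
    next_proposal = {w: 0 for w in worker_prefs}   # next list index to propose to
    held = {p: [] for p in position_prefs}         # tentatively held workers
    free = [w for w in worker_prefs if worker_prefs[w]]
    while free:
        w = free.pop()
        if next_proposal[w] >= len(worker_prefs[w]):
            continue                               # w exhausted their list; stays unmatched
        p = worker_prefs[w][next_proposal[w]]
        next_proposal[w] += 1
        if w not in rank[p]:
            free.append(w)                         # p finds w unacceptable; w proposes on
            continue
        held[p].append(w)
        held[p].sort(key=lambda x: rank[p][x])     # keep p's held workers best-first
        if len(held[p]) > capacities[p]:
            free.append(held[p].pop())             # reject the least-preferred held worker
    return {w: p for p, ws in held.items() for w in ws}
```

Because rejections are only ever tentative until the algorithm terminates, the resulting match is stable with respect to the submitted preference reports, which is the property the retention outcomes above are designed to probe.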

Secondary Outcomes

Secondary Outcomes (end points)
We may also be able to measure additional match-relevant outcomes for officers' families, including fertility decisions, marriage and divorce rates, and measures of spousal volunteerism.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will test the impact of matching via the deferred acceptance algorithm using a randomized controlled trial.
Experimental Design Details
We will test the impact of matching via the deferred acceptance algorithm using a randomized controlled trial. The appropriate unit of randomization in this experiment is a disjoint market, since changing the matching mechanism will necessarily cause spillovers across officers within a market.

In October 2019, the research team randomly assigned 118 markets, which included about 10,000 officers, to either a treatment or control condition. The 118 markets were split into 34 randomization blocks based on the similarity of the work done in the market and the rank of the officers in the market. The median block included 4 markets. In order to maximize compliance with the treatment protocol, we worked with the Army’s Human Resources Command to remove any markets for which they anticipated the deferred acceptance algorithm might be inappropriate prior to randomization. These excluded markets will not be included in the analysis and were not counted in the 118 markets that were randomized.

Because only a random subset of markets will match with the deferred acceptance algorithm, any differences in outcomes between the branches offered the new mechanism and the control group can be credibly attributed to the mechanism itself, even if not all treatment branches adopt the new mechanism.

For most outcomes, we will measure the average effect by comparing average outcomes of officers in treatment and control branches using a linear regression of the outcome on treatment status, the lagged mean of the outcome in each officer's market, pre-randomization characteristics of the officer, and randomization block fixed effects. Block fixed effects must be included because treatment was randomly assigned only conditional on the randomization block. Controlling for baseline outcomes and officers' baseline characteristics is not necessary for identification but increases our power, as discussed below. Inference will be clustered at the disjoint market level. We will report inference based on both standard asymptotic approximations and randomization inference that approximates Fisher’s exact test (Imbens and Rubin, 2015).
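The randomization-inference step described above (re-randomizing treatment within blocks and comparing the observed statistic to the permutation distribution) can be sketched as below. The data, block structure, and difference-in-means statistic are illustrative simplifications; the registered analysis uses the regression specification with covariates described above:

```python
import numpy as np

def block_permute(treat, blocks, rng):
    """Re-assign treatment labels uniformly at random within each block."""
    out = treat.copy()
    for b in np.unique(blocks):
        idx = np.where(blocks == b)[0]
        out[idx] = rng.permutation(treat[idx])
    return out

def ri_pvalue(y, treat, blocks, n_perm=2000, rng=None):
    """Two-sided randomization-inference p-value for a difference in means,
    permuting treatment within randomization blocks (approximates Fisher's
    exact test by Monte Carlo rather than full enumeration)."""
    rng = rng or np.random.default_rng(0)
    obs = y[treat == 1].mean() - y[treat == 0].mean()
    hits = 0
    for _ in range(n_perm):
        t = block_permute(treat, blocks, rng)
        stat = y[t == 1].mean() - y[t == 0].mean()
        if abs(stat) >= abs(obs):
            hits += 1
    return hits / n_perm
```

In practice the permuted statistic would be the clustered regression coefficient rather than a raw difference in means, but the permutation scheme (shuffling within blocks only) is the key feature that mirrors the actual assignment mechanism.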
Randomization Method
We will use a pseudo-random number generator to assign markets to the treatment or control group within pre-determined randomization blocks.

Any markets that are considered unsuitable candidates for the treatment before randomization will be excluded from the randomization and analysis.
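The blocked assignment described above can be sketched as follows. The market identifiers, block labels, and even treatment/control split are illustrative assumptions; the actual blocks were built from the type of work in the market and the rank of its officers:

```python
import random

def assign_within_blocks(markets, blocks, seed=0):
    """Randomly assign markets to treatment or control within each block.

    markets: list of market ids
    blocks:  dict market id -> block label
    Returns dict market id -> "treatment" or "control".
    """
    rng = random.Random(seed)  # seeded pseudo-random number generator
    by_block = {}
    for m in markets:
        by_block.setdefault(blocks[m], []).append(m)
    assignment = {}
    for _, ms in sorted(by_block.items()):
        rng.shuffle(ms)
        half = len(ms) // 2            # first half treated (odd blocks round down)
        for i, m in enumerate(ms):
            assignment[m] = "treatment" if i < half else "control"
    return assignment
```

Seeding the generator makes the assignment reproducible, and shuffling within each block guarantees that treatment is balanced conditional on the block, which is why the analysis must include block fixed effects.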
Randomization Unit
Market of employees and positions within the firm, based on rank in the organization and specialty.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
120 markets
Sample size: planned number of observations
11,500 employees
Sample size (or number of clusters) by treatment arms
60 treatment and 60 control markets
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
If covariates explain 10 percent of residual variation, the MDE on one-year retention rates is 1.6 percentage points if the intra-cluster correlation is 0; if instead the intra-cluster correlation is 0.1 or 0.2, the MDE is 5.7pp or 7.9pp, respectively. Under the same assumption about covariates, the MDE on four-year retention is 2.5pp, 9.0pp, or 12.5pp if the intra-cluster correlation is 0, 0.1, or 0.2, respectively.
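These MDEs follow the standard formula for cluster-randomized designs, in which clustering inflates the variance by a design effect of 1 + (m − 1)ρ for cluster size m and intra-cluster correlation ρ. The sketch below shows the calculation; the baseline retention rate, average cluster size, power, and test size are illustrative assumptions, so it is not expected to reproduce the registered numbers exactly:

```python
from math import sqrt

Z_ALPHA = 1.96  # two-sided test at the 5 percent level
Z_BETA = 0.84   # 80 percent power

def cluster_mde(p, icc, clusters_per_arm, cluster_size, r2=0.10):
    """MDE for a difference in a binary outcome under clustered assignment.

    p: baseline outcome rate; icc: intra-cluster correlation;
    r2: share of residual variation explained by covariates.
    """
    var = p * (1 - p) * (1 - r2)            # residual outcome variance
    deff = 1 + (cluster_size - 1) * icc     # design effect from clustering
    n_arm = clusters_per_arm * cluster_size
    se = sqrt(var * deff * (1 / n_arm + 1 / n_arm))
    return (Z_ALPHA + Z_BETA) * se
```

The design effect term is what drives the large gap between the ICC = 0 and ICC = 0.2 MDEs above: with roughly 96 officers per market, even modest within-market correlation multiplies the effective variance severalfold.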
IRB

Institutional Review Boards (IRBs)

IRB Name
West Point Human Research Protection Program
IRB Approval Date
2019-12-14
IRB Approval Number
20-041
IRB Name
University of Oregon Committee for the Protection of Human Subjects
IRB Approval Date
2020-01-07
IRB Approval Number
08282019.044
IRB Name
University of Chicago Social and Behavioral Sciences Institutional Review Board
IRB Approval Date
2019-09-17
IRB Approval Number
IRB19-1362
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials