
Fields Changed

Registration

Field: Last Published
Before: November 01, 2019 06:17 PM
After: March 13, 2020 01:32 PM
Field: Primary Outcomes (End Points)
Before: Our three main measures of match quality will be (1) satisfaction with the match, (2) performance in the match, and (3) retention with the organization.
After: Our main outcomes are (1) self-reported preference manipulation, (2) satisfaction with the match based on preference reports, (3) performance in the match, and (4) retention in the match and with the Army.
Field: Primary Outcomes (Explanation)
Before: Our three main measures of match quality will be (1) satisfaction with the match, (2) performance in the match, and (3) retention with the Army. We will measure satisfaction with the match using officers’ and units’ rank-ordered preferences over potential matches. This is a measure of ex-ante satisfaction with a match. We may also be able to measure realized job satisfaction via survey questions that will be included in Human Resources Command’s existing officer surveys. We will measure the impact on performance using promotion outcomes and officers’ annual performance evaluations. Our main promotion outcome will be the time to the next promotion. The performance evaluations include a categorical rating (Most qualified, Highly qualified, Qualified, and Unqualified) and a text-based evaluation that can be mapped to a rank-ordered performance rating that is highly predictive of future promotions. In particular, we will use the predicted rank from the text-based evaluation as a quantitative measure of performance. During the three-year project period, we will measure retention using one- and two-year retention rates. In addition, we will measure retention outcomes for as long as our data agreement allows.
After: Our main outcomes are selected to indirectly measure three well-known, and sometimes competing, goals of market design mechanisms: strategyproofness, efficiency, and stability. Officers will be asked whether they strategically manipulated their preferences in two surveys. They were first asked about strategic manipulation in a survey administered on the platform where they submit or review their preferences, about two weeks before their preferences were due. They will be asked a similar set of questions again when they receive their assigned match in February 2020. We will use these reports to assess whether officers matched with the deferred acceptance algorithm truthfully reported their preferences at a higher rate than officers in the control group. We will measure the efficiency of the match in two ways. First, following the literature assessing the welfare consequences of centralized assignment in the student-to-school setting (Abdulkadiroğlu, Pathak, and Roth, 2009; Abdulkadiroğlu, Agarwal, and Pathak, 2017), we will measure efficiency using officers’ and units’ rank-ordered preferences over potential matches. This is a measure of ex-ante satisfaction with a match. We will also measure realized job satisfaction via survey questions that will be included in Human Resources Command’s existing officer surveys. Second, because another important dimension of efficiency is whether the match is more productive, we will measure this aspect of efficiency using promotion outcomes and officers’ annual performance evaluations. Our main promotion outcome will be the time to the next promotion. The performance evaluations include a categorical rating (Most qualified, Highly qualified, Qualified, and Unqualified) and a text-based evaluation that can be mapped to a rank-ordered performance rating that is highly predictive of future promotions. In particular, we will use the predicted rank from the text-based evaluation as a quantitative measure of performance. Finally, in addition to strategyproofness, one of the most often cited benefits of the deferred acceptance algorithm is that it yields a match that is stable in the sense that no unmatched worker-firm pair would prefer being matched together to their assigned match. The theoretical guarantees of stability are based on the preference reports submitted prior to the match. We will measure whether the matches are more stable in the long run by looking at officers’ retention in their assigned matches and with the Army. During the three-year project period, we will measure retention using one- and two-year retention rates. In addition, we will measure retention outcomes for as long as our data agreement allows.
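For concreteness, a minimal sketch of the rank-based, ex-ante satisfaction measure described above: the rank that each officer's assigned match holds in their submitted preference list. The data layout and all names are hypothetical, since the registration does not specify an implementation.

```python
# Sketch: ex-ante match satisfaction as the rank of the assigned position
# in each officer's submitted preference list. Field names and example
# data are hypothetical.

def assigned_rank(preferences, assignment):
    """Map each officer to the rank (1 = first choice) of their assigned position."""
    ranks = {}
    for officer, ranked_positions in preferences.items():
        position = assignment[officer]
        # Lists are 0-indexed, so add 1 to report "first choice" as rank 1.
        ranks[officer] = ranked_positions.index(position) + 1
    return ranks

# Example: two officers, each ranking three positions.
prefs = {"officer_a": ["fort_x", "fort_y", "fort_z"],
         "officer_b": ["fort_y", "fort_x", "fort_z"]}
match = {"officer_a": "fort_x", "officer_b": "fort_z"}
print(assigned_rank(prefs, match))  # {'officer_a': 1, 'officer_b': 3}
```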
Field: Randomization Method
Before: We will use a pseudo-random number generator to assign markets to the treatment or control group.
After: We will use a pseudo-random number generator to assign markets to the treatment or control group within pre-determined randomization blocks. Any markets that are considered unsuitable candidates for the treatment before randomization will be excluded from the randomization and analysis.
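A minimal sketch of what blocked assignment like the revised method could look like. The block labels, market names, seed, and the even treatment/control split are illustrative assumptions, not details from the registration.

```python
# Sketch: assigning markets to treatment or control within pre-determined
# randomization blocks using a pseudo-random number generator. The seed and
# the 50/50 split within each block are hypothetical choices.
import random

def block_randomize(markets_by_block, seed=20200313):
    rng = random.Random(seed)  # pseudo-random number generator
    assignment = {}
    for block, markets in markets_by_block.items():
        shuffled = markets[:]
        rng.shuffle(shuffled)
        # First half of each shuffled block to treatment, the rest to control
        # (with an odd count, control receives the extra market).
        cutoff = len(shuffled) // 2
        for i, market in enumerate(shuffled):
            assignment[market] = "treatment" if i < cutoff else "control"
    return assignment

blocks = {"block_1": ["m1", "m2", "m3", "m4"], "block_2": ["m5", "m6"]}
print(block_randomize(blocks))
```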
Field: Power calculation: Minimum Detectable Effect Size for Main Outcomes
Before: The MDE on one-year retention rates if covariates explain 10 percent of residual variation is 1.5 percentage points (pp) if …
After: The MDE on one-year retention rates if covariates explain 10 percent of residual variation is 1.6 percentage points (pp) if the intra-cluster correlation is 0. If instead the intra-cluster correlation is 0.1 or 0.2, the MDE is 5.7 pp or 7.9 pp, respectively. If covariates explain 10 percent of residual variation, the MDE on four-year retention is 2.5 pp, 9.0 pp, and 12.5 pp if the intra-cluster correlation is 0, 0.1, or 0.2, respectively.
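The ICC-dependent MDEs above follow the usual cluster-design logic, in which the design effect 1 + (m - 1)ρ inflates the variance, where m is the cluster (market) size and ρ the intra-cluster correlation. Below is a hedged sketch of that calculation. The sample size, cluster size, and baseline retention rate are hypothetical and do not reproduce the registered numbers exactly; the registration reports only the resulting MDEs.

```python
# Sketch: MDE for a cluster-randomized comparison of retention rates,
# adjusting residual variance by the design effect 1 + (m - 1) * icc.
# All inputs below are hypothetical illustrations.
from statistics import NormalDist

def mde(p, n_per_arm, cluster_size, icc, r2=0.10, alpha=0.05, power=0.80):
    # Critical values for a two-sided test at level alpha with the given power.
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    deff = 1 + (cluster_size - 1) * icc   # design effect from clustering
    var = p * (1 - p) * (1 - r2)          # residual variance after covariates
    return z * (2 * var * deff / n_per_arm) ** 0.5

for icc in (0.0, 0.1, 0.2):
    print(f"ICC={icc}: MDE={mde(p=0.90, n_per_arm=5000, cluster_size=100, icc=icc):.3f}")
```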
Field: Intervention (Hidden)
Before: This algorithm is the basis for the National Residency Matching Program, which matches all new doctors to hospitals in the United States every year. When matches are determined using this algorithm, officers submit a rank-ordered list of their preferences over positions at bases. The leadership for each hiring unit similarly submits a rank-ordered list over candidates for each open position. The algorithm then uses these preferences to find a matching for which no unmatched officer-base pair would both prefer to be matched together over their assigned match. The algorithm can be modified to account for both the Army’s objectives and constraints (Hatfield, Kominers, and Westkamp, 2017). These matching algorithms are transparent to participants, fair, and efficient to implement. They yield a match which is envy-free: if an officer is not given her first choice, then all other officers assigned ahead of her were preferred by that base. A benefit of this feature is that participants cannot benefit from misreporting their preferences over potential assignments. A key difference between this setting and other matching markets that have adopted the DAA is that the Army may have preferences over potential matches in addition to the officers’ and units’ preferences. We are incorporating this feature into the DAA implementation by allowing officers and units to rank all potential matches, but allowing the Army to constrain the set of allowable matches. Therefore, the DAA is used to determine matches among the set of matches between which the Army is indifferent. While we anticipate only a small share of matches will be constrained, we are collecting data on these constraints so we will be able to report the extent of their use and their impact.
After: In fall 2019, we partnered with the Army’s Human Resources Command and researchers from West Point to pilot replacing the current system of manually matching officers to bases with an algorithmic match based on the deferred acceptance algorithm (DAA; Gale and Shapley, 1962). This algorithm is the basis for the National Residency Matching Program, which matches all new doctors to hospitals in the United States every year, and for the mechanism that matches New York City public school students to schools every year (Abdulkadiroğlu, Pathak, and Roth, 2005). When matches are determined using this algorithm, officers submit a rank-ordered list of their preferences over positions at bases. The leadership for each hiring unit similarly submits a rank-ordered list over candidates for each open position. The algorithm then uses these preferences to find a matching for which no unmatched officer-base pair would both prefer to be matched together over their assigned match. These matching algorithms are transparent to participants, fair, and efficient to implement. They yield a match which is envy-free: if an officer is not given her first choice, then all other officers assigned ahead of her were preferred by that base. A benefit of this feature is that participants cannot benefit from misreporting their preferences over potential assignments. The matching market between officers and units shares many features with a school choice problem (Abdulkadiroğlu and Sönmez, 2003). Just as students are guaranteed a spot in a public school, officers are guaranteed to match with a unit.
Mirroring implementations of the deferred acceptance algorithm in school choice problems (Abdulkadiroğlu, Pathak, and Roth, 2009), this constraint is incorporated into the algorithm by randomly imputing preferences so that, within a market, every officer ranks every position and every position ranks every officer. For example, if a unit actively ranked 10 of 100 officers in a market, an imputed rank between 11 and 100 would be assigned to each of the 90 unranked officers. Specifically, officers were sorted into indifference classes based on their labels and then randomly assigned a ranking within their class. Human Resources Command ranked the indifference classes so that every officer in a more preferred class is imputed a better ranking than officers in less preferred classes. Because the sorting is independent across positions, this is sometimes referred to as “multiple tiebreaking” (Abdulkadiroğlu, Pathak, and Roth, 2009). Units also had the option of giving an officer a “thumbs down,” indicating that the officer is less preferred than unranked officers. Officers with a thumbs down were grouped into the least preferred class. An analogous imputation procedure was used to impute officers’ preferences. This imputation is not costless. In particular, it does not treat any potential match as entirely unacceptable to an officer or position. As a result, the resulting match may be unstable because an officer might prefer being unmatched (e.g., leaving the Army) to their assigned match. This is not a problem, however, if officers are actually indifferent between unranked positions rather than viewing them as unacceptable. Importantly, this imputation is similar to what is done in other real-world matching markets, such as school choice in New York City public schools (Abdulkadiroğlu, Pathak, and Roth, 2009).
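A compact sketch of the two pieces described in this field: imputing complete rankings via class-respecting random tiebreaking, then running officer-proposing deferred acceptance. The indifference classes, names, and data structures are hypothetical; this illustrates the cited algorithm (Gale and Shapley, 1962), not the Army's production implementation.

```python
# Sketch: preference imputation with random tiebreaking, followed by
# officer-proposing deferred acceptance. All example structures are
# hypothetical illustrations of the procedure described above.
import random

def impute_full_ranking(active_ranking, indifference_classes, rng):
    """Extend a partial ranking to a complete one. Unranked candidates are
    ordered by their indifference class (listed most to least preferred,
    with any 'thumbs down' class last) and randomly ordered within class."""
    full = list(active_ranking)
    for cls in indifference_classes:
        members = [c for c in cls if c not in active_ranking]
        rng.shuffle(members)  # independent draw per side/position
        full.extend(members)
    return full

def deferred_acceptance(officer_prefs, position_prefs):
    """Match officers to positions given complete strict rankings on both sides."""
    match = {}                                   # position -> officer
    free = list(officer_prefs)                   # officers still proposing
    next_choice = {o: 0 for o in officer_prefs}
    while free:
        officer = free.pop()
        position = officer_prefs[officer][next_choice[officer]]
        next_choice[officer] += 1
        incumbent = match.get(position)
        if incumbent is None:
            match[position] = officer            # position tentatively accepts
        else:
            ranking = position_prefs[position]
            if ranking.index(officer) < ranking.index(incumbent):
                match[position] = officer        # position prefers the proposer
                free.append(incumbent)           # incumbent resumes proposing
            else:
                free.append(officer)             # rejected; try next choice
    return match
```

With complete strict rankings on both sides, the officer-proposing version returns the officer-optimal stable match; and because each position would draw its imputed ranking with an independent call to `impute_full_ranking`, the sketch mirrors the "multiple tiebreaking" described above.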

IRBs

IRB Name: University of Oregon Committee for the Protection of Human Subjects
IRB Approval Date: January 07, 2020
IRB Approval Number: 08282019.044

IRB Name: West Point Human Research Protection Program
IRB Approval Date: December 14, 2019
IRB Approval Number: 20-041