Decentralized School Choice

Last registered on September 12, 2025

Pre-Trial

Trial Information

General Information

Title
Decentralized School Choice
RCT ID
AEARCTR-0016494
Initial registration date
September 07, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
September 12, 2025, 10:05 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
NYU Abu Dhabi

Additional Trial Information

Status
In development
Start date
2025-09-10
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study two mechanisms for matching K–12 students with schools: the well-known Top Trading Cycles (TTC) mechanism and a new market-based mechanism inspired by Leshno and Lo (2021), which we refer to as the competitive equilibrium (CE) mechanism. While TTC is strategy-proof and Pareto efficient, students may fail to recognize that truthful reporting is optimal, resulting in welfare losses. Leshno and Lo (2021) develop a method to calculate priority cutoffs in a student-school market, ensuring that demand equals supply at each school, and show that the resulting allocation in a finite model coincides with the TTC outcome. Our CE mechanism builds on this approach, with a key modification in how priority cutoffs are determined. We conjecture that it is cognitively simpler than TTC, as students only need to select an eligible school rather than rank all schools. We conduct an online experiment to assess whether the CE mechanism improves truth-telling (defined as students choosing their best eligible school), increases social surplus, and implements the TTC allocation under truthful reporting. So far, only pilot data have been collected.
External Link(s)

Registration Citation

Citation
Park, Hyoeun and John Wooders. 2025. "Decentralized School Choice." AEA RCT Registry. September 12. https://doi.org/10.1257/rct.16494-1.0
Experimental Details

Interventions

Intervention(s)
We study two mechanisms for matching K–12 students with schools: the well-known Top Trading Cycles (TTC) mechanism and a new market-based mechanism inspired by Leshno and Lo (2021), which we refer to as the competitive equilibrium (CE) mechanism. While TTC is strategy-proof and Pareto efficient, students may fail to recognize that truthful reporting is optimal, resulting in welfare losses. Leshno and Lo (2021) develop a method to calculate priority cutoffs in a student-school market, ensuring that demand equals supply at each school, and show that the resulting allocation in a finite model coincides with the TTC outcome. Our CE mechanism builds on this approach, with a key modification in how priority cutoffs are determined. We conjecture that the CE mechanism is cognitively simpler than the TTC mechanism, as students only need to select one school from the set of schools to which they are eligible to apply, rather than providing a ranking of all the schools. We conduct an online experiment to assess whether the CE mechanism yields a higher rate of truth-telling (defined as students choosing their best eligible school), yields more social surplus, and more closely implements the allocation that would result if subjects were to report their preferences truthfully in the TTC mechanism. So far, only pilot data have been collected.
Intervention (Hidden)
Intervention Start Date
2025-09-10
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
The main outcomes of interest are: (1) the rate at which subjects truthfully report their preferences under TTC, and the rate at which subjects choose their best eligible school under CE; (2) the payoffs realized under TTC and under CE; (3) the closeness of the TTC allocation under truth-telling to (a) the TTC allocation for reported preferences and (b) the CE allocation. Here, we briefly explain these outcome measures. Detailed definitions and the analysis plan are provided in the Experimental Design and the Analysis Plan.

The existing literature has shown that subjects largely do not report their preferences truthfully in the TTC mechanism, even though it is a weakly dominant strategy to do so (for example, Guillen and Hing, 2014; Guillen and Hakimov, 2017, 2018; Chen and Sönmez, 2006; Guillen and Veszteg, 2021). A likely explanation is that subjects do not recognize that truth-telling is optimal. This motivates our study of the CE mechanism inspired by Leshno and Lo (2021), in which a student’s individual priorities, together with the CE priority prices, determine the set of schools to which the student is eligible to apply. While the CE priority prices clear markets in a continuum economy, they will typically fail to do so in a finite economy, and hence in our implementation of the market, students may be rationed. When choosing among eligible schools, a student is informed of the odds that they will be admitted to each school. In the CE treatment, we are interested in the rate at which students choose their best eligible school. Since the odds of admission are higher in the large market than in the small market, we expect students to be more likely to choose their best eligible school in the large market. For the TTC mechanism, we expect no difference between the small and large markets in the rate at which subjects truthfully report their preferences.

The natural next question is how payoffs differ between the two mechanisms. We calculate the average payoff in each market and compare under which mechanism subjects earn more. Since we expect behavior in the TTC treatments to vary little with market size, we expect average payoffs to differ across market sizes only in the CE treatments. In particular, we expect payoffs to improve as the market size increases in the CE treatments.

Last, we compare which mechanism produces allocations closer to the TTC allocation under truth-telling, that is, the allocation that results when everyone reports their preferences truthfully. For this purpose, we use the Borda distance metric proposed by Can et al. (2023).
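
A minimal sketch of a Borda-type distance between two allocations is given below. The exact metric used is the one in Can et al. (2023); the formalization here (summing, across students, the absolute difference in the Borda scores of their assigned schools) is only our illustrative reading, and all names are hypothetical.

```python
def borda_distance(alloc1, alloc2, preferences):
    """Illustrative Borda-type distance between two allocations.

    alloc1, alloc2 : dicts mapping student -> assigned school.
    preferences[i] : student i's true ranking of schools, best first.
    The Borda score of a school is (number of schools - 1) minus its position
    in the student's ranking, so a student's top school scores highest.
    """
    total = 0
    for i, ranking in preferences.items():
        score = lambda s: len(ranking) - 1 - ranking.index(s)
        total += abs(score(alloc1[i]) - score(alloc2[i]))
    return total

# Example: with preferences A > B > C, being moved from A to C changes the
# Borda score by 2, so that student contributes 2 to the distance.
prefs = {0: ["A", "B", "C"]}
print(borda_distance({0: "A"}, {0: "C"}, prefs))  # 2
```
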
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
(For a cleaner version of the mathematical notation, please refer to Section 2.3 of pre-registration.pdf under Supporting Documents and Materials.)

1. General Environment
There are three schools: A, B, and C. The number of seats in schools A, B, and C is 2k, 4k, and 6k, respectively, where k = 1 or k = 10 depending on the size of the economy. Each student has a priority at each school. Let ri(s) denote the priority of student i at school s. We assume that ri(s) ∼ U[0, 1] for each student i and school s. Each student’s priority at each school is drawn from the same distribution to capture the idea that students are ex-ante symmetric from the perspective of each school.
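
As an illustration, this environment can be generated as in the sketch below (Python; the function and variable names are ours, not the experiment's actual code).

```python
import numpy as np

def make_environment(k, rng=None):
    """Seats and i.i.d. U[0,1] priorities for a market with 12*k students.

    Schools A, B, and C have 2k, 4k, and 6k seats; each student draws an
    independent uniform priority at each school.
    """
    rng = rng or np.random.default_rng()
    seats = {"A": 2 * k, "B": 4 * k, "C": 6 * k}
    n_students = 12 * k
    # priorities[i, j] = priority of student i at the j-th school (A, B, C)
    priorities = rng.uniform(0.0, 1.0, size=(n_students, len(seats)))
    return seats, priorities

seats, priorities = make_environment(k=1)   # small market: 12 students, 12 seats
```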

For students, there are six possible preference orderings over the three schools, e.g., A ≻ B ≻ C, A ≻ C ≻ B, and so on. We consider an economy where the preference orderings are uniform. That each preference ordering is equally common captures the idea that schools are of the same quality. Schools differ only in the number of seats.

There are two possible ways to implement the uniform preference orderings in an experiment: (1) exogenously assign the same number of students to each preference ordering, or (2) for each student, independently draw their preference ordering from the uniform distribution over the six orderings. We implement both: the first is the no aggregate uncertainty treatment, and the second is the aggregate uncertainty treatment.

The small market consists of 12 students. Thus, in the small market × no aggregate uncertainty treatments, there are two students for each of the six preference orderings, while the small market × aggregate uncertainty treatments require drawing 12 students’ preference orderings independently, with an equal chance for each ordering. In the case of the large market, there are 120 students. Similarly, there are 20 students for each preference ordering in the no aggregate uncertainty treatments, and we draw 120 preference orderings independently in the aggregate uncertainty treatments. Priority and preference profiles are the same between the TTC and CE treatments.
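
The two implementations can be sketched as follows (illustrative code; the function names are ours):

```python
import itertools
import random

ORDERINGS = list(itertools.permutations(["A", "B", "C"]))  # the 6 preference orderings

def preferences_no_aggregate_uncertainty(k):
    """Exogenously assign exactly 2k of the 12k students to each ordering."""
    return [order for order in ORDERINGS for _ in range(2 * k)]

def preferences_aggregate_uncertainty(k):
    """Draw each of the 12k students' orderings i.i.d., uniformly over the 6 orderings."""
    return [random.choice(ORDERINGS) for _ in range(12 * k)]

small_nau = preferences_no_aggregate_uncertainty(k=1)  # exactly 2 students per ordering
small_au = preferences_aggregate_uncertainty(k=1)      # counts per ordering are random
```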

For no aggregate uncertainty, we consider 10 large markets (LM1, LM2,..., LM10) and 10 small markets (SM1, SM2,...,SM10). The subjects’ priorities in the large market are generated randomly. To construct SM1, for each preference profile, take the first two students in LM1 with that preference profile, and then add them to SM1. This makes a market with 12 students. Repeat for SM2 and LM2, and so on.

For aggregate uncertainty, we again consider 10 large markets (ULM1, ULM2, ..., ULM10) and 10 small markets (USM1, USM2, ..., USM10). The subjects’ preferences and priorities in an uncertain large market (ULM) are generated randomly. To construct USM1, take the first 12 students from ULM1 (regardless of preference profile). Repeat for USM2 and ULM2, and so on.
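
The construction of the small markets from the large markets can be sketched as follows (illustrative code; a student is represented as a (preference ordering, priorities) pair, listed in the order in which students were generated):

```python
def small_market_no_aggregate_uncertainty(large_market):
    """Take the first two students with each preference ordering from a large market."""
    counts, small = {}, []
    for ordering, priorities in large_market:
        if counts.get(ordering, 0) < 2:
            counts[ordering] = counts.get(ordering, 0) + 1
            small.append((ordering, priorities))
    return small  # 12 students: two per ordering

def small_market_aggregate_uncertainty(large_market):
    """Take the first 12 students of an uncertain large market, regardless of ordering."""
    return large_market[:12]
```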

2. TTC Treatment

In the TTC treatment, subjects are informed of (1) the number of seats in each school, (2) the number of students (the size of the economy), (3) their earnings from each school (their own preferences), (4) their priority at each school, and (5) the distribution of preferences (i.e., whether or not there is aggregate uncertainty). Given this information, subjects submit a full preference ranking.

Subjects make choices twice (Round 1 and Round 2). Everything else remains the same between the two rounds, except for priorities. Allocations are determined by the standard TTC mechanism.
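
For reference, a minimal sketch of the TTC assignment applied to the submitted rankings is given below. It assumes full rankings, that total seats equal the number of students, and that a higher priority number means higher priority; the names are illustrative, not the experiment's actual code.

```python
def ttc(rankings, priorities, capacity):
    """Top Trading Cycles for school choice.

    rankings[i]   : student i's submitted ranking of schools, best first.
    priorities[s] : dict mapping student -> priority score at school s
                    (assumption: higher score = higher priority).
    capacity[s]   : number of seats at school s.
    Returns a dict mapping student -> assigned school.
    """
    capacity = dict(capacity)
    unassigned = set(range(len(rankings)))
    assignment = {}
    while unassigned:
        # Each student points to their best school with a seat left; each such
        # school points to its highest-priority remaining student.
        student_points = {i: next(s for s in rankings[i] if capacity[s] > 0)
                          for i in unassigned}
        school_points = {s: max(unassigned, key=lambda i: priorities[s][i])
                         for s, q in capacity.items() if q > 0}
        # Follow the pointers from an arbitrary student until a student repeats;
        # the repeated portion is a cycle, and everyone in it is assigned.
        i = next(iter(unassigned))
        seen = []
        while i not in seen:
            seen.append(i)
            i = school_points[student_points[i]]
        for j in seen[seen.index(i):]:
            assignment[j] = student_points[j]
            capacity[student_points[j]] -= 1
            unassigned.discard(j)
    return assignment

# Example with three students and one seat per school:
rankings = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
priorities = {"A": {0: 0.2, 1: 0.9, 2: 0.5},
              "B": {0: 0.7, 1: 0.1, 2: 0.4},
              "C": {0: 0.3, 1: 0.6, 2: 0.8}}
print(ttc(rankings, priorities, {"A": 1, "B": 1, "C": 1}))
# student 1 is assigned A, student 0 gets B, student 2 gets C
```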

3. CE Treatment
The key insight we take from Leshno and Lo (2021) is that the TTC allocation can be replicated using priority prices. In other words, each student is assigned a “budget set” of schools they can apply to based on their priorities. The TTC allocation can then be interpreted as assigning each student to their best school in their budget set. Because priorities determine the budget set, they also play a role in CE prices (or cutoffs) of schools in a decentralized economy. These cutoffs—and the resulting budget sets—can be computed directly from the distribution of preferences and priorities in a continuum economy.

We calculate the CE prices in the continuum economy where schools A, B, and C have measures 2/12, 4/12, and 6/12 of seats, respectively, and there is a measure 2/12 of students with each preference profile. We then use these prices to form subjects’ budget sets.
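
Given the cutoffs, the budget sets can be formed as in the sketch below. The cutoff values shown are hypothetical placeholders (the actual cutoffs are computed for the continuum economy as described in the pre-registration document), and we assume for illustration that a student is eligible at a school when their priority there weakly exceeds the cutoff.

```python
def budget_set(student_priorities, cutoffs):
    """Schools a student is eligible to apply to, given CE priority cutoffs.

    student_priorities, cutoffs : dicts keyed by school.
    Assumption (illustrative): eligible at s if priority at s >= cutoff at s.
    """
    return {s for s, cutoff in cutoffs.items() if student_priorities[s] >= cutoff}

cutoffs = {"A": 0.8, "B": 0.5, "C": 0.0}                       # hypothetical values
print(budget_set({"A": 0.62, "B": 0.71, "C": 0.15}, cutoffs))  # {'B', 'C'}
```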

Subjects are informed of (1) the number of seats in each school, (2) the number of students (the size of the economy), (3) their earnings from each school (their own preferences), (4) their own budget set, (5) the odds of getting into the school they choose, conditional on all other students choosing their highest-earning school from their budget sets, and (6) the distribution of preferences (i.e., whether or not there is aggregate uncertainty). Given this information, subjects choose a school from their budget set.

Subjects make choices twice (Round 1 and Round 2). Everything else remains the same between the two rounds, except for priorities. If a school has excess demand, we randomly select which students will be assigned to it. Students who are not selected are assigned to a school with available seats.
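
One simple way to implement this assignment with rationing is sketched below (illustrative code; in particular, the rule reassigning rationed students uniformly at random over schools with remaining seats is our placeholder, and the exact rule follows the pre-registration document).

```python
import random

def ce_assign(choices, capacity, rng=random):
    """Assign students to their chosen schools, rationing over-demanded schools at random.

    choices[i]  : the school chosen by student i from their budget set.
    capacity[s] : number of seats at school s.
    """
    capacity = dict(capacity)
    assignment, rationed = {}, []
    for s in set(choices.values()):
        demanders = [i for i, chosen in choices.items() if chosen == s]
        rng.shuffle(demanders)                       # random selection under excess demand
        admitted = demanders[:capacity[s]]
        rationed.extend(demanders[capacity[s]:])
        for i in admitted:
            assignment[i] = s
        capacity[s] -= len(admitted)
    for i in rationed:                               # placeholder reassignment rule
        s = rng.choice([t for t, q in capacity.items() if q > 0])
        assignment[i] = s
        capacity[s] -= 1
    return assignment

print(ce_assign({0: "A", 1: "A", 2: "B"}, {"A": 1, "B": 1, "C": 1}))
```
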
Experimental Design Details
Randomization Method
Each session is posted as an individual Prolific study. Prolific subjects self-select which study to participate in, without any prior knowledge about the study.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No planned clusters.
Sample size: planned number of observations
5,280 school preference rankings (TTC treatments, based on two rounds); 5,280 school choices from eligible sets (CE treatments, based on two rounds)
Sample size (or number of clusters) by treatment arms
TTC, small, no aggregate uncertainty treatment: 120 (12 subjects per session, 10 sessions)
TTC, small, aggregate uncertainty treatment: 120 (12 subjects per session, 10 sessions)
TTC, large, no aggregate uncertainty treatment: 1,200 (120 subjects per session, 10 sessions)
TTC, large, aggregate uncertainty treatment: 1,200 (120 subjects per session, 10 sessions)
CE, small, no aggregate uncertainty treatment: 120 (12 subjects per session, 10 sessions)
CE, small, aggregate uncertainty treatment: 120 (12 subjects per session, 10 sessions)
CE, large, no aggregate uncertainty treatment: 1,200 (120 subjects per session, 10 sessions)
CE, large, aggregate uncertainty treatment: 1,200 (120 subjects per session, 10 sessions)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

Documents

Document Name
pre-registration.pdf
Document Type
Document Description
The detailed document with tables and figures.
File
pre-registration.pdf

MD5: 6c018fa9b067425d67489ddd49a60dee

SHA1: 53730cf544ea0b452fb75d255ad6ed9882887f25

Uploaded At: September 07, 2025

IRB

Institutional Review Boards (IRBs)

IRB Name
Division of Social Science, New York University Abu Dhabi
IRB Approval Date
2025-03-14
IRB Approval Number
HRPP-2020-37
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials