The Equilibrium Impact of AI Assisted School Choice

Last registered on September 30, 2018

Pre-Trial

Trial Information

General Information

Title
The Equilibrium Impact of AI Assisted School Choice
RCT ID
AEARCTR-0003367
Initial registration date
September 28, 2018

First published
September 30, 2018, 2:06 AM EDT

Locations

Primary Investigator

Affiliation
Princeton University

Other Primary Investigator(s)

PI Affiliation
Princeton University
PI Affiliation
University of Chicago Booth School of Business

Additional Trial Information

Status
Completed
Start date
2018-01-01
End date
2018-06-30
Secondary IDs
Abstract
This project studies a randomized EdTech/AI intervention that helps parents strategize in a centralized school choice setting that rewards knowledge of admissions probabilities for different submitted portfolios. Many districts have implemented centralized assignment mechanisms that reward strategic play (see e.g. Agarwal and Somaini 2018). Prior studies indicate that many households lack the information on admissions probabilities necessary for optimal play (Kapor, Neilson, and Zimmerman 2018), and that this reduces overall welfare relative to alternative assignment mechanisms that only require households to report their own preferences truthfully. However, this work also suggests that a “best-case” personalized informational intervention in which households learn true admissions probabilities could lead to welfare outcomes that exceed those under strategy-proof assignment mechanisms. The first goal of this study is to see whether AI assistance that provides simulations of person-specific, application-specific assignment probabilities can approximate this best-case scenario, and to understand the distributional implications of such interventions when implemented at scale.

The second goal is to understand how households in school choice markets make dynamic tradeoffs between schools they like now and access to better schools in the future. We address this question using survey questions that ask students about their preferences over and understanding of this tradeoff, and an informational intervention that helps them understand how the tradeoff depends on the applications they submit to the choice system.
External Link(s)

Registration Citation

Citation
Kapor, Adam, Christopher Neilson and Seth Zimmerman. 2018. "The Equilibrium Impact of AI Assisted School Choice." AEA RCT Registry. September 30. https://doi.org/10.1257/rct.3367-1.0
Former Citation
Kapor, Adam, Christopher Neilson and Seth Zimmerman. 2018. "The Equilibrium Impact of AI Assisted School Choice." AEA RCT Registry. September 30. https://www.socialscienceregistry.org/trials/3367/history/34970
Sponsors & Partners

There is information in this trial that is unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2018-01-14
Intervention End Date
2018-05-31

Primary Outcomes

Primary Outcomes (end points)
We will examine the effects of our intervention on the following outcomes:
• Elicited beliefs about admissions probabilities and changes in these beliefs relative to baseline, as measured in a follow-up survey
• Participation in choice (i.e. submitting an application)
• Attributes of the submitted application (how many schools listed, admissions chances at schools listed, elicited preferences for schools listed, belief accuracy about schools listed)
• Placement outcomes (any placement, attributes of placed school such as measures of school quality and stated preference for placed school, enrollment)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Over the medium run, we will look at “persistence” outcomes as well (persistence in the placed school, the decision to participate in choice in subsequent years, and submitted applications in subsequent years).
Over the long run, we will look at student achievement if and when these data become available (student test scores, absences, suspensions).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our app-based “Apply Smart” program allows households to view admissions probabilities for hypothetical applications based on outcomes from past years. This project evaluates a randomized trial in which some respondents to home surveys of parents with children entering Kindergarten and PK receive access to the Apply Smart tool while others do not. In addition to simulated admissions chances for the current school year, students applying for spots in PK also receive information about how their PK choices this year will affect their options for Kindergarten next year. The PK choice has important implications for the following year's Kindergarten choice because some PKs (those that are part of PK-8 schools) give students automatic admission to the associated primary school track, while others (primarily Head Start programs that stop at PK) do not.
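The registration does not include the Apply Smart code. As a rough, hypothetical illustration of the kind of calculation such a tool performs, the Python sketch below uses Monte Carlo simulation to estimate an applicant's chance of winning a seat in a single uniform lottery given past applicant counts. The function name and parameters (simulate_offer_prob, n_seats, n_competitors) are our own illustrative choices; the actual tool simulates the full centralized mechanism with priorities and complete rank-order lists.

    import random

    def simulate_offer_prob(n_seats, n_competitors, n_draws=10000, seed=0):
        # Monte Carlo estimate of the chance that one applicant wins a seat in a
        # uniform lottery with n_seats seats and n_competitors other applicants.
        # Illustrative only: ignores priorities and multi-school rank-order lists.
        rng = random.Random(seed)
        wins = 0
        for _ in range(n_draws):
            my_number = rng.random()
            # count competitors who draw a better (lower) lottery number
            n_better = sum(rng.random() < my_number for _ in range(n_competitors))
            if n_better < n_seats:
                wins += 1
        return wins / n_draws

    # Example: a program with 18 seats and 60 competing applicants from a past year.
    print(simulate_offer_prob(n_seats=18, n_competitors=60))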
We will estimate specifications that take these variables as outcomes. Right-hand side variables will include indicators for treatment and a rich set of controls based on survey covariates, including elicited baseline preferences and beliefs, as well as student demographic variables. We plan to examine effect heterogeneity by a) belief errors at baseline, b) student socioeconomic status as measured by neighborhood poverty, c) elicited preference intensity, d) student neighborhood (measures of students’ options outside of choice), e) grade, and f) “need to switch,” defined by whether the PK in which a student is currently enrolled offers the option of continuing through to Kindergarten. In addition to these cuts of the data, we plan to conduct a machine learning analysis of effect heterogeneity following Chernozhukov et al. (2018).
We plan to estimate two types of specifications. The first will compare the two treatment arms to the control arm within the full sample universe. The second will compare the two treatment arms (survey vs. survey + app) within the sample of individuals who complete surveys.
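As a concrete illustration (our own notation, not taken from the registration), the first type of specification could be written as a regression of each outcome on indicators for the two treatment arms plus controls:

    Y_i = \alpha + \beta_1 T_i^{survey} + \beta_2 T_i^{survey+app} + X_i' \gamma + \varepsilon_i,

where X_i collects the survey covariates and student demographic controls, and the heterogeneity analyses would add interactions of the treatment indicators with the split variables listed in a) through f). The second type of specification would restrict the sample to survey completers and compare the survey-only and survey + app arms directly.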
In the second step of this analysis we will use our experimental results as inputs to a structural model of school choice, following Kapor, Neilson, and Zimmerman (2018). The details of the model will depend on the themes that emerge from experimental analysis in part one. The overarching idea is to incorporate the effects of the intervention on belief formation, choice participation, and placement outcomes into estimation, and then to use counterfactual simulations to learn about the equilibrium effects of scaling up the policy. We will evaluate these counterfactuals using measures of welfare (typically distance-metric welfare in this literature) as well as school attributes such as test score performance for past students.
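For reference, and as an assumption about the standard convention in this literature rather than a statement of this project's exact definition, distance-metric welfare scales utility differences by the estimated disutility of travel distance, so that welfare changes are expressed in distance units. With indirect utility u_{ij} linear in distance with coefficient \delta < 0, the welfare change for student i from a counterfactual assignment j_i' relative to a baseline assignment j_i is roughly

    \Delta W_i = ( u_{i j_i'} - u_{i j_i} ) / |\delta|,

i.e., the additional travel distance the student would accept in exchange for the counterfactual assignment. The exact normalization will depend on the estimated model.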
We note that, at the time of this writing, we anticipate the output from this intervention will consist of two separate papers, corresponding to the two research goals outlined in the abstract. The first will focus on the “static” school choice problem, with experimental specifications split by points a) through d) outlined above. The second will focus on the dynamics of household decision making and include splits by points e) and f). In particular, we anticipate that the second project will include interactions between current grade and the option to continue. Given sample sizes, we anticipate these specifications will have low statistical power within the standard experimental framework but will still be informative in the model estimation step.
In addition to these two main steps, we will conduct a descriptive analysis of student preferences and beliefs.
We conducted the survey and intervention between January 2018 and March 2018, and followup surveys between March 2018 and May 2018. At the time of this writing (September 2018) we have knowledge of survey/intervention takeup in the treatment and control arms, but have not begun analysis of experimental data. We acknowledge that this statement is not verifiable. We note, however, that we outlined this same analysis in applications for grant funding at JPAL-NA submitted in February 2018, prior to data collection. We can provide this grant application upon request. Further, we think the outcomes, specifications, and sample splits outlined here represent the natural set of first-order tests for the effects of informational interventions in the school choice context.
With respect to sample collection, our goal for the initial survey was to conduct as many survey/treatment interventions as possible during the short window between the opening of the choice process in January 2018 and the submission of applications in March 2018. With respect to followup survey sample collection, our goal was to collect as many followup surveys as possible between the release of choice outcomes in March 2018 and the conclusion of our survey arrangement with NHPS at the end of the 2018 school year.
Experimental Design Details
Randomization Method
Household-level randomization within the sample of households with a student enrolled in PK3 or PK4, or with a student who had applied to PK3/PK4 in the past but was not placed.
Randomization Unit
Household
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
N=1981 households in sample universe
N=219 households completing either treatment arm 1 (survey only) or treatment arm 2 (survey+app)
N=222 households completing followup survey
Sample size: planned number of observations
N=1981 households in sample universe
N=219 households completing either treatment arm 1 (survey only) or treatment arm 2 (survey+app)
N=222 households completing followup survey
Sample size (or number of clusters) by treatment arms
Control: 671 (132 completing followup)
Treatment arm 1: 649 (47 completing followup)
Treatment arm 2: 661 (43 completing followup)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board, Princeton University
IRB Approval Date
2017-12-26
IRB Approval Number
0000007154

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials