Experimental Design
Our app-based “Apply Smart” program allows households to view admissions probabilities for hypothetical applications based on outcomes from past years. This project evaluates a randomized trial in which some respondents to home surveys of parents with children entering Kindergarten and PK receive access to the Apply Smart tool while others do not. In addition to simulated admissions chances for the current school year, households applying for PK seats also receive information about how their PK choices this year will affect their Kindergarten options next year. The PK choice has important implications for the following year’s Kindergarten choice because some PK programs (those that are part of PK-8 schools) give students automatic admission to the associated primary school track, while others (primarily Head Start programs that end at PK) do not.
We will estimate specifications that take these variables as outcomes. Right-hand-side variables will include indicators for treatment and a rich set of controls based on survey covariates, including elicited baseline preferences and beliefs, as well as student demographic variables. We plan to examine effect heterogeneity by a) belief errors at baseline, b) student socioeconomic status as measured by neighborhood poverty, c) elicited preference intensity, d) student neighborhood (as a measure of students’ options outside of the choice system), e) grade, and f) “need to switch,” defined by whether the PK in which a student is currently enrolled offers the option of continuing through to Kindergarten. In addition to these cuts of the data, we plan to conduct a machine learning analysis of effect heterogeneity following Chernozhukov et al. (2018).
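For concreteness, a minimal sketch of the heterogeneity specification we have in mind (with notation introduced here purely for illustration) is

Y_i = \alpha + \beta T_i + \lambda Z_i + \delta (T_i \times Z_i) + X_i'\gamma + \varepsilon_i,

where Y_i is an outcome for household i, T_i is a treatment indicator, Z_i is one of the heterogeneity dimensions a) through f) (for example, the baseline belief error), and X_i is the vector of survey and demographic controls. The coefficient \delta captures how treatment effects vary along the dimension Z_i.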
We plan to estimate two types of specifications. The first will compare the two treatment arms to the control arm within the full sample universe. The second will compare the two treatment arms (survey vs. survey + app) within the sample of individuals who complete surveys.
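As a sketch (again with illustrative notation), the first type of specification takes the form

Y_i = \alpha + \beta_1 S_i + \beta_2 A_i + X_i'\gamma + \varepsilon_i

estimated on the full sample universe, where S_i and A_i indicate assignment to the survey-only and survey + app arms and the control arm is the omitted category. The second type takes the form

Y_i = \alpha + \beta A_i + X_i'\gamma + \varepsilon_i

estimated on the sample of survey completers, so that \beta compares the survey + app arm to the survey-only arm among households that completed the survey.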
In the second step of this analysis we will use our experimental results as inputs to a structural model of school choice, following Kapor, Neilson, and Zimmerman (2018). The details of the model will depend on the themes that emerge from experimental analysis in part one. The overarching idea is to incorporate the effects of the intervention on belief formation, choice participation, and placement outcomes into estimation, and then to use counterfactual simulations to learn about the equilibrium effects of scaling up the policy. We will evaluate these counterfactuals using measures of welfare (typically distance-metric welfare in this literature) as well as school attributes such as test score performance for past students.
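Although the exact specification will be determined after the experimental analysis, a stylized version of the choice problem in this class of models has household i derive utility

u_{ij} = \delta_j - \tau d_{ij} + \varepsilon_{ij}

from placement at school j at distance d_{ij}, and submit an application portfolio a to maximize subjective expected utility

EU_i(a) = \sum_j \tilde{p}_{ij}(a) \, u_{ij},

where \tilde{p}_{ij}(a) is the household’s subjective probability of placement at school j given portfolio a. The intervention enters estimation by shifting beliefs \tilde{p}_{ij} toward admissions probabilities computed from past outcomes and by changing participation in the choice process. Scaling utility differences by the distance coefficient \tau yields the distance-metric welfare measure referenced above.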
We note that, at the time of this writing, we anticipate the output from this intervention will consist of two separate papers, corresponding to the two research goals outlined in the motivation section. The first will focus on the “static” school choice problem and on experimental specifications split by points a) through d) above. The second will focus on the dynamics of household decision-making and will include splits by points e) and f). In particular, we anticipate that the second project will include interactions between current grade and the option to continue. Given sample sizes, we anticipate these specifications will have low statistical power within the standard experimental framework but will still provide useful inputs to the model estimation step.
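As one concrete (and purely illustrative) way to capture this dynamic margin in the model, the value of a PK program j to household i can be written as

V_i(j) = u_{ij} + \rho \, E\left[ \max_{k \in K(j)} u_{ik} \right],

where K(j) is the Kindergarten choice set implied by attending PK program j, including a guaranteed seat at the associated school when j is part of a PK-8 program, and \rho is a discount factor. The “need to switch” split in f) corresponds to whether the student’s current PK program provides such a guaranteed continuation option.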
In addition to these two main steps, we will conduct a descriptive analysis of student preferences and beliefs.
We conducted the survey and intervention between January 2018 and March 2018. We conducted follow-up surveys between March 2018 and May 2018. At the time of this writing (September 2018), we have knowledge of survey/intervention take-up in the treatment and control arms but have not begun analysis of experimental data. We acknowledge that this statement is not verifiable. We note, however, that we outlined this same analysis in applications for grant funding at JPAL-NA submitted in February 2018, prior to data collection. We can provide this grant application upon request. Further, we think the outcomes, specifications, and sample splits outlined here represent the natural set of first-order tests for the effect of informational interventions in the school choice context. With respect to sample collection, our goal for the initial survey was to conduct as many survey/treatment interventions as possible during the short window between the opening of the choice process in January 2018 and the submission of applications in March 2018. With respect to follow-up survey sample collection, our goal was to collect as many follow-up surveys as possible between the release of choice outcomes in March 2018 and the conclusion of our survey arrangement with NHPS at the end of the 2018 school year.