Primary Outcomes (explanation)
Understandability. We will assess understandability using a series of comprehension questions about the schools and their attributes. These will include items that ask participants to select the schools that are highest or lowest on specific criteria (e.g., the school with the lowest suspension rate) and to select schools that meet more than one criterion (e.g., schools within 2 miles of home that have at least 50 laptops or tablets per 100 students). The task will cover the major information domains, including convenience of the location, academic performance, school safety, and resources.
Effects on choice. For the effects on choice outcome, we will use a ranking exercise to measure the extent to which parents’ school choices align with their initial preferences as stated in the baseline survey. In other words, this outcome reflects how closely a parent’s ranking of a set of schools matches what one would predict based on (1) their stated preferences in terms of which school attributes they value the most; and (2) the actual values of those attributes. For example, for parents who believe location is by far the most important attribute for a school, the effects on choice outcome would be more positive if the closest school among the available schools on the website was their highest-ranked option. During the baseline survey, parents will rate the importance of each quality, including academics, safety, convenience of the location, and resources, on a slider or thermometer ranging from 0 to 100. After viewing the presentations, parents will then select the schools that they would seriously consider for their child from a list of all those presented. They will also rank their top five choices from one (their first choice school) to five (their fifth choice).
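One way to operationalize this alignment, sketched below, is to compare the parent's submitted ranking with the ranking one would predict from their stated preference weights, for example via a rank correlation. The protocol does not specify the exact statistic; the variable names, the weighted-sum prediction, and the use of Spearman's rho here are all illustrative assumptions.

```python
# Hypothetical sketch of the effects-on-choice alignment measure.
# Attribute names, weights, and the choice of Spearman's rho are assumptions.

def predicted_ranking(weights, schools):
    """Rank schools by the preference-weighted sum of their standardized
    attribute values; rank 1 = best predicted match for this parent."""
    scores = {
        name: sum(weights[attr] * value for attr, value in attrs.items())
        for name, attrs in schools.items()
    }
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: rank for rank, name in enumerate(ordered, start=1)}

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation between two untied rankings of the same
    schools, computed from squared rank differences."""
    n = len(rank_a)
    d2 = sum((rank_a[s] - rank_b[s]) ** 2 for s in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Baseline importance sliders (0-100), illustrative values.
weights = {"academics": 90, "safety": 60, "distance": 30, "resources": 20}

# Standardized attribute values for the schools shown (higher = better;
# distance would be reverse-coded so that closer schools score higher).
schools = {
    "A": {"academics": 0.9, "safety": 0.5, "distance": 0.2, "resources": 0.4},
    "B": {"academics": 0.4, "safety": 0.8, "distance": 0.9, "resources": 0.6},
    "C": {"academics": 0.7, "safety": 0.6, "distance": 0.5, "resources": 0.8},
}

predicted = predicted_ranking(weights, schools)
actual = {"A": 1, "B": 3, "C": 2}  # the parent's submitted ranking
alignment = spearman_rho(predicted, actual)  # higher = closer alignment
```

Under this sketch, a parent whose submitted ranking exactly matches the preference-predicted ranking would score 1, and the treatment-arm comparison would be over the distribution of these alignment scores.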
An important challenge in measuring the alignment of rankings to preferences is that parents may not state their preferences accurately in the baseline survey. This could occur because of social desirability bias (for example, a tendency to overstate the importance of academic quality relative to other attributes) or measurement error arising from the difficulty of identifying one's preferences in a survey module. Our primary response to this challenge is that the study's random assignment design should ensure that the amount of bias and measurement error is consistent across all of the treatment arms. As a result, these issues should not bias the study's impact estimates, as long as there is variation across treatment arms in the extent to which users align their rankings with their expressed baseline preferences. In addition, the baseline preference module will include "distractor" attributes beyond those that will be displayed in the school choice website. Making the preference survey more complex in this way (with attributes that will not be used in the analysis) may reduce bias in the responses: with a longer module, it may be easier to answer honestly than to alter responses to appear more socially desirable.
Careful examination of the rankings parents assign to schools will also provide a means of measuring the degree to which information presentation techniques nudge choosers toward particular types of choices, such as a preference for stronger academic performance. For example, we will ask users to rank the top three schools that they would choose for their own child from the set of schools presented. In this scenario, some families might prefer schools that are closer to their home (based on information about distance), while others might prioritize schools with higher academic achievement, availability of specific programs, or particular demographic characteristics. The study's factorial design will make it possible to examine the degree to which different ways of presenting and sorting information (on academic achievement, for example) lead families to weight this information more heavily in their rankings, relative to the priorities specified at the beginning of the survey.
Usability. To assess usability (the extent to which parents find the information easy to use), we began with the System Usability Scale, or SUS (Brooke 1986), a reliable and validated measure that has been used in over 1,300 articles and publications. The SUS defines usability in terms of three features: (1) effectiveness (the user's ability to complete tasks using the system); (2) efficiency (the level of resources required); and (3) satisfaction. The scale consists of 10 items rated on a five-point scale from strongly disagree to strongly agree, and it is scored as a single composite. Example items include "I found the system unnecessarily complex" and "I would imagine that most people would learn to use this system very quickly." We will supplement the SUS with additional usability items tailored to school choice platforms: specifically, participants will rate how easily they were able to use the information in each domain on the website, including school distance, academics, school safety, and resources. Finally, we will include a second measure of usability based on a direct survey question asking parents whether they would recommend the website to a friend who is also in the process of selecting schools to apply to.
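For reference, the SUS composite is conventionally scored by recoding each item and rescaling the sum to a 0-100 range; the item responses below are illustrative only:

```python
def sus_score(responses):
    """Standard SUS composite (0-100) from ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The summed contributions are multiplied by 2.5 to reach a 0-100 scale.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Illustrative responses to items 1-10 (1 = strongly disagree, 5 = strongly agree).
example = [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]
score = sus_score(example)
```

Note that the composite is not a percentage: a score of 85 means the recoded item contributions summed to 34 out of a possible 40, which is conventionally interpreted against published SUS benchmarks rather than as "85% usable."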