
How Information Affects Parents' Choice of Schools: A Factorial Experiment

Last registered on March 22, 2017

Pre-Trial

Trial Information

General Information

Title
How Information Affects Parents' Choice of Schools: A Factorial Experiment
RCT ID
AEARCTR-0001190
Initial registration date
April 26, 2016

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 26, 2016, 10:39 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
March 22, 2017, 5:27 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
Innovations for Poverty Action

Other Primary Investigator(s)

PI Affiliation
Mathematica Policy Research
PI Affiliation
Tulane University

Additional Trial Information

Status
Ongoing
Start date
2016-08-01
End date
2017-09-29
Secondary IDs
Abstract
School choice can only be an effective policy if choosers can process large amounts of information about schools to make effective choices. This study seeks to identify the impacts of different strategies of presenting consumers with information about schools on the choosers' ability to understand and use the information. We categorize school information into four domains: convenience (primarily distance from home), academics (primarily captured by academic proficiency and growth measures), safety (captured by indicators such as school suspension rates and parent perceptions of safety), and resources (captured by number of laptops or devices per student).

The study is an online experiment with 72 treatment arms arranged in a 3 x 3 x 2 x 2 x 2 factorial design. The study will ask respondents, who are screened to be low-income parents of school-aged children, to rank their top 5 among 16 hypothetical schools with detailed profiles. We will experimentally vary:
1. the format (numbers, numbers + graphs, or numbers + icons),
2. the source of information (objective indicators or objective + subjective indicators),
3. the presence or absence of a reference point, namely the district-level mean value for each indicator,
4. the number of attributes per domain and disclosure method (one attribute per information domain, multiple attributes per information domain, or multiple attributes with progressive disclosure via user-initiated click-through to see beyond the first attribute per domain), and
5. the default sort order (distance or academic rating).

The experiment is conducted in one sitting. Participants complete an online baseline survey, are then randomized into one of the 72 treatment arms, and are given an endline survey that includes tasks such as ranking the schools and answering factual questions about the schools described in the profiles. Participants cannot go back and change their baseline responses, but while they are completing the endline tasks they may toggle between the survey instrument and the school profile display. The study will record response times as well as the responses to the survey items themselves.

The study will allow the researchers to estimate the impact of each of these factors on the way that parents actually rank schools (consistency with stated preferences, and whether the factors push parents toward favoring one domain over another), as well as their ability to comprehend the information and their overall attitudes toward the information (such as whether they found it useful).

The information will be used to inform a guide for school districts and other entities seeking to provide choice information to parents via online tools.

Registration Citation

Citation
Glazerman, Steven, Ira Nichols-Barrer and Jon Valant. 2017. "How Information Affects Parents' Choice of Schools: A Factorial Experiment." AEA RCT Registry. March 22. https://doi.org/10.1257/rct.1190-2.0
Former Citation
Glazerman, Steven, Ira Nichols-Barrer and Jon Valant. 2017. "How Information Affects Parents' Choice of Schools: A Factorial Experiment." AEA RCT Registry. March 22. https://www.socialscienceregistry.org/trials/1190/history/15310
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
When consumers shop for schools, they seek out several sources of information. One important source is an official website with school profiles. The interventions under study are variations on these official online information displays. The basic display will be a map centered on the user's hypothetical home address, showing 16 nearby schools as icons. Below the map is a list of the same 16 schools with a one-line profile for each, showing basic indicators. In some variations (see below), there is additional information on each school.

The treatment factors that vary are as follows (a sketch enumerating the resulting 72 factor combinations appears after the list):
1. the format (numbers, numbers + graphs, or numbers + icons),
2. the source of information (objective indicators or objective + subjective indicators),
3. the presence or absence of a reference point, namely the district-level mean value for each indicator,
4. the number of attributes per domain and disclosure method (one attribute per information domain, multiple attributes per information domain, or multiple attributes with progressive disclosure via user-initiated click-through to see beyond the first attribute per domain), and
5. the default sort order (distance or academic rating).
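The five factors yield 3 x 3 x 2 x 2 x 2 = 72 distinct display variants. A minimal sketch of how the cell structure can be enumerated and assigned at the individual level, consistent with the Randomization Method field below; the factor labels here are paraphrases of this registration, not the study's actual code:

```python
import itertools
import random

# Factor levels paraphrased from the registration: 3 x 3 x 2 x 2 x 2 = 72 cells.
FACTORS = {
    "format": ["numbers", "numbers+graphs", "numbers+icons"],
    "disclosure": ["one_attribute", "multi_attribute", "multi_click_through"],
    "source": ["objective", "objective+subjective"],
    "reference_point": ["none", "district_mean"],
    "default_sort": ["distance", "academic_rating"],
}

# All 72 factor combinations (treatment arms).
ARMS = list(itertools.product(*FACTORS.values()))
assert len(ARMS) == 72

def assign_arm(rng: random.Random) -> dict:
    """Assign one respondent to a uniformly chosen treatment arm."""
    arm = rng.choice(ARMS)
    return dict(zip(FACTORS.keys(), arm))

rng = random.Random(0)  # seeded for reproducibility
print(assign_arm(rng))
```

With 3,240 respondents assigned uniformly, each cell receives 45 respondents in expectation, matching the planned arm sizes reported under Experiment Characteristics.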
Intervention Start Date
2016-08-01
Intervention End Date
2016-08-31

Primary Outcomes

Primary Outcomes (end points)
There are three primary outcomes, each of which can be measured with multiple indicators:
(1) understandability, or how comprehensible parents found the information presented;
(2) usability, or how confident parents felt about using the information and how easy it was to use; and
(3) effects on choices, including the weight placed on various school attributes and the extent to which ranking decisions align with parents' baseline preferences.
Primary Outcomes (explanation)
Understandability. We will assess understandability using a variety of comprehension questions about the schools and their attributes. These will include items that ask participants to select schools that are highest or lowest in terms of specific criteria (e.g., the school with the lowest suspension rate), and to select schools that meet more than one criterion (e.g., schools within 2 miles of home that have at least 50 laptops or tablets per 100 students). This task will cover the major information domains, such as convenience of the location, academic performance, school safety, and resources.

Effects on choice. For the effects on choice outcome, we will use a ranking exercise to measure the extent to which parents’ school choices align with their initial preferences as stated in the baseline survey. In other words, this outcome reflects how closely a parent’s ranking of a set of schools matches what one would predict based on (1) their stated preferences in terms of which school attributes they value the most; and (2) the actual values of those attributes. For example, for parents who believe location is by far the most important attribute for a school, the effects on choice outcome would be more positive if the closest school among the available schools on the website was their highest-ranked option. During the baseline survey, parents will rate the importance of each quality, including academics, safety, convenience of the location, and resources, on a slider or thermometer ranging from 0 to 100. After viewing the presentations, parents will then select the schools that they would seriously consider for their child from a list of all those presented. They will also rank their top five choices from one (their first choice school) to five (their fifth choice).
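The registration does not pin down the exact concordance statistic. As one hypothetical implementation, the stated 0-100 importance ratings could be used as weights to score each school, with alignment measured as a rank correlation between the predicted ordering and the parent's actual top-five ranking. All function and variable names below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

def concordance(weights, attributes, actual_ranking):
    """
    weights: dict of domain -> stated importance (0-100 slider from baseline).
    attributes: dict of school -> dict of domain -> standardized value,
                signed so that higher is always "better" (e.g., negative distance).
    actual_ranking: the five schools the parent ranked, first choice first.
    Returns the Spearman correlation between predicted and actual rank order.
    """
    # Predicted utility of each ranked school under the stated weights.
    predicted = [
        sum(weights[d] * attributes[s][d] for d in weights)
        for s in actual_ranking
    ]
    actual = np.arange(1, len(actual_ranking) + 1)  # 1 = first choice
    # Higher predicted utility should match a better (lower) rank: flip the sign.
    rho, _ = spearmanr(-np.asarray(predicted), actual)
    return rho

weights = {"academics": 80, "safety": 60, "convenience": 40, "resources": 20}
attributes = {
    "A": {"academics": 1.2, "safety": 0.3, "convenience": -0.5, "resources": 0.1},
    "B": {"academics": 0.4, "safety": 1.0, "convenience": 0.8, "resources": -0.2},
    "C": {"academics": -0.3, "safety": -0.1, "convenience": 1.5, "resources": 0.9},
    "D": {"academics": 0.9, "safety": 0.5, "convenience": 0.0, "resources": 0.4},
    "E": {"academics": -1.0, "safety": 0.2, "convenience": 0.6, "resources": 1.1},
}
print(concordance(weights, attributes, ["A", "D", "B", "C", "E"]))
```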

An important challenge in measuring the alignment of rankings to preferences is that parents may not state their preferences accurately in the baseline survey. This could occur due to social desirability bias (for example, a bias toward overstating a preference for academic quality over other attributes) or measurement error related to the difficulty of identifying one's preferences correctly in a survey module. Our primary response to this challenge is that the study's random assignment design should ensure that the amount of bias and measurement error is consistent across all of the treatment arms in the study. As a result, these issues should not bias the study's impact estimates, as long as there is variation across treatment arms in the extent to which users align their rankings with their expressed baseline preferences. In addition, we will include "distractor" attributes in the baseline preference module beyond those that will be displayed in the school choice website. Making the preference survey more complex in this way (with irrelevant attributes that will not be used in the analysis) may help reduce bias in the responses, since a longer survey module may make it easier to answer honestly than to alter responses in ways that seem more socially desirable.

Careful examination of the rankings parents assign to schools will also provide a means of measuring the degree to which information presentation techniques nudge choosers toward particular types of choices, such as a preference for stronger academic performance. For example, we will ask users to rank the top five schools that they would choose for their own child from the set of schools presented. In this scenario, some families might prefer schools that are closer to their home (based on information about distance), while others might prioritize schools with higher academic achievement, availability of specific programs, or particular demographic characteristics. The study's factorial design will make it possible to examine the degree to which different ways of presenting and sorting information on, for example, academic achievement lead families to weight this information more heavily in their rankings, relative to the priorities specified at the beginning of the survey.

Usability. To assess usability (the extent to which parents find the information easy to use), we began with the System Usability Scale, or SUS (Brooke 1986), a reliable and validated measure that has been used in over 1,300 articles and publications. The SUS defines usability in terms of three features: (1) effectiveness (the user's ability to complete tasks using the system); (2) efficiency (the level of resources required); and (3) satisfaction. The scale consists of 10 items rated on a five-point scale, from strongly disagree to strongly agree, and it is scored as a single composite. Example items include "I found the system unnecessarily complex" and "I would imagine that most people would learn to use this system very quickly." We will supplement the SUS items with additional usability items tailored to school choice platforms. Specifically, we will ask participants to rate how easily they were able to use information related to each domain on the website, including school distance, academics, school safety, and resources. In addition to ease of use, we will include a second measure of usability that asks parents whether they would recommend the website to a friend who is also in the process of selecting schools to apply to. This outcome will be based on a direct survey question asking about such recommendations.
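For reference, the standard SUS composite converts the ten 1-5 responses to a 0-100 scale: each odd (positively worded) item contributes its response minus one, each even (negatively worded) item contributes five minus its response, and the sum is multiplied by 2.5. A minimal sketch (the study's supplemental items would be scored separately):

```python
def sus_score(responses):
    """
    responses: list of 10 integers in 1..5 (strongly disagree .. strongly agree),
    in the standard SUS item order (odd items positively worded,
    even items negatively worded). Returns the composite on the 0-100 scale.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```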

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The design is a factorial experiment that we will analyze using a hierarchical Bayesian model, estimated with Stan software.
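The registration names Stan but does not publish the model. Purely as an illustration of the partial-pooling idea (elaborated under the minimum detectable effect discussion below), the following sketch fits a hierarchical model in which each of the 72 arm means is shrunk toward a grand mean. The actual analysis would presumably decompose arms into factor main effects and interactions; every name here (file names, priors, cmdstanpy usage, simulated data) is an assumption:

```python
from pathlib import Path
import numpy as np
from cmdstanpy import CmdStanModel

# Illustrative hierarchical model: each of the 72 arms gets its own mean,
# partially pooled toward a grand mean, so precision is shared across arms.
STAN_MODEL = """
data {
  int<lower=1> N;                      // respondents
  int<lower=1> J;                      // treatment arms (72)
  array[N] int<lower=1, upper=J> arm;  // arm assignment
  vector[N] y;                         // standardized outcome
}
parameters {
  real mu;                             // grand mean
  vector[J] theta_raw;                 // non-centered arm deviations
  real<lower=0> tau;                   // between-arm sd (controls shrinkage)
  real<lower=0> sigma;                 // residual sd
}
transformed parameters {
  vector[J] theta = mu + tau * theta_raw;  // arm means
}
model {
  mu ~ normal(0, 1);
  theta_raw ~ std_normal();
  tau ~ normal(0, 0.5);
  sigma ~ normal(0, 1);
  y ~ normal(theta[arm], sigma);
}
"""

Path("arms.stan").write_text(STAN_MODEL)

# Fake data with the registration's dimensions: 72 arms x 45 respondents.
rng = np.random.default_rng(0)
J, n_per = 72, 45
arm = np.repeat(np.arange(1, J + 1), n_per)
y = rng.normal(0.0, 1.0, size=arm.size)

model = CmdStanModel(stan_file="arms.stan")
fit = model.sample(data={"N": arm.size, "J": J, "arm": arm, "y": y})
print(fit.summary().loc["tau"])
```

The between-arm standard deviation tau governs how much precision is shared: as tau shrinks, individual arm estimates borrow more strength from the pooled mean.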
Experimental Design Details
Randomization Method
Randomization is automated as part of the system that delivers the online survey.
Randomization Unit
The study randomizes individual respondents.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable.
Sample size: planned number of observations
3,240 respondents
Sample size (or number of clusters) by treatment arms
45 respondents in each treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Minimum detectable effect size is 0.17 standard deviations. The natural units will be values on survey questions, or variables constructed from survey questions, such as response time to complete factual questions or concordance of preferences implied by rank-ordering with stated preferences from the baseline survey.

If our analysis plan used the standard frequentist framework for hypothesis testing, a study design seeking to compare this many factor combinations would face a serious lack of statistical power, arising from the small number of respondents in each individual treatment arm. This issue would be compounded by a multiple comparisons problem: the number of tested contrasts would be likely to produce a substantial number of false positives even if there were no true effects. The most commonly used corrections for multiple comparisons, such as the Benjamini-Hochberg method, would require a substantially larger sample and would therefore increase the study's costs.

Instead, our approach addresses the multiple comparisons issue by adopting a hierarchical Bayesian approach for analyzing the data. The key difference between a Bayesian analysis and a classical analysis is that while the latter creates a long list of contrasts, tests each one separately, and adjusts for multiple comparisons ad hoc, the former estimates all the treatment effects at the same time without requiring them to be independent. In so doing, the statistical precision attained in estimating groups of factor combinations can be "shared" with individual treatment arms that have a common factor. This turns out to be far more efficient than a classical design, enabling us to test 72 factor combinations with a sample that would otherwise have sufficient power to measure just 16 factor combinations.
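For intuition on the power argument above, a conventional frequentist minimum detectable effect for a two-sided, two-sample comparison can be sketched as follows; 80 percent power and a 5 percent test are assumptions here, not parameters taken from the registration, so this does not reproduce the 0.17 figure exactly:

```python
from scipy.stats import norm

def mde(n1, n2, alpha=0.05, power=0.80, sd=1.0):
    """Minimum detectable effect for a two-sided two-sample comparison,
    in the same units as sd (sd=1.0 gives effects in standard deviations)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sd * ((1 / n1 + 1 / n2) ** 0.5)

# A single arm of 45 vs. another arm of 45 is badly underpowered ...
print(round(mde(45, 45), 2))      # ~0.59 sd
# ... while pooling arms that share a factor level (e.g., 1,620 vs. 1,620
# for a two-level factor in this design) is far more precise.
print(round(mde(1620, 1620), 2))  # ~0.10 sd
```

The per-arm contrast is far above the registered 0.17 standard deviations, while pooling across arms that share a factor level recovers precision, which is the intuition behind the hierarchical approach described above.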
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials