Nudges for high school applications

Last registered on October 31, 2025

Pre-Trial

Trial Information

General Information

Title
Nudges for high school applications
RCT ID
AEARCTR-0017146
Initial registration date
October 30, 2025


First published
October 31, 2025, 9:25 AM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Cornell Tech

Other Primary Investigator(s)

PI Affiliation
Cornell University

Additional Trial Information

Status
In development
Start date
2025-10-30
End date
2026-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In this experiment, we design and deploy an informational intervention for high school applicants, providing applicant families with a set of geographically nearby, high-performing programs.
External Link(s)

Registration Citation

Citation
Chiang, Erica and Nikhil Garg. 2025. "Nudges for high school applications." AEA RCT Registry. October 31. https://doi.org/10.1257/rct.17146-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We send emails to applicants with lists of programs to consider applying to.
Intervention Start Date
2025-10-30
Intervention End Date
2025-12-13

Primary Outcomes

Primary Outcomes (end points)
Our primary hypothesis is that the nudges will increase the rate at which applicants apply to (i.e., include in their ranked list) the programs to which they are nudged, relative to the control group. In particular:
For each treatment applicant, we have the set of programs to which they were nudged.
For each control applicant, we have the set of programs to which they would have been nudged had they been in the treatment group.

Then, our primary outcome is:
Y = Whether an applicant applied to (listed somewhere in their ranked list) at least 1 program to which they were nudged.

We will measure this using a logistic regression in which treatment assignment enters as an indicator variable. We will cluster standard errors by middle school, and include analyses with controls corresponding to the stratification variables.
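As a rough sketch, the regression described above could be run with statsmodels as follows. The data are simulated and all column names are illustrative, not the study's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated applicant-level data; column names are illustrative only.
rng = np.random.default_rng(0)
n = 1300
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),          # treatment indicator
    "middle_school": rng.integers(0, 220, n),  # cluster identifier
    "borough": rng.integers(0, 5, n),          # stratification variable
    "size_category": rng.integers(0, 2, n),    # stratification variable
})
# Outcome: applied to at least one nudged program (simulated).
df["applied_to_nudged"] = rng.binomial(1, 0.10 + 0.07 * df["treated"])

# Logistic regression with a treatment indicator, stratification
# variables as controls, and SEs clustered by middle school.
model = smf.logit(
    "applied_to_nudged ~ treated + C(borough) + C(size_category)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["middle_school"]}, disp=0)
print(model.summary().tables[1])
```

The same specification with the match outcome as the dependent variable would cover the co-primary outcome.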

The co-primary outcome is:
Y = Whether an applicant matched to (i.e., was assigned to) at least 1 program to which they were nudged.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Alternate specifications of the primary outcome
Alternate outcomes:
Y = whether an applicant applied to a nudged program first.
Y = rank position of the first nudged program listed in the profile.

Scale-up analyses for match outcomes
Given the outcomes in application behavior (whether treatment applicants list nudged programs, and where they list them, compared to the control group), we will simulate:
Match outcomes under scale-ups of the treatment, for example, if everyone in the target group were in control, or everyone were in treatment.
Match outcomes under stronger interventions that increase takeup of applications to nudged programs.
Relatedly, in all cases, congestion: the rate of acceptance among applicants with high offer-likelihood estimates at each high school. We will estimate congestion in the scale-up analyses, including under corresponding alternative choices of per-program nudge caps.
This will help estimate match outcomes in worlds where the intervention is scaled up.

Email click response outcomes
We have individual applicant click data on the emails (a separate URL for each treatment applicant-nudged school pair). We will analyze this click data to measure engagement with the nudges. An applicant is counted as having clicked a specific link if the link is clicked at least once before the application deadline.

We will use the following measurements on click behavior, all measured as clicks before the application deadline:
Did the applicant click on at least one link
Number of clicks by the applicant
Fraction of nudged programs on which the applicant clicked.

For each measurement, we will report:
Overall click rate
Heterogeneous click behavior by free/reduced-price lunch status, middle school grades/application competitiveness, and other demographic characteristics, as well as by specific nudged high school
While heterogeneous application and match outcomes are underpowered for the expected engagement levels, these initial engagement outcomes may provide information toward heterogeneous behavior and engagement.
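As an illustration of how the three click measurements above can be computed from link-level click logs (the data frame and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical link-level click log: one row per (applicant, nudged
# program) link, with the count of clicks before the application deadline.
clicks = pd.DataFrame({
    "applicant_id": [1, 1, 1, 2, 2, 3],
    "program_id":   ["A", "B", "C", "A", "D", "B"],
    "n_clicks":     [2, 0, 1, 0, 0, 3],
})

# The three per-applicant measurements: any click, total clicks,
# and fraction of nudged programs clicked.
per_applicant = clicks.groupby("applicant_id").agg(
    any_click=("n_clicks", lambda s: int((s > 0).any())),
    total_clicks=("n_clicks", "sum"),
    frac_programs_clicked=("n_clicks", lambda s: (s > 0).mean()),
)
print(per_applicant)
```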

Survey response outcomes
[Details to be added after the survey is designed, but before it is sent.]
For example, satisfaction with match outcome; knowledge/awareness of nudged programs

We will not adjust for multiple comparisons across secondary outcomes. Instead, we will clearly label primary (confirmatory) vs. secondary (pre-specified) vs. exploratory analyses.

Exploratory analyses
The same as our primary outcomes, but calculating the Local Average Treatment Effect (LATE/CACE), where a complier is defined as someone who clicks on at least one school link in the email they receive.
Heterogeneous treatment effects by: FRL status, geography, middle school grades/application competitiveness, other demographic characteristics.
Quantitative match/enrollment metrics: Graduation rate/performance/impact/safety of assigned/enrolled school
Within-applicant comparison of saved school lists over time (from applicant data dumps before and after nudges)
Outcomes at the [high school or middle school] level
Increase in matches/enrollment from target middle schools to nudged high schools, as fraction of applicants eligible for nudges.
Comparing middle school outcomes from previous year(s)
Long-term metrics [in years after experiment]
9th, 10th, 11th, 12th grade State test scores
Graduation rate
College enrollment rate
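The complier-based LATE in the first exploratory analysis can be sketched as a simple Wald/IV estimator, treating assignment as the instrument and clicking at least one link as compliance; the data below are simulated for illustration only:

```python
import numpy as np

# Simulated data: z is treatment assignment (the instrument), "clicked"
# marks compliers (clicked at least one link, only possible when assigned
# to treatment), y is the application outcome.
rng = np.random.default_rng(1)
n = 1000
z = rng.integers(0, 2, n)
clicked = z * rng.binomial(1, 0.3, n)
y = rng.binomial(1, 0.10 + 0.15 * clicked)

# Wald estimator: intention-to-treat effect on the outcome divided by
# the first-stage effect of assignment on compliance (one-sided
# noncompliance, so controls cannot click).
itt = y[z == 1].mean() - y[z == 0].mean()
first_stage = clicked[z == 1].mean() - clicked[z == 0].mean()
late = itt / first_stage
print(round(late, 3))
```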
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We identify a set of target middle schools, and nudge applicants from those middle schools to a set of high-performing high schools nearby.
Experimental Design Details
Not available
Randomization Method
Randomization will be done privately in the NYCPS office on a computer, using Python code.
Randomization Unit
Among target middle schools, the share of applicants receiving treatment will vary: 0%, 50%, or 100% of applicants treated. Middle schools will be assigned these treatment intensities at random, stratified by geography (borough) and by the number of nudge-eligible applicants in the application cycle for fall 2025 (2 size categories). The assignment ratios will be approximately 50% of schools at 0% treatment intensity and 25% each at 50% and 100%, so as to achieve a treatment group of ~500 applicants out of ~1300 students in the overall experimental population.
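A minimal sketch of this stratified school-level randomization, assuming a hypothetical frame of middle schools with borough and size-category labels (identifiers and stratum values are invented for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2025)

# Hypothetical frame of middle schools; borough and size category
# define the randomization strata.
schools = pd.DataFrame({
    "school_id": range(220),
    "borough": rng.integers(0, 5, 220),
    "size_category": rng.integers(0, 2, 220),
})

def assign_intensity(group, rng):
    """Assign treatment intensities within one stratum: roughly 50% of
    schools at 0% intensity and 25% each at 50% and 100%."""
    n = len(group)
    n0 = round(0.5 * n)
    n50 = round(0.25 * n)
    intensities = [0.0] * n0 + [0.5] * n50 + [1.0] * (n - n0 - n50)
    return pd.Series(rng.permutation(intensities), index=group.index)

schools["treat_intensity"] = (
    schools.groupby(["borough", "size_category"], group_keys=False)["school_id"]
    .apply(lambda g: assign_intensity(g, rng))
)
print(schools["treat_intensity"].value_counts())
```

Permuting a fixed list of intensities within each stratum guarantees the target ratios hold (up to rounding) within every borough-by-size cell, not just on average.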
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Approximately 220 middle schools
Sample size: planned number of observations
Approximately 1300 applicants
Sample size (or number of clusters) by treatment arms
~105 middle schools as control, ~55 at 50% treatment, ~55 at 100% treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For the primary outcome of application behavior change (takeup): 80% power to detect a takeup rate about 7% higher than control.
IRB

Institutional Review Boards (IRBs)

IRB Name
New York City Department of Education IRB
IRB Approval Date
2025-09-29
IRB Approval Number
N/A
IRB Name
Cornell University IRB
IRB Approval Date
2025-08-06
IRB Approval Number
IRB0149840