A National Study of Using Nudges for FAFSA Renewal

Last registered on June 06, 2023

Pre-Trial

Trial Information

General Information

Title
A National Study of Using Nudges for FAFSA Renewal
RCT ID
AEARCTR-0011295
Initial registration date
May 31, 2023


First published
June 06, 2023, 4:07 PM EDT


Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
University of Chicago
PI Affiliation
Federal Student Aid

Additional Trial Information

Status
Completed
Start date
2023-04-11
End date
2023-04-14
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
Every year, Federal Student Aid sends email nudges to enrolled college students to renew their financial aid. This study seeks to evaluate the effects of these nudges on FAFSA renewal, financial aid receipt, and subsequent college enrollment.
External Link(s)

Registration Citation

Citation
Anisfeld, Ari, Salman Khan and Dennis Kramer. 2023. "A National Study of Using Nudges for FAFSA Renewal." AEA RCT Registry. June 06. https://doi.org/10.1257/rct.11295-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2023-04-11
Intervention End Date
2023-04-14

Primary Outcomes

Primary Outcomes (end points)
FAFSA renewal, college re-enrollment, and financial aid receipt.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Hidden
Experimental Design Details
There are four groups: one control group and three treatment groups. The three treatment groups received slightly different messages but are considered similar for our purposes.
Randomization Method
Block randomization
Randomization Unit
We block randomize on the following observables (an illustrative sketch of the within-block assignment follows the list):
- Gender
- Pell
- Independent status
- Coarse school type (Public/non-profit 4-year, Public/non-profit 2-year, for-profit, NA)
- Most recent enrollment status (Full-time, Part-time, Not enrolled, Withdrawn/other, Graduated)
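A minimal sketch of the within-block assignment, assuming a pandas DataFrame with one row per student. The column names, arm shares, and helper function are illustrative assumptions, not taken from the registration; the ~2% control share is implied by the registered arm sizes.

import numpy as np
import pandas as pd

BLOCK_COLS = ["gender", "pell", "independent", "school_type", "enrollment_status"]
ARMS = ["C", "T1", "T2", "T3"]
# Arm shares implied by the registered sample sizes: ~2% control,
# the remainder split evenly across the three treatment arms (assumption).
SHARES = np.array([0.02, 0.98 / 3, 0.98 / 3, 0.98 / 3])

def assign_within_block(block: pd.DataFrame, rng: np.random.Generator) -> pd.Series:
    """Randomly assign every row of one block to an arm at the target shares."""
    n = len(block)
    counts = np.floor(SHARES * n).astype(int)
    counts[-1] = n - counts[:-1].sum()   # remainder goes to the last arm
    arms = np.repeat(ARMS, counts)
    rng.shuffle(arms)
    return pd.Series(arms, index=block.index)

rng = np.random.default_rng(0)
# Illustrative data; in the study each row would be one student in the sample frame.
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=100_000),
    "pell": rng.choice([True, False], size=100_000),
    "independent": rng.choice([True, False], size=100_000),
    "school_type": rng.choice(
        ["Public/non-profit 4-year", "Public/non-profit 2-year", "for-profit", "NA"], size=100_000),
    "enrollment_status": rng.choice(
        ["Full-time", "Part-time", "Not enrolled", "Withdrawn/other", "Graduated"], size=100_000),
})
df["arm"] = (
    df.groupby(BLOCK_COLS, dropna=False, group_keys=False)
      .apply(lambda b: assign_within_block(b, rng))
)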
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
9,914,843
Sample size (or number of clusters) by treatment arms
Control (C): 198,275
T1: 3,238,800
T2: 3,238,893
T3: 3,238,875
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We conduct power calculations using Stata's power command for two-proportion tests. These calculations require an estimate of the control group's take-up rate, which we set to .35. The estimate is based on a previous iteration of this experiment, which occurred in mid-2020 during the COVID pandemic. The population take-up rate in that experiment was around .25, which we take as a lower bound. Lower take-up rates imply lower variance and so increase power; to be more conservative, we use .35, which is just above the take-up rate of Pell recipients, the subgroup observed with the highest take-up rate in our previous experiment.

Our block randomization provides 160 potential blocks (2 gender × 2 Pell × 2 independent-status × 4 school-type × 5 enrollment-status cells), though in practice we have fewer blocks because some are empty (e.g., "not enrolled" students only have NA for the college-type variable). This blocking scheme provides two benefits: 1) it increases power by allowing us to incorporate block indicators in the regression analysis (we do not account for the blocking scheme in the power calculations); and 2) it allows us to produce estimates for any subgroup of interest, such as Pell recipients or students in for-profit colleges, while maintaining balance on the other blocking attributes. In practice we will conduct heterogeneity analysis by each of our subgroups. The smallest subgroup is graduated students with 600,000 observations, followed by for-profit students (n = 930,000).

We show power curves for the full sample and for the smallest subsample. In the full sample we can distinguish a difference in rates of .003 with power = .8, or .0035 with power = .9; these are very small MDEs, representing roughly a 1% change in take-up. In our smallest subsample, the MDEs are roughly .012 (.014) with power of .8 (.9). In practice, we will use a multiple-hypothesis correction.
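The registered calculations use Stata's power command; as a rough cross-check, a minimal sketch follows using Python's statsmodels, assuming a two-sided two-proportion test on the arcsine (Cohen's h) scale. The arm sizes come from the registration and the .35 baseline is the assumption stated above; the graduated-subgroup split (about 12,000 control vs. 196,000 per treatment arm) is an illustrative guess derived from the ~2% control share, not a figure from the registration.

import numpy as np
from statsmodels.stats.power import NormalIndPower

def mde(p1, n_control, n_treat, power, alpha=0.05):
    """Minimum detectable difference in proportions, control vs. one treatment arm."""
    # Solve for the detectable effect size (Cohen's h), then convert it back
    # into a difference in proportions around the baseline rate p1.
    h = NormalIndPower().solve_power(effect_size=None, nobs1=n_control,
                                     ratio=n_treat / n_control,
                                     alpha=alpha, power=power)
    p2 = np.sin(np.arcsin(np.sqrt(p1)) + h / 2) ** 2
    return p2 - p1

# Full sample, control vs. one treatment arm: roughly .003 (.0035) at power .8 (.9)
print(mde(0.35, 198_275, 3_238_800, power=0.8))
print(mde(0.35, 198_275, 3_238_800, power=0.9))

# Smallest subgroup (graduated students, ~600,000 obs.), assuming a ~2% control
# share: roughly .012-.015 at power .8-.9
print(mde(0.35, 12_000, 196_000, power=0.8))
print(mde(0.35, 12_000, 196_000, power=0.9))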
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication


Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials