When does tomorrow start? 401(k) Auto-escalation over future time horizons

Last registered on December 02, 2019

Pre-Trial

Trial Information

General Information

Title
When does tomorrow start? 401(k) Auto-escalation over future time horizons
RCT ID
AEARCTR-0005136
Initial registration date
December 02, 2019

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
December 02, 2019, 3:34 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Carnegie Mellon University

Other Primary Investigator(s)

PI Affiliation
UCLA
PI Affiliation
Voya Financial

Additional Trial Information

Status
In development
Start date
2018-10-01
End date
2020-12-31
Secondary IDs
Abstract
While it is widely recognized that 401(k) enrollees are more willing to commit to increases in future, relative to present, savings, clarifying how enrollees conceptualize the “future” is an open question. Clarifying how individuals conceptualize the future, in the specific domain of savings, could help policy-makers and program designers encourage savings via optimized design of auto-enrollment interfaces and could help deepen our understanding of the mechanisms that drive individuals to exhibit present-focus in their intertemporal financial decisions. The proposed project would investigate these questions by testing how the willingness of US employees to enroll in 401(k) automatic escalation varies across the default escalation delay, default escalation increment, and the framing of the auto-escalation delay through a randomized field study administered across several hundred 401(k) plans sponsored by Voya Financial. The field study will be supplemented with experimental studies to investigate mechanisms.
External Link(s)

Registration Citation

Citation
Benartzi, Shlomo, Saurabh Bhargava and Richard Mason. 2019. "When does tomorrow start? 401(k) Auto-escalation over future time horizons." AEA RCT Registry. December 02. https://doi.org/10.1257/rct.5136-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2019-12-02
Intervention End Date
2020-08-31

Primary Outcomes

Primary Outcomes (end points)
We intend to analyze two sets of behavioral outcomes to capture the immediate and longer-run effects of the interventions on enrollment decisions and savings. Note that we anticipate excluding 10% of the data based on earlier research targeting the same market segment (Mason 2019; Bhargava et al. 2018; Beshears et al. 2019). The basis for exclusion includes observations with incomplete data and, at the plan level, extends to plans that use non-standard data-tracking methods, offer after-tax Roth contributions, index contributions in dollar amounts rather than percentage terms, etc.

Immediate Outcomes: Enrollment Decision, Escalation Decision (Delay, Increment, Cap)

Long-Run Variables: Persistence of Enrollment, Escalation Decisions, Subsequent Plan Adjustments, Leakage/Loans, Asset Accumulation


Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experimental interventions involve randomizing employees across approximately 500 401(k) plans, sponsored by our firm partner, to one of several variations of the automatic-escalation enrollment interface as they proceed through online plan enrollment (or adjustment). The randomization is performed by a computer algorithm and administered at the individual level. It will involve all plans for which data are available, and for which there are no commercial exclusions, that had at least 25 online active enrollments from October 2018 to September 2019 (526 plans prior to any data or commercial exclusions, which are currently unknown to us).

The interventions reflect systematic departures along three dimensions from a baseline condition (which itself reflects a slight simplification of the prevailing commercial design used by Voya): the length of the default escalation delay (90, 180, or 365 days), the default escalation increment (1% or 2%), and the framing of the delay (as a calendar date, mm/dd/yy, or as a number of days). We do not intend to test all possible combinations (3 x 2 x 2) due to technical and operational constraints. For increased statistical efficiency, we intend to double-sample the baseline condition.
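The double-sampled baseline amounts to giving the control condition two of eleven equally likely assignment slots. A minimal sketch of such an individual-level assignment, assuming one baseline plus nine intervention variants (the condition labels and the `assign` helper are hypothetical, not the study's actual implementation):

```python
import random

# Hypothetical labels: one baseline condition plus 9 intervention variants.
# The baseline occupies 2 of 11 equally likely slots, so it is drawn
# twice as often as any single intervention arm.
SLOTS = ["baseline"] * 2 + [f"arm_{i}" for i in range(1, 10)]

def assign(rng: random.Random) -> str:
    """Individual-level assignment: draw one slot for an arriving employee."""
    return rng.choice(SLOTS)

rng = random.Random(0)  # seeded here only for reproducibility of the sketch
draws = [assign(rng) for _ in range(110_000)]
share_baseline = draws.count("baseline") / len(draws)  # close to 2/11
```

In the actual trial the algorithm is implemented by the firm partner, with the employee identified as best as possible via IP address and cookie.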
Experimental Design Details
Randomization Method
Randomization done by computer algorithm, implemented by the firm partner.
Randomization Unit
Employee level (or as best inferred by IP address, cookie)
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
As described in the power calculation section, we aspire to 27,000 enrollments/employees prior to exclusions. Using administrative data on enrollment from October 2018 to September 2019, we estimate that our sample (of plans with at least 25 annual online active enrollments) will yield approximately 3,000 observations per month (prior to exclusions). To simplify logistics, we pre-specify a trial period that ends at the end of a calendar month rather than specifying an explicit sample threshold. Assuming approximately 3,000 non-excluded observations per month (Mason 2019), we anticipate a data collection period lasting approximately 9 months. A start date during the first week of December implies an end date of August 31, 2020.
Sample size: planned number of observations
As described in the power calculation section, we aspire to 27,000 enrollments/employees prior to exclusions. Using administrative data on enrollment from October 2018 to September 2019, we estimate that our sample (of plans with at least 25 annual online active enrollments) will yield approximately 3,000 observations per month (prior to exclusions). To simplify logistics, we pre-specify a trial period that ends at the end of a calendar month rather than specifying an explicit sample threshold. Assuming approximately 3,000 non-excluded observations per month (Mason 2019), we anticipate a data collection period lasting approximately 9 months. A start date during the first week of December implies an end date of August 31, 2020.
Sample size (or number of clusters) by treatment arms
We aspire to approximately 2,300 enrollments/employees for each of the 11 treatment assignments (10 conditions, with the control double-sampled) after exclusions, or approximately 27,000 observations collectively prior to exclusions.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Because the statistical power of our tests is determined by the number of employees who visit the portal, their baseline rate of enrollment, and the number of plans represented (we intend to cluster standard errors at the plan level, and because we focus on small-to-mid-market plans, these plans can be small), our most informed estimates of statistical power come from prior research administered to the same Voya market segment in roughly the same decision context ("Picking up the Pace", Mason 2019, unpublished). This prior research field-tested 3 treatment variations on the escalation enrollment decisions of employees. In their baseline specification, the paper estimates treatment effects on the binary enrollment decision with a standard error of 0.01 from a sample of approximately 8,700 employees across 2,400 plans (Table 2). These estimates suggest that identifying changes in automatic escalation enrollment of approximately 0.02, using pairwise comparisons of interventions, would require approximately 2,900 employees per condition (assuming the same distribution of plan sizes). There are three reasons to expect slightly higher statistical power in our current test. First, we introduce sample restrictions to focus on larger plans, which increases our statistical power for a given overall sample. Second, we intend to oversample the baseline condition both within the study (x2) and through the use of a pre-period control (from October 2018). Finally, while we are interested in pairwise comparisons, our central tests of the three mechanisms involve tests across groups of interventions. We therefore roughly assume a need for 2,300 employees per condition to identify a treatment effect on escalation enrollment of approximately 0.02.
We therefore aspire to a non-excluded sample, across the 10 interventions and 11 treatment assignments (including the double-sampling of the baseline), of approximately 25,000 employees. Allowing for a small fraction of exclusions due to missing data or commercial exclusions, and recognizing the additional power provided by a pre-trial sample period, we aspire to approximately 27,000 active online enrollments.
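The scaling behind these figures follows a textbook normal-approximation power calculation. The sketch below is illustrative only: it assumes the standard error of a pairwise treatment-effect estimate shrinks with the square root of the per-condition sample size, anchored to the 0.01 standard error of the reference study, and uses 5% two-sided significance; the helper names are ours, and the registry's own numbers remain back-of-envelope.

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def scaled_se(n_per_arm: float, n_ref: float, se_ref: float = 0.01) -> float:
    """Assumed scaling: the SE of a pairwise treatment-effect estimate
    shrinks with the square root of the per-condition sample size,
    anchored to a reference design with per-arm size n_ref and SE se_ref."""
    return se_ref * (n_ref / n_per_arm) ** 0.5

def power(effect: float, se: float, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided z-test to detect `effect` given `se`
    (the negligible opposite-tail rejection probability is ignored)."""
    z_crit = _N.inv_cdf(1 - alpha / 2)
    return _N.cdf(abs(effect) / se - z_crit)
```

For example, an effect of 0.02 against a standard error of 0.01 yields only about 50% power, which is why shrinking the standard error, via larger cells, a double-sampled baseline, and pooled tests across groups of interventions, matters here.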
IRB

Institutional Review Boards (IRBs)

IRB Name
Carnegie Mellon University
IRB Approval Date
2019-10-09
IRB Approval Number
STUDY2017_00000045

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials