
Self-Selection in Financial Literacy Education

Last registered on August 01, 2025

Pre-Trial

Trial Information

General Information

Title
Self-Selection in Financial Literacy Education
RCT ID
AEARCTR-0016395
Initial registration date
July 30, 2025


First published
August 01, 2025, 2:29 PM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
UC Berkeley

Other Primary Investigator(s)

PI Affiliation
UC Berkeley
PI Affiliation
UC Berkeley

Additional Trial Information

Status
In development
Start date
2025-08-01
End date
2026-10-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Persistent disparities in financial literacy contribute to income and wealth inequality. This project investigates how individuals self-select into financial education and the welfare impacts of such selection. We pair rich administrative data with a lab-in-field experiment that incentivizes participation in financial education among individuals with low willingness to pay. To examine how different types of education materials may influence take-up, we experimentally vary the format and perceived difficulty of the materials.
External Link(s)

Registration Citation

Citation
Lyu, Junru, Elaine Shen and Laila Voss. 2025. "Self-Selection in Financial Literacy Education." AEA RCT Registry. August 01. https://doi.org/10.1257/rct.16395-1.0
Experimental Details

Interventions

Intervention(s)
We will conduct a lab-in-field experiment to study selection patterns in financial education. Participants are offered access to education material designed to improve their financial decision-making ability within the experiment. We study how demand (willingness to pay) for education varies with baseline ability and with the format and perceived difficulty of the education materials. To better understand the role of confidence in education investment decisions, we also elicit each participant’s perceived baseline ability and perceived benefits of education.
Intervention Start Date
2025-08-01
Intervention End Date
2026-10-01

Primary Outcomes

Primary Outcomes (end points)
We measure how demand for different types of education materials varies with an individual's starting level of financial knowledge. We measure an individual’s financial decision-making ability, demand for financial education (willingness to pay), and the treatment effect of education (or no education).
Primary Outcomes (explanation)
To measure financial knowledge, we design scenarios in which one financial decision maximizes a participant's monetary payout in the experiment. We then measure the mistake, defined as the difference between the participant's choice in the scenario and the payout-maximizing choice. We construct our measure of financial knowledge as a rescaled function of the mistake. Our methodology follows Ambuehl, Bernheim, and Lusardi (2022), who use the difference between a participant's valuations of two equivalently valued financial instruments, one complexly framed and one simply framed, as the outcome. We collect both a baseline and an endline financial decision-making score, and we also measure the change in the score, i.e., the treatment effect under different education materials or under no education.
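As an illustration, a mistake-based score of this kind can be sketched as follows. The specific rescaling here (normalizing the forgone payout by the worst-case payout gap) is a hypothetical choice for illustration, not the study's actual formula:

```python
def knowledge_score(chosen_payout: float, optimal_payout: float,
                    worst_payout: float) -> float:
    """Rescale the mistake (payout forgone relative to the optimum)
    to a 0-100 score, where 100 means no mistake.

    The normalization by the worst-case gap is an illustrative
    assumption, not the registered rescaling.
    """
    mistake = optimal_payout - chosen_payout
    max_mistake = optimal_payout - worst_payout
    if max_mistake == 0:
        return 100.0  # no room for a mistake in this scenario
    return 100.0 * (1.0 - mistake / max_mistake)
```

A participant who picks the optimal option scores 100; one who picks the worst option scores 0.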

We use a multiple price list to measure demand (willingness to pay/willingness to accept) for different types of education. We exclude observations where individuals switch at multiple points in the multiple price list. We also conduct a version of our analysis using only the willingness to pay/willingness to accept elicitation for the first version of the education material as a robustness check.
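The multiple price list coherence rule described above (at most one switch, and only from willing-to-pay to unwilling-to-pay as the price rises) can be sketched as follows; function names and the interval-bound convention are illustrative:

```python
def coherent(choices):
    """choices: list of booleans, True = willing to pay at that price,
    ordered from lowest to highest price. A coherent response has at
    most one switch, and only from willing to unwilling."""
    switches = [(a, b) for a, b in zip(choices, choices[1:]) if a != b]
    return len(switches) <= 1 and all(a and not b for a, b in switches)

def wtp_bounds(choices, prices):
    """Return (lower, upper) bounds on willingness to pay for a
    coherent response, or None for an incoherent response
    (coded as missing and excluded from the analysis)."""
    if not coherent(choices):
        return None
    if all(choices):
        return (prices[-1], float("inf"))   # WTP above the list
    if not any(choices):
        return (float("-inf"), prices[0])   # WTP below the list
    k = next(i for i, c in enumerate(choices) if not c)
    return (prices[k - 1], prices[k])       # WTP between the switch prices
```

A response like willing at $1 and $2 but not at $3 yields WTP bounds of ($2, $3); a response that switches back and forth is dropped.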

Secondary Outcomes

Secondary Outcomes (end points)
We also measure confidence and perceived ability by asking participants to predict their performance on the baseline and endline financial decision-making questions. We count an estimated score as being correct if it is within a reasonable range (e.g. +/- 5 percentage points) of the actual score.
Secondary Outcomes (explanation)
We measure confidence and perceived ability by asking participants to predict their performance on the baseline financial decision-making questions. We also ask participants to predict their performance on the endline questions for each version of the education materials and for no education material. We count an estimate as being correct if it is within a reasonable range (e.g., +/- 5 percentage points) of the actual score. We also measure overconfidence using the difference between the participant's actual ability and their perceived ability.
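A minimal sketch of these belief measures, assuming scores on a 0-100 scale; the tolerance mirrors the +/- 5 p.p. example above:

```python
def prediction_correct(predicted: float, actual: float,
                       tol: float = 5.0) -> bool:
    """A predicted score counts as correct if it falls within
    +/- tol percentage points of the actual score."""
    return abs(predicted - actual) <= tol

def overconfidence(perceived: float, actual: float) -> float:
    """Perceived minus actual ability: positive values indicate
    overconfidence, negative values underconfidence."""
    return perceived - actual
```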

Experimental Design

Experimental Design
We will conduct an experiment where we offer participants different types of education material to identify selection patterns and measure treatment effects.
Experimental Design Details
Not available
Randomization Method
The treatment and control conditions will be randomly assigned by a computer at the individual level. Within the treatment group, the different versions of the education material will also be randomly assigned at the individual level. The sizes of the treatment and control groups will not be equal, in order to maximize power.
Randomization Unit
Treatment conditions will be randomly assigned at the individual level. A computer will choose a random subset of individuals to have one of their choices in the willingness to pay elicitation implemented; for the majority of participants, treatment and control conditions will be randomly assigned, and within treatment status the version of the education material will also be assigned at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Treatment will not be clustered.
Sample size: planned number of observations
With our current funding, we will target a minimum of 1,000 complete responses; the final sample size will ultimately depend on the attrition rate. Complete responses are defined as fully completed surveys with coherent willingness to pay responses. Incoherent responses include participants who switch from willing-to-pay to unwilling-to-pay more than once on the multiple price list, or who switch from unwilling-to-pay back to willing-to-pay. Incoherent willingness to pay measures will be coded as missing and excluded from the analysis. Our ideal sample size is 1,200-1,600 responses, and we will target larger samples if we are able to obtain additional funding. Since we connect our experiment questions to a real personal finance class at UC Berkeley, we may exclude responses from participants who have already taken the class if their responses differ systematically from those of other participants. Based on pilots of the survey on Prolific, we expect the attrition rate from incoherent willingness to pay responses to be approximately 5-15%, although the exact rate may be higher or lower for different participant groups.
Sample size (or number of clusters) by treatment arms
We will target a minimum of 1,000 complete responses. The treatment-to-control ratio is 3:1, reflecting our design with three treatment arms and one control group. Our ideal sample size is 1,200-1,600 responses, but the final sample size will ultimately depend on the attrition rate and our ability to recruit enough participants to complete the survey. We may add additional versions of the education material in future versions of the study. We will target larger sample sizes if we are able to obtain additional funding. Treatment will not be clustered.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We use results from a pilot study on Prolific to calculate the minimum detectable effect size, assuming a Type I error rate of 5%, 80% power, a mean score of 50 (out of 100), and a standard deviation of 30. To be conservative, we use a slightly higher standard deviation than what was observed in the pilot study. The treatment-to-control ratio is 3:1, reflecting our design with three treatment arms and one control group. With a sample size of 800, the minimum detectable effect size is 10 percentage points (p.p.), meaning the treated group must improve by at least 10 p.p. more than the control group for the effect to be detectable. As the sample size increases, the minimum detectable effect decreases: with 1,000 observations, it drops to 9 p.p.; with 1,200 and 1,400 observations, it decreases further to 8 p.p.; and with 1,600 observations, it reaches 7 p.p.

Due to funding constraints that limit our sample size, we do not expect to be adequately powered to detect differences in selection patterns across the different versions of the education material at conventional significance levels. Instead, we plan to examine the correlation between participants' willingness to pay for each treatment and their initial financial task scores, then test whether these correlations differ across treatments. Based on our power calculations, with a sample of 800 to 1,000 participants, we can only detect differences in correlations of approximately 15-20 percentage points or more; that is, the correlation between the baseline financial literacy score and the willingness to pay would need to differ by at least 15-20 percentage points across treatments to be statistically detectable. We believe such large differences are unlikely given the similar nature of the education materials.
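For reference, the textbook two-sample minimum detectable effect calculation under these assumptions (two-sided alpha = 5%, 80% power, sd = 30) can be sketched as below. The registry's reported figures may reflect additional design adjustments (e.g., per-arm comparisons or expected attrition), so this sketch is illustrative rather than a reproduction of the authors' calculation:

```python
from statistics import NormalDist

def mde(n_treat: int, n_control: int, sd: float,
        alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect for a difference in means between
    two independent groups with common standard deviation sd:
    MDE = (z_{1-alpha/2} + z_{power}) * sd * sqrt(1/n_t + 1/n_c)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sd * (1 / n_treat + 1 / n_control) ** 0.5
```

With 600 treated and 200 control (a 3:1 split of 800), this formula gives an MDE of roughly 7 p.p.; the MDE shrinks as the sample grows, matching the qualitative pattern described above.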
IRB

Institutional Review Boards (IRBs)

IRB Name
UC Berkeley Committee for the Protection of Human Subjects
IRB Approval Date
2024-09-27
IRB Approval Number
2024-03-17249