Financial Education, Health Scoring and Downstream Behaviour: Evidence from FinTech

Last registered on February 21, 2023

Pre-Trial

Trial Information

General Information

Title
Financial Education, Health Scoring and Downstream Behaviour: Evidence from FinTech
RCT ID
AEARCTR-0009594
Initial registration date
February 16, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 21, 2023, 7:03 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Kiel Institute for the World Economy

Other Primary Investigator(s)

PI Affiliation
PI Affiliation
Technical University of Munich
PI Affiliation
Ludwig-Maximilians-Universität Munich & ifo Institute

Additional Trial Information

Status
In development
Start date
2023-05-01
End date
2023-09-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We conduct a field experiment within the chatbot of a fintech's mobile app. The goal is to examine the effect of financial education and a financial health score on consumer financial behavior. Experiment participants will be randomly assigned to one of three groups at the individual level.

Participants in the control group receive neither financial education nor a financial health score. The first treatment group receives financial education: participants are taught a standard budgeting rule and how to use the app's personal financial management features to implement it. The second treatment group receives identical financial education and is additionally provided with an individual subjective financial health score, calculated from the participant's answers to five questions about their finances. With this experiment we want to answer two research questions: First, is financial education within a chatbot effective? Second, does a financial health score amplify the effect of financial education?

To investigate these research questions, we examine several outcomes and compare them across the groups. More specifically, we measure the adoption of personal financial management features within the app as well as participants' spending and saving behavior. Our first hypothesis is that participants in the first treatment group adopt personal financial management features at a higher rate and spend less and save more than the control group. Our second hypothesis is that the provision of an individual financial health score in the second treatment group further amplifies these effects.
External Link(s)

Registration Citation

Citation
Federle, Jonathan et al. 2023. "Financial Education, Health Scoring and Downstream Behaviour: Evidence from FinTech." AEA RCT Registry. February 21. https://doi.org/10.1257/rct.9594-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2023-07-01
Intervention End Date
2023-08-01

Primary Outcomes

Primary Outcomes (end points)
- Actual spending behavior of users in the period following the presentation of a financial health score. Separate evaluation horizons ranging from several days to several months are planned. We expect spending to decrease following the presentation of a financial health score.

- Actual saving behavior of users in the period following the presentation of a financial health score. Separate evaluation horizons ranging from several days to several months are planned. We expect savings to increase following the presentation of a financial health score.

- Adoption of personal financial management features, as measured by the respective users' usage of the features in each group. We expect the adoption of personal financial management features to increase in both treatment arms.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The fintech app used for the field experiment contains a chatbot that the fintech's clients can use to ask service-related questions. For instance, a client who wants to change their address and does not know how can ask the chatbot for assistance. The recruitment mechanism works as follows: if a client consults the chatbot and their question is resolved, the chatbot subsequently asks whether they would like to learn more about financial health:

"One last thing - I'm learning a lot about financial health these days. They say money can't buy happiness... But financial health can definitely improve one's life and prevent hardships. Can I share my insights with you? It just takes 3-5 minutes." (The exact wording might be subject to change depending on editorial requests of the fintech)

If the participant answers yes, they are randomly assigned to one of the two treatment groups. The participant is at no point obliged to take part in the experiment, as they can exit the chatbot at any time.
The experiment will be launched in a staggered release. The control group is composed of clients who consult the chatbot and have their question resolved, but are not asked whether they would like to learn more about financial health.
Experimental Design Details
Randomization Method
Each user of the FinTech app is assigned a random user id according to the UUID-4 standard upon registration. Treatment/control assignment within our experiment is based on this user id. In particular, we use the Python NumPy RandomState random number generator and seed it with the aforementioned UUID-4. Based on this initialized random number generator, we assign each user to the Control, Treatment A, or Treatment B group.

A code excerpt outlining the exact mechanism of the randomization is attached in the rubric "Docs & Materials".
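The attached excerpt is not public, but the mechanism described above can be sketched as follows. This is an illustrative version, not the production code: the 50:25:25 split matches the planned ratio stated under "Sample size (or number of clusters) by treatment arms", and the reduction of the 128-bit UUID to a 32-bit seed is an assumed implementation detail (NumPy's `RandomState` accepts integer seeds only up to 2^32 - 1).

```python
import uuid

import numpy as np


def assign_group(user_id: str) -> str:
    """Deterministically map a UUID-4 user id to a treatment group."""
    # RandomState integer seeds must fit in 32 bits, so the 128-bit
    # UUID is reduced modulo 2**32 (an assumed detail, not from the
    # registry text).
    seed = uuid.UUID(user_id).int % (2**32)
    rng = np.random.RandomState(seed)

    # One uniform draw decides the arm, using the planned
    # Control:Treatment A:Treatment B ratio of 50:25:25.
    draw = rng.uniform()
    if draw < 0.50:
        return "Control"
    elif draw < 0.75:
        return "Treatment A"
    else:
        return "Treatment B"
```

Because the seed is derived from the user id, the mapping is reproducible: the same user always lands in the same group, even if the assignment is recomputed on a later chatbot visit.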
Randomization Unit
Users are randomized at a user level. See "Randomization Method" for further details.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
As many users as we are able to collect during the experiment period, or 30,000 users, whichever comes first.
Sample size: planned number of observations
As many users as we are able to collect during the experiment period, or 30,000 users, whichever comes first. If the participation rate of app users in the experiment is significantly lower than anticipated, we may fall back to a maximum observation period of 4-6 months and work with the number of observations collected within this time frame. The exact specification of the time frame depends on the preferences of the project partner.
Sample size (or number of clusters) by treatment arms
The experiment will be introduced as a staggered release by the project partner. We cannot yet determine how many users will ultimately end up in the control and treatment groups.

We plan, however, to assign users to Control, Treatment A, and Treatment B in a 50:25:25 ratio.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Ludwig-Maximilians-Universität Munich, Ethics Committee of the Department of Economics
IRB Approval Date
2022-06-27
IRB Approval Number
2022-07

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials