
Fields Changed

Registration

Last Published
Before: June 08, 2020 11:45 AM
After: June 08, 2020 12:08 PM
Intervention (Public)
Before: We will contact firms to give them the opportunity to learn about a potential loan offer through a Fintech company. Firms are clients of an electronic payment processing company. They will be informed about the loan opportunity through concurrent email and SMS. To test if forgetfulness and present-bias hinder the search for credit, firms will be randomly assigned offers to learn about the loan with different combinations of deadlines and unanticipated and anticipated reminders.
After: In our intervention, we will contact firms to provide them details about the opportunity to apply for a loan from a FinTech lender. The firms in our sample are clients of a FinTech electronic payment processing company that is partnering with a separate FinTech lender. The firms will be informed about the loan opportunity through concurrent email and SMS messages. To test if forgetfulness and present-bias hinder the search for credit, we will vary the content and timing of the emails and SMS messages: firms will be randomly assigned offers to apply for a loan with different combinations of deadlines and unanticipated and anticipated reminder messages.
Primary Outcomes (End Points)
Before: Our primary outcomes are whether firms click a link in an email or SMS to obtain information about a loan offer and whether they apply for a loan. Furthermore, we will structurally estimate the model in Ericson (2017) using the data from the experiment to quantify the relative importance and interaction of different behavioral biases in preventing firms from applying for credit.
After: Our primary outcome is a dummy variable indicating whether firms apply for a loan. We will also use this outcome to estimate the model in Ericson (2017) to quantify the relative importance and interaction of different behavioral biases in preventing firms from applying for credit.
Experimental Design (Public)
Before: Offers with deadlines will state that the firm has one week or one day to access the link to learn about the loan opportunity. Firms assigned an unanticipated reminder will receive a reminder to access the link one week after the offer. Firms assigned an anticipated reminder will also receive a reminder to access the link one week after, and will be told about this reminder in the initial email and SMS. In total, our design has eight treatment groups:
1. Control, no offer
2. Loan offer with no deadline, no reminder
3. Loan offer with no deadline, anticipated reminder
4. Loan offer with no deadline, unanticipated reminder
5. Loan offer with 1-week deadline, no reminder
6. Loan offer with 1-week deadline, anticipated reminder
7. Loan offer with 1-week deadline, unanticipated reminder
8. Loan offer with 24-hour deadline, no reminder
After: Messages with deadlines will state that the firm has one week or one day to access the link to apply for credit. Firms assigned an unanticipated reminder will receive a reminder to access the link one week after the initial messages. Firms assigned an anticipated reminder will also receive a reminder to access the link one week after, and will be told in the initial email and SMS messages that they will receive this reminder and on what day they will receive it. In total, our design has eight treatment groups:
1. Control, no messages
2. Loan messages with no deadline, no reminder
3. Loan messages with no deadline, anticipated reminder
4. Loan messages with no deadline, unanticipated reminder
5. Loan messages with 1-week deadline, no reminder
6. Loan messages with 1-week deadline, anticipated reminder
7. Loan messages with 1-week deadline, unanticipated reminder
8. Loan messages with 24-hour deadline, no reminder
Randomization Unit
Before: We randomize at the individual firm level (note that outcomes are measured at the firm level) using a simple stratified randomization. We stratify our randomization by four variables:
1. Average monthly electronic sales in the past year, or since the firm registered with the payments processing company if that was within the past year (4 quartiles)
2. Business type (6 categories: Beauty, Clothing, Professionals, Restaurants, Small Retailers, and Other)
3. Tax registration status (2 categories: Self-Employed and Limited Company)
4. A proxy for the initial impact on sales of the Covid-19 outbreak as of March 2020. This proxy is defined as above or below the median percent difference in sales from February 2020 to March 2020; a third group for this categorical variable comprises firms with no sales in February 2020, for which the change in sales is undefined.
We stratified on these variables since we will test for heterogeneous treatment effects by each of them (sales, business type, tax registration status, and how affected firms were by Covid-19).
After: We randomize at the individual firm level (note that outcomes are measured at the firm level) using a simple stratified randomization. There are 70,020 individual firms in our experiment. We stratify our randomization by four variables:
1. Average monthly electronic sales in the past year, or since the firm registered with the payments processing company if that was within the past year (4 quartiles)
2. Business type (6 categories: Beauty, Clothing, Professionals, Restaurants, Small Retailers, and Other)
3. Tax registration status (2 categories: Self-Employed and Limited Company)
4. A proxy for the initial impact on sales of the Covid-19 outbreak as of March 2020. This proxy is defined as above or below the median percent difference in sales from February 2020 to March 2020; a third group for this categorical variable comprises firms with no sales in February 2020, for which the change in sales is undefined.
We stratified on these variables since we will test for heterogeneous treatment effects by each of them (sales, business type, tax registration status, and how affected firms were by Covid-19). Our stratification includes 144 blocks: 4 (prior sales quartiles) * 2 (tax registration status) * 6 (business types) * 3 (impact on sales from Covid-19 outbreak).
Planned Number of Clusters
Before: 70,020 clusters (individual firms).
After: Our stratification includes 144 blocks: 4 (prior sales quartiles) * 2 (tax registration status) * 6 (business types) * 3 (impact on sales from Covid-19 outbreak). 70,020 clusters (individual firms).
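The stratified assignment described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the field names (`sales_quartile`, `business_type`, `tax_status`, `covid_impact`) are hypothetical, and the round-robin split below assigns the eight arms in equal proportions within each stratum, whereas the actual design sizes arms in proportion to the power calculations.

```python
import random
from collections import defaultdict

ARMS = list(range(1, 9))  # the eight treatment groups

def stratified_assign(firms, seed=0):
    """Assign each firm to one of the eight arms, randomizing within
    strata defined by the four blocking variables (hypothetical keys)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for f in firms:
        key = (f["sales_quartile"], f["business_type"],
               f["tax_status"], f["covid_impact"])
        strata[key].append(f)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)  # random order within the stratum
        for i, f in enumerate(members):
            assignment[f["id"]] = ARMS[i % len(ARMS)]  # balanced split
    return assignment
```

With 4 * 6 * 2 * 3 = 144 possible key values, this reproduces the 144 stratification blocks; a proportional (rather than equal) split would replace the round-robin step with arm shares derived from the power calculations.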
Power calculation: Minimum Detectable Effect Size for Main Outcomes
Before: Sample sizes for each group were informed by results from a prior pilot test and by pairwise power calculations for comparisons of interest. In the pilot there was a control group and groups receiving offers with and without a 24-hour deadline, along with an unanticipated reminder. For the purpose of the power calculations we assume similar take-up for a 24-hour and a one-week deadline absent other treatments. The take-up rates in the pilot by pairwise comparison and the necessary sample sizes to detect such differences are (following the treatment group numbers in the Sample Size section):
- 2 vs 1: P0 = 0.01, P1 = 0.18. Minimum sample size per arm = 46.
- 5 vs 1: P0 = 0.01, P1 = 0.28. Minimum sample size per arm = 26.
- 5 vs 2: P0 = 0.18, P1 = 0.28. Minimum sample size per arm = 277.
- 4 vs 2: P0 = 0.14, P1 = 0.18. Minimum sample size per arm = 1472.
- 7 vs 5: P0 = 0.20, P1 = 0.24. Minimum sample size per arm = 1529.
In the pilot there was no anticipated reminder treatment group. To obtain an estimate of the expected effect size and perform power calculations, we benchmark results from the pilot against model simulations based on the model in Ericson (2017), assuming standard magnitudes for present-bias, forgetfulness, and memory overconfidence. The model predicts the ratio of the difference in take-up between groups with unanticipated and anticipated reminders to the difference in take-up between groups with unanticipated and no reminders. From our pilot we take the difference in take-up between groups with unanticipated and no reminders and apply this ratio to obtain an estimated take-up rate for groups with an anticipated reminder. The ratio from the model is measured in the period when the reminder is sent, right before the deadline. With these simulated take-up rates for groups with an anticipated reminder, the necessary sample sizes to detect differences in the relevant pairwise comparisons are:
- 4 vs 3: P0 = 0.14, P1 = 0.18. Minimum sample size per arm = 1222.
- 7 vs 6: P0 = 0.24, P1 = 0.29. Minimum sample size per arm = 1152.
To obtain a sample size for our study, we select the largest sample size needed for each group across its relevant pairwise power calculations from the pilot results. We calculate that we need 7,253 observations in total to statistically detect differences in take-up between treatment groups of interest. As our sample size is larger, we adjust sample sizes proportionally.
After: Sample sizes for each group were informed by results from a prior pilot test conducted in May 2019 and by pairwise power calculations for comparisons of interest. In the pilot we offered a reduction in the merchant fee charged to the firm for card payments processed through the FinTech payments company, from 3.75% to 3.5%. There was a control group, a placebo group that received an email from the FinTech payments company with no messages, a group that received the offer with no deadline, and a group that received the offer with a 24-hour deadline. We also sent two non-randomized reminders to both offer groups after the deadline had passed (the deadline was not binding, so firms could still sign up after it). For the purpose of the power calculations we assume similar take-up for a 24-hour and a one-week deadline absent other treatments. The take-up rates in the pilot by pairwise comparison and the necessary sample sizes to detect such differences are as follows (using the treatment group numbers above). In the comparisons below, P0 refers to take-up in the second group listed in each comparison (e.g., if the comparison is 2 vs 1, P0 refers to take-up in group 1 and P1 to take-up in group 2). For the reminders, which were not randomized in our pilot, we estimate P1 as cumulative take-up of the offer 24 hours after the reminder was sent and P0 as cumulative take-up immediately before the reminder was sent.
- 2 vs 1: P0 = 0.01, P1 = 0.18. Minimum sample size per arm = 46.
- 5 vs 1: P0 = 0.01, P1 = 0.28. Minimum sample size per arm = 26.
- 5 vs 2: P0 = 0.18, P1 = 0.28. Minimum sample size per arm = 277.
- 4 vs 2: P0 = 0.14, P1 = 0.18. Minimum sample size per arm = 1472.
- 7 vs 5: P0 = 0.20, P1 = 0.24. Minimum sample size per arm = 1529.
In the pilot there was no anticipated reminder treatment group. To obtain an estimate of the expected effect size of the anticipated reminder and perform power calculations, we benchmark results from the pilot against model simulations based on the model in Ericson (2017), assuming standard magnitudes of present-bias and forgetfulness from the literature, full naïveté about present-bias, and accurate beliefs about memory (in Ericson's notation, beta = 0.9, beta_hat = 1, rho = 0.95, rho_hat = 0.95). The simulations predict the ratio of the difference in take-up between the groups with unanticipated and anticipated reminders to the difference in take-up between the groups with unanticipated and no reminders. From our pilot we take the difference in take-up between groups with unanticipated and no reminders and apply this ratio to obtain an estimated take-up rate for groups with an anticipated reminder. The ratio from the model is measured in the period when the reminder is sent, right before the deadline. With these simulated take-up rates, we get a treatment effect ratio of 1.23, which implies the necessary sample sizes to detect differences in the relevant pairwise comparisons are:
- 4 vs 3: P0 = 0.14, P1 = 0.18. Minimum sample size per arm = 1222.
- 7 vs 6: P0 = 0.24, P1 = 0.29. Minimum sample size per arm = 1152.
To obtain a minimum sample size per arm for our study, we select the largest sample size needed for each group across its relevant pairwise power calculations above. For group 8, we assume we need the same number of observations as for group 5, since both include a deadline and no reminder. We calculate the following minimum sample sizes per arm to detect the expected effect sizes based on our pilot and simulations:
- Control, no messages: 46
- Loan messages with no deadline, no reminder: 1,472
- Loan messages with no deadline, anticipated reminder: 1,529
- Loan messages with no deadline, unanticipated reminder: 1,222
- Loan messages with 1-week deadline, no reminder: 1,152
- Loan messages with 1-week deadline, anticipated reminder: 1,472
- Loan messages with 1-week deadline, unanticipated reminder: 1,529
- Loan messages with 24-hour deadline, no reminder: 1,529
Thus, we need 9,951 observations in total across all treatment arms to statistically detect the expected differences in take-up between treatment groups of interest, based on outcomes in our pilot and simulations of the Ericson (2017) model. As our available sample for the experiment is 70,020 firms, much larger than the needed 9,951, we scale the sample size of each treatment arm proportionally to arrive at the per-arm sample sizes shown under "Sample size by treatment arm."
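As a cross-check on the figures above, the standard pooled-variance normal approximation for the per-arm sample size of a two-sided two-sample test of proportions (5% significance, 80% power) can be sketched as below. This is our reconstruction, not the authors' code: it reproduces some of the reported figures exactly (46 for 2 vs 1, 26 for 5 vs 1, 277 for 5 vs 2), while others may differ slightly depending on the exact formula or continuity correction the authors used. The final step scales the stated per-arm minimums (which sum to 9,951) up to the available 70,020 firms, as the text describes.

```python
from math import ceil, sqrt
from statistics import NormalDist

def min_n_per_arm(p0, p1, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-sample test of proportions,
    using the pooled-variance normal approximation (no continuity correction)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    p_bar = (p0 + p1) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2 / (p1 - p0) ** 2
    return ceil(n)

print(min_n_per_arm(0.01, 0.18))  # 2 vs 1 -> 46
print(min_n_per_arm(0.18, 0.28))  # 5 vs 2 -> 277

# Per-arm minimums stated above for groups 1-8 (they sum to 9,951).
MIN_N = {1: 46, 2: 1472, 3: 1529, 4: 1222, 5: 1152, 6: 1472, 7: 1529, 8: 1529}

def proportional_alloc(minimums, total):
    """Scale minimums up to `total`, giving leftover units (from rounding
    down) to the arms with the largest fractional parts."""
    scale = total / sum(minimums.values())
    raw = {k: v * scale for k, v in minimums.items()}
    alloc = {k: int(r) for k, r in raw.items()}
    leftover = total - sum(alloc.values())
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

alloc = proportional_alloc(MIN_N, 70_020)
assert sum(alloc.values()) == 70_020
```

The largest-remainder rounding at the end is our own convenience; any rounding rule that preserves the total would serve, and the registry's actual per-arm sizes are the authoritative numbers.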
Public analysis plan
Before: No
After: Yes
Secondary Outcomes (End Points)
After: Through link tracking, we know the fraction of firms in each treatment group that click the links (but we cannot link these clicks back to particular firms within each treatment group). Thus, a secondary outcome will be clicking the link. In addition, for those who log in after clicking the link, the FinTech lender also tracks how much progress firms make in completing the loan application (and these measures we can link to other data on the firms from the FinTech payments provider): 0% complete if, after clicking the link, they log in but do not fill out general information, and 25%, 50%, or 75% complete if they complete only a fraction of the application. Applications that are 100% complete constitute our primary outcome "apply for a loan" above. Dummy variables for each of the four partially complete outcomes (0%, 25%, 50%, 75% complete) will be used as secondary outcomes to explore firms' behavior in the loan application process.
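The coding of these partial-completion indicators can be sketched as follows; the variable names are our own illustration, not the registry's.

```python
PROGRESS_LEVELS = (0, 25, 50, 75)  # partial-completion levels tracked by the lender

def progress_dummies(pct_complete):
    """One indicator per partial-completion level; firms at 100% are
    captured by the primary outcome (applied for a loan) instead."""
    dummies = {f"complete_{p}": int(pct_complete == p) for p in PROGRESS_LEVELS}
    dummies["applied"] = int(pct_complete == 100)  # primary outcome
    return dummies
```

Each firm then contributes at most one active indicator, so the secondary outcomes partition the application funnel between clicking the link and submitting a complete application.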

Analysis Plans

Document
After: SearchForCredit_PAP.pdf
MD5: dc41a656da4c15c049311e1a6515e22d
SHA1: ad0df4b6a62a07296027d9b6f967303a54a544a6

Title
After: Search For Credit Pre-Analysis Plan