Do Firms’ Behavioral Biases Delay the Search for Credit During the Covid-19 Outbreak?
Last registered on June 08, 2020

Pre-Trial

Trial Information
General Information
Title
Do Firms’ Behavioral Biases Delay the Search for Credit During the Covid-19 Outbreak?
RCT ID
AEARCTR-0005782
Initial registration date
April 29, 2020
Last updated
June 08, 2020 12:08 PM EDT
Location(s)
Region
Primary Investigator
Affiliation
Baruch College, The City University of New York
Other Primary Investigator(s)
PI Affiliation
University of California, Berkeley
PI Affiliation
University of California, Berkeley
PI Affiliation
Northwestern University
Additional Trial Information
Status
In development
Start date
2020-04-30
End date
2020-12-31
Secondary IDs
Abstract
The goal of this study is to estimate the effect of deadlines and reminders on firms' search for credit during the economic downturn caused by the Covid-19 outbreak. Credit can help firms weather the downturn, but its benefits accrue in the future while the cost of obtaining it is borne immediately. Behavioral biases such as forgetfulness and present-bias may amplify this cost and lead firms never to search for and request a loan. We will study whether treatments involving deadlines and reminders (anticipated and unanticipated) help firms overcome these biases and search for a loan.
External Link(s)
Registration Citation
Citation
Gertler, Paul et al. 2020. "Do Firms’ Behavioral Biases Delay the Search for Credit During the Covid-19 Outbreak?." AEA RCT Registry. June 08. https://doi.org/10.1257/rct.5782-3.0.
Experimental Details
Interventions
Intervention(s)
In our intervention, we will contact firms to provide them details about the opportunity to apply for a loan from a FinTech lender. The firms in our sample are clients of a FinTech electronic payment processing company that is partnering with a separate FinTech lender. Firms will be informed about the loan opportunity through concurrent email and SMS messages. To test whether forgetfulness and present-bias hinder the search for credit, we will vary the content and timing of the emails and SMS messages: firms will be randomly assigned offers to apply for a loan with different combinations of deadlines and unanticipated and anticipated reminder messages.
Intervention Start Date
2020-04-30
Intervention End Date
2020-05-07
Primary Outcomes
Primary Outcomes (end points)
Our primary outcome is a dummy variable indicating whether firms apply for a loan. We will also use this outcome to estimate the model in Ericson (2017) to quantify the relative importance and interaction of different behavioral biases in preventing firms from applying for credit.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Through link tracking, we observe the fraction of firms in each treatment group that click the links (though clicks cannot be linked back to particular firms within each treatment group). Thus, a secondary outcome will be clicking the link. In addition, for firms that log in after clicking the link, the FinTech lender tracks how far they progress through the loan application (and these measures can be linked to other data on the firms from the FinTech payments provider): an application is 0% complete if the firm logs in after clicking the link but does not fill out general information, and 25%, 50%, or 75% complete if the firm fills out the corresponding fraction of the application without finishing it. Applications that are 100% complete constitute our primary outcome, "apply for a loan," above. Dummy variables for each of the four partial-completion levels (0%, 25%, 50%, 75%) will be used as secondary outcomes to explore firms' behavior in the loan application process.
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Messages with deadlines will state that the firm has one week or one day to access the link to apply for credit. Firms assigned to an unanticipated reminder will receive a reminder to access the link one week after the initial messages. Firms assigned to an anticipated reminder will also receive a reminder one week later, but will additionally be told in the initial email and SMS messages that they will receive this reminder and on what day they will receive it. In total, our design has eight treatment groups:

1. Control, no messages
2. Loan messages with no deadline, no reminder
3. Loan messages with no deadline, anticipated reminder
4. Loan messages with no deadline, unanticipated reminder
5. Loan messages with 1-week deadline, no reminder
6. Loan messages with 1-week deadline, anticipated reminder
7. Loan messages with 1-week deadline, unanticipated reminder
8. Loan messages with 24-hour deadline, no reminder
Experimental Design Details
Not available
Randomization Method
Randomization done by a computer (R script)
Randomization Unit
We randomize at the individual firm level (note that outcomes are measured at the firm level) using a simple stratified randomization. There are 70,020 individual firms in our experiment.

We stratify our randomization by four variables:

1. Average monthly electronic sales in the past year, or since the firm registered with the payments processing company if it was within the past year (4 quartiles)
2. Business type (6 categories: Beauty, Clothing, Professionals, Restaurants, Small Retailers and Other)
3. Tax registration status (2 categories: Self-Employed and Limited Company)
4. A proxy for initial impact on sales due to the Covid-19 outbreak as of March 2020. This proxy is defined as above or below the median in the percent difference in sales from February 2020 to March 2020. A third group for this categorical variable is made up of those who had no sales in February 2020 such that the change in sales measure is undefined.

We stratified on these variables because we will test for heterogeneous treatment effects by each of them (sales, business type, tax registration status, and how strongly the firm was affected by Covid-19). Our stratification includes 144 blocks: 4 (prior sales quartiles) * 2 (tax registration statuses) * 6 (business types) * 3 (impact on sales from the Covid-19 outbreak).
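The registration notes that randomization was done by an R script; the script itself is not included here. As a hedged illustration only, the stratified, proportional assignment described above can be sketched in Python. The field names, arm labels, and proportional-allocation rule below are assumptions for illustration, not the study's actual code (the arm shares are taken from the registered sample sizes per arm).

```python
import random
from collections import defaultdict

# Hypothetical arm labels; shares are proportional to the registered
# arm sizes (see "Sample size by treatment arm" in this record).
ARMS = ["control", "no_dl_no_rem", "no_dl_ant_rem", "no_dl_unant_rem",
        "wk_dl_no_rem", "wk_dl_ant_rem", "wk_dl_unant_rem", "day_dl_no_rem"]
SHARES = [327, 10355, 8592, 10362, 10765, 8104, 10755, 10760]

def assign(firms, seed=0):
    """Stratified assignment: group firms into strata defined by the four
    stratification variables, shuffle within each stratum, then deal
    firms to arms in proportion to SHARES."""
    rng = random.Random(seed)
    total = sum(SHARES)
    strata = defaultdict(list)
    for f in firms:
        key = (f["sales_quartile"], f["business_type"],
               f["tax_status"], f["covid_impact"])
        strata[key].append(f)
    for members in strata.values():
        rng.shuffle(members)
        n, start, cum = len(members), 0, 0
        for arm, share in zip(ARMS, SHARES):
            cum += share
            end = round(n * cum / total)  # cumulative proportional cut
            for f in members[start:end]:
                f["arm"] = arm
            start = end
    return firms
```

Note that with very small strata, rounding can allocate zero firms to a low-share arm (such as the control) within a stratum; in practice allocation rules for such remainders would need to be specified.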
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
70,020 clusters (individual firms).
Sample size: planned number of observations
70,020
Sample size (or number of clusters) by treatment arms
1. Control, no offer: 327
2. Loan offer with no deadline, no reminder: 10355
3. Loan offer with no deadline, anticipated reminder: 8592
4. Loan offer with no deadline, unanticipated reminder: 10362
5. Loan offer with 1-week deadline, no reminder: 10765
6. Loan offer with 1-week deadline, anticipated reminder: 8104
7. Loan offer with 1-week deadline, unanticipated reminder: 10755
8. Loan offer with 24-hour deadline, no reminder: 10760
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Sample sizes for each group were informed by results from a prior pilot conducted in May 2019 and by pairwise power calculations for the comparisons of interest. In the pilot we offered a reduction in the merchant fee charged to the firm for card payments processed through the FinTech payments company, from 3.75% to 3.5%. There was a control group, a placebo group that received an email from the FinTech payments company with no offer, a group that received the offer with no deadline, and a group that received the offer with a 24-hour deadline. We also sent two non-randomized reminders to both offer groups after the deadline had passed (the deadline was not binding, so firms could still sign up afterward). For the purpose of the power calculations we assume similar take-up under a 24-hour and a one-week deadline absent other treatments. The take-up rates in the pilot by pairwise comparison, and the sample sizes necessary to detect such differences, are as follows (using the treatment group numbers above). In each comparison, P0 refers to take-up in the second group listed and P1 to take-up in the first (e.g. for 2 vs 1, P0 is take-up in group 1 and P1 is take-up in group 2). For the reminders, which were not randomized in our pilot, we estimate P1 as cumulative take-up of the offer 24 hours after the reminder was sent and P0 as cumulative take-up immediately before the reminder was sent.

- 2 vs 1: P0 = 0.01, P1 = 0.18. Minimum sample size per arm = 46.
- 5 vs 1: P0 = 0.01, P1 = 0.28. Minimum sample size per arm = 26.
- 5 vs 2: P0 = 0.18, P1 = 0.28. Minimum sample size per arm = 277.
- 4 vs 2: P0 = 0.14, P1 = 0.18. Minimum sample size per arm = 1,472.
- 7 vs 5: P0 = 0.20, P1 = 0.24. Minimum sample size per arm = 1,529.

There was no anticipated-reminder treatment group in the pilot.
To estimate the expected effect size of the anticipated reminder and perform power calculations, we benchmark results from the pilot against simulations of the model in Ericson (2017), assuming standard magnitudes for present-bias and forgetfulness from the literature, and assuming full naïveté about present-bias but accurate beliefs about memory (in Ericson's notation: beta = 0.9, beta_hat = 1, rho = 0.95, rho_hat = 0.95). The simulations predict the ratio of the difference in take-up between the unanticipated- and anticipated-reminder groups to the difference in take-up between the unanticipated- and no-reminder groups. We take the difference in take-up between the unanticipated- and no-reminder groups from our pilot and apply this ratio to obtain an estimated take-up rate for the groups with an anticipated reminder. The ratio from the model is measured in the period when the reminder is sent, right before the deadline. With these simulated take-up rates for the anticipated-reminder groups, we obtain a treatment effect ratio of 1.23, which implies the following sample sizes for the relevant pairwise comparisons:

- 4 vs 3: P0 = 0.14, P1 = 0.18. Minimum sample size per arm = 1,222.
- 7 vs 6: P0 = 0.24, P1 = 0.29. Minimum sample size per arm = 1,152.

To obtain a minimum sample size per arm for our study, we select for each group the largest sample size required by any of its relevant pairwise comparisons above. For group 8, we assume we need the same number of observations as for group 5, since both include a deadline and no reminder.
Combining the pilot and simulation results, we calculate the following minimum sample sizes per arm to detect the expected effect sizes (taking, for each group, the largest sample size required across its relevant pairwise comparisons):

- Control, no messages: 46
- Loan messages with no deadline, no reminder: 1,472 (binding comparison: 4 vs 2)
- Loan messages with no deadline, anticipated reminder: 1,222 (4 vs 3)
- Loan messages with no deadline, unanticipated reminder: 1,472 (4 vs 2)
- Loan messages with 1-week deadline, no reminder: 1,529 (7 vs 5)
- Loan messages with 1-week deadline, anticipated reminder: 1,152 (7 vs 6)
- Loan messages with 1-week deadline, unanticipated reminder: 1,529 (7 vs 5)
- Loan messages with 24-hour deadline, no reminder: 1,529 (same as the 1-week deadline, no reminder group)

In total, we therefore need 9,951 observations across all treatment arms to statistically detect the expected differences in take-up between the treatment groups of interest, based on outcomes in our pilot and simulations of the Ericson (2017) model. As our available sample for the experiment is 70,020 firms, far more than the required 9,951, we scale the sample size of each treatment arm up proportionally to arrive at the sizes per arm shown under "Sample size by treatment arm."
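As an illustration of the kind of pairwise power calculation described above, a textbook pooled two-proportion sample-size formula (two-sided test, alpha = 0.05, 80% power) reproduces several of the pilot comparisons. This is a hedged sketch, not the authors' actual calculation; the registration does not state the exact formula or software used.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p0, p1, alpha=0.05, power=0.80):
    """Minimum sample size per arm to detect take-up rates p0 vs p1
    with a two-sided pooled two-proportion z-test (textbook formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, ~1.96
    z_b = NormalDist().inv_cdf(power)           # power term, ~0.84
    pbar = (p0 + p1) / 2                        # pooled proportion
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

# Pilot comparisons (group numbers as in the design above):
print(n_per_arm(0.01, 0.18))  # 2 vs 1 -> 46
print(n_per_arm(0.01, 0.28))  # 5 vs 1 -> 26
print(n_per_arm(0.18, 0.28))  # 5 vs 2 -> 277
```

Under this particular formula the closely spaced comparisons (e.g. 4 vs 2) yield somewhat different figures than those reported, so the registered numbers for those comparisons presumably embed additional assumptions not stated in the record.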
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
UC Berkeley IRB
IRB Approval Date
2018-04-04
IRB Approval Number
2018-02-10796
IRB Name
UC Berkeley IRB
IRB Approval Date
2020-04-19
IRB Approval Number
2020-03-13091
Analysis Plan
Analysis Plan Documents
Search For Credit Pre-Analysis Plan

MD5: dc41a656da4c15c049311e1a6515e22d

SHA1: ad0df4b6a62a07296027d9b6f967303a54a544a6

Uploaded At: June 08, 2020