
Do Firms’ Behavioral Biases Delay the Search for Credit During the Covid-19 Outbreak?

Last registered on June 08, 2020

Pre-Trial

Trial Information

General Information

Title
Do Firms’ Behavioral Biases Delay the Search for Credit During the Covid-19 Outbreak?
RCT ID
AEARCTR-0005782
Initial registration date
April 29, 2020


First published
May 01, 2020, 3:36 PM EDT


Last updated
June 08, 2020, 11:45 AM EDT


Locations

Region

Primary Investigator

Affiliation
Baruch College, The City University of New York

Other Primary Investigator(s)

PI Affiliation
University of California, Berkeley
PI Affiliation
University of California, Berkeley
PI Affiliation
Northwestern University

Additional Trial Information

Status
In development
Start date
2020-04-30
End date
2020-12-31
Secondary IDs
Abstract
The goal of this study is to estimate the effect of deadlines and reminders on firms seeking out credit during the economic downturn caused by the Covid-19 outbreak. Credit can help firms weather the downturn, but its benefits accrue in the future while the cost of obtaining it is borne immediately. Forgetfulness and present bias may exacerbate this cost and lead firms never to search for or request a loan. We will study whether treatments involving deadlines and reminders (anticipated and unanticipated) help firms overcome these biases and search for a loan.
External Link(s)

Registration Citation

Citation
Gertler, Paul et al. 2020. "Do Firms’ Behavioral Biases Delay the Search for Credit During the Covid-19 Outbreak?" AEA RCT Registry. June 08. https://doi.org/10.1257/rct.5782-2.0
Experimental Details

Interventions

Intervention(s)
We will contact firms to offer them the opportunity to learn about a potential loan from a fintech company. The firms are clients of an electronic payment processing company and will be informed about the loan opportunity through concurrent email and SMS messages. To test whether forgetfulness and present bias hinder the search for credit, firms will be randomly assigned to offers with different combinations of deadlines and anticipated or unanticipated reminders.
Intervention Start Date
2020-04-30
Intervention End Date
2020-05-07

Primary Outcomes

Primary Outcomes (end points)
Our primary outcomes are whether firms click a link in the email or SMS to obtain information about the loan offer and whether they apply for a loan. Furthermore, we will structurally estimate the model in Ericson (2017) using the experimental data to quantify the relative importance and interaction of different behavioral biases in preventing firms from applying for credit.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Offers with deadlines will state that the firm has one week or one day to access the link and learn about the loan opportunity. Firms assigned an unanticipated reminder will be sent a reminder to access the link one week after the initial offer. Firms assigned an anticipated reminder will also be sent a reminder one week later, and will be told about this upcoming reminder in the initial email and SMS. In total, our design has eight treatment groups:

1. Control, no offer
2. Loan offer with no deadline, no reminder
3. Loan offer with no deadline, anticipated reminder
4. Loan offer with no deadline, unanticipated reminder
5. Loan offer with 1-week deadline, no reminder
6. Loan offer with 1-week deadline, anticipated reminder
7. Loan offer with 1-week deadline, unanticipated reminder
8. Loan offer with 24-hour deadline, no reminder
Experimental Design Details
Randomization Method
Randomization done by a computer (R script)
Randomization Unit
We randomize at the individual firm level (note that outcomes are measured at the firm level) using a simple stratified randomization.

We stratify our randomization by four variables:
1. Average monthly electronic sales in the past year, or since the firm registered with the payments processing company if it was within the past year (4 quartiles)
2. Business type (6 categories: Beauty, Clothing, Professionals, Restaurants, Small Retailers and Other)
3. Tax registration status (2 categories: Self-Employed and Limited Company)
4. A proxy for the initial impact of the Covid-19 outbreak on sales as of March 2020, defined as above or below the median percent change in sales from February 2020 to March 2020. A third category contains firms with no sales in February 2020, for which the change in sales is undefined.

We stratify on these variables because we will test for heterogeneous treatment effects along each of them (sales, business type, tax registration status, and how affected the firm was by Covid-19).
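As an illustration of this procedure, the sketch below shuffles firms within each stratum and assigns arms in rotation, so every stratum receives a balanced share of each arm. It is a minimal sketch only: the registration notes the actual randomization was done with an R script, the arm names and firm fields here are hypothetical, and the real design uses eight arms with unequal allocations rather than the equal shares shown.

```python
import random
from collections import defaultdict

# Hypothetical arm labels for illustration; the registered design has
# eight arms with unequal allocation shares.
ARMS = ["control", "no_deadline_no_reminder", "week_deadline_no_reminder"]

def stratified_assign(firms, arms, seed=0):
    """Shuffle firms within each stratum and assign arms in rotation,
    so each stratum receives a balanced share of every arm."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for firm in firms:
        # Stratum key mirrors the four registered stratification variables.
        key = (firm["sales_quartile"], firm["business_type"],
               firm["tax_status"], firm["covid_impact"])
        strata[key].append(firm)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, firm in enumerate(members):
            assignment[firm["id"]] = arms[i % len(arms)]
    return assignment
```

Assigning within strata in this way guarantees that each block of the 4 × 6 × 2 × 3 design contributes near-proportionally to every arm, which is what makes the planned heterogeneity tests well powered.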
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
70,020 clusters (individual firms).

Our stratification includes 144 blocks: 4 (prior sales quartiles) * 2 (tax registration status) * 6 (business types) * 3 (impact on sales from Covid-19 outbreak)
Sample size: planned number of observations
70,020
Sample size (or number of clusters) by treatment arms
1. Control, no offer: 327
2. Loan offer with no deadline, no reminder: 10355
3. Loan offer with no deadline, anticipated reminder: 8592
4. Loan offer with no deadline, unanticipated reminder: 10362
5. Loan offer with 1-week deadline, no reminder: 10765
6. Loan offer with 1-week deadline, anticipated reminder: 8104
7. Loan offer with 1-week deadline, unanticipated reminder: 10755
8. Loan offer with 24-hour deadline, no reminder: 10760
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Sample sizes for each group were informed by a prior pilot test and by pairwise power calculations for the comparisons of interest. The pilot included a control group and groups receiving offers with and without a 24-hour deadline, each with an unanticipated reminder. For the power calculations we assume similar take-up under a 24-hour and a one-week deadline absent other treatments. The take-up rates from the pilot and the sample sizes needed to detect the corresponding differences are (group numbers follow the Sample Size section):

2 vs 1: P0 = 0.01, P1 = 0.18, required sample size = 46
5 vs 1: P0 = 0.01, P1 = 0.28, required sample size = 26
5 vs 2: P0 = 0.18, P1 = 0.28, required sample size = 277
4 vs 2: P0 = 0.14, P1 = 0.18, required sample size = 1,472
7 vs 5: P0 = 0.20, P1 = 0.24, required sample size = 1,529

The pilot had no anticipated-reminder treatment group. To obtain an expected effect size and perform power calculations for those groups, we benchmark the pilot results against simulations of the model in Ericson (2017), assuming standard magnitudes for present bias, forgetfulness, and memory overconfidence. The model predicts the ratio of the difference in take-up between groups with unanticipated and anticipated reminders to the difference in take-up between groups with unanticipated and no reminders, measured in the period when the reminder is sent just before the deadline. We apply this ratio to the pilot's difference in take-up between the unanticipated- and no-reminder groups to obtain estimated take-up rates for groups with an anticipated reminder.

With these simulated take-up rates, the sample sizes needed to detect differences in the relevant pairwise comparisons are:

4 vs 3: P0 = 0.14, P1 = 0.18, required sample size = 1,222
7 vs 6: P0 = 0.24, P1 = 0.29, required sample size = 1,152

To set the sample size for each group, we take the largest sample size required across its relevant pairwise comparisons. We calculate that 7,253 observations in total are needed to statistically detect differences in take-up between the treatment groups of interest. Since our available sample is larger, we scale group sizes up proportionally.
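The registration does not state the formula behind these figures, but a standard two-sided two-proportion z-test at 5% significance and 80% power reproduces several of them (for example, 46 for the 2 vs 1 comparison and 277 for 5 vs 2); this should be read as a plausible reconstruction, not the authors' exact calculation. A minimal sketch:

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_n(p0, p1, alpha=0.05, power=0.80):
    """Per-group sample size to detect p0 vs p1 with a
    two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # power quantile
    p_bar = (p0 + p1) / 2                       # pooled proportion under H0
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

print(two_proportion_n(0.01, 0.18))  # 2 vs 1 -> 46
print(two_proportion_n(0.18, 0.28))  # 5 vs 2 -> 277
```

The 4 vs 2 and 7 vs 5 figures in the registration do not match this formula exactly, so those comparisons were presumably computed under slightly different assumptions (e.g., a different power target or a continuity correction).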
IRB

Institutional Review Boards (IRBs)

IRB Name
UC Berkeley IRB
IRB Approval Date
2018-04-04
IRB Approval Number
2018-02-10796
IRB Name
UC Berkeley IRB
IRB Approval Date
2020-04-19
IRB Approval Number
2020-03-13091
Analysis Plan

Analysis Plan Documents

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials