Why are firms slow to adopt profitable business practices? Evidence on the roles of present bias, forgetfulness, and overconfidence about memory

Last registered on October 06, 2020

Pre-Trial

Trial Information

General Information

Title
Why are firms slow to adopt profitable business practices? Evidence on the roles of present bias, forgetfulness, and overconfidence about memory
RCT ID
AEARCTR-0006540
Initial registration date
September 28, 2020


First published
September 29, 2020, 7:32 AM EDT


Last updated
October 06, 2020, 3:48 PM EDT


Locations

Region

Primary Investigator

Affiliation
Northwestern University

Other Primary Investigator(s)

PI Affiliation
Baruch College, The City University of New York
PI Affiliation
University of California, Berkeley
PI Affiliation
University of California, Berkeley

Additional Trial Information

Status
In development
Start date
2020-09-29
End date
2021-04-01
Secondary IDs
Abstract
Why are micro, small, and medium enterprises slow to adopt profitable business practices? We test the role of three behavioral biases: present bias, limited memory, and overconfidence about memory. In partnership with a payments financial technology (FinTech) provider in Mexico, we randomly offer businesses that are already users of the payments technology the opportunity to be charged a lower merchant fee for each payment they receive. We randomly vary whether the firms face a deadline to register for this lower fee, whether they receive a reminder, and whether the reminder is anticipated (i.e., whether we tell them in advance that they will receive a reminder on a certain date).
External Link(s)

Registration Citation

Citation
Gertler, Paul et al. 2020. "Why are firms slow to adopt profitable business practices? Evidence on the roles of present bias, forgetfulness, and overconfidence about memory." AEA RCT Registry. October 06. https://doi.org/10.1257/rct.6540-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We randomly offer a cost-saving measure to firms that already use a financial technology (FinTech) to process electronic debit and credit card payments: through the FinTech company that processes their payments, we offer to lower the merchant fee they are charged for each sale made through the technology. The fee reduction is offered through an email and SMS campaign, in which firms can complete a short form to obtain the lower fee. Firms are further randomized into groups that receive different versions of the emails and SMS messages, combining a one-week deadline, a one-day deadline, or no deadline with an anticipated reminder, an unanticipated reminder, or no reminder. Firms assigned to an anticipated or unanticipated reminder group will receive a follow-up reminder email the day before the deadline. Each email will be complemented by two SMS text messages containing similar information in a condensed format.
Intervention Start Date
2020-09-29
Intervention End Date
2020-10-06

Primary Outcomes

Primary Outcomes (end points)
Our primary outcome is a dummy variable indicating whether a firm took up the lower merchant fee. We will also use this outcome to estimate the model in Ericson (2017) to quantify the relative importance and interaction of different behavioral biases in preventing firms from making a profitable decision.
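For concreteness, the take-up comparisons could be estimated with a linear probability model along the following lines. This is an illustrative R sketch with hypothetical object names (df, took_up, arm, stratum); the registration does not commit to this exact specification.

    library(fixest)

    # df: one row per firm, with took_up (0/1), arm (factor, control as the
    # reference level), and stratum (business type x baseline sales quartile)
    fit <- feols(took_up ~ i(arm, ref = "control") | stratum,
                 data = df, vcov = "hetero")
    summary(fit)  # coefficients: take-up differences relative to control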
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Through link tracking, we will be able to tell which firms clicked the link. Thus, a secondary outcome will be whether a firm clicked the link, regardless of whether it actually completed the form. (Note that for the email links, we can track the specific user who clicked the link. For the SMS links, we can only track the number of clicks by treatment arm and day, and cannot match clicks to individual users.)

We will also measure the elasticity of firm sales with respect to the experimental change in fee. Thus we will use asinh(sales) and asinh(number of transactions) as secondary outcomes, where asinh() is the inverse hyperbolic sine transformation, a log-like transformation that can deal with 0s. We will also use levels as robustness checks of these outcomes: sales (winsorized at 5%), number of transactions (winsorized at 5%), and a dummy for whether the firm made any sale in the time period.
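A minimal R sketch of this outcome construction follows, with hypothetical column names; the winsorization is shown two-sided at the 5% level, though the text could equally mean top-coding only.

    # log-like transforms that are defined at zero
    df$asinh_sales <- asinh(df$sales)
    df$asinh_trans <- asinh(df$n_transactions)

    # 5% winsorization and the any-sale dummy
    winsorize <- function(x, p = 0.05) {
      q <- quantile(x, c(p, 1 - p), na.rm = TRUE)
      pmin(pmax(x, q[1]), q[2])
    }
    df$sales_w <- winsorize(df$sales)
    df$trans_w <- winsorize(df$n_transactions)
    df$any_sale <- as.integer(df$sales > 0)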
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Messages with deadlines will have either a one-day or a one-week deadline. For the one-day deadline, the messages will state that the link must be clicked and the form completed “with a deadline of today, September 29.” For the one-week deadline, they will state that the form must be completed “with a deadline of October 6.” Firms in the reminder groups will receive a reminder on October 5; in the groups with both a deadline and a reminder, the reminder will also restate the deadline. Firms in the unanticipated reminder groups will receive the October 5 reminder to activate the lower fee without any advance notice. Firms in the anticipated reminder groups will receive the same October 5 reminder and will additionally be told in the initial email and SMS message that they will receive a reminder on that date if they have not yet activated the lower fee.

As a result, we will have 15 treatment arms:
1. Control, no messages.
2. Messages with no deadline, no reminder, 3.00% offer.
3. Messages with no deadline, anticipated reminder, 3.00% offer.
4. Messages with no deadline, unanticipated reminder, 3.00% offer.
5. Messages with one-week deadline, no reminder, 3.00% offer.
6. Messages with one-week deadline, anticipated reminder, 3.00% offer.
7. Messages with one-week deadline, unanticipated reminder, 3.00% offer.
8. Messages with one-day deadline, no reminder, 3.00% offer.
9. Messages with no deadline, no reminder, 2.75% offer.
10. Messages with no deadline, anticipated reminder, 2.75% offer.
11. Messages with no deadline, unanticipated reminder, 2.75% offer.
12. Messages with one-week deadline, no reminder, 2.75% offer.
13. Messages with one-week deadline, anticipated reminder, 2.75% offer.
14. Messages with one-week deadline, unanticipated reminder, 2.75% offer.
15. Messages with one-day deadline, no reminder, 2.75% offer.
Experimental Design Details
Randomization Method
Randomization was conducted in the R programming language to assign firms to one of the fifteen treatment arms. No rerandomization was conducted to ensure balance; however, we did test balance on the variables we stratified on (business type and baseline sales quartiles) as well as on other baseline variables available in the administrative data (tax status of the firm, gender of the merchant, and above/below-median time using the technology).
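Balance checks of this kind can be run along the following lines; this is an illustrative R sketch with hypothetical object names (df, arm, and the covariate columns), not the program actually used.

    library(fixest)

    # baseline covariates, coded here as numeric or dummy variables
    covariates <- c("tax_status", "female_merchant", "above_median_tenure")
    balance_tests <- lapply(covariates, function(v) {
      fm <- feols(as.formula(paste(v, "~ i(arm, ref = 'control')")),
                  data = df, vcov = "hetero")
      wald(fm)  # joint test that all treatment-arm coefficients are zero
    })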
Randomization Unit
The randomization unit is the firm, and we use stratified randomization with two stratification variables. First, we stratify on baseline sales quartiles, defined by average monthly sales from July 2019 to August 2020 (or from the month in which the firm entered the data to August 2020, if it entered later than July 2019). Second, we stratify on business type, which has six categories: Beauty, Clothing, Professionals, Restaurants, Small Retailers, and Other. In total we have 21 strata rather than the 24 possible cells, because some interactions of business type and baseline sales quartile were empty. For example, among the “Beauty” business types, every firm was in the 3rd or 4th quartile of baseline sales, so the “Beauty” x 1st quartile and “Beauty” x 2nd quartile strata were empty.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
34,010 firms
Sample size: planned number of observations
34,010 firms
Sample size (or number of clusters) by treatment arms
1. Control (no messages): 4010.
2. No deadline, no reminder with 3.00% offer: 2230.
3. No deadline, anticipated reminder with 3.00% offer: 1838.
4. No deadline, unanticipated reminder with 3.00% offer: 2227.
5. One-week deadline, no reminder with 3.00% offer: 2311.
6. One-week deadline, anticipated reminder with 3.00% offer: 1755.
7. One-week deadline, unanticipated reminder with 3.00% offer: 2316.
8. One-day deadline, no reminder with 3.00% offer: 2324.
9. No deadline, no reminder with 2.75% offer: 2230.
10. No deadline, anticipated reminder with 2.75% offer: 1838.
11. No deadline, unanticipated reminder with 2.75% offer: 2229.
12. One-week deadline, no reminder with 2.75% offer: 2311.
13. One-week deadline, anticipated reminder with 2.75% offer: 1753.
14. One-week deadline, unanticipated reminder with 2.75% offer: 2316.
15. One-day deadline, no reminder with 2.75% offer: 2322.

Sampling frame: The FinTech company has two types of rates: a fixed rate that is independent of the firm’s monthly sales, and a “smart rate” that is fixed if the firm makes up to 20,000 pesos in sales in a given month and begins decreasing once it makes over 20,000 pesos. To define our sampling frame we identified firms within a certain range of monthly sales. Specifically, we set a maximum of 20,000 pesos in monthly sales for firms on the smart rate (since their status quo rate begins to fall at higher sales), and no maximum for firms on the fixed rate. For the minimum, we rely on a randomized pilot we conducted with 11,755 firms in May 2019, in which we offered a smaller fee reduction, from 3.75% to 3.50%. In that pilot, the take-up rate of the lower fee was increasing in baseline sales, and the elasticity of sales through the technology with respect to the fee (the quantity our partner cares about, since offering a lower fee may or may not be profitable depending on the elasticity) was statistically significant only for the fourth quartile of baseline sales. We therefore use the 75th percentile of baseline monthly sales among firms in that pilot (i.e., the minimum sales among firms in the fourth quartile) as the minimum.

As a result, our sampling frame for this experiment consists of all firms using this FinTech that had (i) the fixed rate and August 2020 sales greater than 1,400 pesos (with no maximum), or (ii) the smart rate and August 2020 sales greater than 1,400 pesos and less than 20,000 pesos. The sampling frame comprises 34,010 users.
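In R, the frame definition amounts to a filter of the following form (an illustrative sketch; firms, rate_type, and sales_aug2020 are hypothetical names).

    library(dplyr)

    frame <- firms %>%
      filter(sales_aug2020 > 1400,
             rate_type == "fixed" |
               (rate_type == "smart" & sales_aug2020 < 20000))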

Randomization: Among these 34,010 firms, 4,010 were randomly assigned to the pure control group that receives no offer; the size of this arm was based on the FinTech company’s desire to cap the number of firms receiving an offer at 30,000. The remaining 30,000 firms were assigned to one of the fourteen other groups. Firms were first randomly assigned to one of seven groups combining deadlines and reminders: (i) no deadline, no reminder; (ii) no deadline, anticipated reminder; (iii) no deadline, unanticipated reminder; (iv) one-week deadline, no reminder; (v) one-week deadline, anticipated reminder; (vi) one-week deadline, unanticipated reminder; (vii) one-day deadline, no reminder. The sample size in each of these seven offer groups was determined by power calculations using the results from our May 2019 randomized pilot, as described in more detail below. As described above, this randomization is stratified on business type and baseline sales quartiles. Then, among firms assigned to one of these seven offer groups, we cross-randomize the rate they are offered, either 3.00% or 2.75%; this cross-randomization is stratified on initial group assignment as well as business type and baseline sales quartiles. This leaves us with 15 total treatment arms.
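To make the two-stage assignment concrete, here is a minimal R sketch. The registration states only that the randomization was done in R; the code below is an illustrative reconstruction with hypothetical object names and a placeholder seed, not the authors’ program, and the group shares are those implied by the arm sizes listed above.

    library(dplyr)
    set.seed(1)  # placeholder seed

    # strata: business type x baseline sales quartile (21 non-empty cells)
    frame <- frame %>%
      mutate(sales_q = ntile(baseline_sales, 4),
             stratum = interaction(business_type, sales_q, drop = TRUE))

    # complete randomization of fixed arm counts within each stratum
    assign_arm <- function(n, arms, shares) {
      counts <- floor(shares * n)
      counts[1] <- counts[1] + (n - sum(counts))  # absorb rounding remainder
      sample(rep(arms, counts))
    }

    # stage 1: control vs. the seven deadline/reminder groups
    groups <- c("control", "none_none", "none_ant", "none_unant",
                "week_none", "week_ant", "week_unant", "day_none")
    shares <- c(4010, 4460, 3676, 4456, 4622, 3508, 4632, 4646) / 34010
    frame <- frame %>%
      group_by(stratum) %>%
      mutate(group = assign_arm(n(), groups, shares)) %>%
      ungroup()

    # stage 2: cross-randomize the offered fee within group x stratum
    frame <- frame %>%
      group_by(stratum, group) %>%
      mutate(fee = if_else(group == "control", NA_character_,
                           assign_arm(n(), c("2.75%", "3.00%"), c(0.5, 0.5)))) %>%
      ungroup()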
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We conduct two types of power calculations. First, we use observed effect sizes from our pilot in May 2019, supplemented by theoretical predictions from Ericson (2017) for the cases we did not test in the pilot, to determine the minimum sample size needed in each arm for each pairwise comparison of interest. We use these calculations to allocate our fixed total sample (30,000 firms across arms 2 through 15). Second, after conducting the randomization, we take the sample size in each arm as given and calculate the minimum detectable effect for each pairwise comparison of interest.

Note that for measuring the effects of a deadline, an anticipated reminder, and an unanticipated reminder, in the main results we will pool the 2.75% and 3.00% cross-randomized groups, as both are relevant potential fee reductions. The treatment effects will thus be a weighted average of the effect of a deadline or reminder with a 2.75% fee offer and the effect with a 3.00% fee offer.

Minimum sample size per arm: In the pilot we estimated the rates at which merchants with a 3.75% merchant fee took up an offered reduction to a 3.50% fee. Relative to a control group that received no offer, we estimated the treatment effects on take-up of an email with no deadline and of an email with a 24-hour deadline. We did not test deadlines of different lengths, as we will in this RCT, so for the expected effect of both the one-week and the one-day deadline we use the effect of the pilot’s 24-hour deadline. After three days, we also sent unanticipated reminders to merchants in the pilot, but these reminders were not randomized; we therefore estimate the expected effect of a reminder by comparing take-up just before the reminder to take-up 24 hours after it. With these groups, we use the minimum sample size equation from List, Sadoff, and Wagner (2011, equation 8), plugging in the observed P0 and P1 from the pilot to calculate n*.

Because we will pool the 2.75% and 3.00% fee cross-randomization within each treatment arm defined by the combination of deadline type and reminder type, the description below uses the following 8 (pooled) treatment arms:
T1) Control
T2) No deadline, no reminder
T3) No deadline, anticipated reminder
T4) No deadline, unanticipated reminder
T5) One-week deadline, no reminder
T6) One-week deadline, anticipated reminder
T7) One-week deadline, unanticipated reminder
T8) One-day deadline, no reminder

The minimum sample sizes per arm for each pairwise comparison of interest are:
• Control (no offer) vs. no deadline (T1 vs. T2): P0 = 0.01, P1 = 0.18. Minimum sample size per arm to detect this effect = 46.
• Control (no offer) vs. deadline (T1 vs. T5): P0 = 0.01, P1 = 0.28. Minimum sample size per arm = 26.
• No deadline vs. deadline (T2 vs. T5): P0 = 0.18, P1 = 0.28. Minimum sample size per arm = 277.
• No deadline vs. no deadline with unanticipated reminder (T2 vs. T4): P0 = 0.10, P1 = 0.14. Minimum sample size per arm = 1472.
• Deadline vs. deadline with unanticipated reminder (T5 vs. T7): P0 = 0.20, P1 = 0.24. Minimum sample size per arm = 1529.
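As an illustration, this calculation can be reproduced approximately with a short R function. The significance level and power are not stated in the registration, so conventional values (alpha = 0.05, power = 0.80) are assumed below; small differences from the numbers above may reflect different conventions or rounding.

    # minimum n per arm for a two-proportion comparison with equal arm sizes,
    # in the spirit of List, Sadoff, and Wagner (2011, equation 8)
    min_n_per_arm <- function(p0, p1, alpha = 0.05, power = 0.80) {
      z <- qnorm(1 - alpha / 2) + qnorm(power)
      ceiling(z^2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0)^2)
    }

    min_n_per_arm(0.18, 0.28)  # T2 vs. T5: ~275, close to the 277 above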
In the pilot there was no anticipated reminder treatment group. To obtain an estimate of the expected effect size of an anticipated reminder and perform power calculations, we benchmark the pilot results against simulations of the model in Ericson (2017), assuming standard magnitudes of present bias and forgetfulness from the literature, full naïveté about present bias, and accurate beliefs about memory (in Ericson’s notation, beta = 0.9, beta_hat = 1, rho = 0.95, rho_hat = 0.95). The simulations predict the ratio of the difference in take-up between the unanticipated and anticipated reminder groups to the difference in take-up between the unanticipated and no reminder groups; this ratio is measured in the period when the reminder is sent, one period before the deadline. We take the difference in take-up between the pilot groups with an unanticipated reminder and with no reminder and apply the model’s ratio to obtain a simulated take-up rate for groups with an anticipated reminder. With the resulting treatment effect ratio of 1.23, the sample sizes necessary to detect the relevant pairwise differences are:
• No deadline with unanticipated reminder vs. no deadline with anticipated reminder (T4 vs. T3): P0 = 0.14, P1 = 0.18. Minimum sample size per arm to detect this effect = 1222.
• Deadline with unanticipated reminder vs. deadline with anticipated reminder (T7 vs. T6): P0 = 0.24, P1 = 0.29. Minimum sample size per arm = 1152.
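Under one plausible reading of this scaling step (the exact formula is not spelled out above, so the mapping below is an assumption rather than the authors’ calculation), the arithmetic is:

    # assumed reading: the anticipated-minus-unanticipated gap equals the
    # model ratio times the unanticipated-minus-none gap
    ratio <- 1.23
    p_T2 <- 0.10; p_T4 <- 0.14            # pilot take-up, no-deadline groups
    p_T3 <- p_T4 + ratio * (p_T4 - p_T2)  # ~0.19; 0.18 is used above
    p_T5 <- 0.20; p_T7 <- 0.24            # pilot take-up, deadline groups
    p_T6 <- p_T7 + ratio * (p_T7 - p_T5)  # ~0.29, as used above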
To obtain a minimum sample size per arm for our study, we select for each group the largest sample size required by its relevant pairwise comparisons above. For group T8 we assume we need the same number of observations as for group T5, since both include a deadline and no reminder. This yields the following minimum sample sizes per arm to detect the expected effect sizes based on our pilot and simulations:
T1) Control: 46
T2) No deadline, no reminder: 1472
T3) No deadline, anticipated reminder: 1222
T4) No deadline, unanticipated reminder: 1472
T5) One-week deadline, no reminder: 1529
T6) One-week deadline, anticipated reminder: 1152
T7) One-week deadline, unanticipated reminder: 1529
T8) One-day deadline, no reminder: 1529

In total, we need 9,951 observations across all treatment arms to statistically detect the expected differences in take-up between the treatment groups of interest, based on our pilot outcomes and simulations of the Ericson (2017) model. Our available sample of 34,010 firms, with 4,010 assigned to control per our partner’s preferences and the remaining 30,000 available to allocate across T2 through T8, is much larger than the 9,951 needed, so we scale the sample sizes of arms T2 through T8 up proportionally to arrive at the sample sizes listed under “Sample size by treatment arm.”

MDE: For each pairwise comparison of interest among arms T1 through T8, taking as given the allocation of merchants listed under “Sample size by treatment arm,” we calculate the minimum detectable effect using the formula from List, Sadoff, and Wagner (2011, equation 5), plugging in sigma^2_0 = P0*(1 – P0) and sigma^2_1 = P1*(1 – P1). We plug in n_0 and n_1 as the sample sizes of the two arms in the comparison and P0 from our pilot data for the relevant group, then solve for P1 and hence for the MDE (= P1 – P0). We express the MDE in percentage points and also divide by the standard deviation to express it in standard deviations; the standard deviation we divide by is that for all merchants in the comparison of interest, i.e., sqrt(p_bar*(1 – p_bar)) with p_bar = (P0 + P1)/2.

• Effect of the offer (conditional on no deadline), T2 vs. T1: MDE = 0.73 percentage points = 0.07 SD.
• Effect of the offer (conditional on deadline), T5 vs. T1: MDE = 0.72 percentage points = 0.07 SD.
• Effect of a deadline (conditional on no reminder), T5 vs. T2: MDE = 2.32 percentage points = 0.06 SD. (The MDE in SD falls relative to the previous comparison even though the MDE in percentage points rises, because p_bar is higher and hence 1/SD is larger here.)
• Effect of an unanticipated reminder (conditional on no deadline), T4 vs. T2: MDE = 1.88 percentage points = 0.06 SD.
• Effect of an anticipated reminder (conditional on reminder, no deadline), T3 vs. T4: MDE = 2.22 percentage points = 0.06 SD.
• Effect of an unanticipated reminder (conditional on deadline), T7 vs. T5: MDE = 2.37 percentage points = 0.06 SD.
• Effect of an anticipated reminder (conditional on reminder, deadline), T6 vs. T7: MDE = 2.74 percentage points = 0.06 SD.
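These MDEs can be reproduced approximately with a short R function; alpha and power are again assumed to take conventional values, since the registration does not state them.

    # solve for the just-detectable P1 given arm sizes and P0, following
    # List, Sadoff, and Wagner (2011, eq. 5) with sigma_k^2 = Pk*(1 - Pk)
    mde <- function(n0, n1, p0, alpha = 0.05, power = 0.80) {
      z <- qnorm(1 - alpha / 2) + qnorm(power)
      f <- function(p1) (p1 - p0) -
        z * sqrt(p0 * (1 - p0) / n0 + p1 * (1 - p1) / n1)
      p1 <- uniroot(f, c(p0 + 1e-8, 1 - 1e-8))$root
      p_bar <- (p0 + p1) / 2
      c(MDE_pp = 100 * (p1 - p0),
        MDE_sd = (p1 - p0) / sqrt(p_bar * (1 - p_bar)))
    }

    # example: effect of a deadline (T5 vs. T2), pooling the fee arms above
    mde(n0 = 2230 + 2230, n1 = 2311 + 2311, p0 = 0.18)  # ~2.3 pp, ~0.06 SD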
IRB

Institutional Review Boards (IRBs)

IRB Name
UC Berkeley IRB
IRB Approval Date
2020-04-19
IRB Approval Number
2020-03-13091
IRB Name
UC Berkeley IRB
IRB Approval Date
2018-04-04
IRB Approval Number
2018-02-10796
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials