Does Manager Forecasting Matter?

Last registered on January 22, 2021

Pre-Trial

Trial Information

General Information

Title
Does Manager Forecasting Matter?
RCT ID
AEARCTR-0007058
Initial registration date
January 21, 2021

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 22, 2021, 9:31 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

PI Affiliation
Stanford University

Additional Trial Information

Status
Ongoing
Start date
2019-01-20
End date
2021-12-31
Secondary IDs
Abstract
In a panel survey of online firms, we find that the ability to correctly forecast sales is highly correlated with firm performance. Despite its apparent importance, firms are remarkably bad at forming accurate predictions: if firms simply reported their actual sales over the past 3 months as their prediction, they would perform twice as well. We posit that simply encouraging them to review the financial data already easily available to them would dramatically improve their prediction performance. We aim to run an RCT testing exactly that and to show a causal pathway from monitoring business financials to prediction performance.
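As a rough illustration of the benchmark described in the abstract, the sketch below compares a submitted forecast's error to the error of a naive forecast that simply repeats the last 3 months of sales. All field names and numbers are hypothetical; this is not the study's code.

```python
# Hypothetical sketch: comparing a submitted forecast to the naive
# "repeat the last 3 months" benchmark from the abstract.

def abs_pct_error(prediction: float, actual: float) -> float:
    """Absolute percentage error of a 3-month revenue prediction."""
    return abs(prediction - actual) / actual

# Illustrative firm; field names and values are made up.
firm = {"forecast": 9_000.0, "trailing_3mo_sales": 11_500.0, "actual": 12_000.0}

submitted_err = abs_pct_error(firm["forecast"], firm["actual"])
naive_err = abs_pct_error(firm["trailing_3mo_sales"], firm["actual"])
print(f"submitted: {submitted_err:.1%}, naive benchmark: {naive_err:.1%}")
```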
External Link(s)

Registration Citation

Citation
Bloom, Nicholas. 2021. "Does Manager Forecasting Matter?" AEA RCT Registry. January 22. https://doi.org/10.1257/rct.7058-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
In conjunction with our partner, Stripe, we have been tracking firms and their forecasting ability over time by comparing their 3-month revenue predictions on Stripe with their actual Stripe sales. To ensure that firms offer serious predictions, we reward them with a $25 gift card if their predictions are within 10% of their actual sales. We are testing two interventions for significant effects on their predictions: having firms review their prior historical data on the Stripe dashboard before making their prediction, and varying the amount of the gift card they can earn with their predictions.

The first intervention tests whether having firms review their financial details on the Stripe dashboard as part of the survey, prior to forming forecasts, can significantly increase their forecast accuracy. In particular, we have them sign in to their Stripe dashboard and report their revenue over the last 3 months before making their predictions.

On top of this, we are also testing whether changing the amount firms earn for a correct prediction can incentivize them to provide more accurate responses. We vary the amounts from $0 to $400.
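As a hedged sketch of the reward rule just described (the names are illustrative, and the uniform draw is for illustration only; the actual reward shares are given in the Experimental Design section below):

```python
import random

REWARD_AMOUNTS = [0, 25, 50, 100, 150, 200, 250, 300, 350, 400]  # dollars

def wins_reward(prediction: float, actual: float, tol: float = 0.10) -> bool:
    """A prediction 'wins' if it lands within 10% of realized Stripe revenue."""
    return abs(prediction - actual) <= tol * actual

offered = random.choice(REWARD_AMOUNTS)      # uniform here only for illustration
payout = offered if wins_reward(10_800, 12_000) else 0
print(f"offered ${offered}, paid ${payout}")  # 10,800 is exactly 10% below 12,000
```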
Intervention Start Date
2020-09-01
Intervention End Date
2021-08-31

Primary Outcomes

Primary Outcomes (end points)
Forecast accuracy, winning prediction (i.e., within 10% of actual revenue), prediction bias, time spent forming the prediction, and whether firms check their financial dashboard.
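One plausible operationalization of the forecast end points listed above (our reading, not the registered definitions):

```python
def outcome_metrics(prediction: float, actual: float) -> dict:
    """Illustrative definitions of the forecast end points listed above."""
    accuracy = abs(prediction - actual) / actual     # absolute % error (lower is better)
    bias = (prediction - actual) / actual            # signed error: >0 = over-forecast
    win = abs(prediction - actual) <= 0.10 * actual  # "winning" prediction (within 10%)
    return {"accuracy": accuracy, "bias": bias, "win": win}

print(outcome_metrics(prediction=13_000, actual=12_000))
# -> accuracy ≈ 0.083, bias ≈ 0.083, win = True
```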
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Productivity, growth rates, revenue, and survival (having a transaction in a 6-month period).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our interventions are cross-randomized, with the dashboard treatment run twice. For the first dashboard intervention, we randomly assigned 50% of the firms to control and 50% to treatment. While taking our quarterly survey, treatment firms are asked to sign in to their accounts and report their revenue over the last 3 months in the survey form. Control firms are simply asked to report their revenue and are not asked to consult their financial history. Both groups then continue to the 3-month prediction question.

In the next round of the survey, 3-4 months later, we cross-randomize again so that 25% of firms receive the dashboard treatment in both rounds, 25% receive treatment then control, 25% receive control then treatment, and 25% receive control in both rounds.

At the prediction question, firms are randomly offered different amounts of money for a prediction of the next 3 months' revenue that falls within 10% of their actual revenue. Approximately 25% of firms are not offered a prize, and 25% are offered $25, matching what we have previously offered them. The remaining 50% are split evenly, in 6.25% shares, across rewards of $50, $100, $150, $200, $250, $300, $350, and $400.
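A minimal sketch of this cross-randomization, assuming independent draws at each stage (the function names and seed are illustrative, not the study's code):

```python
import random

def assign_firm(rng: random.Random) -> dict:
    wave1 = rng.random() < 0.5   # 50/50 dashboard treatment in wave 1
    wave2 = rng.random() < 0.5   # re-randomized in wave 2 -> four 25% cells
    u = rng.random()             # reward draw: 25% $0, 25% $25, rest split evenly
    if u < 0.25:
        reward = 0
    elif u < 0.50:
        reward = 25
    else:                        # remaining 50% -> 6.25% each across 8 amounts
        reward = rng.choice([50, 100, 150, 200, 250, 300, 350, 400])
    return {"dash_wave1": wave1, "dash_wave2": wave2, "reward": reward}

rng = random.Random(0)  # fixed seed so the illustration is reproducible
print(assign_firm(rng))
```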
Experimental Design Details
Randomization Method
We used the last few characters of firms' account ID numbers to sort them into their treatment arms. These account ID numbers are as good as random.
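A minimal sketch of suffix-based assignment under that assumption (the ID format and the base-36 mapping are ours; the study's exact rule is not described):

```python
def arm_from_account_id(account_id: str, n_arms: int = 2) -> int:
    """Map the last few characters of an account ID to a treatment arm."""
    suffix = account_id[-4:]         # "last few characters"
    return int(suffix, 36) % n_arms  # base-36 handles both letters and digits

print(arm_from_account_id("acct_1FQX9ZK2"))  # hypothetical Stripe-style ID -> 0 or 1
```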
Randomization Unit
Firm
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
8549 Firms
Sample size: planned number of observations
8549 Firms
Sample size (or number of clusters) by treatment arms
1st Dashboard Treatment: 4271 Treatment, 4278 Control
2nd Dashboard Treatment: 4274 Treatment, 4275 Control
Reward Treatment: $0 - 1860, $25 - 2289, $50 - 504, $100 - 618, $150 - 459, $200 - 637, $250 - 501, $300 - 510, $350 - 521, $400 - 650
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford University
IRB Approval Date
2020-10-08
IRB Approval Number
IRB-47925

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials