A Scientific Approach to Decision-Making: Experimental Evidence from Italy

Last registered on October 13, 2020

Pre-Trial

Trial Information

General Information

Title
A Scientific Approach to Decision-Making: Experimental Evidence from Italy
RCT ID
AEARCTR-0006578
Initial registration date
October 11, 2020

First published
October 13, 2020, 9:15 AM EDT

Locations

Region

Primary Investigator

Affiliation
INSEAD

Other Primary Investigator(s)

PI Affiliation
Bocconi University
PI Affiliation
Bocconi University

Additional Trial Information

Status
Completed
Start date
2015-12-01
End date
2016-11-30
Secondary IDs
Abstract
A classical approach to collecting and processing information to make entrepreneurial decisions combines search heuristics such as trial and error, effectuation, and confirmatory search. This study tests, instead, a framework for exploring the implications of a more scientific approach to entrepreneurial decision making. The panel sample of our randomized controlled trial includes 116 Italian startups and 16 data points over a period of about one year. Both the treatment and control groups receive 10 sessions of general training on how to obtain feedback from the market and gauge the feasibility of their idea. We teach the treated startups to develop frameworks for predicting the performance of their idea and to conduct rigorous tests of their hypotheses, much as scientists do in their research. We let the firms in the control group, instead, follow their intuitions about how to assess their idea, which typically produces fairly standard search heuristics.
External Link(s)

Registration Citation

Citation
Camuffo, Arnaldo, Alfonso Gambardella, and Chiara Spina. 2020. "A Scientific Approach to Decision-Making: Experimental Evidence from Italy." AEA RCT Registry. October 13. https://doi.org/10.1257/rct.6578-1.0
Sponsors & Partners

There is information in this trial that is unavailable to the public; access can be requested through the Registry.

Experimental Details

Interventions

Intervention(s)
Startups received 10 sessions of training at Bocconi University, Milan. Five sessions were frontal lectures lasting 3.5 hours each, and five were one-hour sessions per startup with mentors, for both treated and control firms. The duration and overall content of the intervention were the same for both groups. However, treated startups were taught, in each of the four steps of the process, to frame, identify, and validate the problem; to formulate falsifiable hypotheses; and to test them in a rigorous fashion (using data and experiments), including defining valid and reliable metrics and establishing clear thresholds for concluding whether a hypothesis is corroborated or not. “Scientific” problem framing and identification, hypothesis formulation, and rigorous testing were integrated into both the content of the frontal lectures and the feedback mentors provided to the treated firms during the one-to-one meetings – for example, mentors encouraged startups to think about the broader framework of their idea and the customers’ problem they were trying to solve, to formulate falsifiable hypotheses, and to test them rigorously. This encouragement was not offered to the control group, where startups received, during both the lectures and the one-to-one meetings, general instructions about the importance of keeping their business models or products flexible, seeking and eliciting customer feedback, and using this information to experiment with different solutions before choosing a final business model or product. This approach encouraged them to conduct these activities based on their own intuitions, heuristics, and approaches.
We offered the same number of hours of training to both groups to ensure that there was no effect in the treatment other than the scientific approach to entrepreneurial decision making. The training program consisted of 10 sessions: five lectures lasting 3.5 hours each for both treatment and control groups, and five individual coaching sessions lasting one hour each, in which mentors provided advice and coaching to each startup. The program was offered on Saturdays, alternating the five frontal lectures with the individual coaching sessions every other Saturday. The same instructor taught the five frontal lectures. Each startup was randomly assigned to a mentor who provided advice and mentorship during the five one-hour individual coaching sessions. Overall, 21 mentors were involved. Each mentor supported three startups from the treatment group and three from the control group. Both the instructor (frontal lectures) and the mentors had significant mentorship experience. The authors designed and conducted “training the trainers” activities about the scientific approach and standardized the teaching materials within and across the experimental groups and the coaching process across mentors. Our research team coordinated the activities and ensured that the learning modules and mentoring activities conducted by the instructor and mentors were balanced between treated and control startups.
Intervention Start Date
2016-03-05
Intervention End Date
2016-06-04

Primary Outcomes

Primary Outcomes (end points)
Exit. In the panel regressions, this is a binary variable that takes the value 0 until the firm exits (abandons the program and ceases the startup), 1 in the time period in which the firm drops out, and a missing value thereafter. In the cross-section regressions, it is a dummy equal to 1 if the firm exits at any point, and 0 otherwise. To avoid attrition bias, we checked that the entrepreneurs who informed us of their decision to discontinue their initiative had truly abandoned their activity. None of the firms that exited from our sample had yet made heavy investments in their company.
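For illustration only (this code is not part of the registration), the exit variable described above could be coded in Stata roughly as follows; the dataset structure and the variable names firm, period, and drop_period (the period in which a firm drops out, missing if it never does) are hypothetical.

    * Panel: one row per firm-period, periods 1-16 (hypothetical layout).
    gen exit = 0
    replace exit = 1 if period == drop_period
    replace exit = . if period > drop_period & !missing(drop_period)
    * Cross-section: dummy equal to 1 if the firm exits at any point.
    bysort firm: egen exit_cs = max(exit == 1)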
Pivot. In the panel regressions, this is a binary variable that takes the value 1 in the exact time period in which the firm made a major change to its business model, and 0 whenever the firm does not pivot. Firms that made more than one such major change take the value 1 in each period in which a major change occurs. In the cross-section, this variable is equal to the total number of major changes made by the firm during our time frame. We defined a change as major by analyzing whether the entrepreneur moved from the original idea to another idea that changed the core value proposition of the product or service offered or its target customers.
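As a similarly hypothetical sketch, the cross-sectional pivot count could be obtained by summing the panel dummy within firm:

    * Total number of major changes per firm over the 16 periods (illustrative).
    bysort firm: egen pivot_cs = total(pivot)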
Revenue. In the panel regressions, this is the firm’s revenue flow in euros between any two of our 16 time periods. To control for outliers, in addition to the raw value we show results using revenue winsorized at the 99th percentile and the inverse hyperbolic sine (asinh) transformation of revenue.
Time_to_exit, Time_to_acquisition, Time_to_activation, Time_to_revenue. In the survival regressions, these variables measure the week in which a startup exits, acquires its first client, activates its first client, or begins earning revenue. To account for censored observations, the survival regressions also rely on failure dummies equal to 1 for firms that experience exit, acquisition, activation, or revenue, and 0 for firms that do not experience the event. Acquisition and activation are standard performance measures that capture early signals of performance (Blank and Dorf 2012, Jackson et al. 2015, Ripsas et al. 2015).
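A minimal Stata sketch of the revenue transformations and of one possible survival setup, assuming hypothetical variable names (revenue, weeks_to_revenue, revenue_fail, treated); the registration does not specify the survival estimator, so the Cox model below is only one illustration.

    * Winsorize revenue at the 99th percentile and apply the inverse hyperbolic sine.
    _pctile revenue, p(99)
    gen revenue_w99 = cond(revenue > r(r1) & !missing(revenue), r(r1), revenue)
    gen revenue_ihs = asinh(revenue)
    * Time to first revenue, with censored firms coded revenue_fail = 0.
    stset weeks_to_revenue, failure(revenue_fail)
    stcox treated    // treated = assignment dummy; estimator chosen for illustration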
Intervention. In the panel regressions, this is a binary variable equal to 1 for all the observations of the treated firms, and 0 for all the other observations. In the cross-section, it is equal to 1 for the treated firms. In robustness checks we employ cumulative_intervention and postintervention. Bloom et al. (2013) also use these variables as alternative specifications to the intervention dummy. Cumulative_intervention takes values 1 to 8 for the treated startups during the first eight periods in which the firms underwent their training. It is equal to 8 for these firms for the other eight observations after the training, and it is equal to 0 for the control startups during the entire period.
Postintervention is a binary variable equal to 1 for the treated startups after the training (i.e., periods 9–16) and 0 for all the observations of the control group. Cumulative_intervention takes into account the capabilities accrued during the training period (cumulative learning), and we use it together with postintervention to account for different effects in the post-training periods.
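An illustrative Stata sketch (not the registered code) of how the three treatment variables described above could be constructed in the 16-period panel, assuming a 0/1 assignment dummy treated and a period counter running from 1 to 16:

    * intervention: 1 in every period for treated firms, 0 otherwise.
    gen intervention = treated
    * cumulative_intervention: 1-8 during the eight training periods, 8 afterwards, 0 for controls.
    gen cumulative_intervention = cond(treated == 1, min(period, 8), 0)
    * postintervention: 1 for treated firms in post-training periods 9-16, 0 otherwise.
    gen postintervention = (treated == 1 & period >= 9)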
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We built on existing teaching material, tailoring it to our experiment: for the treatment group, we further stressed the scientific component of the validated learning process used by existing providers, but we kept the program as it was for the control group. In this way, we offered a meaningful learning experience to startups in both groups while ensuring that the only difference was the scientific method.
We promoted our own training program to nascent startups. We focused on these firms because they are neither established startups, whose past experience could affect the experiment, nor people who are only remotely evaluating the possibility of becoming entrepreneurs and who are therefore more likely to drop out for lack of commitment. We did not restrict participation to particular industries. We advertised the course through digital channels as a general course covering the important aspects of new venture creation – market sizing, business model creation and analysis, how to create a landing page, relevant startup data analytics and accounting, and so forth. This helped us attract many startups and avoid self-selection by those interested in only some aspects of the training. To encourage the participation of qualified and motivated startups, we advertised that the training would end with a private event where participant startups could meet with investors. The course was free, to ensure the participation of firms with limited financial resources. The call was launched in November 2015 and remained open until mid-January 2016. We received 202 applications.
Before beginning the training, we asked the startups to sign a document, approved by the Ethical Committee of Bocconi University, stating that Bocconi University was investigating the determinants of startup success and that we were providing management advice and training to firms and collecting performance data. In other words, they knew that they were participating in an activity in which we offered a free service in exchange for monitoring their actions for educational and research purposes. We also told them that there were two groups of startups and that there were some differences in the content of the training program. However, they did not know whether they were part of the treatment or the control group.
Startups received 10 sessions of training at Bocconi University, Milan. Five sessions were frontal lectures lasting 3.5 hours each, and five were one-hour sessions per startup with mentors, for both treated and control firms. The duration and overall content of the intervention were the same for both groups. However, treated startups were taught, in each of the four steps of the process, to frame, identify, and validate the problem; to formulate falsifiable hypotheses; and to test them in a rigorous fashion (using data and experiments), including defining valid and reliable metrics and establishing clear thresholds for concluding whether a hypothesis is corroborated or not. “Scientific” problem framing and identification, hypothesis formulation, and rigorous testing were integrated into both the content of the frontal lectures and the feedback mentors provided to the treated firms during the one-to-one meetings – for example, mentors encouraged startups to think about the broader framework of their idea and the customers’ problem they were trying to solve, to formulate falsifiable hypotheses, and to test them rigorously. This encouragement was not offered to the control group, where startups received, during both the lectures and the one-to-one meetings, general instructions about the importance of keeping their business models or products flexible, seeking and eliciting customer feedback, and using this information to experiment with different solutions before choosing a final business model or product. This approach encouraged them to conduct these activities based on their own intuitions, heuristics, and approaches.
We offered the same number of hours of training to both groups to ensure that there was no effect in the treatment other than the scientific approach to entrepreneurial decision making. The program was offered on Saturdays, alternating the five frontal lectures with the individual coaching sessions every other Saturday. The same instructor taught the five frontal lectures. Each startup was randomly assigned to a mentor who provided advice and mentorship during the five one-hour individual coaching sessions. Overall, 21 mentors were involved. Each mentor supported three startups from the treatment group and three from the control group. Both the instructor (frontal lectures) and the mentors had significant mentorship experience. The authors designed and conducted “training the trainers” activities about the scientific approach and standardized the teaching materials within and across the experimental groups and the coaching process across mentors.
Our research team coordinated the activities and ensured that the learning modules and mentoring activities conducted by the instructor and mentors were balanced between treated and control startups. To avoid contamination between the two groups, the research team held the 10 sessions at different times of the same day (morning and afternoon) and kept all communication to the two groups of startups distinct and segregated. This included creating two separate Facebook groups, each publicized to no one but the teams in the relevant group. We systematically monitored startups’ learning and performance by collecting data via phone interviews from March to November 2016. We conducted telephone interviews because we could assess the actual use of a scientific approach only by knowing the activities the startups engaged in at their own locations, away from the training sessions.
Experimental Design Details
Randomization Method
Simple randomization performed through STATA.
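The randomization code is not part of the registration; a minimal Stata sketch consistent with the description above (simple, non-stratified randomization at the firm level into 59 treated and 57 control firms) might look as follows. The file name and seed are placeholders, not the values actually used.

    * One observation per firm admitted to the program (hypothetical file name).
    use startups_baseline, clear
    set seed 20160101            // placeholder seed, not the one actually used
    gen u = runiform()
    sort u
    gen treated = (_n <= 59)     // 59 firms to treatment, remaining 57 to control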
Randomization Unit
Firm
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No clusters
Sample size: planned number of observations
116 firms x 16 time periods = 1,856 firm-period observations
Sample size (or number of clusters) by treatment arms
59 firms in the treatment group and 57 firms in the control group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Below are compromise power calculations (compute implied α and power, with β/α ratio = 1) for a small, medium, and large effect, based on a two-tailed point-biserial correlation t test with a total sample size of 116 (df = 114):
Small effect, |ρ| = 0.1: noncentrality parameter δ = 1.0824589, critical t = 0.8665832, α err prob = 0.3879907, β err prob = 0.3879907, power (1 − β) = 0.6120093.
Medium effect, |ρ| = 0.3: noncentrality parameter δ = 3.3871120, critical t = 1.8643557, α err prob = 0.0648441, β err prob = 0.0648441, power (1 − β) = 0.9351559.
Large effect, |ρ| = 0.5: noncentrality parameter δ = 6.2182527, critical t = 3.2220323, α err prob = 0.001659539, β err prob = 0.001659539, power (1 − β) = 0.9983405.
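For reference, the reported power values can be re-derived from the noncentral t distribution, using the noncentrality parameter for a point-biserial correlation test, δ = |ρ|·sqrt(N)/sqrt(1 − ρ²). The Stata sketch below is purely illustrative (it is not the original computation) and checks the medium-effect case, |ρ| = 0.3, against the figures above.

    * Two-tailed power for a point-biserial correlation test, N = 116, |rho| = 0.3.
    local N     = 116
    local rho   = 0.3
    local df    = `N' - 2
    local delta = `rho' * sqrt(`N') / sqrt(1 - `rho'^2)   // noncentrality, approx. 3.387
    local tcrit = 1.8643557                               // critical t from the compromise analysis
    display "power = " 1 - nt(`df', `delta', `tcrit') + nt(`df', `delta', -`tcrit')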
IRB

Institutional Review Boards (IRBs)

IRB Name
Bocconi IRB
IRB Approval Date
2015-11-02
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public; access can be requested through the Registry.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials