Experimental Design
We built on existing teaching material, tailoring it to our experiment: for the treatment group, we further stressed the scientific component of the validated learning process used by existing providers, whereas for the control group we kept the program as it was. In this way, we offered a meaningful learning experience to the startups in both groups while ensuring that the only difference between the groups concerned the scientific method.
We promoted our own training program to nascent startups. We focused on these firms because they are neither established startups, whose past experience could affect the experiment, nor individuals who are only remotely considering becoming entrepreneurs and are therefore more likely to drop out for lack of commitment. We did not restrict participation to particular industries. We advertised the course through digital channels as a general course covering the important aspects of new venture creation – market sizing, business model creation and analysis, how to create a landing page, relevant startup data analytics and accounting, and so forth. This helped us attract many startups and avoid self-selection by those interested in only some aspects of the training. To encourage the participation of qualified and motivated startups, we advertised that the training would end with a private event where participating startups could meet investors. The course was free, to ensure the participation of firms with limited financial resources. The call was launched in November 2015 and remained open until mid-January 2016. We received 202 applications.
Before beginning the training, we asked the startups to sign a document, approved by the Ethical Committee of Bocconi University, stating that Bocconi University was investigating the determinants of startup success and that, to this end, we were providing management advice and training to firms while collecting performance data. In other words, the startups knew that they were participating in an activity in which we offered a free service in exchange for monitoring their actions for educational and research purposes. We also told them that there were two groups of startups and that there were some differences in the content of the training program. However, they did not know whether they were part of the treatment or the control group.
Startups received 10 sessions of training at Bocconi University, Milan. Five sessions were frontal lectures lasting 3.5 hours each, and five were one-hour, one-to-one sessions with mentors; both treated and control firms received both kinds of sessions. The duration and content of the intervention were the same for both groups. However, treated startups were taught, in each of the four steps of the process, to frame, identify, and validate the problem; to formulate falsifiable hypotheses; and to test them in a rigorous fashion (using data and experiments), including defining valid and reliable metrics and establishing clear thresholds for concluding whether a hypothesis is corroborated or not. “Scientific” problem framing and identification, hypothesis formulation, and rigorous testing were integrated into both the content of the frontal lectures and the feedback mentors provided to the treated firms during the one-to-one meetings – for example, mentors encouraged startups to think about the broader framework of their idea and the customer problem they were trying to solve, to formulate falsifiable hypotheses, and to test them rigorously. This encouragement was not offered to the control group, whose startups received, during both the lectures and the one-to-one meetings, general instructions about the importance of keeping their business models or products flexible, seeking and eliciting customer feedback, and using this information to experiment with different solutions before choosing a final business model or product. Control startups were thus encouraged to conduct these activities based on their own intuitions, heuristics, and approaches.
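To illustrate the kind of rigorous test treated startups were taught to run, the following is a minimal sketch of a pre-registered, threshold-based hypothesis test. The metric (landing-page sign-up rate), the sample numbers, and the 5% threshold are hypothetical assumptions for illustration and are not taken from the actual course materials.

```python
# Hypothetical example in the spirit of the treatment: a falsifiable hypothesis
# ("at least 5% of landing-page visitors leave their e-mail address") with the
# threshold and significance level fixed BEFORE collecting the data.
from scipy.stats import binomtest

THRESHOLD = 0.05  # hypothesized minimum conversion rate (assumption)
ALPHA = 0.05      # pre-registered significance level (assumption)

visitors = 400    # hypothetical number of landing-page visitors
signups = 31      # hypothetical number of e-mail sign-ups

# One-sided test: is the observed rate significantly above the threshold?
result = binomtest(signups, visitors, p=THRESHOLD, alternative="greater")
print(f"observed rate: {signups / visitors:.3f}, p-value: {result.pvalue:.4f}")

if result.pvalue < ALPHA:
    print("Hypothesis corroborated: proceed with this customer segment.")
else:
    print("Hypothesis not corroborated: revise the hypothesis or pivot.")
```

The point of the sketch is the decision rule: by fixing the metric, the threshold, and the significance level in advance, a startup commits to a clear criterion for corroboration before seeing the data.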
We offered the same number of hours of training to both groups to ensure that the only difference between treatment and control was the scientific approach to entrepreneurial decision making. The program was offered on Saturdays, alternating the five frontal lectures with the individual coaching sessions every other Saturday. The same instructor taught the five frontal lectures. Each startup was randomly assigned to a mentor, who provided advice and mentorship during the five one-hour individual coaching sessions. Overall, 21 mentors were involved, each supporting three startups from the treatment group and three from the control group. Both the instructor (frontal lectures) and the mentors had significant mentorship experience. The authors designed and conducted “training the trainers” activities on the scientific approach, standardized the teaching materials within and across the experimental groups, and standardized the coaching process across mentors.
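As a concrete illustration of this balanced assignment, the sketch below randomly allocates three treated and three control startups to each mentor. The identifiers, the random seed, and the assumption of exactly 63 startups per group (21 mentors × 3) are hypothetical; the paper does not report the exact assignment procedure.

```python
# Minimal sketch of a balanced random assignment of startups to mentors:
# each mentor receives three treated and three control startups.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N_MENTORS = 21
treated = [f"T{i:02d}" for i in range(1, 3 * N_MENTORS + 1)]  # 63 treated IDs
control = [f"C{i:02d}" for i in range(1, 3 * N_MENTORS + 1)]  # 63 control IDs

random.shuffle(treated)
random.shuffle(control)

assignment = {
    f"mentor_{m + 1}": treated[3 * m : 3 * m + 3] + control[3 * m : 3 * m + 3]
    for m in range(N_MENTORS)
}
print(assignment["mentor_1"])  # three treated and three control startup IDs
```

Because every mentor serves an equal number of startups from each group, mentor-specific effects are balanced across the treatment and control conditions by construction.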
Our research team coordinated the activities and ensured that the learning modules and mentoring activities conducted by the instructor and mentors were balanced between treated and control startups. To avoid contamination between the two groups, the research team held the 10 sessions at different times of the same day (morning and afternoon) and kept all communication with the two groups of startups distinct and segregated. To this end, we created two separate Facebook groups, each publicized only to the teams in the corresponding experimental group. We systematically monitored startups’ learning and performance by collecting data via phone interviews from March to November 2016. We conducted telephone interviews because we could assess the actual use of a scientific approach only by knowing the activities in which the startups engaged when they were at their own locations, away from the training sessions.