
Fields Changed

Trial
Last Published
Before: May 18, 2021 09:41 AM
After: June 14, 2021 04:20 PM
Primary Outcomes (End Points)
Before: Dependent Variable: The dependent variable is the propensity to invest in a startup.
After: Dependent Variable: The dependent variable is propensity to invest in a startup.
Primary Outcomes (Explanation)

Before:
Dependent Variable: The dependent variable is Yijp, the propensity to invest in a startup by investor i in startup j within paired program p. We will measure the dependent variable using two methods, and will build our model around the scale and qualitative-proportion measurements:
1. Binary: Before meeting each company, each professional investor will be asked "Do you want to meet this company?" [Definitely want to meet; Would meet if there's time; No thank you]. Responses will be coded as binary in two ways: first, 1 = definitely want to meet, 0 = other; second, 1 = definitely want to meet or would meet if there's time, 0 = no thank you.
2. Qualitative: Each professional investor will be asked what additional information they need from the entrepreneur. We will assess the proportion of promotion- vs. prevention-focused questions (Kanze et al. 2018).
Performance-reward bias: the normalized gap (male normalized qualitative proportion minus female normalized qualitative proportion); see Castilla (2008). Intuitively: holding the scale rank fixed, how does the qualitative score differ by the gender of the entrepreneur?

After:
Dependent Variable: The dependent variable is Yp, the propensity to invest in a startup by paired program p. The four paired programs will take place in four geographic regions and include entrepreneurs from across those regions: Sub-Saharan Africa, India, MENA, and Latin America. This results in a total sample of eight programs (four treatment and four control), with one treatment and one control program in each location. We use fixed effects for the paired program in all regressions. (We will only include fixed effects for the investor in pooled regressions when we join the sample with the professional investors.) We will measure the dependent variable using four methods.
For the first treatment (prompting consistent enquiry), our primary dependent variable will be qualitative, following Kanze et al. (2018). For the second treatment (evaluating demonstrated competence), our primary dependent variable will be the scales, inspired by Clingingsmith and Shane's (2018) dependent variable.
1. Scales: Each trainee investor will evaluate each startup on a scale. The baseline evaluation takes place on a 6-point scale. Thereafter, evaluators will use a 24-point scale (control group) or a 32-point scale (treatment group). All scale evaluations are normalized within program using a z-score.
2. Binary: Each trainee investor will know that the top two rated startups will receive investment, and will therefore carefully consider which startups they place in the top two.
3. Qualitative: Each trainee investor will be asked what additional information they need from the startup. Trainee investors will also ask for additional information in conversations; combined, this will form a secondary dependent variable. All responses will be coded as "promotion-focused" or "prevention-focused", and we will assess the proportion of promotion- vs. prevention-focused questions (Kanze et al. 2018).
4. Performance-reward bias: the normalized gap (male normalized qualitative proportion minus female normalized qualitative proportion); see Castilla (2008). Intuitively: holding the scale rank fixed, how does the qualitative score differ by the gender of the entrepreneur?
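As an illustration, the measurement methods above could be sketched in Python. This is a minimal sketch under our own assumptions about data shapes (per-program score lists, pre-coded question labels); the function names and inputs are ours, not part of the registration.

```python
from statistics import mean, stdev

def zscore_by_program(scores_by_program):
    """Method 1: normalize raw scale evaluations within each paired program."""
    normalized = {}
    for program, scores in scores_by_program.items():
        mu, sd = mean(scores), stdev(scores)
        normalized[program] = [(s - mu) / sd for s in scores]
    return normalized

def promotion_proportion(question_codes):
    """Method 3: share of promotion-focused questions (per Kanze et al. 2018).
    question_codes is a list of 'promotion' / 'prevention' labels."""
    return sum(c == "promotion" for c in question_codes) / len(question_codes)

def performance_reward_bias(male_norm_prop, female_norm_prop):
    """Method 4 (per Castilla 2008): gap in normalized qualitative proportions
    between male- and female-led ventures at the same scale rank."""
    return male_norm_prop - female_norm_prop
```

For example, `zscore_by_program({"p1": [20, 22, 24]})` maps the middle score to 0 and the endpoints to -1 and +1, so scores are comparable across programs with different rating tendencies.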
Randomization Unit
Before: individual
After: individual investor

Planned Number of Clusters (unchanged): Each trainee investor is a cluster and will make at least 9 decisions. Each professional investor is a cluster and will make at least 3 decisions.

Planned Number of Observations (unchanged): At least 1,500 individual investor decisions, from at least 200 individual investors.

Sample size (or number of clusters) by treatment arms (unchanged): For treatment 1 and treatment 2, I will have a sample size of at least 1,500 individual investor decisions, of which 456 concern female founders. These stem from at least 200 individual investors.

Power calculation: Minimum Detectable Effect Size for Main Outcomes (unchanged): The minimum detectable effect size for the ANCOVA calculations is 0.225.
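A minimum detectable effect for a clustered two-arm design like this one could be computed along the following lines. This is a hedged sketch: the significance level, power, intraclass correlation, and cluster size below are our own illustrative assumptions, not registered values, so the number it produces need not match the registered 0.225 (which comes from an ANCOVA calculation).

```python
from statistics import NormalDist

def mde_clustered(n_per_arm, alpha=0.05, power=0.80, icc=0.05, cluster_size=9):
    """Minimum detectable effect (in SD units) for a two-arm comparison of
    investor decisions, deflating the sample for within-investor clustering.
    alpha, power, icc, and cluster_size here are illustrative assumptions."""
    z = NormalDist().inv_cdf
    deff = 1 + (cluster_size - 1) * icc   # design effect for clustered decisions
    n_eff = n_per_arm / deff              # effective decisions per arm
    return (z(1 - alpha / 2) + z(power)) * (2 / n_eff) ** 0.5
```

With the 1,500 decisions split evenly, `mde_clustered(750)` gives roughly 0.17 standard deviations under these assumed parameters; a higher intraclass correlation or smaller power would push the figure up.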
Intervention (Hidden)

Before:
How can organizations intervene to foster the objective evaluation of novel ideas? We will examine whether changing investors' evaluation practices affects real funding decisions for trainee and professional investors evaluating startups in the field, over time. Everything is subject to change after we collect baseline data on the first two programs (estimated by or before June 14).

Interventions
1) Connect will prompt consistent inquiry. When interviews are unstructured, evaluators evaluate women worse than men (Rivera 2012, 2015). In equity investment, investors tend to ask female founders more "prevention" or risk-focused questions than male founders, and male founders more "promotion" or growth-focused questions than female founders. This is linked to worse funding outcomes for female founders (Kanze et al. 2018).
2) Connect will ask investors to evaluate using demonstrated competence. When orchestras blinded auditions, more women were hired (Goldin & Rouse 2000), perhaps because evaluators focused on the work task rather than the appearance of candidates (Stephens et al. 2020). In equity investment, our interviews suggest that some investors focus on demonstrated competence by evaluating "what progress has been made… over time, you can definitely gather a lot of data". Setting the criteria in advance of evaluation can result in less biased hiring in other settings (Stephenson et al. 2020). Asking managers to weight their criteria before they evaluate has also been shown to reduce retroactive criteria construction and increase hiring of non-gender-normative candidates (Uhlmann & Cohen 2005). To ensure that investors actually evaluate using demonstrated competence, we also ask them to apply predefined criteria.
3) Connect will share prior evaluations. Organizational transparency, "making relevant, accessible, and accurate…information available", can help to decrease inequity in real hiring outcomes (Castilla 2015: 315).

Setting
We will leverage a real investment setting, Connect. Connect is the "largest organization in the world supporting impact-driven, seed-stage startups. Since 2009 our team has directly worked with more than 1,100 entrepreneurs in 28 countries, and our affiliated fund, VilCap Investments, has invested in 110 startups that have gone on to raise more than $4 billion in follow-on capital." Connect will run eight programs: four paired treatment and control programs across the regions Africa, India, MENA, and Latin America. Each paired program will select at least 20 startups (up to 24 if they wish to accept more), with at least 30% of those startups led by female founders. We will leverage this setting in two ways.

TRAINEE INVESTORS
During the program, Connect will train entrepreneurs to be trainee investors, making real investments on behalf of the program over three months. Trainee investors will be asked to evaluate the other startups in their program (at least nine) and choose two to receive a $20,000 investment. Each trainee investor will evaluate over four evaluation periods, where the fourth evaluation determines who receives the investment. We will have access to 720 funding decisions by trainee investors. Researchers will randomize trainee investors into treatment and control programs of at least ten startups each. This randomization will be stratified by region, gender, and venture subsector (so that entrepreneurs' ventures are not competitors). Connect will implement all three interventions we designed for the trainee investors.

1) Prompting consistent enquiry: At the end of all four evaluation periods, Connect will ask trainee investors: "what additional information would you want on this venture?" For the treatment group, Connect will ask: "what additional information would you want on this venture's potential for growth?" and "what additional information would you want on how this venture will mitigate risks?"

2) Evaluating using demonstrated competence: During the second, third and fourth evaluation periods, Connect will ask investors to complete due diligence and rank companies. Connect will ask control trainee investors their normal set of evaluation questions ("what is the company's growth opportunity?" and "what is the company's investment opportunity?") across eight categories (e.g. team, value proposition, market, scale), using a 4-point scale per category, resulting in a 24-point scale overall. In the control group, after rank 1, Connect will ask investors which criteria they used, as a mechanism check: "Please think about how you made your decisions and weight the criteria below with percentages of how much weight you placed on each criterion. (Please make sure it adds up to 100%!)" [Growth opportunity; Investment opportunity; Progress made during program].

For the treatment group, Connect will add four questions, each on a 4-point scale, weighted to equal 1/3 of the overall evaluation set: "Since the beginning of the program, how much has this company improved in understanding its path to growth?", "Since the beginning of the program, how much has this company improved in executing on its path to growth?", "Since the beginning of the program, how much has this company improved in executing its path to growth?", and "Since the beginning of the program, how much has this company improved in understanding its risks?" For the treatment group, before the second evaluation period (rank 1), Connect will ask investors which criteria they will use: "Please think about how you make your decisions and weight the criteria below with percentages of how much weight you would place on each criterion. (Please make sure it adds up to 100%!)" [Growth opportunity; Investment opportunity; Improvement in company strategy].

3) Sharing prior evaluations: TBD (estimated June 30, 2021). During the second, third and fourth evaluation periods, Connect will ask investors to complete due diligence and rank companies. Before the third evaluation period (rank 2), Connect will share the differences between rankings by evaluators who prioritized progress and those who prioritized other elements. Connect managers will then re-share the importance of using progress. Note: we will define this exact treatment based on the data we collect during rank 1, and add to this register after rank 1 (estimated June 30, 2021).

PROFESSIONAL INVESTORS
The program will invite professional investors to meet the startups, and researchers will track progress through due diligence processes and potential eventual investment decisions for 18 months after the program. We will have access to at least 720 funding decisions by professional investors (8 programs, with 30 investors each evaluating at least 3 startups).
1) Prompting consistent enquiry: Professional investors will receive surveys during multiple evaluation periods (before the program, when deciding who to meet; after meeting the startups; and six, twelve, and 18 months after the program). Connect will ask professional investors (all mentors) in the control group: "what additional information would you want on this venture?" For the treatment group, Connect will ask: "what additional information would you want on this venture's potential for growth?" and "what additional information would you want on how this venture will mitigate risks?" This information will be shared with startups at each stage, provided all startups in the cohort receive at least one comment. Professional investors will be randomized into treatment and control groups during every evaluation period. Therefore, some professional investors will be treated more than others, which will result in different levels of treatment over time.

2) Evaluating using demonstrated competence: TBD (estimated September 30, 2021). After the program, researchers will follow up with the investors that met the companies during the program. They will receive surveys six, twelve, and 18 months after the program. The treatment group of investors will receive a reminder of their assessments of the companies. Note: we will define this exact treatment based on the data we collect over the program, after the program ends.

After:
How can organizations intervene to foster the objective evaluation of novel ideas? We will examine whether changing investors' evaluation practices affects real funding decisions for trainee and professional investors evaluating startups in the field, over time. Everything is subject to change after we collect baseline data on the first two programs (estimated by or before June 14).

Interventions
1) Connect will prompt consistent inquiry. When interviews are unstructured, evaluators evaluate women worse than men (Rivera 2012, 2015). In equity investment, investors tend to ask female founders more "prevention" or risk-focused questions than male founders, and male founders more "promotion" or growth-focused questions than female founders. This is linked to worse funding outcomes for female founders (Kanze et al. 2018).
2) Connect will ask investors to evaluate using demonstrated competence. When orchestras blinded auditions, more women were hired (Goldin & Rouse 2000), perhaps because evaluators focused on the work task rather than the appearance of candidates (Stephens et al. 2020). In equity investment, our interviews suggest that some investors focus on demonstrated competence by evaluating "what progress has been made… over time, you can definitely gather a lot of data". Setting the criteria in advance of evaluation can result in less biased hiring in other settings, because evaluators are more likely to use the criteria (Stephenson et al. 2020). For example, asking managers to weight their criteria before they evaluate has been shown to reduce retroactive criteria construction and increase hiring of non-gender-normative candidates (Uhlmann & Cohen 2005). To ensure that investors actually evaluate using demonstrated competence, we also ask them to apply predefined criteria.
3) As a third, supporting intervention, Connect will share prior evaluations after the first evaluation period. Organizational transparency, "making relevant, accessible, and accurate…information available", can help to decrease inequity in real hiring outcomes (Castilla 2015: 315).

Setting
We will leverage a real investment setting, Connect. Connect is the "largest organization in the world supporting impact-driven, seed-stage startups. Since 2009 our team has directly worked with more than 1,100 entrepreneurs in 28 countries, and our affiliated fund, Connect Investments, has invested in 110 startups that have gone on to raise more than $4 billion in follow-on capital." Connect will run eight programs: four paired treatment and control programs across the regions Africa, India, MENA, and Latin America. Each paired program will select at least 20 startups (up to 24 if they wish to accept more), with at least 30% of those startups led by female founders. We will leverage this setting in two ways.

TRAINEE INVESTORS
During the program, Connect will train entrepreneurs to be trainee investors, making real investments on behalf of the program over three months. Trainee investors will be asked to evaluate the other startups in their program (at least nine) and choose two to receive a $20,000 investment. Each trainee investor will evaluate over four evaluation periods, where the fourth evaluation determines who receives the investment. During the second, third and fourth evaluation periods, Connect will ask investors to complete due diligence and rank companies. We will have access to 720 funding decisions by trainee investors. Researchers will randomize trainee investors into treatment and control programs of at least ten startups each. This randomization will be stratified by region, gender, and venture subsector (so that entrepreneurs' ventures are not competitors). Connect will implement all three interventions we designed for the trainee investors.

1) Prompting consistent enquiry: At the end of all four evaluation periods, Connect will ask trainee investors: "what additional information would you want on this venture?" For the treatment group, Connect will ask: "what additional information would you want on this venture's potential for growth?" and "what additional information would you want on how this venture will mitigate risks?"

2) Evaluating using demonstrated competence: During the second, third and fourth evaluation periods, Connect will ask investors to complete due diligence and rank companies. Connect will ask control trainee investors their normal set of evaluation questions ("what is the company's growth opportunity?" and "what is the company's investment opportunity?") across eight categories (e.g. team, value proposition, market, scale), using a 4-point scale per category, resulting in a 24-point scale overall (from 4 to 32). In the control group, after the final rank, Connect will ask investors which criteria they used, as a mechanism check: "Please think about how you made your decisions and weight the criteria below with percentages of how much weight you placed on each criterion. (Please make sure it adds up to 100%!)" [Growth opportunity; Investment opportunity; Improvement made during program].

For the treatment group, Connect will add four questions, each on a 4-point scale, weighted to equal 1/3 of the overall evaluation set: "Since the beginning of the program, how much has this company improved in understanding its path to growth?", "Since the beginning of the program, how much has this company improved in executing its path to growth?", "Since the beginning of the program, how much has this company improved in understanding its risks?", and "Since the beginning of the program, how much has this company improved in executing on risk mitigation?" For the treatment group, before the second evaluation period (rank 1), Connect will ask investors which criteria they will use: "Please think about how you make your decisions and weight the criteria below with percentages of how much weight you would place on each criterion. (Please make sure it adds up to 100%!)" [Growth opportunity; Investment opportunity; Improvement made during program].

3) Sharing prior evaluations: TBD (estimated June 30, 2021). As a supporting intervention, before the third evaluation period, Connect will share the differences between rankings by evaluators who prioritized improvement and those who prioritized other elements. Connect managers will then re-share the importance of using improvement in evaluating companies. Note: we will define this exact treatment based on the data we collect during rank 1, and add to this register after rank 1 (estimated June 30, 2021).

PROFESSIONAL INVESTORS
As a secondary population, we will observe professional investors that participate in Connect's programming. The program will invite professional investors to meet the startups, and researchers will track progress through due diligence processes and potential eventual investment decisions for 18 months after the program. We will have access to at least 720 funding decisions by professional investors (8 programs, with 30 investors each evaluating at least 3 startups).

1) Prompting consistent enquiry: Professional investors will receive surveys during multiple evaluation periods (before the program, when deciding who to meet; after meeting the startups; and six, twelve, and 18 months after the program). Connect will ask professional investors (all mentors) in the control group: "what additional information would you want on this venture?" For the treatment group, Connect will ask: "what additional information would you want on this venture's potential for growth?" and "what additional information would you want on how this venture will mitigate risks?" Professional investors will be randomized into treatment and control groups during every evaluation period. Therefore, some professional investors will be treated more than others, which will result in different levels of treatment over time.

2) Evaluating using demonstrated competence: TBD (estimated September 30, 2021). After the program, researchers will follow up with the investors that met the companies during the program. They will receive surveys six, twelve, and 18 months after the program. Note: we will define this exact treatment based on the data we collect over the program, after the program ends.
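The two randomization schemes described above, one-off stratified assignment for trainee investors and fresh per-period assignment for professional investors, could be sketched as follows. This is our own illustrative Python under assumed data shapes (dict records with `region`, `gender`, and `subsector` keys); the registration does not specify the assignment mechanics at this level of detail.

```python
import random
from collections import defaultdict

def stratified_assign(entrepreneurs, seed=0):
    """Assign trainee investors to treatment/control within strata of
    region x gender x venture subsector (the keys are illustrative)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for e in entrepreneurs:
        strata[(e["region"], e["gender"], e["subsector"])].append(e)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        cut = len(members) // 2  # odd strata place the extra unit in control
        for i, e in enumerate(members):
            assignment[e["id"]] = "treatment" if i < cut else "control"
    return assignment

def period_doses(investor_ids, n_periods=5, seed=1):
    """Re-randomize professional investors each evaluation period; the count
    of treated periods then measures each investor's treatment dose."""
    rng = random.Random(seed)
    dose = {i: 0 for i in investor_ids}
    for _ in range(n_periods):
        for i in rng.sample(investor_ids, len(investor_ids) // 2):
            dose[i] += 1
    return dose
```

The second function makes concrete why "some professional investors will be treated more than others": across the five survey waves, each investor accumulates between zero and five treated periods, giving a continuous measure of exposure.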
Other Primary Investigators
Affiliation
Before: (blank)
After: GIL - World Bank

Email
Before: (blank)
After: [email protected]
Affiliation
Before: (blank)
After: GIL - World Bank

Email
Before: (blank)
After: [email protected]