
Fields Changed

Registration

Field Before After
Trial Start Date August 26, 2020 October 20, 2020
Trial End Date October 31, 2020 December 25, 2020
Last Published August 26, 2020 11:49 AM October 18, 2020 04:49 PM
Intervention (Public)

Before: We will assess whether changing evaluation practices affects the investment decisions of investors. For the first treatment, we will ask investors to weight a list of evaluation criteria provided in proportion to their importance before evaluating the startups (between welcome screen 1 and decision screen 2). Next, we will ask investors to evaluate each startup. We will ask the control group to weight their criteria after they have evaluated the startups. For the second treatment, we will edit the term sheet that investors receive and place an additional investment from a “public” fund. After the investor has evaluated, if they respond “no” on the binary startup evaluation question, we will increase the percentage of ownership from 10% of the deal size in 10% increments until the investor switches to “yes” or we reach the limit of the deal size.

After: For the first treatment, we will ask investors to weight a list of evaluation criteria provided in proportion to their importance before evaluating the startups. Next, we will ask investors to evaluate the startups. In contrast, we will ask the control group to weight their criteria after they have evaluated the startups. For the second treatment, we will edit the term sheet that investors receive and place an additional investment from a “public” fund. After the investor has evaluated, if they respond “no” on our binary startup evaluation question, we will increase the percentage of ownership until the investor switches to “yes” or we reach the limit of the deal size.
Intervention Start Date August 26, 2020 October 20, 2020
Intervention End Date October 31, 2020 December 25, 2020
Primary Outcomes (Explanation)

Before: We will assess the dependent variable – the propensity to invest in a startup led by a female CEO – with a judge score, using a seven-point Likert scale from “strongly disagree” to “strongly agree.” Following Clingingsmith & Shane (2018), we will ask investors to answer four questions on whether they would take further action to invest in the startup: 1) “I would pursue a follow-up meeting to learn more about the venture.”; 2) “I would be interested in seeing the business plan for this venture.”; 3) “I would recommend this opportunity to a co-investor.”; 4) “I would initiate due diligence on this venture.” We will aggregate responses into a score from 4 to 28. To think more closely about how an investor decision might play out in practice, where investors either do or do not take further action, we also ask a binary question: “Imagine you need to make an urgent decision because the round closes this week. Given the information you have on this deal, would you take this startup forward and shorten your due diligence process?”

After: We assess the dependent variable – the propensity to invest in a startup founded by a female CEO – in two ways. First, following Clingingsmith & Shane (2018), we use an aggregate score variable from 4 to 28. We will ask investors to answer four questions on whether they would take further action to invest in the startup using a seven-point Likert scale: 1) “I would pursue a follow-up meeting to learn more about the venture.”; 2) “I would be interested in seeing the business plan for this venture.”; 3) “I would recommend this opportunity to a co-investor.”; 4) “I would initiate due diligence on this venture.” To think more closely about how an investor decision might play out in practice, where investors either do or do not take further action, we also ask a binary question, drawing from previously collected interview data: “Imagine the round closes this week. Given the information you have on this venture, would you shorten your due diligence process to assess the venture?” (We will explain what this might entail for an organization.) To analyze the effect of the treatments on an investor’s propensity to invest, we will use OLS, probit and logit regressions.
Experimental Design (Public)

Before: We will assess the effectiveness of the two treatments on the propensity of investors to invest in a startup led by a female CEO. The two interventions are (1) weighting evaluation criteria prior to assessment of candidates; and (2) receiving a valuation signal from the market, in which investors are told that the firm has received an additional investment from a “public” fund – in this case, a US-government-backed fund taking a junior position. Our hypotheses predict that both treatment 1 and treatment 2 will increase the propensity to invest in firms led by female CEOs. For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO, and we will assess the effect of this treatment using a between-subject design. We predict only a directional effect for hypothesis 1 in this study. We will assess the effect of treatment 2 using a within-subject design, which will result in a sample size of 200 decisions. We predict a significant effect to test hypothesis 2, based on an expected effect size of d = 0.23. We are unlikely to find a significant effect for the size of public investment that is most likely to change the propensity of investment, but will again receive directional information. We will also track potential control variables at the investor level and organizational level to assess potential directional heterogeneous effects of treatment.

After: We will assess the effectiveness of the two treatments on the propensity of investors to invest in a startup founded by a female CEO. Our hypotheses predict that both treatment 1 and treatment 2 will increase the propensity to invest in firms founded by female CEOs. For treatment 1, we will assess the effect of the treatment on startups led by female CEOs only. We will have a sample size of 100 investor decisions on startups led by a female CEO, and we will assess the effect of this treatment using a between-subject design. We predict only a directional effect for hypothesis 1 in this study. We will assess the effect of treatment 2 using a within-subject design, which will result in a sample size of 200 decisions, with 100 decisions made on startups with female founder-CEOs. We will assess the effects on all startups as well as those led by female CEOs. We predict a significant positive effect on all startups, and a directional effect on startups led by female CEOs.
Planned Number of Clusters

Before: First, each investor will make two decisions; each investor is a cluster. Second, we will cluster by online event, which will result in 2 or more clusters. We have planned 2 events and need a sample size of 150 investors. If, after the two events, we have not reached 150 investor responses, we will continue to run events until we reach this sample.

After: First, each investor will make two decisions; each investor is a cluster. Second, we will cluster by online event, which will result in 2 or more clusters. We have planned 2 events and need a sample size of 150 investors. Third, we will cluster by the startup pair, which will also signify the industry (either fintech or education) where investments are made. If, after the two events, we have not reached 150 investor responses, we will continue to run events until we reach this sample.
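Since each investor contributes two decisions, a natural implementation of the investor-level clustering above is cluster-robust standard errors. The sketch below is illustrative only, on simulated data with hypothetical variable names (`investor_id`, `treated`, `score`); the registration does not prescribe this exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_investors = 150  # planned investor sample

# Each investor makes two decisions, so each investor forms one cluster
df = pd.DataFrame({
    "investor_id": np.repeat(np.arange(n_investors), 2),
    "treated": rng.integers(0, 2, size=2 * n_investors),
})
# Simulated 4-28 aggregate score with a small treatment effect
df["score"] = 16 + 2 * df["treated"] + rng.normal(0, 4, size=len(df))

# OLS with standard errors clustered at the investor level
m = smf.ols("score ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["investor_id"]}
)
print(m.summary())
```

Clustering by online event or startup pair would work the same way, with the corresponding group identifier passed as `groups`.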
Sample size (or number of clusters) by treatment arms

Before: For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO (50 treatment investors making two decisions each and weighting the criteria before investing, and 50 control investors each making one decision without any treatment and weighting their criteria after investing). For treatment 2, we will have a sample size of 200 decisions (each of the 100 investors will see one investment with treatment information on the term sheet, and one control without).

After: For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO (50 treatment investors making two decisions each and weighting the criteria before investing, and 50 control investors each making one decision without any treatment and weighting their criteria after investing). For treatment 2, we will have a sample size of 200 total investor decisions on startups, and 100 investor decisions on startups led by a female CEO, in a within-subject design.
Power calculation: Minimum Detectable Effect Size for Main Outcomes

Before: For hypothesis 1, the expected effect of this treatment is d = 0.4 on the decision (Uhlmann & Cohen, 2005), and we will need a sample size of 272 decisions to find evidence of a significant effect at this level. We predict only a directional effect for hypothesis 1 in this study. For hypothesis 2, we will have a sample size of 200 decisions. To predict a significant effect, we will need a minimum expected effect size of d = 0.23. The effect of small SBIR grants on investor funding is a positive effect of 0.1 (Howell, 2017), but we believe that an equity investment signal will be more powerful.

After: For hypothesis 1, the expected effect of this treatment is d = 0.4 (Uhlmann & Cohen, 2005), and we will need a sample size of 272 decisions to find evidence of a significant effect at this level. We predict only a directional effect for hypothesis 1 in this study. For hypothesis 2a, we will have a sample size of 200 decisions. To observe a significant effect, we will need a minimum expected effect size of d = 0.23. For hypothesis 2b, we will have a sample size of 100 decisions. To observe a significant effect, we will need a minimum expected effect size of d = 0.33.
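Calculations of this kind can be reproduced with standard power-analysis tools. The sketch below uses an independent-samples t-test at alpha = .05 and power = .80 as a simple baseline; the registration's exact figures (272 decisions, d = 0.23, d = 0.33) likely reflect additional design details (within-subject correlation, clustering) that this baseline does not capture.

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# Sample size per group needed to detect d = 0.4 (two-sided, alpha = .05, power = .80)
n_per_group = power.solve_power(effect_size=0.4, alpha=0.05, power=0.8)

# Minimum detectable effect size for a fixed number of decisions per arm
mdes_100 = power.solve_power(nobs1=100, alpha=0.05, power=0.8)

print(n_per_group, mdes_100)
```

For within-subject comparisons, `statsmodels.stats.power.TTestPower` (the paired/one-sample variant) is the closer analogue.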
Intervention (Hidden)

Before: We will assess whether changing evaluation practices affects the investment decisions of 150 investors (at least) at online webinar events about decision-making in uncertain conditions. We will share typical investment criteria and startup briefs on two of three real startups that previously passed through the same accelerator training program. All startups address the same problem statement in the same geography and the same business sector to control for industry-specific variation in investment preferences. All applicants passed through a competitive selection process and received similar (high) evaluations at the end of the training program, ensuring they are of similar underlying stage and quality. Investors will receive a real investor brief used in the training program, with information on: the core team; notable progress and current fundraise; the problem (the firm will solve); its proposed solution; market; competition; sales and growth; and financials. Investors will also receive a standard cap table, typical of real startups that are accepted into an accelerator program, with founder ownership, lead investor, additional investor and option pool constant in proportions and very close in amounts. We will ask investors to evaluate two hypothetical investment decisions (i.e. whether they would hypothetically follow up with the firm, ask more questions and/or proceed to due diligence). Investors will be asked to share their decisions with other participants in an “investment committee” after they have made an individual decision, to help to ensure that they pay attention to the task. We will also assess real-world interest by asking investors whether they would like to be introduced to the firms. Professional investors will be randomly assigned 2 of 3 startups in random case order. We will randomly change the gender of the founder-CEO for half the sample. Each investor will assess one firm led by a female CEO and one by a male CEO. We use normalized photographs from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015) and the most popular names in the US from 1990 to represent each CEO. We selected two photographs from the database, matched by race to account for status, age to account for a potential experience effect, and attractiveness because good-looking men are more likely to successfully raise money in pitch competitions (Brooks et al., 2015). We will test the causal effects of two changes: one to evaluation practice, by manipulating the process by which investors are asked to evaluate investments, and the other to valuation practice. For the first treatment, we will ask investors to weight a list of evaluation criteria provided in proportion to their importance before evaluating the startups (between welcome screen 1 and decision screen 2). We chose the four top criteria found to be important to early stage investors in a US-based survey (Gompers, Gornall, Kaplan, Strebulaev, 2020). Next, we will ask investors to evaluate each startup. We will ask the control group to weight their criteria after they have evaluated the startups. For the second treatment, a valuation signal from the market, we will edit the term sheet that investors receive and place an additional investment from a “public” fund – in this case, a US-government-backed fund taking a junior position, with an equity cap: in the event of a return over the cap, senior investors will split the gains (this type of equity cap deal is necessary for many publicly-backed funds to manage their finances). After the investor has evaluated, if they respond “no” on the binary question, we will increase the percentage of ownership from 10% of the deal size in 10% increments until the investor switches to “yes” or we reach a majority of the deal size.

After: We will assess whether changing evaluation processes affects the investment decisions of 150 investors (at least) at online webinar events about decision-making in uncertain conditions. We will share typical investment criteria and startup briefs on two of four real startups that previously passed through the same accelerator training program. All startups address the same problem statement in the same geography (North America) and the same business sector (either fintech or education) to control for industry-specific variation in investment preferences. All applicants passed through a competitive selection process and received similar (high) evaluations at the end of the training program, ensuring they are of similar underlying stage and quality. Investors will receive a real investor brief used in the training program, with information on: the core team; notable progress and current fundraise; the problem (the firm will solve); its proposed solution; market; competition; sales and growth; and financials. Investors will also receive a standard cap table, typical of real startups that are accepted into an accelerator program, with founder ownership, lead investor, additional investor and option pool constant in proportions and very close in amounts. We will ask investors to evaluate two hypothetical investment decisions (i.e. whether they would hypothetically take the next steps in the investment process with the startup, whether they would proceed to a shorter due diligence process if necessary, and what additional information they would need to invest). Investors will be asked to share the criteria they used and their decisions with other participants in an “investment committee” after they have evaluated two startups, to help to ensure that they pay attention to the task. We will also assess real-world interest by asking investors whether they would like to be introduced to the firms. Professional investors will be randomly assigned 2 of 4 startups in random case order. We will randomly change the gender of the founder-CEO for half the sample. We will randomize the gender of the founder-CEO using two popular names in the US from 1990, and photographs holding clothing and background constant (Chicago Face Database; Ma, Correll, & Wittenbrink, 2015). We selected two photographs from the database, matched by race to account for status, age to account for a potential experience effect, and attractiveness because good-looking men are more likely to successfully raise money in pitch competitions (Brooks et al., 2015). Because we are running a lab experiment that only manipulates the photograph and name of the founder-CEO, we thought about how to make gender more salient. Therefore, we opted to make all other cofounders female. If the gender coefficient has a negative sign, we theorize that this is because investors would prefer not to invest in a firm led by a female founder-CEO. (An alternative explanation may be that investors prefer diverse teams and would therefore rank an all-female-led team lower. We will attempt to measure this diversity explanation as well as the founder-CEO explanation in Study 2.) We will test the causal effects of two changes: one to evaluation practice, by manipulating the order in which investors are asked to evaluate investments, and the other to legitimating information from the market. Evaluation practice treatment (T1): We ask investors to weight a list of four evaluation criteria important to early stage investors (Gompers et al., 2020) before evaluating the startups: “Please think about how you make your decisions and weight the criteria below with percentages of how much weight you would place on each criterion. (Please make sure it adds up to 100%!) [The criteria are: Management team; Product / technology; Total addressable market; Business model / competitive decision; Other (please specify)].” We will pipe these criteria through to a table the investor can see when evaluating their startups. We will ask the control group to weight their criteria after they have evaluated two startups. Legitimating information treatment (T2): We manipulate the information that investors receive on fundraising. The control group will receive text on the actual fundraising progress of each company, i.e. “Beta has raised $1.75M to date and is raising a $1.5M Seed Round, of which $500K is committed from private investors”. The treatment group will receive additional information about an investment from a recognized public fund that invests 10% of the round size using a capped return rate, common to public equity investors: “Beta has raised $1.75M to date and is raising a $1.5M Seed Round, of which $500K is committed from private investors, and $175K is committed from Catapult* (*Catapult is a US government (SBA)-backed fund and aims to increase investment in more diverse founders in North America. Catapult takes a junior equity position with a capped return at 1.05x. Any proceeds remaining after Catapult’s capped return will be distributed to the other investors in the round. For example, if Catapult invested $100K in a round and an exit yields a 10x return for investors in that round, Catapult's return potential would be $1M. However, since their return is capped at 1.05x, they would only take $105K, and $895K would be distributed pro rata amongst the other investors who participated in this round. In the case of a liquidation event, the other institutional investors would receive their capital before Catapult.)”
PI as first author No Yes