Experimental Design Details
We will assess whether changing evaluation practices affects the investment decisions of at least 150 investors at online webinar events about decision-making under uncertain conditions.
We will share typical investment criteria and startup briefs for two of three real startups that previously passed through the same accelerator training program. All startups address the same problem statement in the same geography and the same business sector, to control for industry-specific variation in investment preferences. All applicants passed through a competitive selection process and received similar (high) evaluations at the end of the training program, ensuring they are of similar underlying stage and quality. Investors will receive a real investor brief used in the training program, with information on: the core team; notable progress and the current fundraise; the problem the firm will solve; its proposed solution; market; competition; sales and growth; and financials. Investors will also receive a standard cap table, typical of real startups accepted into an accelerator program, with founder ownership, lead investor, additional investor, and option pool held constant in proportions and very close in amounts.
We will ask investors to make two hypothetical investment decisions (i.e., whether they would hypothetically follow up with the firm, ask more questions, and/or proceed to due diligence). Investors will be asked to share their decisions with other participants in an “investment committee” after they have made an individual decision, to help ensure that they pay attention to the task. We will also assess real-world interest by asking investors whether they would like to be introduced to the firms.
Professional investors will be randomly assigned two of the three startups, presented in random order. We will randomly change the gender of the founder-CEO for half the sample: each investor will assess one firm led by a female CEO and one led by a male CEO. We use normalized photographs from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015) and the most popular names in the US from 1990 to represent each CEO. We selected two photographs from the database, matched by race to account for status, by age to account for a potential experience effect, and by attractiveness, because good-looking men are more likely to raise money successfully in pitch competitions (Brooks et al., 2015).
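The assignment scheme above can be sketched as follows; this is a minimal illustration, and the startup labels, sample size, and seed are placeholders rather than part of the protocol:

```python
import random

STARTUPS = ["A", "B", "C"]  # placeholder labels for the three startup briefs

def assign_cases(rng: random.Random):
    """Assign one investor two of the three startups, in random order,
    showing one case with a female CEO and the other with a male CEO."""
    pair = rng.sample(STARTUPS, 2)  # two of three startups, random order
    genders = ["female", "male"]
    rng.shuffle(genders)            # which case gets which CEO gender
    return list(zip(pair, genders))

# Example: build assignments for an illustrative panel of investors
rng = random.Random(42)
panel = [assign_cases(rng) for _ in range(4)]
```

Each investor's assignment is then a list of two (startup, CEO-gender) cases, with distinct startups and exactly one case per gender condition.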
We will test the causal effects of two changes: one to evaluation practice, manipulating the process by which investors are asked to evaluate investments, and the other to valuation practice. For the first treatment, we will ask investors to weight a provided list of evaluation criteria in proportion to their importance before evaluating the startups (between welcome screen 1 and decision screen 2). We chose the four criteria found to be most important to early-stage investors in a US-based survey (Gompers, Gornall, Kaplan, & Strebulaev, 2020). Next, we will ask investors to evaluate each startup. The control group will be asked to weight the criteria after they have evaluated the startups.
For the second treatment, a valuation signal from the market, we will edit the term sheet that investors receive to include an additional investment from a “public” fund – in this case, a US-government-backed fund taking a junior position with an equity cap: in the event of a return over the cap, senior investors split the gains (this type of equity-cap deal is necessary for many publicly backed funds to manage their finances). After the investor has evaluated a startup, if they respond “no” on the binary question, we will increase the public fund's percentage of ownership from 10% of the deal size in 10% increments until the investor switches to yes or we reach a majority of the deal size.
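The incremental elicitation above can be sketched as a simple staircase procedure; the function name and the yes/no callback are illustrative, not part of the protocol:

```python
def elicit_switch_point(decides_yes):
    """Raise the public fund's ownership from 10% of the deal size in
    10-point increments until the investor switches to 'yes' or the
    share reaches a majority (here interpreted as 50%)."""
    for pct in range(10, 51, 10):  # 10%, 20%, 30%, 40%, 50%
        if decides_yes(pct):
            return pct             # ownership share at which the investor switched
    return None                    # investor never switched within the range

# Example: a hypothetical investor who switches once the share reaches 30%
switch = elicit_switch_point(lambda pct: pct >= 30)  # → 30
```

Returning `None` distinguishes investors who never switch from those with a recorded switch point, which keeps the two outcomes separate in later analysis.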