Changing investors' evaluation practices

Last registered on October 18, 2020

Pre-Trial

Trial Information

General Information

Title
Changing investors' evaluation practices
RCT ID
AEARCTR-0006314
Initial registration date
August 25, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
August 26, 2020, 11:49 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
October 18, 2020, 4:49 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation
Boston University

Other Primary Investigator(s)

PI Affiliation
World Bank
PI Affiliation
World Bank
PI Affiliation
World Bank
PI Affiliation
University of Oregon

Additional Trial Information

Status
In development
Start date
2020-10-20
End date
2020-12-25
Secondary IDs
Abstract
Early-stage investors often evaluate startup candidates for investment in highly uncertain environments, where decision outcomes are non-obvious and the majority of startups will fail. Most scholarship has examined how investors work with startups post-investment, rather than how investors select startups. Yet this selection process is important, as evidence suggests that equity investors concentrate their investments on startups led by white, male founders in California. This is problematic not only for talented, diverse founders from elsewhere, but also for any investor concerned with missing promising investment opportunities. Scholars suggest that differences in context can shape which startups are considered or valued, but little is known about the process by which early-stage investors evaluate the investment worthiness of deals and the ramifications of these processes for investment decisions. Without this understanding, it is difficult to assess how evaluation processes can be changed to include consideration of more diverse types of founders. How do investment organizations source, evaluate, and value early-stage investments? How does this shape the diversity of the founders they consider? We will conduct field experiments with professional and trainee investors to assess the effect of changing two evaluation practices on investment decisions.

External Link(s)

Registration Citation

Citation
Miller, Amisha et al. 2020. "Changing investors' evaluation practices." AEA RCT Registry. October 18. https://doi.org/10.1257/rct.6314-1.1
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
For the first treatment, we will ask investors to weight a provided list of evaluation criteria in proportion to their importance before evaluating the startups; they will then evaluate the startups. In contrast, we will ask the control group to weight their criteria after they have evaluated the startups.

For the second treatment, we will edit the term sheet that investors receive to include an additional investment from a “public” fund. After the investor has evaluated the startup, if they respond “no” to our binary startup evaluation question, we will increase the percentage of ownership offered until the investor switches to “yes” or we reach the limit of the deal size.
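
A minimal sketch of this elicitation loop; the registered survey logic is implemented in Qualtrics, and the starting ownership, step size, and cap below are hypothetical placeholders:

```python
# Sketch of the iterative elicitation described above (not the registered
# Qualtrics logic); start_pct, step_pct, and max_pct are hypothetical values.
def elicit_switch_point(evaluate, start_pct=5.0, step_pct=2.5, max_pct=20.0):
    """Raise the offered ownership until the investor answers 'yes' or the deal-size limit is hit.

    `evaluate` is a callable returning True ('yes') or False ('no') for a given
    ownership percentage.
    """
    pct = start_pct
    while pct <= max_pct:
        if evaluate(pct):
            return pct  # ownership level at which the investor switches to 'yes'
        pct += step_pct
    return None  # the investor never switched within the deal-size limit

# Example: an investor who switches once offered ownership reaches 10%.
print(elicit_switch_point(lambda pct: pct >= 10.0))  # 10.0
```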
Intervention (Hidden)
We will assess whether changing evaluation processes affects the investment decisions of at least 150 investors at online webinar events about decision-making in uncertain conditions.

We will share typical investment criteria and startup briefs on two of four real startups that previously passed through the same accelerator training program. All startups address the same problem statement in the same geography (North America) and the same business sector (either fintech or education) to control for industry-specific variation in investment preferences. All applicants passed through a competitive selection process and received similar (high) evaluations at the end of the training program, ensuring they are of similar underlying stage and quality. Investors will receive a real investor brief used in the training program, with information on: the core team; notable progress and the current fundraise; the problem the firm will solve; its proposed solution; market; competition; sales and growth; and financials. Investors will also receive a standard cap table, typical of real startups accepted into an accelerator program, with founder ownership, lead investor, additional investor, and option pool held constant in proportion and very close in amount.

We will ask investors to evaluate two hypothetical investment decisions (i.e., whether they would hypothetically take the next steps in the investment process with the startup, whether they would proceed to a shorter due diligence process if necessary, and what additional information they would need to invest). Investors will be asked to share the criteria they used and their decisions with other participants in an “investment committee” after they have evaluated both startups, to help ensure that they pay attention to the task. We will also assess real-world interest by asking investors whether they would like to be introduced to the firms.

Professional investors will be randomly assigned 2 of 4 startups in random case order, as sketched below. We will randomly change the gender of the founder-CEO for half the sample, using two popular names in the US from 1990 and photographs that hold clothing and background constant (Chicago Face Database; Ma, Correll, & Wittenbrink, 2015). We selected two photographs from the database, matched by race to account for status, by age to account for a potential experience effect, and by attractiveness because good-looking men are more likely to successfully raise money in pitch competitions (Brooks et al., 2015). Because we are running a lab experiment that manipulates only the photograph and name of the founder-CEO, we considered how to make gender more salient and opted to make all other cofounders female. If the gender coefficient has a negative sign, we theorize that this is because investors would prefer not to invest in a firm led by a female founder-CEO. (An alternative explanation is that investors prefer diverse teams and would therefore rank an all-female-led team lower. We will attempt to measure this diversity explanation as well as the founder-CEO explanation in Study 2.)
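
As an illustration of this assignment structure, a minimal sketch, assuming hypothetical startup labels and a per-investor gender flip (the registered randomization is implemented with the Qualtrics randomization tool):

```python
import random

# Hypothetical startup labels; the industry pairing follows the design description.
PAIRS = {"fintech": ["Alpha", "Beta"], "education": ["Gamma", "Delta"]}

def assign_investor(rng=random):
    industry = rng.choice(list(PAIRS))      # one startup pair, i.e. 2 of the 4 startups
    cases = rng.sample(PAIRS[industry], 2)  # present the pair in random case order
    female_ceo = rng.random() < 0.5         # flip the founder-CEO's gender for half the sample (assumed per investor)
    return {"industry": industry, "cases": cases, "female_ceo": female_ceo}

print(assign_investor())
```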

We will test the causal effects of two changes: one to evaluation practices, manipulating the order in which investors weight criteria and evaluate investments, and one to the legitimating information investors receive from the market.

Evaluation practice treatment (T1). We ask investors to weight a list of four evaluation criteria important to early-stage investors (Gompers et al., 2020) before evaluating the startups: “Please think about how you make your decisions and weight the criteria below with percentages of how much weight you would place on each criterion. (Please make sure it adds up to 100%!) [The criteria are: Management team; Product / technology; Total addressable market; Business model / competitive decision; Other (please specify)].” We will pipe these criteria through to a table the investor can see when evaluating their startups. We will ask the control group to weight their criteria after they have evaluated two startups.

Legitimating information treatment (T2). We manipulate the information that investors receive on fundraising. The control group will receive text on the actual fundraising progress of each company i.e. “Beta has raised $1.75M to date and is raising a $1.5M Seed Round, of which $500K is committed from private investors”. The treatment group will receive additional information about an investment from a recognized public fund that invests 10% of the round size using a capped return rate, common to public equity investors. “Beta has raised $1.75M to date and is raising a $1.5M Seed Round, of which $500K is committed from private investors, and $175K is committed from Catapult* (*Catapult is a US government (SBA)-backed fund and aims to increase investment in more diverse founders in North America. Catapult takes a junior equity position with a capped return at 1.05x. Any proceeds remaining after investor’s capped return will be distributed to the other investors in the round. For example, if Catapult invested $100K in a round and an exit yields a 10x return for investors in that round, Catapult's return potential would be $1M. However, since their return is capped at 1.05x, they would only take $105K, and $895K would be distributed pro rata amongst the other investors who participated in this round. In the case of a liquidation event, the other institutional investors would receive their capital before Catapult.)”
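
As a check on the capped-return arithmetic in the treatment text, a minimal sketch reproducing the $105K / $895K example:

```python
# A small arithmetic check of the capped-return example in the treatment text:
# a $100K investment, a 10x exit for the round, and a 1.05x cap on Catapult's return.
def capped_return_split(invested, exit_multiple, cap_multiple):
    uncapped = invested * exit_multiple   # Catapult's return potential with no cap
    capped = invested * cap_multiple      # what Catapult actually takes
    redistributed = uncapped - capped     # distributed pro rata to the other investors
    return capped, redistributed

print(capped_return_split(100_000, 10, 1.05))  # (105000.0, 895000.0), matching the $105K / $895K in the text
```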
Intervention Start Date
2020-10-20
Intervention End Date
2020-12-25

Primary Outcomes

Primary Outcomes (end points)
Propensity to invest in diverse founders
Primary Outcomes (explanation)
We assess the dependent variable – the propensity to invest in a startup founded by a female CEO – in two ways.

First, following Clingingsmith & Shane (2018), we use an aggregate score variable ranging from 4 to 28. We will ask investors to answer four questions on whether they would take further action to invest in the startup using a seven-point Likert scale: 1) “I would pursue a follow-up meeting to learn more about the venture.”; 2) “I would be interested in seeing the business plan for this venture.”; 3) “I would recommend this opportunity to a co-investor.”; 4) “I would initiate due diligence on this venture.”

To consider how an investor decision might play out in practice, where investors either do or do not take further action, we also ask a binary question, drawing from previously collected interview data: “Imagine the round closes this week. Given the information you have on this venture, would you shorten your due diligence process to assess the venture?” (we explain what this might entail for an organization). To analyze the effect of treatments on an investor’s propensity to invest, we will use OLS, probit, and logit regressions.
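
A minimal sketch of how the two outcome measures could be constructed from the survey export, assuming hypothetical column names:

```python
import pandas as pd

# Hypothetical column names for the four Likert items (1-7) and the binary question.
LIKERT_ITEMS = ["follow_up_meeting", "see_business_plan",
                "recommend_to_coinvestor", "initiate_due_diligence"]

def build_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Aggregate score following Clingingsmith & Shane (2018): sum of the four items, ranging 4-28.
    out["invest_score"] = out[LIKERT_ITEMS].sum(axis=1)
    # Binary measure: would the investor shorten due diligence?
    out["shorten_dd"] = (out["shorten_dd_answer"] == "Yes").astype(int)
    return out
```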

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will assess the effectiveness of the two treatments on the propensity of investors to invest in a startup founded by a female CEO. Our hypotheses predict that both treatment 1 and treatment 2 will increase the propensity to invest in firms founded by female CEOs.

For treatment 1, we will assess the effect of the treatment on startups led by female CEOs only. We will have a sample size of 100 investor decisions on startups led by a female CEO, and we will assess the effect of this treatment using a between-subjects design. We predict only a directional effect for hypothesis 1 in this study.

We will assess the effect of treatment 2 using a within-subjects design, which will result in a sample size of 200 decisions, with 100 decisions made on startups with female founder-CEOs. We will assess the effects on all startups as well as those led by female CEOs. We predict a significant positive effect on all startups, and a directional effect on startups led by female CEOs.
Experimental Design Details
We will explore the effect of changing two elements of investors’ evaluation processes on individual investor decisions. One intervention changes the evaluation practice used within an organization, and one intervention changes the information received from the environmental level. Following Yang and Aldrich (2014), we conceptualize organization-level practices and environmental information as frames or inputs to decision-making. By testing changes at each level, we will begin to assess the level and type of intervention that may affect investor decision-making.

Evaluation practice. For the first intervention, we draw from research on diversity in hiring decisions (Kalev, Dobbin, & Kelly, 2006; Dobbin, Kim, & Kalev, 2011). Many diversity training programs have been ineffective at cultivating diversity (Kalev et al., 2006), and an overt focus on meritocracy can even result in more bias (e.g. Castilla & Bernard, 2010). When there is ambiguity in hiring criteria, not only do hiring managers fill in the blanks with stereotypes, but the criteria used to assess merit “can be defined flexibly in a manner congenial to the idiosyncratic strengths of applicants who belong to desired groups” (Uhlmann & Cohen, 2005: 474). Uhlmann and Cohen designed an experiment that reduced the opportunity for hiring managers to retroactively construct criteria, which resulted in more female candidates being recruited. By asking evaluators to commit to placing weights on a set of hiring criteria before assessing applications, candidates were less able to define merit based on the applications they saw compared with a control group. We will replicate this study with equity investors, who will weight criteria before assessing startup briefs. This leads to two hypotheses:

Hypothesis 1a: Investors who weight investment criteria before evaluating startup applications will be more likely to invest in startups founded by female CEOs than those who weight criteria after evaluating startup applications.
Hypothesis 1b: Investors who consider and weight investment criteria before evaluating startup applications will prioritize different criteria than those who weight criteria after evaluating startup applications (and who will retroactively construct criteria).

Legitimating information. For the second intervention, we draw from the entrepreneurial finance literature, which has found that investors must consider how to obtain follow-on investment to succeed (e.g. Gompers, 1995), that they are more likely to invest when they receive information that the startup has a prominent affiliate (Stuart, Hoang, & Hybels, 1999), or when they gain information about a startup from a trusted syndicate partner (Sorenson & Stuart, 2001). The literature on equity investments made by public actors is sparse, and evidence on policies designed to increase equity investment is mixed (Lerner, 2012; Lerner & Nanda, 2020). However, in some contexts, equity investors are more likely to invest in early-stage startups when they receive legitimating information about startup quality, e.g., when startups receive publicly funded R&D grants (Howell, 2017). This effect could be even more valuable to startups with female founders, particularly if women are less likely to fit investor heuristics (e.g. Huang, 2018; Kanze et al., 2018). This leads to two hypotheses:

Hypothesis 2a: Investors will be more likely to invest in startups that have previously received investment from a legitimate publicly-funded source.
Hypothesis 2b: Investors will be more likely to invest in startups founded by female CEOs that have previously received investment from a legitimate publicly-funded source.

We will assess the main hypotheses using the regression:
Y_ijt = a0 + a1·G_ijt + a2·T1_ijt + a3·T2_ijt + a4·(T1_ijt × G_ijt) + a5·(T2_ijt × G_ijt) + L_i + L_j + L_t + e_ijt

The dependent variable, Y_ijt, is the propensity to invest, indexed by the investor (i), the startup pairing (j), and the order in which the investment is made (t). We use fixed effects for the startup pairing (j) and the order in which the investment is made (t).
Our variables of interest are startups led by a female CEO (G) and two treatments: how organizations ask investors to apply the criteria they use (T1), and the type of legitimating information investors receive from the field (T2). Our hypotheses predict that both treatment 1 and treatment 2 will increase the propensity to invest in firms founded by female CEOs, i.e., the interactions T1_ijt × G_ijt and T2_ijt × G_ijt. Therefore, we will be looking for positive signs on a4 and a5.
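
A sketch of how this specification could be estimated, assuming hypothetical column names and simulated placeholder data; we include fixed effects for the startup pairing and decision order and cluster standard errors by investor, since each investor makes two decisions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data (one row per decision) so the sketch runs end to end;
# in practice this would be the cleaned survey export with the outcomes built above.
rng = np.random.default_rng(0)
n_investors = 150
n = 2 * n_investors
df = pd.DataFrame({
    "investor": np.repeat(np.arange(n_investors), 2),
    "invest_score": rng.integers(4, 29, n),              # aggregate outcome, 4-28
    "female_ceo": rng.integers(0, 2, n),                 # G
    "t1": rng.integers(0, 2, n),                         # evaluation practice treatment
    "t2": rng.integers(0, 2, n),                         # legitimating information treatment
    "pairing": rng.choice(["fintech", "education"], n),  # startup pairing (j)
    "order": np.tile([1, 2], n_investors),               # decision order (t)
})

# Regress Y_ijt on G, T1, T2, the two interactions of interest, and fixed
# effects for the startup pairing and the decision order.
model = smf.ols(
    "invest_score ~ female_ceo + t1 + t2 + female_ceo:t1 + female_ceo:t2"
    " + C(pairing) + C(order)",
    data=df,
)
# Standard errors clustered by investor, since each investor makes two decisions.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["investor"]})
print(result.params[["female_ceo:t1", "female_ceo:t2"]])  # a4 and a5
```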

For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO (25 treatment investors making two decisions each and weighting the criteria before investing, and 25 control investors each making two decisions without the treatment, weighting their criteria after investing). We will assess the effect of this treatment using a between-subjects design. To analyze our main hypothesis 1a, we expect a positive sign on the interaction between startups founded by a female CEO and the treatment. We predict only a directional effect, because the expected effect of this treatment is d=0.4 (Uhlmann & Cohen, 2005), for which we would need a sample size of 272 decisions to find evidence of a significant effect. We will causally test this hypothesis further in Study 2.

We will assess the effect of treatment 2 as a whole using a within-subjects design, which will result in a sample size of 200 decisions (each of the 100 investors will see one investment with the treatment information on the term sheet, and one without). While the expected effect of this treatment is unknown, it is likely to be larger than the d=0.1 effect of a research grant (Howell, 2017), because all startups in the sample are actively pursuing equity investment and we will increase the value of the investment. This will test hypothesis 2a, which requires a minimum detectable effect size of d=0.23 with this sample; we therefore predict a significant positive effect on coefficient a3. For hypothesis 2b, we predict a positive, directional effect on coefficient a5, because with a sample of 100 decisions made on startups with female CEOs we would need an effect size of d=0.33, which is larger than we expect. We are unlikely to find a significant effect for the size of public investment that is most likely to change the propensity to invest, but we will again obtain directional information. Again, we can causally test these hypotheses further in Study 2 with a larger sample.

We will also analyze some key mechanism variables. First, following hypothesis 1b, we predict that the T1 group will weight certain criteria differently from the control group. In the control group, we predict that investors will more heavily weight criteria based on attributes of the startup with a male CEO: they will retroactively construct criteria to fit their perception of what a good founder looks like (Uhlmann & Cohen, 2005).

Second, we will analyze what additional information investors would request from startups before making an investment, which will provide some information on mechanisms that might lead more investors to invest. We predict that startups with female CEOs will be asked to provide more information, following work that suggests that investors require more information from startups presented by a female founder (Kanze et al., 2019).

We will measure other potentially important variables at the investor level, including the gender of the investor (Greenberg & Mollick, 2018) and the experience of the investor (Clingingsmith & Shane, 2018). We will also measure variables about the organization the investor works for, to assess potential directional heterogeneous effects of treatment based on: the size of the investment organization; the date it last raised funds (e.g. Hochberg, Serrano, & Ziedonis, 2017); its investment thesis, i.e. sector and geography; stated interest in gender-based investments; and the motivation of the firm to focus on gender (e.g. Ely & Thomas, 2001).

Using this data, we will assess potential directional heterogeneous effects of treatment. We predict that some investors will naturally be more likely to invest in female founders: female investors, and investors working for an organization with a founder diversity mandate. We predict that these investors will be less likely to be affected by the treatments.

We predict that treatment 2 will be more effective for experienced investors following Clingingsmith and Shane’s finding that experienced investors responded more to an information shock than other investors. We predict that treatment 1 will be more effective for less experienced investors, as those with experience may be more likely to rely on their existing heuristics when thinking about criteria.
Randomization Method
Qualtrics randomization tool
Randomization Unit
Individual decisions
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
First, each investor will make two decisions. Each investor is a cluster.

Second, we will cluster by online event, which will result in 2 or more clusters. We have planned 2 events, and need a sample size of 150 investors.

Third, we will cluster by the startup pair, which will also signify the industry (either fintech or education) where investments are made.

If, after the two events, we have not reached 150 investor responses, we will continue to run events until we reach this sample.
Sample size: planned number of observations
300 individual decisions, from 150 individual investors
Sample size (or number of clusters) by treatment arms
For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO (50 treatment investors making two decisions each and weighting the criteria before investing, and 50 control investors each making one decision without any treatment, weighting their criteria after investing).

For treatment 2, we will have a sample size of 200 total investor decisions on startups, and 100 investor decisions on startups led by a female CEO, in a within-subjects design.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For hypothesis 1, the expected effect of this treatment is d=0.4 (Uhlmann & Cohen, 2005), and we will need a sample size of 272 decisions to find evidence of a significant effect at this level. We predict only a directional effect for hypothesis 1 in this study. For hypothesis 2a, we will have a sample size of 200 decisions. To observe a significant effect, we will need a minimum expected effect size of d=0.23. For hypothesis 2b, we will have a sample size of 100 decisions. To observe a significant effect, we will need a minimum expected effect size of d=0.33.
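
These minimum detectable effect sizes can be approximately reproduced as follows, assuming alpha = 0.05 (two-sided) and 90% power, with a two-sample test for the between-subjects comparison and a paired test for the within-subjects comparisons; these assumptions are ours and may differ from those behind the registered figures:

```python
# One way to approximately reproduce the figures above; alpha and power are
# assumptions of this sketch, not necessarily those used for the registration.
from statsmodels.stats.power import TTestIndPower, TTestPower

alpha, power = 0.05, 0.90

# Hypothesis 1: between-subjects comparison, 272 decisions in two groups of 136.
mde_h1 = TTestIndPower().solve_power(nobs1=136, alpha=alpha, power=power)

# Hypothesis 2a: within-subjects (paired) comparison across 200 decisions.
mde_h2a = TTestPower().solve_power(nobs=200, alpha=alpha, power=power)

# Hypothesis 2b: the same comparison restricted to the 100 female-CEO decisions.
mde_h2b = TTestPower().solve_power(nobs=100, alpha=alpha, power=power)

print(mde_h1, mde_h2a, mde_h2b)  # close to d = 0.4, 0.23, and 0.33
```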
IRB

Institutional Review Boards (IRBs)

IRB Name
Boston University
IRB Approval Date
2020-08-21
IRB Approval Number
5690X
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials