
Changing investors' evaluation practices

Last registered on August 26, 2020

Pre-Trial

Trial Information

General Information

Title
Changing investors' evaluation practices
RCT ID
AEARCTR-0006314
Initial registration date
August 25, 2020

First published
August 26, 2020, 11:49 AM EDT

Locations

Primary Investigator

Affiliation
Boston University

Other Primary Investigator(s)

PI Affiliation
World Bank
PI Affiliation
World Bank
PI Affiliation
World Bank
PI Affiliation
University of Oregon

Additional Trial Information

Status
In development
Start date
2020-08-26
End date
2020-10-31
Secondary IDs
Abstract
Early-stage investors often evaluate startup candidates for investment in highly uncertain environments, where decision outcomes are non-obvious and the majority of startups will fail. Most scholarship has examined how investors work with startups post-investment, rather than how investors select startups. Yet this selection process is important: evidence suggests that equity investors concentrate their investments on startups led by white, male founders in California. This is problematic not only for talented, diverse founders from elsewhere, but also for any investor concerned about missing promising investment opportunities. Scholars suggest that differences in context can shape which startups are considered or valued, but little is known about the process by which early-stage investors evaluate the investment worthiness of deals, or about the ramifications of these processes for investment decisions. Without this understanding, it is difficult to assess how evaluation processes can be changed to include consideration of more diverse types of founders. How do investment organizations source, evaluate, and value early-stage investments? How does this shape the diversity of the founders they consider? We will conduct field experiments with professional and trainee investors to assess the effect of changing two evaluation practices on investment decisions.

External Link(s)

Registration Citation

Citation
Alibhai, Salman et al. 2020. "Changing investors' evaluation practices." AEA RCT Registry. August 26. https://doi.org/10.1257/rct.6314-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We will assess whether changing evaluation practices affects investors' investment decisions.

For the first treatment, we will ask investors to weight a provided list of evaluation criteria in proportion to their importance before evaluating the startups (between welcome screen 1 and decision screen 2). Next, we will ask investors to evaluate each startup. We will ask the control group to weight their criteria after they have evaluated the startups.

For the second treatment, we will edit the term sheet that investors receive to add an additional investment from a “public” fund. After the investor has completed the evaluation, if they respond “no” to the binary startup evaluation question, we will increase the fund's percentage of ownership from 10% of the deal size in 10% increments until the investor switches to “yes” or we reach the limit of the deal size.
Intervention (Hidden)
We will assess whether changing evaluation practices affects the investment decisions of at least 150 investors at online webinar events about decision-making in uncertain conditions.

We will share typical investment criteria and startup briefs on two of three real startups that previously passed through the same accelerator training program. All startups address the same problem statement in the same geography and the same business sector, to control for industry-specific variation in investment preferences. All applicants passed through a competitive selection process and received similar (high) evaluations at the end of the training program, ensuring they are of similar underlying stage and quality. Investors will receive a real investor brief used in the training program, with information on: the core team; notable progress and current fundraise; the problem the firm will solve; its proposed solution; market; competition; sales and growth; and financials. Investors will also receive a standard cap table, typical of real startups that are accepted into an accelerator program, with founder ownership, lead investor, additional investor, and option pool constant in proportions and very close in amounts.

We will ask investors to evaluate two hypothetical investment decisions (i.e., whether they would hypothetically follow up with the firm, ask more questions, and/or proceed to due diligence). Investors will be asked to share their decisions with other participants in an “investment committee” after they have made an individual decision, to help ensure that they pay attention to the task. We will also assess real-world interest by asking investors whether they would like to be introduced to the firms.

Professional investors will be randomly assigned 2 of 3 startups in random case order. We will randomly change the gender of the founder-CEO for half the sample. Each investor will assess one firm led by a female CEO and one by a male CEO. We use normalized photographs from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015) and the most popular names in the US from 1990 to represent each CEO. We selected two photographs from the database, matched by race to account for status, by age to account for a potential experience effect, and by attractiveness because good-looking men are more likely to successfully raise money in pitch competitions (Brooks et al., 2015).
We will test the causal effects of two changes: one to evaluation practice, by manipulating the process by which investors are asked to evaluate investments, and the other to valuation practice. For the first treatment, we will ask investors to weight a provided list of evaluation criteria in proportion to their importance before evaluating the startups (between welcome screen 1 and decision screen 2). We chose the four top criteria found to be important to early-stage investors in a US-based survey (Gompers, Gornall, Kaplan, & Strebulaev, 2020). Next, we will ask investors to evaluate each startup. We will ask the control group to weight their criteria after they have evaluated the startups.
For the second treatment, a valuation signal from the market, we will edit the term sheet that investors receive to add an additional investment from a “public” fund – in this case, a US-government-backed fund taking a junior position, with an equity cap: in the event of a return over the cap, senior investors will split the gains (this type of equity-cap deal is necessary for many publicly backed funds to manage their finances). After the investor has completed the evaluation, if they respond “no” to the binary question, we will increase the fund's percentage of ownership from 10% of the deal size in 10% increments until the investor switches to “yes” or we reach a majority of the deal size.
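
The escalation logic amounts to a simple loop. A minimal sketch in Python follows, where ask_binary_decision is a hypothetical stand-in for the survey question and the cap on the public fund's share is left as a parameter (the live experiment implements this through Qualtrics):

# Minimal sketch of the ownership-escalation protocol; shares are kept in
# integer percentage points to avoid floating-point drift.
def escalate_public_stake(ask_binary_decision, max_share_pct=100, step_pct=10):
    """Raise the public fund's share from 10% of the deal size in
    10-point increments until the investor answers "yes" or the cap
    is reached. Returns the switching share, or None if no switch."""
    for share_pct in range(step_pct, max_share_pct + 1, step_pct):
        if ask_binary_decision(share_pct):
            return share_pct  # investor switched to "yes"
    return None  # investor never switched within the cap

# Example: an investor who switches once the public fund holds 30%.
print(escalate_public_stake(lambda pct: pct >= 30))  # -> 30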
Intervention Start Date
2020-08-26
Intervention End Date
2020-10-31

Primary Outcomes

Primary Outcomes (end points)
Propensity to invest in diverse founders
Primary Outcomes (explanation)
We will assess the dependent variable – the propensity to invest in a startup led by a female CEO – with a judge score, using a seven-point Likert scale from “strongly disagree” to “strongly agree.” Following Clingingsmith and Shane (2018), we will ask investors to answer four questions on whether they would take further action to invest in the startup: 1) “I would pursue a follow-up meeting to learn more about the venture.”; 2) “I would be interested in seeing the business plan for this venture.”; 3) “I would recommend this opportunity to a co-investor.”; 4) “I would initiate due diligence on this venture.” We will aggregate responses into a score ranging from 4 to 28.

To consider how an investment decision might play out in practice, where investors either do or do not take further action, we also ask a binary question: “Imagine you need to make an urgent decision because the round closes this week. Given the information you have on this deal, would you take this startup forward and shorten your due diligence process?”
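
A minimal sketch of the score construction in Python (the variable names are illustrative, not the survey's internal labels):

# Sum four 7-point Likert items (1 = "strongly disagree" ... 7 =
# "strongly agree") into the judge score, which ranges from 4 to 28.
def judge_score(follow_up, business_plan, recommend, due_diligence):
    items = (follow_up, business_plan, recommend, due_diligence)
    assert all(1 <= x <= 7 for x in items), "each item is on a 1-7 scale"
    return sum(items)

print(judge_score(5, 6, 4, 5))  # -> 20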

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will assess the effectiveness of the two treatments on the propensity of investors to invest in a startup led by a female CEO. The two interventions are (1) weighting evaluation criteria prior to the assessment of candidates, and (2) receiving a valuation signal from the market, in which investors are told that the firm has received an additional investment from a “public” fund – in this case, a US-government-backed fund taking a junior position. Our hypotheses predict that both treatment 1 and treatment 2 will increase the propensity to invest in firms led by female CEOs.

For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO, and we will assess the effect of this treatment using a between-subject design. We predict only a directional effect for hypothesis 1 in this study.

We will assess the effect of treatment 2 using a within-subject design, which will result in a sample size of 200 decisions. We expect to detect a significant effect for hypothesis 2, given a minimum detectable effect size of d=0.23. We are unlikely to find a significant effect for the size of public investment that is most likely to change the propensity to invest, but will again obtain directional information.

We will also track potential control variables at the investor level and organizational level to assess potential directional heterogeneous effects of treatment.
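
The analysis plan itself is not public. Purely as an illustration of how decisions clustered within investors might be analyzed, the following Python sketch regresses the judge score on treatment and founder-gender indicators with standard errors clustered at the investor level (all column names are hypothetical):

# Hypothetical analysis sketch, not the registered analysis plan.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("decisions.csv")  # one row per investor-startup decision
model = smf.ols("judge_score ~ treated * female_ceo", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["investor_id"]}
)
print(model.summary())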
Experimental Design Details
We will assess whether changing evaluation practices affects the investment decisions of at least 150 investors at online webinar events about decision-making in uncertain conditions.

We will share typical investment criteria and startup briefs on two of three real startups that previously passed through the same accelerator training program. All startups address the same problem statement in the same geography and the same business sector, to control for industry-specific variation in investment preferences. All applicants passed through a competitive selection process and received similar (high) evaluations at the end of the training program, ensuring they are of similar underlying stage and quality. Investors will receive a real investor brief used in the training program, with information on: the core team; notable progress and current fundraise; the problem the firm will solve; its proposed solution; market; competition; sales and growth; and financials. Investors will also receive a standard cap table, typical of real startups that are accepted into an accelerator program, with founder ownership, lead investor, additional investor, and option pool constant in proportions and very close in amounts.

We will ask investors to evaluate two hypothetical investment decisions (i.e., whether they would hypothetically follow up with the firm, ask more questions, and/or proceed to due diligence). Investors will be asked to share their decisions with other participants in an “investment committee” after they have made an individual decision, to help ensure that they pay attention to the task. We will also assess real-world interest by asking investors whether they would like to be introduced to the firms.

Professional investors will be randomly assigned 2 of 3 startups in random case order. We will randomly change the gender of the founder-CEO for half the sample. Each investor will assess one firm led by a female CEO and one by a male CEO. We use normalized photographs from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015) and the most popular names in the US from 1990 to represent each CEO. We selected two photographs from the database, matched by race to account for status, by age to account for a potential experience effect, and by attractiveness because good-looking men are more likely to successfully raise money in pitch competitions (Brooks et al., 2015).
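
A minimal sketch of this assignment scheme in Python (the live experiment uses the Qualtrics randomization tool; startup labels are placeholders):

import random

STARTUPS = ["A", "B", "C"]  # placeholders for the three real startups

def assign_cases(rng=random):
    """Draw 2 of the 3 startups in random order and assign founder-CEO
    gender so each investor sees one female-led and one male-led firm."""
    cases = rng.sample(STARTUPS, 2)  # 2 of 3 startups, in random order
    genders = ["female", "male"]
    rng.shuffle(genders)
    return list(zip(cases, genders))

print(assign_cases())  # e.g. [('C', 'female'), ('A', 'male')]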

We will test the causal effects of two changes: one to evaluation practice, by manipulating the process by which investors are asked to evaluate investments, and the other to valuation practice. For the first treatment, we will ask investors to weight a provided list of evaluation criteria in proportion to their importance before evaluating the startups (between welcome screen 1 and decision screen 2). We chose the four top criteria found to be important to early-stage investors in a US-based survey (Gompers, Gornall, Kaplan, & Strebulaev, 2020). Next, we will ask investors to evaluate each startup. We will ask the control group to weight their criteria after they have evaluated the startups.
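
As an illustration of the weighting task (the registration does not specify the elicitation format, so the criterion labels and point values below are hypothetical), raw importance points can be normalized into weights that sum to one:

# Hypothetical illustration of proportional criteria weighting.
raw_points = {"team": 40, "business model": 25, "product": 20, "market": 15}
total = sum(raw_points.values())
weights = {criterion: pts / total for criterion, pts in raw_points.items()}
print(weights)  # {'team': 0.4, 'business model': 0.25, 'product': 0.2, 'market': 0.15}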

For the second treatment, a valuation signal from the market, we will edit the term sheet that investors receive to add an additional investment from a “public” fund – in this case, a US-government-backed fund taking a junior position, with an equity cap: in the event of a return over the cap, senior investors will split the gains (this type of equity-cap deal is necessary for many publicly backed funds to manage their finances). After the investor has completed the evaluation, if they respond “no” to the binary question, we will increase the fund's percentage of ownership from 10% of the deal size in 10% increments until the investor switches to “yes” or we reach a majority of the deal size.
Randomization Method
Qualtrics randomization tool
Randomization Unit
Individual decisions
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
First, each investor will make two decisions. Each investor is a cluster.

Second, we will cluster by online event, which will result in 2 or more clusters. We have planned 2 events and need a sample size of 150 investors.

If, after the two events, we have not reached 150 investor responses, we will continue to run events until we reach this sample.
Sample size: planned number of observations
300 individual decisions, from 150 individual investors
Sample size (or number of clusters) by treatment arms
For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO (50 treatment investors making two decisions each and weighting the criteria before investing, and 50 control investors each making one decision without any treatment, and weighting their criteria after investing).

For treatment 2, we will have a sample size of 200 decisions (each of the 100 investors will see one investment with treatment information on the term sheet, and one control without).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For hypothesis 1, the expected effect of this treatment is d=0.4 on the decision (Uhlmann & Cohen, 2005), and we would need a sample size of 272 decisions to find evidence of a significant effect at this level. We predict only a directional effect for hypothesis 1 in this study. For hypothesis 2, we will have a sample size of 200 decisions. To detect a significant effect, we will need a minimum effect size of d=0.23. Small SBIR grants have a positive effect of 0.1 on investor funding (Howell, 2017), but we believe that an equity investment signal will be more powerful.
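
These figures can be cross-checked with a standard power calculation in Python/statsmodels, assuming a two-sided two-sample t-test with alpha = 0.05 and power = 0.8; the registration's numbers may rest on different assumptions (e.g., a higher power target or clustering adjustments):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Decisions per arm needed to detect d = 0.4 under these assumptions.
n_per_arm = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(round(n_per_arm))  # about 99 per arm

# Minimum detectable effect with 100 decisions per arm.
mde = analysis.solve_power(nobs1=100, alpha=0.05, power=0.8)
print(round(mde, 2))  # about 0.4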
IRB

Institutional Review Boards (IRBs)

IRB Name
Boston University
IRB Approval Date
2020-08-21
IRB Approval Number
5690X
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials