Changing investors' evaluation practices
Last registered on October 18, 2020

Pre-Trial

Trial Information
General Information
Title
Changing investors' evaluation practices
RCT ID
AEARCTR-0006314
Initial registration date
August 25, 2020
Last updated
October 18, 2020 4:49 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Boston University
Other Primary Investigator(s)
PI Affiliation
World Bank
PI Affiliation
World Bank
PI Affiliation
World Bank
PI Affiliation
University of Oregon
Additional Trial Information
Status
In development
Start date
2020-10-20
End date
2020-12-25
Secondary IDs
Abstract
Early-stage investors often evaluate startup candidates for investment in highly uncertain environments, where decision outcomes are non-obvious and the majority of startups will fail. Most scholarship has examined how investors work with startups post-investment, rather than how investors select startups. Yet this selection process is important, as evidence suggests that equity investors concentrate their investments in startups led by white, male founders in California. This is problematic not only for talented, diverse founders from elsewhere, but also for any investor concerned with missing promising investment opportunities. Scholars suggest that differences in context can shape which startups are considered or valued, but little is known about the process by which early-stage investors evaluate the investment worthiness of deals and the ramifications of these processes for investment decisions. Without this understanding, it is difficult to assess how evaluation processes can be changed to include consideration of more diverse types of founders. How do investment organizations source, evaluate, and value early-stage investments? How does this shape the diversity of the founders they consider? We will conduct field experiments with professional and trainee investors to assess the effect of changing two evaluation practices on investment decisions.

External Link(s)
Registration Citation
Citation
Miller, Amisha et al. 2020. "Changing investors' evaluation practices." AEA RCT Registry. October 18. https://doi.org/10.1257/rct.6314-1.1.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
For the first treatment, we will ask investors to weight a provided list of evaluation criteria in proportion to their importance before evaluating the startups; we will then ask them to evaluate the startups. In contrast, we will ask the control group to weight their criteria only after they have evaluated the startups.

For the second treatment, we will edit the term sheet that investors receive to include an additional investment from a “public” fund. After the investor has evaluated the startup, if they respond “no” to our binary startup evaluation question, we will increase the percentage of ownership until the investor switches to “yes” or we reach the limit of the deal size.
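
As an illustration of the escalation logic just described, here is a minimal sketch in Python. The starting share, step size, and cap are hypothetical placeholders (the registration specifies only that the percentage rises until the investor switches to “yes” or the deal-size limit is reached), and the actual procedure will be administered through the survey instrument.

```python
# Minimal sketch of the treatment-2 escalation logic. The starting share,
# step size, and cap below are hypothetical; only the stopping rule
# ("yes" or deal-size limit) comes from the registration.

def escalate_ownership(evaluate, start_pct=5.0, step_pct=5.0, cap_pct=30.0):
    """Raise the ownership percentage shown on the term sheet until the
    investor's answer flips to "yes" or the deal-size limit is reached.

    `evaluate` is a callback returning True ("yes") or False ("no") for a
    given ownership percentage.
    """
    pct = start_pct
    while pct <= cap_pct:
        if evaluate(pct):
            return pct, "yes"    # investor switched at this percentage
        pct += step_pct
    return cap_pct, "no"         # limit reached without a switch


# Example: a hypothetical investor who switches once the share reaches 20%.
print(escalate_ownership(lambda pct: pct >= 20.0))  # (20.0, 'yes')
```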
Intervention Start Date
2020-10-20
Intervention End Date
2020-12-25
Primary Outcomes
Primary Outcomes (end points)
Propensity to invest in diverse founders
Primary Outcomes (explanation)
We assess the dependent variable – the propensity to invest in a startup founded by a female CEO – in two ways.

First, following Clingingsmith & Shane (2018), we use an aggregate score ranging from 4 to 28. We will ask investors to answer four questions on whether they would take further action to invest in the startup, each on a seven-point Likert scale: 1) “I would pursue a follow-up meeting to learn more about the venture.”; 2) “I would be interested in seeing the business plan for this venture.”; 3) “I would recommend this opportunity to a co-investor.”; 4) “I would initiate due diligence on this venture.”

To consider more closely how an investor decision might play out in practice, where investors either do or do not take further action, we also ask a binary question drawing on previously collected interview data: “Imagine the round closes this week. Given the information you have on this venture, would you shorten your due diligence process to assess the venture?” (We will explain to respondents what this would entail for an organization.) To analyze the effect of the treatments on an investor’s propensity to invest, we will use OLS, probit, and logit regressions.
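
As a rough illustration of the planned analysis, the sketch below (in Python, using pandas and statsmodels) constructs the 4-28 aggregate score from the four Likert items and regresses the outcomes on a treatment indicator with OLS, probit, and logit. All column and file names are hypothetical, since the registration does not specify the data layout.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: likert_1..likert_4 (each 1-7), invest_binary
# (0/1 answer to the due diligence question), and treated (treatment indicator).
df = pd.read_csv("investor_decisions.csv")

# Aggregate propensity-to-invest score (Clingingsmith & Shane 2018): sum of
# the four seven-point items, ranging from 4 to 28.
df["invest_score"] = df[["likert_1", "likert_2", "likert_3", "likert_4"]].sum(axis=1)

# Effect of treatment on the aggregate score (OLS) and on the binary
# due diligence outcome (probit and logit).
ols = smf.ols("invest_score ~ treated", data=df).fit()
probit = smf.probit("invest_binary ~ treated", data=df).fit()
logit = smf.logit("invest_binary ~ treated", data=df).fit()

print(ols.summary())
```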
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We will assess the effectiveness of the two treatments on the propensity of investors to invest in a startup founded by a female CEO. Our hypotheses predict that both treatment 1 and treatment 2 will increase the propensity to invest in firms founded by female CEOs.

For treatment 1, we will assess the effect of the treatment on startups led by female CEOs only. We will have a sample size of 100 investor decisions on startups led by a female CEO, and we will assess the effect of this treatment using a between-subjects design. We predict only a directional effect for hypothesis 1 in this study.

We will assess the effect of treatment 2 using a within-subject design, which will result in a sample size of 200 decisions, with 100 decisions made on startups with female founder-CEOs. We will assess the effects on all startups as well as on those led by female CEOs. We predict a significant positive effect on all startups, and a directional effect on startups led by female CEOs.
Experimental Design Details
Not available
Randomization Method
Qualtrics randomization tool
Randomization Unit
Individual decisions
Was the treatment clustered?
Yes
Experiment Characteristics
Sample size: planned number of clusters
First, each investor will make two decisions, so each investor constitutes a cluster.

Second, we will cluster by online event, which will result in two or more clusters. We have planned two events and need a sample size of 150 investors. If, after the two events, we have not reached 150 investor responses, we will continue to run events until we reach this sample.

Third, we will cluster by startup pair, which also corresponds to the industry (either fintech or education) in which the investments are made.
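
Since the randomization unit is the individual decision while decisions nest within investors, events, and startup pairs, the estimation would presumably use cluster-robust standard errors. Below is a minimal sketch, assuming the hypothetical data layout from the earlier snippet plus an investor_id cluster variable (not specified in the registration).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns as before, plus investor_id: each investor makes two
# decisions and forms one cluster.
df = pd.read_csv("investor_decisions.csv")
df["invest_score"] = df[["likert_1", "likert_2", "likert_3", "likert_4"]].sum(axis=1)

# OLS with standard errors clustered at the investor level.
ols_clustered = smf.ols("invest_score ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["investor_id"]}
)
print(ols_clustered.summary())
```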
Sample size: planned number of observations
300 individual decisions, from 150 individual investors
Sample size (or number of clusters) by treatment arms
For treatment 1, we will have a sample size of 100 investor decisions on startups led by a female CEO (50 treatment investors, each making two decisions and weighting the criteria before investing, and 50 control investors, each making one decision without the treatment and weighting their criteria after investing).

For treatment 2, we will have a sample size of 200 total investor decisions on startups, including 100 investor decisions on startups led by a female CEO, in a within-subject design.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For hypothesis 1, the expected effect of the treatment is d=0.4 (Uhlmann & Cohen, 2005), and we would need a sample size of 272 decisions to detect a significant effect of this size. We predict only a directional effect for hypothesis 1 in this study. For hypothesis 2a, we will have a sample size of 200 decisions; to observe a significant effect, the minimum detectable effect size is d=0.23. For hypothesis 2b, we will have a sample size of 100 decisions; to observe a significant effect, the minimum detectable effect size is d=0.33.
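
These figures can be approximated with a standard power calculation. The sketch below uses statsmodels and assumes alpha = 0.05 (two-sided) and 90% power, a two-sample t-test for hypothesis 1, and a paired-style t-test on the stated number of decisions for hypotheses 2a and 2b; none of these assumptions are stated explicitly in the registration, so the sketch only roughly reproduces the numbers cited above.

```python
from statsmodels.stats.power import TTestIndPower, TTestPower

alpha, power = 0.05, 0.90  # assumed conventional values; not stated in the registration

# Hypothesis 1: between-subjects comparison at d = 0.4.
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=alpha, power=power)
print(f"H1 total decisions: ~{2 * n_per_group:.0f}")  # on the order of the 272 cited

# Hypotheses 2a and 2b: minimum detectable effect with 200 and 100 decisions.
mde_2a = TTestPower().solve_power(nobs=200, alpha=alpha, power=power)
mde_2b = TTestPower().solve_power(nobs=100, alpha=alpha, power=power)
print(f"H2a MDE: d ~ {mde_2a:.2f}; H2b MDE: d ~ {mde_2b:.2f}")  # roughly 0.23 and 0.33
```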
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Boston University
IRB Approval Date
2020-08-21
IRB Approval Number
5690X
Analysis Plan

There are documents in this trial unavailable to the public.