Changing the System, Not the Seeker

Last registered on October 04, 2024

Pre-Trial

Trial Information

General Information

Title
Changing the System, Not the Seeker
RCT ID
AEARCTR-0007685
Initial registration date
May 18, 2021

First published
May 18, 2021, 9:41 AM EDT

Last updated
October 04, 2024, 10:33 AM EDT

Locations

Primary Investigator

Affiliation
Boston University

Other Primary Investigator(s)

PI Affiliation
University of Oregon
PI Affiliation
GIL - World Bank
PI Affiliation
GIL - World Bank

Additional Trial Information

Status
Completed
Start date
2021-05-18
End date
2023-03-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We will assess the effect of multiple organization-level treatments on the propensity of investors to invest in a startup. We will assess this variable in multiple ways, including evaluation on a scale and more qualitative evaluation.
External Link(s)

Registration Citation

Citation
Miller, Amisha et al. 2024. "Changing the System, Not the Seeker." AEA RCT Registry. October 04. https://doi.org/10.1257/rct.7685-2.1
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Researchers have designed three interventions that an investment organization investing in early-stage startups will apply to treatment-group investors as they evaluate startups.
1) Prompting consistent inquiry
2) Evaluating demonstrated competence
3) Sharing prior evaluations
Intervention (Hidden)
How can organizations intervene to foster the objective evaluation of novel ideas? We will examine whether changing investors’ evaluation practices affects real funding decisions for trainee and professional investors evaluating startups in the field, over time.
Everything is subject to change after we collect baseline data on the first two programs (estimated on or before June 14).
Interventions
1) Connect will prompt consistent inquiry. When interviews are unstructured, evaluators evaluate women worse than men (Rivera 2012, 2015). In investment, equity investors tend to ask female founders more “prevention” or risk-focused questions, and male founders more “promotion” or growth-focused questions. This is linked to worse funding outcomes for female founders (Kanze et al. 2018).
2) Connect will ask investors to evaluate using demonstrated competence. When orchestras introduced blind auditions, more women were hired (Goldin & Rouse 2000), perhaps because evaluators focused on the work task rather than the appearance of candidates (Stephens et al. 2020). In equity investment, our interviews suggest that some investors focus on demonstrated competence by evaluating “what progress has been made… over time, you can definitely gather a lot of data”.
Setting the criteria in advance of evaluation can result in less biased hiring in other settings, because evaluators are more likely to use the criteria (Stephens et al. 2020). For example, asking managers to weight their criteria before they evaluate has been shown to reduce retroactive criteria construction and increase hiring of non-gender-normative candidates (Uhlmann & Cohen 2005). To ensure that investors actually evaluate using demonstrated competence, we also ask them to apply predefined criteria.
3) As a third, and supporting intervention, Connect will share prior evaluations after the first evaluation period. Organizational transparency, “making relevant, accessible, and accurate…information available” can help to decrease inequity in real hiring outcomes (Castilla 2015: 315).
Setting
We will leverage a real investment setting, Connect. Connect is the “largest organization in the world supporting impact-driven, seed-stage startups. Since 2009 our team has directly worked with more than 1,100 entrepreneurs in 28 countries, and our affiliated fund, Connect Investments, has invested in 110 startups that have gone on to raise more than $4 billion in follow-on capital.” Connect will run eight programs: four paired treatment and control programs across four regions (Africa, India, MENA, and Latin America). Each program pair will select at least 20 startups (up to 24 if they wish to accept more), with at least 30% of those startups led by female founders. We will leverage this setting in two ways.
TRAINEE INVESTORS
During the program, Connect will train entrepreneurs to be trainee investors, making real investments on behalf of the program, over three months. Trainee investors will be asked to evaluate the other startups in their program (at least nine), to choose two to receive a $20,000 investment. Each trainee investor will be asked to evaluate over four evaluation periods, where the fourth evaluation determines who receives the investment. During the second, third and fourth evaluation periods, Connect will ask investors to complete due diligence, and rank companies. We will have access to 720 funding decisions by trainee investors.
Researchers will randomize trainee investors into treatment and control programs of at least ten startups each. This randomization will be stratified by region, gender, and venture subsector (so that entrepreneurs’ ventures are not competitors).
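A minimal sketch of this stratified assignment in Python (the registration does not specify the implementation; all column names are hypothetical):
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7685)  # illustrative seed

def assign_arms(investors: pd.DataFrame) -> pd.DataFrame:
    """Randomize trainee investors to treatment/control within
    region x gender x subsector strata (hypothetical column names)."""
    out = investors.copy()
    out["arm"] = ""
    for _, idx in out.groupby(["region", "gender", "subsector"]).groups.items():
        # Balanced treatment/control split within each stratum; the
        # permutation breaks ties in odd-sized strata at random.
        arms = np.resize(["treatment", "control"], len(idx))
        out.loc[idx, "arm"] = rng.permutation(arms)
    return out

# Toy example
investors = pd.DataFrame({
    "investor_id": range(8),
    "region": ["Africa"] * 4 + ["India"] * 4,
    "gender": ["F", "F", "M", "M"] * 2,
    "subsector": ["agtech"] * 8,  # one subsector keeps the toy strata non-trivial
})
print(assign_arms(investors)[["investor_id", "arm"]])
```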
Connect will implement all three interventions we designed for the trainee investors.
1) Prompting consistent enquiry: At the end of all four evaluation periods, Connect will ask trainee investors in the control group: “what additional information would you want on this venture?” For the treatment group, Connect will ask: “what additional information would you want on this venture’s potential for growth?”; and “what additional information would you want on how this venture will mitigate risks?”
2) Evaluating using demonstrated competence: During the second, third and fourth evaluation periods, Connect will ask investors to complete due diligence and rank companies. Connect will ask control trainee investors their normal set of evaluation questions: “what is the company’s growth opportunity?” and “what is the company’s investment opportunity?” across eight categories (e.g., team, value proposition, market, scale). They will use a 4-point scale per category, resulting in a 24-point range overall (from 8 to 32).
In the control group, after the final rank, Connect will ask investors which criteria they used, as a mechanism check: “Please think about how you made your decisions and weight the criteria below with percentages of how much weight you placed on each criterion. (Please make sure it adds up to 100%!) – [Growth opportunity, Investment opportunity, Improvement made during program]”.
For the treatment group, Connect will add four questions, each on a 4-point scale and together weighted to equal 1/3 of the overall evaluation set: “Since the beginning of the program, how much has this company improved in understanding its path to growth?”, “Since the beginning of the program, how much has this company improved in executing its path to growth?”, “Since the beginning of the program, how much has this company improved in understanding its risks?”, and “Since the beginning of the program, how much has this company improved in executing on risk mitigation?” (A worked example of this composite scoring follows the intervention list below.)
For the treatment group, before the second evaluation period (rank 1), Connect will ask investors which criteria they will use – “Please think about how you make your decisions and weight the criteria below with percentages of how much weight you would place on each criterion. (Please make sure it adds up to 100%!) – [Growth opportunity, Investment opportunity, Improvement made during program]”.
3) Sharing prior evaluations – TBD (estimated June 30, 2021).
As a supporting intervention, before the third evaluation period, Connect will share how rankings differed between investors who prioritized improvement and those who prioritized other criteria. Connect managers will then re-emphasize the importance of using improvement in evaluating companies.
Note: we will define this exact treatment based on the data we collect during rank 1. We will add to this registration after rank 1 (estimated June 30, 2021).
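To make the scoring arithmetic in intervention 2 concrete, here is a minimal sketch of one reading of it: eight 1-4 category scores for everyone, plus four 1-4 improvement scores for the treatment group, reweighted so that improvement counts for one third of the composite. The registration does not spell out the exact formula, so treat this as an assumption:
```python
def control_score(categories):
    """Control composite: eight 1-4 category scores summed (range 8-32)."""
    assert len(categories) == 8
    return sum(categories)

def treatment_score(categories, improvement):
    """Treatment composite: category scores count 2/3 and the four 1-4
    improvement questions count 1/3. One reading of 'weighted to equal
    1/3 of the overall evaluation set'; not a confirmed formula."""
    assert len(categories) == 8 and len(improvement) == 4
    cat_part = sum(categories) / 32   # rescale category block to at most 1
    imp_part = sum(improvement) / 16  # rescale improvement block to at most 1
    return (2 / 3) * cat_part + (1 / 3) * imp_part

print(control_score([3] * 8))                           # 24
print(round(treatment_score([3] * 8, [2, 3, 4, 1]), 3))  # weighted composite
```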

PROFESSIONAL INVESTORS
As a secondary population, we will observe professional investors who participate in Connect’s programming. The program will invite professional investors to meet the startups, and researchers will track progress through due diligence processes and potential eventual investment decisions for 18 months after the program. We will have access to at least 720 funding decisions by professional investors (8 programs × 30 investors × at least 3 startups each).
1) Prompting consistent enquiry: Professional investors will receive surveys during multiple evaluation periods (before the program on deciding who to meet, after meeting the startups, six months after the program, twelve months after the program, and 18 months after the program).
Connect will ask professional investors (all mentors) in the control group: “what additional information would you want on this venture?” For the treatment group, Connect will ask: “what additional information would you want on this venture’s potential for growth?”; and “what additional information would you want on how this venture will mitigate risks?”
Professional investors will be randomized into treatment and control groups during every evaluation period. Therefore, some professional investors will be treated more often than others, resulting in different cumulative levels of treatment over time (see the dosage sketch after this list).
2) Evaluating using demonstrated competence – TBD (estimated September 30, 2021).
After the program, researchers will follow up with the investors that met the companies during the program. They will receive surveys six months after the program, twelve months after the program, and 18 months after the program.
Note: We will define this exact treatment based on the data we collect over the program, after the program ends.
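A small illustrative sketch of how this repeated 50/50 assignment generates variation in treatment dose, using the figures above (30 investors per program, five survey waves):
```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
n_investors, n_periods = 30, 5  # 30 investors per program, 5 survey waves

# Independent 50/50 assignment in every evaluation period: 1 = treated
assignments = rng.integers(0, 2, size=(n_investors, n_periods))

# Cumulative dose = number of periods in which an investor was treated
dose = assignments.sum(axis=1)
print(np.bincount(dose, minlength=n_periods + 1))  # investors at dose 0..5
```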

Intervention Start Date
2021-05-25
Intervention End Date
2022-09-30

Primary Outcomes

Primary Outcomes (end points)
Dependent Variable: The dependent variable is the propensity to invest in a startup.
Primary Outcomes (explanation)
Dependent Variable: The dependent variable is Y_p – the propensity to invest in a startup, indexed by paired program p. The four paired programs will take place in four geographic regions and include entrepreneurs from across those regions: Sub-Saharan Africa, India, MENA and Latin America. This will result in a total sample of eight programs – four treatment programs and four control programs – with one treatment and one control program in each location. We use fixed effects for the paired program in all regressions. (We will only include fixed effects for the investor in pooled regressions when we join up the sample with the professional investors.)
We will measure the dependent variable using four methods (a sketch of the constructions in methods 1, 3 and 4 follows the list below).
For the first treatment – prompting consistent enquiry – our primary dependent variable will be qualitative, following Kanze et al. (2018).
For the second treatment – evaluating demonstrated competence – our primary dependent variable will be scales, inspired by Clingingsmith and Shane’s (2018) dependent variable.
1. Scales: Each trainee investor will evaluate each startup on a scale. The baseline evaluation takes place on a 6-point scale. Thereafter, evaluators will use a 24-point scale (control group) or a 32-point scale (treatment group). All scale evaluations are normalized within each program using a z-score.
2. Binary: whether the trainee investor places the startup in their top 2. Each trainee investor will know that the top 2 rated startups will receive investment, and will therefore carefully consider who they place in the top 2.
3. Qualitative: Each trainee investor will be asked what additional information they need from the startup. Trainee investors will also ask for additional information in conversations, and combined, this will form a secondary dependent variable. All responses will be coded as “promotion-focused” or “prevention-focused”. We will assess the proportion of promotion- vs. prevention-focused questions (Kanze et al. 2018).
4. Performance-reward bias: the male-minus-female difference in the normalized qualitative proportion at the same normalized scale rank – see Castilla (2008). Intuitively: holding the scale rank fixed, how does the qualitative score differ by the gender of the entrepreneur?
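A minimal sketch of the constructions in methods 1, 3 and 4, assuming a long table with one row per investor-startup evaluation (all column names are hypothetical):
```python
import pandas as pd

def add_outcome_measures(evals: pd.DataFrame) -> pd.DataFrame:
    """evals: one row per investor-startup evaluation, with hypothetical
    columns 'program', 'score', 'promotion_q', 'prevention_q'."""
    out = evals.copy()
    # Method 1 (Scales): z-score the raw scale within each program so the
    # 24-point control and 32-point treatment scales are comparable.
    out["score_z"] = out.groupby("program")["score"].transform(
        lambda s: (s - s.mean()) / s.std(ddof=0)
    )
    # Method 3 (Qualitative): share of promotion-focused questions
    # out of all coded questions (Kanze et al. 2018).
    out["promotion_share"] = out["promotion_q"] / (
        out["promotion_q"] + out["prevention_q"]
    )
    return out

def performance_reward_gap(evals: pd.DataFrame) -> pd.Series:
    """Method 4: male-minus-female gap in the qualitative measure among
    startups at the same within-program scale rank (cf. Castilla 2008).
    Expects the output of add_outcome_measures plus a boolean
    'founder_female' column (hypothetical name)."""
    ranked = evals.assign(rank=evals.groupby("program")["score_z"].rank())
    tbl = (
        ranked.groupby(["rank", "founder_female"])["promotion_share"]
        .mean()
        .unstack("founder_female")
    )
    return tbl[False] - tbl[True]  # male minus female at each rank
```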

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will assess the effect of multiple treatments on the propensity of investors to invest in a startup. We will assess this variable in multiple ways, including evaluation on a scale and more qualitative evaluation.
Experimental Design Details
TRAINEE INVESTORS
We will assess the main hypothesis using the regressions:
Y_{ip} = α_1 Y_{0p} + α_2 F_{ip} + α_3 T1_{ip} + α_4 (T1 × F)_{ip} + X_j + L_p + ε_{ip}
Y_{ip} = β_1 Y_{0p} + β_2 F_{ip} + β_3 T2_{ip} + β_4 (T2 × F)_{ip} + X_j + L_p + ε_{ip}
This is an ANCOVA regression, which we plan to use to increase our statistical power, following McKenzie (2012). Y_{0p} is the baseline measurement of the dependent variable, taken before the program starts. The subscript i indexes the stage of measurement – the ranking variable over three time periods. We will measure the dependent variable three times during each program and pool the three measurements. (We will also measure the dependent variable at specific points in time, which will give us less statistical power, but will produce directional effects over time, which we also plan to analyze.) A sketch of this estimation follows.
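A minimal sketch of estimating the first specification with paired-program fixed effects and standard errors clustered at the investor level, the unit of randomization; the toy data and column names are hypothetical stand-ins:
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the real evaluations: one row per
# investor-startup-period decision (all names are hypothetical).
rng = np.random.default_rng(1)
n = 720
df = pd.DataFrame({
    "y0": rng.normal(size=n),               # baseline evaluation, Y_{0p}
    "female": rng.integers(0, 2, n),        # F_ip
    "treat1": rng.integers(0, 2, n),        # T1_ip
    "program": rng.integers(0, 4, n),       # paired program, L_p
    "investor_id": rng.integers(0, 80, n),  # cluster identifier
})
df["y"] = 0.5 * df["y0"] + 0.2 * df["female"] * df["treat1"] + rng.normal(size=n)

# ANCOVA: outcome on baseline, gender, treatment, their interaction,
# and paired-program fixed effects; SEs clustered by investor.
model = smf.ols("y ~ y0 + female * treat1 + C(program)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["investor_id"]}
)
print(model.params["female:treat1"])  # the α_4 term of interest
```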
Dependent Variable: Y_p and its four measurement methods are defined under Primary Outcomes (explanation) above.

Gender. In the model, F is our gender variable – a female-led company. In our context, we define a female-led company as one where the founder the investor interacts with identifies as female. We made this choice because other researchers have found the gender of the person pitching a startup to be meaningful to evaluations of that startup (e.g., Brooks et al. 2015, Kanze et al. 2018). Because our interventions focus on changing the organizational evaluation process after an interaction with a founder, we find this the most apt definition.
In practice, we know that many teams have multiple co-founders, who may be present during the program. To implement our definition, we have three measures, but the first is our preferred measure:
Binary: Was the female founder present in the interaction with the investor?
Binary: Does the company have a female founder on their founding team sheet for investors?
Scale (percentage): How much was the company represented by a female founder in interactions? We will note which founder speaks in each workshop during the program.

Intervention 1: For intervention 1 on prompting consistent inquiry, we will have a sample size of at least 216 investor decisions on startups led by a female founder (within 720 total investor decisions on all startups). We will assess the effect of this treatment using a between-subject design. Using 1 pre measure and 3 post measures in ANCOVA requires a sample size of 212 total decisions on female-led startups (across treatment and control) for power of 0.85. We therefore expect to detect a significant positive effect of the treatment, primarily on the qualitative dependent variable (but we will look for positive and significant effects on other DVs too). We expect a positive and significant sign on α_4 on all DVs.
Intervention 2: For intervention 2 on evaluating using demonstrated competence, we will have the same sample size as intervention 1, with the same power calculations. We therefore expect to detect a significant positive effect of the treatment, primarily on the scale dependent variable (but we will look for positive and significant effects on other DVs too). We expect a positive and significant sign on β_4 on all DVs. (A simplified power check follows the list below.)
Intervention 3: TBD.
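As a rough cross-check only, here is a naive two-sample power calculation in Python. It ignores the ANCOVA variance reduction (McKenzie 2012) and the within-investor clustering, so its minimum detectable effect is larger than the 0.225 reported for the ANCOVA calculation below:
```python
from statsmodels.stats.power import TTestIndPower

# Naive two-sample MDE at 212 total decisions (106 per arm),
# power 0.85, alpha 0.05; the ANCOVA calculation yields a smaller MDE.
mde = TTestIndPower().solve_power(nobs1=106, ratio=1.0, alpha=0.05, power=0.85)
print(f"naive two-sample MDE: {mde:.3f} SD")
```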
Control variables. In the equations where we do not use investor fixed effects, we will include potential control variables at the investor level, including the gender of the investor (Greenberg & Mollick, 2018) and the experience of the investor (Clingingsmith & Shane, 2018), measured by a binary variable (having participated in another accelerator) and a continuous variable (years of experience in the market).
In the equations where we do not use startup fixed effects, we will include control variables for the startups including:
Evidence of their underlying quality: whether they were accepted by Connect or waitlisted (binary); their average score in Connect due diligence (1-4 continuous)
Evidence of their maturity: whether they were in the program’s most popular geographic market among the finalist group, e.g., Egypt for MENA (binary); number of founders (categorical); total employees (categorical); the log of funds raised (continuous).

We have additional hypotheses and data drawing from professional investors - attached.
Randomization Method
Randomization done in office by a computer.
Randomization Unit
individual investor

Planned Number of Clusters
Each trainee investor is a cluster and will make at least 9 decisions.
Each professional investor is a cluster and will make at least 3 decisions.
Planned Number of Observations
At least 1,500 individual investor decisions, from at least 200 individual investors.
Sample size (or number of clusters) by treatment arms *
For treatment 1 and treatment 2, we will have a sample size of at least 1,500 individual investor decisions, of which 456 are on female founders. These stem from at least 200 individual investors.
Power calculation: Minimum Detectable Effect Size for Main Outcomes
The minimum detectable effect size for the ANCOVA calculations is 0.225.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
200 investors
Sample size: planned number of observations
1,500
Sample size (or number of clusters) by treatment arms
100 investors treatment, 100 investors control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The minimum detectable effect size for the ANCOVA calculations is 0.225.
IRB

Institutional Review Boards (IRBs)

IRB Name
Boston University IRB
IRB Approval Date
2020-08-21
IRB Approval Number
5690X
Analysis Plan

Analysis Plan Documents

2021_05_11_AEA+Registry_Vilcap_Experiment_2.docx

MD5: 8756513eb543bd589e9c765ea1ff9802

SHA1: d2fe096ae9792f3316fd9bf15da117bea31b0719

Uploaded At: May 17, 2021

2021_06_14_AEA+Registry_Vilcap_Experiment_CLEAN.docx

MD5: 4fdd04ed700b7322da4b810a4e99dbad

SHA1: 73f5b3d5884b740d697db7d0a4a794ecac46b07b

Uploaded At: June 14, 2021

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
September 30, 2022, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
September 30, 2022, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
278 investors
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
33,541 investor-startup-time decisions
Final Sample Size (or Number of Clusters) by Treatment Arms
133 investors control, 127 investors treatment
Data Publication

Data Publication

Is public data available?
No

There is information in this trial unavailable to the public.

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
Asking Better Questions: The Effect of Changing Investment Organizations’ Evaluation Practices on Gender Disparities in Funding Innovation
Citation
Miller, Amisha; Lall, Saurabh A.; Goldstein, Markus P.; Montalvao Machado, Joao H. C. Asking Better Questions: The Effect of Changing Investment Organizations’ Evaluation Practices on Gender Disparities in Funding Innovation (English). Policy Research Working Paper No. WPS 10625; Impact Evaluation Series. Washington, D.C.: World Bank Group. http://documents.worldbank.org/curated/en/099928412042326894/IDU0ab42caf50f6a8048af0b22203c59c8887bef

Reports & Other Materials