
Elicit preferences in the entrepreneurial financing process

Last registered on July 07, 2020

Pre-Trial

Trial Information

General Information

Title
Elicit preferences in the entrepreneurial financing process
RCT ID
AEARCTR-0004982
Initial registration date
February 01, 2020


First published
February 07, 2020, 3:48 PM EST


Last updated
July 07, 2020, 10:06 PM EDT


Locations

Primary Investigator

Affiliation
Columbia University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-02-01
End date
2020-11-01
Secondary IDs
Abstract
Startups are crucial to innovation and economic development. However, many startups face difficulties raising funding for their projects. In this project, we use field experiments and unique databases to provide causal empirical evidence on potential frictions in the entrepreneurial financing process.
External Link(s)

Registration Citation

Citation
Zhang, Ye. 2020. "Elicit preferences in the entrepreneurial financing process." AEA RCT Registry. July 07. https://doi.org/10.1257/rct.4982-1.1
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
An individual-level randomized intervention is implemented.
Intervention (Hidden)
The experimental design contains three sub-experiments:
1. A correspondence test experiment (using pitch emails): reveals investors' preferences over entrepreneurs and helps disentangle different underlying theories.
2. An investor-side incentivized survey experiment (investor-side IRR experiment): reveals investors' preferences over entrepreneurs and helps disentangle different underlying theories. A dictator game is embedded within this experiment.
3. An entrepreneur-side incentivized survey experiment (startup-side IRR experiment): reveals entrepreneurs' preferences over investors.

1. Email Experiment:

Hypotheses:
a. Female entrepreneurs raise less money from the VC industry because VC investors discriminate against female entrepreneurs.
b. Asian entrepreneurs are likely to face discrimination after the COVID-19 outbreak.

In this experiment, the principal subjects are global VC investors (excluding investors from European Union countries), and I measure their response rates to cold-call emails sent from female co-founders and male co-founders. All other email content is held essentially identical; only who sends the cold-call email and whether the email describes the startup team's comparative advantages vary. Such audit designs are standard (see “Are Emily and Greg More Employable than Lakisha and Jamal?” and “Gender, Race, and Entrepreneurship: A Randomized Field Experiment on Venture Capitalists and Angels”).

I did not follow the earlier experimental design of collaborating with a few real startup teams that have both male and female co-founders, because the Columbia IRB rejected that design given the risk level involved.

Factorial Experiment: Gender x Race x Education x Startup Quality (or ESG Criteria)
I randomize three layers of information: a. the email subject line; b. the email content; c. the email sender's name.
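
For concreteness, here is a minimal sketch (in Python) of how such a factorial assignment could be coded. The attribute levels, name pools, and subject lines below are illustrative placeholders, not the study's actual materials.

import itertools
import random

rng = random.Random(2020)  # seeded so the assignment is reproducible

# Four randomized dimensions of the factorial design
GENDER = ["female", "male"]
RACE = ["Asian", "White"]
EDUCATION = ["elite_degree", "standard_degree"]
QUALITY = ["comparative_advantage", "baseline"]  # or an ESG criterion

CELLS = list(itertools.product(GENDER, RACE, EDUCATION, QUALITY))

# Hypothetical name pools signaling gender and race (layer c)
NAMES = {
    ("female", "Asian"): ["Mei Chen", "Yuki Tanaka"],
    ("female", "White"): ["Emily Walsh", "Anne Baker"],
    ("male", "Asian"): ["Wei Zhang", "Kenji Sato"],
    ("male", "White"): ["Greg Miller", "Tom Sullivan"],
}

def assign(investor_id):
    """Draw one factorial cell and fill in the three randomized layers."""
    gender, race, education, quality = rng.choice(CELLS)
    return {
        "investor_id": investor_id,
        "cell": (gender, race, education, quality),
        "subject_line": rng.choice(["subject_A", "subject_B"]),  # layer a
        "body_variant": quality,                                 # layer b
        "sender_name": rng.choice(NAMES[(gender, race)]),        # layer c
    }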
Outcomes: I will record all types of responses from investors:
a. Who opens the email (with the related timestamp) and the time spent reading each email;
b. Who replies to the email (with the related timestamp) and the content of the reply;
c. Other communication and investment results provided by the startup team, such as whether the startup team received investment and how many rounds of communication were needed before a first meeting or phone call (each startup team only knows the information about its own firm);
d. Which investors viewed the fictitious founders' personal or startup websites (for real startup teams with online websites, we also record which investors clicked on them).

2. Investor-side IRR experiment

I have designed a survey tool using Qualtrics (a commonly used survey software) that randomly generates hypothetical startup team profiles. Real investors who are willing to participate in this survey experiment first read a consent form at the beginning of the survey and decide whether they want to participate. After they click the button “Yes, I consent” and leave their email address, investors evaluate 16 hypothetical, randomly generated startup profiles varying in many characteristics. Investors rate how much they would like to collaborate with such a startup team and how likely the team would be to collaborate with them rather than with other investors. After the evaluation section, investors answer background questions about their demographic information and investment preferences. Lastly, we implement a “dictator game” (a donation-related experiment) at the end of the survey.

Notes:
1. Asking investors to evaluate 32 profiles, as in the previous design, proved extremely difficult because participants complained that it was very time-consuming. I therefore reduced the task to 16 profile evaluations after obtaining IRB approval.
2. After participating in this IRR experiment, some investors told me that a startup's risk level also plays an important role in their investment decisions. I therefore added one extra question asking participants to evaluate the risk level of each startup profile when recruiting investors who receive only the "matching" incentive. The risk question comes at the end of each profile-evaluation section to minimize its impact on the previously designed questions.
3. To give each participant an incentive to reveal her true preferences, I provide both a "matching" incentive (the collaborating incubators will ask matched startups to contact the participant; see Kessler, Low, and Sullivan 2019) and a "monetary" incentive (see Armona, Fuster, and Zafar 2019).
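
As an illustration, a profile randomizer of this kind might look like the following sketch; the attribute names and levels are hypothetical, and the live Qualtrics randomization logic is not reproduced here.

import random

rng = random.Random(16)

# Hypothetical attribute levels for the randomized startup profiles
ATTRIBUTES = {
    "founder_gender": ["female", "male"],
    "founder_race": ["Asian", "White"],
    "founder_age": ["20s", "30s", "40s+"],
    "education": ["elite_degree", "standard_degree"],
    "comparative_advantage": ["strong_traction", "none_stated"],
}

def draw_profile():
    """Draw one hypothetical startup profile, one level per attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

# Each investor evaluates 16 independently drawn profiles, answering the
# collaboration and matching-likelihood questions (plus the risk question
# for matching-incentive participants) after each profile.
profiles = [draw_profile() for _ in range(16)]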



“Donation-related experiment”: I designed a donation-related experiment at the end of the survey tool to test whether investors hold any taste-based discrimination against female or minority startup teams. After filling out the survey, investors see the following donation question:

“Thank you for completing the questionnaire. We will offer you a $15 Amazon Gift Card. However, you can also choose to donate a proportion to White Female Startup Club to show your encouragement and support. (Your donation decision is anonymous and will not be disclosed to anyone. We will use your donated money to purchase chocolate for one of the White female startup founders.) Please choose how much you want to donate.
(For example, if you donate $5 to the club, we will send you $10 Amazon Gift Card and use the donated $5 to purchase chocolate for a White female startup founder in our incubators to give them the anonymous encouragement.)”

Each investor decides how much money to donate (e.g., donate $3 and receive $12 in compensation). We randomize the characteristics of the startup club receiving the donation (i.e., the club name in the quoted text above, which was highlighted in orange in the original instrument). Some randomly selected investors decide whether to donate to the “White Female Startup Club,” some to the “Asian Female Startup Club” or “Asian Male Startup Club,” and some to just “our Startup Club,” which usually contains White male startup founders. After receiving the finished survey and investors' donations, we send the Amazon Gift Card code to the email address left in the survey and transfer the donated money to our collaborating incubators, who use it to buy chocolates and convey anonymous support and encouragement to the corresponding types of startup founders.
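
A short sketch of the donation-arm randomization and the payout arithmetic; the club names come from the design above, while the code itself is illustrative.

import random

rng = random.Random(15)

CLUB_ARMS = [
    "White Female Startup Club",
    "Asian Female Startup Club",
    "Asian Male Startup Club",
    "our Startup Club",  # usually contains White male startup founders
]

def donation_arm():
    """Randomly assign which club the investor may donate to."""
    return rng.choice(CLUB_ARMS)

def gift_card_value(donation, total=15.0):
    """Gift card received after donating; e.g., donate $3 -> receive $12."""
    assert 0 <= donation <= total
    return total - donation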


3. Startup-side IRR experiment

I have designed a survey tool using Qualtrics (a commonly used survey software) that generates hypothetical investor profiles. Real entrepreneurs who are willing to participate in this survey experiment first fill in some background information and then rate a number of hypothetical investor profiles (varying in numerous characteristics) on how much they would like to collaborate with that type of investor. Importantly, before rating the profiles, entrepreneurs are informed that all the profiles are hypothetical but that their answers will be used to subsequently match them to real investors, so they should answer carefully. In particular, a machine learning algorithm will provide them with the contact information and other public information of 10 real investors (e.g., email, title, VC firm), chosen based on the preferences revealed in the survey. The recommended investors are further checked manually to catch problems the algorithm did not take into consideration (for example, whether an investor has invested in a competitor of the startup). The real investors whose contact information is provided are not considered subjects, since the information (including contact information) is extracted from a widely available global investor database. This design incentivizes entrepreneurs to think carefully about their answers: the more accurate the preferences they provide, the better the recommendations the algorithm can generate. (A similar design is described in the Wharton paper “Incentivized Resume Rating: Eliciting Employer Preferences Without Deception.”) To avoid potential gender and race discrimination problems, I will not use gender or race information in the recommendation algorithm.
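
A hedged sketch of what this recommendation step could look like: the feature names and weights below are hypothetical, and neither the study's actual machine-learning algorithm nor the manual screening is reproduced here. The point of the sketch is that gender and race never enter the score.

def recommend(investors, weights, k=10):
    """Rank real investors by a preference-weighted score over
    non-demographic features and return the top k for manual review."""
    def score(inv):
        # Only non-demographic features (e.g., experience, stage focus,
        # industry fit) enter the score; gender and race are excluded.
        return sum(w * inv.get(feature, 0.0) for feature, w in weights.items())
    return sorted(investors, key=score, reverse=True)[:k]

# Example: hypothetical weights inferred from an entrepreneur's survey ratings
weights = {"years_experience": 0.4, "industry_match": 1.0, "seed_stage_focus": 0.6}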


The previously designed “Follow-up Experiment” was cancelled because it did not receive approval from the Columbia IRB.
Intervention Start Date
2020-02-01
Intervention End Date
2020-10-31

Primary Outcomes

Primary Outcomes (end points)
Investors' evaluations and behaviors
Startups' evaluations and feedback
Primary Outcomes (explanation)
Primary outcomes are direct measurements rather than synthetic outcomes.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experimental design consists of a series of field experiments.
Experimental Design Details
Outcomes:
Experiment 1: Email Correspondence Test
Tests:
First-order effects: gender, race, education, startup comparative advantages or the ESG criteria (startup quality)
Interaction effects: gender by race, gender by education, gender by quality, and other interaction effects
Heterogeneous effects: we will replicate Gornall and Strebulaev (2019)'s heterogeneous-effect analysis, specifically heterogeneous effects along investors' gender, education, fund size, age, location, etc.
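
One plausible reduced-form specification for these tests (the notation is mine; the registered specifications are in the attached analysis files):

y_i = \alpha + \beta_1 \mathrm{Female}_i + \beta_2 \mathrm{Asian}_i + \beta_3 \mathrm{EliteEdu}_i + \beta_4 \mathrm{Quality}_i + \gamma (\mathrm{Female}_i \times \mathrm{Asian}_i) + X_i'\delta + \varepsilon_i

where y_i is a response outcome for email i (e.g., a reply indicator), \beta_1 through \beta_4 capture the first-order effects, interaction terms such as \gamma capture the interaction effects, and heterogeneous effects are obtained by interacting the treatments with investor characteristics X_i.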

Experiment 2: Investor-side IRR experiment
Tests:
First-order effects: all the randomized characteristics of the startups will be tested. All the measurements from the four questions asked for each profile will be used as outcome variables. Specifically, we want to test the effects of gender, race, age, education, and comparative advantages (startup quality).
Interaction effects: gender by race, gender by education, gender by quality, gender by age, and other interaction effects.
Heterogeneous effects: if the sample size is large enough, we will check heterogeneous effects along all the demographic dimensions of the investors. I will also study heterogeneous effects based on investors' decisions after developing a new econometric estimator that exploits the extra within-individual random variation.
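
As a sketch, the profile-level ratings naturally support a specification with investor fixed effects that exploits the within-investor random variation (again, the notation is mine):

y_{ij} = \alpha_i + Z_{ij}'\beta + \varepsilon_{ij}

where i indexes investors, j indexes the 16 profiles each investor rates, Z_{ij} stacks the randomized profile characteristics, and \alpha_i is an investor fixed effect absorbing evaluator-level heterogeneity.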

Dictator game: use the donated amounts as the main outcome variables to disentangle taste-based discrimination.

Experiment 3: Startup-side IRR experiment
Tests:
First-order effects: all the randomized characteristics of the investors will be tested. All the measurements from the four questions asked for each profile will be used as outcome variables. Specifically, we want to test the effects of gender, race, education, and experience (investor quality).
Interaction effects: gender by race, gender by education, gender by quality, and other interaction effects.
Heterogeneous effects: if the sample size is large enough, we will check heterogeneous effects along all the demographic dimensions of the evaluators (e.g., gender, race, industry) as well as heterogeneous effects based on evaluators' decisions, using the recently developed econometric estimator.


Detailed regressions are described in the attached files.
Randomization Method
Randomization is implemented at the individual level by computer code.
Randomization Unit
The individual (individual-level randomization).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
20,000-30,000
Sample size: planned number of observations
Experiment 1: 20,000-30,000; Experiment 2: 50-100; Experiment 3: 900
Sample size (or number of clusters) by treatment arms
balanced, 50% treatment and 50% control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Experiment 1: 20,000-30,000; Experiment 2: 400; Experiment 3: 900
IRB

Institutional Review Boards (IRBs)

IRB Name
Columbia Morningside Campus Institutional Review Board
IRB Approval Date
2020-01-17
IRB Approval Number
AAAS6419; AAAS8362; AAAS8730
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials