Judging Judges: How VCs and Entrepreneurs Evaluate Early-Stage Innovations

Last registered on November 15, 2024

Pre-Trial

Trial Information

General Information

Title
Judging Judges: How VCs and Entrepreneurs Evaluate Early-Stage Innovations
RCT ID
AEARCTR-0014748
Initial registration date
November 01, 2024


First published
November 15, 2024, 1:12 PM EST


Locations

Primary Investigator

Affiliation
Harvard Business School

Other Primary Investigator(s)

PI Affiliation
Harvard Business School

Additional Trial Information

Status
Ongoing
Start date
2024-10-28
End date
2025-03-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Over the last two decades, venture competitions have emerged as a prominent feature of the entrepreneurial ecosystem. Major competitions like the MIT $100K Entrepreneurship Competition, MassChallenge, the Hult Prize, and TechCrunch Disrupt’s Startup Battlefield have attracted thousands of startups, including companies like Grubhub, Cloudflare, and Akamai Technologies. These contests have provided participants with resources and visibility that have contributed to their growth. Increasingly, universities are hosting competitions that go beyond traditional business plans, rewarding projects aimed at creating both financial value and positive social impact. Investors, experienced entrepreneurs, and industry professionals from diverse networks often serve as judges, bringing varied perspectives to the evaluation of these ventures. However, we know little about how evaluators from differing backgrounds and expertise align or misalign in their assessments of early-stage innovations, what they prioritize given the dual objectives of ventures achieving both financial and social returns, and how convergent or divergent perceptions among experts shape venture success.
External Link(s)

Registration Citation

Citation
Lane, Jacqueline and Miaomiao Zhang. 2024. "Judging Judges: How VCs and Entrepreneurs Evaluate Early-Stage Innovations." AEA RCT Registry. November 15. https://doi.org/10.1257/rct.14748-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We designed interventions for judges affiliated with an innovation ecosystem support organization, administered during their evaluation of early-stage ventures. Specifically, we collected past years' venture submissions and re-engineered the applications to focus on either the financial or the social value of each venture. We randomly assign four reframed venture ideas to judges from various industry and occupational backgrounds and expertise. We additionally randomize the order in which "Social Value" and "Financial Value" appear in the rubric.
Intervention (Hidden)
Judges are asked to participate in the pilot study of our partner organization's venture evaluation program. They will rate four early-stage venture ideas on a scale of 1 to 5 on: 1) Ingenuity (The venture reflects originality and innovativeness); 2) Social Value (The venture promises contribution and change to society); 3) Financial Value (The venture promises growth and profitability potential); and 4) Recommendation (Based on the criteria above, how strongly do you recommend this venture overall?).
- Each of the four ideas is randomly drawn from a pool of 48 re-engineered submissions. Specifically, an LLM first neutralizes each original submission. The LLM then reframes each of the 24 original submissions to focus on either its social or its financial value by adding one sentence to each of the {Problem, Solution, Value Proposition, Customer} sections of the venture idea. Additionally, an "Impact Statement" section is added that similarly focuses on either the financial or the social value the venture is predicted to achieve.
- The four reframed submissions a judge sees will include a mix of socially- and financially-reframed ones.
- In addition to the 21 original submissions, we also generated 3 ourselves using enterprise LLM tools. No identifiable information is disclosed or entered into any AI model during the data generation process.
- On the rubric, we will randomize the order in which 2) Social Value and 3) Financial Value are displayed to participants.
- Judges will also be asked to give the rationale leading to their recommendation, rank the given criteria, and assess the relative importance of generating social and financial impact in a venture.
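The per-judge assignment described above can be sketched as follows. This is an illustrative reconstruction, not the actual Qualtrics logic: it draws four distinct base ideas from the pool of 24 (so a judge never sees both framings of the same idea), assigns each a framing so the set mixes social and financial versions, and independently randomizes the rubric order.

```python
import random

N_IDEAS = 24          # original (neutralized) submissions
IDEAS_PER_JUDGE = 4   # each judge rates four ventures
FRAMINGS = ("financial", "social")

def assign_judge(rng: random.Random):
    """Sketch of one judge's randomization (illustrative only).

    Draws 4 distinct base ideas, frames each one, re-drawing until the
    set contains both a social and a financial version, and shuffles
    the order of the Social/Financial Value rubric items.
    """
    ideas = rng.sample(range(N_IDEAS), IDEAS_PER_JUDGE)
    while True:
        framings = [rng.choice(FRAMINGS) for _ in ideas]
        if len(set(framings)) == 2:  # ensure a mix of both framings
            break
    rubric_order = list(FRAMINGS)
    rng.shuffle(rubric_order)  # which rubric item appears first
    return list(zip(ideas, framings)), rubric_order

rng = random.Random(0)
stimuli, rubric_order = assign_judge(rng)
```

The re-draw loop is one simple way to guarantee the stated mix; a stratified draw (e.g., fixing two framings of each type) would work equally well.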
Intervention Start Date
2024-10-28
Intervention End Date
2024-12-31

Primary Outcomes

Primary Outcomes (end points)
Main DVs/sources of mechanism test constructs:
- Judges' quantitative ratings of Ingenuity, Social Value, Financial Value, Recommendation as stated in the rubric
- Judges' rank ordering of the relative importance of each criterion in their overall recommendation for each of their randomly assigned ventures
- Judges' qualitative rationale/reasoning that led them to the overall recommendation for the venture

Main IDVs:
- The random assignment for either financially- or socially-reframed venture idea (four of them for each judge)
- Judges' entrepreneurial/investment background as corporate/institutional investors, founders/entrepreneurs, angel/impact investors, operators, and specialists
Primary Outcomes (explanation)
We will construct judges' entrepreneurial/investment skills and expertise from their LinkedIn profiles, focusing on years of experience in their company affiliations and positions held, in addition to their self-disclosed primary and secondary occupational backgrounds.

We will also use NLP or other LLM tools to codify the qualitative responses from judges' rationales leading to their overall recommendation for each venture.
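A minimal placeholder for that coding step might look like the sketch below. The word lists, function name, and example rationale are hypothetical; the registered analysis will use NLP/LLM tools rather than simple keyword counts.

```python
import re

# Hypothetical vocabularies for illustration only.
SOCIAL_TERMS = {"impact", "community", "society", "equity", "environment"}
FINANCIAL_TERMS = {"revenue", "profit", "market", "margin", "scalability"}

def code_rationale(text: str) -> dict:
    """Count mentions of social vs. financial vocabulary in a rationale."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {
        "social": sum(t in SOCIAL_TERMS for t in tokens),
        "financial": sum(t in FINANCIAL_TERMS for t in tokens),
    }

codes = code_rationale("Strong market and revenue potential, modest community impact.")
```

The output here would tag two financial mentions ("market", "revenue") and two social ones ("community", "impact") for the example sentence.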

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants in this study will be recruited from our partner site's judging network for innovation competitions. They will be directed to the study recruitment script and consent form upon RSVPing for the [redacted] Judging Discussion & Networking session that our partner organization runs. They will have the option to complete the study before, after, or during the scheduled live virtual session. Even if they decide not to attend the event, they are still invited to join the study. The experiment involves evaluating reframed venture submissions, which have been restructured using large language models to emphasize either financial or social impact dimensions.
Experimental Design Details
A recruitment script and consent form are embedded in a separate RSVP Qualtrics form, apart from our partner organization's regular programming:
• Participants will see the recruitment script after they RSVP for the [redacted] Judging Discussion & Networking session
• Consent forms will be provided online on the first page of the Qualtrics survey. Potential participants will be asked if they are located outside of the U.S. If they are, they will be directed to a consent form with the GDPR addendum.
• Data collection begins with a series of questions on: name, LinkedIn URL, and self-identified entrepreneurial experience as an institutional/corporate investor, angel, impact investor, founder/entrepreneur, operator, specialist, or other.
• They will also rate four early-stage venture ideas on a scale of 1 to 5 on: 1) Ingenuity (The venture reflects originality and innovativeness); 2) Social Value (The venture promises contribution and change to society); 3) Financial Value (The venture promises growth and profitability potential); and 4) Recommendation (Based on the criteria above, how strongly do you recommend this venture overall?).
o Each of the four ideas is randomly drawn from a pool of 48 re-engineered submissions. Specifically, an LLM reframes each of the 24 original submissions to focus on either its social or its financial value. The four reframed submissions a judge sees will include a mix of socially- and financially-reframed ones.
o In addition to the 21 original submissions, we also generated 3 ourselves using enterprise-version LLM tools. No identifiable information is disclosed or entered into any AI model during the data generation process.
o On the rubric, we will randomize the order in which 2) Social Value and 3) Financial Value are displayed to participants.
o They will also be asked to give the rationale leading to their recommendation, rank the given criteria, and assess the relative importance of generating social and financial impact in a venture.
Randomization Method
Qualtrics randomization tool
Randomization Unit
Individual decisions
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Each judge will make 4 recommendation decisions in response to the study stimuli (either financially or socially engineered venture submissions), along with scores on the other rubric criteria. Each judge constitutes their own cluster. We plan to recruit up to 150 judges.

Sample size: planned number of observations
600 individual decisions from 150 individual judges.
Sample size (or number of clusters) by treatment arms
This study features a within-subject design estimated with mixed-effects models. Of the 600 venture assignments to judges, 300 will be financially reframed and the remaining 300 socially reframed.
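The planned totals above (150 judges x 4 decisions = 600 observations, split 300/300 across framings) can be sanity-checked with a small sketch. The balancing mechanism shown (one shuffled pool of labels sliced across judges) is illustrative; the actual Qualtrics randomizer may balance differently.

```python
import random

N_JUDGES = 150
DECISIONS_PER_JUDGE = 4

# One balanced pool of 600 framing labels, shuffled and sliced into
# per-judge blocks of 4. Only verifies the registered totals are
# attainable; not the study's actual randomization procedure.
labels = ["financial"] * 300 + ["social"] * 300
rng = random.Random(42)
rng.shuffle(labels)

assignments = {
    j: labels[j * DECISIONS_PER_JUDGE:(j + 1) * DECISIONS_PER_JUDGE]
    for j in range(N_JUDGES)
}

n_decisions = sum(len(v) for v in assignments.values())
n_financial = sum(v.count("financial") for v in assignments.values())
```

With the judge as the clustering unit, each of the 150 dictionary keys corresponds to one cluster of 4 decisions.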
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board (IRB) of the Harvard University-Area
IRB Approval Date
2024-10-28
IRB Approval Number
IRB24-1384

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials