Judging Judges: How VCs and Entrepreneurs Evaluate Early-Stage Innovations

Last registered on November 15, 2024

Pre-Trial

Trial Information

General Information

Title
Judging Judges: How VCs and Entrepreneurs Evaluate Early-Stage Innovations
RCT ID
AEARCTR-0014748
Initial registration date
November 01, 2024

First published
November 15, 2024, 1:12 PM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Harvard Business School

Other Primary Investigator(s)

PI Affiliation
Harvard Business School

Additional Trial Information

Status
Ongoing
Start date
2024-10-28
End date
2025-03-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Over the last two decades, venture competitions have become a prominent feature of the entrepreneurial ecosystem. Major competitions like the MIT $100K Entrepreneurship Competition, MassChallenge, the Hult Prize, and TechCrunch Disrupt’s Startup Battlefield have attracted thousands of startups, including companies like Grubhub, Cloudflare, and Akamai Technologies, and have provided participants with resources and visibility that contributed to their growth. Increasingly, universities are hosting competitions that go beyond traditional business plans, rewarding projects aimed at creating both financial value and positive social impact. Investors, experienced entrepreneurs, and industry professionals from diverse networks often serve as judges, bringing varied perspectives to the evaluation of these ventures. However, we know little about how evaluators from differing backgrounds and areas of expertise align or misalign in their assessments of early-stage innovations, what they prioritize given the dual objectives of achieving both financial and social returns, and how convergent or divergent perceptions among experts shape venture success.
External Link(s)

Registration Citation

Citation
Lane, Jacqueline and Miaomiao Zhang. 2024. "Judging Judges: How VCs and Entrepreneurs Evaluate Early-Stage Innovations." AEA RCT Registry. November 15. https://doi.org/10.1257/rct.14748-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We designed an intervention for judges affiliated with an innovation ecosystem support organization, delivered during their evaluation of early-stage ventures. Specifically, we collected past years' venture submissions and re-engineered each application to emphasize either the financial or the social value of the venture. We randomly assign four reframed venture ideas to each judge; judges come from a range of industry and occupational backgrounds and levels of expertise. We additionally randomize the order in which "social value" and "financial value" appear in the rubric. The assignment logic is sketched below.
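The registration does not include implementation code; in practice the assignment is performed by the Qualtrics randomizer (see Randomization Method below). As a minimal sketch of the logic, with all function and variable names illustrative rather than taken from the study:

```python
import random

FRAMINGS = ["financial", "social"]

def assign_stimuli(venture_pool, n_ventures=4, seed=None):
    """Draw n_ventures submissions for one judge, pick a framing for each,
    and randomize the order of the two value criteria in the rubric."""
    rng = random.Random(seed)
    ventures = rng.sample(venture_pool, n_ventures)
    assignments = [{"venture": v, "framing": rng.choice(FRAMINGS)}
                   for v in ventures]
    rubric_order = rng.sample(["social value", "financial value"], 2)
    return assignments, rubric_order

# Example: one judge's draw from a pool of past submissions
pool = [f"venture_{i:03d}" for i in range(40)]
stimuli, order = assign_stimuli(pool, seed=42)
```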
Intervention Start Date
2024-10-28
Intervention End Date
2024-12-31

Primary Outcomes

Primary Outcomes (end points)
Main DVs/sources of mechanism test constructs:
- Judges' quantitative ratings of Ingenuity, Social Value, Financial Value, and Recommendation, as stated in the rubric
- Judges' rank ordering of the relative importance of each criterion to their overall recommendation for each of their randomly assigned ventures
- Judges' qualitative rationale/reasoning that led them to the overall recommendation for each venture

Main IDVs:
- Random assignment to either a financially- or socially-reframed venture idea (four per judge)
- Judges' entrepreneurial/investment background as corporate/institutional investors, founders/entrepreneurs, angel/impact investors, operators, or specialists
Primary Outcomes (explanation)
We will construct measures of judges' entrepreneurial/investment skills and expertise from their LinkedIn profiles, focusing on years of experience across their company affiliations and positions held, in addition to their self-disclosed primary and secondary occupational backgrounds. One possible aggregation is sketched below.
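As a minimal sketch of how years of experience might be aggregated from scraped position records (the record fields such as `category` and `start_year` are hypothetical, not the study's actual schema):

```python
from datetime import date

# Hypothetical position records parsed from a LinkedIn profile
positions = [
    {"title": "Partner", "category": "corporate/institutional investor",
     "start_year": 2015, "end_year": None},   # None = current role
    {"title": "Co-founder", "category": "founder/entrepreneur",
     "start_year": 2009, "end_year": 2015},
]

def years_by_category(positions, today=None):
    """Sum years of experience per occupational category across positions."""
    current_year = (today or date.today()).year
    totals = {}
    for p in positions:
        end = p["end_year"] or current_year
        totals[p["category"]] = totals.get(p["category"], 0) + (end - p["start_year"])
    return totals

print(years_by_category(positions, today=date(2024, 11, 1)))
# {'corporate/institutional investor': 9, 'founder/entrepreneur': 6}
```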

We will also use NLP or other LLM tools to codify the qualitative rationales judges give for their overall recommendation for each venture; one possible approach is sketched below.
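The registration does not name specific tools. One hedged possibility is zero-shot classification of each rationale against the rubric dimensions; the labels below are illustrative, and the actual coding scheme may differ:

```python
from transformers import pipeline

# Zero-shot classification as one possible way to codify rationales
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["financial value", "social value", "ingenuity"]
rationale = ("The venture has a clear revenue model, but I worry the "
             "community impact claims are not well supported.")

result = classifier(rationale, candidate_labels=labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```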

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants in this study will be recruited from our partner site's judging network for innovation competitions. They will be directed to the study recruitment script and consent form upon RSVPing for the [redacted] Judging Discussion & Networking session programmed by our partner organization. They will have the option to complete the study before, during, or after the scheduled live virtual session. Even if they decide not to attend the event, they are still invited to join the study. The experiment involves evaluating reframed venture submissions, which have been restructured using large language models to emphasize either their financial or their social impact dimensions; an illustrative sketch of such reframing appears below.
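The registration does not disclose the reframing prompt or model. As a minimal sketch assuming an OpenAI-style chat API, with the model name and prompt wording as placeholders rather than the study's actual choices:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reframe(submission_text, framing):
    """Rewrite a venture submission to emphasize one value dimension.
    The prompt is illustrative, not the study's actual prompt."""
    assert framing in ("financial", "social")
    prompt = (f"Rewrite the following venture submission so it emphasizes "
              f"its {framing} value. Preserve all factual claims; change "
              f"only the framing.\n\n{submission_text}")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```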
Experimental Design Details
Not available
Randomization Method
Qualtrics randomization tool
Randomization Unit
Individual decisions
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Each judge will make 4 recommendation decisions in response to the study stimuli (either financially or socially engineered venture submissions), along with scores on the other rubric criteria. Each judge constitutes their own cluster. We plan to recruit up to 150 judges.

Sample size: planned number of observations
600 individual decisions, from 150 individual judges.
Sample size (or number of clusters) by treatment arms
This study features a within-subject design estimated with mixed-effects models; a sketch of the estimation appears below. Of the 600 ventures randomly assigned to judges, 300 will be financially engineered and the remaining 300 socially engineered.
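As a minimal sketch of the mixed-effects estimation, with a random intercept per judge to absorb the clustering of four decisions within each judge (column names are illustrative):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative long-format data: one row per judge-venture decision,
# with columns judge_id, framing ("financial"/"social"), and
# recommendation (numeric rubric score)
df = pd.read_csv("judging_decisions.csv")  # hypothetical file

model = smf.mixedlm("recommendation ~ C(framing)",
                    data=df, groups=df["judge_id"])
fit = model.fit()
print(fit.summary())
```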
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
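This field is left blank in the registration. Purely as an illustration of how a clustering-adjusted MDE could be computed, under assumed values (the ICC, power, and alpha below are assumptions, not registered parameters):

```python
from statsmodels.stats.power import TTestIndPower

icc = 0.10                     # assumed intraclass correlation
m = 4                          # decisions per judge
deff = 1 + (m - 1) * icc       # design effect = 1.3
n_effective = 600 / deff       # ~462 effective observations

# MDE in standard-deviation units for a two-sided test at 80% power,
# with observations split evenly across the two framings
mde = TTestIndPower().solve_power(nobs1=n_effective / 2, ratio=1.0,
                                  power=0.8, alpha=0.05)
print(f"Effective N = {n_effective:.0f}; MDE = {mde:.2f} SD")
```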
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board (IRB) of the Harvard University-Area
IRB Approval Date
2024-10-28
IRB Approval Number
IRB24-1384