The Selective Tailwind Effect of A.I. on Startups: Predictions and Anomalies

Last registered on May 04, 2026

Pre-Trial

Trial Information

General Information

Title
The Selective Tailwind Effect of A.I. on Startups: Predictions and Anomalies
RCT ID
AEARCTR-0018524
Initial registration date
April 30, 2026


First published
May 04, 2026, 8:09 AM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
IE University

Other Primary Investigator(s)

PI Affiliation
IE University

Additional Trial Information

Status
In development
Start date
2026-05-04
End date
2026-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
What role does AI play in entrepreneurial decision making? We have partially explored this question by analyzing the impact of AI predictive tools on the performance of a large sample of US startups, exploiting the release of Google Analytics 4 (GA4), which introduced AI predictive tools especially useful for mobile app developers. We found some evidence that, post-GA4 release, startups experienced a boost in customer acquisition. The positive premium is largest for startups with skilled human capital. These preliminary findings suggest that AI predictive tools complement skilled human capital in formulating new testable business hypotheses, and are especially relevant for the detection of breakthroughs.

To further explore these results obtained from large-scale data, we plan to conduct a lab experiment by organizing an in-person Hackathon involving a broad spectrum of students enrolled at schools in the Madrid area. The goal is to better understand the role of predictive AI tools for anomaly detection in the production of breakthrough innovations, as well as the interaction of these tools with human capital.
External Link(s)

Registration Citation

Citation
Conti, Annamaria and Danilo Messinese. 2026. "The Selective Tailwind Effect of A.I. on Startups: Predictions and Anomalies." AEA RCT Registry. May 04. https://doi.org/10.1257/rct.18524-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The goal of this lab experiment is to test whether the use of AI for anomaly detection can improve the ability to identify innovation opportunities, and whether these tools complement human capital. The treatment in this experiment is a nudge about the importance of anomaly detection for uncovering novel ideas and business opportunities, delivered through a short video and an ad hoc AI platform that provides standard predictive tools plus anomaly detection. The control group is shown a video about the importance of predictive AI for innovating and uses the same platform as the treated group, but without the anomaly detection tool.

More specifically, each group will access a platform developed by the research team to analyze data from a B2C software startup. This platform closely replicates Google Analytics 4 (GA4). Participants will be given a dataset simulating user behavior for a consumer wellness and sleep app as of 2026, including variables such as user sessions, revenue, engagement, and customer acquisition. In addition to overall trends, the dataset contains anomalies that, if detected, could inspire innovative solutions. Notably, as part of the treatment, the treated group will have access to a default option within the platform that highlights data anomalies.
Intervention Start Date
2026-05-05
Intervention End Date
2026-05-15

Primary Outcomes

Primary Outcomes (end points)
Level of innovativeness of business plan submitted by participants
Primary Outcomes (explanation)
Investors will judge several aspects of each project's innovativeness on ad hoc scales provided by the organizers.

Secondary Outcomes

Secondary Outcomes (end points)
Viability of the business plan
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The goal of this study is to evaluate whether AI-based anomaly detection enhances the ability to identify innovation opportunities and to explore whether these tools complement human capital. Participants will take part in the IE VentureMind AI Hackathon and will be tasked with analyzing data from a B2C startup, with the objective of proposing innovations based on their analysis.

Participants will be provided with an ad hoc data-analysis platform developed by the research team containing time-series data on key performance metrics in 2026.

Experimental Design Details
We expect the group that receives the anomaly detection treatment to outperform the control group in terms of our primary outcomes, but with variance depending on the participants' human capital. To measure participants' human capital, we will administer a Qualtrics survey collecting information about participants' backgrounds and including questions aimed at assessing their ability to formulate business theories and hypotheses.

We will recruit at least 70 participants per group (the final number depends on how many participants show up at the hackathon). Participants are students (bachelor's, master's, MBA, and PhD) located in the Madrid area. We will reward the three most innovative projects with a monetary prize.

The Hackathon will last two days. During the first day, participants will be asked to act as the CEO of the Arora company and produce a business plan, which might revise the initial board's plan. To produce the business plan, the participants will have to analyze data displayed on an ad hoc AI platform.

Randomization Method
Randomization into the treated and control groups described above will be performed by Qualtrics.
Randomization Unit
Participants in the IE VentureMind AI Hackathon
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
At least 70 units per group, but the final number depends on how many people attend the Hackathon event.
Sample size (or number of clusters) by treatment arms
At least 70 units per group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
MDE for k = 2 arms, N = 140 (80% power, α = 0.05):
Per-group n: 70
Degrees of freedom: df1 = k − 1 = 1; df2 = N − k = 138
Critical F (approx.): F_crit ≈ 3.92 (for F(1, 138) at α = 0.05)
Noncentrality parameter: λ = f² × N ≈ 8.06
Minimum detectable effect size: f_min ≈ 0.24 (Cohen's f)
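The figures above can be reproduced (up to rounding) with a short stdlib-only sketch. Because k = 2, the one-way ANOVA F-test is equivalent to a two-sample t-test, so the MDE can be approximated in Cohen's d with the standard normal-approximation formula and converted to Cohen's f via f = d / 2. The formula is a textbook approximation, not necessarily the exact procedure the investigators used.

```python
# Sanity check of the registered minimum detectable effect size (MDE).
# Inputs (n = 70 per group, 80% power, alpha = 0.05) come from the
# registration; the normal-approximation formula is an assumption.
import math
from statistics import NormalDist

n_per_group = 70
alpha, power = 0.05, 0.80

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
z_power = NormalDist().inv_cdf(power)          # quantile for target power

# MDE in Cohen's d for two equal groups: (z_a + z_b) * sqrt(1/n1 + 1/n2)
d_min = (z_alpha + z_power) * math.sqrt(2 / n_per_group)
f_min = d_min / 2  # for k = 2, Cohen's f = d / 2

# Noncentrality parameter lambda = f^2 * N at the detectable effect
lam = f_min ** 2 * (2 * n_per_group)

print(f"f_min ~ {f_min:.2f}")  # matches the registered f_min of ~0.24
```

The implied λ from this approximation is close to, though slightly below, the registered 8.06, since the registry value appears to be computed from the rounded f = 0.24.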
IRB

Institutional Review Boards (IRBs)

IRB Name
IE Research Committee
IRB Approval Date
2026-03-20
IRB Approval Number
IERC-54/2025-2026