The Selective Tailwind effect of Artificial Intelligence

Last registered on January 22, 2025

Pre-Trial

Trial Information

General Information

Title
The Selective Tailwind effect of Artificial Intelligence
RCT ID
AEARCTR-0015201
Initial registration date
January 20, 2025

First published
January 22, 2025, 8:37 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
IE University (Business School)

Other Primary Investigator(s)

PI Affiliation
IE University (Business School)

Additional Trial Information

Status
In development
Start date
2025-01-20
End date
2025-03-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
What role does AI play in entrepreneurial decision making? We have partially explored this question by analyzing the impact of AI predictive tools on the performance of a large sample of US startups. We exploited the random release of Google Analytics 4 (GA4), which introduced AI predictive tools that are especially useful for mobile app developers. Leveraging this shock in a difference-in-differences model (sketched below), we found a boost in customer acquisition following the GA4 release. However, the positive premium is driven by the upper tail of the treatment effect distribution rather than by marginal improvements. These effects are largest for innovative startups led by highly skilled founders. Shedding light on the mechanisms, we show that GA4 boosts the productivity of A/B testing tools. Overall, these findings suggest that AI predictive tools complement skilled human capital in formulating new testable business hypotheses, which is especially relevant for the detection of breakthroughs.

To further explore our results obtained from large-scale data, we have built a platform for data analysis and plan to conduct a lab experiment on Prolific. The goal is to better understand the role of predictive AI tools for anomaly detection in the production of breakthrough innovations, as well as the interaction of these tools with entrepreneurs' human capital.
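For concreteness, the snippet below is a minimal sketch of a two-way fixed effects difference-in-differences specification of the kind described in the abstract. The data file and all column names (treated, post, customer_acquisition, startup_id, period) are illustrative assumptions, not the study's actual variables.

```python
# Minimal sketch of a two-way fixed effects difference-in-differences model;
# file and column names are illustrative assumptions, not the actual schema.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("startup_panel.csv")  # hypothetical startup-by-period panel

# Startup and period fixed effects, with the treated x post interaction
# capturing the post-GA4-release effect on customer acquisition;
# standard errors clustered at the startup level.
did = smf.ols(
    "customer_acquisition ~ treated:post + C(startup_id) + C(period)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["startup_id"]})

print(did.summary().tables[1])
```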

External Link(s)

Registration Citation

Citation
Conti, Annamaria and Danilo Messinese. 2025. "The Selective Tailwind effect of Artificial Intelligence." AEA RCT Registry. January 22. https://doi.org/10.1257/rct.15201-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The goal of this lab experiment is to test whether the use of AI for anomaly detection can improve the ability to identify innovation opportunities and whether or not these tools complement human capital. Our experiment encompasses two treatments. The first treatment is a nudge about the importance of anomaly detection for uncovering novel ideas and business opportunities, delivered through a short video and an ad-hoc AI tool (details below). The second treatment is a nudge about how to leverage causal reasoning to produce innovations. This nudge will be delivered through a short video.

These two treatments will result in four groups of participants (2x2 design):
1. Group 1 will receive only the anomaly detection treatment
2. Group 2 will receive only the causal reasoning treatment
3. Group 3 will receive both the anomaly detection and causal reasoning treatments
4. Group 4 is the control group

Each group will access a platform developed by the research team to analyze data from a B2C software startup. This platform closely replicates Google Analytics 4 (GA4). Participants will be given a dataset simulating user behavior for a B2C food delivery app over 2024, including variables such as user sessions, revenue, engagement, customer acquisition, and feature adoption. In addition to overall trends, the dataset contains anomalies that, if detected, could inspire innovative solutions. Notably, Groups 1 and 3 will have access to a default option within the platform that highlights data anomalies as part of the treatment.
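As an illustration only, the sketch below shows one simple way such a default anomaly-highlighting option could flag unusual points in a simulated 2024 time series (a rolling z-score rule). The file name, column names, window, and threshold are assumptions, not the platform's actual implementation.

```python
# Illustrative rolling z-score rule for highlighting anomalies in a daily metric;
# window, threshold, and column names are assumptions, not the platform's logic.
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 14, z_threshold: float = 3.0) -> pd.Series:
    """Return a boolean mask marking points whose rolling z-score exceeds the threshold."""
    rolling_mean = series.rolling(window, min_periods=window).mean()
    rolling_std = series.rolling(window, min_periods=window).std()
    z_scores = (series - rolling_mean) / rolling_std
    return z_scores.abs() > z_threshold

metrics = pd.read_csv("simulated_app_metrics_2024.csv", parse_dates=["date"])
metrics["revenue_anomaly"] = flag_anomalies(metrics["revenue"])
print(metrics.loc[metrics["revenue_anomaly"], ["date", "revenue"]])
```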

Intervention Start Date
2025-01-23
Intervention End Date
2025-03-14

Primary Outcomes

Primary Outcomes (end points)
- Number of innovations uncovered by participants
- Level of innovativeness of innovations uncovered by participants
Primary Outcomes (explanation)
The level of innovativeness is measured by asking GenAI tools (ChatGPT 4o, Claude, Gemini) and human experts to rate, on a scale from 0 to 100, each innovation listed by participants in the experiment.
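As a sketch of how these ratings could be combined, the snippet below averages the 0-100 scores from the GenAI raters and human experts into one innovativeness score per idea. The example ratings and the simple averaging rule are illustrative assumptions, not the registered scoring protocol.

```python
# Hypothetical aggregation of 0-100 innovativeness ratings into one score per idea;
# the example ratings and averaging rule are assumptions for illustration only.
import pandas as pd

ratings = pd.DataFrame({
    "idea_id": [1, 1, 1, 2, 2, 2],
    "rater":   ["chatgpt_4o", "claude", "human_expert", "chatgpt_4o", "claude", "human_expert"],
    "score":   [72, 65, 80, 35, 40, 30],  # made-up ratings on the 0-100 scale
})

# One innovativeness score per listed innovation: the mean across all raters.
innovativeness = ratings.groupby("idea_id")["score"].mean()
print(innovativeness)
```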

Secondary Outcomes

Secondary Outcomes (end points)
Data-driven innovation
Secondary Outcomes (explanation)
We ask participants to indicate, on a scale from 1 to 5 (1 = not at all, 5 = totally yes), the extent to which they leveraged the data, as opposed to their experience and prior knowledge, when producing innovative ideas.

Experimental Design

Experimental Design
The goal of this study is to evaluate whether AI-based anomaly detection enhances the ability to identify innovation opportunities and to explore whether or not these tools complement human capital. Participants will be recruited via Prolific and tasked with analyzing data from a B2C startup, with the objective of proposing innovations based on their analysis.

Participants will be provided with an ad-hoc data analysis platform, developed by the research team, containing time series of key performance metrics for 2024.
Experimental Design Details
Not available
Randomization Method
Randomization into the four groups described above will be performed by Qualtrics, which will randomly assign each participant to one of the four experimental groups.
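As an offline illustration of the balanced assignment Qualtrics is expected to perform (not the Qualtrics randomizer itself), a minimal sketch follows; the group labels and seed are assumptions for the example.

```python
# Offline sketch of balanced random assignment to the four experimental groups;
# the actual assignment is performed by Qualtrics.
import numpy as np

GROUPS = ["anomaly_detection", "causal_reasoning", "both_treatments", "control"]

def assign(n_participants: int, seed: int = 2025) -> list[str]:
    """Shuffle a balanced block of group labels across participants."""
    rng = np.random.default_rng(seed)
    labels = np.tile(GROUPS, n_participants // len(GROUPS) + 1)[:n_participants]
    return list(rng.permutation(labels))

assignments = assign(280)
print({g: assignments.count(g) for g in GROUPS})
```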
Randomization Unit
Individuals recruited on Prolific
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
70 units per treatment group for a total of 280 units. We will target approximately 360 participants to include a buffer in case of attrition.
Sample size (or number of clusters) by treatment arms
70 units per treatment group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
G*Power, assuming an effect size f = 0.2.
F tests – ANOVA: fixed effects, omnibus, one-way. Post hoc: compute achieved power.
Input: effect size f = 0.2; α err prob = 0.05; total sample size = 280; number of groups = 4.
Output: noncentrality parameter λ = 11.2; critical F = 2.64; numerator df = 3; denominator df = 276; power (1 − β err prob) = 0.81
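The calculation can be cross-checked in Python with statsmodels, assuming the same one-way ANOVA setup:

```python
# Cross-check of the G*Power computation above: one-way ANOVA, Cohen's f = 0.2,
# alpha = 0.05, total N = 280, four groups.
from statsmodels.stats.power import FTestAnovaPower

power = FTestAnovaPower().power(effect_size=0.2, nobs=280, alpha=0.05, k_groups=4)
print(f"Achieved power: {power:.2f}")  # ~0.81, matching the G*Power output
```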
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IE University
IRB Approval Date
2024-11-25
IRB Approval Number
IERC-15/2024-2025