AI and strategic decision-making (generation experiment)

Last registered on February 16, 2024

Pre-Trial

Trial Information

General Information

Title
AI and strategic decision-making (generation experiment)
RCT ID
AEARCTR-0011942
Initial registration date
August 18, 2023


First published
August 24, 2023, 6:11 AM EDT


Last updated
February 16, 2024, 4:54 AM EST


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
University of Michigan
PI Affiliation
Bocconi University

Additional Trial Information

Status
In development
Start date
2024-02-16
End date
2025-09-29
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We explore the role that AI may play in the strategic decision-making process.
External Link(s)

Registration Citation

Citation
Csaszar, Felipe, Harsh Ketkar and Hyunjin Kim. 2024. "AI and strategic decision-making (generation experiment)." AEA RCT Registry. February 16. https://doi.org/10.1257/rct.11942-3.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
See details below.
Intervention Start Date
2024-02-16
Intervention End Date
2024-02-23

Primary Outcomes

Primary Outcomes (end points)
See details below.
Primary Outcomes (explanation)
For each business plan, we will measure the following evaluation outcomes:

1. Five factors evaluated on a scale of 1 (low) to 10 (high), analyzed both individually and as an index: (1) writing quality of the business plan; (2) innovation and value proposition; (3) execution plan; (4) potential to invest in this idea; (5) viability. We also plan to explore the standard deviation of these ratings across participants.

2. A binary indicator of whether the participant would accept the startup into the accelerator: "Would you accept this startup into the accelerator program?" Response: yes/no.

3. A binary indicator of whether the participant would be interested in being introduced to the startup: "Would you be interested in being introduced to the startup?" Response: yes/no.

4. Likelihood of the participant investing in the startup: "How likely would you be to invest in the startup? (Click on the slider and then select the number)". Response: range from 0 to 100. Analyzed using both the raw number and a binary indicator above the median.

We plan to analyze treatment effects by regressing each outcome on an indicator of whether the business plan was a GPT-generated version, running this specification with business plan fixed effects and participant fixed effects to increase precision.
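The specification above can be sketched as follows. This is a minimal illustration on simulated data, not the authors' code: the variable names (`outcome`, `gpt_version`, `plan_id`, `participant_id`), the simulated effect size, and the choice of statsmodels are all assumptions made for the example.

```python
# Illustrative sketch of the registered specification: regress each outcome
# on a GPT-version indicator with business-plan and participant fixed
# effects. All names and numbers below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_plans = 250, 10  # matches the planned sample size

# Simulate the design: each participant evaluates all ten business plans,
# and each evaluation is randomly assigned to the original or GPT version.
df = pd.DataFrame([
    {"participant_id": p, "plan_id": b, "gpt_version": rng.integers(0, 2)}
    for p in range(n_participants) for b in range(n_plans)
])
# Hypothetical outcome with a simulated treatment effect of 0.5.
df["outcome"] = 5 + 0.5 * df["gpt_version"] + rng.normal(0, 1, len(df))

# Business-plan and participant fixed effects enter as dummy sets.
model = smf.ols(
    "outcome ~ gpt_version + C(plan_id) + C(participant_id)", data=df
).fit()
print(model.params["gpt_version"])
```

The fixed effects absorb plan-level and rater-level differences, so the treatment coefficient is identified from within-plan, within-participant variation in which version was shown.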

We also plan to analyze heterogeneity in treatment effects along the following dimensions, by interacting each with the treatment indicator of whether the business plan was a GPT-generated version:

1. An indicator of whether the business plan was accepted or rejected from the accelerator program

2. Writing quality of the original version of the business plan (coded as a binary indicator for whether the original contains many grammatical and spelling errors)
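The heterogeneity analysis above can be sketched the same way. Again this is an illustrative simulation with hypothetical names: `accepted` stands in for the plan-level moderator (acceptance into the accelerator), and the moderator's main effect is omitted because it would be absorbed by the business-plan fixed effects.

```python
# Illustrative sketch of the heterogeneity specification: interact the
# GPT-version indicator with a plan-level moderator. Names and effect
# sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(250), 10),
    "plan_id": np.tile(np.arange(10), 250),
})
df["gpt_version"] = rng.integers(0, 2, len(df))
df["accepted"] = (df["plan_id"] < 5).astype(int)  # plan-level trait
df["outcome"] = (
    5 + 0.3 * df["gpt_version"]
    + 0.4 * df["gpt_version"] * df["accepted"]  # simulated heterogeneity
    + rng.normal(0, 1, len(df))
)

# The accepted main effect is collinear with the plan fixed effects,
# so only the treatment indicator and the interaction are included.
model = smf.ols(
    "outcome ~ gpt_version + gpt_version:accepted"
    " + C(plan_id) + C(participant_id)",
    data=df,
).fit()
print(model.params["gpt_version:accepted"])
```

The interaction coefficient measures how the GPT-version effect differs between accepted and rejected plans; the same pattern applies to the writing-quality moderator.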

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
See details below.
Experimental Design Details
Not available
Randomization Method
For each business plan, each participant is randomly assigned to either the original or the GPT-generated version, using Qualtrics. The order in which the business plans are displayed to each participant is also randomized using Qualtrics.
Randomization Unit
Each business plan evaluation by a participant (i.e., each of the ten business plans a participant evaluates is randomly assigned to either the original or GPT-generated version)
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
2500 evaluations by 250 investors
Sample size: planned number of observations
2500 evaluations by 250 investors
Sample size (or number of clusters) by treatment arms
1250 evaluations assigned to original versions of business plans, 1250 evaluations assigned to GPT-generated versions of business plans
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
INSEAD
IRB Approval Date
2023-07-31
IRB Approval Number
2023-51A