AI and strategic decision-making

Last registered on October 17, 2023

Pre-Trial

Trial Information

General Information

Title
AI and strategic decision-making
RCT ID
AEARCTR-0012296
Initial registration date
October 13, 2023

First published
October 17, 2023, 1:40 PM EDT

Locations

Region

Primary Investigator

Affiliation
INSEAD

Other Primary Investigator(s)

PI Affiliation
University of Michigan
PI Affiliation
Bocconi University

Additional Trial Information

Status
In development
Start date
2023-10-13
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We explore the role that AI may play in the strategic decision-making process.
External Link(s)

Registration Citation

Citation
Csaszar, Felipe, Harsh Ketkar and Hyunjin Kim. 2023. "AI and strategic decision-making." AEA RCT Registry. October 17. https://doi.org/10.1257/rct.12296-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
See details below.
Intervention (Hidden)
Each participant evaluates a set of ten business plans. For each business plan, the intervention randomly assigns the participant to one of two versions: the original version submitted to a leading European accelerator program, or a GPT-generated version of the business plan. All plans are anonymized.
Intervention Start Date
2023-10-13
Intervention End Date
2023-11-18

Primary Outcomes

Primary Outcomes (end points)
See details below.
Primary Outcomes (explanation)
For each business plan, we will measure the following evaluation outcomes:

1. Five factors, each rated on a scale of 1 (low) to 10 (high) and analyzed both individually and as an index (one possible index construction is sketched after this list): (1) the writing of the business plan; (2) innovation and value proposition; (3) execution plan; (4) potential to invest in the idea; (5) viability. We also plan to explore the standard deviation of these ratings across participants.

2. A binary indicator of whether the participant would accept the startup into the accelerator: "Would you accept this startup into the accelerator program?" Response: yes/no.

3. A binary indicator of whether the participant would be interested in being introduced to the startup: "Would you be interested in being introduced to the startup?" Response: yes/no.

4. Likelihood of the participant investing in the startup: "How likely would you be to invest in the startup? (Click on the slider and then select the number)". Response: a range from 0 to 100, analyzed using both the raw number and a binary indicator for values above the median.
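The registry does not specify how the five-factor index is constructed. A minimal sketch of one common approach, the mean of standardized ratings, assuming a hypothetical long-format dataset with one row per evaluation and illustrative column names:

import pandas as pd

# Hypothetical long-format data: one row per business-plan evaluation,
# with the five factor ratings as columns (names assumed for illustration).
df = pd.read_csv("evaluations.csv")
factors = ["writing", "innovation", "execution", "invest_potential", "viability"]

# Standardize each rating across all evaluations, then average the
# z-scores to form a single evaluation-level index.
z = (df[factors] - df[factors].mean()) / df[factors].std()
df["factor_index"] = z.mean(axis=1)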

We plan to estimate treatment effects by regressing each outcome on an indicator for whether the business plan was a GPT-generated version, also running this specification with business-plan and participant fixed effects to increase precision.
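Purely as an illustration (the registry includes no analysis code), this specification could look as follows, reusing the hypothetical dataframe above with an assumed treatment column gpt_version equal to 1 for GPT-generated versions:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")  # hypothetical data as above

# Baseline: regress the outcome on the treatment indicator.
baseline = smf.ols("outcome ~ gpt_version", data=df).fit()

# Same specification with business-plan and participant fixed effects
# to increase precision; C() treats the identifiers as categorical.
fe = smf.ols("outcome ~ gpt_version + C(plan_id) + C(participant_id)",
             data=df).fit()

print(baseline.params["gpt_version"], fe.params["gpt_version"])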


We also plan to analyze heterogeneity in treatment effects along the following dimensions, interacting each with the treatment indicator for whether the business plan was a GPT-generated version (a sketch follows this list):

1. An indicator of whether the business plan was accepted or rejected by the accelerator program

2. Writing quality of the original version of the business plan (coded as a binary indicator for whether it contains many grammatical and spelling errors)

3. A binary indicator of whether the GPT-generated version diverged substantially in its solution from the original version of the business plan

4. Binary and continuous measures of how much the GPT-generated version's score differs from that of the original version (and of other GPT-generated alternatives), as assessed by GPT

5. An indicator of which prompting method was used to produce the chosen GPT-generated version.
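A sketch of the heterogeneity specification under the same assumptions, using dimension 1 as the moderator (a hypothetical accepted column equal to 1 for accepted plans):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")  # hypothetical data as above

# Interact the treatment indicator with the moderator; the coefficient
# on gpt_version:accepted is the differential treatment effect for
# accepted plans. Participant fixed effects are included; business-plan
# fixed effects would absorb the plan-level moderator's main effect.
het = smf.ols("outcome ~ gpt_version * accepted + C(participant_id)",
              data=df).fit()
print(het.params["gpt_version:accepted"])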

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
See details below.
Experimental Design Details
Participants are each assigned to evaluate ten business plans for a leading startup accelerator program in Europe. We source these plans by obtaining ten original business plans submitted to the program (five accepted, five rejected) and generating alternative GPT-generated versions of each.

For each of the ten plans, we randomly assign each participant to either the original version or a GPT-generated version; the plans are displayed in randomized order.

We recruit approximately 300 investors from an online platform. We recruit as many participants as possible who meet the following pre-screening criteria: (1) they are employed in business management and administration, finance, information technology, or science, technology, engineering, and mathematics; (2) they have experience in at least one of the following types of investments: angel syndicates, venture capital, or private equity; and (3) they have an approval rate of 91-100 on the platform. We also screen out any participants who fail a simple knowledge check relating to start-up investing. If we do not obtain sufficient participants, we will drop criterion (1) and recruit from this less restrictive pool. We may also expand the pool to participants in the United Kingdom if we cannot obtain sufficient participants.

After this experiment, we also plan to recruit investors through direct contact, which we will pre-register separately when the design is finalized.
Randomization Method
Each participant is randomly assigned to the original or GPT-generated version of each business plan using Qualtrics, and the order in which the business plans are displayed to each participant is also randomized using Qualtrics.
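Randomization is implemented inside Qualtrics; purely as an illustration, equivalent logic in Python (plan identifiers and the per-participant seeding are hypothetical):

import random

PLANS = [f"plan_{i:02d}" for i in range(1, 11)]  # the ten business plans

def assign(participant_id: int):
    rng = random.Random(participant_id)  # hypothetical per-participant seed
    order = PLANS[:]
    rng.shuffle(order)  # randomize the display order of the plans
    # Independently assign each plan to its original or GPT-generated version.
    return [(plan, rng.choice(["original", "gpt"])) for plan in order]

print(assign(participant_id=42))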
Randomization Unit
Each business plan evaluation by a participant (i.e., each of the ten business plans a participant evaluates is randomly assigned to either the original or GPT-generated version)
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
3000 evaluations by 300 investors from an online platform
Sample size: planned number of observations
3000 evaluations by 300 investors from an online platform
Sample size (or number of clusters) by treatment arms
1500 evaluations assigned to original versions of business plans and 1500 evaluations assigned to GPT-generated versions of business plans
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
INSEAD
IRB Approval Date
2023-07-31
IRB Approval Number
2023-51A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials