AI and strategic decision-making

Last registered on October 17, 2023

Pre-Trial

Trial Information

General Information

Title
AI and strategic decision-making
RCT ID
AEARCTR-0012296
Initial registration date
October 13, 2023

The initial registration date corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 17, 2023, 1:40 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Not available to the public.

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
University of Michigan
PI Affiliation
Bocconi University

Additional Trial Information

Status
In development
Start date
2023-10-13
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We explore the role that AI may play in the strategic decision-making process.
External Link(s)

Registration Citation

Citation
Csaszar, Felipe, Harsh Ketkar, and Hyunjin Kim. 2023. "AI and strategic decision-making." AEA RCT Registry. October 17. https://doi.org/10.1257/rct.12296-1.0
Sponsors & Partners

Not available to the public.
Experimental Details

Interventions

Intervention(s)
See details below.
Intervention Start Date
2023-10-13
Intervention End Date
2023-11-18

Primary Outcomes

Primary Outcomes (end points)
See details below.
Primary Outcomes (explanation)
For each business plan, we will measure the following evaluation outcomes:

1. Five factors evaluated on a scale of 1 (low) to 10 (high), analyzed both individually and as an index: (1) writing quality of the business plan; (2) innovation and value proposition; (3) execution plan; (4) potential to invest in the idea; (5) viability. We also plan to explore the standard deviation of these ratings across participants.

2. A binary indicator of whether the participant would accept the startup into the accelerator: "Would you accept this startup into the accelerator program?" Response: yes/no.

3. A binary indicator of whether the participant would be interested in being introduced to the startup: "Would you be interested in being introduced to the startup?" Response: yes/no.

4. Likelihood of the participant investing in the startup: "How likely would you be to invest in the startup? (Click on the slider and then select the number)". Response: range from 0 to 100. Analyzed using both the raw number and a binary indicator for values above the median.

We plan to analyze treatment effects by regressing each outcome on an indicator of whether the business plan shown was the GPT-generated version, also running this specification with business-plan and participant fixed effects to increase precision.
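A minimal sketch of this specification, using simulated data in place of the real evaluations (variable names such as `quality_index` and `gpt_version` are illustrative, not the authors'):

```python
# Sketch of the pre-registered specification on simulated data; the authors'
# actual estimation code and variable names are not public.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_investors, n_plans = 300, 10  # 3,000 evaluations, as registered

df = pd.DataFrame({
    "investor": np.repeat(np.arange(n_investors), n_plans),
    "plan": np.tile(np.arange(n_plans), n_investors),
})
# Each evaluation randomly shows the original (0) or GPT-generated (1) version.
df["gpt_version"] = rng.integers(0, 2, size=len(df))
# Hypothetical outcome: a quality index with a simulated +0.5 treatment effect.
df["quality_index"] = (
    5 + 0.5 * df["gpt_version"] + rng.normal(0, 1.5, size=len(df))
)

# Baseline regression of the outcome on the treatment indicator...
base = smf.ols("quality_index ~ gpt_version", data=df).fit()
# ...and the same specification with business-plan and participant fixed
# effects to increase precision, as described in the registration.
fe = smf.ols(
    "quality_index ~ gpt_version + C(plan) + C(investor)", data=df
).fit()
print(base.params["gpt_version"], fe.params["gpt_version"])
```

Both regressions recover the simulated treatment effect; the fixed-effects version absorbs plan-level and investor-level variation in ratings.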


We also plan to analyze heterogeneity in treatment effects along the following dimensions, by interacting each with the treatment indicator of whether the business plan was a GPT-generated version:

1. An indicator of whether the business plan was accepted or rejected from the accelerator program

2. Writing quality of the original version of the business plan (coded as a binary indicator for whether the original contains many grammatical and spelling errors)

3. A binary indicator of whether the GPT-generated version diverged substantially in its solution from the original version of the business plan

4. Binary and continuous indicators of whether the GPT-generated version received scores substantially different from the original (and from other GPT alternatives), as assessed by GPT

5. A binary indicator of which prompting method was used to produce the chosen GPT-generated version.
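Each of these heterogeneity analyses takes the same form: interact the treatment indicator with the moderator. A sketch of one such specification on simulated data (the moderator name `poor_writing` and all effect sizes are illustrative):

```python
# Sketch of one heterogeneity specification: the treatment indicator is
# interacted with a hypothetical binary moderator (whether the original
# plan was poorly written). Data and effect sizes are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "gpt_version": rng.integers(0, 2, size=n),
    "poor_writing": rng.integers(0, 2, size=n),  # hypothetical moderator
})
# Simulated outcome: GPT rewriting helps more when the original is poorly
# written (+0.8 interaction effect on top of a +0.2 main effect).
df["quality_index"] = (
    5 + 0.2 * df["gpt_version"]
    + 0.8 * df["gpt_version"] * df["poor_writing"]
    + rng.normal(0, 1.0, size=n)
)

m = smf.ols("quality_index ~ gpt_version * poor_writing", data=df).fit()
# The interaction coefficient estimates how the treatment effect differs
# for poorly written originals.
print(m.params["gpt_version:poor_writing"])
```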

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
See details below.
Experimental Design Details
Not available
Randomization Method
Each participant is randomly assigned to the original or GPT version of each business plan using Qualtrics. The order of each business plan displayed to each participant is randomized using Qualtrics.
Randomization Unit
Each business plan evaluation by a participant (i.e., each of the ten business plans a participant evaluates is randomly assigned to either the original or GPT-generated version)
Was the treatment clustered?
No
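The randomization scheme above can be sketched as follows (the study uses Qualtrics; this reproduces the logic, not the implementation):

```python
# For each participant, every one of the ten business plans is independently
# assigned to its original or GPT-generated version, and the display order
# of the plans is shuffled. Mirrors the registered Qualtrics randomization.
import random

def assign_participant(plan_ids, rng):
    order = list(plan_ids)
    rng.shuffle(order)  # randomized display order per participant
    # Independent version draw for each plan the participant evaluates.
    return [(p, rng.choice(["original", "gpt"])) for p in order]

schedule = assign_participant(range(10), random.Random(42))
print(schedule)
```

Because each evaluation is its own randomization unit, a participant typically sees a mix of original and GPT-generated versions across the ten plans.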

Experiment Characteristics

Sample size: planned number of clusters
3,000 evaluations by 300 investors from an online platform
Sample size: planned number of observations
3,000 evaluations by 300 investors from an online platform
Sample size (or number of clusters) by treatment arms
1,500 evaluations assigned to original versions of business plans, 1,500 evaluations assigned to GPT-generated versions of business plans
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
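This field is blank in the registration. For illustration only, a conventional two-sample MDE can be computed from the registered sample sizes under standard assumptions (alpha = 0.05, 80% power, outcome standardized to unit variance, and ignoring clustering and within-investor correlation):

```python
# Illustrative MDE calculation; NOT from the registration, which leaves
# this field blank. Assumes two independent samples, unit-variance outcome,
# and no clustering adjustment.
from math import sqrt
from statistics import NormalDist

def mde(n_per_arm, alpha=0.05, power=0.80, sigma=1.0):
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sigma * sqrt(2 / n_per_arm)

print(round(mde(1500), 3))  # ~0.10 standard deviations with 1,500 per arm
```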
IRB

Institutional Review Boards (IRBs)

IRB Name
INSEAD
IRB Approval Date
2023-07-31
IRB Approval Number
2023-51A