AI-Augmented Crowdfunding Campaign: An Online Experiment

Last registered on December 24, 2023

Pre-Trial

Trial Information

General Information

Title
AI-Augmented Crowdfunding Campaign: An Online Experiment
RCT ID
AEARCTR-0012732
Initial registration date
December 20, 2023


First published
December 21, 2023, 8:07 AM EST


Last updated
December 24, 2023, 12:35 AM EST


Locations

Region

Primary Investigator

Affiliation
University of Minnesota, Twin Cities

Other Primary Investigator(s)

PI Affiliation
PI Affiliation

Additional Trial Information

Status
In development
Start date
2023-12-19
End date
2024-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We test the effectiveness of AI augmentation in crowdfunding campaign preparation. We use a randomized controlled trial design in which participants are randomly assigned to review various pairs of crowdfunding campaigns. These campaign info pages differ in their degree of AI augmentation. Analyzing participants' preferences, we examine whether AI augmentation of a crowdfunding campaign positively influences their propensity to contribute financially.
External Link(s)

Registration Citation

Citation
Ai, Wei, Qiaozhu Mei and Teng Ye. 2023. "AI-Augmented Crowdfunding Campaign: An Online Experiment." AEA RCT Registry. December 24. https://doi.org/10.1257/rct.12732-1.1
Experimental Details

Interventions

Intervention(s)
We deploy an RCT on the online platform Prolific, in which respondents are randomly assigned to different versions of a survey. The entire RCT will be survey-based; we present the interventions and collect participants' preferences.
Intervention (Hidden)
Intervention Start Date
2023-12-19
Intervention End Date
2024-12-31

Primary Outcomes

Primary Outcomes (end points)
Participants' contribution preference towards crowdfunding campaigns
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants invited to the survey will be randomly presented with pairs of campaigns to evaluate. We designed three levels of AI augmentation to evaluate the effectiveness of incorporating AI in the crowdfunding context.
Experimental Design Details
There are three experimental conditions in our experiment:
Original introduction. We collect the original campaign introductions from a large crowdfunding platform and use them as our experimental materials.

AI-augmented introduction. Leveraging insights from previous literature (e.g., Karlan et al. 2011) and empirical analysis, we deploy generative AI models to rewrite the campaign introductions, with the aim of improving their likelihood of receiving financial support.

AI-extended introduction. Introductions augmented by AI commonly exhibit more expressive elements than their original counterparts, so they tend to be significantly longer and are not directly comparable to the original versions in terms of length. To mitigate this potential confound and ensure a more controlled comparison, we introduce a subset of introductions termed 'AI-extended introductions.' In this approach, we employ generative AI to paraphrase the original introductions, increasing their length while deliberately avoiding the introduction of new content. This allows us to examine the effect of text length independent of content variation.

For each campaign, we will generate three pairs of comparisons, namely, the original introduction versus its AI-augmented version, the original version versus its AI-extended version, and the AI-augmented version versus the AI-extended version.

In this study, each participant will be presented with two randomly drawn pairs of campaign introductions. Specifically, for each participant, we first randomly draw one campaign that was originally funded and another campaign that originally secured no funding. For each campaign, we then randomly draw one pair from the three comparison types. The two campaigns will be presented in random order, and within each pair, the two variations of the same original campaign will be shown side by side in random order. After each pair, we will collect participants' preferences and their predictions of which campaign other people would generally consider more likely to receive financial support.
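The per-participant assignment procedure described above can be sketched in code as follows. This is a minimal illustration only; the campaign IDs, pool sizes, and function names are hypothetical and not taken from the registration:

```python
import random

# Hypothetical campaign pools (IDs are illustrative only).
FUNDED = ["F01", "F02", "F03"]
UNFUNDED = ["U01", "U02", "U03"]

# The three comparison types described above.
PAIR_TYPES = [
    ("original", "ai_augmented"),
    ("original", "ai_extended"),
    ("ai_augmented", "ai_extended"),
]

def assign_pairs(rng=random):
    """Draw the two comparison pairs shown to one participant."""
    # One originally funded campaign and one originally unfunded
    # campaign, presented in random order.
    campaigns = [rng.choice(FUNDED), rng.choice(UNFUNDED)]
    rng.shuffle(campaigns)
    assignment = []
    for campaign in campaigns:
        # Randomly draw one of the three comparison types, then
        # randomize the side-by-side order within the pair.
        versions = list(rng.choice(PAIR_TYPES))
        rng.shuffle(versions)
        assignment.append((campaign, tuple(versions)))
    return assignment
```

Seeding the generator (e.g., `assign_pairs(random.Random(0))`) makes the draw reproducible for testing, while production use would rely on the survey platform's randomizer.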

Experimental procedures
To summarize, if a participant opts to take the survey and passes the attention check, they will complete a one-time online survey with the following steps:

Step 1: Attention check
Step 2: The first pair of campaign descriptions is presented side by side.
Step 3: The participant will indicate their potential donation preference, estimate which campaign more people would be willing to donate to, and explain why they believe so.
Step 4: The second pair of campaign descriptions will be presented side by side.
Step 5: The participant will indicate their potential donation preference for the second pair of campaigns, estimate which campaign more people would be willing to donate to, and explain why they believe so.
Step 6: Participants will complete a brief questionnaire covering items such as age, gender, and how many times they have donated in the past year.

The participants will then be thanked and debriefed, which concludes the study.
Randomization Method
Randomization done by algorithm embedded in the survey
Randomization Unit
Pair of campaigns
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We plan to recruit 200-400 participants.
Sample size: planned number of observations
We plan to recruit 200-400 participants.
Sample size (or number of clusters) by treatment arms
We will have about 200-600 observations per cell.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
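The registration does not report a minimum detectable effect size. Purely as an illustration, a standard two-proportion power calculation for a binary choice outcome might look like the sketch below; the baseline share, significance level, and power are all assumed values, not parameters from the study:

```python
from math import sqrt
from statistics import NormalDist

def mde_two_proportions(n_per_arm, p0=0.5, alpha=0.05, power=0.8):
    """Approximate minimum detectable difference in choice shares
    between two arms, using a normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    # Standard error of the difference at the assumed baseline share.
    se = sqrt(2 * p0 * (1 - p0) / n_per_arm)
    return (z_alpha + z_beta) * se
```

Under these assumed parameters, 200 observations per cell would yield an MDE of roughly 0.14 (14 percentage points), shrinking as cell sizes grow toward the upper end of the planned range.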
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Minnesota
IRB Approval Date
2023-12-20
IRB Approval Number
STUDY00021022

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials