Deciphering the Impact of AI-Generated Fake Reviews on Consumer Behavior and Willingness to Pay

Last registered on October 07, 2024

Pre-Trial

Trial Information

General Information

Title
Deciphering the Impact of AI-Generated Fake Reviews on Consumer Behavior and Willingness to Pay
RCT ID
AEARCTR-0014491
Initial registration date
September 27, 2024

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 07, 2024, 7:05 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
The Hong Kong University of Science and Technology (Guangzhou)

Other Primary Investigator(s)

PI Affiliation
The Hong Kong University of Science and Technology (Guangzhou)

Additional Trial Information

Status
Ongoing
Start date
2024-03-01
End date
2024-11-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This experimental study investigates consumers' ability to identify AI-generated fake reviews in online marketplaces and examines the factors influencing this ability and consumers' willingness to pay (WTP). The experiment is divided into a preliminary phase and a main experimental phase. In the preliminary phase, participants are shown product information along with reviews that are either AI-generated or genuine, and then assess the reviews' authenticity. The main experiment manipulates the proportion of fake reviews and the presence of return insurance across scenarios to measure their effects on WTP. The outcomes are expected to contribute to understanding the dynamics of consumer trust and decision-making in the face of evolving digital marketing strategies.
External Link(s)

Registration Citation

Citation
WANG, Ke and Xu ZHANG. 2024. "Deciphering the Impact of AI-Generated Fake Reviews on Consumer Behavior and Willingness to Pay." AEA RCT Registry. October 07. https://doi.org/10.1257/rct.14491-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-07-01
Intervention End Date
2024-10-31

Primary Outcomes

Primary Outcomes (end points)
Ability to Identify AI-generated Reviews, Cognitive Reflection Test (CRT) Score, Willingness to Pay (WTP)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This experiment seeks to understand consumers' ability to detect AI-generated fake reviews and how this ability affects their economic behavior, specifically their willingness to pay (WTP). The study begins with a preliminary phase in which participants evaluate the authenticity of reviews for different products; this phase establishes a baseline for consumers' discernment ability. The main experiment then varies the proportion of fake to genuine reviews and the presence of return insurance to observe their impact on WTP. The experiment concludes with a cognitive reflection test (CRT) to measure participants' reflective thinking, and these scores are correlated with participants' susceptibility to misinformation.

Experimental Design Details
Not available
Randomization Method
Randomization will be conducted on an online survey platform.
Randomization Unit
Individual.
Was the treatment clustered?
No
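For reference, the following is a minimal sketch of how individual-level assignment to the six main-experiment cells (three fake-review proportions crossed with two return-insurance conditions) could be implemented. It is an illustration only: the registration states that randomization is carried out on an online survey platform, and the variable names and the balanced-block scheme below are assumptions rather than the registered procedure.

```python
import random

# Illustrative individual-level randomization into the 3 x 2 factorial design.
# Cell labels mirror the registered treatment arms; names are assumptions.
FAKE_REVIEW_LEVELS = ["low", "medium", "high"]
INSURANCE_LEVELS = ["present", "absent"]
CELLS = [(f, i) for f in FAKE_REVIEW_LEVELS for i in INSURANCE_LEVELS]  # 6 cells


def assign_participants(n_participants: int, per_cell: int = 100, seed: int = 42):
    """Block-randomize so each cell receives at most `per_cell` individuals."""
    rng = random.Random(seed)
    slots = CELLS * per_cell          # one slot per planned participant (600 total)
    rng.shuffle(slots)                # random order of cell assignments
    return dict(enumerate(slots[:n_participants]))


if __name__ == "__main__":
    assignments = assign_participants(n_participants=600)
    print(assignments[0])  # e.g. ('medium', 'absent')
```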

Experiment Characteristics

Sample size: planned number of clusters
Preliminary Experiment: 150 adults
Main Experiment: 600-1200 adults.
Sample size: planned number of observations
Preliminary Experiment: 150 adults. Main Experiment: 600-1200 adults.
Sample size (or number of clusters) by treatment arms
Preliminary Experiment:
As a baseline setting with no varied conditions, all 150 participants will undergo the same treatment, serving as a control group to establish initial discernment ability.

Main Experiment:
Low Proportion of Fake Reviews, Insurance Present: 100 participants
Low Proportion of Fake Reviews, Insurance Absent: 100 participants
Medium Proportion of Fake Reviews, Insurance Present: 100 participants
Medium Proportion of Fake Reviews, Insurance Absent: 100 participants
High Proportion of Fake Reviews, Insurance Present: 100 participants
High Proportion of Fake Reviews, Insurance Absent: 100 participants
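Purely for illustration of what these per-arm sample sizes imply, the sketch below approximates a minimum detectable effect size for a pairwise comparison of two arms of 100 participants each. The assumed test (two-sided, two-sample t-test on mean WTP), significance level, and power are assumptions for this sketch and are not part of the registration.

```python
# Illustrative (not pre-registered) power calculation: smallest standardized
# difference in mean WTP detectable between two arms of 100 participants each,
# assuming alpha = 0.05 (two-sided) and 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
mde = analysis.solve_power(effect_size=None, nobs1=100, alpha=0.05,
                           power=0.80, ratio=1.0, alternative="two-sided")
print(f"Minimum detectable effect size (Cohen's d): {mde:.2f}")  # roughly 0.40
```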
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number