The impact of fake customer reviews

Last registered on October 08, 2021

Pre-Trial

Trial Information

General Information

Title
The impact of fake customer reviews
RCT ID
AEARCTR-0008343
Initial registration date
October 08, 2021

First published
October 08, 2021, 2:25 PM EDT

Locations

Primary Investigator

Affiliation
The Behaviouralist

Other Primary Investigator(s)

PI Affiliation
University of Southern California and NBER
PI Affiliation
The Behaviouralist
PI Affiliation
University of Oxford and Technology Policy Institute

Additional Trial Information

Status
Completed
Start date
2020-02-03
End date
2021-10-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Although fake online customer reviews have become prevalent on platforms such as TripAdvisor, eBay, and Amazon, little is known about how these reviews influence consumer behavior. This paper provides the first causal estimates of the effects of fake reviews on individual consumption choices. We conduct an incentive-compatible online experiment with a nationally representative sample of UK respondents (n = 10,000). Participants are asked to browse a platform resembling Amazon and to choose a product. We randomly allocate different participants to variants of the platform with different types of fake reviews. Our analysis of the experimental data yields three key findings. First, fake reviews make consumers more likely to choose poor-quality products. Second, the effect of fake reviews is smaller for those who do not trust customer reviews. Third, we show that educational interventions can help reduce the effects of fake reviews.
External Link(s)

Registration Citation

Citation
Akesson, Jesper et al. 2021. "The impact of fake customer reviews." AEA RCT Registry. October 08. https://doi.org/10.1257/rct.8343
Experimental Details

Interventions

Intervention(s)
Study participants are asked to complete a shopping task on a platform resembling Amazon, and there is a chance that they will receive the product that they choose (i.e., we ship it to their home). All participants are shown a ‘search page’ that displays five similar products. While all five products have the same sale price, one has been classified as a Don’t Buy product by the consumer protection organization Which?, one has been classified as a Best Buy product by Which?, and the remaining three have received mediocre product ratings.

We randomize participants into one of six experimental groups:
- Group 1 (control): participants are shown only informative reviews (i.e., reviews whose assessments are positively correlated with product quality).
- Group 2: the same reviews as Group 1, but the Don’t Buy product has inflated star ratings, distributed in the way that is typical of products with fake reviews (i.e., mostly 5- and 1-star ratings).
- Group 3: the same information as Group 2, plus fake, overly positive written reviews on the product page of the Don’t Buy product.
- Group 4: the same as Group 3, except that the fake written reviews are more easily identifiable as fake.
- Group 5: the same information as Group 3, plus a platform endorsement for the Don’t Buy product.
- Group 6: the same information as Group 5, plus an educational intervention that warns participants that some customer reviews are fake and offers tips on how to spot them. The intervention appears at the top of the product search page and does not target a particular product.
Intervention Start Date
2020-02-03
Intervention End Date
2020-02-28

Primary Outcomes

Primary Outcomes (end points)
Our main outcome of interest in this experiment is the product choice that participants made. More specifically, we are interested in whether they chose a Best Buy, Don’t Buy, or a mediocre product.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
In addition to our main outcome of interest, we collected socio-demographic data on participants, such as gender, age, education, income, and the UK region in which they live. We also collected data on participants’ online shopping habits, whether they trust reviews on Amazon, the share of Amazon reviews they believe are fake, and whether they think fake reviews on Amazon are easy to spot. These variables are used to verify the validity of the randomization and to conduct heterogeneity analyses.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment was conducted in February 2020, and took place within an online survey (coded using Qualtrics). We recruited a nationally representative sample of 10,000 UK adults to take part in the experiment. Participants began by responding to questions about their socio-demographic characteristics and online shopping behavior, and then completed a shopping task in an environment resembling the Amazon platform.

We randomize participants into one of six experimental groups:
- Group 1 (control): participants are shown only informative reviews (i.e., reviews whose assessments are positively correlated with product quality).
- Group 2: the same reviews as Group 1, but the Don’t Buy product has inflated star ratings, distributed in the way that is typical of products with fake reviews (i.e., mostly 5- and 1-star ratings).
- Group 3: the same information as Group 2, plus fake, overly positive written reviews on the product page of the Don’t Buy product.
- Group 4: the same as Group 3, except that the fake written reviews are more easily identifiable as fake.
- Group 5: the same information as Group 3, plus a platform endorsement for the Don’t Buy product.
- Group 6: the same information as Group 5, plus an educational intervention that warns participants that some customer reviews are fake and offers tips on how to spot them. The intervention appears at the top of the product search page and does not target a particular product.

On average, the survey took 11 minutes to complete, and participants were paid £2 in exchange for their participation. The sample was recruited via the panel provider Dynata, and respondents could complete the survey via desktop or mobile devices.
Experimental Design Details
Randomization Method
Randomization was done using Qualtrics' built-in randomization program.
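Qualtrics handles the assignment internally; purely as an illustrative sketch (not the authors' code), individual-level randomization into six equal-probability arms could look like the following, where the seed and participant identifiers are hypothetical:

```python
import random
from collections import Counter

def assign_groups(participant_ids, n_groups=6, seed=42):
    """Assign each participant to one of n_groups arms with equal
    probability -- simple individual-level randomization."""
    rng = random.Random(seed)
    return {pid: rng.randrange(1, n_groups + 1) for pid in participant_ids}

assignments = assign_groups(range(10_000))
arm_counts = Counter(assignments.values())  # roughly 1,667 per arm, varying by chance
```

With 10,000 participants, each arm receives about 1,667 individuals in expectation; the exact counts fluctuate by chance, as in the realized arm sizes reported in this registration.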
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
n/a
Sample size: planned number of observations
10,000 individuals
Sample size (or number of clusters) by treatment arms
1,648 individuals in Group 1, 1,667 individuals in Group 2, 1,647 individuals in Group 3, 1,671 individuals in Group 4, 1,693 individuals in Group 5, 1,662 individuals in Group 6
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
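The registration leaves this field blank. Purely as an illustration (our assumptions, not the authors'), the minimum detectable difference in choice shares between two arms of roughly 1,665 participants each, at 5% significance and 80% power, and assuming a hypothetical 20% baseline share choosing the Don’t Buy product, would be about 4 percentage points:

```python
from math import sqrt
from statistics import NormalDist

# All parameter values below are illustrative assumptions, not from the registration.
n_per_arm = 1665        # approximate realized arm size
p_baseline = 0.20       # hypothetical baseline share choosing the Don't Buy product
alpha, power = 0.05, 0.80

z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
z_beta = z.inv_cdf(power)            # ~0.84 for 80% power

# Standard two-sample MDE for a difference in proportions,
# using the baseline variance in both arms.
mde = (z_alpha + z_beta) * sqrt(p_baseline * (1 - p_baseline) * 2 / n_per_arm)
print(round(mde, 3))                 # ~0.039, i.e. about 4 percentage points
```

Under different baseline shares the detectable difference changes, but with arms of this size most effects above a few percentage points would be detectable.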
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There are documents in this trial that are unavailable to the public; access may be requested through the Registry.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
February 28, 2020, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
February 28, 2020, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
n/a
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
9,988 individuals
Final Sample Size (or Number of Clusters) by Treatment Arms
1,648 individuals in Group 1, 1,667 individuals in Group 2, 1,647 individuals in Group 3, 1,671 individuals in Group 4, 1,693 individuals in Group 5, 1,662 individuals in Group 6
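As a quick consistency check (ours, not part of the registration), the realized arm sizes are statistically indistinguishable from equal 1-in-6 allocation: a hand-computed chi-square goodness-of-fit statistic is far below the 5% critical value of 11.07 for 5 degrees of freedom.

```python
# Realized arm sizes as reported in the registration.
arm_sizes = [1648, 1667, 1647, 1671, 1693, 1662]
total = sum(arm_sizes)                # 9,988 participants
expected = total / len(arm_sizes)     # ~1,664.7 per arm under equal allocation

# Chi-square goodness-of-fit statistic against equal allocation.
chi2 = sum((obs - expected) ** 2 / expected for obs in arm_sizes)
print(total, round(chi2, 2))          # 9988 0.87 -- well below 11.07 (df = 5, alpha = 0.05)
```

The small statistic is consistent with the individual-level random assignment described in the design.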
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials