The impact of qualitative reviews in online markets: Empirical and experimental evidence on statistical discrimination

Last registered on March 24, 2026

Pre-Trial

Trial Information

General Information

Title
The impact of qualitative reviews in online markets: Empirical and experimental evidence on statistical discrimination
RCT ID
AEARCTR-0015282
Initial registration date
May 22, 2025

First published
May 27, 2025, 7:04 AM EDT

Last updated
March 24, 2026, 12:57 PM EDT

Locations

Primary Investigator

Affiliation
University of Cambridge

Other Primary Investigator(s)

PI Affiliation
University of Cambridge
PI Affiliation
University of Cambridge

Additional Trial Information

Status
In development
Start date
2026-03-30
End date
2026-07-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We investigate the role of customer reviews and host demographics in statistical discrimination within the sharing economy (specifically in online rental markets). Using a controlled experiment in an Airbnb-like setting, we measure how a host's race, a host's gender, and customer reviews interact to affect accommodation demand. We create fictitious listings using scraped data from Airbnb and systematically vary host characteristics across three primary dimensions in a fully crossed 2x2x2 factorial design: Host Race (Black/White), Host Gender (Man/Woman), and a Review factor (High/Low). To isolate specific mechanisms of review-based discrimination, the exact nature of the Review factor varies across three between-participant treatments, manipulating either review quantity, positive informativeness, or negative informativeness. Our experimental design uses a forced-choice pairwise mechanism across three budget blocks (Low, Mid, High). To ensure perfect orthogonality and counterbalancing, the pairings are drawn from a comprehensive property map, with participants assigned to one of 56 block-randomised survey versions. This approach allows us to estimate the causal main effects of each attribute, as well as their interactions, to understand whether specific types of high-quality reviews can mitigate intersectional demographic penalties. Our findings will provide insights for platform design to reduce discrimination in the sharing economy.
External Link(s)

Registration Citation

Citation
Amano-Patiño, Noriko, Konstantinos Ioannidis and James Morris. 2026. "The impact of qualitative reviews in online markets: Empirical and experimental evidence on statistical discrimination." AEA RCT Registry. March 24. https://doi.org/10.1257/rct.15282-2.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-03-30
Intervention End Date
2026-07-31

Primary Outcomes

Primary Outcomes (end points)
Whether target property was chosen
Primary Outcomes (explanation)
Each participant will be presented with 33 pairs of fictitious properties across three budget blocks (Low, Mid, High) and asked to select their most preferred property in each pair. Out of these 33 rounds, 28 are the primary experimental rounds containing the fully counterbalanced 2x2x2 attribute variations. The remaining 5 rounds are fixed filler/attention-check rounds. For the analysis, the data from the 28 experimental rounds will be reshaped into a "long format," resulting in 56 observations per participant (two competing properties per round). Our primary outcome is a binary indicator (0 or 1) for whether a specific property variant was selected by the participant.
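The reshaping step described above can be sketched as follows. This is a minimal illustration with hypothetical column names (the registry does not specify the data schema): each round's pair of competing properties becomes two rows, with a binary indicator for the selected variant.

```python
import pandas as pd

# Hypothetical wide-format data: one row per round, listing the two
# competing property variants and the participant's pick ("A" or "B").
wide = pd.DataFrame({
    "participant": [1, 1, 1],
    "round": [1, 2, 3],
    "prop_A": ["white_man_highrev", "black_woman_lowrev", "white_woman_highrev"],
    "prop_B": ["black_man_lowrev", "white_man_highrev", "black_woman_highrev"],
    "choice": ["A", "B", "A"],
})

# Stack the two properties in each round into separate rows ("long format")
# and flag whether that specific property variant was the one selected.
long = wide.melt(
    id_vars=["participant", "round", "choice"],
    value_vars=["prop_A", "prop_B"],
    var_name="slot", value_name="property",
)
long["chosen"] = (long["slot"].str[-1] == long["choice"]).astype(int)
long = long.drop(columns=["slot", "choice"]).sort_values(["participant", "round"])
```

With the full design, 28 experimental rounds per participant yield 28 × 2 = 56 long-format rows per participant, exactly one of each pair flagged as chosen.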

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
General structure of the experiment

The experiment will run online on Prolific. After obtaining informed consent, participants will first report details about their most recent rental experience. The main task consists of 33 rounds divided into three budget blocks. In each round, participants are presented with two fictitious properties and must choose the one they would prefer to rent. To encourage truthful revelation of preferences and mitigate social desirability bias, participants are incentivised with bonus payments based on how closely their choices align with the modal choice (the option most frequently selected by other participants). The experiment concludes with a post-experimental survey comprising an Implicit Association Test to measure implicit biases and standard demographic questions.
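The modal-choice incentive can be sketched as below. The payment amount and function names are hypothetical (the registry does not state the bonus schedule); the sketch assumes the simplest version of the rule, in which a participant earns a fixed bonus for each round where their choice matches the option most frequently selected by other participants.

```python
from collections import Counter

def modal_bonus(choices_by_round, participant_choices, bonus_per_match=0.10):
    """Hypothetical bonus rule: pay a fixed amount for each round in which
    the participant's choice matches the modal (most frequent) choice
    across all participants. Amount per match is illustrative only."""
    bonus = 0.0
    for rnd, all_choices in choices_by_round.items():
        modal = Counter(all_choices).most_common(1)[0][0]
        if participant_choices.get(rnd) == modal:
            bonus += bonus_per_match
    return bonus

# Example: participant matches the modal choice in round 1 but not round 2.
example_bonus = modal_bonus(
    choices_by_round={1: ["A", "A", "B"], 2: ["B", "B", "A"]},
    participant_choices={1: "A", 2: "A"},
)
```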

Treatments

Our experimental design consists of three between-participant treatments. In all treatments, we use a within-subjects orthogonal design where participants evaluate 28 pairs of properties across three primary dimensions.
• Treatment 1 (Quantity): We vary the host race (minority/non-minority), host gender (man/woman), and review quantity (low/high, keeping the quality of reviews fixed).
• Treatment 2 (Positive Informativeness): We vary the host race, host gender, and the informativeness of reviews (low/high, keeping the number of reviews fixed) when all reviews are positive.
• Treatment 3 (Negative Informativeness): We vary the host race, host gender, and the informativeness of reviews (low/high, keeping the number of reviews fixed) when one of the reviews is negative.

Participants are randomly assigned to one of the three treatments. To prevent participants from seeing the exact same property twice while ensuring that all combinations of traits are tested against each other, the 28 experimental rounds are constructed using a predefined "property map". To achieve perfect counterbalancing, participants within each treatment are randomly assigned to one of 56 distinct participant types. This rotation ensures that property attributes are orthogonal to the round number and budget tier, minimising order effects.
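The assignment to the 56 counterbalanced survey versions can be sketched as a block randomisation: within every block of 56 consecutive participants in a treatment, each version appears exactly once, in a random order. This is a minimal illustration of the mechanism described above; the function name and seeding are assumptions, not the registered implementation.

```python
import random

N_VERSIONS = 56  # distinct counterbalanced survey versions per treatment

def assign_versions(n_participants, seed=0):
    """Block-randomise participants to survey versions: each block of 56
    consecutive arrivals receives every version exactly once, shuffled."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = list(range(N_VERSIONS))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

# For 400 participants per treatment: 7 full blocks (392) plus 8 extras,
# so every version is used 7 or 8 times.
versions = assign_versions(400)
```

Blocking in this way keeps version counts as balanced as possible even if recruitment stops mid-block, which supports the orthogonality of attributes to round number and budget tier.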
Experimental Design Details
Not available
Randomization Method
Computerized randomization
Randomization Unit
Individual randomisation
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1,200 individuals (400 per treatment arm)
Sample size: planned number of observations
Each participant makes 28 valid experimental choices, and each choice is analysed as a pair of property-level observations, so the effective number of observations is 400 * 28 * 2 = 22,400 per treatment (67,200 in total).
Sample size (or number of clusters) by treatment arms
400 individuals per treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We base our sample size on the hardest-to-detect main effect identified in our pilot (the demographic penalties, which showed a roughly 3.0% difference in selection rates). We simulate data using the 28-round, long-format structure and the group means and standard deviations from the pilot. Running the full LPM on synthetic datasets across a grid of sample sizes, we determine that N = 400 participants per treatment arm are required to achieve 90% statistical power to detect a main-effect coefficient of 0.032 at α = 0.05.
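The simulation-based power calculation can be illustrated with a simplified Monte-Carlo sketch. This is not the registered procedure: it replaces the full LPM with a two-proportion z-test on i.i.d. draws and ignores within-participant clustering, so it overstates power relative to an LPM with clustered standard errors. All parameter values other than the 0.032 effect, 28 rounds, and α = 0.05 are assumptions.

```python
import random

def simulate_power(n_participants, effect=0.032, n_rounds=28,
                   n_sims=100, seed=1):
    """Monte-Carlo power for detecting a difference in selection rates
    between two attribute levels, via a pooled two-proportion z-test
    (a simplified stand-in for the registered LPM; no clustering)."""
    rng = random.Random(seed)
    rejections = 0
    n_obs = n_participants * n_rounds  # choices per attribute level
    for _ in range(n_sims):
        # Penalised variants are chosen with probability 0.5 - effect/2,
        # favoured variants with probability 0.5 + effect/2.
        low = sum(rng.random() < 0.5 - effect / 2 for _ in range(n_obs))
        high = sum(rng.random() < 0.5 + effect / 2 for _ in range(n_obs))
        p1, p2 = low / n_obs, high / n_obs
        pooled = (low + high) / (2 * n_obs)
        se = (2 * pooled * (1 - pooled) / n_obs) ** 0.5
        if abs(p2 - p1) / se > 1.96:  # two-sided test at alpha = 0.05
            rejections += 1
    return rejections / n_sims

# Power at the registered sample size (optimistic, since clustering is ignored).
power_n400 = simulate_power(400)
```

In practice one would sweep `n_participants` over a grid and pick the smallest value whose simulated power clears 90%, as the registration describes.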
IRB

Institutional Review Boards (IRBs)

IRB Name
Director of Research of the Faculty of Economics of University of Cambridge
IRB Approval Date
2025-01-22
IRB Approval Number
UCAM-FoE-25-01