Increasing consumer surplus through a novel product testing mechanism

Last registered on December 07, 2020

Pre-Trial

Trial Information

General Information

Title
Increasing consumer surplus through a novel product testing mechanism
RCT ID
AEARCTR-0006685
First published
October 30, 2020, 9:05 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
December 07, 2020, 9:53 AM EST

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
University of Michigan

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2017-01-10
End date
2020-11-24
Secondary IDs
Abstract
Our study proposes a novel mechanism to reduce information asymmetry about product quality between buyers and sellers. Product testing organizations like Consumer Reports (US) and Stiftung Warentest (Germany) seek to reduce this asymmetry by providing credible information. However, limited capacity leads to testing of only a select number of product models, often bestsellers, which can yield suboptimal information. After outlining our mechanism, we develop a game to derive testable predictions. We show theoretically that a unique Nash equilibrium exists in which our mechanism yields optimal information, equivalent to a world of complete information, while selecting bestsellers does not. Subsequently, we confirm experimentally that our mechanism increases consumer surplus.
External Link(s)

Registration Citation

Citation
Vollstaedt, Ulrike. 2020. "Increasing consumer surplus through a novel product testing mechanism." AEA RCT Registry. December 07. https://doi.org/10.1257/rct.6685-3.0
Former Citation
Vollstaedt, Ulrike. 2020. "Increasing consumer surplus through a novel product testing mechanism." AEA RCT Registry. December 07. https://www.socialscienceregistry.org/trials/6685/history/81048
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2017-01-10
Intervention End Date
2018-12-14

Primary Outcomes

Primary Outcomes (end points)
consumer surplus
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We design four experimental treatments. The first two represent currently used product model selection mechanisms (called BESTSELLERS); the latter two represent our new product model selection mechanism (called SELLERSAPPLY).

Experimental Design Details
BESTSELLERS-WORSTCASE: To model a scenario in which the market functions extremely poorly, we design a worst-case scenario regarding bestselling product models, i.e., the bestsellers are the product models vertically furthest from the globally non-dominated ones.

BESTSELLERS-RANDOM: We also design an intermediate bestseller scenario in which bestsellers are chosen randomly among all product models. We include this treatment to investigate whether our new mechanism outperforms chance.

SELLERSAPPLY-LYINGPOSS(IBLE): This treatment represents the scenario in which sellers may apply for testing and may provide a false quality. While the option of providing a false quality does not change the equilibrium predictions, it makes the SELLERSAPPLY mechanism more complex. We therefore consider it important to investigate this treatment in the lab.

SELLERSAPPLY-TRUTH: This treatment represents the scenario in which sellers may apply for testing and are not allowed to provide a false quality.
Randomization Method
randomization done by a computer
Randomization Unit
experimental session as the first level of randomization (which treatment a session receives); individual as the second level (whether a person participates as a seller or a buyer, and with which ID)
Was the treatment clustered?
Yes
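
The registration does not include the randomization code. The following Python sketch only illustrates the two-level procedure described above, under assumptions not stated in the registration: 25 sessions split 5/5/5/10 across the four treatments, 23 participants per session, and a hypothetical number of sellers per session; all names are illustrative.

import random

# Level 1: session counts per treatment taken from the planned sample sizes below.
TREATMENTS = (
    ["BESTSELLERS-WORSTCASE"] * 5
    + ["BESTSELLERS-RANDOM"] * 5
    + ["SELLERSAPPLY-LYINGPOSS"] * 5
    + ["SELLERSAPPLY-TRUTH"] * 10
)

def randomize_sessions(seed=1):
    """Randomly assign each experimental session to one treatment."""
    rng = random.Random(seed)
    treatments = list(TREATMENTS)
    rng.shuffle(treatments)
    return {f"session_{i + 1:02d}": t for i, t in enumerate(treatments)}

def randomize_roles(n_participants, n_sellers, seed=1):
    """Within a session, randomly assign each participant a role and an ID."""
    rng = random.Random(seed)
    roles = ["seller"] * n_sellers + ["buyer"] * (n_participants - n_sellers)
    rng.shuffle(roles)
    # IDs are assigned by position after shuffling.
    return {participant_id: role for participant_id, role in enumerate(roles, start=1)}

if __name__ == "__main__":
    print(randomize_sessions())
    # 23 participants per session and 8 sellers are assumed values for illustration only.
    print(randomize_roles(n_participants=23, n_sellers=8))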

Experiment Characteristics

Sample size: planned number of clusters
25 experimental sessions
Sample size: planned number of observations
575 participants
Sample size (or number of clusters) by treatment arms
Bestsellers-WorstCase: 5 experimental sessions, Bestsellers-Random: 5 experimental sessions, SellersApply-LyingPoss: 5 experimental sessions, SellersApply-Truth: 10 experimental sessions
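
Note: the per-arm session counts sum to the planned number of clusters (5 + 5 + 5 + 10 = 25 experimental sessions); with 575 planned participants, this implies an average of 575 / 25 = 23 participants per session, assuming roughly equal session sizes.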

Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials