AI-Assisted Decision-Making: An Experimental Study of Bargaining in the Second-Hand Market

Last registered on February 04, 2026

Pre-Trial

Trial Information

General Information

Title
AI-Assisted Decision-Making: An Experimental Study of Bargaining in the Second-Hand Market
RCT ID
AEARCTR-0017607
Initial registration date
February 01, 2026


First published
February 04, 2026, 10:08 AM EST


Locations

Not available

Primary Investigator

Affiliation
Hunan University

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2025-12-30
End date
2026-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study whether generative AI improves bargaining outcomes in real second-hand market transactions. We conduct a randomized field experiment on Xianyu in which participants, acting as buyers, contact real sellers of listed items. A total of 2,000 buyer–seller contacts are randomly assigned to one of four experimental conditions. Primary outcomes include successful bargaining, the final negotiated price, and the bargaining discount relative to the posted price. The study integrates AI-assisted decision-making with the classic bargaining and negotiation problem in economics and behavioral science, and provides randomized controlled evidence from a real-world platform setting. It helps clarify the relative importance of information supplementation versus strategy substitution, and examines the gains and losses from AI-generated content in interactive games through the lenses of algorithm aversion and human–AI collaboration.
External Link(s)

Registration Citation

Citation
Deng, Weiguang. 2026. "AI-Assisted Decision-Making: An Experimental Study of Bargaining in the Second-Hand Market." AEA RCT Registry. February 04. https://doi.org/10.1257/rct.17607-1.0
Experimental Details

Interventions

Intervention(s)
A randomized controlled field experiment is conducted on the Xianyu second-hand marketplace. The experiment assigns real buyer–seller contacts to different conditions that vary the role of generative AI in the bargaining process, with the interventions designed to manipulate the degree to which AI supports decision-making and communication (e.g., informational support versus strategy support, and the extent of human involvement). Participants act as buyers, contact real sellers, and negotiate under standardized rules with a fixed maximum number of bargaining rounds and pre-specified stopping rules. The study covers multiple product keyword categories that differ in price transparency to examine how AI-assisted bargaining performs across market environments.
Intervention Start Date
2025-12-30
Intervention End Date
2026-04-30

Primary Outcomes

Primary Outcomes (end points)
1. Effective bargaining (indicator) — whether the interaction results in a meaningful price concession or a negotiated price.
2. Bargaining discount (share) — the percentage discount relative to the posted price.
Primary Outcomes (explanation)
1. Effective bargaining (indicator)
Coded 1 if, within the allowed rounds, the seller either (i) explicitly agrees to a lower price than the posted price, or (ii) proposes a counteroffer/“acceptable price” that reflects a price concession. Coded 0 if there is no price concession and no negotiated price (e.g., seller refuses to move on price / insists on posted price).
2. Bargaining discount (share)
Constructed as:
Bargaining discount = (Posted price − Final negotiated price) / Posted price.
This is defined as 0 when the final negotiated price equals the posted price (i.e., no effective bargaining).
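As a quick illustration, the discount construction above can be sketched as follows (a minimal sketch with hypothetical prices; the function name is ours, not part of the study's protocol):

```python
def bargaining_discount(posted_price: float, final_price: float) -> float:
    """Share of the posted price conceded by the seller.

    Returns 0 when the final negotiated price equals (or exceeds)
    the posted price, i.e., no effective bargaining occurred.
    """
    if final_price >= posted_price:
        return 0.0
    return (posted_price - final_price) / posted_price

# Hypothetical example: an item posted at 200 that sells for 170
# yields a bargaining discount of 0.15 (a 15% concession).
print(bargaining_discount(200, 170))  # 0.15
print(bargaining_discount(100, 100))  # 0.0
```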

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
A randomized controlled field experiment is conducted on the Xianyu second-hand marketplace. The unit of randomization is an individual buyer–seller contact (a specific listing contacted by a participant). Contacts are assigned to different conditions that vary the role of generative AI in the bargaining process (e.g., whether AI provides informational support and/or assists with drafting messages, and the degree of human involvement). Bargaining follows a standardized protocol with a fixed maximum number of bargaining rounds and pre-specified stopping rules. The study includes multiple product keyword categories that differ in price transparency to examine whether treatment effects vary across market environments.
Experimental Design Details
Not available
Randomization Method
Computer-based randomization conducted by the research team using a random number generator (e.g., a scripted random assignment tool). Randomization is implemented within each participant × keyword block: for each keyword, the participant selects four eligible listings randomly, and the four experimental conditions (T1–T4) are assigned to these four listing-contacts without replacement (each condition used exactly once within the block).
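The within-block assignment described above can be sketched as follows (an illustrative stand-in for the team's scripted tool, not the study's actual code; names and the seeding convention are our assumptions):

```python
import random

CONDITIONS = ["T1", "T2", "T3", "T4"]

def assign_block(listing_ids, seed=None):
    """Randomly assign the four conditions to the four listings in one
    participant x keyword block, each condition used exactly once
    (assignment without replacement)."""
    if len(listing_ids) != len(CONDITIONS):
        raise ValueError("each block must contain exactly four listings")
    rng = random.Random(seed)  # seeded for reproducibility
    shuffled = CONDITIONS[:]
    rng.shuffle(shuffled)
    return dict(zip(listing_ids, shuffled))

# Example block: one participant, one keyword, four eligible listings.
print(assign_block(["listing_a", "listing_b", "listing_c", "listing_d"], seed=42))
```

Because each block exhausts the four conditions, the design is balanced by construction within every participant × keyword cell.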
Randomization Unit
The unit of randomization is an individual buyer–seller contact (i.e., a specific listing contacted by a participant). Randomization is blocked at the participant × keyword level (within each participant and keyword, treatments are assigned across the four contacted listings).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
100 participants (buyers).
Sample size: planned number of observations
2,000 buyer–seller contacts / contacted listings (individual listing-contacts).
Sample size (or number of clusters) by treatment arms
500 listing-contacts in each arm (total 2,000):
T1 (Human-only): 500 listing-contacts
T2 (AI information only): 500 listing-contacts
T3 (AI information + AI messages, no edits): 500 listing-contacts
T4 (AI information + AI messages + human edits): 500 listing-contacts
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number