Do Generative AI Chatbots Increase Dominant-Strategy Play in Matching Mechanisms?

Last registered on January 28, 2026

Trial Information

General Information

Title
Do Generative AI Chatbots Increase Dominant-Strategy Play in Matching Mechanisms?
RCT ID
AEARCTR-0017736
Initial registration date
January 21, 2026

First published
January 28, 2026, 6:53 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
The University of Tokyo

Other Primary Investigator(s)

PI Affiliation
The University of Tokyo

Additional Trial Information

Status
In development
Start date
2026-01-30
End date
2026-03-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A central challenge in mechanism design is that strategy-proof or otherwise equilibrium-tractable mechanisms may still fail in practice when participants do not understand the incentive structure. In matching environments, truthful reporting is often observed, yet it can reflect a focal default rather than sophisticated dominant-strategy play. This study tests whether generative AI chatbots can reduce mechanism understanding costs and thereby increase dominant-strategy play in matching mechanisms, including cases in which truth-telling is not optimal. We conduct a 2×2 laboratory experiment that cross-randomizes the assignment rule (Deferred Acceptance vs. Reverse Deferred Acceptance) and the availability of a generative AI chatbot (present vs. absent). Participants make a one-shot preference submission in a school-choice environment with priorities and random tie-breaking; they interact with computerized counterparts, and payoffs depend on the assigned school. Instruction time and on-screen layout are held constant across conditions. In the AI condition, participants may ask questions in natural language, and the chatbot is constrained to support comprehension of the provided instructions without recommending specific actions or revealing meta-level properties of the mechanisms. Our primary outcome is dominant-strategy play, defined as truthful reporting under Deferred Acceptance and reverse-truthful reporting under Reverse Deferred Acceptance. We compare dominant-strategy play between the AI-available and AI-unavailable conditions within each mechanism, and estimate treatment effects using the assigned availability of the chatbot. Secondary outcomes include comprehension test performance, decision times, and usage logs, as well as post-experimental measures of perceived demand effects. 
The design directly tests whether access to generative AI assistance shifts behavior toward dominant strategies even when the dominant strategy deviates from truthful reporting, helping distinguish defaults from comprehension as drivers of equilibrium (non)play.
External Link(s)

Registration Citation

Citation
Kitadai, Ayato and Nariaki Nishino. 2026. "Do Generative AI Chatbots Increase Dominant-Strategy Play in Matching Mechanisms?." AEA RCT Registry. January 28. https://doi.org/10.1257/rct.17736-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-01-30
Intervention End Date
2026-03-31

Primary Outcomes

Primary Outcomes (end points)
Dominant-strategy play (DSPlay): Indicator variable equal to 1 if the participant submits the dominant-strategy rank-order list under the assigned mechanism (DA or RDA), and 0 otherwise.
Primary Outcomes (explanation)
DSPlay is constructed from the submitted rank-order list and the participant’s induced preference ordering implied by the displayed payoff table. We define the induced preference ranking as the ordering of schools by the participant-specific monetary payoffs (highest to lowest), allowing arbitrary ordering among tied payoffs.

Under Deferred Acceptance (DA), DSPlay = 1 if the submitted list is consistent with the induced ranking.
Under Reverse Deferred Acceptance (RDA), DSPlay = 1 if the submitted list is consistent with the reverse of the induced ranking.
If no rank-order list is submitted by the decision-stage deadline, DSPlay is coded as 0. To distinguish non-submission from submitted non-dominant play, we pre-specify decomposition measures in the Secondary outcomes section.
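
The coding rule above can be sketched in a few lines. This is an illustrative reconstruction of the stated definition, not the study's analysis code; the function name, data shapes, and inputs are assumptions consistent with the definitions given here.

```python
def ds_play(submitted, payoffs, mechanism):
    """Code the DSPlay indicator.

    submitted: rank-order list of school IDs (best first), or None if no submission.
    payoffs:   dict mapping school ID -> participant-specific monetary payoff.
    mechanism: "DA" or "RDA".
    """
    if submitted is None:  # non-submission is coded as 0
        return 0
    pos = {school: i for i, school in enumerate(submitted)}
    for a in payoffs:
        for b in payoffs:
            if payoffs[a] > payoffs[b]:
                before = pos[a] < pos[b]
                # DA: a strictly better-paying school must be ranked earlier;
                # RDA: a strictly better-paying school must be ranked later.
                if (mechanism == "DA" and not before) or (mechanism == "RDA" and before):
                    return 0
    return 1
```

Because the induced ranking allows arbitrary ordering among tied payoffs, the check only constrains pairs of schools with strictly different payoffs, so any ordering of equal-payoff schools is consistent with dominant-strategy play.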

Secondary Outcomes

Secondary Outcomes (end points)
1. Dominant-strategy play conditional on submission (DSPlay_Submit): Indicator equal to 1 if the participant submits a list by the deadline and the submitted list matches the dominant strategy under the assigned mechanism; 0 otherwise. Defined among submitters only.

2. Comprehension quiz accuracy (QuizCorrect): Indicator equal to 1 if the incentivized comprehension quiz is answered correctly within the quiz time limit; 0 otherwise.

3. Decision time (DecisionTime): Time in seconds from entering the preference-submission screen to final submission.

4. AI usage measures (AIUseAny, AINumMsg, AITotalChars or AITotalTokens): An indicator for any chatbot use, the number of participant messages, and the total characters (or tokens) sent to the chatbot (AI-available arm only).

5. Perceived experimenter demand (DemandFeel, DemandText): DemandFeel is a categorical response (Yes, No, Unsure) to whether the participant felt the experimenter expected a particular behavior; DemandText is a free-response explanation collected if Yes or Unsure.

6. Perceived influence of the right-side panel (PanelInfluence, PanelReason): PanelInfluence is a 5-point scale on whether the right-side panel affected understanding or judgment; PanelReason is a free-response explanation.

7. Prior knowledge of DA (PriorDA): Post-experimental survey response to “Before participating in this experiment, did you already know the Deferred Acceptance (DA) algorithm (or Gale-Shapley school choice mechanism)?” Response options: Yes, No, Unsure.
Secondary Outcomes (explanation)
1. DSPlay_Submit
DSPlay_Submit isolates dominant-strategy compliance among those who submit and is computed using the same dominant-strategy definition as the primary outcome DSPlay, restricted to participants with Submit = 1. We will report DSPlay together with this decomposition to clarify whether effects operate through submission behavior, conditional compliance, or both.

2. QuizCorrect
QuizCorrect is based on an incentivized quiz in which participants compute the final assignment for a worked example given priorities and a lottery order. QuizCorrect = 1 if the participant’s entered final assignment exactly matches the allocation implied by the stated procedure and the information shown on the quiz page; otherwise 0. If no answer is submitted within the quiz time limit, QuizCorrect = 0.

3. Decision Time
DecisionTime is recorded automatically as elapsed time in seconds from when the participant first enters the preference-submission screen to when they submit their rank-order list. If no submission occurs before the deadline, DecisionTime is set to the time limit and NonSubmit = 1.

4. AI usage measures
AIUseAny equals 1 if at least one message is sent to the chatbot. AINumMsg is the count of participant messages. AITotalChars is the sum of characters in participant messages (or AITotalTokens if token counts are available from the logging system). These are defined only for participants assigned to AI availability.

5. Experimenter demand
DemandFeel and DemandText capture perceived experimenter demand. These measures will be used to assess whether experimenter demand effects may have influenced behavior.

6. Panel influence
PanelInfluence and PanelReason capture perceived influence of the right-side interface element on understanding or judgment.

7. PriorDA
We will use PriorDA primarily as a pre-specified moderator for heterogeneity analyses: we will estimate treatment-effect heterogeneity by interacting treatment assignment with indicators for the PriorDA categories.

Experimental Design

Experimental Design
2×2 between-subjects factorial experiment. Factor A is mechanism (DA vs. RDA). Factor B is AI assistance (AI access vs. no AI). Participants make a one-shot ranking decision over four schools. Each participant is matched with three computer agents. Priorities are determined by residency (local vs. non-local) and a uniform random lottery among non-residents, displayed to participants. Payoffs depend on the assigned school slot via participant-specific payoff tables.
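
For reference, the student-proposing Deferred Acceptance procedure with priority ranks (e.g., residents first, then lottery order among non-residents) can be sketched as follows. This is a generic illustration under assumed data shapes, not the experiment's software; the function and variable names are hypothetical.

```python
def deferred_acceptance(prefs, priority_rank, capacity):
    """Student-proposing Deferred Acceptance.

    prefs:         dict student -> ordered list of schools (best first).
    priority_rank: dict school -> dict student -> rank (lower = higher priority),
                   e.g., residents first, then lottery order among non-residents.
    capacity:      dict school -> number of seats.
    Returns a dict mapping each matched student to a school.
    """
    next_choice = {s: 0 for s in prefs}   # index of each student's next proposal
    held = {c: [] for c in capacity}      # tentatively held students per school
    unassigned = set(prefs)
    while unassigned:
        s = unassigned.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                      # exhausted list: stays unmatched
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: priority_rank[c][x])
        if len(held[c]) > capacity[c]:
            rejected = held[c].pop()      # lowest-priority student is bumped
            unassigned.add(rejected)
    return {s: c for c, students in held.items() for s in students}
```

The final matching is independent of the order in which unassigned students propose, which is why the tentative-acceptance loop above can process students in arbitrary order.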
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
200 individuals
Sample size: planned number of observations
200 individuals
Sample size (or number of clusters) by treatment arms
50 individuals per treatment arm (4 arms)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
The University of Tokyo, Graduate School of Engineering Research Ethics Committee
IRB Approval Date
2026-01-09
IRB Approval Number
KE25-141