The Conformity Effects of AI-Generated Advice on Individual Choice

Last registered on February 04, 2026

Pre-Trial

Trial Information

General Information

Title
The Conformity Effects of AI-Generated Advice on Individual Choice
RCT ID
AEARCTR-0017799
Initial registration date
January 30, 2026

First published
February 04, 2026, 10:01 AM EST

Locations

Information on trial locations is not publicly available.

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Status
In development
Start date
2026-02-02
End date
2026-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how advice from artificial intelligence influences human image-recognition decisions. We conduct a 2×2 experiment that varies (i) the source of advice (human vs. AI) and (ii) the number of advisors (single vs. multiple). Participants first make an individual image-recognition judgment and then receive advice from one human, multiple humans, one AI system, or multiple AI systems. We test whether and how participants revise their decisions in response to these different advice conditions. We use four images known to be prone to AI hallucination-like errors, which allows us to study advice effects on a challenging image-recognition task. We plan to conduct both a laboratory experiment and a web-based (online) experiment.
External Link(s)

Registration Citation

Citation
Kitadai, Ayato and Yasuhiro Nakamoto. 2026. "The Conformity Effects of AI-Generated Advice on Individual Choice." AEA RCT Registry. February 04. https://doi.org/10.1257/rct.17799-1.0
Experimental Details

Interventions

Intervention(s)
Participants view images (e.g., pictures containing apples and other objects) for approximately 10 seconds. After each image, they are asked to choose, from multiple-choice options, how many target objects (e.g., apples) were shown. Participants then receive advice about the number of objects, provided by either human or AI advisors (single or multiple, depending on the assigned condition), before giving a final answer.
Intervention Start Date
2026-02-02
Intervention End Date
2026-02-28

Primary Outcomes

Primary Outcomes (end points)
Choice revision (binary): whether a participant changes their initially selected answer option after receiving advice (AI vs. human; single vs. multiple advisors).
Primary Outcomes (explanation)
We construct the primary outcome variable ChoiceRevision as an indicator equal to 1 if the participant’s final answer (after receiving advice) differs from their initial answer (before advice), and 0 otherwise.
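For concreteness, a minimal sketch of how this indicator might be constructed from response data is shown below; the data frame and its column names (participant_id, image_id, initial_answer, final_answer) are illustrative assumptions and not part of the registered protocol.

    # Illustrative sketch: constructing the ChoiceRevision indicator.
    # Column names and example values are hypothetical, not from the actual data.
    import pandas as pd

    responses = pd.DataFrame({
        "participant_id": [1, 1, 2, 2],
        "image_id":       ["img1", "img2", "img1", "img2"],
        "initial_answer": [3, 5, 4, 5],   # option chosen before advice
        "final_answer":   [3, 6, 4, 4],   # option chosen after advice
    })

    # ChoiceRevision = 1 if the final answer differs from the initial answer, else 0.
    responses["ChoiceRevision"] = (
        responses["final_answer"] != responses["initial_answer"]
    ).astype(int)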

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct a 2×2 between-subjects experiment that varies (i) the source of advice (human vs. AI) and (ii) the number of advisors (single vs. multiple). Participants are randomly assigned to one of four conditions. In each task, participants view an image for approximately 10 seconds and then select, from multiple-choice options, the number of target objects shown. After making an initial judgment, participants receive advice corresponding to their assigned condition and then provide a final answer. The primary outcome is whether participants revise their initial choice after receiving advice.
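As a concrete illustration of the factorial structure, the four treatment arms can be generated as the cross of the two factors; the labels below are our own shorthand, not the wording shown to participants.

    # Illustrative sketch of the 2x2 between-subjects design.
    # Arm labels are shorthand, not the experiment's actual wording.
    from itertools import product

    advice_sources = ["human", "AI"]          # factor (i): source of advice
    advisor_numbers = ["single", "multiple"]  # factor (ii): number of advisors

    arms = [f"{source}-{number}"
            for source, number in product(advice_sources, advisor_numbers)]
    # -> ['human-single', 'human-multiple', 'AI-single', 'AI-multiple']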
Experimental Design Details
Not available
Randomization Method
Random assignment to the four treatment arms is carried out by the research team in the office.
Randomization Unit
Session-level assignment: the experimental condition is set for each session (all participants in the same session receive the same condition). Within each session, the order of the four images is randomized for each participant by the experiment software.
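A minimal sketch of this two-level scheme is given below, assuming roughly 12 sessions and the four arms of the 2×2 design; the session identifiers, image names, and seed are illustrative, and the actual assignment is carried out by the research team and the experiment software rather than by this code.

    # Illustrative sketch: session-level condition assignment plus
    # per-participant image-order randomization. All identifiers are hypothetical.
    import random

    rng = random.Random(2026)  # arbitrary seed, only to make the sketch reproducible

    arms = ["human-single", "human-multiple", "AI-single", "AI-multiple"]
    sessions = [f"S{i:02d}" for i in range(1, 13)]  # about 12 laboratory sessions

    # Session-level assignment: all participants in a session get the same arm,
    # with sessions spread evenly across arms (about 3 sessions per arm).
    shuffled_sessions = rng.sample(sessions, len(sessions))
    session_arm = {session: arms[i % len(arms)]
                   for i, session in enumerate(shuffled_sessions)}

    images = ["image1", "image2", "image3", "image4"]

    def image_order_for_participant():
        """Return an independent random ordering of the four images."""
        order = images.copy()
        rng.shuffle(order)
        return order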
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
about 12 laboratory sessions (about 20 participants per session; total N=240)
Sample size: planned number of observations
240 participants in the laboratory experiment and 200–500 participants in the web-based (online) experiment.
Sample size (or number of clusters) by treatment arms
Laboratory experiment: 60 participants per arm (4 arms; total N=240).
Web-based experiment: 50–100 participants per arm (4 arms; total N=200–400).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With 60 participants per arm (two-sided α=0.05, 80% power), the MDE for the binary outcome is approximately 23–24 percentage points (e.g., 30% vs 53–54%).
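The stated MDE can be reproduced with a standard two-sample proportions power calculation; the sketch below uses statsmodels and assumes a 30% baseline revision rate, following the example in the text (the exact figure depends on the approximation used).

    # Illustrative check of the stated MDE: two-sided alpha = 0.05, power = 0.80,
    # 60 participants per arm, assumed baseline revision rate of 30%.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline = 0.30
    analysis = NormalIndPower()

    # Increase the alternative proportion until 80% power is reached.
    p_alt = baseline
    while True:
        p_alt += 0.001
        effect = proportion_effectsize(p_alt, baseline)  # Cohen's h
        power = analysis.power(effect_size=effect, nobs1=60, ratio=1.0, alpha=0.05)
        if power >= 0.80:
            break

    print(f"MDE ~ {p_alt - baseline:.3f} ({baseline:.0%} vs {p_alt:.0%})")
    # Yields a difference of roughly 24-25 percentage points, broadly in line with
    # the registered 23-24 pp (the exact value depends on the approximation used).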
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board (IRB), Faculty of Informatics, Kansai University
IRB Approval Date
2026-01-29
IRB Approval Number
2025-43