AI pre-consultation support in outpatient clinics

Last registered on January 06, 2026

Pre-Trial

Trial Information

General Information

Title
AI pre-consultation support in outpatient clinics
RCT ID
AEARCTR-0017494
Initial registration date
December 29, 2025

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 06, 2026, 6:39 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial that is not available to the public.

Primary Investigator

Affiliation
Cornell University

Other Primary Investigator(s)

PI Affiliation
Cornell University
PI Affiliation
The Chinese University of Hong Kong

Additional Trial Information

Status
Ongoing
Start date
2025-12-15
End date
2026-04-03
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Artificial intelligence (AI) systems, particularly large language model (LLM)-based applications, are increasingly integrated into frontline healthcare workflows to facilitate documentation and clinical decision-making. However, empirical evidence regarding their impact on operational efficiency and the quality of medical decision-making in real-world environments remains scarce. This study examines the implementation of an AI-assisted pre-consultation tool in specialty outpatient clinics. The tool structures patient intake information specific to the clinician's specialty and generates content suitable for inclusion in electronic medical records (EMRs). In one experimental condition, the system produces structured summaries encompassing the chief complaint, history of present illness, and prior diagnostic tests. In a second condition, the AI additionally provides a proposed diagnosis and treatment plan by drafting a complete EMR note. Physicians may choose to accept, modify, or disregard the AI-generated output in favor of composing their own notes. The research employs a randomized controlled field experiment in China, comparing the two AI-assisted arms with a control group that uses standard intake procedures. Outcomes assessed include consultation duration, patient throughput, and the potential for AI-generated reports to induce behavioral distortions in physician decision-making.
External Link(s)

Registration Citation

Citation
Chitla, Sandeep, Yao Cui and Qi Li. 2026. "AI pre-consultation support in outpatient clinics." AEA RCT Registry. January 06. https://doi.org/10.1257/rct.17494-1.0
Experimental Details

Interventions

Intervention(s)
In both treatment arms, patients complete a structured pre-consultation intake on a tablet prior to seeing the physician. The AI tool generates electronic medical record (EMR) content that the physician may use as is, edit, or discard and document independently.
Treatment Arm 1 (AI triage and summarization): The system generates an EMR-ready structured summary of the chief complaint, history of present illness, relevant past history, and prior tests.
Treatment Arm 2 (AI summarization plus diagnostic): In addition to the summary, the system generates a preliminary diagnostic assessment and plan of action by drafting the full EMR note.
Control Arm (standard workflow): Patients follow the usual outpatient intake and consultation process without AI-generated pre-consultation content.
Intervention Start Date
2025-12-15
Intervention End Date
2026-04-03

Primary Outcomes

Primary Outcomes (end points)
Consultation time; diagnostic accuracy; physician reliance on AI.
Primary Outcomes (explanation)
Consultation time: minutes of physician consultation per visit. Diagnostic accuracy: whether the physician's final diagnosis matches a reference diagnosis provided by an expert. Physician AI reliance: whether the physician uses, edits, or discards the AI draft, and the extent of edits.
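The registration does not specify how the extent of edits is quantified. One possible operationalization, offered only as an illustrative sketch, is a text-similarity ratio between the AI-generated draft and the physician's final note; the metric choice and the example note text below are assumptions, not part of the registered measurement plan.

```python
import difflib

def edit_extent(ai_draft: str, final_note: str) -> float:
    """Return 1 - similarity ratio: 0.0 means the AI draft was kept essentially
    verbatim, values near 1.0 mean the physician largely rewrote or replaced it."""
    similarity = difflib.SequenceMatcher(None, ai_draft, final_note).ratio()
    return 1.0 - similarity

# Example with hypothetical note text (not actual study data).
print(edit_extent("Chief complaint: cough for 3 days.",
                  "Chief complaint: productive cough for 3 days, no fever."))
```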

Secondary Outcomes

Secondary Outcomes (end points)
Patient throughput; downstream orders.
Secondary Outcomes (explanation)
Patient throughput: completed visits per clinic session (or per physician-hour); Downstream orders: tests, imaging, prescriptions, referrals, follow-ups.

Experimental Design

Experimental Design
This study is conducted in specialty outpatient clinics in China. Patients complete a tablet-based pre-consultation intake before seeing the physician. The study includes one control arm (standard workflow) and two treatment arms in which the same intake generates AI-drafted EMR content: (1) triage and summarization support, and (2) summarization plus diagnostic support that drafts the complete EMR note. Physicians may use, edit, or discard the AI output.
Experimental Design Details
Not available
Randomization Method
Individual random assignment at the patient-visit level.
Randomization Unit
Individual outpatient visit.
Was the treatment clustered?
No
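The registration specifies only that assignment is randomized at the individual visit level. As a minimal illustrative sketch (not the registered assignment mechanism), visit-level randomization into the three arms could use permuted blocks; the block size, seed, and arm labels below are assumptions.

```python
# Illustrative sketch only: visit-level randomization into three arms.
# Block size, seed, and arm labels are assumptions, not the registered protocol.
import random

ARMS = ["control", "ai_summary", "ai_summary_plus_dx"]

def make_assigner(block_size=6, seed=2025):
    """Yield arm labels using permuted blocks so arms stay roughly balanced."""
    assert block_size % len(ARMS) == 0
    rng = random.Random(seed)
    block = []
    while True:
        if not block:
            block = ARMS * (block_size // len(ARMS))
            rng.shuffle(block)
        yield block.pop()

# Example: assign the next ten outpatient visits as they check in.
assigner = make_assigner()
assignments = [next(assigner) for _ in range(10)]
print(assignments)
```

Permuted blocks simply keep the three arms roughly balanced as visits accumulate; simple unblocked randomization would equally satisfy the registered design.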

Experiment Characteristics

Sample size: planned number of clusters
Not clustered (individual randomization at the outpatient visit level).
Sample size: planned number of observations
Approximately 150 outpatient visits. We expect approximately 5 to 15 eligible patients per week and plan to run the experiment for 2 to 3 months. The experiment may be extended if hospital staffing and operational feasibility allow.
Sample size (or number of clusters) by treatment arms
Approximately 50 outpatient visits in the control arm, approximately 50 in Treatment Arm 1 (AI triage and summarization), and approximately 50 in Treatment Arm 2 (AI summarization plus diagnostic).
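As a purely illustrative check of what these per-arm counts imply, a conventional two-sample power calculation for one pairwise comparison (one treatment arm versus control, roughly 50 visits each) can be sketched as follows; the significance level, power target, and standardized-effect framing are assumptions, not registered values.

```python
# Illustrative only: minimum detectable effect (in standard-deviation units)
# for one treatment arm vs. control with ~50 visits per arm.
# alpha = 0.05 and power = 0.80 are assumptions, not registered parameters.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
mde = analysis.solve_power(nobs1=50, alpha=0.05, power=0.80, ratio=1.0,
                           alternative="two-sided")
print(f"Minimum detectable effect: {mde:.2f} SD per pairwise comparison")
```

Under these assumed parameters the detectable effect is roughly 0.57 standard deviations per pairwise comparison; the minimum detectable effects for consultation time and diagnostic accuracy in natural units depend on outcome variances not reported here.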
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board of the Chinese PLA General Hospital
IRB Approval Date
2025-07-31
IRB Approval Number
S2022-255-04