Abstract
Artificial intelligence (AI) systems, particularly large language model (LLM)-based applications, are increasingly integrated into frontline healthcare workflows to support documentation and clinical decision-making. However, empirical evidence on their impact on operational efficiency and the quality of medical decision-making in real-world settings remains scarce. This study examines the implementation of an AI-assisted pre-consultation tool in specialty outpatient clinics. The tool structures patient intake information specific to the clinician's specialty and generates content suitable for inclusion in electronic medical records (EMRs). In one experimental condition, the system produces structured summaries covering the chief complaint, history of present illness, and prior diagnostic tests. In a second condition, the AI drafts a complete EMR note that additionally includes a proposed diagnosis and treatment plan. Physicians may accept, modify, or disregard the AI-generated output and compose their own notes instead. The study uses a randomized controlled field experiment in China, comparing the two AI-assisted arms against a control group that follows standard intake procedures. Outcomes assessed include consultation duration, patient throughput, and the potential for AI-generated reports to induce behavioral distortions in physician decision-making.