AI as Doctors

Pre-Trial

Trial Information

General Information

Title
AI as Doctors
RCT ID
AEARCTR-0017897
Initial registration date
February 16, 2026

First published
February 19, 2026, 7:26 AM EST

Last updated
February 19, 2026, 1:36 PM EST

Locations

Location information in this trial is not available to the public and may be requested through the Registry.

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
Harvard University

Additional Trial Information

Status
In development
Start date
2026-02-14
End date
2027-01-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Recent advances in artificial intelligence (AI) offer opportunities to improve healthcare delivery through AI-driven primary care consultations. However, patient acceptance remains uncertain. To inform healthcare policy and insurance benefit design, it is essential to understand the conditions under which patients would accept AI consultations instead of traditional doctor visits, especially the financial compensation or savings necessary to encourage adoption.


This study aims to:
1. Measure patients' willingness-to-accept (WTA) compensation for switching from traditional doctor visits to AI-based consultations for primary care.
2. Identify how WTA varies based on AI and doctor attributes, including:
   - price differences
   - wait time differences
   - medical liability of the AI for errors
   - the AI's ability to prescribe medications
   - the AI's clinical diagnostic performance compared to doctors (above-average or top 10% on U.S. medical licensing exams)
   - familiarity with the doctor (regular vs. new)
3. Assess how these valuations vary across different types of healthcare situations.
External Link(s)

Registration Citation

Citation
Chan, Alex and David Cutler. 2026. "AI as Doctors." AEA RCT Registry. February 19. https://doi.org/10.1257/rct.17897-1.1
Experimental Details

Interventions

Intervention(s)
The study is conducted as an online survey experiment on Prolific. Participants are presented with hypothetical scenarios about AI systems and asked to indicate their preferences. We vary attributes of the AI and doctor alternatives and elicit preferences using a multiple price list design.
Intervention Start Date
2026-02-16
Intervention End Date
2026-07-31

Primary Outcomes

Primary Outcomes (end points)
Willingness to pay, elicited through binary choices in a multiple price list and used to infer the outcomes of interest, namely valuations of the AI and doctor attributes (price, waiting time, ability to prescribe, etc.). For example, a respondent who accepts the AI consultation at a $30 saving but rejects it at $20 reveals a valuation bracketed between $20 and $30.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
See attached Qualtrics file for the full design and randomization.

Participants are randomized to arms in which some attributes of the AI and doctor alternatives are held fixed (e.g., whether the doctor is a new doctor or the respondent's usual doctor). They then make a series of binary choices through an adaptive multiple price list (MPL) that locates their switching point in price.

*This MPL design differs from a standard MPL in two ways: it starts from a randomly drawn price level, and it presents choices one at a time as binary choices rather than as a full list (addressing the criticism that participants facing a full list may be biased toward picking a switching price near the middle). Subsequent price levels are then chosen based on each response to home in on the switching price.
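
As a rough illustration of this adaptive procedure, the Python sketch below brackets a respondent's switching price by bisection from a random starting price. The function name, price range, and update rule are illustrative assumptions; the actual branching logic lives in the attached Qualtrics instrument.

import random

def adaptive_mpl(chooses_ai_at, low=0.0, high=200.0, rounds=5):
    """Bracket a respondent's switching price between low and high.

    chooses_ai_at(price) returns True if the respondent picks the AI
    consultation when offered `price` dollars of compensation/savings.
    """
    # Start from a random price rather than the middle of a full list,
    # to avoid anchoring respondents on a mid-list switching point.
    price = random.uniform(low, high)
    for _ in range(rounds):
        if chooses_ai_at(price):
            high = price   # accepted: switching point is at or below this price
        else:
            low = price    # rejected: switching point is above this price
        price = (low + high) / 2  # present the midpoint next
    return low, high  # the switching price is bracketed in this interval

# Example: a hypothetical respondent whose true switching price is $75.
lo, hi = adaptive_mpl(lambda p: p >= 75)
print(f"WTA bracketed in (${lo:.2f}, ${hi:.2f}]; midpoint ${(lo + hi) / 2:.2f}")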
Experimental Design Details
Not available
Randomization Method
Qualtrics
Randomization Unit
individual
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
2400 individuals
Sample size: planned number of observations
2400 individuals
Sample size (or number of clusters) by treatment arms
We expect to enroll 2,400 respondents, 50 in each arm. The 48 arms are the full cross of 3 hypothetical AI memory conditions × 4 vignettes × 2 doctor types (new vs. usual) × 2 AI performance levels, as described above and shown in the instrument. Because we will need to compare across arms, a rough power calculation and the implied minimum detectable effect sizes suggest that this is the smallest sample with adequate power; the sketch below reproduces the arithmetic.
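
The arithmetic behind the arm count, together with a rough per-comparison power check, can be reproduced as below. The condition labels are placeholders (the registration does not name the memory conditions or vignettes), and the two-sample t-test is an assumed benchmark, since the registration describes the power calculation only as rough.

from itertools import product
from statsmodels.stats.power import TTestIndPower

memory = ["memory_a", "memory_b", "memory_c"]       # 3 AI memory conditions
vignettes = ["vignette_1", "vignette_2",
             "vignette_3", "vignette_4"]            # 4 vignettes
doctor = ["new", "usual"]                           # 2 doctor types
performance = ["above_average", "top_10_percent"]   # 2 AI performance levels

arms = list(product(memory, vignettes, doctor, performance))
print(len(arms))        # 48 arms
print(len(arms) * 50)   # 2400 respondents in total

# Minimum detectable effect (in SD units) for comparing two arms of
# n = 50 each with a two-sided 5% test at 80% power: roughly 0.57 SD.
mde = TTestIndPower().solve_power(nobs1=50, alpha=0.05, power=0.8, ratio=1.0)
print(round(mde, 3))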
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

Supporting documents and materials for this trial are not available to the public and may be requested through the Registry.
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
2026-02-16
IRB Approval Number
IRB26-0066