Human Experts and Artificial Intelligence: The Value of Human Input in Diagnostic Imaging (Teleradiology Version)

Last registered on June 29, 2022

Trial Information

General Information

Title
Human Experts and Artificial Intelligence: The Value of Human Input in Diagnostic Imaging (Teleradiology Version)
RCT ID
AEARCTR-0009620
Initial registration date
June 20, 2022

First published
June 27, 2022, 10:00 AM EDT

Last updated
June 29, 2022, 2:01 PM EDT


Primary Investigator

Affiliation
Massachusetts Institute of Technology

Other Primary Investigator(s)

PI Affiliation
Massachusetts Institute of Technology
PI Affiliation
Massachusetts Institute of Technology
PI Affiliation
Harvard Medical School

Additional Trial Information

Status
In development
Start date
2022-06-22
End date
2022-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We plan to investigate how human experts combine their own information with AI predictions when making assessments and decisions in the radiology domain. See the attached pre-analysis plan for full details. This study is a follow-on to the experiment described in AEA RCT Registry AEARCTR-0008799.
External Link(s)

Registration Citation

Citation
Agarwal, Nikhil et al. 2022. "Human Experts and Artificial Intelligence: The Value of Human Input in Diagnostic Imaging (Teleradiology Version)." AEA RCT Registry. June 29. https://doi.org/10.1257/rct.9620
Experimental Details

Interventions

Intervention(s)
See the attached pre-analysis plan for full details.
Intervention Start Date
2022-06-22
Intervention End Date
2022-09-30

Primary Outcomes

Primary Outcomes (end points)
To measure the quality of diagnostic assessments and decisions, we will focus on the following primary outcome variables for each pathology group.
1. Error in probability assessment
2. Incorrect treatment/follow-up recommendation

The primary pathology groups we will consider are:
1. Pooled outcomes for all pathologies
2. Pooled outcomes for all AI assisted pathologies
3. Pooled outcomes for all top-level AI assisted pathologies
Primary Outcomes (explanation)
See the attached pre-analysis plan for full details.

Secondary Outcomes

Secondary Outcomes (end points)
1. Time taken, and measures of effort exerted, to parse the information in the X-ray and the clinical history, with and without AI
2. Treatment effects on distance from AI signal
3. Heterogeneity of treatment effects by pathology prevalence and AI performance
Secondary Outcomes (explanation)
See the attached pre-analysis plan for full details.

Experimental Design

Experimental Design
See the attached pre-analysis plan for full details of the experiment and analysis plan.
Experimental Design Details
Not available
Randomization Method
The attached pre-analysis plan contains the full randomization details. Radiologists will read patient cases in varying information environments. The following treatment arms will be cross-randomized.
• AI treatment: Patient cases are presented with or without assistance from an AI support tool
• Clinical history: Patient cases are presented with or without the patient's clinical history
• Incentives: Responses are either incentivized or not incentivized
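As an illustration, balanced cross-randomization of three binary treatments at the patient-case level within each radiologist can be sketched as follows. This is a minimal hypothetical sketch, not the study's actual assignment code; all function and variable names are invented, and the real scheme is specified in the attached pre-analysis plan.

```python
import itertools
import random


def assign_arms(radiologists, cases, seed=0):
    """Cross-randomize three binary treatments (AI assistance, clinical
    history, incentives) at the case level within each radiologist.

    Illustrative only: the registered design's actual randomization is
    described in the pre-analysis plan.
    """
    rng = random.Random(seed)
    # The 8 cells of the 2 x 2 x 2 cross-randomization: (ai, history, incentive).
    cells = list(itertools.product([0, 1], repeat=3))
    assignments = {}
    for r in radiologists:
        # Repeat the 8 cells to cover all cases (balance within radiologist),
        # then shuffle the order in which cells are encountered.
        arms = (cells * (len(cases) // len(cells) + 1))[: len(cases)]
        rng.shuffle(arms)
        for case, (ai, history, incentive) in zip(cases, arms):
            assignments[(r, case)] = {
                "ai": ai,
                "history": history,
                "incentive": incentive,
            }
    return assignments
```

With a case count that is a multiple of eight, each radiologist sees every treatment combination equally often, so each binary arm covers exactly half of that radiologist's reads.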
Randomization Unit
Randomization occurs at the patient case level for each radiologist.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We will aim to recruit 250 radiologists split into two groups, as described in the attached pre-analysis plan.
Sample size: planned number of observations
The first group of 200 radiologists will read 60 cases each, resulting in 12,000 reads. The second group of 50 radiologists will read 100 cases each, resulting in 5,000 reads.
Sample size (or number of clusters) by treatment arms
The randomization will be balanced, so we expect half of the reads to include AI assistance, half to include clinical history, and half of responses to be incentivized. These treatments are cross-randomized, so cases will be read under all combinations of AI assistance, clinical history, and incentives.
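The planned counts above can be checked with a few lines of arithmetic. This is only an illustrative back-of-the-envelope calculation under the stated balanced cross-randomization; the group labels are invented here for clarity.

```python
# Planned reads from the two radiologist groups (figures from the registration).
group_reads = {"group_1": 200 * 60, "group_2": 50 * 100}
total_reads = sum(group_reads.values())  # 12,000 + 5,000 = 17,000

# Under balanced cross-randomization of three binary arms, each arm covers
# half of the reads, and each of the 2 * 2 * 2 = 8 cells covers one eighth.
per_arm = total_reads // 2
per_cell = total_reads // 8

print(total_reads, per_arm, per_cell)  # 17000 8500 2125
```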
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See the attached pre-analysis plan for full details of the power calculations.
IRB

Institutional Review Boards (IRBs)

IRB Name
MIT Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2021-02-05
IRB Approval Number
E-2953
Analysis Plan

Some information in this trial is unavailable to the public.