Expertise, Personal Experience, and Algorithm Aversion (study 4)

Last registered on June 21, 2022


Trial Information

General Information

Expertise, Personal Experience, and Algorithm Aversion (study 4)
Initial registration date
June 18, 2022

The initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
June 21, 2022, 8:40 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator


Other Primary Investigator(s)

PI Affiliation
Peking University
PI Affiliation
Peking University

Additional Trial Information

Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Research shows that evidence-based algorithms outperform humans at prediction tasks, yet people give less weight to algorithmic advice than they should. By exogenously varying personal experience with AI predictions, we examine how personal experience affects the weight placed on algorithmic advice and how the level of expertise moderates this relationship. Our results will inform the design of algorithms that human decision-makers are more willing to adopt and help mitigate the biases that experts hold against algorithms.
External Link(s)

Registration Citation

Gao, Yu, Cong Wang, and Chong (Alex) Wang. 2022. "Expertise, Personal Experience, and Algorithm Aversion (study 4)." AEA RCT Registry. June 21.
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The main outcome is reliance on the AI's advice after subjects receive different information about the AI's performance relative to their own.
Primary Outcomes (explanation)
If the subject's initial diagnosis differs from the AI's advice and she later changes her diagnosis to match the AI's advice, reliance is coded as 1; otherwise it is coded as 0.
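As a sketch, this coding rule could be implemented as follows (the function and variable names are illustrative, not taken from the registration):

```python
def reliance(initial_dx, ai_advice, final_dx):
    """Code reliance on AI advice per the registered definition.

    Returns 1 if the subject initially disagreed with the AI but
    switched to its advice, 0 if she disagreed and did not switch,
    and None when initial diagnosis and advice coincide (the
    outcome is only defined under initial disagreement).
    """
    if initial_dx == ai_advice:
        return None  # no disagreement, so reliance is undefined
    return 1 if final_dx == ai_advice else 0
```

Cases where the initial diagnosis already matches the AI's advice are returned as undefined rather than 0, since no switching decision is observed there.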

Secondary Outcomes

Secondary Outcomes (end points)
Tolerance of underperformance by the AI versus by a human expert.
Secondary Outcomes (explanation)
To what extent do you agree with the following statement?
“If the AI underperformed me in its domain, although it rarely happens, I would think that the AI is not good enough”.
“If the expert underperformed me in his/her domain, although it rarely happens, I would think that the expert is not good enough”.
(5-point scale, from 1 = completely agree to 5 = completely disagree)

Experimental Design

Experimental Design
Each doctor will be randomly assigned to the aggregate-information treatment or the control condition, in which they receive information about their performance on a diagnosis task relative to an AI. Subjects will then receive advice from the AI, and we will measure their reliance on that advice.
Experimental Design Details
Randomization Method
Randomization will be done by the survey platform.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
131 doctors per group. That is about 262 in total.
Sample size: planned number of observations
393 observations in each group
Sample size (or number of clusters) by treatment arms
We need at least 393 observations in each group. That is about 131 individuals per group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
pwr.t.test(d = 0.2, sig.level = 0.05, power = 0.8, type = "two.sample")

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials