Expert Bias and Belief Updating

Last registered on October 23, 2023

Pre-Trial

Trial Information

General Information

Title
Expert Bias and Belief Updating
RCT ID
AEARCTR-0012313
Initial registration date
October 18, 2023

First published
October 23, 2023, 9:26 AM EDT

Locations

Region

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2023-07-10
End date
2024-01-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Trust in experts is historically low. Understanding the determinants of trust in experts is therefore a first-order concern. We experimentally vary perceptions of both the expertise and the bias of a sender of information to understand whether these perceptions affect belief updating. We focus on updating about expected future inflation, a quantity that is particularly important for the public to understand.
External Link(s)

Registration Citation

Citation
Lindquist, Samuel. 2023. "Expert Bias and Belief Updating." AEA RCT Registry. October 23. https://doi.org/10.1257/rct.12313-1.0
Experimental Details

Interventions

Intervention(s)
We will vary the identity of a sender of information, either to make the sender appear more biased or to make them appear more expert.
Intervention (Hidden)
We will have four treatment groups and one control group. While the information given to recipients remains constant across treatments, we will vary both the level of expertise and the bias of the sender to understand how this affects belief updating. The four treatment groups will be referred to as the (a) non-biased layperson, (b) non-biased expert, (c) biased-up expert, and (d) ambiguously biased expert. The difference between treatments (a) and (b) is that the "expert" sender was shown a graph of inflation before providing their estimate. The difference between treatments (b), (c), and (d) is that in (b) the sender was incentivized to send the correct estimate, in (c) the sender was incentivized to overestimate, and in (d) the sender was told they were incentivized either to overestimate or to underestimate.
Intervention Start Date
2023-10-18
Intervention End Date
2023-11-04

Primary Outcomes

Primary Outcomes (end points)
a) Belief updating of inflation expectations
b) Uncertainty reduction about inflation expectations
Primary Outcomes (explanation)
a) To construct belief updating, we will take the difference between the posterior and the prior belief.
b) To construct uncertainty reduction, we will look at differences between the posterior and prior belief distributions (for example, the mean absolute deviation); see the sketch below.
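
As an illustration only, the sketch below shows how these two outcomes could be constructed in Python. The column names (prior_mean, posterior_mean) and the binned probability inputs are hypothetical assumptions, not taken from the survey instrument.

```python
import numpy as np
import pandas as pd

def belief_updating(df: pd.DataFrame) -> pd.Series:
    """Primary outcome (a): posterior point belief minus prior point belief."""
    # Column names are hypothetical placeholders for the survey variables.
    return df["posterior_mean"] - df["prior_mean"]

def mean_abs_deviation(probs: np.ndarray, bin_midpoints: np.ndarray) -> float:
    """Mean absolute deviation of a binned subjective probability distribution."""
    mean = np.dot(probs, bin_midpoints)
    return np.dot(probs, np.abs(bin_midpoints - mean))

def uncertainty_reduction(prior_probs: np.ndarray,
                          posterior_probs: np.ndarray,
                          bin_midpoints: np.ndarray) -> float:
    """Primary outcome (b): reduction in dispersion from prior to posterior."""
    return (mean_abs_deviation(prior_probs, bin_midpoints)
            - mean_abs_deviation(posterior_probs, bin_midpoints))
```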

Secondary Outcomes

Secondary Outcomes (end points)
- Stated beliefs about trust in the source
- Belief updating about a topic on which subjects in one of our sample pools may have strong priors (the ranking of their university)
Secondary Outcomes (explanation)
For these stated beliefs, we include both a Likert score and an open-ended text box.
- Belief updating in this case will again be measured as the difference between the posterior and the prior.

Experimental Design

Experimental Design
We will use two subject pools: a student sample from UC San Diego and a sample from the online platform Prolific. We will conduct the experiment on Qualtrics. All subjects will be provided with the same information, with the experimental treatment coming in the form of changed perceptions of the sender's bias and level of expertise.
Experimental Design Details
Randomization Method
Randomization will be done within the Qualtrics survey platform (see the illustrative sketch below).
Randomization Unit
Individual level
Was the treatment clustered?
No
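
The randomization itself is implemented inside Qualtrics; the sketch below only illustrates an equivalent individual-level assignment using the planned arm shares from the sample-size section. The arm labels and the seed are illustrative assumptions.

```python
import numpy as np

# Planned arm sizes out of 1000 participants (from the sample-size section).
arms = ["control", "nonbiased_layperson", "nonbiased_expert",
        "biased_up_expert", "ambiguous_expert"]
planned_n = np.array([150, 300, 200, 150, 200])
probs = planned_n / planned_n.sum()

rng = np.random.default_rng(seed=12313)  # seed chosen arbitrarily for reproducibility

def assign(n_participants: int) -> np.ndarray:
    """Independent individual-level assignment, mirroring randomization in Qualtrics."""
    return rng.choice(arms, size=n_participants, p=probs)

assignments = assign(1000)
```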

Experiment Characteristics

Sample size: planned number of clusters
1000
Sample size: planned number of observations
1000 overall: 600 will come from UC San Diego and the other 400 from Prolific.
Sample size (or number of clusters) by treatment arms
150 control, 300 non-biased layperson (a), 200 non-biased expert (b), 150 biased-up expert (c), 200 ambiguously biased expert (d)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
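
A minimal sketch of how a minimum detectable effect in standard-deviation units could be computed for a pairwise arm comparison, assuming a conventional two-sample test at 5% significance and 80% power; the alpha and power values are assumptions rather than parameters stated in the registration.

```python
import math
from scipy.stats import norm

def mde_sd_units(n1: int, n2: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect (in SD units) for a two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return (z_alpha + z_power) * math.sqrt(1 / n1 + 1 / n2)

# Example: control (n=150) vs. non-biased expert (n=200).
print(round(mde_sd_units(150, 200), 2))  # roughly 0.30 SD
```
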
IRB

Institutional Review Boards (IRBs)

IRB Name
UC San Diego IRB
IRB Approval Date
2023-07-26
IRB Approval Number
808257
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials