Information Complexity and Bias

Last registered on September 17, 2024

Pre-Trial

Trial Information

General Information

Title
Information Complexity and Bias
RCT ID
AEARCTR-0014353
Initial registration date
September 16, 2024

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
September 17, 2024, 1:54 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-09-18
End date
2024-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Experts can be biased, a potential reason for the "crisis of expertise" we are experiencing. Yet it is unclear why bias alone should impair updating: if the bias of an expert is well understood, a receiver could "debias" the message and recover a useful underlying signal. Debiasing, however, may be a cognitively complex process for individuals to undertake. We design a lab experiment to understand how the complexity of debiasing information affects its usage and valuation.
External Link(s)

Registration Citation

Citation
Lindquist, Samuel. 2024. "Information Complexity and Bias." AEA RCT Registry. September 17. https://doi.org/10.1257/rct.14353-1.0
Experimental Details

Interventions

Intervention(s)
This experiment will be lab-based and will take place via Prolific and Qualtrics. All treatments within this experiment will be within participant. Participants will receive information that will aid them in solving a "balls and urns" task. The treatment dimensions will be (a) the bias of the piece of information and (b) the complexity of debiasing that information.
Intervention Start Date
2024-09-18
Intervention End Date
2024-09-30

Primary Outcomes

Primary Outcomes (end points)
Our primary outcomes are:
a) belief updating (the difference between an individual's prior belief that balls were drawn from an urn and their posterior belief that balls were drawn from that urn); and
b) willingness to pay for information (the monetary value a participant states they would be willing to pay for a piece of information).
Primary Outcomes (explanation)
For (a) belief updating, we will use multiple measurements:
a1) posterior belief - prior belief;
a2) log(posterior belief - prior belief), which follows the format of the so-called "Grether regression" (Grether 1980);
a3) (posterior belief - prior belief) / (Bayes' posterior - prior belief), which can be thought of as "percent updating": how much a participant updated, normalized by the original distance between their prior and Bayes' posterior. Because this value is undefined when Bayes' posterior equals the prior belief, we will likely also report a version that codes such values as "1".
b1) Our willingness-to-pay measure will be the numeric value that participants assign to a piece of information with a given bias.
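As a worked instance of (a3), with purely illustrative numbers:

    \[ \text{percent updating} = \frac{\text{posterior} - \text{prior}}{\text{posterior}_{\text{Bayes}} - \text{prior}} = \frac{65 - 50}{70 - 50} = 0.75, \]

i.e., a participant with a prior of 50% whose Bayes' posterior is 70% and who reports a posterior of 65% has closed 75% of the gap.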

Secondary Outcomes

Secondary Outcomes (end points)
We will mainly focus on the outcomes above, but plan to look at heterogeneity in these outcomes by
a) baseline skill
b) baseline cognitive uncertainty
c) number of balls drawn in the willingness to pay problems
Secondary Outcomes (explanation)
a) baseline skill: skill will be measured using (the inverse of) the average distance between the participant's posterior and Bayes' posterior in the practice questions;
b) cognitive uncertainty: cognitive uncertainty will be the average of the cognitive-uncertainty questions asked of participants in the practice section;
c) number of balls drawn: the number of balls drawn in each willingness-to-pay problem.
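One natural formalization of the baseline skill measure in (a) is shown below; the exact functional form of the "inverse" is our assumption, not specified in the registration:

    \[ \text{skill} = \left( \frac{1}{3} \sum_{i=1}^{3} \bigl| q_i - q_i^{\text{Bayes}} \bigr| \right)^{-1}, \]

where \(q_i\) is the participant's posterior and \(q_i^{\text{Bayes}}\) is Bayes' posterior on practice question \(i\); in practice the measure would need a cap or offset for participants whose average distance is zero.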

Experimental Design

Experimental Design
Our experiment will be hosted on Qualtrics, and we will recruit participants via Prolific. Participants in our experiment will complete a series of "balls and urns" tasks, in which they guess the probability that balls were drawn from a certain urn. They will then be given valuable information to solve these tasks, after which they can "re-update" their answer. We deliver this information to them with some bias, but tell them exactly how much the information is biased by. We deliver the bias of the information with varying levels of complexity (which we operationalize via algebraic statements of varying length).
Experimental Design Details
Before our experiment begins, participants give consent and must pass a simple attention check. Those who either (a) do not consent or (b) do not pass the attention check are screened out.

We then give participants instructions for the balls and urns task and afterwards ask them comprehension questions about the instructions. They are screened out of the survey if they answer any of these comprehension questions incorrectly.

This experiment has four portions:
1) A "practice" stage, during which participants solve three balls and urns tasks to practice them. This stage is also used for the researcher to be able to derive baseline levels of (a) skill at the task and (b) "cognitive uncertainty" (Enke and Graeber, 2023) about the task.
2) A main stage. During this stage, participants solve 11 balls and urns problems, but are then given (a) a possibly biased estimate of the correct answer, and (b) the exact value by which this estimate deviates from the "truth" (i.e. Bayes' posterior). For example, after solving a task, they may told that the correct answer is "25%", but that this value is overestimated by "5%". We vary the complexity of the value "5%", for example, by expressing it as "4 * (7 -2 )/(2 * 2)". We have three complexity levels, where a simple number (e.g. "5%") is the first such complexity value. The first updating task they solve will always have a complexity level of 1.
3) A willingness to pay stage, wherein participants will complete 10 willingness to pay questions. Participants will now be shown (a) a set of balls drawn (without knowing the composition of the urns from which they came, or the prior probabilities of the urns being chosen), and (b) a mathematical expression of how the information given would be biased. They will then state how much they would be willing to pay to receive that information, given its bias.
4) One of the problems from the willingness to pay stage is implemented.
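A minimal sketch (our illustration, not the authors' actual Qualtrics code; all names are hypothetical) of how a main-stage feedback message at a given complexity level might be assembled:

    // Hypothetical helper: compose the feedback shown after a main-stage task.
    // `displayedEstimate` is the (possibly biased) estimate shown to the
    // participant, and `expression` is an algebraic string evaluating to the
    // magnitude of the bias (longer expressions = higher complexity).
    function composeMessage(displayedEstimate, bias, expression) {
      const direction = bias >= 0 ? "overestimated" : "underestimated";
      return "The correct answer is " + displayedEstimate + "%, but this value is " +
             direction + " by " + expression + "%.";
    }

    composeMessage(25, 5, "5");                      // complexity level 1
    composeMessage(25, 5, "4 * (7 - 2)/(2 * 2)");    // higher complexity, same value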
Randomization Method
All randomization takes place using JavaScript code implemented through Qualtrics.

1) In the main task (described in Part 2 above), the levels of bias that individuals will see are {-10, -5, -1, 0, 1, 5, 10}. There are three complexity levels for each bias level, and two expressions for each complexity level above 1. For example, for bias level = 0, we have complexity level 1 = "0%", complexity level 2 = "4 * (2 - 2)/(2 * 2)%" or "6 * (3 - 3)/(2 * 3)%", and complexity level 3 = "2^2 - 4 + 4 * (2 - 2)/(2 * 2) - 8 + 4 * 2%" or "2^3 - 8 + 6 * (3 - 3)/(2 * 3) - 12 + 4 * 3%". All other bias levels are analogous.

We will not have a full factorial design when crossing Bayes' posteriors with biases. To see why, consider the following: if Bayes' posterior were, say, 99%, then stating that the computer was underestimating by 5% would imply the true value was 104%, an impossibility. Thus for each value of Bayes' posterior, the code picks only from the aforementioned bias values for which the information given lies between 0% and 100%.
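A minimal sketch of this validity filter (our reconstruction; it assumes the displayed estimate equals Bayes' posterior plus the bias):

    // Hypothetical reconstruction of the bias filter, not the study's code.
    const BIASES = [-10, -5, -1, 0, 1, 5, 10];

    // Keep only biases for which the displayed information stays in [0, 100].
    function validBiases(bayesPosterior) {
      return BIASES.filter(function (b) {
        const displayed = bayesPosterior + b;
        return displayed >= 0 && displayed <= 100;
      });
    }

    // e.g. validBiases(99) -> [-10, -5, -1, 0, 1]; biases of 5 and 10 are
    // excluded because they would imply values above 100%.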

2) In the willingness to pay section, we will oversample bias = 0, so that it is selected 50% of the time in expectation (one of the values -10, -5, -1, 1, 5, 10 is selected the other 50% of the time). Furthermore, we will select complexity level 1 with 50% probability, and complexity levels 2 and 3 together with the remaining 50%.
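A minimal sketch of this sampling scheme (hypothetical names; the even split between levels 2 and 3 is our assumption):

    // Hypothetical draw of one willingness-to-pay question's treatment cell.
    function drawBias() {
      if (Math.random() < 0.5) return 0;            // oversampled zero bias
      const others = [-10, -5, -1, 1, 5, 10];
      return others[Math.floor(Math.random() * others.length)];
    }

    function drawComplexity() {
      if (Math.random() < 0.5) return 1;            // level 1 half the time
      return Math.random() < 0.5 ? 2 : 3;           // assumed even split of 2 and 3
    }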



Randomization Unit
Randomization is within participant, so the same participant will in all likelihood receive messages of varying (a) bias and (b) complexity.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
This study does not have clustered randomization.
Sample size: planned number of observations
We will recruit 500 participants. For our practice updating problems, we will have 3 observations per participant. For our main updating problems, we will have 11 observations per participant. For our willingness to pay problems, we will have 10 observations per participant.
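In expectation, then, the planned sample yields 500 × 3 = 1,500 practice observations, 500 × 11 = 5,500 main-task observations, and 500 × 10 = 5,000 willingness-to-pay observations: 12,000 task-level observations in total.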
Sample size (or number of clusters) by treatment arms
Randomization takes place via JavaScript code during the study, so we cannot pre-specify exact treatment-arm sizes, but we do know the proportions of these treatment arms in expectation.

In the main task, we have seven bias values [-10, -5, -1, 0, 1, 5, 10] and three complexity levels [1, 2, 3]. These 7 × 3 = 21 treatments should be evenly distributed in expectation across questions 2-11 of the main task. For the first question of the main task, we will always use complexity level 1.
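Under this scheme, each of the 21 cells receives 10/21 ≈ 0.48 observations per participant from questions 2-11, or roughly 5,000/21 ≈ 238 observations in expectation across the planned 500 participants.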

In the willingness to pay section, for biases, we will pick 0 with 50% probability, and some value in [-10, -5, -1, 1, 5, 10] with 50% probability. For complexity levels, we will pick complexity level 1 with 50% probability, and from [2, 3] with 50% probability.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
UC San Diego
IRB Approval Date
2024-07-03
IRB Approval Number
808257

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials