Choice Misprediction: Incorrect Beliefs, Mistakes, and Mistaken Learning

Last registered on December 31, 2022

Pre-Trial

Trial Information

General Information

Title
Choice Misprediction: Incorrect Beliefs, Mistakes, and Mistaken Learning
RCT ID
AEARCTR-0009602
Initial registration date
June 17, 2022

First published
June 18, 2022, 10:25 AM EDT

Last updated
December 31, 2022, 12:25 PM EST

Locations

Region

Primary Investigator

Eric Koepcke
Affiliation
University of California, Berkeley

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-06-23
End date
2023-05-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
When we ask someone to predict their future choices or behavior, we are implicitly asking them to predict the likelihood of future states of the world, to predict their behavior (based on their preferences) in each of these states, and then to aggregate all this information into a final prediction. Each "step" in this mental process could be subject to bias or error, leading to misprediction of future behavior. In this study, we seek to decompose and quantify how much of a role each of these "steps" plays in misprediction and to explore mechanisms for why people make mistakes in each "step." We also seek to document whether this decomposition changes over time, as we give people opportunities to learn.

Specifically, we will run a five-session modification of the survey-based experiment from Augenblick and Rabin (2019), adding uncertainty about the sentence length of a Greek transcription task (i.e., about how much time/effort a task takes). We give respondents information about the sentence length distribution and ask them to predict this distribution. We also ask them to predict their future work choices, both unconditionally and conditional on being assigned specific sentence lengths. We explore several misprediction mechanisms: present focus, motivated beliefs, non-Bayesian updating, and cognitive mistakes due to complexity.
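For concreteness, the decomposition can be written as follows (our notation, an illustration rather than part of the registered materials): letting p̂_t(L) denote a respondent's session-t predicted probability of sentence length L and ê_t(L) their conditional work prediction, an internally consistent unconditional prediction aggregates as

```latex
\hat{e}^{\,\mathrm{unc}}_t \;=\; \sum_{L \in \{21,\,28,\,35\}} \hat{p}_t(L)\,\hat{e}_t(L),
```

so misprediction can arise from errors in the state beliefs p̂_t, errors in the conditional predictions ê_t(·), or errors in the aggregation itself.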
External Link(s)

Registration Citation

Citation
Koepcke, Eric. 2022. "Choice Misprediction: Incorrect Beliefs, Mistakes, and Mistaken Learning." AEA RCT Registry. December 31. https://doi.org/10.1257/rct.9602-1.1
Sponsors & Partners

Sponsors

Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-06-23
Intervention End Date
2022-07-10

Primary Outcomes

Primary Outcomes (end points)
See Design section below for variable explanation.

1. Misprediction of (Unconditional) Work Predictions: Compare (unconditional) work predictions in session 't' to the average current work choices in session 't+1' (averaged using the true sentence length probability distribution from session 't'). Look into: How does this variable change across sessions, how does it depend on the mispredictions discussed in (2) and (3) below, how does it depend on the (randomized) sentence length probability distribution (complexity). A computational sketch of this comparison appears after this list.

2. Misprediction of (Conditional) Work Predictions: Compare (sentence length) conditional work predictions from session 't' to current work choices in session 't+1.' Look into: How does this variable change across sessions, how does it respond to the information treatments (and the order of the information treatments)

3. Misprediction of Sentence Length Probability Distribution: Compare the predicted and actual sentence length probability distributions. Look into: How does this variable change as respondents are shown more draws, do they Bayesian update their beliefs (e.g., following the methodology of Augenblick and Rabin, 2021), do they overreact to the most recent draws, how does this variable depend upon the true sentence length probability distribution
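A minimal sketch of how outcome (1) could be computed; the function and variable names are our own illustrative assumptions, not the registered analysis code:

```python
# Sketch of primary outcome (1): the unconditional work prediction from
# session t minus session t+1 current work choices averaged over the *true*
# sentence length distribution from session t. Names are illustrative.

LENGTHS = [21, 28, 35]  # possible task sentence lengths (characters)

def unconditional_misprediction(pred_t, choices_t1, true_probs_t):
    """pred_t       -- unconditional work prediction (tasks, at a given wage)
    choices_t1   -- dict: sentence length -> current work choice in t+1
    true_probs_t -- dict: sentence length -> true probability in session t
    Returns the prediction minus the true-distribution-weighted benchmark."""
    benchmark = sum(true_probs_t[L] * choices_t1[L] for L in LENGTHS)
    return pred_t - benchmark

# Example: a prediction of 20 tasks against choices implying 15.4 expected
# tasks yields a misprediction of about +4.6 (overprediction).
print(unconditional_misprediction(20, {21: 22, 28: 16, 35: 10},
                                  {21: 0.2, 28: 0.5, 35: 0.3}))
```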
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Differences between current work choices, (unconditional) work predictions, and desired work choices (i.e., the present-focus estimation methodology of Augenblick and Rabin, 2019); a sketch of this estimation appears after this list. How do these differences change across sessions, how do they respond to the information treatments (and the order of the information treatments), do these differences depend on how accurate respondents think they are relative to others (motivated beliefs)

2. Accuracy of respondents' guesses about how accurate their and others' (conditional) work predictions will be. How does this change across sessions, how does it respond to the information treatments (and the order of the information treatments)

3. Reported confidence of sentence length probability distribution predictions. How does this change as more draws are shown, how does it change after 'surprising' draws, how does this depend upon the true probability distribution

4. Accuracy of respondents' guesses about how accurate their (unconditional) work predictions will be. How does this change across sessions, how does it depend upon (2) and (3) above, how does it depend on the true sentence length probability distribution

5. Text analysis of: Reasons why own/others' (conditional) work predictions may be inaccurate, (after receiving information treatments) reasons why own/others' (conditional) work predictions were inaccurate/accurate, reasons why their (unconditional) work predictions may be inaccurate

6. Number of tasks respondents actually end up doing. How accurate were their (corresponding) current work choices, does this accuracy change across sessions
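A minimal sketch of the present-focus estimation referenced in (1), in the spirit of Augenblick and Rabin (2019). It assumes (our simplifications, not the registered specification) a power cost of effort c(e) = e^γ/(φγ), linear utility in money, and negligible exponential discounting over the short horizon, so the log-linearized first-order condition is ln e = (1/(γ−1))[ln φ + ln w + 1{decided in the moment}·ln β]:

```python
import numpy as np

def estimate_present_focus(tasks, wages, decided_now):
    """OLS of ln(tasks) on [1, ln(wage), in-the-moment dummy].
    tasks, wages, decided_now -- equal-length 1-D numpy arrays; task choices
    must be strictly positive for the log to be defined.
    Returns (gamma, beta) implied by the log-linearized FOC."""
    y = np.log(tasks)
    X = np.column_stack([np.ones(len(wages)), np.log(wages), decided_now])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    slope_w, slope_now = coef[1], coef[2]
    gamma = 1.0 + 1.0 / slope_w         # slope on ln(wage) is 1/(gamma - 1)
    beta = np.exp(slope_now / slope_w)  # dummy slope is ln(beta)/(gamma - 1)
    return gamma, beta
```

Under this specification, present focus shows up as β < 1: in-the-moment ("current work") choices fall short of advance ("work prediction" and "desired work") choices at the same wage.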

Secondary Outcomes (explanation)

Experimental Design
This is a five-session study. Each session is conducted via a Qualtrics survey, and sessions are spaced a few days apart.

The study is a modified version of the one used in Augenblick and Rabin (2019). In this study, respondents make choices/predictions about how many Greek transcription tasks they will do. Each Greek transcription task involves transcribing a sentence of blurry Greek characters into their English counterparts. In our study, task sentences can be 21, 28, or 35 characters long. Respondents know neither which sentence length they will be assigned for tasks nor the sentence length probability distribution.

Each session has several components:
(1) Three practice/warm-up tasks, one of each possible sentence length

(2) Survey questions:

(a-i) Current work: Respondents are asked to report how many tasks they would like to do at the end of the Current session for different (per-task) wages. They are asked to give separate answers for each possible sentence length
(a-ii) (Conditional) Work Predictions: Respondents are asked to predict how many tasks they would like to do at the Next session for different wages. They are asked to give separate answers for each possible sentence length
(a-iii) Prediction Accuracy: Respondents are asked to guess how accurate their predictions will be and how accurate (on average) other people's predictions will be. They are asked why they think their/others' predictions may be inaccurate and whether they'd like accuracy information in future sessions
(a-iv) Desired work: Respondents are asked to report how many tasks they'd like to do at the next session for different wages. They are only asked to answer for the case where the task sentence length is 28 characters

(b) Sentence Length Probability Distribution Prediction: Respondents are shown three random draws (i.e., sentence lengths) from the sentence length probability distribution. They are then asked to predict the probability of being assigned each task sentence length and to report their confidence in their predictions. This is repeated four more times (for a total of 15 random draws); a Bayesian benchmark for these elicited beliefs is sketched after this list. Note: The probability distribution is randomized both across people and across sessions

(c-i) (Unconditional) Work Predictions: Respondents are asked to predict, on average, how many tasks they will choose to do Next session for different wages (the average should be taken over their perceived sentence length probability distribution)
(c-ii) Respondents are asked to guess how accurate their predictions will be and why they think their predictions may be inaccurate

(3) Task Completion: A random task sentence length is selected (according to the true distribution) and a random (per-task) wage is selected. Respondents are then free to complete tasks of that sentence length for that wage.

Note: Session 5 doesn't involve any questions about future work predictions
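As referenced in component (b), a minimal sketch of one possible Bayesian benchmark for the elicited beliefs, assuming (our choice, not the registered model) a uniform Dirichlet prior over the three sentence lengths:

```python
# Posterior-mean beliefs after each batch of 3 of the 15 draws; reported
# beliefs can then be compared round by round to this benchmark. A Dirichlet
# prior with uniform pseudo-counts is an assumption for illustration.
import numpy as np

LENGTHS = [21, 28, 35]

def bayesian_benchmark(draws, prior=(1.0, 1.0, 1.0)):
    """draws -- sequence of 15 observed sentence lengths
    prior -- Dirichlet pseudo-counts for lengths 21, 28, 35
    Returns 5 posterior-mean belief vectors, one per elicitation round."""
    counts = np.array(prior, dtype=float)
    posteriors = []
    for batch_start in range(0, len(draws), 3):
        for d in draws[batch_start:batch_start + 3]:
            counts[LENGTHS.index(d)] += 1.0
        posteriors.append(counts / counts.sum())  # posterior mean after batch
    return posteriors

# Example: draws concentrated on length 28 pull beliefs toward it.
rounds = bayesian_benchmark(
    [28, 28, 21, 35, 28, 28, 28, 21, 28, 28, 35, 28, 28, 28, 28])
print(np.round(rounds[-1], 2))  # -> [0.17 0.67 0.17]
```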

Accuracy Information Treatment:
From session 3 onward, between (2a-i) and (2a-ii), respondents are given information about prediction accuracy from the previous two sessions. In session 3, half of the respondents are shown their own prediction accuracy and are asked to explain why their predictions were inaccurate/accurate; the other half are shown the sample's average prediction accuracy and are asked to guess why other people's predictions were inaccurate/accurate. In session 4, we flip which piece of prediction accuracy information each respondent receives. In session 5, each respondent is shown both pieces of information: they are first shown the sample-average prediction accuracy, then asked to predict whether their own accuracy was better/worse than the sample average, and finally shown their own prediction accuracy and asked to explain why it was better/worse than the sample average. The crossover assignment is sketched below.
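A minimal sketch of the crossover assignment just described (identifiers and labels are hypothetical):

```python
import random

def assign_info_order(respondent_ids, seed=0):
    """Randomize respondents into the two orderings of the accuracy
    information treatment: own accuracy first (session 3) then the sample
    average (session 4), or the reverse. In session 5, everyone sees both,
    sample average first."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2  # half the sample (rounding down) per ordering
    return {
        rid: ("own", "others") if i < half else ("others", "own")
        for i, rid in enumerate(ids)
    }
```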
Experimental Design Details
Randomization Method
Computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
100-120 respondents (treatment is randomized at the individual level)
Sample size: planned number of observations
500-600 respondent-session observations (100-120 respondents across 5 sessions)
Sample size (or number of clusters) by treatment arms
50-60 respondents per ordering of the information treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
UC Berkeley Committee for Protection of Human Subjects (IRB)
IRB Approval Date
2022-03-30
IRB Approval Number
2022-02-15077

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials