A Dynamic Investigation of Stereotypes, Belief Updating, and Behavior

Last registered on April 13, 2020

Pre-Trial

Trial Information

General Information

Title
A Dynamic Investigation of Stereotypes, Belief Updating, and Behavior
RCT ID
AEARCTR-0005712
Initial registration date
April 11, 2020

First published
April 13, 2020, 12:11 PM EDT

Locations

Region

Primary Investigator

Name
Katherine Coffman
Affiliation
Harvard Business School

Other Primary Investigator(s)

PI Name
Paola Ugalde Araya
PI Affiliation
Arizona State University
PI Name
Basit Zafar
PI Affiliation
Arizona State University

Additional Trial Information

Status
In development
Start date
2020-03-31
End date
2021-04-30
Secondary IDs
Abstract
Many decisions (such as how much or what kind of education to get) are dynamic in nature, with individuals receiving feedback at one point in time and then making decisions much later. Zimmermann (2019) shows that individuals are less likely to recall negative feedback in the long run. Our project builds on this important work by asking whether there are gender differences in the propensity to incorporate positive and negative feedback in immediate and more long-term decision-making, and whether this gender difference depends upon the gender stereotype of the task. We also plan to explore the link between beliefs and choices about how to be compensated for performance, and whether the difficulty of the domain influences belief updating.
External Link(s)

Registration Citation

Citation
Coffman, Katherine, Paola Ugalde Araya and Basit Zafar. 2020. "A Dynamic Investigation of Stereotypes, Belief Updating, and Behavior." AEA RCT Registry. April 13. https://doi.org/10.1257/rct.5712-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-03-31
Intervention End Date
2020-04-30

Primary Outcomes

Primary Outcomes (end points)
Completion of Sessions 1 and 2 will provide the following key data (a hypothetical record layout is sketched after this list):
• Beliefs about Round 1 performance prior to receiving feedback
• Provisional compensation choices for Round 2 performance prior to receiving feedback
• Beliefs about Round 1 performance immediately after receiving feedback (half of the treated sample)
• Provisional compensation choices for Round 2 performance immediately after receiving feedback (half of the treated sample)
• Beliefs about Round 1 performance approximately one week after receiving feedback
• Provisional compensation choices for Round 2 performance approximately one week after receiving feedback
• Round 1 performance
• Round 2 performance
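
A hypothetical sketch (not part of the registration) of the per-participant record these endpoints imply; all field names are illustrative only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParticipantRecord:
    participant_id: str
    # Beliefs about Round 1 performance at up to three points in time;
    # the immediate post-feedback entries exist only for the half of the
    # treated sample that completes that stage.
    beliefs_pre_feedback: Optional[dict] = None
    beliefs_immediate_post_feedback: Optional[dict] = None
    beliefs_one_week_post_feedback: Optional[dict] = None
    # Provisional compensation choices for Round 2 at the same points.
    choices_pre_feedback: Optional[list] = None
    choices_immediate_post_feedback: Optional[list] = None
    choices_one_week_post_feedback: Optional[list] = None
    # Quiz performance.
    round1_scores: Optional[dict] = None  # e.g. {"math": 7, "verbal": 6}
    round2_scores: Optional[dict] = None
```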
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This is an online experiment consisting of two 30-minute sessions, completed approximately one week apart.

Session 1:

• Round 1 Performance: Individuals are asked to perform two tasks, one male-stereotypical (math) and one female-stereotypical (verbal). Each will be a multiple-choice quiz, calibrated so that quizzes from the two domains are of similar average difficulty across participants. There will be a “hard” treatment, where quizzes are designed to be more difficult on average, and an “easy” treatment, where quizzes are designed to be less difficult on average. Participants will be randomly assigned to one of the two treatments.

• Belief Elicitation and Provisional Choice for Round 2 Performance PRE-FEEDBACK: Using an incentivized procedure, we will next elicit beliefs about performance on each of the two Round 1 quizzes: a point guess of the Round 1 score on each quiz and the believed probability of having scored exactly that. We will also ask participants to assign probabilities to each possible rank relative to a fixed reference group of prior participants who completed the same quiz (a group of 9 participants who took part before the real experiment started), and we will elicit the believed probability of ranking in the top 40% of that reference group. A sketch of these elicitation objects follows this list.

At this stage, we will also introduce the compensation choice for Round 2 performance (to be completed during Session 2). Participants will be informed that problems will be harder on average in Round 2, and they will then choose how to be compensated for the Round 2 quizzes. Using multiple price lists, they will make a series of choices in which we vary the incentives attached to each task, some competitive and some piece-rate (also sketched after this list).

• Feedback Provision: Participants will then receive feedback about Round 1 performance. For each Round 1 quiz, they will be told how their performance compared (better or worse) to that of one randomly chosen participant from the reference group.

• Belief Elicitation and Provisional Choice IMMEDIATELY POST-FEEDBACK: We will repeat exactly the belief elicitation and provisional Round 2 compensation choices that participants completed pre-feedback. This stage will be fielded to only half of the treated participants.
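
A minimal sketch of the pre-feedback elicitation stage described above, assuming ranks run from 1 to 10 (the participant plus the 9-person reference group). The registration does not specify the scoring rule, the payment amounts, or the exact price-list rows, so validate_rank_beliefs, build_price_list, and all dollar values below are hypothetical illustrations, not the study's instrument:

```python
# Hedged sketch of the belief elicitation and multiple price lists (MPLs).
# Payment values and the price-list structure are hypothetical.

def validate_rank_beliefs(rank_probabilities: dict[int, float]) -> None:
    """Check a participant's reported distribution over possible ranks.

    Participants assign a probability (in percentage points) to each
    possible rank relative to the fixed reference group; the entries
    must cover every rank and sum to 100.
    """
    assert set(rank_probabilities) == set(range(1, 11)), "one entry per rank"
    total = sum(rank_probabilities.values())
    assert abs(total - 100.0) < 1e-6, "probabilities must sum to 100"

def build_price_list(piece_rates: list[float],
                     competitive_rate: float) -> list[dict]:
    """Build one multiple price list for a task.

    Each row is a binary choice between a piece-rate payment per correct
    answer (varied across rows) and a competitive payment that pays only
    if the participant outperforms a randomly drawn reference-group
    member. The row at which a participant switches summarizes her choice.
    """
    return [
        {"row": i, "piece_rate": rate, "competitive_rate": competitive_rate}
        for i, rate in enumerate(piece_rates, start=1)
    ]

# Usage with illustrative (hypothetical) dollar amounts:
math_list = build_price_list([0.25, 0.50, 0.75, 1.00], competitive_rate=2.00)
validate_rank_beliefs({rank: 10.0 for rank in range(1, 11)})
```

This sketch reads the lists as piece-rate-versus-competitive rows with the competitive payment held fixed, one common MPL design; the study's actual rows and rates may differ.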

Session 2:

• Belief Elicitation and Provisional Choice ONE WEEK POST-FEEDBACK: One week after the initial session, all participants will be re-invited, and we will repeat exactly the belief elicitation and provisional Round 2 compensation choices they completed pre-feedback.

• So, for some respondents we will have three provisional choices (pre-feedback; immediately post-feedback; one week post-feedback), while for others we will have two (pre-feedback; one week post-feedback). For each individual, one of these will be picked at random to count as their actual compensation choice for Round 2 performance (see the sketch after this list).

• Round 2 Performance (completed one week post-feedback): We will implement the randomization to determine the compensation scheme under which each participant performs. However, we will first ask participants to complete both tasks and only then inform them which task/incentive scheme was chosen for payment, so that performance on both tasks is elicited under very similar informational conditions.
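
A minimal sketch of the random-incentive step just described, in which one provisional choice is drawn at random to count; the stage labels are hypothetical:

```python
import random

def draw_choice_that_counts(provisional_choices: dict[str, object]) -> tuple[str, object]:
    """Select, uniformly at random, which provisional choice is implemented.

    provisional_choices maps a stage label to the compensation choice made
    at that stage, e.g. {"pre": ..., "immediate": ..., "week": ...} for
    respondents with three elicitations, or {"pre": ..., "week": ...} for
    those with two.
    """
    stage = random.choice(list(provisional_choices))
    return stage, provisional_choices[stage]
```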
Experimental Design Details
Randomization Method
Randomization through oTree for online experiment
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
n/a
Sample size: planned number of observations
We are targeting 1,800 Session 2 completes. All ASU Honors College undergraduate students will be invited to participate; batch emails will then go to subsets of non-honors ASU undergraduate students until we hit the target sample size. We will aim for 1,900-2,000 Session 1 completes. Because data collection is spread over several weeks, we will adjust the Session 1 target based on the attrition rate between Sessions 1 and 2 for the initial set of students. We are submitting this pre-registration after launching the first batches of Session 1 but prior to looking at any data.
Sample size (or number of clusters) by treatment arms
18% of this sample will be randomized to a control group that receives no feedback. The rest of the sample will be randomized uniformly over the treatment conditions (Easy vs. Hard Round 1 quizzes; immediate vs. no immediate belief elicitation after feedback); a sketch of this assignment follows below.

For our analysis, we plan to use the control group to verify, within that group, that there are no time trends in our data independent of feedback; the control group will be withheld from our main analysis.
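
A minimal sketch, in plain Python rather than the study's actual oTree code, of the assignment just described: 18% to the no-feedback control, the remainder uniform over the 2 × 2 treatment cells. Whether control participants are also split by quiz difficulty is not stated here, so this sketch assigns difficulty only within the treated arm:

```python
import random

# Treatment cells: Round 1 quiz difficulty x timing of post-feedback
# belief elicitation. Labels are hypothetical.
TREATMENT_CELLS = [
    ("easy", "immediate"), ("easy", "no_immediate"),
    ("hard", "immediate"), ("hard", "no_immediate"),
]

def assign_condition(rng: random.Random) -> dict:
    """Assign one participant: 18% control, rest uniform over the cells."""
    if rng.random() < 0.18:
        return {"arm": "control", "difficulty": None, "elicitation": None}
    difficulty, elicitation = rng.choice(TREATMENT_CELLS)
    return {"arm": "treatment", "difficulty": difficulty,
            "elicitation": elicitation}

# Illustration over the planned 1,800 Session 2 completes:
rng = random.Random(2020)  # seed chosen for reproducible illustration only
assignments = [assign_condition(rng) for _ in range(1800)]
```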
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Arizona State University IRB
IRB Approval Date
2020-03-30
IRB Approval Number
STUDY00010684
IRB Name
Harvard Business School IRB
IRB Approval Date
2019-10-09
IRB Approval Number
IRB19-1678

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials