Mental Models, Social Learning and Statistical Discrimination

Last registered on May 21, 2024


Trial Information

General Information

Title
Mental Models, Social Learning and Statistical Discrimination
RCT ID
AEARCTR-0013607
Initial registration date
May 20, 2024


First published
May 21, 2024, 11:37 AM EDT


Locations


Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-05-20
End date
2024-12-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
I use a laboratory experiment to study how individuals, in the role of an employer, make hiring decisions based on a worker's group identity and an educational signal. The goal is to study how the dynamics of decision-making change as I exogenously vary the degree of exposure to others' choices.
External Link(s)

Registration Citation

Citation
Batmanov, Alisher. 2024. "Mental Models, Social Learning and Statistical Discrimination." AEA RCT Registry. May 21. https://doi.org/10.1257/rct.13607-1.0
Experimental Details

Interventions

Intervention(s)
I exogenously vary the degree of exposure to others' decisions and measure the optimality of choices by their consistency with the Bayesian benchmark.
Intervention Start Date
2024-05-20
Intervention End Date
2024-12-01

Primary Outcomes

Primary Outcomes (end points)
Each round, subjects make a binary choice between hiring a green worker and hiring an orange worker. In each round, one option is optimal (calculated using Bayes' rule); we call this optimal option "Bayesian". The primary outcome is binary: the subject makes either a Bayesian or a non-Bayesian decision (i.e., = 1 if Bayesian). This outcome will be used to determine both how decision-making changes from round to round for each subject and how the overall dynamics of Bayesian decision-making differ across treatment groups.
Primary Outcomes (explanation)
Participants face a binary choice in each round, and each decision is either consistent with the Bayesian benchmark or not. We will therefore identify, round by round, whether each decision is consistent with the benchmark, and use this indicator when aggregating across rounds and/or treatment groups.

Secondary Outcomes

Secondary Outcomes (end points)
(1) I will elicit subjects' confidence in their own choices in pre-set rounds throughout the experiment. Participants are asked to bet on their own choices by selecting a number of points out of 50, which is then multiplied by a factor (> 1) if their decision turns out to be payoff-maximizing, or by 0 if their decision is sub-optimal ex post. The final number of points is then used to determine the chance of obtaining the bonus payment. Therefore, if a subject is more certain that their decision in a given round is optimal, betting more points is associated with a higher chance of winning the bonus.

(2) The time participants spend on each question (duration) is another dimension we will investigate as a secondary outcome.

(3) At the end of the experiment, all subjects in the Baseline group will be asked to provide incentivized advice to another participant in the experiment. Subjects in all other treatment groups will be encouraged to answer open-ended questions about the strategies they used to make their choices in each round and about how others' choices/feedback influenced their decisions.
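The confidence-bet mechanics in (1) can be sketched as follows. The multiplier value of 2.0 is hypothetical, since the registration states only that the factor exceeds 1:

```python
# Minimal sketch of the confidence bet in secondary outcome (1).
# The multiplier k = 2.0 is an illustrative assumption; the registration
# specifies only that it is greater than 1.

def bet_points(bet, correct, k=2.0):
    """Points earned from betting `bet` (0-50) on one's own choice:
    bet * k if the choice is payoff-maximizing ex post, 0 otherwise."""
    if not 0 <= bet <= 50:
        raise ValueError("bet must be between 0 and 50 points")
    return bet * k if correct else 0.0

# A subject who bets 40 points ends up with 80 points if the choice was
# optimal and 0 points otherwise; the final points then determine the
# chance of receiving the bonus payment.
win = bet_points(40, True)     # 80.0
lose = bet_points(40, False)   # 0.0
```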
Secondary Outcomes (explanation)
The time spent on each question will be measured and analyzed through the built-in timing function in Qualtrics.

Experimental Design

Experimental Design
In each round, there will be a pool of 4 green workers and 4 orange workers, with the ability level of each worker in the pools displayed. One worker from each pool will be chosen at random, with the chosen worker's ability level hidden. High-ability workers are educated with certainty (ph = 1); low-ability workers are never educated (pl = 0); medium-ability workers are educated with a 90% or 10% chance depending on the round (pm = 0.9 or 0.1). Participants will be given the education status of the selected worker from each pool and will then face a binary choice between hiring the green worker or the orange worker. Participants will be incentivized to choose the worker with the higher ability, based on education status.
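The Bayesian benchmark for this design can be sketched as below. The pool compositions and numeric ability values are hypothetical, chosen only to illustrate the posterior calculation; the registration does not report the actual pools:

```python
# Sketch of the Bayesian benchmark: posterior expected ability of a worker
# drawn uniformly from a pool, given an observed education signal.
# Pool compositions and ability values below are illustrative assumptions.

def edu_prob(ability, p_m):
    """P(educated | ability): ph = 1 for high, pl = 0 for low, pm for medium."""
    return {"high": 1.0, "low": 0.0, "medium": p_m}[ability]

def posterior_expected_ability(pool, educated, p_m, values):
    """Posterior mean ability of a worker drawn uniformly from `pool`,
    given whether the drawn worker is observed to be educated."""
    prior = 1.0 / len(pool)
    likelihoods = [edu_prob(a, p_m) if educated else 1.0 - edu_prob(a, p_m)
                   for a in pool]
    norm = sum(prior * lk for lk in likelihoods)
    return sum(prior * lk * values[a]
               for a, lk in zip(pool, likelihoods)) / norm

# Hypothetical round: each pool has 4 workers, pm = 0.9.
values = {"high": 3, "medium": 2, "low": 1}   # illustrative ability levels
green = ["high", "medium", "medium", "low"]
orange = ["high", "high", "low", "low"]

# Suppose both selected workers are observed to be educated.
eg = posterior_expected_ability(green, True, 0.9, values)   # ~2.36
eo = posterior_expected_ability(orange, True, 0.9, values)  # 3.0
bayesian_choice = "green" if eg > eo else "orange"          # "orange"
```

Here the education signal is more informative for the orange pool (only high-ability workers are educated there), so hiring the orange worker is the Bayesian choice in this hypothetical round.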

I will randomly assign participants into three main treatment groups: Baseline (B), Social Bayesian (SB), and Social Non-Bayesian (SNB). Baseline subjects receive simple feedback (the ability level of the worker they hired) after each round. SB subjects additionally receive information about the choice of another study participant, selected so that their actions are very close to Bayesian; SNB subjects instead observe the choice of a participant whose actions are consistent with conservatism. The main part of the experiment consists of 120 rounds in 5 parts. Besides the same 5 parts as the other treatments, the Baseline group has a part 6 in which participants are incentivized to write advice, which might be shown to participants in other treatments. It is important to note that the additional information for the SB and SNB groups is given before the hiring decision is made, and subjects do not know whether the choices they observe are Bayesian or not; they only know that it is the choice of another participant (fixed across rounds). Before the last 20 rounds, subjects in the SB and SNB treatment groups will additionally read a recommendation from a Bayesian or non-Bayesian subject, respectively, together with the written explanation for the given advice.

It is possible that exposure to others' choices has no influence on subjects' decisions. This could be because subjects do not recognize the optimality of the choices others have made, or because, even though they do recognize it, they simply do not want to replicate others' decisions. To investigate these underlying mechanisms, I might run two additional treatment groups: Social Bayesian Feedback (SBF) and Social Non-Bayesian Feedback (SNBF). These two groups are given the same information as the SB and SNB groups, respectively, but they additionally receive feedback on the ability of the worker they did not hire.
Experimental Design Details
Not available
Randomization Method
Participants are randomly assigned to treatments based on the experimental session they are part of; each session corresponds to a different Qualtrics survey.
Randomization Unit
Randomization is carried out at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The design is not clustered: one cluster is one participant. I will recruit 200-250 subjects in total and will analyze the data with standard errors clustered at the individual level.
Sample size: planned number of observations
200-250 subjects
Sample size (or number of clusters) by treatment arms
There are three main treatment groups in the experiment: 50-60 subjects in the Baseline condition [B], 50-60 subjects in the Social Bayesian condition [SB], and 50-60 subjects in the Social Non-Bayesian condition [SNB].

To explore mechanisms, there are two potential additional treatments: 50-60 subjects in the Social Bayesian Feedback condition [SBF] and 50-60 subjects in the Social Non-Bayesian Feedback condition [SNBF].
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University of California, San Diego (UCSD)
IRB Approval Date
2023-12-05
IRB Approval Number
809339