Inference about Ability

Last registered on July 12, 2022

Pre-Trial

Trial Information

General Information

Title
Inference about Ability
RCT ID
AEARCTR-0009724
Initial registration date
July 10, 2022

First published
July 12, 2022, 1:54 PM EDT

Locations

Region

Primary Investigator

Name
Kim Sarnoff
Affiliation
Princeton University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-07-11
End date
2022-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Evaluators tasked with learning about others typically receive signals dynamically about a menu of individuals who come from a variety of social groups. The proposed study aims to provide experimental evidence on distortions in belief updating that may occur in such settings. At the heart of the experiment is a statistical evaluation task. Experimental workers are drawn from a given ability distribution. Experimental employers then receive information on worker ability incrementally and report their updates. Treatments consider workers evaluated in isolation and sets of workers — drawn from the same or different distributions — assessed in tandem.
External Link(s)

Registration Citation

Citation
Sarnoff, Kim. 2022. "Inference about Ability." AEA RCT Registry. July 12. https://doi.org/10.1257/rct.9724-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-07-11
Intervention End Date
2022-10-31

Primary Outcomes

Primary Outcomes (end points)
Posterior beliefs
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants receive sequential information about the performance of experimental "workers" on a quiz and report their updates. Treatments consider workers evaluated in isolation and sets of workers — drawn from the same or different distributions — assessed in tandem.
Experimental Design Details
I hired participants on Prolific to answer ten math questions from the Armed Services Vocational Aptitude Battery (ASVAB). Each of these experimental workers received a score out of 10, which serves as my ability measure. I then created three groups of workers, each uniformly distributed over four possible scores: one reference group, a mean shift of this group, and a mean-preserving spread of this group.
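As a minimal sketch of this construction (in Python), assuming hypothetical supports, since the registration does not disclose the actual four scores in each group:

    import statistics

    # Hypothetical supports; the actual scores in each group are not
    # disclosed in this registration.
    reference  = [4, 5, 6, 7]                # uniform over four scores
    mean_shift = [s + 1 for s in reference]  # same shape, mean raised by 1
    mps        = [3, 4, 7, 8]                # same mean, wider spread

    for name, group in [("reference", reference),
                        ("mean shift", mean_shift),
                        ("mean-preserving spread", mps)]:
        print(name, statistics.mean(group), statistics.pvariance(group))

With these supports, the shift group moves the mean from 5.5 to 6.5 at the same variance (1.25), while the spread group keeps the mean at 5.5 and raises the variance to 4.25.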

In this experiment, experimental "employers" are tasked with learning about the pool of experimental "workers" just described. Employers are randomized to one of three environments in a between-subjects design:

Treatment 1: One worker
In the baseline treatment, IND, I randomize the employer to one of the three groups of workers and show them the distribution of scores in their group: this gives the employer an accurate prior about the score of each worker they evaluate. Then a worker is drawn at random. For this worker, the employer views three random draws with replacement from the worker’s quiz. For each draw, the employer (1) learns whether the drawn question was answered correctly or incorrectly and (2) reports a full posterior distribution over the worker’s score.
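Because each draw samples one of the ten quiz questions with replacement, a worker with score s answers the drawn question correctly with probability s/10, so the Bayesian benchmark posterior follows directly. A minimal sketch, under the same hypothetical prior as above:

    from fractions import Fraction

    def update(prior, history):
        """Bayesian posterior over a worker's score (out of 10) after a
        sequence of correct/incorrect draws with replacement, where
        P(correct | score s) = s / 10."""
        post = dict(prior)
        for correct in history:
            for s in post:
                p = Fraction(s, 10)
                post[s] *= p if correct else 1 - p
            total = sum(post.values())
            post = {s: w / total for s, w in post.items()}
        return post

    # Hypothetical prior: uniform over scores {4, 5, 6, 7}.
    prior = {s: Fraction(1, 4) for s in (4, 5, 6, 7)}
    print(update(prior, [True, True, False]))  # after correct, correct, incorrect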

Treatment 2: Two workers from same group
Next, I ask whether receiving information about multiple workers simultaneously, a natural feature of many evaluation environments, distorts inference. In a second treatment, COMP-SAME, the employer is again randomly assigned to a group of workers and sees the distribution of scores in the group. But, rather than one worker, two workers are drawn with replacement from the group. The employer completes three rounds of evaluation for both workers simultaneously.

Treatment 3: Two workers from different groups
Finally, I ask how well people can execute a simple abstract statistical discrimination problem. In a third treatment, COMP-DIFF, the employer is randomly assigned to a pair of worker groups that contrast: one group is a mean shift or a mean-preserving spread of the other. One worker is drawn from each group, and the employer completes three rounds of evaluation for both workers simultaneously.
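To make the Bayesian benchmark for this treatment concrete: under statistical discrimination, identical signal histories should yield different posteriors for workers from different groups, because the group-specific priors differ. A sketch under the same hypothetical supports as above (normalization is deferred to the end, which is equivalent):

    from fractions import Fraction

    def posterior_mean(prior, history):
        # Unnormalized Bayesian weights; P(correct | score s) = s / 10.
        weights = dict(prior)
        for correct in history:
            for s in weights:
                p = Fraction(s, 10)
                weights[s] *= p if correct else 1 - p
        return sum(s * w for s, w in weights.items()) / sum(weights.values())

    # Hypothetical contrasting pair: reference group vs. its mean shift.
    ref   = {s: Fraction(1, 4) for s in (4, 5, 6, 7)}
    shift = {s: Fraction(1, 4) for s in (5, 6, 7, 8)}
    history = [True, False, True]  # identical signals for both workers
    print(float(posterior_mean(ref, history)),    # ~5.67
          float(posterior_mean(shift, history)))  # ~6.51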

The set of signal triplets, or paired signal triplets, an employer observes in a given treatment is fixed in advance. The triplets are generated according to their expected frequency given the prior. For the IND and COMP-SAME treatments, there is one signal set per prior. For the COMP-DIFF treatment, there are two sets of signals per pair of groups. This gives 10 sets in total: three for IND, three for COMP-SAME, and four for COMP-DIFF (two contrasting pairs, two signal sets each).
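One way to read "generated according to their expected frequency" is that, in a signal set of N triplets, each possible correct/incorrect triplet appears roughly N times its probability under the prior. A sketch of those probabilities, again under the hypothetical reference prior:

    from fractions import Fraction
    from itertools import product
    from math import prod

    def triplet_probs(prior):
        """Probability of each ordered correct/incorrect triplet, with
        P(correct | score s) = s / 10 and draws with replacement."""
        return {
            trip: sum(
                w * prod(Fraction(s, 10) if c else 1 - Fraction(s, 10)
                         for c in trip)
                for s, w in prior.items()
            )
            for trip in product((True, False), repeat=3)
        }

    prior = {s: Fraction(1, 4) for s in (4, 5, 6, 7)}
    for trip, p in triplet_probs(prior).items():
        print(trip, float(p))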

In the IND treatment, half of the posterior reports are chosen at random to count for payment. In the COMP-SAME and COMP-DIFF treatments, in each round, the posterior report for one worker in the pair is chosen at random to count for payment. Payment is calculated using a binarized scoring rule.
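The registration does not spell out the rule's parameters. As a minimal sketch, one standard binarized scoring rule (Hossain and Okui 2013) pays a fixed prize with probability one minus a quadratic loss between the reported distribution and the realized score:

    import random

    def bsr_payment(report, realized_score, prize=1.0):
        """Pay the prize with probability 1 - loss, where loss is the
        quadratic distance between the reported distribution and the
        degenerate distribution on the realized score, scaled to [0, 1]."""
        loss = sum((p - (1.0 if s == realized_score else 0.0)) ** 2
                   for s, p in report.items()) / 2.0
        return prize if random.random() < 1.0 - loss else 0.0

    # Hypothetical posterior report; truthful reporting maximizes the
    # chance of winning the prize regardless of risk preferences.
    report = {4: 0.1, 5: 0.2, 6: 0.4, 7: 0.3}
    print(bsr_payment(report, realized_score=6))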
Randomization Method
Randomization done by computer.
Randomization Unit
An experimental session is randomly assigned to one of the 10 signal sets. Within a session, an individual is randomly assigned to view the workers or worker pairs in one of two orders.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
There are 10 sets of signals across all treatments. For each set of signals, there will be at least 20 observations, i.e., at least 200 participants in total.
Sample size (or number of clusters) by treatment arms
At least 20 observations per set of signals.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Princeton University IRB
IRB Approval Date
2022-01-03
IRB Approval Number
14084

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials