Alphabetical Order and Gender Discrimination

Last registered on August 28, 2020

Pre-Trial

Trial Information

General Information

Title
Alphabetical Order and Gender Discrimination
RCT ID
AEARCTR-0006372
Initial registration date
August 28, 2020


First published
August 28, 2020, 9:48 AM EDT


Locations

Region

Primary Investigator

Affiliation
University of Oslo

Other Primary Investigator(s)

PI Affiliation
University of Oslo
PI Affiliation
University of Oslo

Additional Trial Information

Status
In development
Start date
2020-09-07
End date
2020-12-15
Secondary IDs
Abstract

If the abilities of men and women are imperfectly observable, statistical gender discrimination in hiring may occur. In our experiment, we randomly match individuals (candidates) who have performed mathematical quizzes into teams of two and subsequently ask new subjects to evaluate a subset of the candidates’ individual performance based on information about pair performance. We vary the informativeness of an individual’s mathematical performance by ordering pair members either according to their score (First Author treatment) or alphabetically (Alphabetical treatment). In this plan we describe the hypotheses to be tested, the coding of variables, and the empirical strategy that will be used.
External Link(s)

Registration Citation

Citation
Brekke, Kjell Arne, Karine Nyborg, and Vegard Sjurseike Wiborg. 2020. "Alphabetical Order and Gender Discrimination." AEA RCT Registry. August 28. https://doi.org/10.1257/rct.6372-1.0
Experimental Details

Interventions

Intervention(s)
In our experiment, we randomly match individuals (candidates) who have performed mathematical quizzes into teams of two and subsequently ask new subjects to evaluate a subset of the candidates’ individual performance based on information about pair performance. Our intervention elicits how the informativeness of the ordering of pair members affects the chances that women are chosen by subjects in our experiment. We vary the informativeness of an individual’s mathematical performance by ordering pair members either according to their score (First Author treatment) or alphabetically (Alphabetical treatment).
Intervention Start Date
2020-09-07
Intervention End Date
2020-12-15

Primary Outcomes

Primary Outcomes (end points)
Chosen - Whether a candidate is chosen by a subject
Sharefemale - Share of chosen candidates, by a subject, that are female
Primary Outcomes (explanation)
Sharefemale - the number of female candidates chosen by a subject divided by the total number of candidates chosen.
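The coding of the two primary outcomes can be expressed as follows (a minimal sketch; function and variable names are ours, not the authors'):

```python
def share_female(chosen_genders):
    """Sharefemale: share of a subject's chosen candidates that are female.

    `chosen_genders` is a list like ["F", "M", "F", ...] recording the
    gender of the candidate picked in each of the subject's choices.
    """
    if not chosen_genders:
        return None  # undefined if the subject made no choices
    return chosen_genders.count("F") / len(chosen_genders)

# A subject choosing 3 women across 8 decisions has Sharefemale = 0.375
example = share_female(["F", "M", "F", "F", "M", "M", "M", "M"])
```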

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In this experiment, Experiment 2, subjects are asked to evaluate the mathematical performance of a second group of individuals, henceforth called candidates, who participated in Experiment 1. Subjects will only observe the joint score of pairs of candidates from Experiment 1. Hence, the individual performance of each candidate must be inferred from the joint score of a pair. Based on this inference, subjects are incentivised to choose the best performer.

Our intervention elicits how the informativeness of the ordering of pair members affects the chances that women are chosen by subjects in our experiment. We vary the informativeness of an individual’s mathematical performance by ordering pair members either according to their score (First Author treatment) or alphabetically according to chosen nicknames (Alphabetical treatment).
Experimental Design Details
Experimental Design:
In this experiment, Experiment 2, subjects are asked to evaluate the mathematical performance of a second group of individuals, henceforth called candidates, who participated in Experiment 1. Subjects will only observe the joint score of pairs of candidates from Experiment 1. Hence, the individual performance of each candidate must be inferred from the joint score of a pair.

We start by describing the procedures of Experiment 1, which will explain what exercises the candidates performed and how we matched them into pairs. We then proceed to explain how information about the performance of candidates in Experiment 1 will be presented to and evaluated by the subjects in Experiment 2.

Finding candidates and constructing pairs:
Experiment 1 was conducted in the spring and fall of 2017. In the experiment, candidates conducted a series of mathematical quizzes (Wiborg, Brekke & Nyborg, 2020). In each quiz, candidates were asked to answer as many mathematical exercises as possible in 60 seconds. The exercises were variations of adding and subtracting two and three digit numbers. We use data from five of these quizzes to generate the information observable to the subjects in Experiment 2. The candidates’ performance on a sixth quiz will be used to incentivise subjects in Experiment 2.

To generate pairs of candidates, we randomly match candidates into unique pairs in each quiz; out of, for instance, 60 candidates there would be 30 unique pairs per quiz. Hence, each candidate has five partners in total.
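The random matching described above can be sketched as follows. This is a hypothetical illustration (the seed and matching software are ours); it draws pairings independently per quiz and does not enforce that a candidate's five partners are all distinct, since the plan does not specify whether repeat partners were excluded:

```python
import random

def match_into_pairs(candidates, n_quizzes=5, seed=0):
    """Randomly re-pair an even-sized list of candidates in each quiz.

    Returns a list with one pairing per quiz; each pairing is a list
    of (candidate, candidate) tuples covering every candidate once.
    """
    rng = random.Random(seed)
    pairings = []
    for _ in range(n_quizzes):
        shuffled = candidates[:]
        rng.shuffle(shuffled)
        # Adjacent elements of the shuffled list form a pair.
        pairs = [(shuffled[i], shuffled[i + 1])
                 for i in range(0, len(shuffled), 2)]
        pairings.append(pairs)
    return pairings

# 60 candidates -> 30 pairs in each of the five quizzes
pairings = match_into_pairs(list(range(60)))
```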

Prior to performing the math quizzes in Experiment 1, candidates were asked to provide a nickname for anonymisation purposes. To preserve information about gender, we asked candidates the following question: "Imagine that you were to have a different first name. What name would you prefer to have?" In Experiment 2 we convey information about the gender of the candidates by using these nicknames.

For the purposes of Experiment 2, we only use data on the 42 candidates (out of 148) with American-sounding nicknames that, according to the Social Security database and babynames.com, convey one gender. Four of the names are close varieties of common American names, such as Viktoria, and are included to obtain a sufficiently large pool of candidates. Hence, the 42 candidates participate in one of 21 unique pairs in each of the five quizzes. Only one of the 42 candidates had a name that did not correspond to their self-reported sex.

Design of online experiment:
Subjects in Experiment 2 will be given incentives to evaluate the mathematical performance of candidates in Experiment 1. We inform the subjects about the nature of the five mathematical quizzes, the random matching of candidates and that they will only observe the joint score of candidates in a pair, the combined number of correct answers in a pair on a given quiz.

The evaluation of candidates’ performance will be conducted by having the subjects pick one candidate out of four, based on the information about the joint scores of pairs in the five preceding quizzes. Subjects receive cash for each correct answer their chosen candidate provided individually on a sixth quiz. We ask subjects to make eight such choices sequentially. Hence, out of the 42 candidates, we present subjects with information about 32 (= 4 × 8) candidates in total. All subjects are presented with the same 32 candidates, although the order of presentation and the amount of information vary across treatments. To convey information about the gender of each candidate, we rely on the signalling effect of the nicknames chosen by candidates in Experiment 1. The procedure leading to these nicknames, described above, will be thoroughly explained to the subjects.

Treatments:
To study the effect of the informativeness of within-pair name ordering on the selection of female candidates, we vary whether subjects observe tables in which pairs are ordered alphabetically or according to individual performance. In the Alphabetical treatment, names in a pair are ordered alphabetically in the first four tables presented to the subjects. In the last four tables, names are ordered according to the number of correct answers of each pair member, listing first the one who obtained the highest number of correct answers. The First Author treatment is the exact opposite, ordering pair members according to score in the first four tables and alphabetically in the last four. Subjects are not informed about the ordering of pairs until they are presented with each type of table. Four of the eight sets of candidates also contain different numbers of men and women, to reduce the chances of subjects realising that the study is concerned with gender.

Apart from the ordering of pair members, the first four tables are the same across treatments, as are the last four tables. To reduce the potential impact of the ordering of tables - i.e. the effect of observing one set of four candidates before another rather than after it - we vary the order of the first four and last four tables across treatments.

The following information is provided to the subjects prior to each decision depending on whether the order is alphabetical or first author:

Alphabetical order: For each pair the names are ordered alphabetically.

First Author order: For each pair the names are ordered according to score so that the one with the highest score is listed first. If both have equal scores, the computer randomly draws the order.

Subsequent to making these eight decisions, we ask subjects an incentivised question regarding the performance of male and female candidates, to elicit their beliefs about gender differences in mathematical ability. Specifically, we tell subjects that the 32 candidates they have encountered are a subset of the 148 candidates who participated in Experiment 1. We then inform them about the average combined individual score on the five quizzes in the whole sample, and ask them to guess the difference in average score between men and women in the whole sample. Subjects first indicate who they think did best, then guess the difference in score, and are compensated if their answer lies within ±2 of the correct answer. The average in the whole sample is 39.64865, and men score on average 2.93086 points better than women. In addition to this question, we ask several questions regarding the evaluation of individual contributions to joint work in order to draw attention away from the issue of gender.
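The payoff rule for the belief-elicitation question above (compensation if the guess falls within ±2 of the true gender gap) can be sketched as follows; the function name is ours, and the constant comes from the whole-sample figures stated in the plan:

```python
TRUE_GAP = 2.93086  # men's average score minus women's, whole sample

def guess_is_paid(guessed_gap):
    """A subject's guess is compensated if it lies within +/-2
    of the true male-female difference in average score."""
    return abs(guessed_gap - TRUE_GAP) <= 2

# Example: a guess of 4 falls inside [0.93086, 4.93086] and pays out,
# while a guess of 0.5 does not.
```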



Randomization Method
Randomisation by survey software
Randomization Unit
Individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
900 individuals
Sample size: planned number of observations
900 individuals
Sample size (or number of clusters) by treatment arms
Subjects are randomly assigned to the Alphabetical and First Author treatment. In expectation, 450 individuals in each treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We can detect a minimum difference of 4-6 percentage points between men and women in the probability of being chosen, given a significance level of 0.05 and power of 0.83. Power simulations are also uploaded to the registry.
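Power simulations of the kind referenced above could, for instance, take the following form. This is a simplified sketch under our own assumptions (independent binary choices, a two-proportion z-test, illustrative base rates); the simulations uploaded to the registry may differ:

```python
import random

def simulate_power(p_men, p_women, n_per_group, n_sims=2000,
                   seed=1):
    """Share of simulated samples in which a two-sided two-proportion
    z-test at the 5% level rejects equality of choice probabilities."""
    rng = random.Random(seed)
    z_crit = 1.959963984540054  # two-sided 5% critical value
    rejections = 0
    for _ in range(n_sims):
        men = sum(rng.random() < p_men for _ in range(n_per_group))
        women = sum(rng.random() < p_women for _ in range(n_per_group))
        p1, p2 = men / n_per_group, women / n_per_group
        pooled = (men + women) / (2 * n_per_group)
        se = (2 * pooled * (1 - pooled) / n_per_group) ** 0.5
        if se > 0 and abs(p1 - p2) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# e.g. power to detect a 5-percentage-point gap around a 25% base rate
power = simulate_power(0.25, 0.20, n_per_group=1800)
```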
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials