The Impact of Diversity Training on Recruiters Hiring Behaviors

Last registered on December 20, 2019

Pre-Trial

Trial Information

General Information

Title
The Impact of Diversity Training on Recruiters Hiring Behaviors
RCT ID
AEARCTR-0005022
Initial registration date
December 19, 2019

First published
December 20, 2019, 11:16 AM EST

Locations

Region

Primary Investigator

Affiliation
Institut des Politiques Publiques

Other Primary Investigator(s)

PI Affiliation
Université Paris 1 Panthéon Sorbonne (CES), Paris School of Economics (PSE), Institut des Politiques Publiques
PI Affiliation
Université Paris 1 Panthéon Sorbonne (Centre d’Economie de la Sorbonne) & Laboratoire interdisciplinaire d'évaluation des politiques publiques (LIEPP) de Sciences Po
PI Affiliation
Sciences Po (OSC) & Laboratoire interdisciplinaire d'évaluation des politiques publiques (LIEPP) de Sciences Po
PI Affiliation
University of Warwick & Institut des Politiques Publiques
PI Affiliation
Centre National de la Recherche Scientifique & Institut des Politiques Publiques

Additional Trial Information

Status
In development
Start date
2019-07-01
End date
2021-06-01
Secondary IDs
Abstract
Several countries have sought to promote initiatives that combat discrimination in employment. These efforts led to the rise of diversity management in the United States in the 1980s and, more recently in France, to the signing of a diversity charter followed by the promotion of a diversity label that companies can obtain if they meet certain criteria in their recruitment and human resources practices. These measures have led to the widespread implementation of so-called diversity trainings, which seek to directly affect the often unconscious biases of employers and HR managers in order to reduce the likelihood of discrimination in the hiring process. Some argue, however, that these interventions are largely ineffective, or that they even tend to activate stereotypes and increase discrimination. There has been no systematic analysis, based on a randomized trial, that would allow a scientific evaluation of these trainings. With our experimental approach, we therefore aim to evaluate training programs that are as representative as possible of those currently available on the market, in order to shed light on their effectiveness with respect to recruiters' attitudes and behaviours.
External Link(s)

Registration Citation

Citation
BREDA, Thomas et al. 2019. "The Impact of Diversity Training on Recruiters Hiring Behaviors." AEA RCT Registry. December 20. https://doi.org/10.1257/rct.5022-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.

Experimental Details

Interventions

Intervention(s)
We address our research questions using a correspondence study paired with training interventions delivered to the managers and heads of HR in the participating firms.
Intervention Start Date
2020-01-06
Intervention End Date
2020-12-31

Primary Outcomes

Primary Outcomes (end points)
(a) the share of individuals belonging to the minority groups of interest (women, individuals with a different ethnic background) among hires, M months following the intervention;

(b) ex-ante vs. ex-post comparison of [1] whether the application received a callback, i.e. a dummy variable equal to one if the application received a callback and zero otherwise, and [2] whether the application received an invitation to a job interview, i.e. a dummy variable equal to one if the application received an invitation and zero otherwise
Primary Outcomes (explanation)
A callback is defined as a positive, personalized phone or e-mail contact by a potential employer. This is usually a request for an interview, but employers also contact applicants to request additional documents/information or to ask the applicant to call back.

An invitation is defined as a personalized phone or e-mail contact in which the potential employer expresses interest in conducting an interview.
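
As an illustration of how these two dummies could be coded from the correspondence-test contact log, the sketch below uses pandas; the file name, column names and response categories are hypothetical and would need to be adapted to the actual data collection protocol.

```python
import pandas as pd

# Hypothetical contact log from the correspondence test: one row per
# fictitious application, with the type of employer response recorded.
# Assumed columns: application_id, response_type.
contacts = pd.read_csv("employer_responses.csv")

# Callback: any positive, personalized contact (interview request, request
# for additional documents/information, or request to call back).
positive_contacts = ["interview_request", "documents_requested", "call_back_requested"]
contacts["callback"] = contacts["response_type"].isin(positive_contacts).astype(int)

# Invitation: only contacts in which the employer expresses interest in
# conducting an interview.
contacts["invitation"] = (contacts["response_type"] == "interview_request").astype(int)
```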

Secondary Outcomes

Secondary Outcomes (end points)
Implicit Association Test (IAT) scores
Secondary Outcomes (explanation)
The IAT measures the strength of associations between concepts (e.g., black people, gay people, women) and evaluations (e.g., good, bad) or stereotypes (e.g., athletic, clumsy). The main idea is that responding is easier when closely related items share the same response key. For example, we would say that someone has an implicit preference for French people relative to North African people if they complete the task faster when "French people + Good" and "North African people + Bad" share a response key than when "North African people + Good" and "French people + Bad" do.
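
The registration does not specify the scoring rule. A common choice in the literature is a simplified D-score, i.e. the difference in mean response latencies between the incongruent and the congruent block divided by the pooled standard deviation of all trials; the sketch below assumes latencies are available in milliseconds for the two critical blocks.

```python
import numpy as np

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT D-score: mean latency in the incongruent block minus
    mean latency in the congruent block, divided by the pooled standard
    deviation of all trials. Positive values indicate faster responses in
    the congruent pairing, i.e. an implicit preference in line with it."""
    congruent = np.asarray(congruent_ms, dtype=float)
    incongruent = np.asarray(incongruent_ms, dtype=float)
    pooled_sd = np.std(np.concatenate([congruent, incongruent]), ddof=1)
    return (incongruent.mean() - congruent.mean()) / pooled_sd

# Example with made-up latencies (ms): faster in the congruent block.
print(iat_d_score([650, 700, 720, 680], [800, 840, 790, 810]))
```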

Experimental Design

Experimental Design
The experiment/impact assessment of the training interventions is based on an ex-ante and ex-post correspondence study with a matched-pair design. We randomly assign markets (firms) to (1) a treatment group that will be offered a face-to-face training, (2) a treatment group that will be offered an intensive e-learning training, and (3) a control group that will complete a minimal e-learning (placebo) training.
Experimental Design Details
(a) Ex-ante measurement of discrimination: before the training starts, we will test the discriminatory behaviour of participating firms by measuring their hiring practices using a correspondence test design.

(b) Implementation of the training: we will randomly divide the participating firms into three groups, corresponding to the three interventions, stratifying on their size (number of employees), location, and share of minority employees. We thus randomly assign firms to a treatment group that will be offered the face-to-face training, a treatment group that will receive the intensive e-learning training (also called active training, in the sense that 80% of the time is spent on online exercises), and a control group that will complete a minimal e-learning (placebo) training. Placebo: in order to evaluate the effectiveness of training, the placebo intervention neutralizes "Hawthorne" or "John Henry" type effects, by which participants in experiments alter their behaviour when they know they are observed or know they are part of an experiment, regardless of the content of the intervention in question. This placebo intervention will be a very short e-learning course, containing only passive resources, whose content is calibrated to the standard applications used in the field. Following their respective training, participants will complete the above-mentioned IAT.

(c) Ex-post measurement of discrimination: after the training, we will test the participating firms' hiring behaviour again, using the same correspondence test design as before.

(d) Collection of administrative and internal company data: during this last step, we will match experimental and administrative data and build the database for our statistical analysis.

(e) Statistical analysis: we will first carry out a balance analysis to verify that the randomization went as planned and that the observable characteristics have the same distribution in each of the three groups randomly assigned to the treatment arms. The main analysis will then be the estimation of a linear model in which each outcome variable is regressed on indicators of participation in the different types of treatment (see the sketch below). The outcome variables tested are: (1) the share of individuals belonging to the minority groups of interest among hires, M months following the intervention (these variables are calculated using administrative data and data provided by the company's management); (2) callback/invitation rates following an application, i.e. the proportion of applications from the minority of interest that received an invitation to an interview (calculated from the correspondence testing data as described above); (3) the share of individuals with bias against minorities, as measured by the IAT. One advantage of our approach is that we are able to combine firm-wide measures (via administrative firm and testing data) and individual measures (via IAT data).
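
A minimal sketch of the registered linear model, assuming an application-level analysis file with illustrative column names (callback, face_to_face, intensive_elearning, strata, market_id); the minimal e-learning (placebo) arm is the omitted category, and the stratum fixed effects are an illustrative addition. Clustering standard errors at the market level is one natural option given that treatment is assigned at that level, but the registration does not specify the variance estimator.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per tested application, with the
# market-level treatment assignment and randomization stratum merged in.
df = pd.read_csv("analysis_sample.csv")

# Linear probability model of the callback dummy on treatment indicators,
# with stratum fixed effects and market-clustered standard errors.
model = smf.ols("callback ~ face_to_face + intensive_elearning + C(strata)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["market_id"]})
print(result.summary())
```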
Randomization Method
Randomization done in office by a computer
Randomization Unit
Randomization takes place (i) to create the CVs with which we apply ex ante and ex post to each job offer, and (ii) to assign the CVs to the job advertisements posted by the markets that take part in our trainings.
For the assignment of the training, we randomize at the firm (supermarket) level, stratifying on size (number of employees), location, the unemployment rate of the administrative zone, the format of the supermarket (big or not), and the distance to the nearest market (isolation criterion); a sketch of this stratified assignment is given below. The market is the lowest possible level of randomization for the intervention, as recruitment decisions are made at this level. The person who will benefit from the intervention is the manager of the selected market (as well as the human resources manager, if the market has one).
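
A minimal sketch of such a stratified assignment, assuming a market-level file with illustrative column names for stratification variables that have already been discretized into bins; note that the actual allocation ratios differ across arms (see the treatment-arm sizes below), whereas this sketch deals markets out evenly within each stratum.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2019)  # arbitrary seed, for reproducibility only

# Hypothetical market-level frame.
# Assumed columns: market_id, size_bin, region, unemp_bin, big_format, isolated.
markets = pd.read_csv("markets.csv")

arms = ["face_to_face", "intensive_elearning", "minimal_elearning"]
strata_cols = ["size_bin", "region", "unemp_bin", "big_format", "isolated"]

def assign_within_stratum(group: pd.DataFrame) -> pd.DataFrame:
    # Shuffle the markets of one stratum, then deal them out across the
    # three arms so that each stratum is (approximately) balanced.
    shuffled = group.sample(frac=1, random_state=rng)
    shuffled["arm"] = [arms[i % len(arms)] for i in range(len(shuffled))]
    return shuffled

assignment = markets.groupby(strata_cols, group_keys=False).apply(assign_within_stratum)
print(assignment["arm"].value_counts())
```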
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
566 supermarkets (6 of which are included in the pilot phase)
Sample size: planned number of observations
4633 managers (who will receive the treatment).
Sample size (or number of clusters) by treatment arms
- 127 markets assigned to face-to-face training
- 216 markets assigned to intensive e-learning
- 217 markets assigned to minimal e-learning
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our power calculations are based on a power of 80% and a significance level of 5%. We assume that the average success rate is 15%. In this context, choosing a size of 100 firms per treatment arm leads to a high minimum detectable effect. We therefore decided to set the size of the intensive and minimal e-learning groups to 400. In this case, the effect of the intensive e-learning training will be detectable if it increases the percentage of positive responses from 15% to 21%, compared to the placebo training. The effect of the face-to-face training will be detectable if it increases the percentage of positive responses from 15% to 24%.
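
As a rough illustration only, the sketch below computes per-arm sample sizes for a standard two-sample test of proportions with statsmodels; it ignores the clustering and matched-pair structure that the registered calculations account for, so its results will not match the figures reported above.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()

def n_per_arm(p_control, p_treated, power=0.80, alpha=0.05):
    """Per-arm sample size for a two-sided, two-sample test of proportions."""
    effect = proportion_effectsize(p_treated, p_control)
    return analysis.solve_power(effect_size=effect, power=power,
                                alpha=alpha, ratio=1.0, alternative="two-sided")

print(n_per_arm(0.15, 0.21))  # intensive e-learning vs. placebo (15% -> 21%)
print(n_per_arm(0.15, 0.24))  # face-to-face vs. placebo (15% -> 24%)
```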
IRB

Institutional Review Boards (IRBs)

IRB Name
PSE Institutional Review Board
IRB Approval Date
2019-12-18
IRB Approval Number
IRB approval has not yet been obtained (the date above is the submission date).

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials