Finding the best employees: A field experiment in hiring

Last registered on September 28, 2021

Pre-Trial

Trial Information

General Information

Title
Finding the best employees: A field experiment in hiring
RCT ID
AEARCTR-0008219
Initial registration date
September 22, 2021

First published
September 28, 2021, 2:16 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Paris Dauphine University

Other Primary Investigator(s)

PI Affiliation
WZB Berlin Social Science Center, Technische Universität Berlin
PI Affiliation
University of Lausanne, WZB Berlin Social Science Center

Additional Trial Information

Status
In development
Start date
2021-09-23
End date
2022-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Can AI outperform current HR practices in selecting employees for the position of credit salesman? What is the role of economic preferences versus psychological types in explaining the heterogeneity of the employees’ performance?

We will run a field experiment in a microfinance company in Kyrgyzstan. In the first stage, current employees of the firm will answer survey questions measuring a number of economic preferences and psychological traits. We will anonymously match the responses to the survey questions to the personnel data of the firm, including measures of the employees' productivity. We will train an AI algorithm to predict which employees perform best. The next stage will consist of applying the algorithm to study its usefulness for hiring decisions. Some employees will be hired following the existing procedure in place in the firm, others according to the recommendation of the AI algorithm, and we will compare the results of the two groups of employees in terms of productivity, portfolio size, and portfolio risk.

External Link(s)

Registration Citation

Citation
Dargnies, Marie-Pierre, Rustamdjan Hakimov and Dorothea Kübler. 2021. "Finding the best employees: A field experiment in hiring." AEA RCT Registry. September 28. https://doi.org/10.1257/rct.8219-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We will have the current employees of a microfinance company in Kyrgyzstan answer a questionnaire measuring their risk and time preferences, trust and trustworthiness, altruism, Big 5 personality traits, performance in the Cognitive Reflection Test (CRT) and Wonderlic Test, performance in Reading the Mind in the Eyes, and confidence. Some of the measures (risk and time preferences, for instance) are incentivized, while others are not. We will match the responses to the firm's personnel data (anonymized matching). These data include age, gender, education, family status, tenure, portfolio of loans, quality of the portfolio, and the bonus received in the last 12 months, which is a measure of productivity. We will train an AI algorithm to predict which employees perform best according to the rankings based on productivity, portfolio size, and the risk of the portfolio.
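The first-stage training step described above can be sketched as a standard supervised-learning task: predict from survey and personnel features whether an employee ranks among the best performers. The feature names, label construction, and model choice below are illustrative assumptions on synthetic data, not the study's actual specification.

```python
# Hypothetical sketch of the first-stage training step: predict whether a
# current employee ranks among the best performers. All features and labels
# here are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000  # roughly the number of current employees mentioned in the plan

# Synthetic stand-ins for survey measures and personnel data.
X = np.column_stack([
    rng.normal(size=n),            # e.g. risk-preference score
    rng.normal(size=n),            # e.g. CRT score
    rng.integers(0, 15, size=n),   # e.g. tenure in years
])
# Synthetic label: "top performer" under the productivity-based ranking.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(round(scores.mean(), 2))  # cross-validated AUC of the sketch model
```

Cross-validation is used here because the registration evaluates the algorithm out of sample (on employees not used for training) rather than in sample.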

The next stage will consist of applying the algorithm to study its usefulness for hiring decisions. All candidates who apply to the firm will participate in the survey, excluding the incentivized measures. As is the case now, without our experimental treatment, all candidates will undergo an interview with the central office and one of the regional managers. At this point, we will be able to see the extent to which the pools of candidates recommended by the managers and by the algorithm overlap. Then half of the candidates will be hired based on the existing procedure in the company, that is, according to the managers' recommendation. The other half of the candidates will be hired based on the AI prediction of the similarity of the candidates to the best-performing employees. We will evaluate the results based on the firm's record of the performance of the new employees, including their sales, the risk of their portfolio, and whether the employee was fired for underperformance or left the company. The field experiment will run for six months, with the possibility of prolonging it in case the number of new hires is too small to draw any conclusions. We aim for at least 100 new hires in each treatment.
Intervention Start Date
2021-12-01
Intervention End Date
2022-05-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcome of the experiment is the performance of the new employees. We are interested in whether the performance of new employees differs depending on whether they were hired using the firm's existing hiring procedure or the recommendation of the AI algorithm.

For the first stage, the main outcomes are the performance of the algorithm given (i) personnel data only, (ii) personnel data and non-incentivized survey measures, and (iii) personnel data together with non-incentivized and incentivized survey measures.


Portfolio size
Portfolio at risk 30 days or more
Portfolio free of risk
Sales

We are interested in outcomes 6 and 12 months after the employees start the job.

We train the algorithm on employees with a tenure of 12 months or more, because for these employees both the categorization by management and the objective performance measures are more informative.

We will first test the algorithm's predictive power on employees with less than 12 months of tenure at the time of the survey. We will predict the category for these employees and evaluate the predictions based on their performance 6 and 12 months after the start of the job.

Primary Outcomes (explanation)
We are primarily interested in the efficiency of hiring decisions using either the hiring procedure of the firm, or the recommendations of the AI algorithm that was trained to predict the quality of employees with the personnel data and the answers to the non-incentivized questions. We will judge the quality of the hiring decisions with respect to the employees’ portfolio, the portfolio at risk, the portfolio without delayed payments, the number of new loans, whether the employee qualified for a bonus, and whether the employee left the firm. We will take these measures six months and 12 months after the employees were hired.
We will also compare the recommendations of the algorithms with those of the managers with respect to the quality of their predictions regarding the employees when the algorithms use either (i) only personnel data, (ii) personnel data and the answers to the non-incentivized questions, and (iii) personnel data, answers to the non-incentivized questions, and the answers to the incentivized questions.
Finally, we plan to study the predictive power of the algorithms for the sample of employees whose current period of employment is less than 12 months. Their data are not used for training the algorithm. Instead, we will use this sample to predict the employees’ performance in six and 12 months after the start of their employment. We will also compare each of the outcomes of interest between two groups of current employees—those who would have been hired by the algorithms and those who would not have been hired.
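The treatment comparison described above (each outcome compared between the two groups of hires) can be illustrated with a two-sample test on synthetic data. The outcome values, group sizes, and the choice of a t-test are assumptions for this sketch only; the study's analysis plan is not public.

```python
# Illustrative comparison of one primary outcome (e.g. portfolio size after
# six months) between the two hiring arms, on synthetic data. The numbers
# and the choice of test are assumptions, not the registered analysis plan.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical outcome draws for the ~100 hires targeted in each arm.
manager_arm = rng.normal(loc=100.0, scale=15.0, size=100)
ai_arm = rng.normal(loc=108.0, scale=15.0, size=100)

t_stat, p_value = stats.ttest_ind(ai_arm, manager_arm)
print(round(p_value, 4))  # p-value of the two-sample comparison
```

The same comparison would be repeated for each outcome of interest (portfolio at risk, sales, bonus qualification, attrition) at the 6- and 12-month horizons.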


Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
All candidates for a salesman position in the firm will, as is the case now (before our intervention), undergo an interview with the central office and one of the regional managers. All candidates will also participate in our survey.
Then, half of the candidates will be hired based on the existing procedure in the company, that is, according to the managers' recommendation. The other half of the candidates will be hired based on the AI prediction of the similarity of the candidates to the best-performing employees.
Experimental Design Details
Randomization Method
Randomization will be done by a computer, with every second candidate being in treatment.
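A minimal sketch of this alternating assignment rule, with hypothetical candidate identifiers; in the study itself the assignment is done by a computer as candidates arrive.

```python
# Sketch of the stated randomization rule: every second candidate, in order
# of application, is assigned to the AI-based hiring treatment.
def assign_treatment(candidates):
    """Return {candidate: arm} with every second candidate in the 'AI' arm."""
    return {
        c: ("AI" if i % 2 == 1 else "manager")
        for i, c in enumerate(candidates)
    }

arms = assign_treatment(["cand_01", "cand_02", "cand_03", "cand_04"])
print(arms)
```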
Randomization Unit
Individual.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No clusters are planned.
Sample size: planned number of observations
At least 200 new employees. In order to train the AI algorithm, we will use the survey answers of the firm's current employees (about 1,000 employees).
Sample size (or number of clusters) by treatment arms
At least 100 new employees in each treatment (hired via the existing procedure / hired using the AI algorithm).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials