The Effects of Evaluator Type and Transparency on Employee Effort

Last registered on March 16, 2020

Pre-Trial

Trial Information

General Information

Title
The Effects of Evaluator Type and Transparency on Employee Effort
RCT ID
AEARCTR-0005555
Initial registration date
March 13, 2020

First published
March 16, 2020, 11:35 AM EDT


Locations

Region

Primary Investigator

Affiliation
Tulane University

Other Primary Investigator(s)

PI Affiliation
Tulane University

Additional Trial Information

Status
In development
Start date
2020-03-13
End date
2020-06-30
Secondary IDs
Abstract
Subjectively evaluating employees is a difficult but necessary part of business. Because of the difficulty and expense of evaluating employees, and the growth of artificial intelligence (AI) technology, firms are looking toward higher-tech solutions. Firms are now investing in big data collection that feeds AI algorithms in order to judge the performance of their employees (BasuMallick 2019; Fisher 2019; Pawar 2019). Understanding how employees react to these judgments and to the use of this technology is an increasingly important subject. We use a laboratory study to evaluate how employees’ future effort choices are affected by the use of AI to evaluate them.
External Link(s)

Registration Citation

Citation
Bol, Jasmijn and Lisa LaViers. 2020. "The Effects of Evaluator Type and Transparency on Employee Effort." AEA RCT Registry. March 16. https://doi.org/10.1257/rct.5555-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-03-13
Intervention End Date
2020-06-30

Primary Outcomes

Primary Outcomes (end points)
The effort selected by participants on two sequential projects.
Primary Outcomes (explanation)
We examine effort selection because it is the outcome of interest to employers. Employees who believe that the management control system is more or less fair express these perceptions by changing the effort levels they provide to the firm.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will use an online labor market to test the effect of evaluator type and transparency on employee effort decisions.
Experimental Design Details
Online labor market participants will be invited to participate in our study at a minimum payment rate of $2. Participants will then be told to read information about a hypothetical firm, ABC Gaming, and a job, Programmer, which they are told to imagine are their place of employment and role in the firm. They will be asked attention-check questions about this information and paid $0.10 for each question they answer correctly. Next, the participants will be told they have been assigned to a project developing a cell phone gaming application. They will then be told that they have an endowment of $1.50 for the project and be asked to select an effort level for the project. Effort levels between 0 and 10 will be available to them. The cost of effort increases 1.5x with each unit: zero effort units cost $0.00, one effort unit costs $0.01, and ten effort units, the full effort, cost $0.50. Participants will be told that the more effort they contribute, the more it costs them but the more it helps ABC Gaming.
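The escalating effort-cost schedule described above can be sketched as follows. The $0.01 base cost and 1.5x per-unit multiplier are taken from the registration; the exact per-level values are an assumption, since a strict 1.5x progression from $0.01 reaches roughly $0.38 at ten units rather than the stated $0.50, so the actual instrument presumably rounds or adjusts the upper levels.

```python
def effort_cost_schedule(levels=10, base=0.01, growth=1.5):
    """Illustrative cost per effort level: level 0 is free, level 1 costs
    `base`, and each further level multiplies the previous cost by `growth`.
    This is one reading of the registered design, not the actual instrument."""
    costs = [0.0]  # zero effort units cost $0.00
    for level in range(1, levels + 1):
        costs.append(round(base * growth ** (level - 1), 4))
    return costs

schedule = effort_cost_schedule()
```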

After participants select an effort level for the project, they will be told that the firm, ABC Gaming, is evaluating employees for a promotion. We manipulate transparency (the amount of detail employees are given about how the promotion decision was made: less transparency versus more transparency) and evaluator type (human versus AI). All participants are then told they did not receive the promotion.

Next, participants are told they have been assigned to a new project and are asked to make the same effort choice as before, with the same costs.

Participants finish the study by taking a short survey about their views on subjective evaluation by human managers and by AI. They are also asked more broadly how they view the role of technology in the workplace.
Randomization Method
Randomization by computer
Randomization Unit
Individual participants
Was the treatment clustered?
No
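Computer randomization of individual participants across the 2x2 design (evaluator type crossed with transparency) could look like this minimal sketch; the arm labels and the use of Python's `random` module are illustrative assumptions, not the study's actual code.

```python
import random

# Illustrative 2x2 between-subjects design from the registration:
# evaluator type (human vs. AI) crossed with transparency (less vs. more).
ARMS = [(evaluator, transparency)
        for evaluator in ("human", "AI")
        for transparency in ("less transparency", "more transparency")]

def assign(participant_ids, seed=None):
    """Independently randomize each individual participant to one of
    the four treatment arms (no clustering)."""
    rng = random.Random(seed)
    return {pid: rng.choice(ARMS) for pid in participant_ids}

assignment = assign(range(200), seed=42)
```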

Experiment Characteristics

Sample size: planned number of clusters
100-300 Individuals
Sample size: planned number of observations
100-300 Individuals
Sample size (or number of clusters) by treatment arms
50
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Tulane Institutional Review Board
IRB Approval Date
2020-02-15
IRB Approval Number
Study 2019-2365-TU Uptown (IRB)
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials