Effects of the assessment format on performance ratings

Last registered on July 19, 2021

Pre-Trial

Trial Information

General Information

Title
Effects of the assessment format on performance ratings
RCT ID
AEARCTR-0007599
Initial registration date
May 04, 2021
Last updated
July 19, 2021, 5:59 AM EDT

Locations

There are documents in this trial unavailable to the public.

Primary Investigator

Affiliation
Paderborn University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2021-09-01
End date
2021-10-31
Secondary IDs
Abstract
When evaluating employees' job performance, subjective appraisals are frequently used. A large body of studies has stressed that these subjective performance ratings tend to be biased: they are often too lenient and too similar across employees, meaning that raters do not differentiate between high and low performers. This study investigates the influence of the appraisal format on performance ratings by comparing written and spoken appraisals.
External Link(s)

Registration Citation

Citation
Gutt, Jana Kim. 2021. "Effects of the assessment format on performance ratings." AEA RCT Registry. July 19. https://doi.org/10.1257/rct.7599-2.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2021-09-01
Intervention End Date
2021-10-31

Primary Outcomes

Primary Outcomes (end points)
We study the influence of the assessment format (written or spoken) on the performance rating.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Respondents are shown a video of individuals in work contexts. After viewing the video, respondents are asked to evaluate the performance of two of these individuals. The evaluation consists of free-text appraisals and rating scales. In the control group, the free-text appraisal is given in written form, whereas in the treatment group respondents speak about the performance.
Experimental Design Details
Not available
Randomization Method
The randomization is done by a computer.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We do not have clusters.
Sample size: planned number of observations
We plan to have about 200 observations.
Sample size (or number of clusters) by treatment arms
100 respondents control, 100 respondents treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethik-Kommission Universität Paderborn (Ethics Committee, Paderborn University)
IRB Approval Date
2021-05-04
IRB Approval Number
N/A