Effects of the assessment format on performance ratings

Last registered on March 18, 2022

Pre-Trial

Trial Information

General Information

Title
Effects of the assessment format on performance ratings
RCT ID
AEARCTR-0007599
Initial registration date
May 04, 2021

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 05, 2021, 11:25 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
March 18, 2022, 2:53 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
Paderborn University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-06-30
End date
2022-07-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Subjective appraisals are frequently used to evaluate employees' job performance. A large body of research has shown that these subjective performance ratings tend to be biased: they are often too lenient and too similar across employees, meaning that raters do not differentiate between high and low performers. This study investigates how the appraisal format influences performance ratings by comparing written and spoken appraisals.
External Link(s)

Registration Citation

Citation
Gutt, Jana Kim. 2022. "Effects of the assessment format on performance ratings." AEA RCT Registry. March 18. https://doi.org/10.1257/rct.7599-2.3
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-06-30
Intervention End Date
2022-07-31

Primary Outcomes

Primary Outcomes (end points)
We study the influence of the assessment format (written or spoken) on the performance rating.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Respondents are shown a video of individuals in work contexts. After viewing the video, respondents are asked to evaluate two individuals' performance. The evaluation consists of free-text appraisals and rating scales. The control group provides the free-text appraisals in written form, whereas the treatment group delivers them verbally.
Experimental Design Details
Randomization Method
The randomization is done by a computer.
Randomization Unit
Individual
Was the treatment clustered?
No
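
The registration states only that randomization is done by computer at the individual level, without clusters. A minimal sketch of how such an individual-level assignment into the two planned arms could be implemented (the function name, seed, and exact equal split are assumptions for illustration, not details from the registration):

# Illustrative sketch only -- not the study's actual randomization script.
# Assigns individual respondents to two equally sized arms at random.
import random

def assign_arms(respondent_ids, seed=12345):
    """Shuffle respondent IDs and split them into treatment and control arms."""
    rng = random.Random(seed)        # fixed seed keeps the assignment reproducible
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {**{i: "treatment" for i in ids[:half]},
            **{i: "control" for i in ids[half:]}}

# Example: 200 respondents, roughly 100 per arm as planned in the registration.
assignment = assign_arms(range(200))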

Experiment Characteristics

Sample size: planned number of clusters
We do not have clusters.
Sample size: planned number of observations
We plan to have about 200 observations.
Sample size (or number of clusters) by treatment arms
100 respondents control, 100 respondents treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
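The registration leaves this field blank. For illustration only, a standard two-sample calculation with the planned 100 respondents per arm, assuming a two-sided alpha of 0.05 and 80% power (both values are assumptions of this sketch, not part of the registration), could be computed as follows:

# Illustrative only: minimum detectable effect in standard-deviation units for
# a two-sample mean comparison with equal arms; alpha and power are assumed.
from scipy.stats import norm

def mde_standardized(n_per_arm, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    return (z_alpha + z_beta) * (2.0 / n_per_arm) ** 0.5

print(round(mde_standardized(100), 2))  # about 0.40 standard deviations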
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethik-Kommission Universität Paderborn (Ethics Committee Paderborn University)
IRB Approval Date
2021-05-04
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials