Who Should Receive Relative Feedback? And Why?

Last registered on October 04, 2023

Pre-Trial

Trial Information

General Information

Title
Who Should Receive Relative Feedback? And Why?
RCT ID
AEARCTR-0012213
Initial registration date
September 29, 2023

First published
October 04, 2023, 4:51 PM EDT

Locations

Region

Primary Investigator

Affiliation
University of Cologne

Other Primary Investigator(s)

PI Affiliation
University of Cologne
PI Affiliation
Frankfurt School of Finance & Management
PI Affiliation
University of Cologne

Additional Trial Information

Status
In development
Start date
2023-10-04
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Previous research has provided evidence that feedback in the form of relative performance information (RPI) can increase employees’ performance. However, it is very likely that the strength of the performance effects resulting from such feedback depends on the feedback recipient’s personality traits and prior performance (e.g., whether the feedback is positive or negative). In this study, we aim to investigate whether machine learning can enhance the effects of RPI by tailoring it to an employee’s individual characteristics. Additionally, we will utilize both closed-ended and open-ended questions to target RPI and explore the use of alternative data sources.
External Link(s)

Registration Citation

Citation
Opitz, Saskia et al. 2023. "Who Should Receive Relative Feedback? And Why?." AEA RCT Registry. October 04. https://doi.org/10.1257/rct.12213-1.0
Experimental Details

Interventions

Intervention(s)
In the RPI treatment, participants will receive feedback in the form of relative performance information. That is, they receive feedback on their percentile ranks based on how many grids they correctly solved during the 7.5 minutes of a real-effort task. In the control group, participants do not receive relative performance information.
Intervention Start Date
2023-10-04
Intervention End Date
2023-12-15

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variable is the number of correctly solved grids in the second 7.5 minutes of the real-effort task (controlling for the number of grids solved in the first working phase). In particular, we will estimate conditional average treatment effects to investigate whether targeted assignment of relative performance information, (i) to participants with specific characteristics and (ii) depending on their prior performance, can increase performance (the number of correctly solved grids).
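
A minimal illustration of how such conditional average treatment effects could be estimated; this is not the registered analysis code, and the variable names (grids_phase2, rpi, goal_orientation, etc.) are hypothetical:

```python
# Hedged sketch: heterogeneous treatment effects of RPI, estimated by interacting
# the treatment indicator with participant characteristics and prior performance,
# controlling for first-phase output. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # hypothetical file, one row per participant

# grids_phase2: correctly solved grids in the second 7.5-minute phase (primary outcome)
# grids_phase1: correctly solved grids in the first 7.5-minute phase (control)
# rpi:          1 if assigned to the RPI treatment, 0 if control
# goal_orientation, feedback_orientation: example survey-based covariates
model = smf.ols(
    "grids_phase2 ~ rpi * (grids_phase1 + goal_orientation + feedback_orientation)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
# The coefficients on the rpi:<covariate> interaction terms indicate how the
# treatment effect varies with prior performance and the measured traits.
```
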
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcome variables are stress while working on the task and enjoyment of the task.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The Prolific experiment will consist of three parts. In the first part, participants have to fill out a survey that elicits their demographics (age, gender, education) as well as survey measures of performance goal orientation, mastery goal orientation, feedback orientation, social comparison, and Big Five personality traits. In addition to these established scales, two open-ended questions will be included: “Think of an experience in the past (e.g., at work or school) where your performance was compared to the performance of others. Please describe in detail how you personally perceived this experience.” and “Imagine the following: You receive information that you are performing worse than others on a task you have been working on. How do you think you would react?”

We invite participants again after approximately one week to work on a real-effort task. For this second part of the study, participants are again compensated with a fixed wage. Participants who did not answer the open-ended survey questions seriously (e.g., an answer such as “abcdefg”) will not be invited to the second part of the study.

In the real-effort task, participants have to identify mistakes in 10-row by 15-column grids of symbols, i.e., numbers, letters, or emojis. A mistake is defined as a symbol that differs from the most commonly occurring symbol in that grid. Each grid contains between one and five mistakes, and participants have to provide the location of each mistake. They have 7.5 minutes to solve as many grids as possible. All participants receive information about the number of correctly identified mistakes on their screen.
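
For intuition, a minimal sketch of how mistakes in such a grid could be identified under this definition; the example grid below is illustrative, not actual task material:

```python
# Hedged sketch: locate "mistakes" in a symbol grid, defined as cells whose symbol
# differs from the most commonly occurring symbol in that grid. The example grid is
# illustrative; the actual task uses 10x15 grids of numbers, letters, or emojis.
from collections import Counter

def find_mistakes(grid):
    """Return (row, column) positions of all symbols that differ from the grid's modal symbol."""
    counts = Counter(symbol for row in grid for symbol in row)
    modal_symbol, _ = counts.most_common(1)[0]
    return [
        (r, c)
        for r, row in enumerate(grid)
        for c, symbol in enumerate(row)
        if symbol != modal_symbol
    ]

example_grid = [
    ["A", "A", "A", "A", "A"],
    ["A", "B", "A", "A", "A"],
    ["A", "A", "A", "7", "A"],
]
print(find_mistakes(example_grid))  # [(1, 1), (2, 3)]
```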

After this second part, participants will be randomly assigned to either an RPI treatment or a control group. In the RPI treatment, participants will receive feedback in the form of relative performance information. That is, they receive feedback on their percentile rank based on how many mistakes they correctly identified during the 7.5 minutes of the real-effort task. The percentile rank is computed in comparison to participants from a pilot study in which no RPI was provided. In the control group, participants will not receive any relative performance feedback.
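
A minimal sketch of how such a percentile rank could be computed against the pilot-study distribution; the pilot scores shown are made up for illustration:

```python
# Hedged sketch: percentile rank of a participant's score relative to the
# distribution of scores from a pilot study without RPI. Pilot scores are made up.
from bisect import bisect_right

def percentile_rank(score, pilot_scores):
    """Share of pilot participants scoring at or below `score`, as a percentage."""
    sorted_scores = sorted(pilot_scores)
    return 100.0 * bisect_right(sorted_scores, score) / len(sorted_scores)

pilot_scores = [3, 5, 5, 6, 7, 8, 8, 9, 10, 12]  # illustrative pilot data
print(percentile_rank(8, pilot_scores))  # 70.0 -> performed at or above 70% of pilot participants
```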

Afterwards, participants work for another 7.5 minutes on the same real-effort task and, after completing it, receive information on the number of correctly identified mistakes on their screen as well as their relative performance if they were in the RPI treatment. They then complete a short post-survey consisting of control questions as well as questions regarding stress while working on the task, enjoyment of the task, whether they would have liked to receive (no) relative performance information, and how important it was for them to do well on the task. At the end of the experiment, participants will be redirected to Prolific and all receive the same fixed compensation, i.e., we will not award financial bonuses.
Experimental Design Details
Randomization Method
We will randomize participants into the treatment or control group such that there is an equal number of subjects in each group.
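
A minimal sketch of such an equal-allocation randomization, assuming participants are identified by (hypothetical) Prolific IDs:

```python
# Hedged sketch: randomize participants into two equally sized groups
# (RPI treatment vs. control). Participant IDs are hypothetical.
import random

def randomize_equal(participant_ids, seed=12345):
    """Shuffle IDs and split them into two equally sized arms."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"rpi": ids[:half], "control": ids[half:]}

groups = randomize_equal([f"P{i:04d}" for i in range(2000)])
print(len(groups["rpi"]), len(groups["control"]))  # 1000 1000
```
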
Randomization Unit
Individual subject
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The number of clusters is the same as the number of observations (please see below).
Sample size: planned number of observations
We invite 2000 individual subjects to participate in the study. If attrition between the first and second part of the study is too high, we plan to invite additional individual subjects.
Sample size (or number of clusters) by treatment arms
Ideally, 1000 individual subjects in the control group and 1000 individual subjects in the RPI treatment group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Cologne Ethical Review Board
IRB Approval Date
2023-07-10
IRB Approval Number
230034SO

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials