A study of feedback aversion on a job interview task

Last registered on November 13, 2020

Pre-Trial

Trial Information

General Information

Title
A study of feedback aversion on a job interview task
RCT ID
AEARCTR-0006735
Initial registration date
November 12, 2020

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 13, 2020, 8:36 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-11-16
End date
2020-12-09
Secondary IDs
Abstract
This project examines individuals' preferences for receiving feedback on their performance on a mock job interview. We will explore how preferences for feedback depend on beliefs about relative performance, the possibility of gender discrimination by the provider of the feedback, and the participant's gender. We will also elicit unincentivized data on how individuals interpret and generalize both positive and negative feedback.
External Link(s)

Registration Citation

Citation
Coffman, Katherine B. and David Klinowski. 2020. "A study of feedback aversion on a job interview task." AEA RCT Registry. November 13. https://doi.org/10.1257/rct.6735-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-11-16
Intervention End Date
2020-12-09

Primary Outcomes

Primary Outcomes (end points)
Main outcome measures: an incentivized measure of preference for receiving information on relative performance on the mock job interview, derived from eleven questions that vary the real-effort price to receive or avoid the information; and an unincentivized stated preference for receiving information about relative performance.
Secondary outcome measures: believed relative performance on the interview, degree of uncertainty about relative performance, unincentivized self-reported interpretation and generalizability of hypothetical feedback, and unincentivized beliefs about average sex differences in performance in the study.
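To fix ideas only (this is not part of the registered protocol), a minimal sketch of how the eleven price-list choices might be coded into a single signed willingness-to-pay measure, assuming a hypothetical signed price grid in which positive values are real-effort prices to receive the information and negative values are prices to avoid it:

```python
# Illustrative sketch only. The actual price grid is not specified in this record;
# the values below are hypothetical. Positive prices are real-effort costs of
# receiving the information; negative prices mean the question instead priced
# avoiding it (price to avoid counted as negative, as in the analysis plan).
PRICE_GRID = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]  # eleven hypothetical signed prices

def signed_wtp(choices):
    """One simple operationalization: the highest signed price at which the
    participant still chooses to receive the information. `choices` is a list of
    eleven "receive"/"avoid" strings aligned with PRICE_GRID; returns None if the
    participant never chooses to receive."""
    receive_prices = [p for p, c in zip(PRICE_GRID, choices) if c == "receive"]
    return max(receive_prices) if receive_prices else None
```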
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This is an online study conducted on Amazon Mechanical Turk. Participants complete two sessions, three weeks apart. In between the two sessions, a different group of participants completes an evaluation session.

In Session 1, participants provide demographic information about themselves (sex, age bracket, and region of residence) and answer 3 questions commonly asked in job interviews by typing their answers on the computer.

In the Evaluation Session, conducted between Session 1 and Session 2, a separate group of participants observes the answers to the interview questions given by 10 randomly chosen Session 1 participants. Based on the answers they see, the Evaluation Session participants rate each participant whose answers they read on four dimensions: intellectual curiosity, tendency to strive for achievement, assertiveness, and tolerance to stress.

In Session 2, participants of Session 1 are given the opportunity to learn their average overall rank on the four traits, as determined in the Evaluation Session, provided their answers were among those evaluated. Additionally, participants are asked a series of questions about how they would interpret and generalize both positive and negative feedback, questions about experienced and anticipated discrimination based on sex, and some additional demographic questions.
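Purely as an illustration (the record does not spell out the aggregation), a sketch of how an average overall rank across the four traits could be computed for the ten evaluated participants; the trait keys and the convention that rank 1 is the highest-rated participant are assumptions:

```python
# Illustrative sketch only; trait keys and the "rank 1 = highest rated" convention
# are assumptions, not taken from the registration. Ties are ignored for brevity.
TRAITS = ["intellectual_curiosity", "achievement_striving", "assertiveness", "stress_tolerance"]

def average_overall_rank(ratings):
    """`ratings` maps participant id -> {trait: numeric rating} for the ten
    evaluated participants; returns participant id -> average rank across traits."""
    ranks = {pid: [] for pid in ratings}
    for trait in TRAITS:
        # Order participants from highest to lowest rating on this trait.
        ordered = sorted(ratings, key=lambda pid: ratings[pid][trait], reverse=True)
        for position, pid in enumerate(ordered, start=1):
            ranks[pid].append(position)
    return {pid: sum(r) / len(r) for pid, r in ranks.items()}
```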

Each session is completed in one sitting.
Experimental Design Details
There are two treatments in the study, which vary in a between-subjects design whether the evaluator in the Evaluation Session is shown demographic information (sex, age bracket, and region of residence) about the Session 1 participants. In one treatment, the evaluator is shown the Session 1 participants' answers to the job interview questions, but not their demographic information. In the other treatment, the evaluator is shown both the answers to the job interview questions and the demographic information (sex, age bracket, and region of residence) of the participants who gave the answers. This varies whether discrimination on the basis of gender is possible across the two treatments, allowing us to test whether the possibility of discrimination affects preferences for feedback.

In both treatments, the evaluators are Human Resources professionals recruited from Upwork to evaluate the answers to the job interview questions given by the randomly selected subset of Session 1 participants.

Only participants of Session 1 whose answers to each interview question have at least 60 words, and who take at least 2 minutes to answer each interview question, will be invited to participate in Session 2 and will be in the sample from which 10 participants will be randomly selected to have their answers evaluated.
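For concreteness, a minimal sketch of the Session 2 eligibility rule just described (the variable names are hypothetical, not the study's):

```python
# Illustrative sketch only; variable names are hypothetical.
MIN_WORDS_PER_ANSWER = 60
MIN_SECONDS_PER_ANSWER = 120  # 2 minutes

def eligible_for_session_2(answers, seconds_spent):
    """`answers` holds the three interview answers (strings); `seconds_spent`
    holds the time taken on each answer, in seconds."""
    long_enough = all(len(a.split()) >= MIN_WORDS_PER_ANSWER for a in answers)
    slow_enough = all(t >= MIN_SECONDS_PER_ANSWER for t in seconds_spent)
    return long_enough and slow_enough
```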

Restricting the analysis: We will restrict the analysis of the incentivized measure of preferences for feedback to participants who are monotonic in their choices on the eleven incentivized elicitation questions. In this subsample, we will examine the highest real-effort price the participant is willing to pay to receive the information (taking the price to avoid as negative). Since it is possible that a nontrivial fraction of participants are nonmonotonic in their choices on the eleven elicitation questions, we will also consider the following measures on the entire sample of Session 2 participants (see the sketch after this list):
1. Whether the participant chooses to receive or avoid the information at 0 price for both options (i.e., the first elicitation question)
2. The highest price the participant is willing to pay to receive the information in the positive range of prices to receive the information
3. The highest price the participant is willing to pay to avoid the information in the positive range of prices to avoid the information
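As a sketch of how the monotonicity restriction and the three full-sample measures might be implemented (the coding conventions here are assumptions, not part of the registration):

```python
# Illustrative sketch only. Each participant's responses are coded as eleven
# (signed_price, choice) pairs, where signed_price is the real-effort price to
# receive the information (negative when the question instead prices avoiding it)
# and choice is "receive" or "avoid".

def is_monotonic(choices):
    """One interpretation of monotonicity: the participant chooses "receive" at
    every price below some threshold and "avoid" at every price above it."""
    ordered = [c for _, c in sorted(choices)]  # choices sorted by signed price
    first_avoid = ordered.index("avoid") if "avoid" in ordered else len(ordered)
    return all(c == "avoid" for c in ordered[first_avoid:])

def full_sample_measures(choices):
    """The three measures considered on the full Session 2 sample."""
    receive_at_zero = dict(choices).get(0) == "receive"
    paid_to_receive = [p for p, c in choices if p > 0 and c == "receive"]
    paid_to_avoid = [-p for p, c in choices if p < 0 and c == "avoid"]
    return {
        "receive_at_zero_price": receive_at_zero,
        "max_price_to_receive": max(paid_to_receive) if paid_to_receive else None,
        "max_price_to_avoid": max(paid_to_avoid) if paid_to_avoid else None,
    }
```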

Session 1 of the study is expected to start on November 16, 2020. The Evaluation Session is expected to start on November 23, 2020. Session 2 of the study is expected to start on December 7, 2020.
Randomization Method
We randomize treatment assignment at the individual level; randomization is done by computer.
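A minimal sketch of one way the computer randomization could be implemented, assuming balanced assignment of individuals to the two conditions (the record specifies individual-level computer randomization but not the exact mechanism):

```python
# Illustrative sketch only; the exact randomization mechanism is an assumption.
import random

def assign_treatments(participant_ids, seed=None):
    """Shuffle participants and split them evenly between the two evaluation
    conditions ("blind" vs. "not_blind" to demographics)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("blind" if i < half else "not_blind") for i, pid in enumerate(ids)}
```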
Randomization Unit
Individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1500 participants in Session 2
Sample size: planned number of observations
750 participants in Session 2 per treatment, for a total of 1500 participants in Session 2.
Sample size (or number of clusters) by treatment arms
750 participants in Session 2 in the condition in which the evaluation is blind to demographics, 750 participants in Session 2 in the condition in which the evaluation is not blind to demographics.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard Business School
IRB Approval Date
2020-10-22
IRB Approval Number
IRB20-1818

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials