Attribution of Failure and Success in Strategic Settings

Last registered on November 14, 2023

Pre-Trial

Trial Information

General Information

Title
Attribution of Failure and Success in Strategic Settings
RCT ID
AEARCTR-0011372
Initial registration date
May 08, 2023

First published
May 16, 2023, 2:28 PM EDT

Last updated
November 14, 2023, 6:36 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
NYU Abu Dhabi

Other Primary Investigator(s)

PI Affiliation
Open Evidence

Additional Trial Information

Status
In development
Start date
2023-05-09
End date
2026-05-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
During our educational and professional lives, we face failures and successes that we need to justify to ourselves and to others. In most cases, failure and success result from an unknown combination of internal factors (i.e., one's own ability and exerted effort) and external factors outside one's own control (i.e., other people or luck). The evidence suggests that when people attribute the causes of failure and success, they often exhibit a "self-attribution bias": they attribute success to their own merit and failure to external sources in order to maintain self-esteem. In this project, we study an additional, strategic reason for the self-attribution bias. We use an online experiment to test how individuals attribute noisy feedback when the final outcome can be due to their own ability, someone else's ability, or the computer. In addition, following recent evidence on gender differences in attribution biases, we test whether men and women use different justifications for failure and success and study the consequences in a hiring context. Understanding the nature and economic consequences of gender differences in the attribution of failures and successes is crucial, as it could be one of the causes of observed gender gaps in the labor market, such as the under-representation of women in top-level positions. Furthermore, following the recent literature on algorithm aversion, we also test whether people use different justifications when the source of the failure or success is another person rather than the computer, and we study how such justifications are perceived in a hiring context.
External Link(s)

Registration Citation

Citation
Lozano, Lina and Marcello Negrini. 2023. "Attribution of Failure and Success in Strategic Settings." AEA RCT Registry. November 14. https://doi.org/10.1257/rct.11372-2.0
Experimental Details

Interventions

Intervention(s)
During our educational and professional lives, we encounter both failures and successes that we need to justify to ourselves and to others. In most cases, failure and success result from an unknown combination of internal factors (i.e., our own ability and exerted effort) and external factors beyond our control (i.e., other people or luck). In this study, we conduct an online experiment to test how individuals attribute noisy feedback to themselves and others when the final outcome can be attributed to their own ability, someone else's ability, or the computer.
Intervention Start Date
2023-05-09
Intervention End Date
2024-05-22

Primary Outcomes

Primary Outcomes (end points)
Workers' self-attribution justifications, conditional on whether the outcome is a failure or a success, the source of noise, gender, and individual beliefs about performance.
Employers' hiring decisions based on the attribution messages sent by the workers, conditional on the worker's treatment and gender.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
At the end of the experiment, we elicit (i) demographics (gender, age, nationality, and education), (ii) workers' perceptions of lying behavior, risk and social preferences, and the appropriateness of different justification messages, and (iii) workers' beliefs about the likelihood of being hired given the message sent to the employer. We will conduct heterogeneity analyses based on these characteristics.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our experimental design consists of two experiments. We first conduct the worker experiment, in which we measure the performance of participants, labeled "workers," in a general knowledge quiz and elicit their justifications for the failure or success of their quiz outcome. In the employer experiment, we elicit the hiring behavior of participants, labeled "employers," after they observe how the workers justify the failure or success of their quiz outcome.

For the worker experiment, we use a 2x2 between-subjects treatment design. The first treatment variation is the source of the noise (i.e., the performance of the other matched player or a random number chosen by the computer). The second treatment variation is whether the justification message is displayed to the employer and has payoff consequences for the employer.
Experimental Design Details
Not available
Randomization Method
For the worker experiment, Prolific participants will be randomized to one of the three treatments by the randomization program in Qualtrics.
For the employer experiment, we will use the strategy method for incentivized hiring decisions. Prolific participants will be randomly matched by the computer with workers from the worker experiment.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
no clusters used
Sample size: planned number of observations
2,600 Prolific participants: 1,500 workers and 1,100 employers.
Sample size (or number of clusters) by treatment arms
Worker experiment: 1,500 subjects (500 per treatment)
Employer experiment: 500 subjects (250 per strategic worker treatment) and 600 subjects for the gender-revealed treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We will require a sample of at least 500 subjects per worker treatment to detect a difference of 0.25 in attribution justifications (by gender or treatment) with 99% power, using a two-sample proportions test (Fisher's exact test). For the Gender Revealed treatment, our goal is to recruit around 600 employers, targeting an effect size of around 0.1 with an estimated power of 85% at a significance level of 0.05, again using a two-sample proportions test (Fisher's exact test).
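As a rough sanity check on these targets, a power analysis of this kind can be approximated in Python with statsmodels, as in the sketch below. The baseline proportions (0.50 vs. 0.25 for workers, 0.50 vs. 0.40 for employers) and the 300-per-group split of the 600 employers are illustrative assumptions, not figures from this registration, so the output will not necessarily match the registered power values.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative power check (not the authors' calculation); the baseline rates
# and the employer group split below are assumptions for this sketch only.
analysis = NormalIndPower()

# Worker experiment: a 0.25 difference in justification rates
# (assumed baselines 0.50 vs 0.25), 500 subjects per treatment arm.
h_workers = proportion_effectsize(0.50, 0.25)
power_workers = analysis.solve_power(effect_size=h_workers, nobs1=500,
                                     alpha=0.05, ratio=1.0,
                                     alternative="two-sided")

# Gender-revealed treatment: a 0.10 difference (assumed baselines 0.50 vs 0.40),
# roughly 300 employers per compared group out of the planned 600.
h_employers = proportion_effectsize(0.50, 0.40)
power_employers = analysis.solve_power(effect_size=h_employers, nobs1=300,
                                       alpha=0.05, ratio=1.0,
                                       alternative="two-sided")

print(f"Workers (n=500/arm, diff=0.25): power ~ {power_workers:.2f}")
print(f"Employers (n=300/group, diff=0.10): power ~ {power_employers:.2f}")

Because Fisher's exact test is slightly more conservative than the normal approximation used here, and because power is sensitive to the assumed baseline rates, this sketch is only a ballpark check rather than a reproduction of the registered calculation.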
IRB

Institutional Review Boards (IRBs)

IRB Name
New York University Abu Dhabi Review Board
IRB Approval Date
2023-03-27
IRB Approval Number
HRPP-2020-37
Analysis Plan

There is information in this trial unavailable to the public.