Attribution of Failure and Success in Strategic Settings

Last registered on May 16, 2023

Pre-Trial

Trial Information

General Information

Title
Attribution of Failure and Success in Strategic Settings
RCT ID
AEARCTR-0011372
Initial registration date
May 08, 2023

First published
May 16, 2023, 2:28 PM EDT

Locations

Region

Primary Investigator

Affiliation
NYU Abu Dhabi

Other Primary Investigator(s)

PI Affiliation
Open Evidence

Additional Trial Information

Status
In development
Start date
2023-05-09
End date
2023-07-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
During our educational and professional lives, we face failures and successes that we need to justify to ourselves and to others. In most cases, failure and success result from an unknown combination of internal factors (i.e., one's own ability and exerted effort) and external factors that lie outside one's own control (i.e., other people or luck). The evidence suggests that when people attribute the causes of failure and success, they often exhibit a "self-attribution bias" (attributing success to their own merit and failure to external sources) in order to maintain self-esteem. In this project, we study an additional, strategic reason for the self-attribution bias. We use an online experiment to test how individuals attribute noisy feedback when the source of the final outcome can be their own ability, someone else's ability, or the computer. In addition, following recent evidence on gender differences in attribution biases, we also test whether men and women use different justifications for failure and success, and we study the consequences of such differences in a hiring context. Understanding the nature and economic consequences of gender differences in the attribution of failures and successes is crucial, as they could be one of the causes of the observed gender gaps in the labor market, such as the under-representation of women in top-level positions. Furthermore, following the recent literature on algorithm aversion, we also test whether people use different justifications when the source of the failure or success is another person rather than the computer, and we study how such justifications are perceived in a hiring context.
External Link(s)

Registration Citation

Citation
Lozano, Lina and Marcello Negrini. 2023. "Attribution of Failure and Success in Strategic Settings." AEA RCT Registry. May 16. https://doi.org/10.1257/rct.11372-1.0
Experimental Details

Interventions

Intervention(s)
During our educational and professional lives, we encounter both failures and successes that we need to justify to ourselves and to others. In most cases, failure and success result from an unknown combination of internal factors (i.e., our own ability and exerted effort) and external factors that are beyond our control (i.e., other people or luck). In this study, we conduct an online experiment to test how individuals attribute noisy feedback to themselves and to others when the source of the final outcome can be attributed to their own ability, someone else's ability, or the computer.
Intervention Start Date
2023-05-09
Intervention End Date
2023-07-15

Primary Outcomes

Primary Outcomes (end points)
Workers' self-attribution justifications conditional on whether the outcome is a failure or a success, the source of noise, gender, and individual beliefs about performance.
Employers' hiring decisions based on the attribution messages sent by the workers, conditional on the workers' treatment.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
At the end of the experiment, we elicit (i) demographics (gender, age, nationality, and education); (ii) workers' perceptions of lying behavior, risk and social preferences, and the appropriateness of different justification messages; and (iii) workers' beliefs about the likelihood of being hired given the message sent to the employer. We will conduct a heterogeneity analysis based on these characteristics.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our experimental design consists of two experiments. We first conduct the worker experiment, in which we measure the performance of participants labeled "workers" on a general knowledge quiz and elicit their justifications for the failure or success of their outcome on the quiz. In the employer experiment, we elicit the hiring behavior of participants labeled "employers" toward the workers after they observe how the workers justify the failure or success of their quiz outcome.

For the workers, we use a 2x2 between-subjects treatment design. The first treatment variation varies the source of the noise (i.e., the performance of the other matched player or a random number chosen by the computer). The second treatment variation varies whether the justification message is displayed to the Employer and has payoff consequences for the Employer.
Experimental Design Details
I. Workers' experimental design
In the first stage, the worker performs a knowledge test in which she is incentivized to answer as many questions as possible correctly; she is then incentivized to report beliefs about her absolute and relative performance. In the second stage, the worker is only informed about whether she fails or succeeds at the test, i.e., whether her final output in the previous task is ranked in a specific percentile or not. The novel aspect of our design is that the worker is truthfully informed that her final output depends on two elements: an internal one, which is her own actual ability to answer the knowledge test (i.e., the number of correct answers), and an external one, which is a component of luck (i.e., the performance of another player who previously performed the same knowledge task, or a random number added by the computer). The worker knows that her final output is the average of her own true performance and the performance of another participant who previously performed the knowledge quiz (or the random number). A failure or success depends on whether this average is ranked in a specific percentile that is revealed to her. Our definition of success (failure) is based on a comparison with a group of selected participants who have already completed the knowledge quiz, with success defined as being ranked in the top 50% and failure as being ranked in the bottom 50%. However, we acknowledge that we may define a different cutoff value for success and failure in order to explore the effects of different cutoff values on behavior.
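As a rough illustration of this output rule, the minimal sketch below computes a worker's final output and classifies it against a reference group. The function names, the use of a percentile threshold over a reference group, and the tie-breaking rule (ties count as success) are our assumptions for illustration, not the registered implementation.

import numpy as np

def final_output(own_score, external_score):
    # Average of the worker's own quiz score and the external component
    # (another participant's score or a computer-generated random number).
    return (own_score + external_score) / 2.0

def classify(output, reference_outputs, cutoff_percentile=50):
    # Success if the final output reaches the cutoff percentile of a
    # reference group that already completed the quiz (top 50% by default).
    threshold = np.percentile(reference_outputs, cutoff_percentile)
    return "success" if output >= threshold else "failure"

# Hypothetical example: own score 12, matched player's score 8, reference group of outputs.
reference = np.array([4, 6, 7, 8, 9, 10, 11, 12, 13, 15])
print(classify(final_output(12, 8), reference))  # "success": (12 + 8) / 2 = 10.0 >= median 9.5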

For the workers, we use a 2x2 between-subjects treatment design. The first treatment variation varies the source of the noise (i.e., the performance of the other matched player or a random number chosen by the computer). The second treatment variation varies whether the distribution of responsibility points is displayed to the Employer and has payoff consequences. We will ensure there is no possibility of hedging by randomizing the payment across the different parts of the experiment.

A. Treatment Baseline - Private
In this treatment, the worker's final output in the knowledge quiz is obtained as the average of her own true performance and the performance of another participant who previously performed the knowledge quiz. Furthermore, the message generated by the worker's distribution of responsibility points for her failure or success will not be displayed to the Employer or to any other player.

B. Treatment Strategic - Other player
In this treatment, the worker's final output in the knowledge quiz is obtained as the average of her own true performance and the performance of another participant who previously performed the knowledge quiz. In contrast to the baseline treatment, the message generated by the distribution of responsibility points for failure or success is a signal sent to another player who is randomly matched with her, the Employer. The worker is aware of this before distributing responsibility points between herself and the other player. Based on the generated message, the Employer decides whether or not to hire the worker. The worker receives a high bonus if the Employer hires her, and a low bonus otherwise. The Employer obtains a medium bonus if he does not hire the worker. If the Employer hires the worker, he receives a low bonus if the worker's true performance is not ranked in the top 50% and a high bonus if her true performance is ranked in the top 50%.
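The payoff structure above can be summarized with the following minimal sketch. The specific bonus amounts are placeholders (the registration only fixes the ordering high > medium > low), and the function and variable names are ours, not the experiment's.

# Placeholder bonus levels; only their ordering (HIGH > MEDIUM > LOW) reflects the design.
HIGH, MEDIUM, LOW = 3.0, 2.0, 1.0

def worker_payoff(hired):
    # The worker earns the high bonus if hired and the low bonus otherwise.
    return HIGH if hired else LOW

def employer_payoff(hired, worker_in_top_half):
    # Not hiring yields the medium bonus; hiring pays off only if the worker's
    # true performance is ranked in the top 50%.
    if not hired:
        return MEDIUM
    return HIGH if worker_in_top_half else LOW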

C. Treatment Strategic - Computer
This treatment is identical to Treatment Strategic - Other player, except that the worker's final output in the knowledge quiz is obtained as the average of her own true performance and a random number generated by the computer.

The introduction of this noise into true performance allows motivated beliefs to emerge (e.g., falsely attributing failure to bad luck, or success to one's own or others' ability). In the third stage, we aim to capture these beliefs in the form of taking or not taking "responsibility" for failure or success. That is, the worker is asked to justify why she thinks she failed or succeeded at the test. To do so, the worker has to distribute responsibility points for her failure or success between her own ability and the performance of the other matched player (or the random number chosen by the computer). The chosen distribution of responsibility points generates a justification message that will be displayed to the Employer.
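As a purely illustrative sketch of the responsibility-points mechanism (the fixed point budget, the message wording, and the function name are assumptions; the registration does not specify how the message is rendered):

def justification_message(points_to_self, points_to_external, external_label):
    # Render the worker's responsibility split as the message shown to the Employer.
    # Assumes a fixed budget of points split between the two sources.
    total = points_to_self + points_to_external
    return (f"The worker attributes {points_to_self} of {total} responsibility points "
            f"to their own ability and {points_to_external} of {total} to {external_label}.")

# e.g., a worker who attributes most of a failure to the matched player:
print(justification_message(20, 80, "the other player's performance"))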

II. Employer experimental design
The employer makes a series of hiring decisions and completes a final follow-up questionnaire. First, he is familiarized with the general knowledge quiz that the workers performed. He then makes each hiring decision after observing whether the worker's outcome is ranked in the top 50% or not, as well as the message generated by the worker's distribution of responsibility points for failure or success. The payoffs are identical to those described in Treatment Strategic - Other player.

In each of the hiring decisions, the Employer must decide whether to hire a worker or not. After the Employer makes all hiring decisions, one decision is selected at random as the decision that counts for his payoffs.
Randomization Method
For the worker experiment, Prolific participants will be randomized to one of the three treatments by the randomization program in Qualtrics.
For the employer experiment, we will use the strategy method for incentivized hiring decisions. Prolific participants will be randomly matched by the computer with workers from the worker experiment.
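A minimal sketch of equal-probability assignment to the three worker treatments follows; the treatment labels and the use of Python's random module are ours for illustration, since the actual assignment is handled by Qualtrics' randomizer.

import random

TREATMENTS = ["Baseline - Private", "Strategic - Other player", "Strategic - Computer"]

def assign_treatment(rng=random):
    # Each worker is assigned to one of the three treatments with equal probability.
    return rng.choice(TREATMENTS)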
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No clusters used.
Sample size: planned number of observations
2,000 Prolific participants: 1,500 workers and 500 employers.
Sample size (or number of clusters) by treatment arms
Worker experiment: 1,500 subjects (500 per treatment)
Employer experiment: 500 subjects (250 per strategic worker treatment)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We will require a sample of at least 500 subjects per worker treatment to detect a difference of 0.25 between attribution justifications (by gender or treatment) with 99% power, using a two-sample test of proportions (Fisher's exact test).
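The following minimal sketch checks this power claim with a normal-approximation two-sample proportions test (as an approximation to the planned Fisher's exact test). The baseline proportions of 0.50 versus 0.25 are an assumption, since the registration only specifies the difference of 0.25.

from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Assumed proportions differing by 0.25; only the difference is given in the registration.
p1, p2 = 0.50, 0.25
effect_size = proportion_effectsize(p1, p2)  # Cohen's h

# Power achieved with 500 subjects per arm at alpha = 0.05 (two-sided).
achieved_power = NormalIndPower().power(effect_size=effect_size, nobs1=500,
                                        alpha=0.05, ratio=1.0,
                                        alternative='two-sided')
print(f"Approximate power with 500 per arm: {achieved_power:.3f}")  # well above 0.99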
IRB

Institutional Review Boards (IRBs)

IRB Name
New York University Abu Dhabi Review Board
IRB Approval Date
2023-03-27
IRB Approval Number
HRPP-2020-37
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials