
By chance or by choice: Biased attribution of others' outcomes (Online Experiments)

Last registered on September 14, 2021

Pre-Trial

Trial Information

General Information

Title
By chance or by choice: Biased attribution of others' outcomes (Online Experiments)
RCT ID
AEARCTR-0006519
Initial registration date
October 05, 2020

First published
October 06, 2020, 7:29 AM EDT

Last updated
September 14, 2021, 4:33 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Exeter

Other Primary Investigator(s)

PI Affiliation
University of Melbourne
PI Affiliation
Monash University

Additional Trial Information

Status
Completed
Start date
2020-10-08
End date
2021-03-26
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Decision makers in positions of power often make unobserved choices under risk and uncertainty. In many cases, they face a trade-off between maximizing their own payoffs and those of other individuals. What inferences are made about their choices in such instances, when only outcomes are observable?

In a previous laboratory experiment, we investigated whether outcomes are attributed to luck or to choices. Our results reveal that attribution biases exist in the evaluation of good outcomes: on average, good outcomes are attributed more to luck than bad outcomes are. This asymmetry in how good and bad outcomes are attributed implies that decision makers receive too little credit for their successes. Importantly, the detected biases tend to be driven by those individuals who themselves make the selfish choice when placed in the role of the decision maker.

In this follow-up study, we investigate some of the mechanisms that may be driving these biases.
External Link(s)

Registration Citation

Citation
Erkal, Nisvan, Lata Gangadharan and Boon Han Koh. 2021. "By chance or by choice: Biased attribution of others' outcomes (Online Experiments)." AEA RCT Registry. September 14. https://doi.org/10.1257/rct.6519-1.3
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-10-08
Intervention End Date
2021-03-26

Primary Outcomes

Primary Outcomes (end points)
1. DMs’ decisions to choose the high investment option.
2. Members’ prior beliefs that the DM has chosen the high investment option.
3. Members’ posterior beliefs that the DM has chosen the high investment option, conditional on the investment succeeding (i.e., good outcome) and failing (i.e., bad outcome).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In the main task of the experiment (investment task), participants are divided into groups of three and assume the role of either the decision maker or a group member. Each group has one decision maker (DM) and two members. [Note: In the experiment, we refer to the DM as the “Leader” of the group.]

The DM makes an investment choice (either low or high investment) that affects the payoffs of themselves and their group members. Both investment options can either succeed or fail. The DM and each group member receive a high payoff if the chosen investment succeeds and a low payoff if it fails. A high investment is more costly to the DM, but it increases the chance that the investment succeeds.

The group members do not observe the DM’s investment choice, but they are asked to report their beliefs about the investment chosen by the DM. Specifically, members are asked to report their prior belief of the likelihood that the DM has chosen the high investment. Then, they are asked to report their posterior beliefs under two scenarios: (i) assuming that the investment has succeeded (i.e., good outcome) and (ii) assuming that the investment has failed (i.e., bad outcome).

Participants complete three rounds of the investment task, which are identical in structure but differ in the payoffs of the investment options. Participants remain in the same group and role for all rounds of the task.
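
To make the belief benchmark concrete, the sketch below works through one hypothetical round in Python: given a member's prior belief that the DM chose the high investment and the success probabilities of the two options, Bayes' rule yields the posterior belief after a good or a bad outcome, which is the theoretical benchmark against which reported posteriors can be compared. All numerical values are illustrative assumptions; the registration does not specify the actual success probabilities or payoffs.

```python
# Illustrative Bayesian benchmark for members' posterior beliefs.
# All numbers are hypothetical; the actual success probabilities and
# payoffs used in the experiment are not reported in this registration.

def bayesian_posterior(prior_high, q_high, q_low, success):
    """Posterior probability that the DM chose the high investment,
    given the observed outcome and the success probabilities of the
    high and low investment options."""
    like_high = q_high if success else 1 - q_high
    like_low = q_low if success else 1 - q_low
    return (prior_high * like_high) / (
        prior_high * like_high + (1 - prior_high) * like_low
    )

# Hypothetical parameters: prior belief 0.5; the high investment succeeds
# with probability 0.75 and the low investment with probability 0.25.
prior, q_h, q_l = 0.5, 0.75, 0.25
print(bayesian_posterior(prior, q_h, q_l, success=True))   # 0.75 after a good outcome
print(bayesian_posterior(prior, q_h, q_l, success=False))  # 0.25 after a bad outcome
```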
Experimental Design Details
INVESTMENT TASK:
The investment task is as described under Experimental Design above: participants are divided into groups of three, each with one DM (referred to as the “Leader” in the experiment) and two members; the DM makes an unobserved low or high investment choice that affects the payoffs of the whole group; and members report a prior belief and two posterior beliefs (conditional on the investment succeeding and failing) in each of three rounds with varying payoffs.

DICTATOR GAME:
After the last round of the investment task, participants play a dictator game in groups of two. Each participant is asked to divide an endowment of 300 ECU between themselves and another participant from the same session.

QUESTIONNAIRE:
At the end of the experiment, participants complete a questionnaire that elicits demographic information and asks about the decisions they made during the experiment. The questionnaire also includes an incentivized risk task (Gneezy and Potters, 1997), which allows us to elicit participants' risk preferences.

PARTICIPANT POOL AND PAYMENTS:
The experiments will be conducted online at the University of Melbourne. Participants will be recruited through the ORSEE recruitment system managed by the Experimental Economics Laboratory. Session sizes will range from 12 to 30 participants, depending on the show-up rate for each session.

Within each session, all participants will be paid for either one round of the investment task or the dictator game, randomly determined at the session level by the experimental software (oTree). If participants are paid for a given round of the investment task, the DMs will be paid for their investment decisions, while the members will be paid for either their DM's decision or their beliefs. Members' beliefs are incentivized using the binarized scoring rule (BSR). If participants are paid for the dictator game, then, within each pair, one participant's decision will be randomly chosen to determine the payoffs of both participants. All participants will also receive payment from the risk task in the questionnaire.
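
Under the binarized scoring rule, a probability report is scored against the realized event via a quadratic loss compared with a uniform random draw, so that reporting one's true belief maximizes the probability of winning a fixed prize regardless of risk preferences (Hossain and Okui, 2013). The sketch below illustrates the generic mechanism only; the prize amounts and the exact implementation used in the experiment are not stated in this registration and are assumed for illustration.

```python
import random

# Generic illustration of the binarized scoring rule (BSR) for a report
# about a binary event. Prize amounts are hypothetical placeholders.

def bsr_payment(report, event_occurred, high_prize=100, low_prize=0, rng=random):
    """Pay high_prize with probability 1 - (report - outcome)^2, otherwise low_prize.
    `report` is the stated probability (0 to 1) that the event occurs."""
    outcome = 1.0 if event_occurred else 0.0
    win_prob = 1.0 - (report - outcome) ** 2
    return high_prize if rng.random() < win_prob else low_prize

# Example: a member reports a 0.7 belief that the DM chose the high
# investment, and the payment is resolved against the realized choice.
print(bsr_payment(report=0.7, event_occurred=True))
```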

TREATMENTS:
The between-subject treatments include:

Treatment 1: All participants first make investment choices in the role of the DM, and then all participants are asked to state their beliefs in the role of group members. In other words, decisions of both DMs and group members are elicited using the strategy method.

Treatment 2: Participants are informed of their roles at the beginning of the experiment. They remain in the same group and role for all three rounds of the investment task. They then make their decisions in their respective roles. At the end of each round, we also ask members to report (hypothetically) the investment they would have chosen if they were the DM of the group.
Randomization Method
Treatments are assigned at the session level. There are between 20 and 40 sessions in total (depending on session sizes), and the treatment assigned to each session is pre-determined by the experimenters prior to the session. Participants are therefore assigned to treatments based on the session they have registered for. Since they do not know which treatment is assigned to each session, treatment assignment is essentially random.

Within a given session, participants are randomly divided into groups of three for the investment task and groups of two for the dictator game by the experimental software (oTree). Participants’ roles within each group (DM and members for the investment task; dictator and recipient for the dictator game) are also determined randomly by the experimental software.
Randomization Unit
The unit of randomization is at the individual participant level for both DMs and members. Participants are assigned to treatments individually based on the session they have registered for. Within each session, even though participants are divided into groups, they do not receive any feedback between any two rounds of the investment task, or between the investment task and the dictator game. They are also not given any information about the other participants in their group during the experiment. Hence, for our econometric analysis of participants’ behavior, we can assume that the unit of randomization is at the individual level.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
500
Sample size: planned number of observations
500
Sample size (or number of clusters) by treatment arms
Treatment 1: 200
Treatment 2: 300
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our sample sizes are selected based on power calculations and budgetary constraints. Power calculations are conducted for the analysis of beliefs. Our planned sample sizes will provide us with 200 members in each treatment. In Treatment 1, each participant plays both roles, so beliefs are elicited from all participants. In Treatment 2, 300 participants are required because each group consists of one DM (leader) and two members, and beliefs are elicited only from participants assigned the role of member. Note also that each participant completes three rounds of the investment task, and in each round each member reports one prior belief and two posterior beliefs.

(1) Prior beliefs. Our main power calculation is based on: (i) an average members' prior belief of 0.459 with a standard deviation of 0.259, as obtained from our previous study; (ii) a difference in members' average prior beliefs of 0.05 to 0.1 between the two treatments; (iii) a two-tailed test of the difference between two independent means; (iv) a cluster randomized design with three observations per cluster; (v) a Type I error rate of 0.05 and power of 0.80; and (vi) an intraclass correlation of 0.5. With these parameters, detecting a difference in average prior beliefs of 0.1 between Treatment 1 and Treatment 2 requires a minimum of 71 members in each treatment, and detecting a difference of 0.05 requires a minimum of 282 members per treatment. Our planned sample size of 200 members per treatment will therefore allow us to detect a minimum difference of 0.0593 in average prior beliefs between treatments.

(2) Posterior beliefs. We use an R-squared test of a subset of coefficients in a multiple linear regression model to compute statistical power, with a Type I error rate of 0.05, power of 0.80, and an effect size of 0.02 (small). We are primarily interested in our ability to test for (i) significant deviations from the theoretical Bayesian benchmark and (ii) differences in estimated coefficients between treatments. Test (i) requires 3 predictors in the regression model and 1-3 tested predictors; test (ii) requires 6 predictors and 1-3 tested predictors. This implies that a minimum of between 66 and 96 members is required per treatment (with each member reporting six posterior beliefs). However, we need to account for inconsistent updaters and non-updaters, who will be excluded from our main analysis; in our previous study, they constitute about 24.6% of the sample. To obtain the minimum of 96 members per treatment in the restricted sample, we need at least 129 members per treatment prior to these exclusions. This implies that we need to recruit at least 129 participants in Treatment 1 (all of whom will play the role of members) and 198 participants in Treatment 2 (of whom only 132 will play the role of members).

Finally, the planned samples for each treatment are higher than these minimums to allow us to study heterogeneity in updating behavior, where the sample in each treatment will be further divided into two sub-groups.
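
The prior-belief calculation can be reproduced with a standard two-sample normal approximation in which the variance is inflated by the design effect 1 + (m - 1) * ICC, with m = 3 belief reports per member and ICC = 0.5. The sketch below is a simplified check using the parameters stated above, not the original power-analysis software, so small rounding differences (e.g., 281 versus 282 members) are expected.

```python
import math
from statistics import NormalDist

# Simplified check of the prior-belief power calculation, using the
# parameters stated in this registration.
sd = 0.259                 # std. dev. of prior beliefs (previous study)
alpha, power = 0.05, 0.80  # Type I error rate and power
m, icc = 3, 0.5            # belief reports per member; intraclass correlation
deff = 1 + (m - 1) * icc   # design effect for clustered observations (= 2)

z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)

def members_per_treatment(delta):
    """Members needed per treatment to detect a difference `delta` in mean prior beliefs."""
    obs = 2 * (z * sd / delta) ** 2 * deff  # belief reports required per treatment
    return math.ceil(obs / m)               # three reports per member

def mde(n_members):
    """Minimum detectable difference in mean prior beliefs with n_members per treatment."""
    return z * sd * math.sqrt(2 * deff / (n_members * m))

print(members_per_treatment(0.10))  # 71, as reported above
print(members_per_treatment(0.05))  # ~281 (the registration reports 282)
print(round(mde(200), 4))           # ~0.0593 with 200 members per treatment
```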
IRB

Institutional Review Boards (IRBs)

IRB Name
Business and Economics & Melbourne Business School Human Ethics Advisory Group
IRB Approval Date
2020-09-16
IRB Approval Number
1544873.4
IRB Name
University of East Anglia School of Economics Research Ethics Committee
IRB Approval Date
2020-09-22
IRB Approval Number
0342
Analysis Plan

There is information in this trial that is unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
December 31, 2020, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
December 31, 2020, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
N/A
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
503
Final Sample Size (or Number of Clusters) by Treatment Arms
297, 203
Data Publication

Data Publication

Is public data available?
Yes

Program Files

Program Files
Yes
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
Decision makers in positions of power often make unobserved choices under risk and uncertainty. In many cases, they face a trade-off between maximizing their own payoff and those of other individuals. What inferences are made in such instances about their choices when only outcomes are observable? We conduct two experiments that investigate whether outcomes are attributed to luck or choices. Decision makers choose between two investment options, where the more costly option has a higher chance of delivering a good outcome (that is, a higher payoff) for the group. We show that attribution biases exist in the evaluation of good outcomes. On average, good outcomes of decision makers are attributed more to luck as compared to bad outcomes. This asymmetry implies that decision makers get too little credit for their successes. The biases are exhibited by those individuals who make or would make the less prosocial choice for the group as decision makers, suggesting that a consensus effect may be shaping both the belief formation and updating processes.
Citation
Erkal, Nisvan, Lata Gangadharan, and Boon Han Koh. 2022. "By chance or by choice? Biased attribution of others' outcomes when social preferences matter." Experimental Economics 25: 413–443.

Reports & Other Materials