Outcome bias: Information complexity and domain specificity

Last registered on September 04, 2023

Pre-Trial

Trial Information

General Information

Title
Outcome bias: Information complexity and domain specificity
RCT ID
AEARCTR-0011607
Initial registration date
September 01, 2023


First published
September 04, 2023, 6:58 AM EDT


Locations

Primary Investigator

Affiliation
University of Arkansas

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2023-09-05
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project explores a principal-agent setting where the agent can exert costly effort to improve the likelihood of success for the principal. The principal then has the opportunity to punish the agent. The agent cannot, however, guarantee the outcome for the principal. There are always states of the world outside of the agent's control where the principal wins or loses regardless of the agent's effort level. Under two different treatments, these states of the world are presented in different ways. This project measures the effect of their presentation on effort and punishment. This project also measures how outcomes in the principal-agent interaction affect perceptions of agents in other domains.

External Link(s)

Registration Citation

Citation
Brownback, Andy. 2023. "Outcome bias: Information complexity and domain specificity." AEA RCT Registry. September 04. https://doi.org/10.1257/rct.11607-1.0
Experimental Details

Interventions

Intervention(s)
Principals and agents will interact under two treatments that affect the presentation of which states of the world the agent controls and which the agent does not. Principals' perceptions of agents across other domains will then be collected.
Intervention Start Date
2023-09-05
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
For the first area of exploration, the key outcomes we will measure are:

- Investment by the agent under different information regimes.
- Punishment (in general) by the principal under different information regimes.
- How each information regime affects the following (an illustrative regression sketch follows this list):
  - How sensitive the principal's punishment choice is to the agent's effort.
    I will regress punishment on effort.
  - How sensitive the principal's punishment choice is to their outcome (win or lose).
    I will regress punishment on outcome after controlling for effort.
  - How sensitive the principal's punishment choice is to their outcome when that outcome is within the agent's control.
    I will regress punishment on outcome interacted with whether the region is within the agent's control, after controlling for effort and for whether the region is within the agent's control.
  - How sensitive the principal's punishment choice is to their outcome when that outcome is outside of the agent's control.
    I will regress punishment on outcome interacted with whether the region is within the agent's control and with whether the principal was assigned to the Separated Information treatment, after controlling for effort, whether the region is within the agent's control, the information treatment, and the interactions of these control variables.
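To make these specifications concrete, here is a minimal sketch in Python with statsmodels. It assumes a hypothetical round-level data file and variable names (punish, effort, win, agent_control, separated, principal_id) and clusters standard errors by principal; the actual estimation code and variable names may differ.

```python
# Minimal sketch of the four punishment regressions described above.
# All file/variable names and the clustering choice are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical round-level data: one row per principal-round.
# win, agent_control, and separated are assumed to be 0/1 indicators.
df = pd.read_csv("principal_rounds.csv")

cluster = {"cov_type": "cluster", "cov_kwds": {"groups": df["principal_id"]}}

# (1) Punishment on effort.
m1 = smf.ols("punish ~ effort", data=df).fit(**cluster)

# (2) Punishment on outcome (win/lose), controlling for effort.
m2 = smf.ols("punish ~ win + effort", data=df).fit(**cluster)

# (3) Outcome interacted with whether the region is under the agent's control,
#     controlling for effort and for the control-region indicator.
m3 = smf.ols("punish ~ win * agent_control + effort", data=df).fit(**cluster)

# (4) Triple interaction with the Separated Information treatment, with all
#     lower-order interactions and effort as controls.
m4 = smf.ols("punish ~ win * agent_control * separated + effort", data=df).fit(**cluster)

for m in (m1, m2, m3, m4):
    print(m.summary())
```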

For the second area of exploration, the key outcomes we will measure are:

- How an agent's effort affects the likelihood that a principal will select them to be a dictator in a subsequent dictator game.
- How an agent's effort affects the likelihood that a principal will select them to perform a real-effort task on their behalf in a subsequent game.

More specifically, the key outcomes for the second area of exploration will be the perceptions of the principal that we elicit in each of four domains:

First, the principals will guess the average effort that the agent invested across all rounds. Their reward will increase as their guesses are more accurate.
I will regress the principal's guess on the outcome observed after controlling for the effort observed.

Second, the principals will select an agent to play the role of dictator in a dictator game. The selected dictator will determine any additional payments given to the principal.
I will regress the principal's selection of dictator on the outcome observed after controlling for the effort observed.
NOTE: Variables will be standardized within the set of dictators a principal observes, since the principal must make a selection from a set of four possible dictators.

Third, the principals will select an agent to perform a real-effort task on their behalf. The principal will receive greater earnings if the selected agent performs the real-effort task accurately.
I will regress the principal's selection of agent for the real-effort task on the outcome observed after controlling for the effort observed.
NOTE: Variables will be standardized within the set of agents a principal observes, since the principal must make a selection from a set of four possible agents.

Fourth, the principals will guess demographic characteristics of the agents. They will be paid for their accuracy.
I will regress the principal's guesses about demographic information on the outcome observed and on the effort observed to measure stereotyping in both dimensions (an illustrative regression sketch for this area follows).
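The sketch below illustrates how these perception regressions and the within-choice-set standardization might be implemented. The file names and column names (guess_effort, tokens, win, effort, choice_set_id, principal_id) are assumptions for illustration, not the study's actual variables.

```python
# Illustrative sketch of the perception regressions; data layout and
# column names are assumptions, not the study's actual variable names.
import pandas as pd
import statsmodels.formula.api as smf

guesses = pd.read_csv("principal_guesses.csv")      # hypothetical: one row per guess
tokens = pd.read_csv("principal_allocations.csv")   # hypothetical: four rows per choice set

# (1) Guess of the agent's average effort: regressed on the observed outcome,
#     controlling for the observed effort.
g1 = smf.ols("guess_effort ~ win + effort", data=guesses).fit(
    cov_type="cluster", cov_kwds={"groups": guesses["principal_id"]})

# (2)-(3) Dictator and real-effort delegate selections: standardize outcome and
#         effort within each set of four candidates before regressing token allocations.
for col in ("win", "effort"):
    grp = tokens.groupby("choice_set_id")[col]
    tokens[col + "_z"] = (tokens[col] - grp.transform("mean")) / grp.transform("std")
# NOTE: choice sets where all four candidates share the same value would need separate handling.

a1 = smf.ols("tokens ~ win_z + effort_z", data=tokens).fit(
    cov_type="cluster", cov_kwds={"groups": tokens["principal_id"]})

# (4) Demographic guesses follow the same pattern as (1), with the guessed
#     characteristic (e.g., gender, age, GPA) as the dependent variable.
print(g1.summary())
print(a1.summary())
```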


Primary Outcomes (explanation)
For the first area of exploration, we care about how sensitive principals are to the presentation of information because the Separated Information treatment makes the limits of the agent's control clear. Thus, if punishment in states of the world outside the agent's control diminishes under the Separated Information treatment, the principal may have been misattributing their loss to the agent's effort under Combined Information but no longer does once the information is clear. If, however, principals continue to punish based on outcomes outside of the agent's control despite the clarity of information, that behavior is less likely to result from confusion and more likely the result of negative affect causing the principal to "blame" the agent.

For the second area of exploration, we care about how much behavior in the principal-agent context affects the principal's perceptions of agents in other domains. In particular, we care about how the principal's perceptions are colored by the luck that the agent experienced in prior interactions. Under attribution bias, we may expect that a principal will think that a lucky agent 1) will exert more effort, on average; 2) will allocate more to them in a dictator game; or 3) will exert more effort in a real-effort task. Conditioning on luck in this way is sub-optimal behavior for the principal. Additionally, this misattribution may cause stereotyping, so we care about whether a principal may think that a lucky agent is more or less likely to be 1) male or female; 2) old or young; or 3) high or low GPA.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Principals and agents will interact under two treatments that affect the presentation of which states of the world the agent controls and which the agent does not. Principals' perceptions of agents across other domains will then be collected.
Experimental Design Details
The principal will win based on the roll of an eight-sided die. The principal will always win with a 1 or 2 and will always lose with a 7 or 8. The agent will control whether the principal wins with a 3, 4, 5, or 6. The agent must sacrifice their own money to ensure that the principal wins with these numbers. The cost per number will vary within a 3-cent band around $0.25. The principals can then destroy up to $4 of the agent's money at a cost of $0.25 per dollar.

The Combined Information treatment will present all 8 numbers as part of one random process. In announcing the outcome, we will simply announce the number and outcome (win or loss).
The Separated Information treatment will present the 4 numbers under the agent's control on the left side and the 4 numbers outside of the agent's control on the right side. In announcing the outcome, we will first announce the side (left or right) and then the number and outcome (win or loss).

After 10 rounds of this interaction, we will elicit principals' perceptions of agents based on one randomly-selected round that they observe. Each principal will first guess the average effort that the agent exerted across all 10 of their rounds, based only on the one observed round. Principals will then select agents to play the role of dictator and to serve as their delegate in a real-effort task. They will, again, observe only one randomly-selected round of each agent. For each selection, principals will be given 100 "Tokens" to allocate across 4 agents: the more Tokens allocated to an agent, the larger the principal's share of that agent's outcome (either their dictator allocation or their real-effort reward). Thus, in each case, we will evaluate the principals' allocations of Tokens relative to the 4 agents they were presented with. Finally, we will ask principals to guess the demographics of the agents based on the one randomly-selected round that the principal is shown. They will be incentivized to guess the agent's gender, age, and GPA.
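As a compact illustration of one round's mechanics, the sketch below encodes the stated die regions and costs. The exact interpretation of the cost band, the omission of endowments and prize amounts, and the timing of the punishment decision (which in the experiment is made after the outcome is announced) are simplifying assumptions.

```python
# Minimal sketch of one round, using the stated die regions and costs.
# Endowments and the exact cost-draw mechanics are assumptions for illustration.
import random

def play_round(numbers_bought, punishment):
    """numbers_bought: subset of {3, 4, 5, 6} the agent pays to secure for the principal.
    punishment: dollars of the agent's money the principal destroys (0 to 4).
    In the experiment, punishment is chosen after the outcome is announced;
    it is passed in here only to tally the payoff components."""
    cost_per_number = random.uniform(0.235, 0.265)  # "3-cent band around $0.25" (interpretation assumed)
    roll = random.randint(1, 8)                     # eight-sided die
    if roll in (1, 2):
        win = True                                  # principal always wins
    elif roll in (7, 8):
        win = False                                 # principal always loses
    else:
        win = roll in numbers_bought                # agent-controlled region
    return {
        "roll": roll,
        "win": win,
        "agent_cost": cost_per_number * len(numbers_bought) + punishment,
        "principal_punish_cost": 0.25 * punishment,  # $0.25 per dollar destroyed
    }

# Example: the agent secures 3 and 4; the principal destroys $2 of the agent's money.
print(play_round({3, 4}, punishment=2))
```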
Randomization Method
Principals and agents will be randomly assigned by the computer to one of the two information conditions, each with 50% probability. They will stay in this condition throughout the study.

In each interaction, the agent's cost per number will vary randomly (again determined by the computer) within a 3-cent band around $0.25 (we will use this variation to estimate the local demand curve).

Next, within each interaction, the outcome of the principal will be (partly) randomly determined by the roll of the die, which will be simulated by the computer.

Finally, each elicitation will feature randomly-selected observations of one round of different agents. Principals will only ever observe agents who are assigned to the same information treatment (Combined or Separated). But, otherwise, the computer will make these selections entirely at random among all rounds of all agents in the study session.
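A minimal sketch of these randomizations is below. The data structures and the exact form of the cost band are assumptions; the per-round cost and die draws mirror the round sketch above.

```python
# Sketch of the computerized randomization; mechanics beyond the text are assumed.
import numpy as np

rng = np.random.default_rng()

def assign_information_condition(subject_ids):
    # One draw per subject: 50% Combined / 50% Separated, fixed for the whole study.
    return {s: rng.choice(["Combined", "Separated"]) for s in subject_ids}

def draw_round():
    # Per-round draws: cost per number within the 3-cent band, and the die roll.
    return {"cost_per_number": rng.uniform(0.235, 0.265),
            "die_roll": int(rng.integers(1, 9))}

def sample_observed_round(all_agent_rounds, principal_condition):
    # Principals only observe agents assigned to their own information condition;
    # within that set, one round is drawn uniformly at random.
    eligible = [r for r in all_agent_rounds if r["condition"] == principal_condition]
    return eligible[rng.integers(len(eligible))]
```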
Randomization Unit
Each individual will experience the information randomization once. However, they will experience the cost randomization, the die-roll randomization, and the agent-selection randomization each round.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
160 subjects: 80 principals and 80 agents (40 subjects per role in each information treatment)
Sample size: planned number of observations
10 punishment decisions per principal --> 800 punishment decisions (400 per information treatment)
20 dictator choices and real-effort delegate choices per principal --> 1600 dictator and delegate choices (800 per information treatment)
5 guesses about average effort and guesses about demographics --> 400 guesses (200 per treatment)
Sample size (or number of clusters) by treatment arms
160 subjects: 80 principals and 80 agents (40 subjects per role in each information treatment)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
From the pilot: The effect size for outcome bias in punishment choices is 0.42 (SE = 0.25). The differential effect size for outcome bias in punishment choices across treatments is 0.22 (SE = 0.59). The difference-in-differences in effect size for outcome bias in punishment choices across treatments and across agent-controlled and non-agent-controlled states is 2.17 (SE = 1.21).

I calculated a conservative 90% power using Stata's "power" command. Testing the smallest effect size (differential punishment across treatments), the necessary sample is 806 punishment observations, corresponding to approximately 80 principals, or 160 total subjects. This is more than sufficient for all other tests.

The effect size for outcome bias in dictator selection is 8.46 (SE = 3.75). The effect size for outcome bias in real-effort delegate selection is 6.49 (SE = 2.51). The effect size for outcome bias in guesses about average contribution is 0.07 (SE = 0.09). The effect size for outcome bias in guesses about demographics is not available because of a software error in the pilot.

Using the sample size from the first set of tests (N = 160), I calculated power for each of these tests. Power for the tests of outcome bias in dictator selection and real-effort agent selection approaches 100%. Power is smallest for the test of outcome bias in guesses about average contribution, at 84%. Given that this sample size is appropriate for all other tests, and that 80% is the typical benchmark for statistical power, I consider it sufficient for these tests.
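For illustration only, the sketch below shows an analogous power calculation. The registration's calculations were done with Stata's power command; this Python version assumes the standard error shrinks with the square root of the number of observations, and the pilot observation count used in the example call is a placeholder, not a reported figure.

```python
# Hedged sketch of an analogous power calculation; pilot sample sizes are placeholders.
from math import sqrt
from scipy.stats import norm

def power_two_sided(effect, se_pilot, n_pilot, n_planned, alpha=0.05):
    """Approximate power for a two-sided z-test, assuming the standard
    error shrinks with the square root of the number of observations."""
    se_planned = se_pilot * sqrt(n_pilot / n_planned)
    z_crit = norm.ppf(1 - alpha / 2)
    z_effect = abs(effect) / se_planned
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

# Example with the pilot estimate for outcome bias in punishment (0.42, SE = 0.25);
# the pilot observation count below is a placeholder, not a value from the registration.
print(power_two_sided(effect=0.42, se_pilot=0.25, n_pilot=200, n_planned=800))
```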
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Arkansas Institutional Review Board
IRB Approval Date
2023-06-15
IRB Approval Number
2305471736

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication


Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials