Machine learning assisted advice
Last registered on March 22, 2019

Pre-Trial

Trial Information
General Information
Title
Machine learning assisted advice
RCT ID
AEARCTR-0003529
Initial registration date
March 15, 2019
Last updated
March 22, 2019 11:55 AM EDT
Location(s)
Primary Investigator
Affiliation
WHU - Otto Beisheim School of Management
Other Primary Investigator(s)
PI Affiliation
University of Cologne
Additional Trial Information
Status
In development
Start date
2019-03-17
End date
2019-03-24
Secondary IDs
Abstract
Algorithms based on enormous amounts of data enable new ways of supporting decision making by integrating and improving data-driven predictions. The present project investigates whether this algorithmic assistance can suppress ethical considerations in decision making. We will focus on two important drivers of this phenomenon, namely diffusion of responsibility (e.g., Bartling and Fischbacher, 2011) and ethical fading (e.g., Tenbrunsel and Messick, 2004).
External Link(s)
Registration Citation
Citation
Irlenbusch, Bernd and Rainer Michael Rilke. 2019. "Machine learning assisted advice." AEA RCT Registry. March 22. https://www.socialscienceregistry.org/trials/3529/history/43804
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2019-03-23
Intervention End Date
2019-03-24
Primary Outcomes
Primary Outcomes (end points)
The fraction of subjects who over-report the outcome of a fair die roll.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We aim to utilize the infrastructure of an online labour marketplace to run a conflict of interest game in which participants receive assistance from algorithms (Cain et al., 2005). Participants are randomly assigned the role of sender or receiver. The sender observes the outcome of a computerized roll of a fair six-sided die and sends a message about the outcome to the receiver. The receiver does not know the true outcome of the die roll; he sees only the message and must choose a number between one and six. The sender's payment increases in the number chosen by the receiver, whereas the receiver is paid only if the number he chooses matches the actual die roll. The game therefore gives the sender an opportunity to over-report the actual die roll outcome in order to gain an additional payoff. Our four treatments vary the estimates senders receive about the behaviour of the receivers. A minimal simulation of this payoff structure is sketched below.
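The following is a minimal Python sketch of one round of the game, assuming illustrative payoff parameters (the registry does not specify the actual stakes; base_pay is a placeholder).

import random

def play_round(message_fn, guess_fn, base_pay=0.10):
    """Simulate one sender-receiver round of the conflict of interest game."""
    die_roll = random.randint(1, 6)  # true outcome, observed only by the sender
    message = guess = None
    message = message_fn(die_roll)   # sender reports a number from 1 to 6
    guess = guess_fn(message)        # receiver chooses a number from 1 to 6
    sender_pay = base_pay * guess    # sender's pay increases in the receiver's choice
    receiver_pay = base_pay if guess == die_roll else 0.0  # receiver paid only if correct
    return die_roll, message, guess, sender_pay, receiver_pay

# Example: an honest sender paired with a receiver who trusts the message.
print(play_round(message_fn=lambda roll: roll, guess_fn=lambda msg: msg))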
Experimental Design Details
We will employ four treatments and one additional study. In the Baseline treatment (T1), senders receive no advice about the behaviour of the receiver. This treatment serves as a baseline and as a data source for the other three treatments: based on the behaviour of receivers in the Baseline treatment, we will provide senders with an estimate of the receiver's response. In the second treatment (T2), senders will receive an estimate of the average number chosen by the receiver for each potential message of the sender. In the third treatment (T3), senders will receive only an estimate of which action (message) is payoff-maximizing. In the fourth treatment (T4), the elements from T2 and T3 are combined.

One potential explanation for increased over-reporting in the experimental treatments (T2-T4) is that the estimates about the behaviour of the receiver change senders' beliefs. The reasoning goes as follows: senders hold homegrown beliefs about the reaction of the receiver; providing the estimate of the receiver's behaviour changes these beliefs, which in turn leads them to send different messages than they would under their homegrown beliefs. If this were the case, over-reporting would not be the result of ethical fading, but rather the result of a different utility-maximization process. To tackle this issue, we plan to run a separate study in which we elicit senders' homegrown beliefs in an incentivized way. We will invite participants to take part in this experiment and ask them what they think receivers will do in response to each possible message of a sender. We will compare these beliefs with receivers' behaviour from the Baseline treatment (T1).
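To make the treatment information concrete, here is a hedged sketch of how the T2 and T3 estimates could be derived from Baseline (T1) data. The function name, the data format, and the simple frequency-based estimator are illustrative assumptions; the registry says only that the estimates are based on receivers' behaviour in T1.

from collections import defaultdict

def receiver_estimates(baseline, base_pay=0.10):
    """baseline: list of (message, receiver_choice) pairs observed in T1."""
    choices = defaultdict(list)
    for message, choice in baseline:
        choices[message].append(choice)
    # T2-style estimate: average number chosen by receivers per message.
    avg_choice = {m: sum(c) / len(c) for m, c in choices.items()}
    # T3-style estimate: the message maximizing the sender's expected payoff,
    # given that sender pay increases linearly in the receiver's choice.
    best_message = max(avg_choice, key=lambda m: base_pay * avg_choice[m])
    return avg_choice, best_message

avg, best = receiver_estimates([(6, 6), (6, 5), (3, 3), (1, 2)])
print(avg, best)  # {6: 5.5, 3: 3.0, 1: 2.0} 6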
Randomization Method
Done by software (Questback).
Randomization Unit
Individual.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
.
Sample size: planned number of observations
We plan to recruit 1000 individuals.
Sample size (or number of clusters) by treatment arms
We plan to recruit about 250 individuals per treatment (100 for the additional study described in the experimental design).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers