(Tho)roughly Explained: The Impact of Algorithmic Transparency on Beliefs and Decision Making
Last registered on January 07, 2021


Trial Information
General Information
(Tho)roughly Explained: The Impact of Algorithmic Transparency on Beliefs and Decision Making
Initial registration date
December 07, 2020
Last updated
January 07, 2021 6:42 AM EST
Primary Investigator
Leibniz Institute for Financial Research SAFE
Other Primary Investigator(s)
Additional Trial Information
Start date
End date
Secondary IDs
An increasing number of scholars and practitioners are raising alarm about the black-box nature of machine learning (ML) predictions and demand more explanation of algorithmic outputs, specifically individual-level explanations of why an ML algorithm produces a given prediction and which inputs drive it. While more transparent ML models arguably improve the detection of opaque discrimination, help enhance trust in the machine, and allow for more accountability, little is known about whether, and if so how, algorithmic transparency causally affects human stakeholders' beliefs and behavior. Answering this question is crucial for understanding the potential downstream effects of deploying transparent ML systems, which may occur, for example, because algorithms change people's beliefs about real-world relationships for the worse. To shed light on this issue, we design a novel experimental protocol that allows us to address the considerable endogeneity concerns that inevitably arise in a field setting. Specifically, our experiment allows us to examine whether algorithmic transparency in a strategic setting (i) induces people to change their system of beliefs and behavior, (ii) affects efficiency and social welfare, and (iii) changes people's perception and usage of the ML system, and (iv) to isolate treatment heterogeneities. In our online experiment, 600 participants make a series of strategic transfer decisions under uncertainty that affect their own and other people's material income. Our key experimental variation is whether participants, in addition to an ML model's prediction about the material consequences of their decision, receive a human-interpretable, state-of-the-art explanation (using LIME) of why the model produces a specific prediction.
External Link(s)
Registration Citation
Bauer, Kevin. 2021. "(Tho)roughly Explained: The Impact of Algorithmic Transparency on Beliefs and Decision Making." AEA RCT Registry. January 07. https://doi.org/10.1257/rct.6854-1.1.
Experimental Details
The basic structure of the game we employ as our main workhorse is as follows. There are two players: a trustor and a trustee. Initially, we endow both players with 10 monetary units (MU). The trustor then needs to decide whether or not to transfer her 10 MU to the trustee. Before the trustor makes the decision, she observes a certain number of the trustee's personal traits. If the trustor does not make a transfer, the game ends and both players receive a payoff of 10 MU. If she makes a transfer, the trustee learns about this decision and has to decide whether to transfer his endowment to the trustor or keep it. Whenever one player decides to make a transfer, we double the transferred amount so that the opponent receives 20 MU. One can think about this game as a reduced form of a sequential prisoner's dilemma where all trustees are either strong reciprocators or pure income maximizers. We employ this reduction to simplify the experimental structure so that it resembles real-life settings more closely, where transactions fail if trustors decide against a transfer. Figure \ref{game} illustrates the structure of the game.
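The payoff logic described above can be sketched as follows (a minimal illustration; the function and variable names are our own, not from the study):

```python
def payoffs(trustor_transfers, trustee_reciprocates):
    """Payoffs (trustor, trustee) in MU for the reduced trust game.

    Both players start with an endowment of 10 MU; any transferred
    10 MU is doubled on arrival.
    """
    if not trustor_transfers:
        return (10, 10)   # no transfer: game ends, both keep their endowment
    if trustee_reciprocates:
        return (20, 20)   # trustee returns his endowment, doubled to 20 MU
    return (0, 30)        # trustee keeps his endowment plus the doubled transfer

# A transfer is efficient (total payoff 40 vs. 20 MU) but risky for the trustor.
```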

Our key between-subject treatment variation is whether trustors, for a subset of the games, observe a machine learning model's prediction about trustees' propensity to reciprocate an initial transfer that is either opaque or accompanied by a human-interpretable explanation of why the model produces specific predictions.

The central feature of the experiment is that participants always take on the role of the trustor, playing against other subjects from a previous field study. This way, we gain tight control of trustees' personal characteristics and machine learning predictions while letting participants interact with real human beings whose material well-being they influence with their decision instead of using simulations. In the following, we explain the exact procedure of letting experimental and previous field study participants play against each other.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Prior and posterior transfer decisions in stages 1 and 5.
Prior and posterior trait rankings from stages 2 and 4.
Ignorance of algorithmic output: Share of decisions in stage 3 that oppose algorithmic predictions
Optimality of decisions: Share of decisions that maximize participants' material-payoff
Assessment of algorithmic performance: Guesses about the algorithm's accuracy (note that we also use these measures as controls).
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
In our experiment, we harness data from a comprehensive incentivized field study that we conducted over three years between 2016 and 2019. The field study comprises an incentivized one-shot sequential prisoners' dilemma, i.e., a revealed-preference paradigm, and a broad set of survey items on participants' demographics, socio-economic background, cognitive abilities, and personality traits. For our experiment, we only use the subset of field study participants who, as player B, decided not to transfer their endowment in case player A initially refrains from a transfer, i.e., strong reciprocators and pure material-income maximizers. Thereby, we reduce the game to the structure we employ in our experiment. Overall, the field data we use for our experiment comprises 1,104 distinct observations. We randomly split these observations into two representative subsets: a training set (n=1048) and a player set (n=56). We use the training set to train a Gradient Boosted Random Forest that, based on a subset of 10 socio-demographic traits, predicts trustees' behavior in case the trustor initially transfers her endowment. The player set serves as the representative population of trustees against which participants in our experiment play. To determine the outcomes and payoffs in a specific game, we match experimental participants' trustor decisions with the conditional decisions of field study participants. Notably, we inform participants in the experiment that we recontact and pay field study participants according to the game outcomes, so that participants are aware that their decisions impact the material well-being of real people.
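The random split of the field data into a training set and a player set can be sketched as follows (an illustrative sketch only; the function name and the seed are our own assumptions, not from the study):

```python
import random

def split_field_data(observations, n_players=56, seed=7):
    """Randomly partition the 1,104 field-study observations into a
    player set (n=56) and a training set (the remaining n=1048)."""
    rng = random.Random(seed)
    shuffled = observations[:]
    rng.shuffle(shuffled)
    player_set = shuffled[:n_players]
    training_set = shuffled[n_players:]
    return training_set, player_set

field_data = [{"participant_id": i} for i in range(1104)]
training_set, player_set = split_field_data(field_data)
# len(training_set) == 1048, len(player_set) == 56
```

The training set would then be fed to a gradient-boosted tree learner (e.g., via a standard ML library), while the player set supplies the real opponents for the experiment.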

Overall, our experiment consists of 2 treatments, each comprising 5 subsequent stages. In stage 1, we elicit a behavioral prior by letting participants make a series of one-shot transfer decisions when matched with random trustees from the player set. In stage 2, we elicit participants' prior belief about which of a trustee's personal traits are most informative with regard to their propensity to reciprocate an initial transfer. Stage 3 serves as our treatment manipulation: participants make a series of one-shot transfer decisions, playing against random individuals from the player set. Conditional on the treatment, we provide them with either an opaque or a transparent machine learning model prediction to augment their transfer choice. In stage 4, we elicit participants' posterior belief about which of trustees' personal traits they consider most informative regarding their propensity to reciprocate an initial transfer. In stage 5, participants play against the same trustees as in the first stage, providing us with a behavioral posterior. The experiment ends with the elicitation of several potential covariates.

In stage 1, participants play 10 rounds of the reduced sequential prisoner's dilemma against different individuals we randomly draw from the player set. For every transfer decision, we endow participants with 10 MU, which they can either transfer or keep for themselves, as explained above. Before they make their choice, participants observe the 10 personal traits of the individual they play against in the given round. Participants do not receive feedback about the outcome of the games between rounds. This way, we prevent idiosyncratic learning and the formation of experience based on outcomes and the opponents' personal traits.

Ultimately, transfer decisions elicited in this stage serve two purposes. First, they constitute behavioral priors, allowing us to identify participants' initial biases and choice patterns conditional on their opponents' traits in the absence of information produced by a decision support system. Put differently, participants' transfer decisions in stage 1 form an individual-level baseline. Second, by making decisions, participants become familiar with the task. For the second stage, we endow participants with 10 MU and match them with a random individual from the player set whom they have not encountered in stage 1. Participants learn that they again have to decide whether or not they want to transfer their endowment to their opponent.

In contrast to the previous stage, participants, before deciding upon the transfer, can only observe 3 out of the 10 personal traits of the other individual. Participants have to choose which traits they want to see. We ask them to select three distinct traits and mark them as first, second, and third choice. The trait they mark as first choice is shown to them when making their transfer decision with a probability of 1. The traits they mark as second and third choice are revealed with respective probabilities of 0.9 and 0.8. With the inverse probabilities of 0.1 and 0.2, they instead observe distinct traits of the trustee that we randomly draw from the remaining 7 traits that the participant did not select. This procedure allows us to elicit, in an incentive-compatible way, an interpretable ordering of participants' prior beliefs about which traits of a trustee they consider most informative for projecting how this person will respond to initially being transferred 10 MU.
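The probabilistic reveal mechanism just described can be sketched as follows (a minimal sketch; the function and trait names are our own, illustrative choices):

```python
import random

def reveal_traits(ranked_picks, all_traits, rng):
    """Reveal mechanism for stages 2 and 4: the first-, second-, and
    third-ranked traits are shown with probabilities 1.0, 0.9, and 0.8;
    otherwise a distinct substitute is drawn at random from the 7 traits
    the participant did not select."""
    probs = [1.0, 0.9, 0.8]
    not_selected = [t for t in all_traits if t not in ranked_picks]
    shown = []
    for trait, p in zip(ranked_picks, probs):
        if rng.random() < p:
            shown.append(trait)
        else:
            # substitute with a non-selected trait not already revealed
            shown.append(rng.choice([t for t in not_selected if t not in shown]))
    return shown

traits = [f"trait_{i}" for i in range(10)]
shown = reveal_traits(["trait_0", "trait_1", "trait_2"], traits, random.Random(1))
# shown[0] is always the first choice; shown holds three distinct traits
```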
Once participants have decided upon a selection, we randomly determine which traits they actually see, reveal them, and let them make their transfer decision. Participants do not receive feedback on the outcome of the game at this point.

In stage 3 of our experiment, participants play 20 rounds of the reduced sequential prisoner's dilemma against different individuals we randomly draw from the player set. There is no feedback on game outcomes between rounds. For every game, we endow participants with 10 MU and ask them to make a transfer decision. As in the first stage, participants observe all 10 of the trustee's personal traits before making a decision. In addition to observing their current opponent's traits, participants in stage 3 also receive a prediction, produced by a machine learning model (Gradient Boosted Random Forest), about this individual's propensity to reciprocate an initial transfer. To mitigate participants' potential initial skepticism towards the model's predictions, we explain to them in detail how the model operates and reveal its performance on a representative test set. Notably, we explicitly inform participants that the model produces the prediction using only the opponent's 10 personal traits, which they also observe, i.e., we emphasize that the model does not have access to any additional information about the opponent. Our between-subject treatment variation is whether or not participants, in addition to the prediction as such, also receive a human-interpretable explanation of why the system makes a specific prediction. Specifically, in our \textit{Transparent System} treatment (TS), we reveal all 10 individual feature importances of a prediction, together with a visual illustration and an intuitive explanation of how to interpret the corresponding values. We employ LIME (Local Interpretable Model-Agnostic Explanations), a state-of-the-art algorithm, to produce the explanations.
This way, participants always learn the influence and weight each of the trustee's traits has on the specific prediction. Put differently, we inform them, on an individual level, which traits the machine considers most meaningful to forecast a given trustee's response to an initial transfer. In contrast, participants in our Opaque System treatment (OS) merely observe the predictions without any additional explanation. After participants have made their first 10 and second 20 transfer decisions, we ask them in both treatments to make incentivized guesses about the machine's predictive performance: they have to guess the percentage of times the system's prediction is correct (accuracy score). Subjects receive a payoff of 3 MU for every guess that is off by no more than 20 percentage points. Participants' guesses provide us with an incentive-compatible measure of their assessment of the system's reliability and performance.

In stage 4, we match participants with a random individual from the player set whom they have not played against in previous stages and replicate the procedure that we employ in stage 2. Thereby, we elicit participants' posterior belief about which three traits of a trustee they consider most informative for projecting whether or not this individual reciprocates a transfer. That is, participants need to choose and rank three distinct traits that they want to observe before deciding upon a transfer of the 10 MU we endow them with.
We emphasize that participants will not observe the prediction of a machine learning model before they decide upon a transfer, but only three traits of the opponent. In other words, after participants could use a decision support system's output to augment their transfer decisions in the previous stage, we now remove the system again. This enables us to identify whether participants internalized the predictions (together with the explanations), i.e., learned from the system's output and updated their beliefs about the informativeness of trustees' traits regarding their propensity to behave reciprocally. Finally, in stage 5, participants play 10 rounds of the reduced sequential prisoner's dilemma without feedback against the same 10 individuals that they encountered in the first stage. We randomize the order in which participants play against the 10 trustees from stage 1. Participants only observe trustees' 10 personal traits before making their transfer decision, but no machine learning model prediction. By letting participants again play against the same individuals as in stage 1, we can observe any individual-level changes in their behavior entailed by the exposure to a (transparent) decision support system's output.
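The study uses LIME proper to generate the feature importances shown in the TS treatment. Purely to illustrate what a per-prediction attribution conveys, the following is a much simpler perturbation-based stand-in (explicitly NOT LIME, which fits a weighted linear surrogate on perturbed samples; the toy model and all names here are our own):

```python
def local_attributions(predict, instance, baselines):
    """Toy per-prediction attribution: replace each trait with a baseline
    value and record how much the predicted reciprocation probability
    changes. A simplified stand-in to illustrate individual-level
    explanations; the study itself uses LIME."""
    base = predict(instance)
    return {name: base - predict({**instance, name: baselines[name]})
            for name in instance}

def toy_model(x):
    # Hypothetical model: reciprocation probability rises with both traits.
    return min(1.0, 0.02 * x["age"] + 0.5 * x["trust_score"])

attr = local_attributions(toy_model,
                          {"age": 30, "trust_score": 0.6},
                          {"age": 0, "trust_score": 0.0})
# For this prediction, "age" contributes 0.6 and "trust_score" 0.3,
# so a participant would learn that age dominates this forecast.
```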
Experimental Design Details
Randomization Method
Randomization occurs on the session or individual level using a computer.
Randomization Unit
Randomization occurs on the session and individual level.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
600 participants in the experiment, collected across 12 sessions.
Sample size: planned number of observations
600 participants in the experiment.
Sample size (or number of clusters) by treatment arms
300 participants per treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
There are two treatments (TS and OS) into which we assign participants with equal probability. We assume a normal distribution of the outcome variables with equal variance. For our power analyses, we use the standard criteria of alpha = 0.05, beta = 0.20, and a two-sided t-test. If the effect size is around 30% of a standard deviation, we need a sample size of 175 per experimental condition. Assuming an effect size of 25%, we require 250 observations per condition. Figure \ref{power} depicts a graphical illustration of our analyses. We opt for a more conservative strategy and collect 300 observations per treatment condition.
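The stated per-arm sample sizes follow from the standard normal-approximation formula for a two-sided, two-sample comparison, n = 2((z_{alpha/2} + z_beta)/d)^2. A quick check (our own sketch, not from the registry):

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Per-arm sample size for a two-sided two-sample test at
    alpha = 0.05 and power = 0.80 (normal approximation):
    n = 2 * ((z_{alpha/2} + z_beta) / d) ** 2"""
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

n_per_group(0.30)  # 175, matching the stated requirement
n_per_group(0.25)  # 252, close to the 250 stated in the text
```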
Supporting Documents and Materials

There are documents in this trial that are unavailable to the public.
IRB Name
Gemeinsame Ethikkommission Wirtschaftswissenschaften der Goethe-Universität Frankfurt und der Johannes Gutenberg-Universität Mainz
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Intervention Completion Date
December 17, 2020, 12:00 AM +00:00
Is data collection complete?
Data Collection Completion Date
December 17, 2020, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
603 participants; randomization occurred at both the individual and the cluster level (20 clusters).
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
20 clusters
Data Publication
Data Publication
Is public data available?

This section is unavailable to the public.
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)