
The Impact of Explanatory Strategies on Employee Trust in Algorithmic Forecasts: An Experimental Investigation

Last registered on March 04, 2025

Pre-Trial

Trial Information

General Information

Title
Explaining AI-Generated Information: Implications for AI Adoption and Employee Preference for Supervisor Advice
RCT ID
AEARCTR-0014040
Initial registration date
August 14, 2024


First published
August 14, 2024, 3:52 PM EDT


Last updated
March 04, 2025, 5:49 AM EST


Locations

Primary Investigator

Affiliation
Ulm University

Other Primary Investigator(s)

PI Affiliation
Ulm University

Additional Trial Information

Status
In development
Start date
2025-03-04
End date
2025-03-24
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Algorithmically generated information plays an increasingly central role in managers' decision-making processes. However, the value of these insights is only fully realized when they are effectively utilized. Consequently, the literature on algorithm aversion has explored the factors that influence the adoption of algorithmic advice. One underexplored phenomenon is managers' tendency to seek reassurance from their superiors. In our experiment, we investigate whether such advice-seeking behavior occurs and whether it can be influenced by providing explanations through explainable AI (XAI). We argue that the effectiveness of XAI in reducing excessive advice seeking depends on several factors: whether the algorithmically generated information aligns with the manager's expectations, and the type and quality of the explanation provided.
External Link(s)

Registration Citation

Citation
Röder, Andreas and Mischa Seiter. 2025. "Explaining AI-Generated Information: Implications for AI Adoption and Employee Preference for Supervisor Advice." AEA RCT Registry. March 04. https://doi.org/10.1257/rct.14040-2.0
Experimental Details

Interventions

Intervention(s)
Using a publicly available dataset from Kaggle (https://www.kaggle.com/c/rossmann-store-sales/data), we develop an AI model to forecast store sales of a German convenience store chain. Additionally, we implement two XAI methods that produce i) feature importance explanations and ii) counterfactual explanations alongside the AI forecasts. Forecasts are displayed in a format similar to contemporary business intelligence tools. In line with our experimental conditions, we deliberately select one store to present to participants throughout the experiment. During the experiment, participants can interact with the dashboard to inform themselves about the store using the information provided by the system, the AI prediction, and the explanation associated with their experimental condition. We will collect data from online crowd workers located in the US and Europe who are at least 18 years old. Each participant will be randomly assigned to one of our three experimental conditions.
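The registration does not include the forecasting or explanation code. The following Python sketch illustrates, under stated assumptions, how the two explanation types could be produced from the Rossmann data: the model choice (gradient boosting), the feature set, and the naive counterfactual search are placeholders, not the authors' implementation; only the column names come from the Kaggle dataset.

```python
# Illustrative sketch only -- not the authors' pipeline. Assumes train.csv
# from the Kaggle Rossmann competition has been downloaded locally; column
# names (Store, DayOfWeek, Sales, Open, Promo, SchoolHoliday) are from that
# dataset, everything else is a plausible stand-in.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("train.csv", parse_dates=["Date"])
df = df[df["Open"] == 1]  # keep only days the store was open

features = ["Store", "DayOfWeek", "Promo", "SchoolHoliday"]
X, y = df[features], df["Sales"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# i) Feature importance explanation: the model's global importance weights.
print("Feature importance:", dict(zip(features, model.feature_importances_)))

# ii) Counterfactual explanation: a single-feature change that moves the
# forecast past a reference value (naive grid search for illustration).
x0 = X.iloc[[0]].copy()
baseline = model.predict(x0)[0]
target = baseline * 1.10  # e.g., "what would raise the forecast by 10%?"
for feat in ("Promo", "DayOfWeek"):
    for val in sorted(X[feat].unique()):
        x_cf = x0.copy()
        x_cf[feat] = val
        pred = model.predict(x_cf)[0]
        if pred >= target:
            print(f"Counterfactual: setting {feat}={val} raises the forecast "
                  f"from {baseline:.0f} to {pred:.0f}")
            break
```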
Intervention (Hidden)

The intervention is designed to investigate the following hypotheses:

H1A: XAI methods reduce advice seeking.
H1B: This decrease is more pronounced when AI predictions deviate from user expectations.

H2: Counterfactual explanations are more effective in reducing advice seeking than feature importance explanations.
Intervention Start Date
2025-03-04
Intervention End Date
2025-03-24

Primary Outcomes

Primary Outcomes (end points)
Advice seeking from supervisors
Primary Outcomes (explanation)
Advice seeking is a binary variable equal to one if participants seek advice from their supervisor and zero otherwise.

Secondary Outcomes

Secondary Outcomes (end points)
Confidence in participants' own predictions
Confidence in and weight of supervisor advice
Confidence in and weight of AI advice
Secondary Outcomes (explanation)
Confidence is measured using two 7-point Likert scales adapted from Dietvorst et al. (2015).
The weight of supervisor advice and the weight of AI advice are constructed from participants' adjustments after receiving advice from the supervisor and the AI.
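The registration does not state the exact construction. A common operationalization in the advice-taking literature is the weight of advice (WOA); the sketch below computes it, but treat the formula as an assumption about the authors' measure, not their confirmed definition.

```python
# Hypothetical construction of "weight of advice" (WOA), a standard measure
# in the advice-taking literature; the registration does not specify the
# authors' exact formula, so this is an assumption.
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """WOA = (final - initial) / (advice - initial).
    0 -> advice ignored, 1 -> advice fully adopted."""
    if advice == initial:  # advice identical to own estimate: WOA undefined
        return float("nan")
    return (final - initial) / (advice - initial)

# Example: own target 10000, supervisor advises 12000, final target 11000.
print(weight_of_advice(10000, 12000, 11000))  # 0.5 -> moved halfway toward advice
```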

Experimental Design

Experimental Design
Participants will set sales targets (e.g., €10,000) for a fictitious convenience store. They will be randomly divided into experimental conditions that differ along our independent variable:

• Algorithmic Advice (AI + Feature Importance Explanation vs. AI + Counterfactual Explanation)

On the welcome page of the experiment, we will inform participants about the procedure and about potential exclusions through attention checks. Participants will be presented with a fictitious target-setting task and instructed to immerse themselves in the situation of setting targets for a convenience store. To further incentivize participants, they receive a bonus for setting accurate sales targets.

The experiment can be sectioned into the following steps:
1. Participants receive general information about their store, including the store's past sales performance.
2. Participants indicate their sales expectations for the upcoming month.
3. Participants are provided an AI forecast (and a corresponding explanation) of expected sales for the upcoming month.
4. Based on the information provided beforehand, participants must set a sales target for their store.
5. Participants can choose to ask their supervisor for advice, following either an autonomous or a dependent advice-seeking logic.
6. Participants can readjust their previously selected sales target based on the information provided by their supervisor.

After completing the target-setting process, participants will be asked to complete a questionnaire about their experience and general preferences. Finally, participants are asked to provide sociodemographic information.
Experimental Design Details
Randomization Method
Participants will be randomly assigned to the experimental conditions by a designated function of the website.
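A minimal sketch of what such participant-level assignment could look like, assuming a simple uniform draw server-side; the condition labels are placeholders (the registration names the two explanation treatments and mentions three conditions overall, so the third label is assumed).

```python
# Minimal sketch of participant-level random assignment; the registration
# only states that a "designated function of the website" performs it.
# Condition labels are placeholders, and the third arm is assumed.
import random

CONDITIONS = ["feature_importance", "counterfactual", "no_explanation"]

def assign_condition(rng: random.Random) -> str:
    """Assign one arriving participant to an experimental condition."""
    return rng.choice(CONDITIONS)

rng = random.Random(42)  # fixed seed for reproducibility in this sketch
print([assign_condition(rng) for _ in range(5)])
```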
Randomization Unit
Randomization will be done at the participant level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
-
Sample size: planned number of observations
Our sample size will be determined based on the results of our experimental pre-test, using a G*Power analysis. Expected sample size: 200 participants
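For illustration, an equivalent power calculation can be run in Python (the registration uses G*Power); the assumed advice-seeking rates are placeholders, not the authors' pre-test estimates.

```python
# Hedged reconstruction of the planned power calculation for the binary
# primary outcome. The assumed advice-seeking rates (0.50 vs. 0.30) are
# placeholders, not pre-test results.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

h = proportion_effectsize(0.50, 0.30)  # Cohen's h for two proportions

n_per_arm = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Required participants per arm: {n_per_arm:.0f}")
```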
Sample size (or number of clusters) by treatment arms
-
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethikkommission der Universität Ulm
IRB Approval Date
2024-07-29
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials