Investigating the Influence of Explainable AI (XAI) on Employee Preference for Supervisor Advice

Last registered on August 14, 2024

Pre-Trial

Trial Information

General Information

Title
Investigating the Influence of Explainable AI (XAI) on Employee Preference for Supervisor Advice
RCT ID
AEARCTR-0014040
Initial registration date
August 14, 2024

First published
August 14, 2024, 3:52 PM EDT

Locations

Primary Investigator

Affiliation
Ulm University

Other Primary Investigator(s)

PI Affiliation
Ulm University
PI Affiliation
Ulm University
PI Affiliation
Ulm University
PI Affiliation
Ulm University

Additional Trial Information

Status
In development
Start date
2024-08-16
End date
2024-09-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
With their extensive experience and expertise, supervisors are a vital resource for employee development, team cohesion, and organizational performance. However, employees may be reluctant to seek advice from their supervisors for fear of appearing incompetent and of wasting their supervisors' time. It therefore becomes important to guide employees' advice-seeking behavior: to ask for advice when needed while refraining from seeking advice when there is no reason to do so.
Artificial intelligence (AI) can significantly influence advice-seeking behavior by providing non-judgmental, accessible, and efficient information. However, research shows that humans tend to over-rely on AI systems due to cognitive biases and elevated levels of trust. We argue that explanations accompanying AI forecasts – produced by explainable artificial intelligence (XAI) methods – can guide employees to seek advice even when they feel overconfident in their decision, while reducing advice seeking in situations where they feel less confident.
We explore the role of XAI in shaping advice-seeking behavior to enhance productivity and facilitate decision-making. We employ an online experiment with crowd workers to isolate the effects of different XAI methods (AI vs. Feature Importance vs. Counterfactual Explanations) on advice-seeking behavior.
External Link(s)

Registration Citation

Citation
Förster, Maximilian et al. 2024. "Investigating the Influence of Explainable AI (XAI) on Employee Preference for Supervisor Advice." AEA RCT Registry. August 14. https://doi.org/10.1257/rct.14040-1.0
Experimental Details

Interventions

Intervention(s)
Using a publicly available dataset from Kaggle (https://www.kaggle.com/c/rossmann-store-sales/data), we develop an AI model to forecast store sales of a German convenience store chain. Additionally, we implement two different XAI methods that produce i) Feature Importance Explanations and ii) Counterfactual Explanations alongside the AI forecasts for users. Forecasts are displayed in a format similar to that of contemporary business intelligence tools. Based on our experimental conditions, we deliberately select one store to present to our participants throughout the experiment. During the experiment, participants can interact with the system to inform themselves about the store using the information provided by the system, the AI prediction, and the explanation associated with their treatment. We will collect data from online crowd workers located in the US and Europe who are at least 18 years old. Each participant will be randomly assigned to one of our six experimental groups.
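A minimal sketch of the two explanation types described above, not the authors' implementation: the Rossmann dataset and the study's model are not reproduced here, so the sketch trains a random-forest forecaster on synthetic stand-in features; the feature names (promo, customers, day_of_week) and the brute-force counterfactual search are illustrative assumptions.

```python
# Sketch only: synthetic stand-in for the Rossmann sales-forecasting setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),      # promo running? (0/1) - assumed feature
    rng.normal(600, 100, n),    # daily customers - assumed feature
    rng.integers(0, 7, n),      # day of week - assumed feature
])
y = 2000 * X[:, 0] + 9 * X[:, 1] + rng.normal(0, 200, n)  # synthetic sales

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
feature_names = ["promo", "customers", "day_of_week"]

# i) Feature Importance Explanation via permutation importance
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = sorted(zip(feature_names, imp.importances_mean),
                 key=lambda t: -t[1])

# ii) Counterfactual Explanation: which single-feature change would push
# the forecast above a target? Naive brute-force over candidate edits.
def counterfactual(x, target):
    for j, deltas in [(0, [1]), (1, [50, 100, 200])]:
        for d in deltas:
            x_cf = x.copy()
            x_cf[j] += d
            if model.predict(x_cf.reshape(1, -1))[0] >= target:
                return feature_names[j], d
    return None

x0 = np.array([0.0, 600.0, 2.0])
base = model.predict(x0.reshape(1, -1))[0]
cf = counterfactual(x0, base + 500)   # e.g., ("promo", 1): start a promo
```

In the experiment these explanations are rendered in a business-intelligence-style interface; the sketch only shows how the two explanation types differ in kind (attribution scores vs. actionable what-if changes).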
Intervention Start Date
2024-08-16
Intervention End Date
2024-09-06

Primary Outcomes

Primary Outcomes (end points)
Advice seeking from supervisors
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Advice Type (dependent vs. autonomous)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants will set sales targets (e.g., €10,000) for a fictitious convenience store. They will be randomly divided into two experimental conditions defined by our independent variable:

• Algorithmic Advice (AI + Feature Importance Explanation vs. AI + Counterfactual Explanation)

On the welcome page of the experiment, we will inform participants about the procedure and about potential exclusion through attention checks. Participants will be presented with a fictitious target-setting task and instructed to immerse themselves in the situation of setting targets for a convenience store. To further incentivize participants, they receive a bonus for setting accurate sales targets.

The experiment can be sectioned into the following steps:
1. Participants receive general information about their store and the store's past sales performance.
2. Participants indicate their sales expectations for the upcoming month.
3. Participants are provided an AI forecast (and a corresponding explanation) of expected sales for the upcoming month.
4. Based on the information provided beforehand, participants must set a sales target for their store.
5. Participants can choose to ask their supervisor for advice, following either an autonomous or a dependent advice-seeking logic.
6. Participants can readjust their previously selected sales target based on the information provided by their supervisor.

After completing the target-setting process, participants will answer a questionnaire about their experience and general preferences. Finally, participants are asked to provide sociodemographic information.
Experimental Design Details
Randomization Method
Participants will be randomly assigned to the experimental groups by a designated function of the website.
Randomization Unit
Randomization will be done at the participant level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
-
Sample size: planned number of observations
Our sample size will be calculated from the results of our experimental pre-test via a power analysis in G*Power. Expected sample size: 200 participants.
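For illustration, the same kind of power calculation can be reproduced in Python with statsmodels; the effect size (d = 0.4), alpha, and power below are placeholder assumptions, not values from the pre-test.

```python
# Sketch of an a-priori sample-size calculation (two-group t-test),
# analogous to what G*Power computes; parameter values are assumed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.4,        # assumed Cohen's d
    alpha=0.05,             # two-sided significance level
    power=0.8,              # target statistical power
    alternative="two-sided",
)
```

With these assumed inputs the required sample is roughly 100 participants per group; the registered figure of 200 total would correspond to a larger expected effect from the pre-test.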
Sample size (or number of clusters) by treatment arms
-
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethikkommission der Universität Ulm
IRB Approval Date
2024-07-29
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials