|
Field
Trial Title
|
Before
Investigating the Influence of Explainable AI (XAI) on Employee Preference for Supervisor Advice
|
After
Explaining AI-Generated Information: Implications for AI Adoption and Employee Preference for Supervisor Advice
|
|
Field
Abstract
|
Before
With their extensive experience and expertise, supervisors serve as a vital resource for employee development, team cohesion, and organizational performance. However, employees may be reluctant to seek advice from their supervisors for fear of appearing incompetent and wasting their supervisors' time. Thus, it becomes important to guide employees' advice-seeking behavior so that they ask for advice when needed while refraining from doing so when there is no reason to.
Artificial intelligence (AI) can significantly impact advice-seeking behavior by providing non-judgmental, accessible, and efficient information. However, research shows that humans tend to over-rely on AI systems due to cognitive biases and increased levels of trust. We argue that explanations accompanying AI forecasts – produced by explainable artificial intelligence (XAI) methods – can guide employees' advice-seeking behavior: encouraging them to seek advice even when they feel overconfident in their decision, while decreasing advice seeking in situations where they feel less confident.
We explore the role of XAI in influencing advice-seeking behavior to enhance productivity and facilitate decision-making. We employ an online experiment with crowd workers to isolate the effects of different explanation types (AI forecast only vs. Feature Importance vs. Counterfactual Explanations) on advice-seeking behavior.
|
After
Algorithmically generated information is playing an increasingly central role in managers' decision-making processes. However, the value of these insights is only fully realized when they are effectively utilized. Consequently, the literature on algorithm aversion has explored the factors that influence their adoption. One underexplored phenomenon is managers' tendency to seek reassurance from their superiors. In our experiment, we investigate whether such advice-seeking behavior occurs and whether it can be influenced by providing explanations through explainable AI (XAI). We argue that the effectiveness of XAI in reducing excessive advice seeking depends on several factors: whether the algorithmically generated information aligns with the manager's expectations, and the type and quality of the explanation provided.
|
|
Field
Trial Start Date
|
Before
August 16, 2024
|
After
March 04, 2025
|
|
Field
Trial End Date
|
Before
September 06, 2024
|
After
March 24, 2025
|
|
Field
Last Published
|
Before
August 14, 2024 03:52 PM
|
After
March 04, 2025 05:49 AM
|
|
Field
Intervention (Public)
|
Before
Using a publicly available dataset from Kaggle (https://www.kaggle.com/c/rossmann-store-sales/data), we develop an AI model to forecast store sales of a German convenience store chain. Additionally, we implement two different XAI methods producing i) Feature Importance Explanations and ii) Counterfactual Explanations alongside the AI forecasts for users. Forecasts are displayed in a format similar to contemporary business intelligence tools. Based on our experimental conditions, we deliberately select one store to present to our participants throughout the experiment. During the experiment, participants can interact with the system to inform themselves about the store using the information provided by the system, the AI prediction, and the explanation associated with their treatment. We will collect data from online crowd workers located in the US and Europe who are at least 18 years old. Each participant will be randomly assigned to one of our six experimental groups.
|
After
Using a publicly available dataset from Kaggle (https://www.kaggle.com/c/rossmann-store-sales/data), we develop an AI model to forecast store sales of a German convenience store chain. Additionally, we implement two different XAI methods producing i) Feature Importance Explanations and ii) Counterfactual Explanations alongside the AI forecasts for users. Forecasts are displayed in a format similar to contemporary business intelligence tools. Based on our experimental conditions, we deliberately select one store to present to our participants throughout the experiment. During the experiment, participants can interact with the dashboard to inform themselves about the store using the information provided by the system, the AI prediction, and the explanation associated with their experimental condition. We will collect data from online crowd workers located in the US and Europe who are at least 18 years old. Each participant will be randomly assigned to one of our three experimental conditions.
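For illustration, the sketch below outlines one way such a forecasting-plus-explanation pipeline could be set up. It is a minimal sketch only: the registration does not specify the model class or XAI libraries, so the gradient-boosting regressor, the permutation-importance routine, the single-feature counterfactual search, and the column names taken from the Kaggle train.csv are assumptions, not the trial's actual implementation.

# Minimal sketch of the forecasting and XAI setup described above.
# Assumptions: scikit-learn, pandas, and the Rossmann train.csv from Kaggle;
# the registered trial does not disclose its exact model or explanation code.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load the Rossmann sales data and keep a few numeric features for the toy model.
df = pd.read_csv("train.csv")
features = ["DayOfWeek", "Promo", "SchoolHoliday"]
X, y = df[features], df["Sales"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Forecasting model for daily store sales.
model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)

# i) Feature importance explanation: permutation importance on held-out data.
imp = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=42)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# ii) Counterfactual explanation (naive): flip the Promo flag for one store-day
#     and report how the forecast would change under that alternative input.
example = X_test.iloc[[0]].copy()
counterfactual = example.copy()
counterfactual["Promo"] = 1 - counterfactual["Promo"]
print("Original forecast:      ", model.predict(example)[0])
print("Counterfactual forecast:", model.predict(counterfactual)[0])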
|
|
Field
Intervention Start Date
|
Before
August 16, 2024
|
After
March 04, 2025
|
|
Field
Intervention End Date
|
Before
September 06, 2024
|
After
March 24, 2025
|
|
Field
Primary Outcomes (Explanation)
|
Before
|
After
Advice seeking is a binary variable equal to one if participants seek advice from their supervisor and zero otherwise.
|
|
Field
Randomization Method
|
Before
Participants will be randomly assigned to the experimental groups by a designated function of the website.
|
After
Participants will be randomly assigned to the experimental conditions by a designated function of the website.
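As an illustration of the assignment mechanism, the following sketch shows a uniform random assignment over the three conditions. The condition labels and the function name are hypothetical; the website's actual assignment function is not described in the registration.

import random

# Hypothetical condition labels; the registration only states that assignment
# is handled by a designated function of the experiment website.
CONDITIONS = ["AI only", "Feature Importance", "Counterfactual Explanations"]

def assign_condition() -> str:
    """Uniformly assign an incoming participant to one experimental condition."""
    return random.choice(CONDITIONS)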
|
|
Field
Intervention (Hidden)
|
Before
The intervention is designed to investigate the following hypotheses:
H1A: XAI methods increase advice-seeking behavior when AI predictions align with users' expectations.
H1B: When AI predictions align with users' expectations, counterfactual explanations increase the likelihood of seeking advice more than feature importance explanations.
H2A: XAI methods reduce advice-seeking behavior when AI predictions deviate from users' expectations.
H2B: When AI predictions deviate from expected outcomes, counterfactual explanations decrease the likelihood of seeking advice more than feature importance explanations.
H3: The use of XAI methods increases the likelihood that individuals seek autonomous advice rather than dependent advice.
|
After
The intervention is designed to investigate the following hypotheses:
H1A: XAI methods reduce advice seeking.
H1B: This decrease is more pronounced when AI predictions deviate from user expectations.
H2: Counterfactual explanations are more effective in reducing advice seeking than feature importance explanations.
|
|
Field
Secondary Outcomes (End Points)
|
Before
Advice Type (dependent vs. autonomous)
|
After
Confidence in participants' own prediction
Confidence in and weight of supervisor advice
Confidence in and weight of AI advice
|
|
Field
Secondary Outcomes (Explanation)
|
Before
|
After
Confidence is measured using two 7-point Likert scales adapted from Dietvorst et al. (2015).
Weight of supervisor advice and weight of AI advice are constructed from participants' adjustments after receiving advice from the supervisor and the AI, respectively.
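For illustration, the weight-of-advice construction could follow the standard shift-toward-advice measure common in the advice-taking literature. The sketch below is an assumption about the formula, as the registration does not state it explicitly, and the function name is hypothetical.

def weight_of_advice(initial: float, advice: float, final: float) -> float:
    # Share of the distance from the initial estimate toward the advice that
    # the participant covers with the final (adjusted) estimate.
    if advice == initial:
        raise ValueError("Undefined when the advice equals the initial estimate.")
    return (final - initial) / (advice - initial)

# Example: moving halfway from an initial forecast of 5000 toward advice of 6000.
print(weight_of_advice(initial=5000, advice=6000, final=5500))  # -> 0.5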
|
|
Field
Pi as first author
|
Before
No
|
After
Yes
|