
Fields Changed

Registration

Trial Title
Before: Explaining AI-Generated Information: Implications for AI Adoption and Employee Preference for Supervisor Advice
After: The Impact of Explanatory Strategies on Employee Trust in Algorithmic Forecasts: An Experimental Investigation
Abstract
Before: Algorithmically generated information is playing an increasingly central role in managers' decision-making processes. However, the value of these insights is only fully realized when they are effectively utilized. Consequently, the literature on algorithm aversion has explored the factors that influence their adoption. One underexplored phenomenon is managers' tendency to seek reassurance from their superiors. In our experiment, we investigate whether such advice-seeking behavior occurs and whether it can be influenced by providing explanations through explainable AI (XAI). We argue that the effectiveness of XAI in reducing excessive advice seeking depends on several factors: whether the algorithmically generated information aligns with the manager's expectations, and the type and quality of the explanation provided.
After: Algorithmically generated information is playing an increasingly central role in managers' decision-making processes. However, the value of these insights is only fully realized when they are effectively utilized. Consequently, the literature on algorithm aversion has explored the factors that influence their adoption. One underexplored phenomenon is managers' tendency to seek reassurance from their superiors. In our experiment, we investigate whether such advice-seeking behavior occurs and whether it can be influenced by providing explanations through explainable AI (XAI). We employ an online experiment involving crowd workers to isolate the effects of different XAI methods (Feature Importance Explanations vs. Counterfactual Explanations) on advice-seeking behavior.
Trial Start Date
Before: March 04, 2025
After: March 11, 2025

Trial End Date
Before: March 24, 2025
After: March 22, 2025
JEL Code(s)
Before: M
After: C91, D91, M41
Last Published
Before: March 04, 2025 05:49 AM
After: March 11, 2025 06:30 AM
Intervention (Public)
Before: Using a publicly available dataset from Kaggle (https://www.kaggle.com/c/rossmann-store-sales/data), we develop an AI model to forecast store sales of a German convenience store chain. Additionally, we implement two different XAI methods producing i) Feature Importance Explanations and ii) Counterfactual Explanations alongside the AI forecasts for users. Forecasts are displayed in a format similar to contemporary business intelligence tools. Based on our experimental conditions, we deliberately select one store to present to our participants throughout the experiment. During the experiment, participants can interact with the dashboard to inform themselves about the store using the information provided by the system, the AI prediction, and the explanation associated with their experimental condition.
After: We will collect data from online crowd workers located in the US and Europe who are at least 18 years old. Each participant will be randomly assigned to one of our three experimental conditions. Participants assume the role of a store manager in a convenience store chain. They are tasked with forecasting sales for the upcoming month using a sales development dashboard, which is similar to contemporary business intelligence tools. The assumed store chain uses AI to assist the store manager in the forecasting task. We implement two different explanatory strategies, i) Feature Importance Explanations and ii) Counterfactual Explanations, alongside the AI forecasts for users. During the experiment, participants can interact with the sales development dashboard to learn about the store using the information provided by the system, the AI prediction, and the explanation associated with their experimental condition.
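To make the two explanatory strategies concrete, the sketch below derives i) a feature importance explanation and ii) a naive counterfactual explanation from a toy forecasting model. The model choice, feature names, and grid search are illustrative assumptions, not the registered implementation.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the Rossmann store-sales features (names assumed).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "promo": rng.integers(0, 2, 500),
    "customers": rng.normal(800, 100, 500),
    "competition_distance": rng.normal(2000, 500, 500),
})
y = 500 * X["promo"] + 6 * X["customers"] + rng.normal(0, 50, 500)
model = RandomForestRegressor(random_state=0).fit(X, y)

# i) Feature importance explanation: which inputs drive the forecast.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))

# ii) Counterfactual explanation: smallest change to one feature that
# lifts the forecast to a target value (naive grid search).
def counterfactual(row, feature, target, grid):
    for value in grid:
        candidate = row.copy()
        candidate[feature] = value
        if model.predict(candidate.to_frame().T)[0] >= target:
            return value
    return None

store = X.iloc[0]
base = model.predict(store.to_frame().T)[0]
grid = np.linspace(store["customers"], store["customers"] + 300, 60)
needed = counterfactual(store, "customers", base * 1.05, grid)
if needed is not None:
    print(f"Forecast {base:.0f}; +5% would require ~{needed:.0f} customers")
```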
Intervention Start Date
Before: March 04, 2025
After: March 11, 2025

Intervention End Date
Before: March 24, 2025
After: March 22, 2025
Primary Outcomes (Explanation)
Advice seeking is a binary variable that equals one if participants seek advice from their supervisor and zero otherwise.
Experimental Design (Public)
Before: Participants will be setting sales targets (e.g., €10,000) for a fictitious convenience store. They will be randomly divided into two experimental conditions, each subject to different conditions formulated by our independent variable:
• Algorithmic Advice (AI + Feature Importance Explanation vs. AI + Counterfactual Explanation)
On the welcome page of the experiment, we will inform participants about the procedure and potential exclusions through attention checks. Participants will be presented with a fictitious target-setting task. They are instructed to immerse themselves in the situation of setting targets for a convenience store. To further incentivize participants, they receive a bonus for setting accurate sales targets. The experiment can be sectioned into the following steps:
1. Participants receive general information about their store and the store's past sales performance.
2. Participants indicate their sales expectations for the upcoming month.
3. Participants are provided an AI forecast (and a corresponding explanation) of expected sales for the upcoming month.
4. Based on the information provided beforehand, participants must set a sales target for their store.
5. Participants can choose to ask their supervisor for advice using an autonomous or dependent advice-seeking logic.
6. Participants can readjust their previously selected sales target based on the information provided by their supervisor.
After completing the target-setting process, participants will be asked to answer a questionnaire querying their experience and general preferences. Finally, participants are asked to provide sociodemographic information.
After: Participants will be setting sales targets for a fictitious convenience store. They will be randomly divided into three experimental conditions, each subject to different conditions formulated by our independent variable:
• Algorithmic Advice (AI vs. AI & Feature Importance Explanation vs. AI & Counterfactual Explanation)
On the welcome page of the experiment, we will inform participants about the procedure and potential exclusions through attention checks. They are instructed to immerse themselves in the situation of setting targets for a convenience store. To further incentivize participants, they receive a bonus for setting accurate sales targets. The experiment can be sectioned into the following steps:
1. Participants receive general information about their store and the store's past sales performance.
2. Participants indicate their sales expectations for the upcoming month.
3. Participants are provided an AI forecast (and a corresponding explanation, depending on the experimental condition) of expected sales for the upcoming month.
4. Based on the information provided beforehand, participants must set a sales target for their store.
5. Participants can choose to ask their supervisor for advice. However, advice is costly and reduces their potential bonus payment (see the payoff sketch below).
6. Participants can readjust their previously selected sales target (based on the information provided by their supervisor).
After completing the target-setting process, participants will be asked to answer a post-experimental questionnaire querying their experience and general preferences. Finally, participants are asked to provide sociodemographic information.
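The costly-advice incentive in step 5 of the revised design can be made concrete with a small payoff sketch. The maximum bonus, the linear accuracy formula, and the advice fee below are illustrative assumptions; the registration does not state the actual payoff parameters.

```python
# Hedged sketch of the assumed incentive logic: the bonus scales with
# target accuracy, and seeking supervisor advice deducts a fixed fee.
def bonus(target: float, actual: float, sought_advice: bool,
          max_bonus: float = 1.00,         # assumed maximum bonus
          advice_fee: float = 0.20) -> float:  # assumed cost of advice
    accuracy = max(0.0, 1.0 - abs(target - actual) / actual)
    payout = max_bonus * accuracy
    if sought_advice:
        payout -= advice_fee
    return round(max(payout, 0.0), 2)

print(bonus(target=10_500, actual=10_000, sought_advice=True))  # 0.75
```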
Planned Number of Observations
Before: Our sample size will be calculated based on the results of our experimental pre-test using G*Power analysis. Expected sample size: 200 participants.
After: Our sample size will be calculated based on the results of a pre-test using Stata. Expected sample size: 540 participants.
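A rough a-priori calculation of the kind G*Power or Stata would report can be reproduced with statsmodels; the effect size w, significance level, and power below are illustrative assumptions, not the registered inputs.

```python
from statsmodels.stats.power import GofChisquarePower

# Chi-square test of advice-seeking rates across three conditions
# (df = n_bins - 1 = 2); w = 0.15 is an assumed small-to-medium effect.
total_n = GofChisquarePower().solve_power(effect_size=0.15, n_bins=3,
                                          alpha=0.05, power=0.80)
print(f"required total N ≈ {total_n:.0f}")
```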
Additional Keyword(s)
Before: Advice-Seeking; Explainable Artificial Intelligence (XAI); Target Setting
After: Explainable Artificial Intelligence (XAI); Explanatory Strategies; Algorithmic Forecast; Advice Seeking
Intervention (Hidden)
Before: The intervention is designed to investigate the following hypotheses:
H1A: XAI methods reduce advice seeking.
H1B: This decrease is more pronounced when AI predictions deviate from user expectations.
H2: Counterfactual explanations are more effective in reducing advice seeking than feature importance explanations.
After: The intervention is designed to investigate the following hypotheses:
• Explanations reduce advice-seeking behavior.
• Explanations increase advice-seeking behavior.
• Counterfactual Explanations are more effective than Feature Importance Explanations.
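Because the revised registration states two competing directional hypotheses, a natural test is a logistic regression of the binary advice-seeking outcome on the experimental condition. The sketch below uses simulated placeholder data; the condition labels, group sizes, and baseline choice are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: 180 participants per condition (assumed split of N = 540).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "condition": np.repeat(
        ["ai_only", "feature_importance", "counterfactual"], 180),
    "sought_advice": rng.integers(0, 2, 540),
})

# Coefficients on the explanation conditions (vs. the AI-only baseline)
# speak to the competing reduce/increase hypotheses; the contrast between
# the two explanation conditions speaks to the third hypothesis.
fit = smf.logit("sought_advice ~ C(condition, Treatment('ai_only'))",
                data=df).fit(disp=False)
print(fit.summary())
```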
Secondary Outcomes (End Points)
Before: Confidence in participants' own prediction; Confidence in and weight of supervisor advice; Confidence in and weight of AI advice
After: Weight of AI advice
Secondary Outcomes (Explanation)
Confidence is measured using two 7-point Likert scales adapted from Dietvorst et al. (2015). Weight of supervisor advice and weight of AI advice are constructed from participants' adjustments after receiving advice from the supervisor and the AI, respectively.
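The adjustment-based construction described here is consistent with the standard weight-of-advice (WOA) measure from the judge–advisor literature; the formula below is that standard measure, offered as an assumption since the registration does not spell it out.

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """WOA = (final - initial) / (advice - initial):
    0 = advice ignored, 1 = advice fully adopted."""
    if advice == initial:
        return float("nan")  # undefined when advice equals the prior estimate
    return (final - initial) / (advice - initial)

# E.g., initial estimate 9,000, advice 10,000, final 9,800 -> WOA = 0.8.
print(weight_of_advice(initial=9_000, advice=10_000, final=9_800))
```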