AI Advice Depth and Breadth in Technical Customer Support

Last registered on June 13, 2025

Pre-Trial

Trial Information

General Information

Title
AI Advice Depth and Breadth in Technical Customer Support
RCT ID
AEARCTR-0015965
Initial registration date
June 10, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
June 13, 2025, 8:07 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-04-01
End date
2025-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates the impact of integrating AI advice into technical customer support, a critical and resource-intensive function in which delays are costly. The field experiment will be conducted with 54 technicians from a technical customer support team that solves approximately 180 problems weekly, focusing on their problem-solving efficiency and effectiveness. Technicians currently face limitations in accessing stored knowledge and prefer varied advice styles, so the impact of AI advice on their problem-solving performance is unclear.
The research aims to demonstrate how AI advice influences problem-solving compared to existing practices. It involves two subexperiments in the field. The first examines two different AI chatbot response styles - concise and detailed - in a general chat not tied to a specific problem, and their effects on technician satisfaction, working modes, and learning. The second examines four different chatbot settings - one suggestion or three suggestions, each based on one of two information retrieval options - and their effects on resolution time, technician satisfaction, resolution quality, working modes, and learning. The study further considers different problem types and technicians' backgrounds.
The interventions build on an AI system that integrates knowledge from internal databases (e.g., previously resolved incidents, manuals, wikis) using retrieval-augmented generation. In the first experiment, two AI chatbots that vary in their information depth via the system prompt - concise and detailed - will be randomly made available for each problem-solving task. The second experiment connects problems directly to the internal databases and initiates the chat with randomly generated suggestions of potential solutions.
Previous studies in the same context and feedback from technicians left it unclear which AI advice style will be more helpful and effective. We expect different impacts on resolution time, chatbot interaction patterns, and working modes depending on the chatbot version technicians collaborate with. We also expect impacts to differ by problem type and technicians' experience.
External Link(s)

Registration Citation

Citation
Just, Julian. 2025. "AI Advice Depth and Breadth in Technical Customer Support." AEA RCT Registry. June 13. https://doi.org/10.1257/rct.15965-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The intervention builds on an AI system that integrates knowledge from internal databases (e.g., previously resolved incidents, manuals, wikis) using retrieval-augmented generation. In the first field experiment, two different AI chatbots - concise or verbose, varied via the system prompt - will be randomly made available for each problem-solving task. In the second experiment, inserted problems are connected directly to the internal databases, and one of four different initiating prompts is randomly selected to generate different types of solution suggestions.
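The per-chat system-prompt variation described above can be sketched as follows. This is a minimal illustration, not the trial's actual implementation: the prompt wording and function names are hypothetical, as the registry does not publish them.

```python
import random

# Hypothetical system prompts for the two Experiment 1 conditions;
# the actual wording used in the trial is not public.
SYSTEM_PROMPTS = {
    "concise": "Answer briefly. Give only the essential steps to resolve the issue.",
    "verbose": "Answer thoroughly. Explain each step and the reasoning behind it.",
}

def assign_chat_condition(rng: random.Random) -> str:
    """Draw one of the two advice-depth conditions per chat (Experiment 1)."""
    return rng.choice(list(SYSTEM_PROMPTS))

# Each new chat draws its condition independently in the chatbot backend.
rng = random.Random()
condition = assign_chat_condition(rng)
system_prompt = SYSTEM_PROMPTS[condition]
```

Because the draw happens per chat rather than per technician, the same technician can encounter both styles across different chats, which matches the registry's per-chat randomization unit.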
Intervention (Hidden)
Intervention Start Date
2025-06-05
Intervention End Date
2025-08-31

Primary Outcomes

Primary Outcomes (end points)
DV: Resolution time, advice quality, and working modes.
IV: Different AI advice treatment groups, problem types (complexity, familiarity, clarity) and technicians' backgrounds (experience, product group, attitude).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Resolution quality, technician satisfaction, learning
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment introduces an AI chatbot that provides real-time advice while technicians resolve incoming problems. The chatbot offers two different ways to obtain problem-solving support:
1. General knowledge chat unrelated to a specific problem: The chatbot varies its advice style - concise or verbose (depth) - and integrates knowledge from internal databases. Technicians will be asked to evaluate the quality of the advice and to share information on their problem-solving processes ex post. In each chat, they will be randomly assigned to one AI chatbot condition.
2. Problem-specific chat linked to a specific problem: Initial suggestions come in four different styles (2x2) - either one suggestion or three (breadth of alternatives), based either on information from just the system where the error occurred or from all related systems (breadth of information retrieved). Technicians will be asked to evaluate the quality of the advice and to share information on their problem-solving processes ex post. Depending on the problem id, they will be randomly assigned to one initial AI chatbot suggestion condition.
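The 2x2 assignment in the problem-specific chat can be sketched as a deterministic mapping from problem id to treatment arm. This is an illustrative approximation: the arm labels, the hashing scheme, and the function name are assumptions, since the registry describes only the two factors, not the backend code.

```python
import hashlib
import itertools

# The four Experiment 2 arms: number of suggestions x retrieval scope.
# Labels are illustrative; the registry names the factors, not the arms.
N_SUGGESTIONS = (1, 3)                             # breadth of alternatives
RETRIEVAL = ("single_system", "related_systems")   # breadth of information retrieved
CONDITIONS = list(itertools.product(N_SUGGESTIONS, RETRIEVAL))  # 2x2 = 4 arms

def assign_problem_condition(problem_id: str) -> tuple:
    """Map a problem id deterministically to one of the four arms,
    approximating per-problem randomization keyed on the problem id."""
    digest = hashlib.sha256(problem_id.encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]
```

Keying the assignment on the problem id means every technician who opens the same problem sees the same initial-suggestion condition, consistent with "depending on the problem id" in the design description.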
Experimental Design Details
Randomization Method
Randomized by a computer in the chatbot backend (Experiment 1: per chat randomization; Experiment 2: per problem randomization)
Randomization Unit
Experiment 1: individual chatbot interactions supporting general knowledge acquisition; Experiment 2: initial advice for distinct problem-resolution tasks
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
180 problem-solving tasks expected per week; 54 technicians are part of the customer support team. We will run the experiment for 1-2 months at minimum.
Sample size: planned number of observations
2400 chats
Sample size (or number of clusters) by treatment arms
400 control/no AI (shortly before chatbot introduction), 400 concise AI, 400 verbose AI, 400 one suggestion from a single system, 400 one suggestion from related systems, 400 three suggestions from a single system, 400 three suggestions from related systems
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials