Experimental Evidence of Agentic AI at Work

Last registered on February 19, 2026

Pre-Trial

Trial Information

General Information

Title
Experimental Evidence of Agentic AI at Work
RCT ID
AEARCTR-0017902
Initial registration date
February 16, 2026

The initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
February 19, 2026, 7:30 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public; access may be requested through the Registry.

Primary Investigator

Affiliation
Harvard Business School

Other Primary Investigator(s)

PI Affiliation
Harvard Business School

Additional Trial Information

Status
In development
Start date
2026-02-17
End date
2026-02-27
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates how AI agent design and governance choices shape quality and productivity outcomes in an enterprise setting. We have advised a large enterprise running a "lab-in-the-field" style experiment to measure how different agentic AI approaches affect work. The experiment compares the effectiveness of developing bespoke agents using local knowledge, using a centrally developed agent, and using standard, out-of-the-box generative AI tools (Microsoft Copilot M365). This research will provide insights into how organizations can effectively leverage agentic AI in complex knowledge workflows.
External Link(s)

Registration Citation

Citation
Bojinov, Iavor. 2026. "Experimental Evidence of Agentic AI at Work." AEA RCT Registry. February 19. https://doi.org/10.1257/rct.17902-1.0
Sponsors & Partners

There is information in this trial unavailable to the public; access may be requested through the Registry.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-02-17
Intervention End Date
2026-02-27

Primary Outcomes

Primary Outcomes (end points)
Our primary outcome variable is quality of task output.
Primary Outcomes (explanation)
The construction of the primary outcomes is described in detail in the document attached to the analysis plan.

Secondary Outcomes

Secondary Outcomes (end points)
In addition to quality, we will explore secondary outcomes such as timing, delegation, and self-reported AI quality assessments.
Secondary Outcomes (explanation)
The construction of these outcomes is described in detail in the document attached to the analysis plan.

Experimental Design

Experimental Design
Participants will take part in a lab experiment. The lab experiment will include a pre- and post-survey designed to capture demographic information, generative AI usage and attitudes, and job attitudes. After completing the pre-survey, all participants will receive training on generative AI generally and an introduction to agentic AI. They will then complete experimental tasks using generative AI.
Experimental Design Details
Not available
Randomization Method
The company will perform a Bernoulli randomization using the Qualtrics randomization feature, where each participant will be randomly assigned to one of four treatments with equal probability (1/4); an illustrative sketch of this assignment scheme appears below.
Randomization Unit
Individual participant.
Was the treatment clustered?
No
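
A minimal sketch of this equal-probability, individual-level assignment, written in Python purely for illustration. The study itself uses the Qualtrics randomization feature; the arm labels, participant IDs, and seed below are hypothetical placeholders, not part of the registered design.

import random

# Placeholder labels: the actual four conditions are described in the
# restricted design details of this registration.
ARMS = ["arm_1", "arm_2", "arm_3", "arm_4"]

def assign_arm(rng: random.Random) -> str:
    """Assign one participant to one of four arms with equal probability (1/4),
    independently of all other participants (individual-level randomization)."""
    return rng.choice(ARMS)

rng = random.Random(2026)  # hypothetical seed, for reproducibility of the sketch only
assignments = {f"participant_{i:04d}": assign_arm(rng) for i in range(1, 11)}
for participant, arm in assignments.items():
    print(participant, arm)

Because assignment is independent across participants rather than blocked or stratified, realized arm sizes will vary around the expected 1/4 shares.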

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
The company estimates that approximately 2,000 participants, from an employee population of about 7,000, will take part. The primary factor determining the realized sample size is the opt-in rate within this population. Due to the dynamic nature of the enterprise environment, the employee population may be expanded.
Sample size (or number of clusters) by treatment arms
The company will perform a Bernoulli randomization, where each participant will be randomly assigned to one of four treatments with equal probability (1/4). If approximately 2,000 participants opt in, this implies roughly 500 participants per arm in expectation (see the illustrative calculation below).
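
As a back-of-the-envelope illustration only: the minimum detectable effect is left to the attached analysis plan, and every parameter below (two-sided alpha of 0.05, 80% power, a standardized outcome, roughly 500 completers per arm) is an assumption for this sketch rather than a registered value.

from math import sqrt
from statistics import NormalDist

def mde_two_arm(n_per_arm: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect, in standard-deviation units, for a two-sided
    comparison of two equally sized arms (hypothetical parameters)."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sqrt(2 / n_per_arm)

# Roughly 2,000 opt-ins split evenly across four arms -> about 500 per arm.
print(f"Illustrative MDE: {mde_two_arm(500):.3f} SD")  # about 0.18 SD under these assumptions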
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public; access may be requested through the Registry.
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard Business School
IRB Approval Date
2025-09-29
IRB Approval Number
IRB25-1145
Analysis Plan

There is information in this trial unavailable to the public; access may be requested through the Registry.