Incentive Structures for AI Technology Adoption: A Cluster-Randomized Trial in Engineering Organizations

Last registered on February 04, 2026

Pre-Trial

Trial Information

General Information

Title
Incentive Structures for AI Technology Adoption: A Cluster-Randomized Trial in Engineering Organizations
RCT ID
AEARCTR-0017768
Initial registration date
January 29, 2026

The initial registration date corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 04, 2026, 9:53 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Some information in this trial is not available to the public.

Primary Investigator

Affiliation
Siemens AG

Other Primary Investigator(s)

PI Affiliation
Harvard Business School
PI Affiliation
Harvard Business School
PI Affiliation
LMU Munich
PI Affiliation
Siemens AG
PI Affiliation
Siemens AG
PI Affiliation
Harvard Business School
PI Affiliation
Siemens AG

Additional Trial Information

Status
In development
Start date
2026-01-12
End date
2027-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study how organizational incentive structures affect the adoption of AI-powered productivity tools among engineers. Despite growing evidence that AI assistants can enhance individual productivity, firms report substantial variation in adoption rates during technology rollouts. This gap between potential and realized adoption may stem from misaligned incentives: employees bear short-term learning costs while long-term benefits accrue to the organization.

We conduct a cluster-randomized controlled trial with engineering teams at a large multinational manufacturing firm during the rollout of an AI-powered engineering assistant. The tool assists engineers with code generation, interface design, and hardware configuration tasks. During this internal rollout, we regularly offer incentives to encourage experimentation with the new tool and knowledge diffusion within the team. The managers of these teams will be encouraged to support the experimentation and knowledge diffusion.

We measure adoption through platform telemetry (usage frequency, feature utilization), productivity metrics (task completion rates, efficiency), and survey measures (perceived usefulness, intention to continue use). The trial runs for three months, with data collection at baseline and at monthly intervals. We will also continue data collection for three months after the end of the treatment period.

This study contributes to our understanding of how firms can accelerate technology adoption by addressing incentive misalignments at both individual and team levels. Results will inform organizational strategies for deploying AI tools at scale.
External Link(s)

Registration Citation

Citation
Berndt, Johannes et al. 2026. "Incentive Structures for AI Technology Adoption: A Cluster-Randomized Trial in Engineering Organizations." AEA RCT Registry. February 04. https://doi.org/10.1257/rct.17768-1.0
Sponsors & Partners

Some information in this trial is not available to the public.
Experimental Details

Interventions

Intervention(s)
During the internal rollout of an AI-powered productivity tool, we regularly offer incentives to encourage experimentation with the new tool and knowledge diffusion within the team. The managers of these teams will be encouraged to support the experimentation and knowledge diffusion.
Intervention Start Date
2026-02-01
Intervention End Date
2026-04-30

Primary Outcomes

Primary Outcomes (end points)
Primary Outcome 1: AI Tool Adoption
Primary Outcome 2: Feature Breadth
Primary Outcome 3: Sustained Adoption
Primary Outcomes (explanation)
Primary Outcome 1: AI Tool Adoption
Definition: Weekly active usage of the AI-powered engineering assistant
Measurement: Platform telemetry capturing:
- Number of registered events (sessions initiated) per week
- Number of queries/prompts submitted per week
Unit: Continuous (events/week, queries/week)
Timing: Measured continuously via telemetry; analyzed at weeks 4, 8, and 12 post-rollout

Primary Outcome 2: Feature Breadth
Definition: Diversity of AI tool features utilized
Measurement: Count of distinct feature categories used (out of 3 total: code generation, interface design, hardware configuration)
Unit: Count (0-3)
Timing: Measured at weeks 4, 8, and 12 post-rollout

Primary Outcome 3: Sustained Adoption
Definition: Continued usage in the final month of the trial
Measurement: Binary indicator for whether the individual used the tool at least once per week during weeks 9-12
Unit: Binary (0/1)
Timing: Measured at week 12
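
To make the definitions above concrete, the three primary outcomes could be constructed from the telemetry log roughly as in the following Python/pandas sketch. The file name and column names (engineer_id, week, event_type, feature) are assumptions about the telemetry schema, not the actual pipeline:

    import pandas as pd

    # Hypothetical telemetry extract: one row per logged event, with assumed
    # columns engineer_id, week (1-12), event_type ("session" or "query"), and
    # feature ("code_generation", "interface_design", "hardware_configuration").
    log = pd.read_csv("telemetry.csv")

    # Primary Outcome 1: weekly active usage (events/week and queries/week).
    usage = (log.groupby(["engineer_id", "week"])
                .agg(events=("event_type", "size"),
                     queries=("event_type", lambda s: (s == "query").sum())))

    # Primary Outcome 2: feature breadth, distinct feature categories used (0-3).
    breadth = log.groupby("engineer_id")["feature"].nunique()

    # Primary Outcome 3: sustained adoption, 1 if the engineer used the tool at
    # least once in each of weeks 9-12. Engineers absent from the log in that
    # window should be coded 0 after reindexing against the full roster.
    final_month = log[log["week"].between(9, 12)]
    sustained = (final_month.groupby("engineer_id")["week"].nunique() == 4).astype(int)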

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcome 1: Self-Reported Productivity Impact
Secondary Outcome 2: Telemetry-tracked Productivity
Secondary Outcome 3: Knowledge Sharing
Secondary Outcome 4: Intention to Continue Use


Secondary Outcomes (explanation)
Secondary Outcome 1: Self-Reported Productivity Impact
Definition: Perceived impact of AI tool on work productivity
Measurement: Survey item: "To what extent has [the AI tool] improved your productivity?"
Scale: 5-point Likert (1 = Not at all, 5 = Very much)
Timing: Monthly surveys (months 1, 2, 3)

Secondary Outcome 2: Telemetry-tracked Productivity
Definition: Change in time to complete standard engineering tasks, e.g., how often engineers compile their code
Measurement: Telemetry-tracked time between compilations, compile errors, and number of project opens/closes
Unit: Percentage change from baseline
Timing: Monthly, compared to pre-rollout baseline
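
A minimal sketch of the percentage-change construction, assuming a monthly engineer-level panel with hypothetical column names (month 0 denoting the pre-rollout baseline):

    import pandas as pd

    # Hypothetical panel: one row per engineer-month with a telemetry-derived
    # metric, e.g., mean minutes between compilations.
    panel = pd.read_csv("productivity_panel.csv")

    # Each engineer's own pre-rollout value (month 0) serves as the baseline.
    base = (panel[panel["month"] == 0]
            .rename(columns={"minutes_between_compiles": "baseline"})
            [["engineer_id", "baseline"]])

    panel = panel.merge(base, on="engineer_id")
    panel["pct_change"] = 100 * (panel["minutes_between_compiles"] / panel["baseline"] - 1)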

Secondary Outcome 3: Knowledge Sharing
Definition: Diffusion of AI tool knowledge within teams
Measurement: Survey items on frequency of discussing AI tool with colleagues and whether the participant taught others
Unit: Count (discussions per week) and binary (taught others: yes/no)
Timing: Monthly surveys

Secondary Outcome 4: Intention to Continue Use
Definition: Stated intention to use the AI tool after the trial period
Measurement: Survey item: "How likely are you to continue using [the AI tool] after the trial period?"
Scale: 5-point Likert (1 = Very unlikely, 5 = Very likely)
Timing: End-of-trial survey (month 3)

Experimental Design

Experimental Design
This is a three-arm cluster-randomized controlled trial comparing incentive structures for AI technology adoption. Engineering teams are the unit of randomization, with all members of a team assigned to the same treatment arm.
Experimental Design Details
Not available
Randomization Method
Computer-generated random assignment using pre-specified random seeds for reproducibility. Randomization is conducted by the research team using a randomization tool that assigns teams to arms based on random number generation.
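As a minimal illustration, seeded team-level assignment of the kind described could look like the following Python sketch; the seed value, arm labels, and team identifiers are hypothetical, not the actual randomization code:

    import random

    ARMS = ["control", "individual_incentives", "manager_team_incentives"]
    SEED = 20260201  # a pre-specified seed fixed before assignment (value hypothetical)

    def assign_teams(team_ids, seed=SEED):
        """Reproducibly assign each team (cluster) to one of the three arms."""
        rng = random.Random(seed)
        teams = sorted(team_ids)  # fix the ordering so the seed fully determines the output
        rng.shuffle(teams)
        # Deal teams round-robin into the arms for near-equal arm sizes.
        return {team: ARMS[i % len(ARMS)] for i, team in enumerate(teams)}

    # Example: 1500 anticipated teams -> roughly 500 per arm, as planned.
    assignment = assign_teams([f"team_{i:04d}" for i in range(1500)])

Because every member of a team inherits the team's assignment, this implements the clustered design described above.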
Randomization Unit
Team (cluster): All members of a team are assigned to the same treatment condition.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Anticipated total: 1500 teams (approx. 2000-4500 engineers)
Approached total: 4700 teams (all teams informed about the release of the new AI-powered productivity tool; the majority are likely not affected)

Average cluster size: 10
Range of cluster sizes: 2-121
Sample size: planned number of observations
Anticipated total: 2000-4500 engineers
Approached total: 38000 engineers (all individuals informed about the release of the new AI-powered productivity tool; the majority are likely not affected)
Sample size (or number of clusters) by treatment arms
Arm 1 (Control): ~500 teams (1000-1500 individuals)
Arm 2 (Individual Incentives): ~500 teams (1000-1500 individuals)
Arm 3 (Manager + Team Incentives): ~500 teams (1000-1500 individuals)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Design: Three-arm cluster-randomized trial with pairwise comparisons (Arm 2 vs. Control, Arm 3 vs. Control)
Unit: Engineering productivity (in %)
Standard deviation: 0.4 (assumed)
MDE: ~6% (assuming an intra-class correlation of 0.10)
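
For reference, the MDE under these assumptions follows from the standard two-sample formula with the variance inflated by the design effect 1 + (m - 1) * ICC. A sketch, where the effective cluster size and per-arm sample size are illustrative mid-range guesses rather than registered values:

    from math import sqrt
    from statistics import NormalDist

    def cluster_mde(sd, n_per_arm, cluster_size, icc, alpha=0.05, power=0.80):
        """MDE for a two-sided pairwise comparison of two equally sized arms,
        inflating the variance by the design effect 1 + (m - 1) * ICC."""
        deff = 1 + (cluster_size - 1) * icc
        z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
        return z * sd * sqrt(2 * deff / n_per_arm)

    # SD = 0.4 and ICC = 0.10 come from the registration; cluster_size = 3 and
    # n_per_arm = 1000 are assumptions. Yields roughly 0.055, i.e., ~5-6%,
    # in the same range as the registered MDE of ~6%.
    print(cluster_mde(sd=0.4, n_per_arm=1000, cluster_size=3, icc=0.10))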
Supporting Documents and Materials

Some information in this trial is not available to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number