Artificial Intelligence (AI)-augmented Culture and Leadership

Last registered on September 17, 2022


Trial Information

General Information

Artificial Intelligence (AI)-augmented Culture and Leadership
Initial registration date
September 05, 2021


First published
September 08, 2021, 2:42 PM EDT


Last updated
September 17, 2022, 2:55 PM EDT



Primary Investigator

University of Notre Dame

Other Primary Investigator(s)

PI Affiliation
Arizona State University
PI Affiliation
Santa Clara University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We conduct a field experiment in a large, public firm to test whether AI-based recommendations for workers and leaders can improve productivity individually and in a team setting. The AI-based recommendations are customized from comprehensive and granular data on employee activity throughout the day. We randomize the content and the nature of the recommendations, and aim to study how these interventions influence workers' productivity, work patterns, and cooperative behavior.
External Link(s)

Registration Citation

Grennan, Jillian, Christos Makridis and Michal Zator. 2022. "Artificial Intelligence (AI)-augmented Culture and Leadership." AEA RCT Registry. September 17.
Sponsors & Partners

There is information in this trial unavailable to the public. Use the button below to request access.

Experimental Details


We randomize the nature and content of AI-based recommendations that are shown to workers.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Worker productivity, measured with a company-specific proprietary measure, as well as with sales efficiency and various measures of task completion.
Worker cooperation, measured by worker e-mail activity.
Worker job satisfaction and self-assessed measures of training, development, etc., based on pre- and post-experiment surveys.
Primary Outcomes (explanation)
The exact outcomes will depend on data availability and on decisions of the company with which we are partnering, and thus may change.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We randomize teams of workers in a large public firm into several treatments that receive AI-based recommendations related to best practices and worker productivity. We randomize the content of recommendations by excluding some categories of recommendations for some workers.
Experimental Design Details
Randomization Method
Randomization is done in office by a computer. Each worker is assigned to treatment based on the value of a pseudo-random ID generated with standard statistical software. The randomization is conducted so as to achieve the desired size of the treatment cells.
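The assignment procedure described above can be sketched as follows. This is an illustrative reconstruction only: the registry does not disclose the actual software, seed, or arm structure, so the three-arm round-robin over a seeded shuffle below is an assumption consistent with "around 1/3 of the sample . . . in a single treatment arm."

```python
import random

def assign_arms(team_ids, n_arms=3, seed=12345):
    """Assign teams to treatment arms of (near-)equal size.

    Shuffling with a fixed seed mimics a pseudo-random ID; the
    round-robin over the shuffled list keeps arm sizes within one
    of each other. Seed and arm count are hypothetical.
    """
    ids = list(team_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    return {tid: i % n_arms for i, tid in enumerate(ids)}

# 103 teams (the planned number of clusters) split into three arms
arms = assign_arms(range(103))
```

With 103 teams and three arms, this yields arm sizes of 35, 34, and 34, i.e., roughly one third of the sample per arm.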
Randomization Unit
Team of workers
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
103 teams
Sample size: planned number of observations
792 workers
Sample size (or number of clusters) by treatment arms
At each level of randomization, around one third of the sample is assigned to each treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We estimate that we need a sample of at least N = 128. We use "Dynamic Treatment Effects of Job Training" by Jorge Rodriguez, Fernando Saltiel, and Sergio Urzua, forthcoming in the Journal of Applied Econometrics, as an analogue to our setting; they focus on earnings as a means of recovering the returns to job training. Given a desired power of 80% and a p-value of 0.05, their sample average log earnings with and without treatment of 6.23 and 6.14, respectively, together with their standard deviation of 0.57 and treatment effect of 1.7%, imply N = 630. This estimate, however, is highly sensitive to the standard deviation and the treatment effect. First, their upper bound on the treatment effect is 3.4%, and substituting it gives a much smaller N = 128. Second, their standard deviation of earnings is large and comes from a much less developed country where earnings are more heterogeneous. The 2020 American Time Use Survey suggests that the standard deviation of log earnings for full-time workers is 0.57. If we furthermore assume that our sample is more homogeneous, with a standard deviation closer to 0.40 (and a treatment effect of 3.7%), then we only need N = 68. For these reasons, we believe upwards of N = 128 is a reasonable target.

Institutional Review Boards (IRBs)

IRB Name
Duke University
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public. Use the button below to request access.



Is the intervention completed?
Data Collection Complete
Data Publication


Is public data available?

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials