AI Adoption, Manager Beliefs, and Hiring Intentions

Last registered on November 25, 2025

Pre-Trial

Trial Information

General Information

Title
AI Adoption, Manager Beliefs, and Hiring Intentions
RCT ID
AEARCTR-0017306
Initial registration date
November 21, 2025

First published
November 25, 2025, 8:04 AM EST

Locations

Region

Primary Investigator

Affiliation
University of Notre Dame

Other Primary Investigator(s)

PI Affiliation
University of Notre Dame
PI Affiliation
University of Notre Dame
PI Affiliation
University of Melbourne

Additional Trial Information

Status
In development
Start date
2025-11-21
End date
2025-12-25
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Artificial intelligence (AI) is expected to reshape how firms create value and organize work, yet adoption inside organizations remains uneven, and managers face uncertainty about productivity gains, labor impacts, and implementation barriers. Middle managers, who control day-to-day processes and staffing, play a central role in determining whether interest in AI translates into actual adoption or changes in workforce planning. Understanding how managers form views about AI and how those views translate into intended organizational actions is essential for anticipating the labor-market impacts of AI.

This study evaluates whether exposure to targeted information influences managers’ beliefs about AI and their stated intentions related to technology use and workforce planning. We will conduct a randomized survey with managers in the United States and the United Kingdom who meet baseline screening criteria. Participants will complete questions on their team, role, decision authority, and current use of technology, followed by a short information module.

Outcomes will be measured using post-module survey items covering intended adoption of AI-related tools, intended hiring or retention decisions, and related belief measures. The study will also assess heterogeneity by organizational structure, function, industry, firm size, and country context, contributing new evidence on how information shapes managerial decision-making around AI and labor demand.
External Link(s)

Registration Citation

Citation
Dominski, Jacob et al. 2025. "AI Adoption, Manager Beliefs, and Hiring Intentions." AEA RCT Registry. November 25. https://doi.org/10.1257/rct.17306-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The intervention consists of exposing managers to different informational modules about artificial intelligence (AI). Respondents are randomly assigned to view one of several brief information modules embedded within the survey. These modules vary the content of the information presented about AI, allowing us to study how different informational frames influence managerial beliefs, intended technology adoption, and planned staffing decisions.
Intervention (Hidden)
After completing baseline questions, respondents are randomly assigned to one of three information treatments:

(1) Control module: a short neutral video listing major AI companies.
(2) Productivity module: a short video summarizing empirical evidence on AI’s productivity-enhancing effects across tasks and industries.
(3) Labor-market module: a short video summarizing empirical evidence on AI’s labor-displacing effects and associated workforce reductions.

Each video lasts approximately two minutes and is followed by a content-based attention check. These variations constitute the intervention and are designed to shift respondents’ beliefs about AI in different directions. Post-video survey items measure intended AI adoption, intended hiring or layoffs, and belief mechanisms, including perceived benefits, perceived risks, productivity expectations, and ethical considerations.
Intervention Start Date
2025-11-21
Intervention End Date
2025-12-25

Primary Outcomes

Primary Outcomes (end points)
AI Adoption Intentions
AI Advocacy
Hiring Intentions
Hiring Advocacy
Primary Outcomes (explanation)
We construct four standardized indices capturing managers’ stated intentions regarding AI adoption and workforce decisions. Each index is built from multiple survey items: all component items are standardized, averaged (requiring at least half of the items to be non-missing), and re-standardized so that higher values reflect more pro-AI or more pro-hiring orientations. We will also examine the individual component items that make up each index to understand how specific survey questions contribute to the aggregated patterns.

The AI Adoption Index measures the intention to adopt new AI tools or expand existing use within the team. The AI Advocacy Index captures the respondent’s stated willingness to advocate for increasing or decreasing the use of AI tools within their team. The Hiring Intentions Index summarizes managers’ expectations about increasing, maintaining, or reducing headcount for the team’s primary occupation groups. The Hiring Advocacy Index captures respondents’ stated willingness to advocate for hiring additional employees or to advocate against layoffs or workforce reductions within their team.
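The index construction described above (standardize each component item, average across items requiring at least half to be non-missing, then re-standardize) can be sketched as follows. This is an illustrative reimplementation, not the registered analysis code; the function name, array layout, and missing-data convention (NaN) are our own assumptions.

```python
import numpy as np

def build_index(items, min_frac=0.5):
    """Construct a standardized index from a 2-D array of survey items.

    items: array of shape (n_respondents, n_items); NaN marks missing.
    Mirrors the registered procedure: z-score each item across the sample,
    average the available items per respondent (requiring at least
    `min_frac` of items non-missing), then z-score the resulting mean so
    the final index has mean 0 and standard deviation 1.
    """
    items = np.asarray(items, dtype=float)
    # Standardize each component item across the full sample, ignoring NaN.
    z = (items - np.nanmean(items, axis=0)) / np.nanstd(items, axis=0)
    # Average the available (non-missing) items for each respondent.
    avg = np.nanmean(z, axis=1)
    # Respondents with too many missing items get a missing index value.
    enough = np.mean(~np.isnan(items), axis=1) >= min_frac
    avg[~enough] = np.nan
    # Re-standardize the averaged score.
    return (avg - np.nanmean(avg)) / np.nanstd(avg)
```

Because the final step re-standardizes, treatment effects on these indices are naturally interpreted in standard-deviation units of the pooled sample.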

Secondary Outcomes

Secondary Outcomes (end points)
Benefit Beliefs
Risk Beliefs
Cost Beliefs
Ethical Considerations
Information Evaluation
Open-Ended Reflections
Heterogeneous Effects
Secondary Outcomes (explanation)
Secondary outcomes measure the belief channels and qualitative reasoning patterns that may accompany shifts in managerial intentions. Belief-related items (Benefit Beliefs, Risk Beliefs, Cost Beliefs, Ethical Considerations, and Information Evaluation) are collected using post-treatment agreement scales and are summarized both individually and through standardized indices. Each index is constructed using the same procedure as for the primary outcomes: all component items are standardized across the full sample, averaged, and then standardized again so that higher values consistently represent stronger orientation along the relevant belief dimension.

The Benefit Index summarizes beliefs about AI’s usefulness and expected productivity improvements. The Risk Index reflects perceived harmfulness, including concerns about AI-driven job disruption. The Cost Index captures respondents’ perceptions of the organizational requirements for effective AI adoption, including expected financial investment, time to realize returns, and the degree of operational or role-based changes needed. The Ethics Index measures the importance respondents assign to workforce welfare and ethical considerations when making AI-related decisions. The Information Evaluation Index captures how respondents evaluate the information module, including perceived credibility, informativeness, and bias.


We also analyze several open-ended responses in which managers describe their reasoning about AI adoption, staffing decisions, and their reactions to the information module. These qualitative data will be examined using thematic coding and standard text-analysis approaches, such as keyness analysis, topic modeling, or vector-based text embeddings, to contextualize the quantitative results.

In addition, we examine heterogeneous treatment effects across pre-specified moderator dimensions, including managerial authority, organizational structure, baseline AI use and attitudes, firm size, industry, country, and demographic characteristics. These analyses identify which types of managers or organizational contexts exhibit stronger or weaker responses to the information module.

Experimental Design

Experimental Design
We conduct an online survey experiment with managers in the United States and the United Kingdom. Participants are recruited through a professional survey platform and must satisfy screening criteria confirming that they currently supervise employees. The survey begins with baseline questions on demographics, managerial responsibilities, team composition, decision authority, and current use of workplace technologies.

Following these baseline questions, respondents are shown a brief information module embedded within the survey. They are then asked a series of follow-up questions about their views on AI in the workplace and their intended decisions related to technology use and team staffing. Random assignment to different versions of the information module enables us to estimate how exposure to information relates to managerial beliefs and stated intentions.
Experimental Design Details
After completing baseline questions on demographics, managerial responsibilities, team characteristics, firm size, decision authority, and current AI use, respondents are randomized (blocked by country, gender, and age) into one of several information-module conditions, as detailed in the Intervention section. Immediately after viewing their assigned module and completing an attention check, respondents answer post-treatment questions measuring intended AI adoption, intended staffing changes, and belief mechanisms.

The survey also includes several open-ended prompts in which managers describe their reasoning about AI adoption, hiring and layoff decisions, and their reactions to the information module.
Randomization Method
Randomization is implemented by the survey software (Qualtrics) and is blocked by country, gender, and age.
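The blocked assignment can be illustrated with a small sketch. In the study itself, randomization is handled by Qualtrics; the code below only mirrors the design, allocating respondents within each country × gender × age-band block to the three arms in a shuffled 40/30/30 pattern. All names (arm labels, function names, block keys) are our own illustrative choices.

```python
import random
from collections import defaultdict

# Within each block, every run of 10 assignments contains exactly
# 4 control, 3 productivity, and 3 labor-market slots, in random order.
ARM_PATTERN = ["control"] * 4 + ["productivity"] * 3 + ["labor_market"] * 3

def make_assigner(seed=0):
    rng = random.Random(seed)
    queues = defaultdict(list)  # one shuffled allocation queue per block

    def assign(country, gender, age_band):
        block = (country, gender, age_band)
        if not queues[block]:               # refill and reshuffle when exhausted
            queues[block] = ARM_PATTERN[:]
            rng.shuffle(queues[block])
        return queues[block].pop()

    return assign
```

This block-permutation approach guarantees that arm shares stay close to 40/30/30 within every stratum even if recruitment is uneven across blocks.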
Randomization Unit
Individual respondents.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
2,000 respondents (1,000 in the United States and 1,000 in the United Kingdom)
Sample size (or number of clusters) by treatment arms
Control: 40% of the sample (~800 respondents)
Treatment 1: 30% of the sample (~600 respondents)
Treatment 2: 30% of the sample (~600 respondents)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We report all power calculations and minimum detectable effects in the attached Pre-Analysis Plan. These calculations are based on variance estimates from our pilot and the planned sample of 2,000 managers.
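The registered power calculations are in the Pre-Analysis Plan; as a back-of-the-envelope illustration only, the two-sided minimum detectable effect for a simple comparison of means on a standardized outcome (SD = 1), comparing one treatment arm (~600) to control (~800) at 5% size and 80% power, can be computed as:

```python
from math import sqrt
from statistics import NormalDist

def mde(n_t, n_c, alpha=0.05, power=0.80, sd=1.0):
    """Two-sided minimum detectable effect for a difference in means."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sd * sqrt(1 / n_t + 1 / n_c)

# Treatment arm (~600) vs. control (~800), standardized outcome:
print(round(mde(600, 800), 3))  # ≈ 0.151 standard deviations
```

This simple formula ignores covariate adjustment and multiple-comparison corrections, so it should not be read as the study's registered MDE.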
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Notre Dame Institutional Review Board
IRB Approval Date
2025-08-25
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials