Making Workers AI Ready: Behavioral Interventions and AI Adoption Among Workers

Last registered on April 27, 2026

Pre-Trial

Trial Information

General Information

Title
Making Workers AI Ready: Behavioral Interventions and AI Adoption Among Workers
RCT ID
AEARCTR-0018144
Initial registration date
April 20, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 27, 2026, 11:02 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public. Use the button below to request access.

Request Information

Primary Investigator

Affiliation
University of Milan

Other Primary Investigator(s)

PI Affiliation
University of Pittsburgh
PI Affiliation
Ohio State University
PI Affiliation
University of Milan
PI Affiliation
University of Pittsburgh

Additional Trial Information

Status
In development
Start date
2026-04-20
End date
2027-05-05
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Many workers report feeling stressed or threatened by AI, even though existing evidence suggests limited negative effects on overall employment or mental health. Such perceptions can slow the adoption of AI tools and reduce engagement with reskilling opportunities. This study asks whether simple informational interventions and transparent AI use policies can immediately increase AI adoption and influence both task outcomes and workers' experience of using AI in a workplace setting.
Participants first complete a screening Survey (S0) designed to identify valid and attentive respondents. In Survey 1 (S1), participants are randomly assigned to one of four experimental conditions in a 2x2 factorial design crossing two independent interventions. The first is an informational intervention highlighting evidence that AI tools can increase productivity, designed to encourage hands-on engagement with AI in a low-stakes context. The second is a transparent AI policy statement, presented before the writing task, which explicitly informs participants that they are free to use or not use AI tools and that this choice has no consequences for their payment or future study participation. This statement aims to reduce concerns about being monitored or judged for their AI use decisions, thereby lowering social and psychological barriers to experimentation.
Primary outcomes are AI adoption during the task, productivity, time spent on the task, time allocated to different task components (brainstorming, drafting, and editing), self-efficacy, realism, and enjoyment.
As secondary outcomes, we will also analyze participants' perceptions of AI's broader effects on work, productivity, health, and well-being. We also measure expected changes in participants' use of generative AI tools over the next six months.
To explore heterogeneity in treatment effects, we additionally measure AI-related attitudes (ATTARI-12 scale), trust in AI, growth mindset, reskilling intentions, empowerment, sense of agency, perceived difficulty, guilt, perceived cheating, loss and ambiguity aversion, ethical concerns about AI, attitudes towards AI privacy and transparency, beliefs about AI's labor market impacts by age and gender, and collaboration at work. We also collect a rich set of sociodemographic and employment characteristics.
By crossing an informational intervention with a transparency statement, the study provides early experimental evidence on whether clear communication about the permissibility of AI use, combined with positive information about AI capabilities, can immediately increase AI adoption and support more proactive adaptation to technological change in the workplace.
External Link(s)

Registration Citation

Citation
Giuntella, Osea et al. 2026. "Making Workers AI Ready: Behavioral Interventions and AI Adoption Among Workers." AEA RCT Registry. April 27. https://doi.org/10.1257/rct.18144-1.0
Experimental Details

Interventions

Intervention(s)
This study examines whether informational interventions and transparency can reduce resistance toward artificial intelligence (AI) in the workplace and increase openness to AI adoption. Many workers report feeling stressed or threatened by AI, even though existing evidence suggests limited negative effects on overall employment or mental health. Such perceptions can slow the adoption of AI tools and reduce engagement with reskilling opportunities.

Following the approach of Celebi et al. (2026), participants complete an initial screening Survey 0 (S0) designed to identify valid, attentive respondents before entering the main study. We will focus on US-based workers aged 18-65. This screening stage includes attention checks, comprehension questions, and basic demographic measures to ensure that participants are reading instructions carefully and engaging thoughtfully with the survey content. Only participants who successfully meet the screening criteria are invited to continue to Survey 1 (S1).

This prescreening approach helps improve data quality and ensures that subsequent intervention effects are measured among participants who are genuinely engaged with the study materials.

Participants who pass the data quality checks are invited to S1, where they are randomly assigned to one of four conditions in a 2x2 factorial design crossing two independent interventions.

The first is an informational intervention highlighting evidence that the use of AI tools can increase productivity, designed to encourage hands-on engagement with generative AI tools in a low-stakes context.

The second is an AI transparency policy statement, presented before participants complete a productivity task, which explicitly informs participants that they are free to use or not use AI tools and that this choice has no consequences for their payment or future study participation. This transparency statement is designed to reduce concerns about being monitored or judged for their AI use decisions, thereby lowering social and psychological barriers to experimentation with AI tools. Participants assigned to the transparency condition also complete a comprehension question to verify their understanding of the policy statement.

Participants will also be invited to complete a follow-up Survey 2 (S2), contingent on funding. The purpose of S2 is to measure the longer-run effects of the interventions on AI-related attitudes, AI adoption, perceptions, well-being, health, and productivity-related outcomes.
Intervention Start Date
2026-04-21
Intervention End Date
2026-05-26

Primary Outcomes

Primary Outcomes (end points)
1) Task performance
Productivity in the writing or coding task, measured by quality ratings and completion time. A standardized productivity index will be constructed from these measures.
2) Task experience
Task-specific self-efficacy, perceived realism, and enjoyment.
3) Behavioral outcomes
AI adoption during the task, measured by whether participants choose to use a generative AI tool when its use is optional; persistence and engagement in completing the task, including time spent on the task and time allocated across different task components (brainstorming, drafting, and editing); and self-reported AI use outside the study.
We will adjust for multiple hypothesis testing by organizing outcomes into pre-specified families and controlling the false discovery rate (FDR) within each family using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995; Anderson, 2008).
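For reference, the Benjamini-Hochberg step-up procedure applied within one outcome family can be sketched as follows (the p-values in the example are hypothetical, not study data):

```python
# Sketch of the Benjamini-Hochberg step-up procedure applied within one
# pre-specified outcome family.
def benjamini_hochberg(p_values, q=0.05):
    """Return a list of booleans marking which hypotheses are rejected
    while controlling the false discovery rate at level q."""
    m = len(p_values)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    # Find the largest rank k with p_(k) <= (k/m) * q, then reject all
    # hypotheses up to and including that rank (step-up rule).
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            max_k = rank
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            rejected[idx] = True
    return rejected

# Example: one family of four outcomes (hypothetical p-values).
print(benjamini_hochberg([0.001, 0.020, 0.039, 0.600]))
# → [True, True, False, False]
```

Note the step-up logic: a p-value is rejected whenever some larger p-value still clears its own threshold, which is what distinguishes this procedure from a simple per-test cutoff.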
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
As secondary outcomes, we will also analyze participants' perceptions of AI's broader effects on work, productivity, health, and well-being. We also measure expected changes in participants' use of generative AI tools over the next six months. Post-intervention, we also measure perceived benefits of AI (Humlum and Vestergaard, 2025) and predictions for labor market effects (e.g., displacement, if any).
We will also explore heterogeneous treatment effects across a set of pre-specified baseline characteristics. These include psychological and attitudinal measures such as general attitudes toward AI (ATTARI-12), AI reliability, trust in AI, growth mindset, reskilling intentions, feelings of guilt, ethical, privacy, and transparency concerns, imposter syndrome, and sense of agency. We will also examine heterogeneity by beliefs about AI's labor market impacts across workers of different ages and genders, as well as by prior work experiences, including displacement, working hours, and collaboration at work. In addition, we will consider heterogeneity across individual characteristics such as occupation, industry, education, field of degree, age, gender, socioeconomic background, prior AI exposure and familiarity, loss aversion, risk preferences, ambiguity aversion, time preferences, interpersonal trust, trust in science, Big Five personality traits, cognitive skills, work experience, decision involvement, firm type, and firm size.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study uses a randomized online experiment to test whether informational interventions and transparent AI-use policies can increase AI adoption and influence both task outcomes and workers’ experience with AI.

Stage 0 (S0): Screening for valid subjects

Following the approach of Celebi et al. (2026), participants first complete a screening survey designed to identify valid and attentive respondents before entering the main study. This stage includes attention checks, comprehension questions, and basic demographic measures to ensure that participants are reading instructions carefully and engaging thoughtfully with the survey content. Only participants who successfully pass the screening criteria are invited to continue to Survey 1 (S1). Based on previous pilots, we expect approximately 3,000 participants to pass the S0 screening and complete S1.

Survey 1 (S1): 2x2 factorial design

In S1, participants are randomly assigned to one of four conditions generated by crossing two independent interventions in a 2x2 factorial design. Randomization is implemented automatically when participants begin the survey.

The first factor is an informational AI-productivity intervention. Participants assigned to this treatment receive a brief message highlighting evidence from prior research showing that AI tools can increase productivity. This intervention is designed to encourage hands-on engagement with AI in a low-stakes context. Participants in the control condition do not receive this information.

The second factor is a transparent AI-use policy statement. Participants assigned to the transparency condition see the following statement before completing the task: “In this task, you may choose whether or not to use artificial intelligence (AI) tools (e.g., ChatGPT, Claude, Copilot, Gemini, Perplexity, or similar AI tools).

In many workplaces today, employees are allowed to use AI tools to support their work, while remaining responsible for the final output. Similarly, in this task, you are free to decide whether to use AI tools or not.

Your choice will not affect your payment or your participation in this study.”
This statement is designed to reduce concerns about being monitored or judged for AI-use decisions, thereby lowering social and psychological barriers to experimentation. Participants in the control condition do not receive this statement. Participants in the transparency condition also complete a comprehension question to verify that they have read and understood the policy statement.

The crossing of these two factors produces four experimental conditions: (A) Information + Transparency, (B) Information only, (C) Transparency only, and (D) Control, with approximately 750 participants per cell.

Survey structure and outcome measurement

The survey is structured to distinguish pre-treatment moderators from post-treatment outcomes. Before treatment, participants complete a battery of attitudinal, behavioral, and psychological measures, with question order randomized across participants. These include AI-related attitudes (ATTARI-12), baseline self-efficacy, growth mindset, trust in AI, trust in technology, trust in science, empowerment, sense of agency, feelings of guilt, ethical concerns, privacy and transparency attitudes, beliefs about AI's labor market impacts by age and gender, loss aversion, risk preferences, ambiguity aversion, time preferences, Big Five personality traits, cognitive skills, and collaboration and interdependence. These variables are measured before treatment to reduce priming and are used as pre-specified moderators in the heterogeneity analysis.

Following exposure to the assigned intervention(s), participants complete an occupation-specific incentivized productivity task, such as a writing or coding task, with optional access to a generative AI tool.

Primary outcomes are organized into three families. First, task performance outcomes include productivity, measured by quality ratings and completion time, combined into a standardized productivity index. Second, task experience outcomes include task-specific self-efficacy, perceived realism, and enjoyment. Third, behavioral outcomes include AI adoption during the task, persistence and engagement, time spent, time allocation across different task components (brainstorming, drafting, and editing), and self-reported AI use outside the study.

Post-intervention, we also measure perceived benefits of AI (Humlum and Vestergaard, 2025) and predictions for labor market effects (e.g., displacement, if any).
As secondary outcomes, we will analyze participants' perceptions of AI's broader effects on work, productivity, health, and well-being, as well as self-reported well-being and health (life satisfaction, job satisfaction, mental health) measured post-treatment in S1 and again at S2. We also measure expected changes in participants' use of generative AI tools over the next six months.

Sociodemographic and employment characteristics are also collected after treatment, as these are stable individual attributes that cannot be affected by the intervention.

Following S1, participants will be invited to complete a follow-up Survey 2 (S2), contingent on funding. The purpose of S2 is to measure the longer-run effects of the interventions on AI-related attitudes, AI adoption, perceptions, well-being, health, and productivity-related outcomes. S2 will also assess whether the effects observed in S1 persist over time.
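As an illustration of how a standardized summary index of this kind is often built (a hypothetical sketch; the registration does not specify the exact construction, and the function and variable names below are illustrative), each component can be z-scored against the control group, signed so that higher values mean better performance, and averaged:

```python
from statistics import mean, stdev

# Hypothetical sketch of one common way to build a standardized index:
# z-score each component against the control group, orient each measure so
# that higher is better, and take an equally weighted average.
def standardized_index(components, control_flags, higher_is_better):
    """components: list of lists, one list of values per measure;
    control_flags: True where the observation belongs to the control group;
    higher_is_better: one bool per measure (False flips the sign,
    e.g. for completion time, where lower is better)."""
    n = len(components[0])
    z_scores = []
    for values, better in zip(components, higher_is_better):
        ctrl = [v for v, c in zip(values, control_flags) if c]
        mu, sd = mean(ctrl), stdev(ctrl)
        sign = 1 if better else -1
        z_scores.append([sign * (v - mu) / sd for v in values])
    # Equally weighted average of the z-scored components.
    return [mean(z[i] for z in z_scores) for i in range(n)]

# Toy example: quality ratings (higher better) and completion time in
# minutes (lower better) for six participants, first three in control.
quality = [3, 4, 5, 5, 6, 6]
minutes = [30, 25, 20, 18, 15, 22]
flags = [True, True, True, False, False, False]
index = standardized_index([quality, minutes], flags, [True, False])
```

By construction, the index averages to zero in the control group, so treatment-group values can be read directly as effect sizes in control-group standard deviation units.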
Experimental Design Details
Not available
Randomization Method
Participants will be individually randomized using the randomization functions embedded in the online survey platform (e.g., Qualtrics or Prolific assignment tools). Randomization will be implemented automatically at the time participants begin Survey 1 (S1) to ensure allocation concealment.
In Survey 1 (S1), participants will be randomly assigned to one of four conditions resulting from the 2x2 factorial design crossing two independent factors: (1) whether they receive the informational AI-productivity intervention or not, and (2) whether they receive the AI policy transparency statement or not. Assignment uses balanced block randomization (e.g., the survey platform's evenly-present option) to ensure roughly equal allocation across the four conditions.
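A minimal sketch of what such balanced block randomization does (illustrative only; the actual assignment is handled internally by the survey platform):

```python
import random

# Illustrative sketch (not the platform's internal algorithm): balanced
# assignment to the four cells of the 2x2 design via shuffled blocks.
CONDITIONS = [
    ("info", "transparency"),       # A: Information + Transparency
    ("info", "no transparency"),    # B: Information only
    ("no info", "transparency"),    # C: Transparency only
    ("no info", "no transparency"), # D: Control
]

def blocked_assignments(n_participants, seed=None):
    """Return one condition per participant; within every block of four
    consecutive participants, each condition appears exactly once."""
    rng = random.Random(seed)
    assigned = []
    while len(assigned) < n_participants:
        block = CONDITIONS[:]
        rng.shuffle(block)
        assigned.extend(block)
    return assigned[:n_participants]

# With ~3,000 participants, each cell receives exactly 750 assignments.
plan = blocked_assignments(3000, seed=1)
counts = {c: plan.count(c) for c in CONDITIONS}
```

Blocking in groups of four guarantees near-exact balance across the cells at every point in the fielding period, rather than only in expectation as under simple randomization.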
Randomization Unit
In Survey 1 (S1), each participant is independently assigned to one of four conditions resulting from the 2x2 factorial design crossing of the informational intervention and the AI policy transparency statement.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
We expect approximately 3,000 participants to pass the S0 screening and proceed to complete S1.
Sample size (or number of clusters) by treatment arms
Stage 0 (S0):

Initial screening: up to ~9,000 participants, targeting ~3,000 valid respondents for S1
Expected valid participants proceeding to and completing S1 (based on previous pilots): ~3,000

Survey 1 (S1) – Four experimental conditions (2x2 factorial design):

Condition A (Information + Transparency): ~750 participants
Condition B (Information only): ~750 participants
Condition C (Transparency only): ~750 participants
Condition D (Pure Control): ~750 participants
Total: ~3,000 participants
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations focus on detecting treatment effects in Survey 1, using main effects from the 2x2 factorial design. We set α = 0.05 (two-sided) and a target power of 0.80. For main effects, each treatment arm is compared against its pooled counterpart (e.g., all information participants vs. all no-information participants), yielding N = 1,500 per comparison. For interaction effects (e.g., the effect of the information within a given transparency condition), the relevant comparison involves N = 750 per arm. Power calculations assume AI use at around 50% at baseline and a standard deviation of 1.2 points on Likert-scale outcomes, both in line with previous pilots. Under these assumptions, the minimum detectable effect (MDE) for main effects is 5.1 percentage points for binary outcomes and 0.123 points on a 6-point Likert scale. For interaction comparisons, the MDE is 7.23 percentage points for binary outcomes and 0.174 points on the Likert scale.
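The figures above can be reproduced with standard two-group normal-approximation formulas; the following sketch uses only the assumptions stated in the registration (alpha = 0.05 two-sided, power = 0.80, 50% baseline AI use, Likert SD of 1.2):

```python
from statistics import NormalDist

# Critical-value sum for alpha = 0.05 (two-sided) and power = 0.80:
# z_{alpha/2} + z_{power} ≈ 1.96 + 0.84.
z = NormalDist().inv_cdf(1 - 0.05 / 2) + NormalDist().inv_cdf(0.80)

def mde_binary(p, n_per_arm):
    # MDE for a difference in proportions around baseline rate p.
    return z * (2 * p * (1 - p) / n_per_arm) ** 0.5

def mde_continuous(sd, n_per_arm):
    # MDE for a difference in means with common standard deviation sd.
    return z * sd * (2 / n_per_arm) ** 0.5

# Main effects pool across the other factor: 1,500 per comparison arm.
print(round(mde_binary(0.5, 1500), 3))      # → 0.051 (5.1 pp)
print(round(mde_continuous(1.2, 1500), 3))  # → 0.123 Likert points
# Interaction / simple-effect comparisons: 750 per arm.
print(round(mde_binary(0.5, 750), 4))       # → 0.0723 (7.23 pp)
print(round(mde_continuous(1.2, 750), 3))   # → 0.174 Likert points
```

Note how halving the per-arm sample (1,500 to 750) inflates the MDE by a factor of sqrt(2), which is exactly the gap between the main-effect and interaction-comparison figures reported above.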
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh
IRB Approval Date
2025-04-04
IRB Approval Number
STUDY25030116
Analysis Plan

There is information in this trial unavailable to the public. Use the button below to request access.

Request Information