Rewiring Readiness: Mindsets and Reskilling for the AI Age

Last registered on February 18, 2026

Pre-Trial

Trial Information

General Information

Title
Rewiring Readiness: Mindsets and Reskilling for the AI Age
RCT ID
AEARCTR-0016998
Initial registration date
February 10, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 18, 2026, 5:55 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Pittsburgh

Other Primary Investigator(s)

PI Affiliation
University of Milan
PI Affiliation
University of Milan
PI Affiliation
Ohio State University

Additional Trial Information

Status
In development
Start date
2026-02-11
End date
2026-07-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines whether brief, scalable online interventions can reduce anxiety and resistance toward artificial intelligence (AI) in the workplace and increase openness to reskilling. Many workers report feeling stressed or threatened by AI, even though existing evidence suggests limited negative effects on overall employment or mental health. Such perceptions can slow adoption of AI tools and reduce engagement with reskilling opportunities.

The experiment consists of multiple components implemented across survey waves. In an initial survey, participants receive an information intervention designed to nudge them toward using AI to complete a productive task. This component provides participants with basic information about AI capabilities and encourages hands-on engagement with an AI tool in a low-stakes context, establishing initial exposure and experience before subsequent interventions.

Following this initial intervention, the main experiment combines a psychological mindset intervention with a practical reskilling intervention, each paired with a clearly defined control condition. The mindset intervention builds on Yeager et al. (2022), which shows that combining a growth mindset, defined as the belief that abilities can improve with effort, with a stress-is-enhancing mindset, defined as the view that stress responses can support learning and performance, can improve motivation, well-being, and resilience. Participants assigned to this condition complete a short online module conveying these ideas, along with reflective saying-is-believing writing exercises. The control condition for the mindset intervention consists of a video containing neutral neuroscience content, matched in format and length but unrelated to mindsets, stress, or learning.

The reskilling intervention consists of a short, structured guide to effective prompting, designed to teach participants how to interact productively with an AI assistant, for example by framing tasks clearly, providing relevant context, and iterating on outputs. This component is intended to lower barriers to AI use, increase self-efficacy, and encourage hands-on experimentation with AI tools during simple productivity tasks such as writing or coding. The control condition for the reskilling intervention is a neutral educational video on the history of AI prior to the 21st century, which provides general background information but does not involve skill acquisition or guidance on AI use.

Participants are randomly assigned across conditions following the initial information intervention. Outcomes include changes in AI-related attitudes measured using the ATTARI-12 scale, self-efficacy, trust in AI, AI use, and performance on productivity tasks completed with and without AI assistance. By combining an initial information nudge with psychological mindset framing and prompt-based reskilling, each evaluated relative to appropriate control conditions, the study provides early experimental evidence on whether a layered approach to brief interventions can increase acceptance of AI and support healthier, more proactive adaptation to technological change in the workplace.
External Link(s)

Registration Citation

Citation
Giuntella, Osea et al. 2026. "Rewiring Readiness: Mindsets and Reskilling for the AI Age." AEA RCT Registry. February 18. https://doi.org/10.1257/rct.16998-1.0
Experimental Details

Interventions

Intervention(s)
This study examines whether brief, scalable online interventions can reduce anxiety and resistance toward artificial intelligence (AI) in the workplace and increase openness to reskilling. Many workers report feeling stressed or threatened by AI, even though existing evidence suggests limited negative effects on overall employment or mental health. Such perceptions can slow adoption of AI tools and reduce engagement with reskilling opportunities.

Following the approach of Celebi et al. (2025), participants complete an initial screening Survey 0 (S0) designed to identify valid, attentive respondents before entering the main study. This screening stage includes attention checks, comprehension questions, and basic demographic measures to ensure participants are reading instructions carefully and engaging thoughtfully with survey content. Only participants who successfully complete the screening criteria are invited to continue to Survey 1 (S1). This pre-screening approach helps improve data quality and ensures that subsequent intervention effects are measured among participants who are genuinely engaged with the study materials.

The study follows a longitudinal, two-survey design, with measurements collected at two points in time, approximately two weeks apart.

Survey 1 (S1)
In Survey 1, participants complete baseline measures of AI-related attitudes, self-efficacy, trust in AI, and other relevant outcomes. Participants are then randomly assigned to receive either a brief informational nudge or no nudge. The nudge highlights findings from prior research showing that the use of AI tools can increase productivity. After this manipulation, participants complete a short productivity task with optional access to an AI assistant. This initial component tests whether the informational nudge immediately increases AI use and task engagement.

Survey 2 (S2)
In Survey 2, conducted approximately two weeks after S1, participants are randomly allocated to one of four conditions corresponding to two interventions and their respective control conditions.

Participants assigned to the synergistic mindset intervention complete a brief online module adapted from Yeager et al. (2022). The module combines two evidence-based components: a growth mindset, defined as the belief that abilities and skills can be developed with effort, practice, and learning (Dweck, 2006), and a stress-is-enhancing mindset, defined as the idea that the body's stress response can be reframed as a source of energy and focus that supports performance and growth. The intervention consists of a short video, interactive content, and reflective writing exercises that help participants apply these ideas to challenges related to AI and workplace change, emphasizing that adapting to new technologies such as AI can be stressful but also an opportunity for learning and skill development.

Participants assigned to the reskilling intervention complete a short, structured guide to effective prompting. This guide introduces practical strategies for interacting productively with an AI assistant, including how to formulate clear prompts, provide relevant context, and iteratively refine outputs. The goal of the reskilling intervention is to lower barriers to AI use, increase self-efficacy, and encourage hands-on experimentation with AI tools during simple productivity tasks.

Participants assigned to the mindset-control condition complete a module of similar length, tone, and engagement presenting neutral neuroscience content about brain anatomy and basic functions, without motivational or learning-related framing. Participants assigned to the reskilling control condition view a neutral educational video on the history of AI prior to the 21st century, which provides general background information but does not involve skill acquisition or guidance on AI use.

All S2 modules are designed to be completed online in approximately 7 minutes. After completing the assigned module, participants again perform short productivity tasks, some of which include optional access to an AI assistant. Outcomes measured in S2 include AI-related attitudes (measured using the ATTARI-12 scale), trust in AI, self-efficacy, AI use during tasks, and task performance.

By combining an initial informational nudge with psychological mindset framing and prompt-based reskilling, each evaluated relative to appropriate control conditions across two survey waves, this study provides early experimental evidence on whether brief interventions can increase acceptance of AI and support healthier, more proactive adaptation to technological change in the workplace.

Sample and Implementation
The study will be implemented in phases. An initial pilot wave will recruit approximately 300 participants for S0 screening, targeting 200 valid participants to enter at baseline (S1). In Survey 2, these participants will be allocated across the four experimental groups: two treatment arms (approximately 67 participants each) and two neutral control arms (approximately 33 each, pooled into a control group of about 66 observations). This pilot phase will test the feasibility of the design and assess preliminary treatment effects. Pending funding, the sample will be expanded to increase statistical power and provide more robust estimates of intervention effects. Ideally, the full-scale study aims to collect up to 500 observations per experimental treatment group and to follow up with participants in a final longitudinal survey (Survey 3).
Intervention Start Date
2026-02-11
Intervention End Date
2026-07-01

Primary Outcomes

Primary Outcomes (end points)
The study will measure how brief synergistic-mindset and reskilling interventions affect workers' attitudes, confidence, and performance related to artificial intelligence (AI).

1) Task performance:
Performance (productivity) in short writing or coding tasks completed with and without access to an AI assistant, measured by accuracy, quality ratings, and completion time.

2) Attitudinal outcomes:
General attitudes toward AI, measured using the ATTARI-12 scale (cognitive, affective, and behavioral subcomponents).
Trust and perceived reliability of AI tools (e.g., "I can rely on AI to improve my work").
Self-efficacy for learning and using new digital tools (12-item SECS scale).
Growth mindset items (e.g., "I can get better at using AI through practice and effort").

3) Behavioral outcomes:
AI adoption and use: Willingness to engage with AI tools, measured by whether participants choose to use the AI assistant when optional during productivity tasks.

4) Learning and reskilling intentions:
Intentions to invest in AI-related learning or training, measured by stated willingness to pay for improving growth mindset and AI skills.

5) Well-being and health:
Life satisfaction, job satisfaction, health satisfaction, physical and mental health.

Outcomes will be measured both in Survey 1 and immediately after the intervention in Survey 2, in order to assess the immediate impact of the mindset and reskilling interventions.
We will adjust for multiple hypothesis testing by organizing outcomes into pre-specified families and controlling the false discovery rate (FDR) within each family using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995; Anderson, 2008).
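As a minimal illustration of this adjustment (the family labels and p-values below are hypothetical placeholders, not study results, and the actual analysis code may differ), the following Python sketch applies the Benjamini-Hochberg step-up procedure separately within pre-specified outcome families:

    # Illustrative sketch: Benjamini-Hochberg FDR adjustment within pre-specified
    # outcome families. Family names and p-values are hypothetical placeholders.
    import numpy as np

    def bh_adjust(pvals, q=0.05):
        """Return BH-adjusted p-values (q-values) and rejection flags at FDR level q."""
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)                        # rank p-values ascending
        ranked = p[order] * m / np.arange(1, m + 1)  # p_(i) * m / i
        adjusted = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
        adjusted = np.clip(adjusted, 0, 1)
        out = np.empty(m)
        out[order] = adjusted
        return out, out <= q

    # Hypothetical p-values for two outcome families
    families = {
        "attitudes": [0.012, 0.049, 0.20, 0.004],  # e.g., ATTARI-12, trust, self-efficacy, mindset items
        "behavior": [0.03, 0.61],                  # e.g., AI use, task performance
    }
    for name, pvals in families.items():
        qvals, reject = bh_adjust(pvals, q=0.05)
        print(name, np.round(qvals, 3), reject)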
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The study will also explore a set of secondary and exploratory outcomes to better understand the mechanisms through which the mindset and reskilling interventions may influence adaptation to AI-driven change.

Psychological mechanisms and beliefs: Feelings of empowerment, enjoyment, difficulty, guilt, cheating, imposter syndrome, sense of agency, loss aversion, and ethical/privacy/transparency concerns when working with or alongside AI. Beliefs about AI's labor-market impacts on workers by age and gender, and about working experiences (displacement, working hours/days, collaboration at work).
Behavioral engagement: Engagement and persistence in completing AI-related tasks (e.g., time spent, number of attempts, dropout). Open-ended reflections coded for mindset-consistent language (e.g., references to learning, effort, or reappraising stress). Self-reported AI use outside the study (e.g., frequency of AI-assisted work or learning activities during the follow-up period).
Heterogeneity: Differences in effects on the outcomes by occupation, education, age, gender, socioeconomic characteristics, baseline AI attitudes (i.e., feelings of guilt, ethical/privacy/transparency concerns, ATTARI, self-efficacy), prior AI exposure/familiarity (Humlum and Vestergaard, 2025), loss aversion, risk preferences, trust, time preferences, Big Five personality traits, cognitive skills, collaboration at work, and planned retention with current employer.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study uses a randomized online experiment to test whether brief psychological and educational interventions can improve workers' performance and their attitudes and behaviors toward artificial intelligence (AI).

Stage 0 (S0): Screening for Valid Subjects
Following the approach of Celebi et al. (2025), participants first complete an initial screening survey (S0) designed to identify valid, attentive respondents before entering the main study. This screening stage includes attention checks, comprehension questions, and basic demographic measures to ensure participants are reading instructions carefully and engaging thoughtfully with survey content. Only participants who successfully complete the screening criteria are invited to continue to Survey 1. This pre-screening approach helps improve data quality and ensures that subsequent intervention effects are measured among participants who are genuinely engaged with the study materials. Approximately 300 participants will be recruited for S0 screening, with a target of 200 valid participants proceeding to Survey 1.

Longitudinal Study Design
The study follows a longitudinal, two-survey design. In Survey 1 (S1), participants complete baseline measures and are randomly assigned to receive either a brief informational nudge highlighting evidence that AI use can increase productivity, or no nudge. After this manipulation, participants complete a short productivity task with optional access to an AI assistant, allowing us to assess whether the nudge immediately affects self-reported and objective measures of AI use during task performance.

In Survey 2 (S2), conducted approximately two weeks after S1, participants are randomly assigned to one of four groups, corresponding to the two interventions (i.e., growth mindset and reskilling) and their respective control conditions. Randomization in S2 occurs at the individual level within occupation and is stratified by the nudge condition received in S1.

1) Synergistic Mindset Intervention (Treatment 1):
Participants assigned to this condition complete a brief online synergistic mindset video adapted from Yeager et al. (2022). The video combines two scientific ideas: a growth mindset, defined as the belief that abilities can improve through effort and learning; and a stress-is-enhancing mindset, defined as the belief that stress can be reframed as energy that supports focus and performance. Participants work through short readings and then complete a reflective writing exercise applying these ideas to challenges related to AI and workplace change. The duration of the video is approximately 7 minutes.

2) Neuroscience Video (Control 1):
Participants in this condition complete a module of similar length, tone, and engagement containing neutral neuroscience content, such as facts about brain anatomy and function, without motivational framing or reference to AI.

3) Reskilling Intervention (Treatment 2):
Participants assigned to this condition watch a short video presenting a structured guide to effective prompting, which provides practical guidance on how to interact productively with a generative AI tool, including how to formulate clear prompts, provide relevant context, and iteratively refine outputs. The duration of the video is approximately 7 minutes.

4) History of AI prior to the 21st Century Video (Control 2):
Participants in this condition view a neutral educational video on the history of AI prior to the 21st century, which provides general background information but does not involve skill acquisition or guidance on AI use.

Outcome Measures:
After completing the assigned module in S2, all participants perform an occupation-specific incentivized productivity task, in which they complete a writing or coding task with optional use of an AI assistant. We then measure the primary and secondary outcomes described above.

Sample and Implementation
The study will be implemented in phases. An initial pilot wave will recruit approximately 300 participants for S0 screening, targeting 200 valid participants to enter at baseline (S1). In Survey 2, these participants will be allocated across the four experimental groups: two treatment arms (approximately 67 participants each) and two neutral control arms (approximately 33 each, pooled into a control group of about 66 observations). This pilot phase will test the feasibility of the design and assess preliminary treatment effects.

Pending funding, the sample will be expanded to increase statistical power and provide more robust estimates of intervention effects. Ideally, the full-scale study aims to collect up to 500 observations per experimental treatment group and to follow up with participants in a final longitudinal survey (Survey 3).
Experimental Design Details
Not available
Randomization Method
Participants will be individually randomized using the randomization functions embedded in the online survey platform (e.g., Qualtrics or Prolific assignment tools). Randomization will be implemented automatically at the time participants begin each survey to ensure allocation concealment.

In Survey 1 (S1), participants will be randomly assigned with equal probability to receive either the informational AI-productivity nudge or no nudge. In Survey 2, participants will be randomly allocated across the four experimental groups: two treatment arms (the reskilling and mindset interventions) and two neutral control arms (the history-of-AI and neuroscience content). For analysis, the two control arms will be pooled, resulting in three groups of approximately equal size.

Randomization in S2 will be stratified by the nudge condition received in S1.
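The live assignment will be performed by the survey platform's randomizer; purely as an illustration of the intended allocation logic, the Python sketch below block-randomizes individuals to the four S2 arms in a roughly 2:2:1:1 ratio within strata defined by S1 nudge status and occupation (the data-frame columns, stratum labels, and seed are hypothetical placeholders):

    # Illustrative sketch of the S2 assignment logic; the actual randomization is
    # done by the survey platform (e.g., Qualtrics). Columns, stratum labels, and
    # the seed are hypothetical placeholders.
    import numpy as np
    import pandas as pd

    ARMS = ["mindset", "reskilling", "control_neuro", "control_history"]
    RATIO = [2, 2, 1, 1]  # roughly 67 / 67 / 33 / 33 out of 200 pilot participants

    def assign_stratum(ids, rng):
        """Block-randomize one stratum: deal arms in proportion to RATIO, then shuffle."""
        block = [arm for arm, k in zip(ARMS, RATIO) for _ in range(k)]
        draws = [block[i % len(block)] for i in range(len(ids))]
        rng.shuffle(draws)
        return dict(zip(ids, draws))

    rng = np.random.default_rng(16998)  # placeholder seed
    df = pd.DataFrame({
        "pid": range(200),
        "nudge_s1": np.repeat([0, 1], 100),                 # S1 nudge status
        "occupation": np.tile(["writing", "coding"], 100),  # hypothetical strata
    })

    assignments = {}
    for _, grp in df.groupby(["nudge_s1", "occupation"]):
        assignments.update(assign_stratum(grp["pid"].tolist(), rng))
    df["arm_s2"] = df["pid"].map(assignments)
    print(df.groupby(["nudge_s1", "arm_s2"]).size())  # check balance within strata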
Randomization Unit
The unit of randomization is the individual participant.
In Survey 1 (S1), each participant is independently assigned to either receive the informational nudge or no nudge (control).
In Survey 2 (S2), each participant is independently assigned to one of four conditions: Mindset Treatment, Mindset Control, Reskilling Treatment, or Reskilling Control. Randomization in S2 is stratified by S1 nudge status.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
Sample Size by Treatment Arms

Pilot Phase:
Stage 0 (S0): initial screening, ~300 participants; expected valid participants proceeding to S1, ~200.
Survey 1 (S1): nudge condition, ~100 participants; no-nudge condition, ~100 participants; total ~200 participants.
Survey 2 (S2), four experimental arms: Synergistic Mindset Intervention, ~67; Reskilling Intervention, ~67; Neuroscience Control, ~33; History of AI before the 21st Century Control, ~33; total ~200 participants.
This pilot phase will test the feasibility of the design and assess preliminary treatment effects.

Full-Scale Implementation (Pending Funding):
Stage 0 (S0): initial screening, ~2,250 participants; expected valid participants proceeding to S1, ~1,500.
Survey 1 (S1): nudge condition, ~750 participants; no-nudge condition, ~750 participants; total ~1,500 participants.
Survey 2 (S2), four experimental arms: Synergistic Mindset Intervention, ~500; Reskilling Intervention, ~500; Neuroscience Control, ~250; History of AI before the 21st Century Control, ~250; total ~1,500 participants.
The full-scale study will provide adequate statistical power to detect meaningful treatment effects and test heterogeneous effects across subgroups (e.g., by occupation, education, age, gender, or baseline AI attitudes).

Survey 3 (S3, Pending Funding): Survey 3 will measure outcomes longitudinally to assess the robustness and persistence of intervention effects over time. We do not anticipate differential attrition by treatment group.
Sample size (or number of clusters) by treatment arms
Sample Size by Treatment Arms
Stage 0 (S0):
Initial screening: ~300 participants

Expected number of valid participants proceeding to S1: ~200

Survey 1 (S1):
Nudge condition: ~100 participants
No nudge condition: ~100 participants
Total: ~200 participants

Survey 2 (S2) – Four experimental arms:
Synergistic Mindset Intervention: ~67 participants
Reskilling Intervention: ~67 participants
Neuroscience Control: ~33 participants
History of AI before the 21st Century Control: ~33 participants
Total: ~200 participants

Pending funding, the study will recruit additional participants to increase statistical power, provide more robust estimates of intervention effects, and test heterogeneous treatment effects across subgroups (e.g., by occupation, education, age, gender, or baseline AI attitudes).

Full-Scale Implementation (Pending Funding):
The full-scale study targets up to 500 observations per treatment arm in Survey 2:
Synergistic Mindset Intervention: ~500 participants
Reskilling Intervention: ~500 participants
Neuroscience Control: ~250 participants
History of AI before the 21st Century Control: ~250 participants
Total: ~1,500 participants

Pending funding, Survey 3 (S3) will measure outcomes longitudinally to assess the robustness and persistence of intervention effects over time. We do not anticipate differential attrition by treatment group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations focus on detecting treatment effects in Survey 2 (S2).

Power analysis: We set α = 0.05 (two-sided) and target power of 0.80. Calculations assume equal allocation across arms and focus on pairwise comparisons between each treatment arm and the pooled control group.

Pilot: The pilot includes a target of N = 200 participants in a four-arm design. For a binary outcome, under the conservative assumption that the baseline proportion is 0.5, the minimum detectable effect (MDE) is 28 percentage points. For the main Likert outcomes (6/7-point scale), we report the MDE in standardized units: 0.56 SD.

Main study: The main study includes N = 1,500 participants (500 Mindset Treatment, 500 Reskilling Treatment, 500 pooled control). For a binary outcome (baseline proportion 0.1-0.5), the MDE is 6-9 percentage points; for continuous outcomes (0-100 scale), the MDE is 8 points; and on the 6-point Likert scale outcomes, the MDE is 0.18 SD.
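For reference, these figures can be approximated with the standard two-sample formula MDE = (z_{1-α/2} + z_{power}) × sqrt(σ1²/n1 + σ2²/n2). The Python sketch below applies this formula under the stated assumptions (α = 0.05 two-sided, 80% power, equal allocation); it is a rough cross-check, not necessarily the exact calculation used:

    # Approximate MDEs under alpha = 0.05 (two-sided), 80% power, equal allocation,
    # using MDE = (z_{1-alpha/2} + z_{power}) * sqrt(var1/n1 + var2/n2).
    # A rough cross-check of the registered figures, not the exact calculation used.
    from scipy.stats import norm

    Z = norm.ppf(0.975) + norm.ppf(0.80)  # ~2.80

    def mde_binary(p0, n1, n2):
        """MDE (in proportion units) for a binary outcome with baseline proportion p0."""
        return Z * (p0 * (1 - p0) * (1 / n1 + 1 / n2)) ** 0.5

    def mde_continuous_sd(n1, n2):
        """MDE in standard-deviation units for a continuous outcome."""
        return Z * (1 / n1 + 1 / n2) ** 0.5

    # Pilot: ~200 participants, equal allocation across four arms (~50 per arm)
    print(round(mde_binary(0.5, 50, 50), 2))      # ~0.28, i.e., 28 percentage points
    print(round(mde_continuous_sd(50, 50), 2))    # ~0.56 SD

    # Full scale: 500 per treatment arm vs. ~500 pooled controls
    print(round(mde_binary(0.5, 500, 500), 3))    # ~0.089, i.e., about 9 percentage points
    print(round(mde_binary(0.1, 500, 500), 3))    # ~0.053, i.e., about 5-6 percentage points
    print(round(mde_continuous_sd(500, 500), 2))  # ~0.18 SD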
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh
IRB Approval Date
2025-04-04
IRB Approval Number
STUDY25030116
Analysis Plan

There is information in this trial unavailable to the public.