Generative AI, the Future of Work, and Policy Preferences

Last registered on December 05, 2025

Pre-Trial

Trial Information

General Information

Title
Generative AI, the Future of Work, and Policy Preferences
RCT ID
AEARCTR-0017244
Initial registration date
December 02, 2025

First published
December 05, 2025, 9:32 AM EST

Locations

Location information for this trial is not publicly available.

Primary Investigator

Affiliation
George Washington University

Other Primary Investigator(s)

PI Affiliation
George Washington University

Additional Trial Information

Status
In development
Start date
2025-12-02
End date
2026-07-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project examines how individuals in low- and middle-income countries perceive and respond to the rapid diffusion of generative artificial intelligence across economic, social, and governance domains. While generative AI offers substantial potential productivity gains, uncertainty about its implications for employment, privacy, fairness, and global power dynamics may shape adoption patterns. The uncertainty and risks associated with its adoption also raise concerns about data governance, the unequal global distribution of AI capabilities, and the appropriate role of governments in regulating or promoting AI.
To identify the mechanisms shaping adoption, beliefs, and governance preferences, we conduct a randomized survey experiment with participants across diverse demographic and occupational groups. Respondents are assigned to one of six informational treatments: (1) practical instruction on AI use and prompt engineering; (2) labor-market disruption risks; (3) labor-market opportunities; (4) algorithmic bias; (5) privacy concerns; and (6) geopolitical competition and global inequality in AI capacity. The experiment measures effects on willingness to reskill, willingness to use and invest in AI, peer contact and social effects, and willingness to supply data. It also captures policy preferences, including attitudes toward AI-related redistribution, desired government promotion and regulation of AI, the use of AI in allocation decisions, and preferences over domestic versus international AI development and governance. These outcomes allow us to assess not only behavioral adjustments but also how individuals think governments should respond to the opportunities and risks of AI.
The study aims to provide causal evidence on the belief channels shaping AI adoption, skill adaptation, and support for national and global AI governance. The results inform policy interventions aimed at expanding access to AI’s benefits while addressing emerging socioeconomic, ethical, and geopolitical concerns.
External Link(s)

Registration Citation

Citation
Betai, Neha and Maggie Chen. 2025. "Generative AI, the Future of Work, and Policy Preferences." AEA RCT Registry. December 05. https://doi.org/10.1257/rct.17244-1.0
Experimental Details

Interventions

Intervention(s)
The intervention consists of the randomized provision of short informational modules designed to influence participants’ perceptions of generative artificial intelligence. Each participant is randomly assigned to one of six treatment conditions or a control condition. All interventions are delivered within an online survey; no deception is used.
Each informational module highlights a distinct dimension of AI’s potential impact:
1. AI Skills and Usage Tutorial
2. Labor-Market Disruption
3. Labor-Market Opportunities
4. Algorithmic Bias
5. Privacy Risks
6. Geopolitics and Global Inequality in AI Capacity
7. Control Group

After receiving the assigned module, respondents complete a standardized set of questions measuring willingness to use or invest in AI tools, willingness to reskill, peer contact and social effects, willingness to supply data, trust in AI technologies, preferences for domestic AI regulation and promotion, preferences for AI’s role in public decision-making, and preferences for national vs. international AI governance.

The intervention is minimal-risk and informational in nature.
Intervention Start Date
2025-12-02
Intervention End Date
2026-02-28

Primary Outcomes

Primary Outcomes (end points)
Primary outcomes are grouped into related families.
1. AI Adoption and Investment: self-reported willingness to use generative AI tools; willingness to invest in AI; intention to integrate AI into current or future work tasks.
2. Skill Adaptation and Reskilling: willingness to acquire AI-related or AI-complementary skills; planned time investment in learning or practicing AI-related skills; perceived importance of reskilling in response to AI.
3. Human Contact vs. AI Mediation: openness to AI replacing human interaction in various domains; preferred balance between human and AI involvement in personal and professional interactions.
4. Data Sharing and Data Monetization: willingness to share personal or behavioural data with AI systems; comfort with data being used to train AI models.
5. Domestic AI Policy and Governance Preferences: support for government promotion of AI; support for government regulation of AI; support for using AI in public allocation and decision-making; preferences for AI-related redistribution policies.
6. International AI Governance and Cooperation: preference for national vs. foreign/international AI; support for global cooperation on AI.
Primary Outcomes (explanation)
The primary outcomes will be constructed as indices, combining multiple survey items into standardized measures.
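The registration does not specify the exact aggregation method. One common construction for standardized survey indices, shown here purely as an illustrative sketch, is a Kling-Liebman-style index: z-score each item against the control group's mean and standard deviation, then average across items. All variable names and the equal-weight averaging are assumptions, not details from the registration.

```python
import numpy as np

def standardized_index(items, control_mask):
    """Kling-Liebman-style index (illustrative sketch, not the
    registered specification): standardize each survey item using
    the control group's mean and SD, then take the equal-weight
    average across items for each respondent."""
    items = np.asarray(items, dtype=float)        # shape: (n_respondents, n_items)
    mu = items[control_mask].mean(axis=0)         # control-group item means
    sd = items[control_mask].std(axis=0, ddof=1)  # control-group item SDs
    z = (items - mu) / sd                         # standardize each item
    return z.mean(axis=1)                         # one index value per respondent

# Toy example: 4 respondents, 2 items; the first two are controls.
scores = [[1, 2], [3, 4], [5, 6], [7, 8]]
control = np.array([True, True, False, False])
idx = standardized_index(scores, control)
```

By construction, the index has mean zero within the control group, so treatment-group index values read directly as effects in control-group standard-deviation units.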

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study conducts an online randomized survey experiment to examine how different types of information about artificial intelligence can influence individuals’ beliefs, adoption intentions, and policy preferences. Participants are recruited from two populations: general online workers (e.g., MTurk) and skilled freelancers on a major freelancing platform, with an emphasis on respondents residing in low- and middle-income countries.
After providing baseline demographic information, each participant is randomly assigned to one of six informational treatment conditions that highlight the impact of AI or a control group. Following exposure to the assigned module, all participants complete a standardized set of outcome questions on AI-related adoption intentions, willingness to reskill, attitudes toward data sharing, and preferences for domestic and international AI governance.
Randomization is conducted at the individual level within the survey platform. The total planned sample size is approximately 3,500-4,000 respondents.
Experimental Design Details
Not available
Randomization Method
Randomization is conducted automatically by the survey platform using a computer-generated random number assignment. Each participant is assigned to one of the treatment arms or the control group with equal probability. The process is fully automated and free of any influence from the researchers.
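The equal-probability assignment described above can be sketched as follows. This is an illustration of the design (six treatment arms plus a control, each drawn with probability 1/7); the actual assignment is performed internally by the survey platform, and the arm labels and seed here are assumptions for the example.

```python
import random

# Six informational treatments plus a control, as listed in the design.
ARMS = ["tutorial", "disruption", "opportunities", "bias",
        "privacy", "geopolitics", "control"]

def assign_arm(rng):
    """Individual-level assignment with equal probability (1/7 per arm).
    Sketch only; the survey platform performs the real randomization."""
    return rng.choice(ARMS)

rng = random.Random(42)  # fixed seed for reproducibility of this illustration
assignments = [assign_arm(rng) for _ in range(7000)]
```

With equal probabilities and independent draws, arm sizes concentrate around n/7, which is what the "even distribution across arms" in the sample-size section relies on.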
Randomization Unit
Randomization is conducted at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Approximately 3,500-4,000 individuals (no higher-level clusters; randomization is at the individual level).
Sample size: planned number of observations
Approximately 3,500-4,000 individuals.
Sample size (or number of clusters) by treatment arms
Participants are distributed evenly across the six treatment arms and the control group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
The George Washington University Office of Human Research
IRB Approval Date
2025-11-21
IRB Approval Number
NCR256825