Framing Artificial Intelligence and Labor Displacement: A Survey Experiment on Policy Preferences in Latin America

Last registered on August 25, 2025

Pre-Trial

Trial Information

General Information

Title
Framing Artificial Intelligence and Labor Displacement: A Survey Experiment on Policy Preferences in Latin America
RCT ID
AEARCTR-0016600
Initial registration date
August 21, 2025


First published
August 25, 2025, 8:39 AM EDT


Locations

Region

Primary Investigator

Affiliation
Inter-American Development Bank

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-08-20
End date
2025-09-26
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We conduct a series of survey experiments to identify individuals' public policy preferences regarding AI regulation and the deployment of AI as a security tool.
External Link(s)

Registration Citation

Citation
Scartascini, Carlos. 2025. "Framing Artificial Intelligence and Labor Displacement: A Survey Experiment on Policy Preferences in Latin America." AEA RCT Registry. August 25. https://doi.org/10.1257/rct.16600-1.0
Experimental Details

Interventions

Intervention(s)
Survey experiments embedded in an online survey conducted by the Latin American Public Opinion Project (LAPOP). The module on artificial intelligence (AI) was embedded in the national questionnaires as a randomized survey experiment, with respondents assigned to different framing conditions.
Intervention (Hidden)
Intervention Start Date
2025-08-20
Intervention End Date
2025-09-26

Primary Outcomes

Primary Outcomes (end points)
1. Policy preferences for labor regulation
2. Policy preferences for the deployment of AI tools for the detection of crime
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment investigates how different framings of the impact of artificial intelligence (AI) on employment affect individual preferences over public policy responses. Building on prior work on automation and technological anxiety, we assess whether respondents’ support for government intervention varies when the disruptive potential of AI is presented with different narratives.

All respondents are first introduced to a neutral explanation of AI, described as a technology capable of learning and making decisions independently. Two follow-up questions measure baseline perceptions: whether AI may replace their job and which social group they believe would be most affected by AI-induced unemployment. Following this, respondents are randomly assigned to one of four treatment arms with equal probability (1/4). Each treatment group is exposed to a brief vignette that frames the impact of AI on employment in one of three ways—an alarmist tone emphasizing mass job loss, a delayed scenario minimizing urgency, or a reallocation narrative stressing transition support (see questions in Appendix A.1). These framings were drawn from real-world headlines and media discourse.

The fourth group serves as a control: these respondents receive no framing and proceed directly to the outcome questions. The Alarmist group reads a headline, citing reports from international consulting firms, warning that up to 60% of current jobs could be lost to automation; this framing emphasizes the immediacy and severity of AI-related job displacement. In contrast, the Delayed Threat group reads a headline that acknowledges the potential for technological disruption but emphasizes that such effects are unlikely to materialize in the short term, projecting instead a gradual transformation over the next decade with limited automation of existing tasks. Finally, the Job Reallocation group is presented with a headline arguing that AI might eliminate “pointless” or low-value jobs, suggesting that the technology could reallocate labor away from routine tasks and toward more meaningful work without necessarily reducing the overall demand for human labor.

All respondents were then asked the same key outcome question: if AI reduces the amount of human labor needed, what public policy should the government pursue? The response options included (1) imposing regulations requiring firms to retain workers, (2) raising taxes on those who remain employed in order to subsidize the unemployed, (3) not intervening in the labor market, or (4) expressing no opinion. This outcome captures respondents’ policy preferences regarding redistribution and regulation in the face of technological change.

By randomly assigning informational framings and holding the outcome measure constant, the design permits identification of causal effects of different narratives on policy attitudes. Moreover, the design allows us to test for heterogeneous effects by education, occupation, income, and baseline perceptions of job risk.
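Because assignment is randomized, the intent-to-treat effect of each framing can be estimated as a simple difference in means against the control arm. The following is a minimal sketch on simulated data; the baseline support rate and effect sizes are purely illustrative assumptions, not registered estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4147  # pooled sample across the four countries

# 0 = control, 1 = alarmist, 2 = delayed threat, 3 = job reallocation
arm = rng.integers(0, 4, size=n)

# Hypothetical binary outcome: support for requiring firms to retain workers.
baseline = 0.40                               # illustrative control-group rate
effect = np.array([0.00, 0.10, -0.05, 0.02])  # illustrative framing effects
y = rng.random(n) < baseline + effect[arm]

# Intent-to-treat estimate: difference in means, each framing arm vs. control.
control_mean = y[arm == 0].mean()
itt = {a: y[arm == a].mean() - control_mean for a in (1, 2, 3)}
```

In practice the registered analysis would also compute standard errors and the heterogeneity splits described above; this sketch only shows the core comparison the randomization licenses.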
Experimental Design Details
Randomization Method
Randomization done by LAPOP (computer)

Control (p ∼ 1/4): Respondents receive only the baseline definition of AI.
Alarmist Framing (p ∼ 1/4): Respondents read a headline citing estimates from PwC, McKinsey, and the World Economic Forum warning that up to 60% of current jobs could be eliminated by AI-driven automation.
Delayed Threat (p ∼ 1/4): Respondents read a headline from Goldman Sachs emphasizing that AI-driven changes will be gradual, with only 5% of tasks being automatable and effects not expected for at least a decade.
Job Reallocation (p ∼ 1/4): Respondents read a headline suggesting that AI will eliminate “meaningless” jobs, reallocating labor away from routine tasks and toward more meaningful work.
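The individual-level assignment above can be sketched as follows. This is an illustration only (the arm labels, function, and seed are hypothetical); the actual randomization was performed by LAPOP's survey software:

```python
import random

ARMS = ["control", "alarmist", "delayed_threat", "job_reallocation"]

def assign_arm(rng: random.Random) -> str:
    """Assign one respondent to a treatment arm with equal probability 1/4."""
    return rng.choice(ARMS)

rng = random.Random(2025)  # illustrative seed
assignments = [assign_arm(rng) for _ in range(4147)]
```

With independent draws per respondent, each arm receives roughly one quarter of the sample in expectation.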
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The final analytic sample includes 4,147 respondents: 1,006 in Argentina, 1,021 in Brazil, 1,016 in Colombia, and 1,104 in Mexico.
Sample size: planned number of observations
The final analytic sample includes 4,147 respondents: 1,006 in Argentina, 1,021 in Brazil, 1,016 in Colombia, and 1,104 in Mexico
Sample size (or number of clusters) by treatment arms
1. Experiment 1: 1/4 in each arm from each country
2. Experiment 2: 1/2 in each arm from each country
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See attached document
IRB

Institutional Review Boards (IRBs)

IRB Name
Vanderbilt University Institutional Review Board
IRB Approval Date
2025-08-21
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials