Automation and Public Policy Preferences in the US

Last registered on August 03, 2022


Trial Information

General Information

Automation and Public Policy Preferences in the US
Initial registration date
August 01, 2022


First published
August 03, 2022, 3:05 PM EDT




Primary Investigator


Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Automation has had significant consequences for the labor market in the US. For instance, Acemoglu and Restrepo (2021) estimate that automation accounts for 50 to 70% of the rise in US wage inequality in recent decades. Yet robot displacement does not appear to trigger a backlash against robots, and it generates only limited demand for redistribution or government intervention to address its labor market implications (Jeffrey and Matakos, 2021; Zhang, 2019). Where adverse reactions are identified, they take the form of rising anti-trade and anti-immigration sentiment (Anelli et al., 2021; Frey et al., 2018).
I test these findings in a survey experiment on a representative sample of the US population. In particular, I seek to understand why there has not been more of a push for public policies that directly address the labor market consequences of automation in the US. I investigate whether exposing respondents to the story of an automation shock changes their support for a set of social, nationalistic, and regulatory policies. I then ask who they think should be responsible for the displaced workers. I also focus on mechanisms, exploring whether the experience of economic hardship, trust in government, views on government efficiency, and views on fairness help explain reactions to displacement by automation.
Finally, I zoom in on the role played by government effectiveness. Previous literature highlights how low levels of trust in government partially explain muted support for redistribution in some contexts (Alesina et al., 2018; Kuziemko et al., 2015). I add a treatment arm that anchors views of government effectiveness at a high level, and I study whether it leads to a different reaction to automation.
External Link(s)

Registration Citation

Ladreit, Colombe. 2022. "Automation and Public Policy Preferences in the US." AEA RCT Registry. August 03.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


My intervention consists of a survey experiment.

Respondents are randomly assigned to one of three treatment arms.

- Treatment 1: Automation shock
Treatment 1 assigns respondents a short newspaper-style article to read. The article describes a food manufacturer undergoing restructuring that has decided to lay off a significant share of its workforce. The article specifies that the company will "automate part of its production with machines," making it clear that robot adoption is causing the job displacement.

- Treatment 2: Government effectiveness + Automation shock
Treatment 2 begins by anchoring beliefs about the level of government effectiveness in the US. Respondents read a short text about a government effectiveness ranking that the World Bank conducts every year. The text describes the different items taken into account to build the ranking. Respondents are then asked to estimate how well the US ranked compared to the rest of the world in 2020, the last year of available data. They choose among four options: among the best, better than most, worse than most, among the worst. They are then shown the correct answer: the US ranked among the most effective governments in the world in 2020.
After this "government effectiveness" treatment, respondents are shown the same newspaper-style automation article as in Treatment 1.

- Control
The control group is what Haaland et al. (forthcoming) describe as an active control group. Respondents are shown a newspaper article similar to Treatment 1's, except that it does not specify the cause of the layoff. Respondents are thus exposed to a negative labor market shock but do not know its cause.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
My outcome variables are self-reported policy preferences.
I measure people's preferences on a five-point Likert scale for six policies:
1. Providing re-training programs
2. Providing financial assistance to workers who lose their jobs
3. Raising tariffs on foreign goods
4. Restricting immigration to the US
5. Increasing taxes for firms
6. Regulating the ways in which firms operate
I group policies 1 and 2 as social policies, 3 and 4 as nationalistic ones, and 5 and 6 as regulatory ones.

I also measure people's views on who should be responsible for helping displaced workers. I follow the OECD's Risks that Matter survey (2021) and measure it on a four-point Likert scale for the following entities:
1. Firms, businesses, and employers
2. Civil society groups, such as professional associations, non-profit organizations, and charitable organizations
3. The national government
4. Individual workers themselves

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
I start the survey by asking respondents to complete a questionnaire on demographics, including gender, age, marital status, number of children, ethnicity/race, whether they were born in the US, geography, education, employment, occupation, and income. I also ask participants if they have ever been unemployed before.

I then elicit participants' pre-treatment beliefs regarding trust in government, interpersonal trust, and views on government efficiency and fairness. After eliciting these prior beliefs, I ask them about their political affiliation.

Respondents are then randomly assigned to one of three treatment arms, according to what is described above (see Intervention).

My outcome variables are self-reported policy preferences, as described above (see Primary Outcomes).

Detailed information about the experimental design is available in the pre-analysis plan attached to this application.
Experimental Design Details
Randomization Method
Randomization to treatment arms is done via Qualtrics.
Randomization Unit
Randomization is at the individual level: each survey participant is assigned to one of the three treatment arms.
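The assignment logic can be sketched as follows. This is a minimal illustration of equal-probability, individual-level assignment to three arms; in the study itself the randomization is handled by Qualtrics' built-in randomizer, and the arm labels below are illustrative, not taken from the registration.

```python
import random

# Illustrative labels for the three arms described in the Intervention section
ARMS = ["control", "automation_shock", "gov_effectiveness_plus_shock"]

def assign_arm(rng):
    """Assign a single respondent to one of the three arms
    with equal probability."""
    return rng.choice(ARMS)

# Simulated assignment for the planned sample of 3,000 respondents
rng = random.Random(42)  # fixed seed for reproducibility of the sketch
sample = [assign_arm(rng) for _ in range(3000)]
```

With simple (non-blocked) assignment like this, arm sizes are only equal in expectation; a platform randomizer can instead be set to present elements evenly, which yields exactly balanced arms.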
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
3,000 U.S. residents
Sample size: planned number of observations
3,000 U.S. residents
Sample size (or number of clusters) by treatment arms
1,000 individuals per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With 3,000 respondents, I will be able to detect an effect size of 0.13 of a standard deviation between the control group and each treatment arm, with power of 0.8 at the 5% significance level.
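The stated MDE can be reproduced with the standard two-sample formula in standard-deviation units, (z_{1-α/2} + z_{power}) · √(1/n₁ + 1/n₂). The sketch below assumes equal arm sizes of 1,000 and a two-sided test; the function name and structure are illustrative, not from the registration.

```python
from statistics import NormalDist

def mde(n1, n2, alpha=0.05, power=0.8):
    """Minimum detectable effect in standard-deviation units for a
    two-sided, two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = z.inv_cdf(power)          # quantile corresponding to target power
    return (z_alpha + z_power) * (1 / n1 + 1 / n2) ** 0.5

# 1,000 control respondents vs. 1,000 in one treatment arm
print(round(mde(1000, 1000), 3))  # ≈ 0.125, i.e. the 0.13 SD reported above
```

Rounding up slightly, this matches the 0.13 SD figure in the registration.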

Institutional Review Boards (IRBs)

IRB Name
Bocconi Ethics Committee Review
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials