Justifying Privacy Intrusions through AI Policy Trade-offs: Evidence from a Conjoint Experiment

Last registered on March 10, 2026

Pre-Trial

Trial Information

General Information

Title
Justifying Privacy Intrusions through AI Policy Trade-offs: Evidence from a Conjoint Experiment
RCT ID
AEARCTR-0018064
Initial registration date
March 09, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 10, 2026, 10:36 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Peking University

Other Primary Investigator(s)

PI Affiliation
Tsinghua University

Additional Trial Information

Status
In development
Start date
2026-03-09
End date
2026-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how exposure to policy trade-offs affects individuals’ value priorities regarding artificial intelligence (AI) governance. Many public policies involving AI require balancing multiple normative objectives, such as efficiency, safety, and privacy protection. While prior studies often measure stated preferences over these values, less is known about whether exposure to explicit policy trade-offs can change individuals’ value prioritization.
We conduct a survey experiment focusing on AI-assisted traffic management systems. Respondents first report the importance they attach to three policy objectives: traffic efficiency, traffic safety, and personal data privacy. Participants are then randomly assigned to either a control group or a treatment group. The treatment group completes a conjoint choice experiment in which respondents repeatedly choose between alternative AI traffic management systems that vary in efficiency improvements, safety outcomes, and data collection practices. The control group reads a neutral description of AI traffic management without exposure to policy trade-offs.
After the intervention, respondents again report the importance they assign to the same three policy objectives. This design allows us to examine whether exposure to policy trade-offs influences individuals’ value priorities. The study contributes to research on public attitudes toward AI governance and provides evidence on how policy trade-offs shape normative preferences.
External Link(s)

Registration Citation

Citation
An, Yunyi and Xufeng Zhu. 2026. "Justifying Privacy Intrusions through AI Policy Trade-offs: Evidence from a Conjoint Experiment." AEA RCT Registry. March 10. https://doi.org/10.1257/rct.18064-1.0
Experimental Details

Interventions

Intervention(s)
The intervention consists of exposure to policy trade-offs through a conjoint choice experiment.
In the conjoint task, respondents evaluate alternative AI-assisted traffic management systems that differ along three attributes: traffic efficiency, traffic safety, and data collection practices (privacy).
Each respondent in the treatment group completes two forced-choice tasks in which two system profiles are presented simultaneously. Respondents are asked to select the system they prefer.
The control group does not receive this conjoint task and instead reads a neutral description of AI traffic management systems without explicit discussion of trade-offs.
Intervention Start Date
2026-03-10
Intervention End Date
2026-12-31

Primary Outcomes

Primary Outcomes (end points)
policy choice, policy support, value importance change
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study uses a randomized survey experiment with a pre–post measurement design. Participants first complete a baseline survey measuring the importance they assign to three policy objectives related to transportation policy: traffic efficiency, traffic safety, and personal data privacy. Each objective is rated on a 1–9 scale indicating perceived importance.
After the baseline measurement, participants are randomly assigned to one of two groups:
Control group: Participants read a neutral description of AI-based traffic management systems and proceed directly to the post-treatment value measurement.
Treatment group: Participants complete a conjoint choice experiment in which they evaluate alternative AI traffic management system designs. Each system profile is described along three attributes: traffic efficiency improvement, traffic safety outcomes, and data collection practices.
Each attribute has two levels, creating a 2 × 2 × 2 factorial design. Profiles are randomly paired into two forced-choice tasks; in each task, respondents select the system they prefer.
After completing the conjoint tasks, respondents again rate the importance of the same three policy objectives (efficiency, safety, privacy).
The design allows us to measure both policy preferences in the conjoint choices and potential changes in value priorities following exposure to explicit policy trade-offs.
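The profile-generation procedure described above (a full 2 × 2 × 2 factorial of attribute levels, randomly paired into two forced-choice tasks per respondent) can be sketched as follows. This is an illustrative sketch only: the registry names the three attributes but not the exact level wordings, so the level labels here are assumptions.

```python
import itertools
import random

# Attribute names follow the registry; the two levels per attribute are
# hypothetical placeholders, since the registry does not publish them.
ATTRIBUTES = {
    "efficiency": ["moderate improvement", "large improvement"],
    "safety": ["fewer accidents", "far fewer accidents"],
    "data_collection": ["anonymized data only", "identifiable personal data"],
}

# Full 2 x 2 x 2 factorial: 8 distinct system profiles.
profiles = [
    dict(zip(ATTRIBUTES, levels))
    for levels in itertools.product(*ATTRIBUTES.values())
]

def draw_tasks(rng, n_tasks=2):
    """Randomly pair two distinct profiles per forced-choice task."""
    return [rng.sample(profiles, 2) for _ in range(n_tasks)]

# Example draw for one treatment-group respondent (seeded for reproducibility).
tasks = draw_tasks(random.Random(0))
```

Each respondent's two tasks then present the paired profiles side by side, and the chosen profile in each pair is the conjoint outcome.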
Experimental Design Details
Not available
Randomization Method
Randomized by the survey platform
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
2000 individuals
Sample size: planned number of observations
2000 individuals
Sample size (or number of clusters) by treatment arms
200 individuals in control; 1,800 in treatment, split across 56 conjoint scenarios
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.