Obfuscatory language, information acquisition, and opinions: a 2×2 online experiment

Last registered on November 26, 2025

Pre-Trial

Trial Information

General Information

Title
Obfuscatory language, information acquisition, and opinions: a 2×2 online experiment
RCT ID
AEARCTR-0017269
Initial registration date
November 24, 2025

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 26, 2025, 7:03 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
UFRJ

Other Primary Investigator(s)

PI Affiliation
UERJ

Additional Trial Information

Status
In development
Start date
2025-12-08
End date
2025-12-12
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We run a pre-registered online experiment to study how obfuscatory language affects (i) participants’ decision to acquire information and (ii) downstream opinions. Participants read a brief news-style description of a fatal traffic-stop incident and then choose whether to view a short facts summary before answering opinion questions. We independently randomize (1) the syntax of the initial description—Active voice (clear agent) vs Intransitive (more obfuscatory)—and (2) a content-neutral salience cue on the “See facts” option (Z=0: “See facts”; Z=1: “See facts (~10–15 seconds)”). Z changes text only (no content, order, size, or placement).

Primary outcomes are: (1) information acquisition (clicked “See facts”, 0/1) and (2) an Opinion Index (mean of z-scores of participants' opinions about moral responsibility and legal penalties for the perpetrator). We estimate the impact of language on acquisition and opinions. To identify the causal effect of acquisition on opinions, we use 2SLS with the salience cue as an instrument. We then decompose the total language effect on opinions into the component mediated by acquisition. Secondary outcomes include recall and perceived agency clarity (mechanism checks), as well as responses to additional opinion questions.
External Link(s)

Registration Citation

Citation
Hemsley, Pedro and Lynda Pavão. 2025. "Obfuscatory language, information acquisition, and opinions: a 2×2 online experiment." AEA RCT Registry. November 26. https://doi.org/10.1257/rct.17269-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We run an online, individual-level randomized experiment with a 2×2 design. Factor 1 is Language: participants read a brief news-style description written in either Active voice (clear agent; control) or Intransitive syntax (more obfuscatory; treatment). Factor 2 is a salience cue (Z) applied to the “See facts” option that allows respondents to view a short facts summary before answering opinion questions. In Z=1, the label reads “See facts (~10–15 seconds)”; in Z=0, it reads “See facts.” Z changes text only (not option order/size/placement or the facts content). We measure (i) whether the respondent clicks to see facts (information acquisition) and (ii) opinions about the presented news, measured on a Likert scale.
Intervention (Hidden)
Language arm (50/50): After consent and demographics, respondents view a news-style vignette about a fatal traffic stop. In the Active arm, the opening lines and facts panel use active voice (e.g., “A police officer shot and killed Jordan Reyes, 29…”). In the Intransitive arm, the text uses intransitive/nominalized phrasing (e.g., “Jordan Reyes, 29, died in an officer-involved shooting…”). Within each language arm, the facts panel content is identical across Z; only syntax differs by arm.

Salience Z (50/50, independent): Immediately after the vignette, respondents choose between two options presented with identical order/size/placement: (i) “See facts” and (ii) “Continue to next section.” In Z=1, the first option includes the parenthetical “(~10–15 seconds)” to describe expected reading time; in Z=0, it does not. Clicking “See facts” opens the panel (brief bullet points; typical read time ~10–15s); otherwise the respondent proceeds directly to opinion questions.

Randomization unit: individual; 2×2 design with independent randomization.

Note: Z is content-neutral (text-only); it does not alter content, order, size, placement, or page flow aside from labeling the “See facts” option.

Ethics: Exempt determination pending from an independent U.S. social/behavioral IRB. Data collection will begin only after approval/exemption.
Intervention Start Date
2025-12-08
Intervention End Date
2025-12-12

Primary Outcomes

Primary Outcomes (end points)
Primary outcomes: (a) Acquire = 1 if “See facts” was clicked, 0 otherwise; (b) Opinion Index = mean of the z-scores of the moral-responsibility and legal-penalty items (higher = more punitive), reported alongside the average response to each item.
Primary Outcomes (explanation)
The opinion questions are the following:

Q1. Moral responsibility of the shooter (7-point Likert): 1 = Not at all responsible … 7 = Completely responsible.
Q2. Should the shooter face legal penalties (e.g., criminal charges)? (7-point Likert): 1 = Strongly oppose … 7 = Strongly support.
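The Opinion Index construction (within-sample z-scores of Q1 and Q2, averaged per respondent) can be sketched as follows. This is an illustration only, with hypothetical names; the registered analysis code may differ:

```python
from statistics import mean, pstdev

def opinion_index(q1_all, q2_all):
    """Opinion Index per respondent: mean of within-sample z-scores of
    Q1 (moral responsibility) and Q2 (legal penalties).
    Higher values = more punitive.

    q1_all, q2_all: lists of 1-7 Likert responses, aligned by respondent.
    """
    def zscores(xs):
        m, s = mean(xs), pstdev(xs)  # standardize within the sample
        return [(x - m) / s for x in xs]

    z1, z2 = zscores(q1_all), zscores(q2_all)
    return [(a + b) / 2 for a, b in zip(z1, z2)]
```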

Secondary Outcomes

Secondary Outcomes (end points)
Secondary/mechanism outcomes: Jail recommendation (binary), recall items, perceived agency clarity, polarization.
Secondary Outcomes (explanation)
Questions are the following (excluding verification questions):

Q3. “How clear was it who caused the death in the description you read? 1=Not at all clear … 7=Completely clear.”
Q4. Jail recommendation (binary): 0 = Do not recommend jail; 1 = Recommend jail.
Q6. According to the description, what happened during the traffic stop? (multiple choice; one correct).
A. During a late-evening traffic stop in Midtown, a bystander was killed while Jordan Reyes was uninjured.
B. During a late-evening traffic stop in Midtown, Jordan Reyes, 29, was taken to a hospital and later died.
C. During a late-evening traffic stop in Midtown, Jordan Reyes was treated at the scene and released.
D. During an afternoon traffic stop in Midtown, Jordan Reyes suffered minor injuries but survived.
E. During a late-evening traffic stop in Midtown, property damage occurred but there were no injuries.
F. Not sure.
Q7. Was any body-camera footage mentioned?
A. Yes
B. No
C. Not sure

Polarization, for a given participant's response to Q1 or Q2, is measured as the absolute distance between that response and the sample average. Variants use the distance to the median of responses, and to the median of the Likert scale.
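The polarization measure and its variants can be sketched as below (function name hypothetical; for a 1-7 Likert item, the scale median is its midpoint, 4):

```python
from statistics import mean, median

def polarization(responses, reference="mean"):
    """Per-response polarization: absolute distance of each Likert
    response from a reference point.

    reference: "mean" (sample average), "median" (sample median),
    or "scale_median" (median/midpoint of the 1-7 scale, i.e. 4).
    """
    if reference == "mean":
        ref = mean(responses)
    elif reference == "median":
        ref = median(responses)
    else:  # "scale_median"
        ref = 4
    return [abs(r - ref) for r in responses]
```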

Experimental Design

Experimental Design
Individual-level randomized online experiment with a 2×2 design. Factor 1 (Language): Active voice (control) vs Intransitive (treatment). Factor 2 (Salience Z): text-only cue on the “See facts” option (Z=1 “See facts (~10–15 seconds)” vs Z=0 “See facts”). After reading the vignette, respondents choose whether to view a brief facts summary before answering opinion questions.
Experimental Design Details
Flow. Consent → demographics → vignette (Language assigned) → choice page (Z assigned) → optional facts panel (if clicked) → outcomes.
Language (50/50). Opening lines and facts panel use Active vs Intransitive syntax; content is otherwise parallel across arms.
Salience Z (50/50, independent). Choice page shows two options with identical order/size/placement: “See facts” and “Continue to next section.” In Z=1, the “See facts” label includes “(~10–15 seconds)”; in Z=0, it does not. Z changes text only; it does not alter content, order, size, placement, or page flow.
Randomization Method
Randomization is implemented at the individual level using SurveyMonkey’s A/B testing / random assignment features. Language (Active vs Intransitive) is randomized 50/50, and Salience Z (0 vs 1) is randomized 50/50 independently, producing four cells with equal expected proportions.
Randomization Unit
Individual respondent. No clustering; no stratification.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
None — individual-level randomization (no clustering)
Sample size: planned number of observations
Target sample: N = 2500 participants (online sample)
Sample size (or number of clusters) by treatment arms
2×2 design, equal allocation (expected):
• Active × Z=0: n ≈ 625
• Active × Z=1: n ≈ 625
• Intransitive × Z=0: n ≈ 625
• Intransitive × Z=1: n ≈ 625
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials