Polarization and Belief Entrenchment Through Memory

Last registered on April 24, 2026

Pre-Trial

Trial Information

General Information

Title
Polarization and Belief Entrenchment Through Memory
RCT ID
AEARCTR-0018416
Initial registration date
April 20, 2026

Initial registration date is when the registration was submitted to the Registry to be reviewed for publication.

First published
April 24, 2026, 8:51 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Birmingham

Other Primary Investigator(s)

PI Affiliation
University of Macau
PI Affiliation
Purdue University

Additional Trial Information

Status
In development
Start date
2026-04-21
End date
2027-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how memory processes, specifically rehearsal and cue-triggered retrieval, shape belief formation about a salient U.S. policy. Participants complete a two-part online survey conducted approximately 24 hours apart. In Part 1, they read and evaluate a set of opinion headlines about the policy. Across randomly assigned conditions, participants either (a) complete a brief writing task that directs rehearsal of previously viewed headlines, or (b) are exposed to a brief visual cue at the beginning of Part 2. Part 2 elicits recall of the Part 1 content and participants' current views on the policy. The design holds the information environment constant across conditions and varies only the retrieval environment, allowing the study to isolate the role of memory in belief formation. The study contributes to research on belief formation and political disagreement.
External Link(s)

Registration Citation

Citation
Kuang, Pei, Li Tang and Michael Weber. 2026. "Polarization and Belief Entrenchment Through Memory." AEA RCT Registry. April 24. https://doi.org/10.1257/rct.18416-1.0
Experimental Details

Interventions

Intervention(s)
The study implements two sets of interventions across separate experiments:

(1) Rehearsal. Between the two survey waves, participants are randomly assigned to one of three short writing tasks. Two direct retrieval toward different subsets of the Part 1 content; the third is unrelated to the policy (neutral control).

(2) Cue. At the start of Part 2, participants are randomly assigned to one of three conditions. Two display a brief visual stimulus linked to the Part 1 content; the third proceeds without any additional display (no-cue control).

The information environment is identical across conditions; only the retrieval environment is manipulated.
Intervention Start Date
2026-04-22
Intervention End Date
2027-09-30

Primary Outcomes

Primary Outcomes (end points)
Primary outcomes are measured at Part 2, 24 hours after Part 1:

(1) Recall. Self-reported recall of Part 1 content.

(2) Policy sentiment. Self-reported sentiment toward the policy.

(3) Policy support. Self-reported support for the policy.

(4) Posterior uncertainty. Self-reported uncertainty about the Part 1 information.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
(1) Recognition performance. Accuracy on a Part 2 recognition test.

(2) Encoding accuracy. Accuracy on the Part 1 per-headline classification task.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study uses a two-part online experimental design implemented across two separate experiments. In Part 1, participants view a balanced set of 20 opinion-article headlines about a salient U.S. policy, with each headline accompanied by a shape indicator. Across all conditions, participants observe the same 20 headlines in randomized order. After each headline, participants classify it according to its stance on the policy, which produces an individual-level record of encoding accuracy. Participants also report an overall emotional reaction to the set of headlines and complete an instructed-response attention check. The information environment is held constant across all conditions within each experiment.

In Experiment 1 (Rehearsal), participants are randomly assigned to one of three groups that differ in a short writing task administered at the end of Part 1. In one group, participants are asked to write about as many headlines as they can remember from Part 1 that are most consistent with their own view on the policy. In a second group, participants are asked to write about headlines that are most inconsistent with their own view. A third, neutral-control group is asked to write about an unrelated topic. All three groups face similar cognitive effort, and writing-task responses are quality-incentivized.

In Experiment 2 (Cue), there is no treatment in Part 1. Random assignment occurs at the start of Part 2, immediately before elicitation. Two groups see a brief visual display associated with the Part 1 shape indicators; a third group (no-cue control) proceeds directly to elicitation. The cue contains no new information about the policy itself.

Part 2 of both experiments is conducted approximately 24 hours after Part 1. Participants are recontacted, receive a brief neutral reminder that the Part 1 headlines are relevant for the follow-up, and complete a common battery of questions eliciting recall of the Part 1 content, policy sentiment, policy support, and subjective uncertainty about the Part 1 information environment. Participants also complete a recognition test consisting of the Part 1 headlines presented alongside additional decoys. Accuracy bonuses are attached to the recall and recognition tasks.

The experimental design allows for comparisons of recall, beliefs, and uncertainty across participants exposed to the same information environment but assigned to different retrieval conditions, as well as comparisons within self-reported political-identity groups measured at the end of Part 1.
Experimental Design Details
Not available
Randomization Method
Done within Qualtrics using JavaScript.
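As a minimal sketch of what individual-level random assignment to the three arms might look like: the arm labels, function name, and the embedded-data field mentioned in the comments are illustrative assumptions, not taken from the registration.

```javascript
// Illustrative sketch (not the study's actual code): uniform random
// assignment of one participant to one of three treatment arms, as
// might run inside a Qualtrics JavaScript question. Arm labels are
// hypothetical placeholders.
const ARMS = ["consistent-rehearsal", "inconsistent-rehearsal", "neutral-control"];

function assignArm(rng = Math.random) {
  // Uniform draw over the three arms; the randomization unit is the
  // individual participant, with no clustering.
  const index = Math.floor(rng() * ARMS.length);
  return ARMS[index];
}

// In Qualtrics, the drawn arm would typically be stored as embedded
// data for later branching, e.g. (hypothetical field name "arm"):
// Qualtrics.SurveyEngine.setEmbeddedData("arm", assignArm());
console.log(assignArm());
```

In practice, registry-grade implementations often prefer a pre-generated balanced assignment list over a per-session uniform draw, since the latter only balances arm sizes in expectation.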
Randomization Unit
Individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
Around 4,200 individuals (two experiments, three arms each, roughly 700 per arm).
Sample size (or number of clusters) by treatment arms
Around 700 per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
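The registration leaves this field blank. As a purely illustrative calculation, under assumed parameters not stated in the registration (two-sided α = 0.05, 80% power, an outcome standardized to SD = 1, and a simple two-arm comparison with no clustering), roughly 700 participants per arm implies a minimum detectable effect of about 0.15 standard deviations:

```javascript
// Illustrative MDE for a two-sample mean comparison; every parameter
// below is an assumption for demonstration, not a value from the trial.
const zAlpha = 1.96;  // critical value for two-sided alpha = 0.05
const zPower = 0.84;  // critical value for 80% power
const nPerArm = 700;  // approximate planned size per arm
const sigma = 1;      // outcome standardized to SD = 1

// Standard formula: MDE = (z_{1-alpha/2} + z_{power}) * sqrt(2 * sigma^2 / n)
const mde = (zAlpha + zPower) * Math.sqrt((2 * sigma ** 2) / nPerArm);
console.log(mde.toFixed(3)); // ≈ 0.150 standard deviations
```

The actual MDE would depend on the study's real outcome variances and any covariate adjustment, neither of which is reported here.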
IRB

Institutional Review Boards (IRBs)

IRB Name
University Research School Ethics Committee
IRB Approval Date
2025-10-24
IRB Approval Number
N/A