Attentional Persuasion

Last registered on December 16, 2022

Pre-Trial

Trial Information

General Information

Title
Attentional Persuasion
RCT ID
AEARCTR-0010434
Initial registration date
December 16, 2022

First published
December 16, 2022, 4:30 PM EST

Locations

Region

Primary Investigator
John Conlon

Affiliation
Harvard University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-12-19
End date
2022-12-23
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
I use an online experiment to study how people make complex decisions and how information can both change beliefs and distort attention.
External Link(s)

Registration Citation

Citation
Conlon, John. 2022. "Attentional Persuasion." AEA RCT Registry. December 16. https://doi.org/10.1257/rct.10434-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-12-19
Intervention End Date
2022-12-23

Primary Outcomes

Primary Outcomes (end points)
The primary outcome is how much attention participants pay to each dimension of the problem, depending on whether they have been, or are currently being, informed about it.
Primary Outcomes (explanation)
To measure attention to each attribute, I will estimate a simple linear equation regressing choice (Option A or Option B) on the values of the attributes. The rational, full-attention benchmark is that all attributes receive equal weight.
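As a concrete illustration of this estimation, the sketch below fits a linear probability model of the binary choice on the six attribute values. It is not the registered analysis code: the data are simulated placeholders and the variable names (chose_b, dimes, box1, etc.) are my own assumptions. Under full attention, the estimated coefficients should be roughly equal, since a dollar of value is a dollar regardless of which component carries it.

```python
# Minimal sketch (not the registered analysis): linear probability model of
# choice on attribute values, with simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

n = 1000  # hypothetical participant-decision observations
df = pd.DataFrame({
    "chose_b": np.random.binomial(1, 0.5, size=n),        # 1 if Option B chosen
    "dimes":   np.random.randint(0, 10, size=n) * 0.10,   # dollar value of each attribute
    "nickels": np.random.randint(0, 10, size=n) * 0.05,
    "pennies": np.random.randint(0, 10, size=n) * 0.01,
    "box1":    np.random.choice([0.10, 0.25, 0.50], size=n),
    "box2":    np.random.choice([0.10, 0.25, 0.50], size=n),
    "box3":    np.random.choice([0.10, 0.25, 0.50], size=n),
})

attributes = ["dimes", "nickels", "pennies", "box1", "box2", "box3"]
X = sm.add_constant(df[attributes])
model = sm.OLS(df["chose_b"], X).fit(cov_type="HC1")

# Full-attention benchmark: equal weights across all six attributes.
print(model.params[attributes])
```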

Secondary Outcomes

Secondary Outcomes (end points)
We will also look at whether participants chose the higher-value option.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants repeatedly choose whether to give up money for a complex "good" whose total value is difficult to assess. Information provided during the experiment can either change their beliefs about the value of some of these attributes or simply draw more attention to them.
Experimental Design Details
Participants repeatedly choose between two options. Option A includes a listed amount of money (provided via a bonus payment) plus the opportunity to earn more in a risky lottery. Option B includes six components: three different types of coins (dimes, nickels, and pennies) and three colored boxes, each of which adds a certain amount of money depending on its color (participants are informed of the color-to-value mapping ahead of time). The values of all of these attributes, except the lottery, change from decision to decision. Some participants are then reminded of the value of randomly selected attributes for a subset of decisions.
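For concreteness, here is a minimal sketch of how a single decision with this structure could be generated. It is not the survey code: the coin counts, box colors, dollar amounts, and the range of Option A's bonus are illustrative assumptions, and the lottery attached to Option A is not modeled.

```python
# Illustrative sketch of one decision: Option A is cash (plus an unmodeled lottery),
# Option B is the sum of six separately displayed components.
import random

BOX_VALUES = {"red": 0.10, "blue": 0.25, "green": 0.50}  # assumed color-to-value map

def draw_decision():
    coins = {
        "dimes":   random.randint(0, 9) * 0.10,
        "nickels": random.randint(0, 9) * 0.05,
        "pennies": random.randint(0, 9) * 0.01,
    }
    boxes = {f"box_{i}": BOX_VALUES[random.choice(list(BOX_VALUES))] for i in range(3)}
    option_b_value = sum(coins.values()) + sum(boxes.values())
    option_a_cash = round(random.uniform(0.25, 1.50), 2)  # assumed to vary by decision
    return {"coins": coins, "boxes": boxes,
            "option_a_cash": option_a_cash, "option_b_value": option_b_value}

print(draw_decision())
```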
Randomization Method
Randomization is done within Qualtrics.
Randomization Unit
Participants are randomly assigned to different treatments. The values of the attributes for each choice are randomly generated within each survey (and so differ across participants).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
800 Individuals
Sample size: planned number of observations
800
Sample size (or number of clusters) by treatment arms
For each participant, two attributes of Option B are randomly chosen within the survey code. Call these the "target" and "alternative" attributes. There are 80 total decisions, which are divided into 4 intervals of 20 choices each. In each interval (except the first), participants can be informed about the value of at most one attribute.

Experiment 1 has 5 treatments:
Treatment 1 (N = 120): No information in intervals 2 and 3.
Treatment 2 (N = 120): Informed about the target attribute in interval 2, nothing in interval 3.
Treatment 3 (N = 120): Informed about the alternative attribute in interval 2, nothing in interval 3.
Treatment 4 (N = 120): Informed about the target attribute in interval 2 and again about the target attribute in interval 3.
Treatment 5 (N = 120): Informed about the target attribute in interval 2 and about the alternative attribute in interval 3.

Independently, everyone is randomized in interval 4 to see no information, a reminder that Option A comes with a lottery, or a reminder about the lottery along with additional information about the objective odds of winning it. Also independently, one attribute (other than the target and alternative attributes) is "frozen" at its starting value for the whole experiment.

Experiment 2 is similar to Experiment 1, except for three things. First, there is no lottery associated with Option A. Second, no attribute is frozen. Third, the value of Option A is chosen such that the target attribute is always decisive in determining whether Option A or Option B pays more. Within this experiment, participants are randomly and independently informed (in each of intervals 2, 3, and 4) about nothing, the value of the target attribute, or the value of the alternative attribute. We plan to recruit N = 200 people for Experiment 2.
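To make the Experiment 1 assignment structure above concrete, the following sketch draws the randomizations for one participant: the target and alternative attributes, one of the five information arms, the independent interval-4 lottery-reminder condition, and the frozen attribute. The labels and helper names are mine, not the registered survey logic.

```python
# Illustrative sketch of the Experiment 1 randomizations for a single participant.
import random

ATTRIBUTES = ["dimes", "nickels", "pennies", "box_0", "box_1", "box_2"]

def assign_experiment1_participant():
    # Target and alternative attributes are drawn within the survey for each participant.
    target, alternative = random.sample(ATTRIBUTES, 2)

    # Five equally sized arms (planned N = 120 each) differing in what is shown
    # in intervals 2 and 3; interval 1 never carries information.
    arm = random.randint(1, 5)
    info_by_interval = {
        1: {2: None,        3: None},
        2: {2: target,      3: None},
        3: {2: alternative, 3: None},
        4: {2: target,      3: target},
        5: {2: target,      3: alternative},
    }[arm]

    # Independent cross-randomizations described above.
    interval_4 = random.choice(
        ["no_information", "lottery_reminder", "lottery_reminder_with_odds"])
    frozen = random.choice([a for a in ATTRIBUTES if a not in (target, alternative)])

    return {
        "arm": arm,
        "target": target,
        "alternative": alternative,
        "info_by_interval": info_by_interval,
        "interval_4": interval_4,
        "frozen_attribute": frozen,
    }

print(assign_experiment1_participant())
```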

Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard
IRB Approval Date
2022-11-29
IRB Approval Number
IRB22-1564

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials