Selective Exposure and Political Polarization: An Online Field Experiment on News Consumption

Last registered on August 20, 2019

Pre-Trial

Trial Information

General Information

Title
Selective Exposure and Political Polarization: An Online Field Experiment on News Consumption
RCT ID
AEARCTR-0004582
Initial registration date
August 14, 2019


First published
August 20, 2019, 10:59 AM EDT


Locations

Region

Primary Investigator

Affiliation
Northeastern University

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2016-02-01
End date
2019-05-01
Secondary IDs
Abstract
This study empirically investigates the causal link between news consumers' self-selective exposure to like-minded partisan media and political polarization. I create a South Korean mobile news application for this study. Users of the app are given access to curated articles on key political issues and are regularly asked about their views on those issues. Some randomly selected users are allowed to choose the news sources from which to read articles; others are given randomly selected articles. To test the familiarity mechanism, I temporarily boost users' familiarity with some of the news sources during the initial stage of the experiment.

Note that this registration was made after the conclusion of the experiment and its analysis so that the trial would be searchable in the AEA registry. For the pre-analysis plan, see the IRB document for this project (MIT COUHES # 1511295047), attached to this registration. The current working paper is circulated under the title "Better the Devil You Know: An Online Field Experiment on News Consumption."
External Link(s)

Registration Citation

Citation
Jo, Donghee. 2019. "Selective Exposure and Political Polarization: An Online Field Experiment on News Consumption." AEA RCT Registry. August 20. https://doi.org/10.1257/rct.4582-1.0
Experimental Details

Interventions

Intervention(s)
Each user reads one article per day about a randomly selected issue for approximately two weeks. I exploit two sources of randomness in this experiment. First, during a subset of the experimental period, the article each user reads is randomly selected. Second, each user is assigned to one of the treatment groups: some randomly selected users are allowed to choose the news sources from which to read articles, while others are given randomly selected articles. See "Experimental Design" below for more details.
Intervention Start Date
2016-02-01
Intervention End Date
2016-11-30

Primary Outcomes

Primary Outcomes (end points)
Positions on the eight policy issues covered in the app. Positions are continuous measures between 0 and 1; a reported position of exactly 0 or 1 is considered extreme. To elicit positions, the app asks the user to "move the scroll bar to denote your view on <IssueName>" and provides a rough cardinal benchmark along the horizontal attitude bar.

For comparisons between the treatment groups, the two main outcome variables are (i) the distance between a user's prior and posterior positions and (ii) whether the user holds an extreme policy view.
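The sketch below illustrates how these two outcome variables could be constructed from the reported positions. It assumes a tidy user-by-issue data frame with hypothetical column names (`prior`, `posterior`); neither the data structure nor the names are taken from the study's actual code or data.

```python
import pandas as pd

# Illustrative data: one row per user x issue; positions on the 0-1 scale.
df = pd.DataFrame({
    "user_id":   [1, 1, 2, 2],
    "issue":     ["A", "B", "A", "B"],
    "prior":     [0.40, 1.00, 0.55, 0.20],
    "posterior": [0.65, 1.00, 0.50, 0.00],
})

# (i) Distance between prior and posterior positions.
df["update_distance"] = (df["posterior"] - df["prior"]).abs()

# (ii) Extreme policy view: a reported position of exactly 0 or 1.
df["extreme_posterior"] = df["posterior"].isin([0.0, 1.0])

print(df[["user_id", "issue", "update_distance", "extreme_posterior"]])
```

Here `update_distance` corresponds to outcome (i) and `extreme_posterior` to outcome (ii).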
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Upon installation and completion of the baseline survey, each user enters a five-day pre-exposure period. During this period, a randomly selected article from a randomly selected news source about a randomly selected issue is provided each day. Pre-exposure to randomly chosen news sources provides an important source of exogenous variation for the test of mechanisms. The period also serves as a grace period that alleviates the attrition problem by screening out users who would have dropped out early.

After the pre-exposure period, each user reads an article about a randomly selected issue each day. For the analysis, I include one article reading per issue per user after the pre-exposure period, taking into account the rapid dropout of users.

I assign users who complete the pre-exposure period to one of seven treatment groups, whose everyday experiences differ slightly. First, in a step common to every treatment group, the app randomly selects the issue of the day for each user. The first three groups (G1-G3) are allowed to select the news source from which to read an article about the issue of the day; the remaining four groups (G4-G7) are given randomly selected articles. When making source selections, G1 and G2 can see the names of the news sources while G3 cannot. The average positions of the sources on the issue at hand are shown only to G2 and G3, not to G1. For the groups that cannot select a news source (G4-G7), a screen shown immediately before the randomly chosen article displays information about the source that published it; this information varies across groups (name only, name and position, position only, or nothing). Regardless of treatment status, every user can identify the news source with minimal effort while reading, because each article indicates the source's name at the end of the text, and most articles also indicate it at the beginning and in the middle.

Note: unfortunately, the average source positions shown in the app to G2, G3, G5, and G6 contained coding errors that went unnoticed by the researchers until the very end of the research period: they almost always indicated positions very close to the center of the scale. Anecdotally, users mostly ignored the scale because of this problem. As a result, the experiences of G1 and G2 were roughly similar, as were those of G4 and G5 and those of G6 and G7. Furthermore, the names of the news sources were always salient on the reading screen, making the experiences of G4-G7 largely indistinguishable.

Given this, I aggregate the seven groups into three larger groups to maximize statistical power, taking into account the similar experiences of the finer subgroups within them. The Source-Name Group includes G1 and G2, which are allowed to select their news source based on source names. The Source-Position Group consists of G3 alone, which cannot see source names (making it harder to identify the outlets likely to advocate the supported party's view) but can see source positions and select on that basis. The remaining groups (G4-G7), which are not allowed to choose news sources and are given randomly selected articles, are merged into the No-Choice Group. All group comparisons in the paper are based on these three larger groups.
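The following is a minimal sketch of what the arm assignment and the aggregation into the three analysis groups could look like. The mapping of G4-G7 to the specific information conditions is inferred from the notes above, and equal assignment probabilities are an assumption for illustration; the study's actual server-side code is not part of this registration.

```python
import random

# Map the seven experimental arms onto the three analysis groups described above.
ANALYSIS_GROUP = {
    "G1": "Source-Name",      # chooses a source; sees source names
    "G2": "Source-Name",      # chooses a source; sees names (and the buggy positions)
    "G3": "Source-Position",  # chooses a source; sees positions but not names
    "G4": "No-Choice",        # random article; shown the source's name only (inferred)
    "G5": "No-Choice",        # random article; shown name and position (inferred)
    "G6": "No-Choice",        # random article; shown position only (inferred)
    "G7": "No-Choice",        # random article; shown no source information (inferred)
}

def assign_arm(rng: random.Random) -> str:
    """Assign a user who completed the pre-exposure period to one of the
    seven arms; equal probabilities are an assumption for illustration."""
    return rng.choice(sorted(ANALYSIS_GROUP))

rng = random.Random(4582)  # arbitrary seed for reproducibility
arm = assign_arm(rng)
print(arm, "->", ANALYSIS_GROUP[arm])
```

Group comparisons in the analysis would then be run on the three aggregated labels rather than on the seven underlying arms.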
Experimental Design Details
Randomization Method
Randomization was performed by a random number generator on a remote server.
Randomization Unit
Individual for group-level comparisons; each article reading for position-updating and source-selection results (clustered at the user level to be conservative)
Was the treatment clustered?
Yes
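
The reading-session-level regressions treat each article reading as an observation, with standard errors clustered at the user level. Below is a minimal sketch of that estimation using statsmodels on simulated data with hypothetical variable names (`update_distance`, `treated`, `user_id`); it illustrates the clustering choice only and is not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated reading-session-level data; variable names are hypothetical.
rng = np.random.default_rng(0)
n_users, sessions = 50, 8
df = pd.DataFrame({
    "user_id": np.repeat(np.arange(n_users), sessions),
    "treated": np.repeat(rng.integers(0, 2, n_users), sessions),
})
df["update_distance"] = 0.1 + 0.05 * df["treated"] + rng.normal(0, 0.1, len(df))

# OLS of the outcome on treatment, with standard errors clustered at the user level.
model = smf.ols("update_distance ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["user_id"]}
)
print(model.summary().tables[1])
```

Clustering at the user level allows for arbitrary correlation of errors across the repeated readings of the same user.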

Experiment Characteristics

Sample size: planned number of clusters
1,500 users for reading-session-level regressions (position-updating pattern analysis); 450 users for comparisons between treatment groups
Sample size: planned number of observations
Approximately 2,000 for comparisons between treatment groups; approximately 10,000 for reading-session-level regressions
Sample size (or number of clusters) by treatment arms
100 Source-Name Group, 50 Source-Position Group, 200 No-Choice Group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
MIT Committee On the Use of Humans as Experimental Subjects
IRB Approval Date
2015-11-18
IRB Approval Number
1511295047

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
November 30, 2016, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
November 30, 2016, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
367 users for group comparisons; 1,420 users for reading-session-level analysis (position-updating patterns)
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
1,774 user×issue observations for group comparisons; 7,792 user×issue×round observations for position-updating patterns
Final Sample Size (or Number of Clusters) by Treatment Arms
97 Source-Name Group, 42 Source-Position Group, 228 No-Choice Group
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials