The Effects of Social Media Comments Section Moderation on Political Attitudes and Beliefs

Last registered on December 17, 2022

Pre-Trial

Trial Information

General Information

Title
The Effects of Social Media Comments Section Moderation on Political Attitudes and Beliefs
RCT ID
AEARCTR-0010337
Initial registration date
November 04, 2022

First published
November 08, 2022, 4:25 PM EST

Last updated
December 17, 2022, 11:15 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

PI Affiliation
Stanford University
PI Affiliation
University of California San Diego

Additional Trial Information

Status
In development
Start date
2022-11-11
End date
2023-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project focuses on the intersection of propaganda and censorship in China, where state media and other accounts post propaganda content on social media and censor undesirable comments under these posts. The experimental part of this project will evaluate to what extent comment moderation affects public opinion and political attitudes. We will conduct survey experiments based on real propaganda posts and comment censorship behavior exercised by state-sponsored social media accounts.

External Link(s)

Registration Citation

Citation
Cao, Thomas, Yiqing Xu and Leo Yang. 2022. "The Effects of Social Media Comments Section Moderation on Political Attitudes and Beliefs." AEA RCT Registry. December 17. https://doi.org/10.1257/rct.10337-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Treatment and control arms will see the same social media posts but with different comments sections, or will receive different background information before seeing the posts.
Intervention Start Date
2022-11-11
Intervention End Date
2023-01-31

Primary Outcomes

Primary Outcomes (end points)
Our main experiment is the first experiment: we will examine respondents' second-order beliefs about 1) what proportion of people in society agree with the regime's position represented in the propaganda social media posts and 2) what proportion of people in society would report anti-regime/critical content.
Primary Outcomes (explanation)
The two second-order belief questions will be self-reported on a scale of 1 to 5.

Secondary Outcomes

Secondary Outcomes (end points)
Our secondary outcomes are 1) the extent to which respondents agree with the regime's position represented in the propaganda social media posts and 2) whether they themselves would report anti-regime/critical content. We will also conduct two additional experiments in parallel to test whether the effects observed in the first experiment can be mitigated.
Secondary Outcomes (explanation)
The two secondary outcomes will also be self-reported on a scale of 1 to 5. See Experimental Design for details on the second and third experiments.

Experimental Design

Experimental Design
Respondents will be recruited via Qualtrics and Lucid in China. Each respondent will see six social media posts and will be asked to answer four questions after each post: 1) to what extent they agree or disagree with the post's position; 2) how many people in society they think agree with the post's position; 3) whether they would report content that criticizes the post's position; and 4) how many people in society they think would report such critical content.
Experimental Design Details
Not available
Randomization Method
Done by the Qualtrics randomizer
Randomization Unit
In the first experiment, randomization occurs on the unit of posts. In the second and third experiments, randomization occurs on the unit of respondents.

Update (Dec 17, 2022): In the new experiment, randomization also occurs on the unit of respondents.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
We will recruit approximately 1,800 to 2,000 respondents for the three experiments in total. Each respondent will see six different Weibo posts. Update (Dec 17, 2022): Due to difficulty in recruiting a sufficient number of respondents in China via Qualtrics, we were only able to collect approximately 1,000 responses for the first three experiments. Hence, we will conduct another experiment and use Lucid to recruit another 1,000 respondents for the new experiment (see Update in the Experimental Design section).
Sample size (or number of clusters) by treatment arms
In the first experiment, each post will be randomly assigned to one of four versions (Treatment Arms 1-3 and Control) with equal probability, so we will have approximately 600 respondents * 6 posts per respondent * 0.25 = 900 posts in each arm.

In the second experiment, respondents will be randomly assigned into one of the four arms, so each arm will have approximately 600 respondents * 0.25 = 150 respondents.

In the third experiment, approximately 1/3 of the respondents will be assigned to Control and 2/3 of the respondents will be assigned to Treatment, so we will have approximately 600 * 1/3 = 200 respondents in Control and 600 * 2/3 = 400 respondents in Treatment.

Update (Dec 17, 2022): Due to difficulty in recruiting a sufficient number of respondents in China via Qualtrics, we were only able to collect approximately 1,000 responses for the first three experiments. Hence, we will conduct another experiment and use Lucid to recruit another 1,000 respondents for the new experiment (see Update in the Experimental Design section). The new experiment will have approximately 200 respondents in each arm.
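The per-arm counts above follow from simple allocation arithmetic. A minimal sketch of the originally planned numbers, assuming roughly 600 respondents per experiment (the pre-update design):

```python
# Planned allocation arithmetic from the registration (original design,
# ~600 respondents per experiment, before the Dec 17 update).
n_respondents = 600
posts_per_respondent = 6

# Experiment 1: randomization at the post level, 4 equal arms.
posts_per_arm = n_respondents * posts_per_respondent // 4
print(posts_per_arm)  # 900

# Experiment 2: randomization at the respondent level, 4 equal arms.
respondents_per_arm = n_respondents // 4
print(respondents_per_arm)  # 150

# Experiment 3: 1/3 of respondents in Control, 2/3 in Treatment.
control = n_respondents // 3
treatment = n_respondents * 2 // 3
print(control, treatment)  # 200 400
```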
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
In the first experiment, we will be able to detect an effect size of 0.12 at the alpha = 0.05 level with 80% power (Control mean = 3.9; Treatment mean = 3.78; SD = 0.9).
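The stated MDE is consistent with a standard two-sample difference-in-means power calculation under a normal approximation, ignoring any clustering of posts within respondents. A minimal sketch, assuming ~900 observations per arm as in the first experiment; the function name `mde_two_arm` is illustrative, not from the registration:

```python
from math import sqrt
from statistics import NormalDist

def mde_two_arm(n_per_arm, sd, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-arm comparison of means
    (normal approximation, equal arm sizes, equal variances)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = z.inv_cdf(power)
    return (z_alpha + z_power) * sd * sqrt(2 / n_per_arm)

# Parameters from the registration: SD = 0.9, ~900 posts per arm.
print(round(mde_two_arm(900, 0.9), 2))  # 0.12, matching the stated MDE
```

Larger samples shrink the MDE proportionally to the square root of the per-arm size, which is why the post-level randomization (900 observations per arm rather than 150 respondents) supports detecting an effect as small as 0.12.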
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford University Institutional Review Board
IRB Approval Date
2022-06-15
IRB Approval Number
54133