The Effects of Social Media Comments Section Moderation on Political Attitudes and Beliefs

Last registered on December 17, 2022

Pre-Trial

Trial Information

General Information

Title
The Effects of Social Media Comments Section Moderation on Political Attitudes and Beliefs
RCT ID
AEARCTR-0010337
Initial registration date
November 04, 2022

First published
November 08, 2022, 4:25 PM EST

Last updated
December 17, 2022, 11:15 AM EST

Locations

Region

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

PI Affiliation
Stanford University
PI Affiliation
University of California San Diego

Additional Trial Information

Status
In development
Start date
2022-11-11
End date
2023-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project focuses on the intersection of propaganda and censorship in China, where state media and other accounts post propaganda content on social media and censor undesirable comments under these posts. The experimental part of this project will evaluate to what extent comment moderation affects public opinion and political attitudes. We will conduct survey experiments based on real propaganda posts and comment censorship behavior exercised by state-sponsored social media accounts.

External Link(s)

Registration Citation

Citation
Cao, Thomas, Yiqing Xu and Leo Yang. 2022. "The Effects of Social Media Comments Section Moderation on Political Attitudes and Beliefs." AEA RCT Registry. December 17. https://doi.org/10.1257/rct.10337-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Treatment and control arms will see the same social media posts with different comments sections, or will receive different background information before they see the posts.
Intervention Start Date
2022-11-11
Intervention End Date
2023-01-31

Primary Outcomes

Primary Outcomes (end points)
Our main experiment is the first experiment: We will examine the respondents' second-order beliefs of 1) what proportion of people in society agrees with the regime's position represented in the propaganda social media posts and 2) what proportion of people in society will report anti-regime/critical content.
Primary Outcomes (explanation)
The two second-order belief questions will be self-reported on a scale of 1 to 5.

Secondary Outcomes

Secondary Outcomes (end points)
Our secondary outcomes are 1) the extent to which the respondents agree with the regime's position represented in the propaganda social media posts and 2) whether they themselves will report anti-regime/critical content. We will also conduct two additional experiments in parallel to see how we may or may not be able to mitigate the effects of the first experiment.
Secondary Outcomes (explanation)
The two secondary outcomes will also be self-reported on a scale of 1 to 5. See Experimental Design for details on the second and third experiments.

Experimental Design

Experimental Design
Respondents will be recruited via Qualtrics and Lucid in China. Each respondent will see six social media posts and will be asked to answer four questions after each post: 1) to what extent they agree or disagree with the post's position; 2) how many people in society they think agree with the post's position; 3) whether they would report content that criticizes the post's position; and 4) how many people in society they think would report such critical content.
Experimental Design Details
Respondents will be recruited via Qualtrics and Lucid in China. Respondents will be randomly assigned to one of the three experiments. In all three experiments, each respondent will see six social media posts and will be asked to answer four questions after each post: 1) to what extent they agree or disagree with the post's position; 2) how many people in society they think agree with the post's position; 3) whether they would report content that criticizes the post's position; and 4) how many people in society they think would report such critical content.

In the first experiment, randomization occurs at the level of posts: Each respondent will randomly see one of four comments sections for each post. For each post, Treatment Arm 1 will display only the filtered comments that the account manager has allowed to appear; Treatment Arm 2 will display the same comments as Treatment Arm 1, together with a notice that the comments section has been filtered (one realistic scenario that users actually see on Weibo); Treatment Arm 3 will display no comments at all and only a note that the comments section has been filtered (another realistic scenario); the Control Arm will display both the comments that have been allowed to appear and the comments that have been hidden by the account manager (i.e., what the comments section looks like before moderation). All comments are authentic, but the associated usernames have been anonymized.

In the second experiment, randomization occurs on the level of respondents. Respondents in Treatment Arm A will see a quiz question on the proportion of comments that are hidden by state-media Weibo accounts, and then be given the correct answer. Respondents in Treatment Arm B will be asked to select comments that have been hidden for a real Weibo post, and then be given the correct answer. Respondents in Treatment Arm C will see both quiz questions and be given the correct answers. Respondents in Control Arm will see neither. Then, all respondents will see the six posts with moderated comments sections and the notice of filtering (same as Treatment Arm 2 in the first experiment).

In the third experiment, randomization also occurs at the level of respondents. Respondents in the Treatment Arm will first see two posts with a direct comparison of their comments sections before and after moderation (i.e., Treatment Arm 2 and Control in the first experiment), and then see four posts with moderated comments sections and the notice of filtering (same as Treatment Arm 2 in the first experiment). Respondents in the Control Arm will see all six posts with moderated comments sections and the notice of filtering (same as Treatment Arm 2 in the first experiment). Respondents in the Control Arm here will be analyzed together with those in the Control Arm of the second experiment.

Update (Dec 17, 2022): We will conduct another experiment in which respondents will be randomly assigned to one of five arms: The Control Arm will see all six posts with no comments section moderation. Treatment Arm A will see all six posts with moderated comments sections but no notice of moderation. Treatment Arm B will see all six posts with moderated comments sections and a notice, for each post, that the comments have been moderated. Treatment Arm C will see all six posts with no comments and a notice, for each post, that the comments have been moderated. The Comparison Arm will be the same as the Treatment Arm in the third experiment above.

Posts will be displayed in random order.
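The post-level randomization and random display order described above can be sketched as follows. This is a minimal illustration only; the arm labels and function names are assumptions, and the actual assignment is done by the Qualtrics randomizer, not this code.

```python
import random

# Hypothetical labels for the four comments-section versions in Experiment 1
ARMS = ["T1_filtered", "T2_filtered_notice", "T3_no_comments_notice", "Control"]

def assign_comment_sections(n_posts=6, seed=None):
    """For one respondent: shuffle the display order of the posts, then
    independently draw one of the four comments-section versions for
    each post with equal probability."""
    rng = random.Random(seed)
    order = list(range(n_posts))
    rng.shuffle(order)  # posts shown in random order
    return [(post, rng.choice(ARMS)) for post in order]

# Example: one respondent's assignment
assignment = assign_comment_sections(seed=42)
```

Because assignment is drawn independently per post, a respondent may see different versions across the six posts, which is what makes post-level (rather than respondent-level) analysis possible in the first experiment.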

Our main results will be based on data collected from respondents who pass attention checks. Missing data will be filled in using conventional multiple imputation methods (such as “Amelia” in R). We will use the method proposed by Lin (2013) to control for pretreatment covariates. Robust standard errors clustered at the individual level will be used. We will analyze heterogeneous treatment effects along demographic and socioeconomic variables, as well as measures of political ideology, nationalism, and regime support.
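The Lin (2013) adjustment amounts to regressing the outcome on treatment, demeaned pretreatment covariates, and their interactions, with cluster-robust standard errors. The sketch below illustrates this on simulated data loosely calibrated to the registered outcome scale; the simulation, variable names, and CR0 variance choice are assumptions, and the multiple-imputation step is omitted.

```python
import numpy as np

def lin_adjusted_ate(y, treat, X_cov, cluster):
    """Lin (2013) covariate adjustment: regress y on treatment, demeaned
    covariates, and their interactions; return the ATE estimate and a
    cluster-robust (CR0 sandwich) standard error."""
    Xc = X_cov - X_cov.mean(axis=0)  # demean covariates
    X = np.column_stack([np.ones_like(y), treat, Xc, treat[:, None] * Xc])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta  # residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):  # sum of cluster-level score outer products
        Xg, ug = X[cluster == g], u[cluster == g]
        s = Xg.T @ ug
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv
    return beta[1], np.sqrt(V[1, 1])

# Simulated data mimicking the design: 300 respondents x 6 posts,
# true effect -0.12 on a ~1-5 outcome with SD about 0.9
rng = np.random.default_rng(0)
n_resp, n_posts = 300, 6
cluster = np.repeat(np.arange(n_resp), n_posts)  # respondent ids
treat = rng.integers(0, 2, n_resp * n_posts).astype(float)
X_cov = rng.normal(size=(n_resp * n_posts, 1))  # one pretreatment covariate
y = 3.9 - 0.12 * treat + 0.5 * X_cov[:, 0] + rng.normal(0, 0.9, n_resp * n_posts)

ate, se = lin_adjusted_ate(y, treat, X_cov, cluster)
```

In practice a package estimator (e.g., `lm_lin` in R's estimatr, or statsmodels with `cov_type="cluster"`) would be used instead of this hand-rolled sandwich.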

Randomization Method
Done by the Qualtrics randomizer
Randomization Unit
In the first experiment, randomization occurs at the level of posts. In the second and third experiments, randomization occurs at the level of respondents.

Update (Dec 17, 2022): In the new experiment, randomization also occurs at the level of respondents.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
We will recruit approximately 1,800–2,000 respondents for the three experiments in total. Each respondent will see six different Weibo posts. Update (Dec 17, 2022): Due to difficulty recruiting a sufficient number of respondents in China via Qualtrics, we were only able to collect approximately 1,000 responses for the first three experiments. Hence, we will conduct another experiment and use Lucid to recruit another 1,000 respondents for the new experiment (see Update in the Experimental Design section).
Sample size (or number of clusters) by treatment arms
In the first experiment, each post will be randomly assigned to one of the four versions (Treatment Arms 1–3 and Control) with equal probability, so we will have approximately 600 respondents * 6 posts/respondent * 0.25 = 900 posts in each arm.

In the second experiment, respondents will be randomly assigned into one of the four arms, so each arm will have approximately 600 respondents * 0.25 = 150 respondents.

In the third experiment, approximately 1/3 of the respondents will be assigned to Control and 2/3 of the respondents will be assigned to Treatment, so we will have approximately 600 * 1/3 = 200 respondents in Control and 600 * 2/3 = 400 respondents in Treatment.

Update (Dec 17, 2022): Due to difficulty recruiting a sufficient number of respondents in China via Qualtrics, we were only able to collect approximately 1,000 responses for the first three experiments. Hence, we will conduct another experiment and use Lucid to recruit another 1,000 respondents for the new experiment (see Update in the Experimental Design section). The new experiment will have approximately 200 respondents in each arm.
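The per-arm counts above follow from the pre-update recruiting target of roughly 600 respondents per experiment. A quick check of the arithmetic (illustrative only; the actual realized sample sizes differ per the Dec 17 update):

```python
# Back-of-the-envelope per-arm counts from the registration's
# pre-update recruiting assumptions (~600 respondents per experiment).
respondents_per_experiment = 600
posts_per_respondent = 6

# Experiment 1: post-level randomization across 4 versions
posts_per_arm_exp1 = respondents_per_experiment * posts_per_respondent // 4

# Experiment 2: respondent-level randomization across 4 arms
resp_per_arm_exp2 = respondents_per_experiment // 4

# Experiment 3: 1/3 of respondents to Control, 2/3 to Treatment
control_exp3 = respondents_per_experiment // 3
treatment_exp3 = respondents_per_experiment * 2 // 3
```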
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
In the first experiment, we will be able to detect an effect size of 0.12 at the alpha = 0.05 level with 80% power (Control mean = 3.90, Treatment mean = 3.78, SD = 0.9).
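The stated MDE is consistent with a standard two-sample normal-approximation formula at roughly 900 post-level observations per arm. The sketch below reproduces that number using only the Python standard library; it ignores any design effect from clustering within respondents, which is a simplification.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_sample(n_per_arm, sd, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-sample comparison of means
    with equal arm sizes, by the normal approximation:
    MDE = (z_{1-alpha/2} + z_{power}) * sd * sqrt(2 / n_per_arm)."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sd * sqrt(2 / n_per_arm)

# ~900 post-level observations per arm, SD = 0.9 (as registered)
mde = mde_two_sample(n_per_arm=900, sd=0.9)  # roughly 0.12
```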
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford University Institutional Review Board
IRB Approval Date
2022-06-15
IRB Approval Number
54133

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials