Correlation Neglect on Social Media

Last registered on May 25, 2024

Pre-Trial

Trial Information

General Information

Title
Correlation Neglect on Social Media
RCT ID
AEARCTR-0008660
Initial registration date
February 10, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 10, 2022, 7:36 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
May 25, 2024, 12:15 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
Harvard University
PI Affiliation
Harvard University

Additional Trial Information

Status
Completed
Start date
2021-12-19
End date
2022-10-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study aims to understand whether repetitive or correlated information on social media affects beliefs and real-life decisions. We propose a field experiment on a Chinese social platform to explore the impact of correlated posts on college students' decisions about becoming civil servants. We will recruit Chinese college students and randomly assign them to two groups that follow different accounts operated by us. By including reposting chains in posts, we will hold constant the amount of information (content of posts, endorsement by users, etc.) for both groups while varying the repetitiveness of the information.

External Link(s)

Registration Citation

Citation
Huang, Yihong, Yixi Jiang and Ziqi Lu. 2024. "Correlation Neglect on Social Media." AEA RCT Registry. May 25. https://doi.org/10.1257/rct.8660-1.2
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2021-12-26
Intervention End Date
2022-10-30

Primary Outcomes

Primary Outcomes (end points)
1. Beliefs: views about civil service work
2. Intermediate outcomes: email list sign-up and video click-through rate
2.1 Willingness to sign up for an email list sending them reminders about civil service exams.
2.2 Whether participants click a link to a short video about civil service exams.
3. Sign-up decisions: actual sign-up decisions for the civil service exam
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We propose an online experiment to study correlation neglect bias on social media in the context of the civil service exam in China. We will explore how reposts from civil servants affect college students' career choices about becoming civil servants. We will recruit Chinese college students as our study participants.

We will recruit participants from WeChat groups and pre-screen college students based on the following criteria:
i. In their last two years of college or in graduate school.
ii. Spend at least 30 minutes on Microblog every day.
iii. Hesitant about signing up for civil service exams.

After obtaining consent from participants who meet the criteria above, the experiment will proceed in the following steps:
1. We will send a baseline survey via email. The baseline survey collects demographic information, including gender, age, school, major, etc. We will then ask about participants' baseline beliefs about civil service jobs as well as their willingness to become civil servants.
2. We will randomly assign participants into two groups and, at the end of the baseline survey, incentivize them to special-follow different Microblog accounts operated by us (special-following ensures that our posts appear at the top of followers’ feeds). Those in the Treatment group will follow accounts A, B, C and Z, while those in the Control group will follow accounts C, D, E and Z. Accounts A, B, C and Z will post (and repost) content relevant to civil service jobs, and accounts D and E will post irrelevant content so that participants in both groups follow the same amount of information. Participants will be incentivized to upload a screenshot to prove that they special-follow us. We will also incentivize them to “like” our posts.
3. For the next three weeks, we will post and repost content from our six accounts (all posts will be based on actual posts on Sina Microblog; there will be no deception). For accounts A, B and C, B will repost A’s post and C will repost B’s post, yielding the reposting chains B//@A and C//@B//@A. (An example of what participants in each group will see can be found in “example_posts.doc”.)
Those in the control group will see:
- C//@B//@A: negative info about civil service jobs.
- Z: positive info about civil service jobs.
- D: irrelevant posts (for example, about pests).
- E: irrelevant posts (for example, about pests).
Those in the treatment group will see:
- C//@B//@A: negative info about civil service jobs.
- B//@A: negative info about civil service jobs.
- A: negative info about civil service jobs.
- Z: positive info about civil service jobs.
(Note: The content of the three posts C//@B//@A, B//@A and A will be the same.)

The key point is that both groups receive exactly the same amount of information about civil service jobs. However, because the treatment group is repeatedly exposed to the same information source, participants who are subject to correlation neglect bias will overreact to A’s post and be more likely to overweight negative information about civil service jobs.
4. During these three weeks, we will keep track of the number of likes and which users “liked” each post as a proxy for attention to each post.
5. After participants have followed these accounts for three weeks, we will elicit their posterior beliefs about civil service jobs and their willingness to become civil servants. We will also ask whether they want to sign up for an email list that sends them reminders about civil service exams, and we will track the click-through rate for a video about civil service exams. This endline survey will also be distributed via email.

(New online experiment, May 2024) To further shed light on the mechanism, we implement an online experiment in a more controlled setting. The core design, using reposting chains to construct redundant information, is similar to the field experiment above, except that (1) we present screenshots of Microblog posts, (2) we vary the salience of correlated posts to explore the role of distraction, and (3) we use positive information about civil service jobs to construct redundant posts.

In the online experiment, we include two sets of Microblog posts about civil servant salaries and present them to participants.
- C//@B//@A: positive info about civil servant salary.
- B//@A: positive info about civil servant salary.
- A: positive info about civil servant salary.
- Z: negative info about civil servant salary.

We randomly assign participants into one of three groups: Control, Treat-Salient and Treat-Distraction. In Treat-Salient, screenshots from C, B and A are displayed in consecutive order, together with two irrelevant posts and post Z. In Treat-Distraction, screenshots from C, B and A are separated by irrelevant posts. Finally, in the Control group, participants see screenshots from C and Z, as well as irrelevant posts.

The main outcome variable in this online experiment is participants' beliefs about civil servant salary.
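The individual-level random assignment used in both experiments (two arms in the field experiment, three in the online experiment) can be sketched as follows. This is a minimal illustration only: the function name, seed, and participant IDs are hypothetical and not taken from the study's actual randomization code.

```python
import random

def assign_arms(participant_ids, arms, seed=0):
    """Randomly assign each participant to one arm, keeping arm sizes
    balanced (they differ by at most one participant)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal shuffled participants round-robin across the arms.
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}

# Field-experiment pilot: 500 students split evenly into two arms.
assignment = assign_arms(range(500), ["Treatment", "Control"], seed=2021)
counts = {arm: sum(1 for a in assignment.values() if a == arm)
          for arm in ["Treatment", "Control"]}
print(counts)  # {'Treatment': 250, 'Control': 250}
```

The same call with `arms=["Control", "Treat-Salient", "Treat-Distraction"]` and 360 IDs would yield the 120-per-arm split reported for the May 2024 online experiment.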
Experimental Design Details
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
500 college students for the pilot, 1000 college students for the main experiment.
Sample size: planned number of observations
500 college students for the pilot, 1000 college students for the main experiment. (New online experiment, May 2024): 360 participants
Sample size (or number of clusters) by treatment arms
For the pilot: 250 in treatment and 250 in control.
For the main experiment: 500 in treatment and 500 in control.

(New online experiment, May 2024): 120 participants in each treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area IRB
IRB Approval Date
2022-02-01
IRB Approval Number
IRB21-1629

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials