Prosocial Ranking Challenge

Last registered on September 12, 2024

Pre-Trial

Trial Information

General Information

Title
Prosocial Ranking Challenge
RCT ID
AEARCTR-0014274
Initial registration date
August 29, 2024

Initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
September 12, 2024, 5:19 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
University of Warwick

Other Primary Investigator(s)

PI Affiliation
UC Berkeley
PI Affiliation
Columbia University
PI Affiliation
University of Michigan
PI Affiliation
Civic Health Project
PI Affiliation
Columbia University

Additional Trial Information

Status
In development
Start date
2024-07-08
End date
2025-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
More information about the study will be available after the completion of the trial.
External Link(s)

Registration Citation

Citation
Beknazar-Yuzbashev, George et al. 2024. "Prosocial Ranking Challenge." AEA RCT Registry. September 12. https://doi.org/10.1257/rct.14274-1.0
Sponsors & Partners

Sponsors

Experimental Details

Interventions

Intervention(s)
The experimental intervention is delivered through a browser extension that all participants must install. The extension makes various modifications to the content people see on Facebook, X, and Reddit.
Intervention Start Date
2024-08-30
Intervention End Date
2025-03-31

Primary Outcomes

Primary Outcomes (end points)
1. Mental Health

2. Support for partisan violence

3. Meta-perceptions on support for partisan violence

4. Affective polarization

5. Intergroup empathy

6. Political knowledge

7. Meaningful connection on platforms

8. Experiences on platforms

9. Negative Emotions

10. Social media use

a. Active time spent
- We measure active time following the methodology specified in Beknazar-Yuzbashev et al. (2022), summed across all three treated platforms.

b. Engagement rate
- Measured as the total number of engagement signals submitted divided by the number of posts seen, across all platforms.
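As a concrete reading of this definition, a minimal sketch (the platform keys and pooling across platforms via dicts are illustrative, not specified in the registration):

```python
def engagement_rate(signals_by_platform, posts_seen_by_platform):
    """Total engagement signals divided by total posts seen,
    pooled across all treated platforms (per the definition above).
    Returns 0.0 when no posts were seen."""
    total_signals = sum(signals_by_platform.values())
    total_posts = sum(posts_seen_by_platform.values())
    return total_signals / total_posts if total_posts else 0.0
```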

Outcomes 1-9 will be measured in all surveys (baseline, midline, endline). Some questions from outcomes 3, 4, 5, and 9 will be measured using in-feed surveys injected by the extension. Lastly, outcome 10 will be measured through usage data.

Heterogeneity:

We will look at heterogeneity with respect to:
- Political affiliation (Democrats and Independents leaning towards Democrats vs. Republicans and Independents leaning towards Republicans),
- Minority status (female respondents, ethnic minorities),
- High vs. low political content exposure (upper quartile vs. lower three quartiles) during the benchmark period,
- High vs. low toxic content exposure (above vs. below the median) during the benchmark period.

For outcome 10, we will additionally look at heterogeneity by social media platform.
Primary Outcomes (explanation)
Re 1: Mental Health will be measured using the following survey questions (All of the time, Most of the time, More than half the time, Some of the time, At no time):
- I have felt cheerful and in good spirits
- I have felt calm and relaxed
- I have felt active and vigorous
- I woke up feeling fresh and rested
- My daily life has been filled with things that interest me

We will standardize scores for each statement and add them up to create a single index measure.
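The index construction described here (and reused for the other survey outcomes below) can be sketched as follows; the function name and data layout are illustrative:

```python
from statistics import mean, stdev

def standardized_index(responses):
    """Z-score each survey item across respondents, then sum the
    z-scores within each respondent to form a single additive index.
    `responses` is a list of per-respondent lists of item scores."""
    n_items = len(responses[0])
    # Collect each item's scores across respondents (one column per item).
    cols = [[r[i] for r in responses] for i in range(n_items)]
    mus = [mean(c) for c in cols]
    sds = [stdev(c) for c in cols]
    return [sum((r[i] - mus[i]) / sds[i] for i in range(n_items))
            for r in responses]
```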

Re 2: Support for partisan violence will be measured using the following survey questions:

- When, if ever, is it OK for [inparty] to send threatening and intimidating messages to [outparty] leaders? (101 pt scale, Never, Always)
- How much do you feel it is justified for [inparty members] to use violence in advancing their political goals these days? (101 pt scale, Not at all, Extremely)

We will standardize scores for both statements and add them up to create a single index measure.

Re 3: Meta-perceptions on support for partisan violence will be measured using two survey questions:
- How do you think an average [outparty] would respond to the following question? How much do you feel it is justified for [outparty] to use violence in advancing their political goals these days?
- How do you think an average [outparty] would respond to the following question? When, if ever, is it OK for [outparty] to send threatening and intimidating messages to [inparty] leaders?

We will standardize scores for each statement and add them up to create a single index measure.

Re 4: Affective polarization will be measured using the following survey questions:
- [APOUT1] Please indicate how you feel toward [outparty members] using the scale below. 100 means that you feel very favorably or warm toward them, 0 means that you feel very unfavorably or cold, and 50 means that you feel neutral.
- [APIN1] Please indicate how you feel toward [inparty members] using the scale below. 100 means that you feel very favorably or warm toward them, 0 means that you feel very unfavorably or cold, and 50 means that you feel neutral.
- [APOUT2] How comfortable are you having friends who are [outgroup members]? (101 pt scale Not at all to Completely)
- [APIN2] How comfortable are you having friends who are [ingroup members]? (101 pt scale Not at all to Completely)

We will standardize scores for each statement and add them up to create a single index measure.

Re 5: Intergroup empathy will be measured using two survey questions (7-point scale ranging from Strongly disagree to Strongly agree):
- I find it difficult to see things from the [outparty] point of view.
- I think it is important to understand [outparty] by imagining how things look from their perspective.

We will standardize scores for each statement and add them up to create a single index measure.

Re 6: Participants will be asked questions about their political knowledge in each of the surveys.

Of the following news events, which ones do you think are true events that occurred in the last month, and which ones do you think are false and did not occur? (True, False, Unsure)

There will be five statements in each survey, taken from headlines from the two weeks preceding the survey, with 2-3 modified to be false.

Re 7: Meaningful connection on platforms will be measured using six survey questions:
- In the last two weeks, have you experienced a meaningful connection with others on Facebook?
- In the last two weeks, have you experienced a meaningful connection with others on X (Twitter)?
- In the last two weeks, have you experienced a meaningful connection with others on Reddit?
- In the last two weeks, have you personally witnessed or experienced something that affected you negatively on Facebook?
- In the last two weeks, have you personally witnessed or experienced something that affected you negatively on X (Twitter)?
- In the last two weeks, have you personally witnessed or experienced something that affected you negatively on Reddit?

Re 8: Experiences on platforms will be measured using six survey questions:
- In the last two weeks, have you learned something that was useful or helped you understand something important on Facebook?
- In the last two weeks, have you learned something that was useful or helped you understand something important on X (Twitter)?
- In the last two weeks, have you learned something that was useful or helped you understand something important on Reddit?
- In the last two weeks, have you witnessed or experienced content that you would consider bad for the world on Facebook?
- In the last two weeks, have you witnessed or experienced content that you would consider bad for the world on X (Twitter)?
- In the last two weeks, have you witnessed or experienced content that you would consider bad for the world on Reddit?

We will standardize scores for each statement and add them up to create a single index measure.

Re 9: Negative Emotions will be measured using the following question (note: this question will only be asked through in-feed surveys, not in the baseline, midline, or endline surveys).
- Reading my {platform} feed makes me feel angry, sad, or disgusted.

Secondary Outcomes

Secondary Outcomes (end points)
1. Social Trust

2. Further measures of user engagement

a. Total number of posts seen
b. Total number of political/civic posts seen
c. Average toxicity (Jigsaw) of posts seen
d. Engagement rate with toxicity
- Measured as: average toxicity of posts weighted by share of total viewport time
- Alternatively measured as: average toxicity of posts engaged with divided by average toxicity of all rendered posts, where engagement can be shares, clicks, or reactions
e. Engagement rate with political/civic posts
- Political/civic posts will be classified using the classifier described in https://arxiv.org/abs/2403.13362
f. Average toxicity of posts created (Jigsaw)
g. Attrition, per platform, defined as the fraction of users who had at least one session in month 5 as compared to month 1, controlling for extension uninstallation.
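Definition (d) above (toxicity weighted by share of total viewport time) reduces to a time-weighted average; a sketch under assumed field names (`toxicity`, `viewport_ms` are illustrative):

```python
def toxicity_weighted_engagement(posts):
    """Average toxicity of the posts seen, where each post's weight is
    its share of total viewport time (definition (d) above).
    Each post is a dict with illustrative keys `toxicity` and `viewport_ms`."""
    total_time = sum(p["viewport_ms"] for p in posts)
    if total_time == 0:
        return 0.0
    return sum(p["toxicity"] * p["viewport_ms"] for p in posts) / total_time
```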

Heterogeneity:
We will look at the same angles of heterogeneity as for the primary outcomes. For outcome 2, we will additionally look at heterogeneity with respect to social media platform.
Secondary Outcomes (explanation)
Re 1: Social Trust will be measured using the following survey questions:

- "Generally speaking, would you say that most people can be trusted, or that you can't be too careful in dealing with people?"
- Outparty friends (Druckman and Levendusky 2019; Rajadesingan et al. 2023):
- "How comfortable are you having close personal friends who are [Outparty]?"

We will standardize scores for each statement and add them up to create a single index measure.

Experimental Design

Experimental Design
Information on the experimental design is hidden until the end of the trial.
Experimental Design Details
Not available
Randomization Method
The browser extension assigns each user to one of the experimental groups using a random number generator. The control group will be twice as large as each treatment group.
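A 2:1 control weighting of this kind can be implemented as a weighted random draw; a minimal sketch (the arm names and the use of Python's `random` module are illustrative; the extension's actual generator is not described in the registration):

```python
import random

def assign_group(treatment_arms, rng=None):
    """Randomly assign a user to 'control' or one of the treatment arms,
    with control twice as likely as any single treatment arm."""
    rng = rng or random.Random()
    groups = ["control"] + list(treatment_arms)
    weights = [2] + [1] * len(treatment_arms)
    return rng.choices(groups, weights=weights, k=1)[0]
```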
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
The number of observations depends on how successful we are in recruiting users to install the browser extension and how many of them stay in the study through the endline. We are recruiting through a variety of online research and market research companies (CloudResearch, Forthright, PureSpectrum, Cint, and several others). We estimate that we will be able to recruit approximately 15,000 users at baseline; assuming retention to endline of at most 80%, this puts an upper bound of 12,000 on the endline sample.
Sample size (or number of clusters) by treatment arms
We will randomly assign individuals to treatment groups with equal probabilities, except that the likelihood of being assigned to the control group is twice as high as the likelihood of being assigned to any individual treatment group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Committee for Protection of Human Subjects University of California, Berkeley
IRB Approval Date
2024-05-06
IRB Approval Number
2024-03-17285