Polarization in Group Settings

Last registered on July 20, 2023

Pre-Trial

Trial Information

General Information

Title
Polarization in Group Settings
RCT ID
AEARCTR-0011761
Initial registration date
July 17, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 20, 2023, 3:38 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Abilene Christian University

Other Primary Investigator(s)

PI Affiliation
Universidad de las Americas

Additional Trial Information

Status
In development
Start date
2023-08-01
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Unable to disclose prior to the experiment.
External Link(s)

Registration Citation

Citation
McNamara, Trent and Roberto Mosquera. 2023. "Polarization in Group Settings." AEA RCT Registry. July 20. https://doi.org/10.1257/rct.11761-1.0
Experimental Details

Interventions

Intervention(s)
Unable to disclose prior to the experiment.
Intervention (Hidden)
Affective political polarization has been a mainstay in political markets both within the United States and globally throughout the 20th century. Initial research indicates that this has significant effects on society. Dimant (2023) runs several experiments on MTurk in strategic and non-strategic contexts. First, individuals are primed with a 'MAGA' picture, and preferences for Trump are collected. Participants are then randomly matched into in-group or out-group pairs. Robbett and Matthews (2023) similarly ran an experiment on MTurk matching people based on their stated political beliefs, finding that heterogeneous groups are less willing to compromise and are less efficient without the ability to punish; this difference is alleviated when groups are given the ability to punish behavior. Mill and Morgan (2022) again used MTurk to run an experiment in which self-identified Trump and Clinton voters were assigned to an interaction group. Results demonstrate that more polarized matches produce less positive attitudes and overall wealth destruction across games.

One of the primary limitations of the current research stems from the priming nature of the experimental designs and the potential for experimenter demand effects to bias results. All subjects answer questions about their beliefs and are then immediately and knowingly sorted into groups with someone from the same or the opposite group. Given this, participants can, consciously or subconsciously, infer the purpose of the experiment and modify their behavior to produce the results they think are desired.

To correct this problem, we present an experiment that more subtly manipulates an individual's belief of being matched with someone from their out-group. We run a repeated public goods game and additionally recruit confederates to wear apparel from publicly recognizable and politically contested organizations in both the liberal and conservative spheres. Confederates will record a video introducing themselves to the group, and participants will be asked to watch these videos. Hence, by manipulating which videos a participant sees, we can generate exogenous variation in an individual's belief of being matched with someone from an opposite worldview, without the experimenter eliciting this information and informing them of the match before an intervention.

This experiment seeks to answer three primary research questions. First, how do polarized social contexts impact a group's ability to cooperate and reach an efficient group outcome? Second, how do individuals in polarized groups compare to those in unpolarized groups regarding their ability to make choices consistent with rational preferences? Third, are participants in more polarized groups more likely to punish other group members?
Intervention Start Date
2023-08-01
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
donations to a public good, budget set choices
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Unable to disclose prior to the experiment.
Experimental Design Details
Our design and protocols integrate the experimental designs of Castillo et al. (2020), who document how the social context of a group (the female-to-male ratio) impacts women's risk preferences but not the other way around, and of Robbett and Matthews (2023), who use a repeated public goods game that matches participants by their stated political preferences. Instead of manipulating the female-to-male ratio, we design and analyze behavior in a repeated public goods game consisting of N rounds between randomly matched participants in an online environment. A step-by-step listing of the protocols follows.

We will hire confederates from various job sites and local connections. The research team schedules two sessions with prospective confederates. The first is a screening and consent informational session. Confederates will be informed of the required task, asked if they are interested, and prompted to provide additional contact information. The research team will then mail apparel/décor to confederates: one-third of the confederates are mailed liberally charged apparel/décor, one-third are mailed conservatively charged apparel/décor, and the remaining third are not provided any apparel/décor ("unbiased"). In the second session, confederates will receive a script and be asked to record a 15-30 second video reciting this script. We will use AI techniques to standardize the videos and ensure they are comparable except for the politically charged apparel/décor. These videos will later be used as a treatment for experimental participants.
(a) Confederates are informed of this and provide written consent for our usage.
(b) Upon completing a video recording, the research team will no longer contact or interact with confederates. Confederates are promptly paid a flat rate of $30.
(c) Alternatively, confederates may be recruited from job sites such as MTurk and asked to record simple introduction recordings following scripts. The research team then modifies these recordings using ML/AI to produce liberal and conservative variations of the confederate introductions; "unbiased" variations will likewise be produced.

Participants are recruited in bulk using advertisements on social media platforms such as Facebook, Reddit, and Twitter. Advertisements ask for participation in a research study and mention monetary compensation.

Interested participants who click on the ad are directed to an informed consent document hosted through Qualtrics. Participants must accept the consent document before continuing.
(a) Participants who decline consent are promptly thanked. Their Qualtrics session ends.

Participants who consent begin by completing a demographic questionnaire. This collects information on demographic variables like gender, race, and income. This should take at most 4 minutes.

Participants are asked to record a short introductory video lasting no more than 20 seconds. Participant recordings will not be shared.

Participants are then assigned to a 'politically charged' condition. We provide participants with randomly selected videos from the stock of pre-recorded confederate videos from Step 1. The participant's 'politically charged' condition determines the percentage of videos coming from the liberal vs. conservative vs. unbiased groups. For example, participants assigned to a 50% liberal charge will see two liberal and two unbiased videos (see the illustrative sketch following the phase descriptions below). A full description of these conditions follows. Participants are asked to view each recording and can only continue after watching all four videos.

(a) Phase 1 – Partisan Polarization
In the conditions listed in Table 1, we test for the effect of being matched in a group with different political leanings. In Phase 1 of the experiment, participants are asked to watch four introduction videos in total and are told that these are their group members. Hence, Phase 1 tests directly for the effects of being matched to one's in-group, to one's out-group, and to a group containing both. This is done to test for the additional effects of stress and conflict that participants feel toward a non-homogeneous group.

(b) Phase 2 – Perceived Environmental Polarization
In Phase 2, we will also test for the environmental effects of polarization. This is done by presenting participants with eight introduction videos and matching participants to smaller groups from this larger set. These additional conditions are described below in Table 2.
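
Tables 1 and 2, which list the exact conditions, are not part of the public registration. As a purely illustrative sketch under hypothetical condition labels and file names, the Python snippet below shows how a condition's political charge could map to the mix of confederate videos a participant watches (e.g., a 50% liberal charge yields two liberal and two unbiased videos out of four in Phase 1, or four of each out of eight in Phase 2).

import random

def build_video_set(charge_share, lean, n_videos, video_pool):
    """Assemble n_videos confederate introductions: a charge_share fraction
    drawn from the `lean` pool ('liberal' or 'conservative') and the rest
    from the 'unbiased' pool. Labels and file names are hypothetical."""
    n_charged = round(charge_share * n_videos)
    videos = random.sample(video_pool[lean], n_charged)
    videos += random.sample(video_pool["unbiased"], n_videos - n_charged)
    random.shuffle(videos)
    return videos

# Hypothetical stock of pre-recorded confederate videos (Step 1).
pool = {
    "liberal": [f"lib_{i}.mp4" for i in range(10)],
    "conservative": [f"con_{i}.mp4" for i in range(10)],
    "unbiased": [f"neutral_{i}.mp4" for i in range(10)],
}

# Phase 1: 50% liberal charge -> 2 liberal + 2 unbiased videos.
print(build_video_set(0.5, "liberal", n_videos=4, video_pool=pool))
# Phase 2: 50% liberal charge over 8 videos -> 4 liberal + 4 unbiased.
print(build_video_set(0.5, "liberal", n_videos=8, video_pool=pool))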

Each participant is placed in a group with four other actual participants. However, participants are told that their fellow group members are drawn from the people whose videos they just watched.
(a) This deception is necessary to change the environment’s political charge without priming individuals about the nature and objective of the experiment. The videos create exogenous variation in treatment. By experimentally varying the number of confederates and the apparel they wear, we create random variation in a participant’s perception of being matched with someone else who may or may not share similar worldviews. Matches with other participants are unlikely to yield similarly observable variation and may create experimenter demand effects.

Groups are then tasked with playing a repeated public goods game. In each round, a participant begins with K dollars that they can either contribute towards a common project or keep for themselves. After each round, group members will see feedback on the total contribution to the project as well as their own earned income. Payouts will be determined directly from an individual's own earned income, not from the payoffs of others. Assuming that the project multiplies contributions by 2, that its proceeds are divided equally among all members, and that members are assigned to groups of size N, payouts in the first stage are equal to Equation (1) (a sketch of this payoff follows the list below). Wrapping up the round, participants are shown a summary of the contributions across players as well as each member's own contribution (identities masked).
(a) The repeated public goods game will last ten rounds. Each round is expected to last 30 seconds to 1 minute.
(b) It is important to note that participants will not observe identifiable information about other participants. They only see the contributions other participants make.
(c) Depending on the rate at which participants join the survey, we may schedule participation in this game at a later date. This is to minimize the wait time for matching participants.
(d) Depending on time and funding, we will run additional trials and include a punishment variation of the public goods game, as described in Gächter et al. (2009). In this condition, participants are given a mechanism through which they can reduce the earnings of others: in the second round of the public goods game, participants can burn one of their own dollars to reduce the earnings of the recipient by a certain amount. Play continues until the end of the ten rounds.
(e) We expect to need close to 200 observations per condition; our expected total sample size is 4800. Assuming average group contributions differ by ten between two treatments, a standard deviation of 10, and 80% power, N = 34 group observations are needed per condition. Since each group observation requires 5 participants, 170 participants are needed per condition. We estimate needing 200 for conditions that are not dichotomous in- vs. out-group. Depending on statistical significance, this will be adjusted accordingly. If we run the additional trials described in part (d) above, this will double the needed observations.
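
Equation (1) is not reproduced in the public registration. Under the assumptions stated above (a multiplier of 2, proceeds divided equally among the N group members, and a per-round endowment of K), the payoff it refers to should be the standard linear public goods payoff, payoff_i = (K - c_i) + (2/N) * sum_j c_j. The Python sketch below implements that payoff; the endowment value of 20 is a placeholder, since K is not stated publicly.

def round_payoff(own_contribution, group_contributions, endowment=20.0, multiplier=2.0):
    """Per-round payoff in a linear public goods game consistent with the
    assumptions above: payoff_i = (K - c_i) + multiplier * sum_j(c_j) / N.
    The endowment of 20 is an illustrative placeholder for K."""
    n = len(group_contributions)            # group size N, including the participant
    kept = endowment - own_contribution     # portion of K kept privately
    project_share = multiplier * sum(group_contributions) / n
    return kept + project_share

# Example: a group of 5 in which everyone contributes half the endowment.
contributions = [10.0] * 5
print(round_payoff(10.0, contributions))    # 10 kept + 2 * 50 / 5 = 30.0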

In addition to the public goods game, we also include a short survey with a component that tracks how the social context, in the form of the liberal-to-conservative ratio, impacts the quality of choices/cognitive functioning. This is similar to Mani et al. (2013). We test the rationality of choices against the generalized axiom of revealed preference (GARP), as in Varian (1982), following, for example, the procedure in Choi et al. (2007). This is facilitated by presenting participants with choice sets and asking them to select which option is preferred; we can then test whether the collected choices are consistent with rationality (a sketch of this check follows the list below). This part is expected to take 15 minutes.
(a) We will ask participants to allocate tokens between two accounts. This allocation has to be a point on a randomly selected budget line. The budget line is drawn from the set of lines that intersect at least one axis at or above the 50-token level and intersect both axes at or below the 100-token level. We then randomly select one of the two accounts. We will try three probability distributions: (i) equal probability for each account, (ii) 1/3 for account one and 2/3 for account two, and (iii) 2/3 for account one and 1/3 for account two. Participants repeat this task for 25 randomly selected budget sets. For payment, we randomly choose one round with equal probability, and the result of that round is paid to the participant.
(b) We will run sessions where the GARP task comes first and the public goods game second, sessions where the public goods game comes first and the GARP task second, and sessions where participants work on only one of the two tasks.
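
The registration does not include code for the GARP test. As a minimal sketch in the spirit of Varian (1982) and Choi et al. (2007), the snippet below builds the revealed-preference relation from each round's budget-line prices and chosen allocation, takes its transitive closure, and flags any cycle that contains a strict step. The price vectors and choices in the example are illustrative, not actual budget sets from the experiment.

import numpy as np

def violates_garp(prices, choices, tol=1e-9):
    """Return True if the observed choices violate the generalized axiom of
    revealed preference (GARP). prices[t] and choices[t] are the price vector
    and the chosen token allocation for round t (two accounts per round)."""
    prices = np.asarray(prices, dtype=float)
    choices = np.asarray(choices, dtype=float)
    T = len(choices)
    expend = prices @ choices.T        # expend[t, s] = cost of bundle s at round-t prices
    own = np.diag(expend)              # cost of each round's own bundle
    # Direct (weak) revealed preference: bundle t is revealed preferred to s.
    R = own[:, None] >= expend - tol
    # Transitive closure of R (Warshall's algorithm).
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])
    # Strict direct revealed preference: bundle s strictly preferred to t.
    strict = own[:, None] > expend + tol
    # GARP violation: t revealed preferred to s while s is strictly preferred to t.
    return bool(np.any(R & strict.T))

# Illustrative two-round violation: round 1 weakly reveals bundle 1 preferred
# to bundle 2, while round 2 strictly reveals the reverse.
prices = [[1.0, 1.0], [4.0, 1.0]]
choices = [[5.0, 5.0], [8.0, 2.0]]
print(violates_garp(prices, choices))  # True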

Upon completion of this, participants are thanked and prompted to provide payment information. Participants will be paid via TangoCard or GiftoGram. Two components will determine payment. First, in the GARP stage, participants will choose participation payments from sets of options. Options will be scaled to average $5. Second, participants will be paid based on their contribution decisions in the public goods game. We scale outcomes such that the average payment for the game is $10 with a maximum of $15.
Randomization Method
Randomization implemented via survey software (Qualtrics).
Randomization Unit
Individual-level randomization.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
4800
Sample size: planned number of observations
4800
Sample size (or number of clusters) by treatment arms
200 observations per condition
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Assuming average group contributions differ by ten between two treatments, a standard deviation of 10, and 80% power, N = 34 group observations are needed per condition. Since each group observation requires 5 participants, 170 participants are needed per condition. We estimate needing 200 for conditions that are not dichotomous in vs. out-group. Depending on statistical significance, this will be adjusted accordingly.
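
The registration does not state the significance level or the statistical test behind the figure of 34 group observations per condition, and the registered numbers may include adjustments not described publicly. As a template only, assuming a two-sided two-sample t-test at alpha = 0.05 with the stated difference of 10, standard deviation of 10, and 80% power, the per-arm requirement could be sketched as follows.

import math
from statsmodels.stats.power import TTestIndPower

# Stated in the registration: difference, standard deviation, power.
difference = 10.0
sd = 10.0
power = 0.80
# Assumed here (not stated in the registration): test and significance level.
alpha = 0.05
effect_size = difference / sd  # Cohen's d

groups_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, power=power, alpha=alpha, alternative="two-sided"
)
participants_per_arm = math.ceil(groups_per_arm) * 5  # 5 participants per group observation

print(f"group observations per arm: {math.ceil(groups_per_arm)}")
print(f"participants per arm:       {participants_per_arm}")
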
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Abilene Christian University
IRB Approval Date
2023-06-01
IRB Approval Number
IRB-2023-82
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials