Countering gender stereotypes on social media: Experimental evidence

Last registered on August 08, 2024

Pre-Trial

Trial Information

General Information

Title
Countering gender stereotypes on social media: Experimental evidence
RCT ID
AEARCTR-0014035
Initial registration date
July 31, 2024

First published
August 06, 2024, 1:25 PM EDT

Last updated
August 08, 2024, 6:24 AM EDT

Locations

Region

Primary Investigator

Affiliation
ifo Institute for Economic Research

Other Primary Investigator(s)

PI Affiliation
NRC Canada
PI Affiliation
NRC Canada
PI Affiliation
NRC Canada

Additional Trial Information

Status
In development
Start date
2024-08-08
End date
2024-08-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A growing literature develops novel NLP techniques to identify gender stereotypes on social media, but potential countermeasures have received little attention. To fill this gap, we conduct an online experiment (between-subject design) that examines the efficacy of AI-generated counter-stereotypes, i.e., statements that challenge gender stereotypes. Specifically, we study two promising strategies (identified through a pre-study): (i) "Broadening Universals" (= stating that the stereotypical trait is not unique to the target group), and (ii) "Counter Facts" (= providing facts that contradict the stereotype).

We randomize participants into four groups of equal size:
(i) Control Group: Receives ten brief conversations (one statement plus one response each), written in a social-media language style, on gender-neutral topics such as travel and nature.
(ii) Gender Stereotypes: Receives ten brief conversations, of which five are on gender-neutral topics as above and five pick up a common gender stereotype followed by a neutral response.
(iii) Broadening Universals: Receives ten conversations, of which five are on gender-neutral topics as above and five pick up a common gender stereotype followed by a counter-stereotype stating that the trait is not unique to the target group.
(iv) Counter Facts: Receives ten conversations, of which five are on gender-neutral topics as above and five pick up a common gender stereotype followed by a counter-stereotype providing facts that contradict the stereotype.

After reading the conversations, participants complete an Implicit Association Test (IAT; Greenwald et al., 1998), the most widely used instrument for eliciting implicit biases. We will then compare implicit biases across treatment groups.

External Link(s)

Registration Citation

Citation
Fraser, Kathleen et al. 2024. "Countering gender stereotypes on social media: Experimental evidence." AEA RCT Registry. August 08. https://doi.org/10.1257/rct.14035-1.1
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We consider four groups and three types of treatments:
(i) Control Group: Receives ten brief conversations (one statement plus one response each), written in a social-media language style, on gender-neutral topics such as travel and nature.
(ii) Gender Stereotypes: Receives ten brief conversations, of which five are on gender-neutral topics as above and five pick up a common gender stereotype about women followed by a neutral response.
(iii) Broadening Universals: Receives ten conversations, of which five are on gender-neutral topics as above and five pick up a common gender stereotype about women followed by a counter-stereotype stating that the trait is not unique to women.
(iv) Counter Facts: Receives ten conversations, of which five are on gender-neutral topics as above and five pick up a common gender stereotype about women followed by a counter-stereotype providing facts that contradict the stereotype.
Intervention Start Date
2024-08-08
Intervention End Date
2024-08-31

Primary Outcomes

Primary Outcomes (end points)
Implicit Gender Bias (d-Score)
Primary Outcomes (explanation)
Implicit Gender Bias (d-Score): Based on participants' reaction times in the Implicit Association Test (IAT), computed through a Python script following the improved scoring algorithm proposed by Greenwald, Nosek, and Banaji (2003), "Understanding and Using the Implicit Association Test: I. An Improved Scoring Algorithm," Journal of Personality and Social Psychology.
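
A minimal sketch of this computation is given below, assuming a per-participant trial table with columns block (the combined-task blocks 3, 4, 6, 7), latency_ms, and correct (1/0). The column names and the error-penalty variant used here (block mean of correct responses plus 600 ms) are illustrative assumptions, not the registered implementation.

import numpy as np
import pandas as pd


def iat_d_score(trials: pd.DataFrame) -> float:
    # Step 1: drop trials with latencies above 10,000 ms.
    trials = trials[trials["latency_ms"] <= 10_000].copy()

    # Step 2: exclude participants with more than 10% of latencies below 300 ms.
    if (trials["latency_ms"] < 300).mean() > 0.10:
        return np.nan

    # Step 3: replace error latencies with the block mean of correct responses + 600 ms
    # (an assumed penalty variant of the improved algorithm).
    correct_means = trials[trials["correct"] == 1].groupby("block")["latency_ms"].mean()
    trials["latency_adj"] = np.where(
        trials["correct"] == 1,
        trials["latency_ms"],
        trials["block"].map(correct_means) + 600,
    )

    # Step 4: pooled SDs for the practice pair (blocks 3 & 6) and test pair (4 & 7).
    sd_36 = trials.loc[trials["block"].isin([3, 6]), "latency_adj"].std(ddof=1)
    sd_47 = trials.loc[trials["block"].isin([4, 7]), "latency_adj"].std(ddof=1)

    # Step 5: block means and the two difference scores, each scaled by its pooled SD.
    m = trials.groupby("block")["latency_adj"].mean()
    d_practice = (m[6] - m[3]) / sd_36
    d_test = (m[7] - m[4]) / sd_47

    # Step 6: D is the average of the two scaled differences.
    return (d_practice + d_test) / 2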

Secondary Outcomes

Secondary Outcomes (end points)
(i) Willingness to Accept (in USD) to read ten further statements
(ii) Explicit gender bias
Secondary Outcomes (explanation)
(i) We elicit participants' willingness to accept (WTA) to read ten further statements, written in the same style as their treatment statements, through an incentive-compatible Becker-DeGroot-Marschak (BDM) mechanism. Participants enter the minimum value (in USD) they would be willing to accept in an input box. We will winsorize this value at the 99th percentile to reduce the impact of outliers; beyond winsorization, the value is not further transformed (see the sketch below).
(ii) Explicit gender bias is elicited through five statements that express gender stereotypes about women. Participants indicate their agreement on a visual analog scale ranging from 0 to 100. For each participant, we compute the mean of these five responses.
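
A minimal sketch of these two transformations, assuming a one-row-per-participant table; the column names wta_usd and bias_1 through bias_5 are ours for illustration and not part of the registered materials.

import pandas as pd


def prepare_secondary_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()

    # (i) Winsorize willingness to accept at the 99th percentile: values above
    # the cutoff are capped at the cutoff rather than dropped.
    cutoff = df["wta_usd"].quantile(0.99)
    df["wta_usd_w"] = df["wta_usd"].clip(upper=cutoff)

    # (ii) Explicit gender bias: mean agreement across the five stereotype items
    # (each on a 0-100 visual analog scale).
    item_cols = [f"bias_{i}" for i in range(1, 6)]
    df["explicit_bias"] = df[item_cols].mean(axis=1)

    return df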

Experimental Design

Experimental Design
see attachment
Experimental Design Details
Randomization Method
Randomization through Python script (computer)
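
An illustrative sketch of individual-level randomization into the four equally sized groups; the registered script may differ, e.g., in seeding or balancing.

import random

GROUPS = ["control", "gender_stereotypes", "broadening_universals", "counter_facts"]


def assign_groups(participant_ids: list[str], seed: int = 2024) -> dict[str, str]:
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal shuffled participants round-robin into the four arms for equal group sizes.
    return {pid: GROUPS[i % len(GROUPS)] for i, pid in enumerate(ids)}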
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1400 individual participants
Sample size: planned number of observations
1400 individual participants
Sample size (or number of clusters) by treatment arms
350 participants per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on pilot studies, we require 250 participants per treatment group to detect sizeable effects. Because we will have to exclude slow participants from the study (see attachment for details), we aim for 350 participants per group.
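
For illustration only, the following sketch computes the minimum detectable standardized effect for a pairwise comparison of two arms with 250 participants each; the significance level (two-sided 0.05) and power (0.80) are assumed values, not parameters stated in the registration.

from statsmodels.stats.power import TTestIndPower

mde = TTestIndPower().solve_power(
    effect_size=None,  # solve for the standardized (Cohen's d) effect size
    nobs1=250,         # participants in the first arm
    ratio=1.0,         # equally sized arms
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Minimum detectable effect (Cohen's d): {mde:.2f}")  # roughly 0.25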
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee of the Department of Economics at the University of Munich
IRB Approval Date
2024-01-31
IRB Approval Number
2023-23
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials