Algorithmic Fairness Rhetoric

Last registered on January 10, 2020

Pre-Trial

Trial Information

General Information

Title
Algorithmic Fairness Rhetoric
RCT ID
AEARCTR-0005218
Initial registration date
January 08, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 10, 2020, 11:17 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Harvard Business School

Other Primary Investigator(s)

PI Affiliation
Columbia Business School
PI Affiliation
Columbia Business School

Additional Trial Information

Status
In development
Start date
2020-01-07
End date
2020-01-18
Secondary IDs
Abstract
Algorithms are increasingly used in companies' everyday decision-making. Despite the opportunities they offer, there is growing concern about "algorithmic fairness". We want to test how public accounts of algorithmic fairness shape people's decisions about whether or not to adopt algorithms in a particular context.
External Link(s)

Registration Citation

Citation
Cowgill, Bo, Fabrizio Dell'Acqua and Sandra Matz. 2020. "Algorithmic Fairness Rhetoric." AEA RCT Registry. January 10. https://doi.org/10.1257/rct.5218-1.0
Experimental Details

Interventions

Intervention(s)
Participants will be assigned to one of nine conditions with different op-eds about algorithms and different information about the status quo before the introduction of algorithms.
Intervention Start Date
2020-01-07
Intervention End Date
2020-01-18

Primary Outcomes

Primary Outcomes (end points)
A number of decisions by subjects as to whether they want to adopt the algorithm or stick with the status quo.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Subjects' opinions about the prospects of the algorithm and its potential problems.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will recruit a set of decision-makers and ask them to make a number of decisions as to whether they want to adopt an algorithm or stick with the status quo.
Experimental Design Details
We plan to conduct a mixed between- and within-subject experimental design in which participants read vignettes describing the potential use of an algorithm in the contexts of hiring and lending. One vignette will describe the algorithm's recommendations about whom to hire into technical roles, in particular the percentage of men and women recommended. The other vignette will describe the algorithm's recommended interest rates for minority borrowers as compared to white borrowers.
Participants will be randomly assigned to one of nine conditions. First, they will be randomly assigned to either 1) read an op-ed with a fatalistic attitude towards AI; 2) read an op-ed inviting them to think counterfactually; or 3) read no op-ed.
A second randomization concerns the status quo. Participants will either 1) read information about the status quo in hiring women and lending to minorities; 2) have the option (with a click) to read that status-quo information; or 3) have no option to learn about the status quo.
We will also randomize which vignette (hiring or lending) appears first.
For each vignette, participants make a number of decisions as to whether they want to adopt the algorithm or stick with the status quo. Finally, they answer a number of questions about themselves.
We plan to recruit an online panel via Prolific Academic.
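The actual randomization will be implemented in Qualtrics; the following is only a minimal Python sketch of the 3 x 3 between-subject assignment plus the within-subject vignette order, with illustrative, hypothetical condition labels (the names OPED_ARMS, STATUS_QUO_ARMS, VIGNETTE_ORDERS, and assign are not taken from the registration).

```python
import random

# Illustrative labels; the actual condition names in Qualtrics may differ.
OPED_ARMS = ["fatalistic_oped", "counterfactual_oped", "no_oped"]
STATUS_QUO_ARMS = ["status_quo_shown", "status_quo_optional", "status_quo_hidden"]
VIGNETTE_ORDERS = [("hiring", "lending"), ("lending", "hiring")]

def assign(participant_id: str) -> dict:
    """Assign one participant to a between-subject condition
    (op-ed arm x status-quo arm) and a within-subject vignette order."""
    return {
        "participant_id": participant_id,
        "oped": random.choice(OPED_ARMS),
        "status_quo": random.choice(STATUS_QUO_ARMS),
        "vignette_order": random.choice(VIGNETTE_ORDERS),
    }

if __name__ == "__main__":
    print(assign("P001"))
```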
Randomization Method
Randomization done through Qualtrics
Randomization Unit
Individuals
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1000 subjects.
Sample size: planned number of observations
1000 subjects.
Sample size (or number of clusters) by treatment arms
Subjects will be divided equally among the treatment arms.
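Assuming equal allocation across the nine between-subject conditions described above, the 1,000 planned subjects imply roughly 111 per condition; a quick check:

```python
planned_n = 1000
conditions = 9  # 3 op-ed arms x 3 status-quo arms
per_condition, remainder = divmod(planned_n, conditions)
print(per_condition, remainder)  # 111 per condition, 1 subject left over
```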
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Columbia University
IRB Approval Date
2019-12-27
IRB Approval Number
AAAS8221

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials