Normative preferences over AI system design in criminal courts - A vignette study

Last registered on October 21, 2021

Pre-Trial

Trial Information

General Information

Title
Normative preferences over AI system design in criminal courts - A vignette study
RCT ID
AEARCTR-0008200
Initial registration date
October 19, 2021

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 21, 2021, 9:10 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
University of Hamburg

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2021-10-20
End date
2021-11-24
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates societal preferences regarding the ethical integration of AI into criminal courts. The growing use of artificial intelligence (AI) systems across social domains has sparked a heated debate about their ethical ramifications, in particular the fairness, accountability, and transparency of these systems. Using a vignette experiment, we explore the normative preferences of the general public over these principles. By varying the levels of fairness, accountability, and transparency in the design of an AI system for criminal court sentencing, we examine both individual preferences and the impact of chat discussions on attitudes towards AI in courts.
External Link(s)

Registration Citation

Citation
Woemmel, Arna. 2021. "Normative preferences over AI system design in criminal courts - A vignette study." AEA RCT Registry. October 21. https://doi.org/10.1257/rct.8200-1.0
Experimental Details

Interventions

Intervention(s)
1. Vignette study varying the description of AI system design in criminal courts with respect to fairness, accountability, and transparency (each dimension varied at a high vs. low level)

2. Chat discussion in groups of three
Intervention Start Date
2021-10-20
Intervention End Date
2021-11-24

Primary Outcomes

Primary Outcomes (end points)
Attitude towards AI in criminal courts (5-point Likert scale)
Primary Outcomes (explanation)
We measure individual preferences over AI implementation in criminal courts on a 5-point Likert scale.
Additionally, we measure the impact of deliberation on the individual preferences by re-eliciting preferences on a 5-point Likert scale after a chat discussion.
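As an illustration of this outcome measure, here is a minimal sketch of the per-participant deliberation shift (the difference between post- and pre-discussion Likert ratings). The function and variable names are ours and not part of the registered analysis plan.

```python
# Illustrative sketch (names are assumptions, not the registered analysis):
# compare 5-point Likert attitude ratings elicited before and after the
# chat discussion to quantify the deliberation effect per participant.
def deliberation_shifts(pre, post):
    """pre, post: lists of 1-5 Likert ratings in the same participant order.
    Returns per-participant shifts (post - pre) and their mean."""
    shifts = [b - a for a, b in zip(pre, post)]
    mean_shift = sum(shifts) / len(shifts)
    return shifts, mean_shift
```

For example, ratings of [3, 4, 2] before and [4, 4, 3] after the chat would yield shifts of [1, 0, 1] and a mean shift of 2/3.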

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
- Vignette study with a 2^3 factorial design (i.e., 8 scenarios)
  - 3 dimensions: fairness, accountability, and transparency of the AI system
  - 2 levels: high vs. low

- Chat discussion in groups of three
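The 2^3 factorial design above can be sketched by enumerating the eight vignette cells; this is an illustration only, and the dimension and level labels are taken from the design description, not from any study code.

```python
# Sketch (not the registered implementation): enumerate the eight
# vignette cells of the 2^3 factorial design (3 dimensions x 2 levels).
from itertools import product

DIMENSIONS = ["fairness", "accountability", "transparency"]
LEVELS = ["high", "low"]

# Each scenario is a mapping of dimension -> level, e.g.
# {"fairness": "high", "accountability": "low", "transparency": "high"}
scenarios = [
    dict(zip(DIMENSIONS, combo))
    for combo in product(LEVELS, repeat=len(DIMENSIONS))
]

assert len(scenarios) == 8  # 2^3 = 8 scenarios
```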


Experimental Design Details
The study begins with general questions on self-assessed knowledge of and attitudes towards AI. The specific application of AI in criminal courts is then described in detail, including the topic of bias and discrimination by AI systems as well as by human judges. After this introduction, participants are randomly assigned to a single vignette scenario describing the specific design of an AI system in courts (between-subjects design). We vary the scenario description with respect to (i) high vs. low accuracy preserved within anti-discrimination constraints (fairness); (ii) high vs. low judgment power of the court relative to the AI's advice (accountability); and (iii) high vs. low public oversight (transparency). Participants state their preference for AI implementation in courts on a Likert scale and in free-form text boxes. In the next section, they discuss their preferences in a 7-minute chat with two other randomly assigned participants who were presented the same vignette scenario. Afterwards, we re-elicit their normative preferences to investigate whether group deliberation changes their attitudes towards AI in courts. As a final step, each participant takes an Implicit Association Test (Arab-Muslim IAT) to measure subconscious stereotypes and their potential influence on attitudes towards AI in courts.
The study will take approx. 40 minutes in total.
Randomization Method
Computer randomization
Randomization Unit
Individual; randomization into chat discussions is based on the individually assigned vignette scenario (only participants who were presented the same scenario can be matched).
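The within-scenario matching rule can be sketched as follows; this is a hypothetical illustration (participant IDs, scenario labels, and the function name are our own), not the study's matching software.

```python
# Hypothetical sketch of the chat-matching rule: only participants
# assigned the same vignette scenario can form a chat group of three.
from collections import defaultdict

def form_chat_groups(assignments):
    """assignments: dict mapping participant_id -> scenario label.
    Returns a list of 3-person groups, each drawn from one scenario."""
    by_scenario = defaultdict(list)
    for pid, scenario in assignments.items():
        by_scenario[scenario].append(pid)
    groups = []
    for pids in by_scenario.values():
        # Form as many complete groups of three as the pool allows;
        # leftovers would wait for further arrivals in a live session.
        for i in range(0, len(pids) - len(pids) % 3, 3):
            groups.append(pids[i:i + 3])
    return groups
```

For instance, with six participants assigned scenario A and four assigned scenario B, this rule yields two A-groups and one B-group; the remaining B-participant stays unmatched.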
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
2,000 individual participants (representative sample of the UK general population); 600 chat groups
Sample size (or number of clusters) by treatment arms
250 individuals per vignette scenario
approx. 80 chat groups per vignette scenario
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Dean's office - Laboratory at the faculty of Economics and Social Science (University of Hamburg)
IRB Approval Date
2021-04-20
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials