An Experimental Investigation of AI Fairness in Education

Last registered on October 28, 2024

Pre-Trial

Trial Information

General Information

Title
An Experimental Investigation of AI Fairness in Education
RCT ID
AEARCTR-0014596
Initial registration date
October 22, 2024

First published
October 28, 2024, 1:10 PM EDT

Primary Investigator

Affiliation
Wuhan University

Other Primary Investigator(s)

PI Affiliation
De Montfort University
PI Affiliation
Wuhan University
PI Affiliation
Wuhan University

Additional Trial Information

Status
Ongoing
Start date
2024-09-10
End date
2025-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Our study investigates how the integration of AI tools in educational contexts influences perceptions of fairness. Motivated by ongoing discussions of the ethical implications of AI use by students, this research seeks to understand fairness perceptions across various settings, including competitive and non-competitive scenarios.
Using a structured experimental design, we present participants with scenarios involving two hypothetical students, one engaging with AI tools and the other not. The study examines how different conditions of AI usage affect third-party perceptions of fairness and decisions regarding score redistribution.
We explore several hypotheses regarding fairness perceptions, including the notion that AI use in competitive situations is viewed as less equitable and that specific types of AI encouragement influence the desire for score adjustments. The findings are intended to advance the understanding of fairness in the context of AI, with implications for educational practice.
External Link(s)

Registration Citation

Citation
Cartwright, Edward et al. 2024. "An Experimental Investigation of AI Fairness in Education." AEA RCT Registry. October 28. https://doi.org/10.1257/rct.14596-1.0
Experimental Details

Interventions

Intervention(s)
We will conduct a 2×4 survey experiment using a mixed design that incorporates both between-subject and within-subject factors to examine third-party perceptions of fairness in the use of AI within educational contexts. We will investigate differences in subjects' fairness perceptions across two contexts: a competitive environment and a non-competitive environment.
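As the registered design details are not public, the Python sketch below only illustrates the 2×4 condition grid: the two contexts are stated above, while the four AI-usage condition labels are placeholders, not the registered conditions.

    # Illustrative 2 x 4 condition grid. The context factor is stated in the
    # registration; the four AI-usage labels are placeholders, since the
    # registered design details are not public.
    from itertools import product

    contexts = ["competitive", "non-competitive"]
    ai_conditions = ["condition_1", "condition_2", "condition_3", "condition_4"]

    for context, ai in product(contexts, ai_conditions):
        print(f"context={context}, ai_condition={ai}")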
Intervention Start Date
2024-10-24
Intervention End Date
2024-11-30

Primary Outcomes

Primary Outcomes (end points)
Redistribution decision
Primary Outcomes (explanation)
We hypothesize that participants who consider AI use less fair will redistribute a larger amount between Student A and Student B. We aim to examine perceptions of fairness under different writing-task contexts.

Secondary Outcomes

Secondary Outcomes (end points)
Willingness to pay for AI use; Willingness to punish AI users
Secondary Outcomes (explanation)
After the redistribution choices, participants are asked to enter a competition with another randomly paired participant, and we elicit their willingness to pay (WTP) for AI assistance. Additionally, we elicit their willingness to punish their co-participant conditional on the co-participant's use of AI.
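The public registration does not state the elicitation mechanism. One common incentive-compatible approach is the Becker-DeGroot-Marschak (BDM) mechanism; the sketch below assumes BDM purely for illustration.

    # Hypothetical BDM elicitation of WTP for AI assistance; the mechanism is
    # an assumption for illustration, not part of the public registration.
    import random

    def bdm_outcome(stated_wtp, price_range=(0.0, 10.0), seed=None):
        # Draw a random price; the participant buys AI assistance if and only
        # if the stated WTP is at least the drawn price, so truthfully stating
        # one's WTP is the optimal strategy.
        rng = random.Random(seed)
        price = rng.uniform(*price_range)
        return price, stated_wtp >= price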

Experimental Design

Experimental Design
We will conduct a 2×4 survey experiment using a mixed design that incorporates both between-subject and within-subject factors to examine third-party perceptions of fairness in the use of AI within educational contexts.

Experimental Design Details
Not available
Randomization Method
Randomization at recruitment is carried out with the assistance of the Weikeyan system, a WeChat-based platform that facilitates researchers' economic and marketing experiments. The system sends advertisements to the subject pool, ensuring randomization across treatments.
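For illustration only, a minimal sketch of individual-level random assignment, assuming simple unstratified randomization; the actual procedure is implemented by the Weikeyan system and may differ.

    # Minimal sketch of individual-level random assignment to treatment arms.
    import random

    def assign_treatments(participant_ids, arms, seed=None):
        # Each participant is assigned to one arm independently and uniformly
        # at random (simple randomization).
        rng = random.Random(seed)
        return {pid: rng.choice(arms) for pid in participant_ids}

    assignments = assign_treatments(range(1, 21), ["competitive", "non-competitive"], seed=1)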
Randomization Unit
Our unit of observation is at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The experiment will be clustered at the session level.
We plan to run 10-20 participants per session, depending on participants' registration.
For each treatment arm, we aim for at least ten sessions to obtain a sufficient number of observations.
Sample size: planned number of observations
200-400 student participants
Sample size (or number of clusters) by treatment arms
100-200 participants per treatment arm (10 sessions × 10-20 participants per session)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
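This field is left blank in the registration. Purely for illustration, the sketch below computes a minimum detectable effect for a two-arm comparison under the planned sample sizes; the session size, intracluster correlation, significance level, and power are assumed values, not registered parameters.

    # Back-of-the-envelope MDE for a two-arm comparison with session-level
    # clustering; all parameter values below are assumptions for illustration.
    from math import sqrt
    from statistics import NormalDist

    alpha, power = 0.05, 0.80
    sessions_per_arm, participants_per_session = 10, 15   # midpoint of 10-20
    icc = 0.05                                            # assumed intracluster correlation
    n_per_arm = sessions_per_arm * participants_per_session

    deff = 1 + (participants_per_session - 1) * icc       # design effect
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    mde = z * sqrt(2 * deff / n_per_arm)                  # in SD units

    print(f"design effect = {deff:.2f}, MDE = {mde:.2f} SD")

Under these assumed values the calculation yields an MDE of roughly 0.4 standard deviations per pairwise comparison.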
IRB

Institutional Review Boards (IRBs)

IRB Name
Center of Behavioral and Experimental Research at Wuhan University
IRB Approval Date
2024-10-16
IRB Approval Number
EM240036