An Experimental Investigation of AI Fairness in Education

Last registered on October 28, 2024

Pre-Trial

Trial Information

General Information

Title
An Experimental Investigation of AI Fairness in Education
RCT ID
AEARCTR-0014596
Initial registration date
October 22, 2024

First published
October 28, 2024, 1:10 PM EDT

Locations

Region

Primary Investigator

Affiliation
Wuhan University

Other Primary Investigator(s)

PI Affiliation
De Montfort University
PI Affiliation
Wuhan University
PI Affiliation
Wuhan University

Additional Trial Information

Status
Ongoing
Start date
2024-09-10
End date
2025-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Our study investigates how the integration of AI tools in educational contexts influences perceptions of fairness. Driven by ongoing discussions around the ethical implications of AI use by students, this research seeks to understand fairness perceptions across various settings, including competitive and non-competitive scenarios.
Using a structured experimental design, we present participants with situations involving two hypothetical students, one who uses AI tools and one who does not. The study examines how different conditions of AI usage affect third-party perceptions of fairness and their decisions regarding score redistribution.
We explore several hypotheses regarding fairness perceptions, including the notion that AI use in competitive situations may be viewed as less equitable and that specific types of AI encouragement influence the desire for score adjustments. The findings aim to contribute to the understanding of fairness in the context of AI, with implications for educational practices.
External Link(s)

Registration Citation

Citation
Cartwright, Edward et al. 2024. "An Experimental Investigation of AI Fairness in Education." AEA RCT Registry. October 28. https://doi.org/10.1257/rct.14596-1.0
Experimental Details

Interventions

Intervention(s)
We conduct a 2×4 survey experiment using a mixed design that incorporates both between-subject and within-subject factors to examine third-party perceptions of fairness regarding the use of AI in educational contexts. We investigate differences in subjects' fairness perceptions across two contexts: a competitive environment and a non-competitive environment.
Intervention (Hidden)
Participants complete a key redistribution task: they decide whether to redistribute the writing scores between two students, Student A and Student B.
We vary the writing-task environment between subjects: a competitive environment in which the students write for a scholarship application versus a non-competitive environment in which they complete a practice writing task (e.g., a diary entry or writing practice).
The within-subject factor varies the AI-usage regulation; each participant faces four distinct scenarios presented in randomized order: Encouraged use, Prohibited use, Forced unilateral use, and Spontaneous use.
Whenever AI use occurs, Student A uses AI to assist with her writing task and receives higher marks than Student B.
Given the context, participants decide whether to redistribute some of Student A's score to Student B.
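A minimal sketch of how this 2×4 assignment could be operationalized, in Python. The variable names and the use of Python's random module are illustrative assumptions; the registration does not specify an implementation. Each participant is assigned one between-subject context and then sees the four AI-regulation scenarios in a randomized order.

```python
import random

# Between-subject factor: writing-task context (assigned once per participant).
CONTEXTS = ["competitive_scholarship", "non_competitive_practice"]

# Within-subject factor: AI-usage regulation (every participant sees all four).
SCENARIOS = ["encouraged_use", "prohibited_use", "forced_unilateral_use", "spontaneous_use"]

def assign_participant(rng: random.Random) -> dict:
    """Assign one context and a randomized scenario order for a single participant."""
    return {
        "context": rng.choice(CONTEXTS),                            # between-subject assignment
        "scenario_order": rng.sample(SCENARIOS, k=len(SCENARIOS)),  # within-subject order
    }

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed only to make the illustration reproducible
    print(assign_participant(rng))
```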

Intervention Start Date
2024-10-24
Intervention End Date
2024-11-30

Primary Outcomes

Primary Outcomes (end points)
Redistribution decision
Primary Outcomes (explanation)
Participants who perceive the use of AI as less fair are expected to redistribute a larger amount from Student A to Student B. We aim to examine perceptions of fairness under different writing-task contexts.

Secondary Outcomes

Secondary Outcomes (end points)
Willingness to pay for AI use; Willingness to punish AI users
Secondary Outcomes (explanation)
After the redistribution choices, participants are asked to join a competition with another randomly paired participant, and we elicit their willingness to pay (WTP) for AI assistance. We also elicit their willingness to punish their co-participant conditional on the co-participant's use of AI.

Experimental Design

Experimental Design
We will conduct a 2×4 survey experiment using a mixed design that incorporates both between-subject and within-subject factors to examine third-party perceptions of fairness in the use of AI within educational contexts.

Experimental Design Details
We describe a scenario in which two students, A and B, complete a writing task. Student A always uses AI while Student B does not. Student A receives 600 points for superior performance, while Student B receives zero points for lower performance. Participants are asked to decide whether to reallocate some of Student A's points to Student B.
Our design of the redistribution decision closely follows two references:
Almås, Ingvild, et al. (2019). "Cutthroat Capitalism versus Cuddly Socialism: Are Americans More Meritocratic and Efficiency-Seeking than Scandinavians?" Journal of Political Economy 128(5).
Dong, Lu, et al. (2022). "They Never Had a Chance": Unequal Opportunities and Fair Redistributions. Working Paper No. 2022-11.
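As an illustration of the allocation described above (a sketch only; the transfer variable and helper function are hypothetical and not part of the registered design), the redistribution decision can be summarized as choosing how many of Student A's 600 points to transfer to Student B:

```python
def redistribute(transfer: int, total: int = 600) -> tuple[int, int]:
    """Return (Student A's score, Student B's score) after transferring `transfer` points.

    Student A starts with all `total` points and Student B with zero,
    mirroring the 600/0 initial allocation described above.
    """
    if not 0 <= transfer <= total:
        raise ValueError("transfer must lie between 0 and the total number of points")
    return total - transfer, transfer

# Example: a participant who views the AI-assisted score as unearned might choose
# an equal split, redistribute(300) -> (300, 300); a strictly meritocratic
# participant might choose redistribute(0) -> (600, 0).
```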
Randomization Method
Randomization at recruitment is done with the assistance of the Weikeyan system, a WeChat-based recruitment platform for researchers running economic or marketing experiments. The system sends advertisements to the subject pool, ensuring randomization across treatments.
Randomization Unit
Our unit of observation is at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The experiment will be clustered at the session level.
We aim to run sessions of 10–20 participants each, depending on participants' registration.
For each treatment arm, we aim to run at least ten sessions to obtain a sufficient number of observations.
Sample size: planned number of observations
200–400 student participants
Sample size (or number of clusters) by treatment arms
100–200 per treatment arm (10 sessions × 10–20 participants per session)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Center of Behavioral and Experimental Research at Wuhan University
IRB Approval Date
2024-10-16
IRB Approval Number
EM240036

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials