Preference for Explainable AI Experiment

Last registered on April 24, 2025

Pre-Trial

Trial Information

General Information

Title
Preference for Explainable AI Experiment
RCT ID
AEARCTR-0015581
Initial registration date
March 16, 2025

First published
March 19, 2025, 11:56 AM EDT

Last updated
April 24, 2025, 10:23 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-06-01
End date
2025-12-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Lab experiment to investigate individuals' preferences for receiving information generated by an AI and for an explanation of how the AI made that prediction. The experiment uses a real-stakes decision in which a black-box AI informs the allocation of an actual US$10,000 loan.
External Link(s)

Registration Citation

Citation
Chan, Alex. 2025. "Preference for Explainable AI Experiment." AEA RCT Registry. April 24. https://doi.org/10.1257/rct.15581-1.9
Experimental Details

Interventions

Intervention(s)
Individuals make a decision that determines how a private lender allocates a $10,000 loan. Participants are randomized into either a neutral treatment, where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment, where they have direct stakes tied to loan repayment.

The actual date of the experiment will be pre-registered here before launch.
Intervention Start Date
2025-06-01
Intervention End Date
2025-12-06

Primary Outcomes

Primary Outcomes (end points)
(1) Binary decision of whether the participant wants to see an explanation of how the AI made its prediction of the borrowers' default risk before making a loan allocation decision
** The full set of outcome variables that we intend to analyze is listed in the Stata do-file attached to this registration before the experiment itself. Besides those tables, we plan to present a word cloud formed by responses to the question "You chose to see an explanation of how the AI Algorithm made the default risk predictions BEFORE making the loan decision, what were you hoping to learn from the explanation?", among participants who were presented with, and took up, the option to review an explanation in which the chance to see the role race and gender play in the AI's default risk calculation is made salient.

(2) Binary decision of whether the participant wants to see the borrowers' predicted default risk before making a loan allocation decision
Primary Outcomes (explanation)
Both outcomes capture information-avoidance decisions.

Secondary Outcomes

Secondary Outcomes (end points)
(3) Whether the participant wants to allocate 90% of the loan to the low-risk borrower and 10% to the high-risk borrower, or 50% to each
(4) How much the participant chooses to reduce the second participant's bonus, at a personal cost of $0.01 per $1.00 of reduction, up to the full $10, across three scenarios (informed, not informed, chose not to be informed about the explanation); measured within subject (see the illustrative cost sketch below)
Secondary Outcomes (explanation)
These outcomes get at whether the information changes the actual loan allocation decision (i.e., whether the information influences behavior).
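
As a worked illustration of the cost structure in outcome (4): only the $0.01-per-$1.00 rate and the $10 cap are taken from the outcome description above; the function name and everything else in this minimal sketch are illustrative, not the study's actual implementation.

```python
# Illustrative cost arithmetic for outcome (4): reducing the other participant's
# bonus costs the decision-maker $0.01 per $1.00 of reduction, up to the full $10.
def personal_cost(reduction_dollars: float, rate: float = 0.01, cap: float = 10.0) -> float:
    reduction = min(max(reduction_dollars, 0.0), cap)  # reductions are capped at the full $10 bonus
    return round(reduction * rate, 2)

# Reducing the full $10.00 bonus costs the decision-maker $0.10.
assert personal_cost(10.0) == 0.10
# Reducing the bonus by $4.00 costs $0.04.
assert personal_cost(4.0) == 0.04
```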

Experimental Design

Experimental Design
I experimentally test how individuals' preferences for information about an AI are influenced by certain factors.

(Full design and randomization flow are attached to this pre-registration as PDF/DOC files before the experiment launch.)
Experimental Design Details
Not available
Randomization Method
Randomization done using Qualtrics' in-built function
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
See below
Sample size: planned number of observations
2,500
Sample size (or number of clusters) by treatment arms
The main treatment (non-lender-aligned vs. lender-aligned) is split evenly: 1,250 each.

4/5 of participants will be in the arms where they choose whether or not to see the explanation: the same shares apply to a variant of the main arm in which race-gender descriptors for borrowers are replaced by odd/even birth day/month (2,000 in total across these arms).

The remaining 1/5 (500 in total), again split evenly between non-lender-aligned and lender-aligned, will be in an arm similar to the main arm except that participants choose whether or not to see the AI-generated predicted default risk; these participants will be randomized to see an explanation.

2,500 in total (see the arithmetic sketch below).
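
A minimal sketch of the planned allocation arithmetic (illustrative only; variable names are mine, and the actual assignment will be carried out by Qualtrics' randomizer):

```python
# Illustrative arithmetic for the planned sample allocation (not the assignment code).
total_n = 2500

# Lender-alignment treatment is split evenly overall: 1,250 each.
lender_aligned = total_n // 2
non_lender_aligned = total_n - lender_aligned

# 4/5 in the arms with a choice to see the explanation (main arm + odd/even variant).
explanation_choice_arms = total_n * 4 // 5   # 2,000

# Remaining 1/5 in the arm with a choice to see the predicted default risk instead.
risk_choice_arm = total_n // 5               # 500

assert explanation_choice_arms + risk_choice_arm == total_n
assert lender_aligned == non_lender_aligned == 1250
print(explanation_choice_arms, risk_choice_arm, lender_aligned)
```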
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University Institutional Review Board
IRB Approval Date
2025-02-25
IRB Approval Number
IRB-25-0040
Analysis Plan

There is information in this trial unavailable to the public.