Preference for Explainable AI Experiment

Last registered on September 02, 2025

Pre-Trial

Trial Information

General Information

Title
Preference for Explainable AI Experiment
RCT ID
AEARCTR-0015581
Initial registration date
March 16, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 19, 2025, 11:56 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
September 02, 2025, 11:33 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-09-03
End date
2025-12-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Lab experiment investigating individuals' preferences for receiving information generated by AI and explanations of how the AI made its predictions. The experiment involves a real-stakes decision in which a black-box AI informs the allocation of actual US$10,000 loans. I also investigate how and why people value explanations, and how behavioral factors might affect such valuations.
External Link(s)

Registration Citation

Citation
Chan, Alex. 2025. "Preference for Explainable AI Experiment." AEA RCT Registry. September 02. https://doi.org/10.1257/rct.15581-1.11
Experimental Details

Interventions

Intervention(s)
Individuals make a decision that determines how a private lender allocates two $10,000 loans (approve both or approve one). Participants are randomized into a neutral treatment, where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment, where they have direct stakes tied to loan repayment. A secondary experiment will elicit willingness-to-pay for explanations.

The experiments will take place in early September 2025 on Prolific. They will be run separately as two experiments.
Intervention (Hidden)
Primary:
- Variation 1: a neutral treatment where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment where subjects have direct stakes tied to loan repayment
- Variation 2: the race-gender of the prospective borrowers is revealed to differ, or their phone numbers are revealed to differ (odd vs. even)
- Variation 3: no decision about whether to see the predicted default risk, followed by a choice of whether to see an explanation (not exactly; see the next variation), or a decision about whether to see the predicted default risk (this arm will have fewer subjects), followed by a choice to see an explanation
- Variation 4: always see the AI's explanation, never see the explanation, option to see the explanation with no salience, option to see the explanation with a hint that financial factors might be used by the AI, or option to see the explanation with a hint that demographics might be used by the AI
- Variation 5: three versions of the explanation for those who see or choose to see it: a vague description of how neural-network AI risk prediction works; that plus a SHAP interpretation of why the high-risk borrower was deemed high risk based only on financials; or that plus a SHAP interpretation of why the high-risk borrower was deemed high risk based on financials AND race and gender (see the assignment sketch after this section)

Primary Part 2:
- (within subject) decision to punish others who made a "selfish" decision, with 3 treatments: the decision maker (DM) knew the AI was race-conscious, the DM does not know, or the DM had a chance to know but chose not to know

Secondary: subjects have to guess whether an actual previous borrower repaid, and have a choice to buy an explanation
- Variation 1: simple AI model (2 vars) vs complex AI model (4 vars)
- Variation 2: option to buy explanation before or after prediction task
- Variation 3 (within subject): with or without private information/signal

**** Please review the attached full Qualtrics survey files for the full experiment (note: the primary and secondary experiments are in two separate files/surveys; the primary is attached to this pre-registration, and the secondary will be run later, with its Qualtrics file pre-registered separately before that secondary experiment is run)
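For concreteness, below is a minimal sketch of how the primary-experiment variations could be cross-randomized at the individual level. It is illustrative only: the actual assignment is implemented with Qualtrics' built-in randomizer, the arm labels and the independent uniform draws are assumptions, and Variation 5 applies only to subjects who see or choose to see the explanation.

```python
import random

# Illustrative arm labels (assumed); actual assignment uses Qualtrics' built-in randomizer.
VARIATIONS = {
    "payoff": ["neutral", "lender_aligned"],                       # Variation 1
    "borrower_descriptor": ["race_gender", "phone_odd_even"],      # Variation 2
    "risk_choice": ["no_risk_choice", "risk_choice"],              # Variation 3
    "explanation_access": ["always", "never", "option_no_salience",
                           "option_hint_financial", "option_hint_demographic"],  # Variation 4
    "explanation_version": ["vague", "vague_plus_shap_financial",
                            "vague_plus_shap_financial_demographic"],            # Variation 5
}

def assign_arms(rng=random):
    """Draw one arm per variation, independently and uniformly (an assumption)."""
    arms = {name: rng.choice(options) for name, options in VARIATIONS.items()}
    # Variation 5 is only relevant for subjects who see (or choose to see) an explanation.
    if arms["explanation_access"] == "never":
        arms["explanation_version"] = None
    return arms

if __name__ == "__main__":
    random.seed(0)
    print(assign_arms())
```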
Intervention Start Date
2025-09-04
Intervention End Date
2025-09-30

Primary Outcomes

Primary Outcomes (end points)
(1) Binary decision of whether the participant wants to see an explanation of how the AI predicted the borrowers' default risk before making a loan allocation decision

(2) Binary decision of whether the participant wants to see the borrowers' predicted default risk before making a loan allocation decision
Primary Outcomes (explanation)
Both outcomes capture information-avoidance decisions.

Secondary Outcomes

Secondary Outcomes (end points)
(3) Whether the participant wants to accept the AI recommendation and approve only one loan, or to override the AI and approve both (or, if they made the decision before seeing the explanation in the arm where they choose whether to see the recommendation, the first option would be to approve one random borrower)
(4) How much the participant chooses to reduce the second participant's bonus, at a personal cost of $0.01 per $1.00 reduction, up to the full $10 (3 scenarios: informed, not informed, or chose not to be informed about the explanation); within subject
(5) Willingness to pay (elicited via BDM) to see an explanation in the secondary experiment, where the task is to predict whether a previous borrower defaulted (the Qualtrics file for the secondary experiment is attached to a different pre-registration posted before that experiment is run)
Secondary Outcomes (explanation)
This gets at whether the information changes the actual loan allocation decision, and more generally at how people value explanations when using AI. (The punishment-cost and BDM payoff rules are sketched below.)
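To make the incentive arithmetic in outcomes (4) and (5) concrete, here is a minimal sketch of the two payoff rules. The punishment cost rate ($0.01 per $1.00 of reduction, up to $10) is taken from the registration; the BDM price range and the uniform draw are assumptions for illustration, not the registered parameters.

```python
import random

def punishment_cost(reduction):
    """Cost to the punisher: $0.01 per $1.00 reduction of the other's bonus, capped at $10."""
    reduction = max(0.0, min(10.0, reduction))
    return round(0.01 * reduction, 2)

def bdm_purchase(stated_wtp, max_price=1.00, rng=random):
    """BDM resolution: buy at a randomly drawn price iff stated WTP >= price.
    The $0-$1 price range and uniform draw are illustrative assumptions."""
    price = round(rng.uniform(0.0, max_price), 2)
    bought = stated_wtp >= price
    return {"price": price, "bought": bought, "paid": price if bought else 0.0}

if __name__ == "__main__":
    print(punishment_cost(4.0))                      # punisher pays $0.04 to cut the other's bonus by $4
    print(bdm_purchase(0.50, rng=random.Random(1)))  # buys only if the drawn price is at most $0.50
```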

Experimental Design

Experimental Design
I experimentally test how individuals' preferences for information about an AI are influenced by the factors varied across treatments.

(The full design and randomization flow are attached to this pre-registration as PDF/doc files before the experiment launch.)
Experimental Design Details
I ask participants what their answers alone will determine if their survey response is randomly selected. Participants must answer correctly (namely, choose the answer "Determine whether the private lender will approve the $10,000 loan(s) for the two real borrowers") to be included in the final analysis sample. This sample exclusion criterion is pre-registered.

To maintain data integrity, the experiment employs multiple bot-prevention mechanisms. First, I include CAPTCHA authentication at multiple stages (once at the start of the experiment and once towards the end). Second, I include a "honey-pot" question designed to be hidden from humans but readable by automated scripts. I also time the CAPTCHA authentication response at later stages of the survey to detect non-human engagement (a response is flagged if it takes more than twice as long as the initial CAPTCHA response). Finally, attention and comprehension checks are included to ensure participants understand the task requirements. Participants must pass both the bot detection and the attention check to be included in the final analysis sample. This sample exclusion criterion is also pre-registered. (A sketch of how these criteria could be applied is shown below.)
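As a minimal sketch of how these pre-registered exclusion criteria could be applied to the exported survey data: the field names, the honey-pot convention (humans leave it blank), and the exact form of the timing rule are assumptions for illustration, not the actual Qualtrics export variables.

```python
# Hypothetical field names; the actual Qualtrics export uses its own variable names.
def include_in_analysis(resp):
    """Apply the pre-registered exclusion criteria to one survey response (a dict)."""
    passed_comprehension = (
        resp["comprehension_answer"]
        == "Determine whether the private lender will approve the $10,000 loan(s) for the two real borrowers"
    )
    passed_attention = resp["attention_check_passed"]
    # Honey-pot: a field humans are expected to leave blank, but bots tend to fill in.
    passed_honeypot = resp["honeypot_answer"].strip() == ""
    # Timing rule (assumed form): flag if the later CAPTCHA takes more than
    # twice as long as the initial CAPTCHA.
    passed_timing = resp["captcha_late_seconds"] <= 2 * resp["captcha_initial_seconds"]
    return passed_comprehension and passed_attention and passed_honeypot and passed_timing

# Example usage with a made-up response record.
example = {
    "comprehension_answer": "Determine whether the private lender will approve the $10,000 loan(s) for the two real borrowers",
    "attention_check_passed": True,
    "honeypot_answer": "",
    "captcha_initial_seconds": 12.0,
    "captcha_late_seconds": 18.0,
}
print(include_in_analysis(example))  # True
```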
Randomization Method
Randomization is done using Qualtrics' built-in randomization function
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
See below
Sample size: planned number of observations
2,500 for the primary experiment; 1,000-1,600 for the secondary experiment (information for the secondary experiment is in a separate pre-registration; its sample size and Qualtrics file will be posted there before the secondary experiment is run)
Sample size (or number of clusters) by treatment arms
Evenly split between the non-lender-aligned and lender-aligned treatments for the main arm: 1,250 each.

3/4 will be in the arms where subjects choose whether or not to see the explanation: the same shares for each arm mentioned above, plus a variant of the main arm where the borrowers' race-gender descriptors are replaced by odd/even phone numbers.

1/4 will be the same as the last, except that subjects first consider whether to see the AI prediction/recommendation. (The implied planned counts are sketched below.)

Secondary: an experiment to elicit WTP for an explanation in a task where subjects guess whether a previous borrower actually repaid; assignment is uncorrelated with the primary experiment:
1/4 each for the complex model with private information, the complex model without private information, the simple model with private information, and the simple model without private information;
then, 1/2 where contingent reasoning about the use of private information is shown and illustrated in detail, and 1/2 where no such hints are given (to test whether people fail to reason contingently about the value of an explanation in light of private information).

**** Please review the attached full Qualtrics survey file for the full primary experiment (as the secondary experiment for the same paper will be run at a later date, it will be registered separately before that secondary experiment is run)
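For reference, a minimal arithmetic sketch of the planned primary-experiment counts implied by the fractions above, assuming they apply to the total of 2,500 observations (the secondary experiment's split is documented in its own pre-registration):

```python
total_primary = 2_500

# Payoff-alignment split: even between neutral and lender-aligned.
per_payoff_arm = total_primary // 2                    # 1,250 each

# Split between explanation-choice arms and the prediction-choice-first arm.
explanation_choice_arms = int(total_primary * 3 / 4)   # 1,875 choose whether to see the explanation
prediction_choice_arm = int(total_primary * 1 / 4)     # 625 first choose whether to see the prediction

print(per_payoff_arm, explanation_choice_arms, prediction_choice_arm)
```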
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University Institutional Review Board
IRB Approval Date
2025-02-25
IRB Approval Number
IRB-25-0040
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials