
Fields Changed

Registration

Field Before After
Abstract
Before: Lab experiment to investigate individual preferences for receiving AI-generated information and an explanation of how the AI made its prediction. The experiment uses a real-stakes decision in which a black-box AI informs a decision to allocate an actual US$10,000 loan.
After: Lab experiment to investigate individual preferences for receiving AI-generated information and an explanation of how the AI made its prediction. The experiment uses a real-stakes decision in which a black-box AI informs a decision to allocate actual US$10,000 loans. I also investigate how and why people value explanations, and how behavioral factors might affect such valuations.
Trial Start Date
Before: June 01, 2025
After: September 03, 2025
Last Published
Before: April 24, 2025 10:23 AM
After: September 01, 2025 11:47 PM
Intervention (Public)
Before: Individuals make a decision that determines how a private lender allocates a $10,000 loan. Participants are randomized into a neutral treatment, where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment, where they have direct stakes tied to loan repayment. The actual date of the experiment will be pre-registered here before launch.
After: Individuals make a decision that determines how a private lender allocates two $10,000 loans (approve both or approve one). Participants are randomized into a neutral treatment, where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment, where they have direct stakes tied to loan repayment. A secondary experiment will elicit willingness to pay for explanations. The experiments will run in early September 2025 on Prolific, as two separate experiments.
Intervention Start Date
Before: June 01, 2025
After: September 03, 2025
Intervention End Date
Before: December 06, 2025
After: September 30, 2025
Primary Outcomes (End Points)
Before: (1) Binary decision of whether the participant wants to see an explanation of how the AI made its prediction of the borrowers' default risk before making a loan allocation decision. ** The full set of outcome variables that we intend to analyze is listed in the Stata do-file attached to this experiment before the experiment itself. Besides those tables, we plan to present the word cloud formed by responses to the question "You chose to see an explanation of how the AI algorithm made the default risk predictions BEFORE making the loan decision; what were you hoping to learn from the explanation?", among participants who were presented with and took up the option to review an explanation in which the chance to see the role race and gender play in the AI's default risk calculation is made salient. (2) Binary decision of whether the participant wants to see the borrowers' predicted default risk before making a loan allocation decision.
After: (1) Binary decision of whether the participant wants to see an explanation of how the AI made its prediction of the borrowers' default risk before making a loan allocation decision. (2) Binary decision of whether the participant wants to see the borrowers' predicted default risk before making a loan allocation decision.
Planned Number of Observations
Before: 2,500
After: 2,500 for the primary experiment; 1,000-1,600 for the secondary experiment (details for the secondary experiment are in a separate pre-registration; its sample size and Qualtrics file will be posted there before it is run).
Sample size (or number of clusters) by treatment arms
Before: Split evenly between non-lender-aligned and lender-aligned for the main arm: 1,250 each. 4/5 will be in the arms where they choose whether or not to see the explanation; the same shares apply to a variant of the main arm where race-gender descriptors for borrowers are replaced by odd/even birth day/month (2,000 total across these arms). 1/5 as many as above, again split evenly between non-lender-aligned and lender-aligned, for an arm identical to the main arm except that participants choose whether or not to see the AI-generated predicted default risk; these participants will be randomized to see an explanation (500 total). 2,500 in total.
After: Split evenly between non-lender-aligned and lender-aligned for the main arm: 1,250 each. 3/4 will be in the arms where they choose whether or not to see the explanation: the same shares for each arm mentioned above, plus a variant of the main arm where race-gender descriptors for borrowers are replaced by odd/even phone numbers. 1/4 same as the last one, but subjects will first consider whether to see the AI prediction/recommendation. Secondary: an experiment to elicit WTP for an explanation in a task to guess whether a previous borrower actually repaid. Randomization, uncorrelated with the above: 1/4 each for complex model with private information, complex model with no private information, simple model with private information, simple model with no private information; then 1/2 where contingent reasoning related to the use of private information is shown and illustrated in detail, and 1/2 where no such hints are given (to test whether people fail to reason contingently about the value of an explanation in light of private information). **** Please review the attached full Qualtrics survey file for the full primary experiment (as the secondary experiment for the same paper will be run on a later day, it will be registered separately before that secondary experiment is run).
Intervention (Hidden)
Before: 2x3x5x3 design.
- 2: neutral treatment, where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment, where they have direct stakes tied to loan repayment.
- 3: race-gender of prospective borrowers revealed to be different AND no decision on whether to see the predicted default risk; race-gender NOT revealed AND no decision on whether to see the predicted default risk; race-gender revealed to be different AND with a decision on whether to see the predicted default risk (*** the cost to see information in any arm is $0.01).
- 5: always see the AI's explanation; never see it; option to see it with no salience; option to see it with a hint that the AI might use financial factors; option to see it with a hint that the AI might use financial factors and demographics.
- 3: three versions of the explanation for those who see or choose to see one: a vague description of how neural-network AI risk prediction works; that plus a SHAP interpretation of why the high-risk borrower was deemed high risk based on financials alone; that plus a SHAP interpretation of why the high-risk borrower was deemed high risk based on financials AND race and gender.
After: Primary:
- Variation 1: neutral treatment, where payoffs are not tied to the actual loan repayment outcome, or a "lender-aligned" treatment, where they have direct stakes tied to loan repayment.
- Variation 2: race-gender of prospective borrowers revealed to be different, or phone numbers revealed to be different (odd vs. even).
- Variation 3: no decision on whether to see the predicted default risk, then a choice to see the explanation or not (not exactly; see next variation); or with a decision on whether to see the predicted default risk (this arm will have fewer subjects), then a choice to see the explanation.
- Variation 4: always see the AI's explanation; never see it; option to see it with no salience; option to see it with a hint that the AI might use financial factors; option to see it with a hint that the AI might use demographics.
- Variation 5: three versions of the explanation for those who see or choose to see one: a vague description of how neural-network AI risk prediction works; that plus a SHAP interpretation of why the high-risk borrower was deemed high risk based on financials alone; that plus a SHAP interpretation of why the high-risk borrower was deemed high risk based on financials AND race and gender.
Primary Part 2:
- (within subject) Decision to punish others who made a "selfish" decision, with 3 treatments: the decision maker knew the AI was race-conscious; the DM did not know; the DM had a chance to know but chose not to know.
Secondary: subjects must guess whether an actual previous borrower repaid, and have a choice to buy an explanation.
- Variation 1: simple AI model (2 variables) vs. complex AI model (4 variables).
- Variation 2: option to buy the explanation before or after the prediction task.
- Variation 3 (within subject): with or without private information/signal.
**** Please review the attached full Qualtrics survey files for the full experiment (note: primary and secondary are in two separate files/surveys; the primary is attached to this pre-registration, and the secondary's Qualtrics file will be pre-registered separately before that secondary experiment is run).
Secondary Outcomes (End Points)
Before: (3) Whether the participant wants to allocate 90% of the loan to the low-risk borrower and 10% to the high-risk borrower, or 50% to each. (4) How much to reduce the second participant's bonus, at a personal cost of $0.01 per $1.00 reduction, up to the full $10 (3 scenarios, within subject: informed, not informed, chose not to be informed about the explanation).
After: (3) Whether the participant wants to accept the AI recommendation and approve only one loan, or to override the AI and approve both (or, if they made the decision before seeing the explanation in the arm where they choose whether to see the recommendation, the first option would be to approve one random borrower). (4) How much to reduce the second participant's bonus, at a personal cost of $0.01 per $1.00 reduction, up to the full $10 (3 scenarios, within subject: informed, not informed, chose not to be informed about the explanation). (5) Willingness to pay (elicited via BDM) to see an explanation in the secondary experiment, where the task is to predict whether a previous borrower defaulted (the secondary experiment's Qualtrics file is attached to a different pre-registration, posted before that experiment is run).
Secondary Outcomes (Explanation)
Before: This gets at whether the information changes the actual loan allocation decision (i.e., whether the information influences the decision).
After: This gets at whether the information changes the actual loan allocation decision (i.e., whether the information influences the decision). More generally, it examines how people value explanations when using AI.