
Fields Changed

Registration

Field: Last Published
Before: February 01, 2026 06:10 PM
After: April 30, 2026 04:15 PM

Field: Experimental Design (Public)
Before: AI decision-makers face allocation problems in an experiment in which the payoff of another party is hidden, and they can choose whether to reveal payoff information before deciding (hidden vs. full information). In a separate component, human participants evaluate the moral acceptability of these decisions under different information conditions. The control group for the AI decision-maker will be a baseline condition without additional prompts. Within each AI prompt framing, the AI will be randomly assigned to different variants of the experiment. Treatment groups involve different AI reasoning frames. Behavior across prompt framings will be compared and evaluated. Human subjects evaluate all possible decision combinations (i.e., the strategy method).
After: Unchanged from the above, plus the 3.6.2026 addition: Human decision-makers face the same type of allocation problems encountered by the AI under 3 payoff schemes: Canonical Payoffs, Expensive Fairness, and Increased Harm. Between-subject comparison across (hidden/full info) × (3 payoff schemes).

Field: Randomization Unit
Before: Individual AI prompts
After: Individual AI prompts; 3.6.2026 addition: Individual (recruited online)

Field: Planned Number of Observations
Before: AI decision-maker: 40–80 AI runs per preregistered persona × condition cell (Stage 1: N=40; Stage 2 adds N=40 if stopping-rule criteria are met), with outcomes recorded at the run level. Human study: 200–250 participants (US-based Prolific), each providing 4 scenario-level observations (within-subject 2×2), for a total of ~800–1,000 scenario evaluations.
After: AI decision-maker: 40–80 AI runs per preregistered persona × condition cell (Stage 1: N=40; Stage 2 adds N=40 if stopping-rule criteria are met), with outcomes recorded at the run level. Human study: 100–150 participants (US-based Prolific), each providing 4 scenario-level observations (within-subject 2×2), for a total of ~800–1,000 scenario evaluations. 3.7.2026 addition: Human decision-maker study: 80 runs (hidden-info) and 40 runs (full-info) for each of Canonical Payoffs, Expensive Fairness, and Increased Harm.

Analysis Plans

Field: Document
After: When_Does_Moral_Wiggle_Room_Arise__Implications_from_Large_Language_Models 3.7.2026.pdf
MD5: 79e0fb6f959ab35c5f8c51cc15007102
SHA1: 476fbdc475ba936de02d804b7c05f52d2afb4eb2

Field: Title
After: Amended version with new human component study

Documents

Field: Document Name
After: Screenshots

Field: File
After: screenshots.zip
MD5:
SHA1:

Field: Description
After: Screenshots of the main experimental interfaces

Field: Public
After: Yes

IRBs

Field: IRB Name
After: University of California, Merced Institutional Review Board

Field: IRB Approval Date
After: March 09, 2026

Field: IRB Approval Number
After: UCM2026-24