Field: Last Published
Before: February 01, 2026 06:10 PM
After: April 30, 2026 04:15 PM

Field: Experimental Design (Public)
Before:
AI decision-makers face allocation problems in an experiment in which the payoff of another party is hidden, and they can choose whether to reveal the payoff information before deciding (hidden vs. full information). In a separate component, human participants evaluate the moral acceptability of these decisions under different information conditions.
The control group for the AI decision-maker will be a baseline condition without additional prompts. For each AI prompt framing, the AI will be randomly assigned to different variants of the experiment. Treatment groups involve different AI reasoning frames. Behavior across the different prompt framings will be compared and evaluated.
Human subjects evaluate all possible decision combinations (i.e., the strategy method).
After:
AI decision-makers face allocation problems in an experiment in which the payoff of another party is hidden, and they can choose whether to reveal the payoff information before deciding (hidden vs. full information). In a separate component, human participants evaluate the moral acceptability of these decisions under different information conditions.
The control group for the AI decision-maker will be a baseline condition without additional prompts. For each AI prompt framing, the AI will be randomly assigned to different variants of the experiment. Treatment groups involve different AI reasoning frames. Behavior across the different prompt framings will be compared and evaluated.
Human subjects evaluate all possible decision combinations (i.e., the strategy method).
3.6.2026: Human decision-makers face the same type of allocation problems encountered by the AI under three payoff schemes: Canonical Payoffs, Expensive Fairness, and Increased Harm. Between-subject comparison across (hidden/full info) × (three payoff schemes).
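
A minimal sketch, assuming Python and one independent uniform draw per participant, of how the (hidden/full info) × (three payoff schemes) between-subject assignment could look; the condition labels, seed, and helper function are illustrative assumptions, not taken from the registration.

```python
import random

# Illustrative 2 x 3 between-subject assignment; all names here are
# assumptions for the sketch, not part of the registered protocol.
INFO_CONDITIONS = ["hidden", "full"]
PAYOFF_SCHEMES = ["Canonical Payoffs", "Expensive Fairness", "Increased Harm"]

def assign_cell(rng: random.Random) -> tuple[str, str]:
    """Place one participant into a random (info, payoff scheme) cell."""
    return rng.choice(INFO_CONDITIONS), rng.choice(PAYOFF_SCHEMES)

rng = random.Random(2026)  # fixed seed so the assignment is reproducible
sample_assignments = [assign_cell(rng) for _ in range(6)]
print(sample_assignments)
```

Independent draws keep the sketch simple; a real deployment might instead block-randomize to balance cell sizes across the six cells.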

Field: Randomization Unit
Before: Individual AI prompts
After: Individual AI prompts
3.6.2026: Individual (recruited online)

Field: Planned Number of Observations
Before:
AI decision-maker: 40–80 AI runs per preregistered persona × condition cell (Stage 1: N=40; Stage 2 adds N=40 if stopping-rule criteria are met), with outcomes recorded at the run level.
Human study: 200–250 participants (US-based Prolific), each providing 4 scenario-level observations (within-subject 2×2), for a total of ~800–1,000 scenario evaluations.
After:
AI decision-maker: 40–80 AI runs per preregistered persona × condition cell (Stage 1: N=40; Stage 2 adds N=40 if stopping-rule criteria are met), with outcomes recorded at the run level.
Human study: 100–150 participants (US-based Prolific), each providing 4 scenario-level observations (within-subject 2×2), for a total of ~400–600 scenario evaluations.
3.7.2026: Human decision-maker study: 80 runs (hidden-info) and 40 runs (full-info) for each of Canonical Payoffs, Expensive Fairness, and Increased Harm.
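
A minimal sketch in Python of the arithmetic behind these planned counts, assuming a boolean stand-in for the stopping rule (its actual criteria are specified elsewhere in the registration); every name here is illustrative rather than taken from the study materials.

```python
# AI decision-maker: 40-80 runs per persona x condition cell, in two stages.
STAGE_1_RUNS = 40
STAGE_2_RUNS = 40  # added per cell only if the stopping-rule criteria are met

def ai_runs_per_cell(stopping_rule_met: bool) -> int:
    """Planned AI runs for one persona x condition cell."""
    return STAGE_1_RUNS + (STAGE_2_RUNS if stopping_rule_met else 0)

assert ai_runs_per_cell(False) == 40  # Stage 1 only
assert ai_runs_per_cell(True) == 80   # Stage 1 + Stage 2

# Human evaluation study: each participant gives 4 within-subject (2 x 2)
# scenario-level observations, so 100-150 participants yield 400-600.
PARTICIPANTS = (100, 150)
SCENARIO_EVALS = tuple(n * 4 for n in PARTICIPANTS)
assert SCENARIO_EVALS == (400, 600)

# Human decision-maker study: (80 hidden + 40 full) runs per payoff scheme.
RUNS_PER_SCHEME = {"hidden": 80, "full": 40}
TOTAL_HUMAN_RUNS = sum(RUNS_PER_SCHEME.values()) * 3  # 360 runs over 3 schemes
assert TOTAL_HUMAN_RUNS == 360
```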