AI as an Agent

Last registered on January 06, 2026

Pre-Trial

Trial Information

General Information

Title
AI as an Agent
RCT ID
AEARCTR-0016811
Initial registration date
September 18, 2025


First published
September 19, 2025, 10:19 AM EDT


Last updated
January 06, 2026, 5:55 PM EST


Locations

Region

Primary Investigator

Affiliation
University of California Merced

Other Primary Investigator(s)

PI Affiliation
University of California Merced

Additional Trial Information

Status
In development
Start date
2025-09-18
End date
2027-02-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines AI decision-making as an agent in choice experiments with a moral component. The questions of interest are: how are AI decisions affected by (1) posited objectives for AI (extent of representation of one person v. another), (2) the decision frame / game type, (3) observability of AI choices by affected person(s), (4) observability of the choice environment by affected parties, (5) preferences of affected person(s), and (6) whether AI is making a final decision or providing advice to affected parties? Treatments vary along all of these dimensions. Additional data will be collected on human responses to AI v. human advice.
External Link(s)

Registration Citation

Citation
Hua, Tony and Robert Innes. 2026. "AI as an Agent." AEA RCT Registry. January 06. https://doi.org/10.1257/rct.16811-1.2
Experimental Details

Interventions

Intervention(s)
AI (ChatGPT 5) will be prompted with choice frameworks that vary along the dimensions indicated in the abstract. AI will decide (or advise a decision) that affects interactions and payoffs between two human individuals (e.g., Sender and Receiver) in a choice experiment. AI will be acting on behalf of one of the two individuals (e.g., Sender) as an agent, but with an objective that may account for effects on the other individual (e.g., Receiver). AI decision responses and justifications will be recorded. With a sample of prompts, inferences will be drawn on effects of different dimensions on AI decisions and associated preferences. For example, in a dictator game, how do prompted objectives favoring the Dictator (v. Receiver) relatively more affect (1) the AI propensity for choosing a fair allocation on behalf of the Dictator and (2) AI's responsiveness to a revealed Dictator preference for a fair (v. selfish) allocation? Similar questions will be examined along the other dimensions of interest in this project.
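The repeated-prompt design described above can be sketched as a simple collection loop. Everything below is illustrative scaffolding, not the registered protocol: the treatment labels, the `query_model` stub, and the repetition count are placeholders (the registered design varies objective, game type, observability, revealed preferences, and the decide-v.-advise role).

```python
from itertools import product

# Hypothetical treatment dimensions (illustrative labels only).
OBJECTIVES = ["favor_dictator", "neutral", "favor_receiver"]
GAMES = ["dictator", "sender_receiver"]   # the second game is a placeholder name
ROLES = ["decide", "advise"]
REPS_PER_CELL = 20  # lower bound of the registered 20-60 responses per treatment

def query_model(prompt: str) -> str:
    """Stub standing in for a call to the AI model; a real implementation
    would call the model's API and return its free-text response."""
    return f"RESPONSE[{prompt}]"

# Each cell of the treatment grid gets REPS_PER_CELL independent prompts;
# every response and its treatment assignment is recorded for later analysis.
records = []
for objective, game, role in product(OBJECTIVES, GAMES, ROLES):
    for rep in range(REPS_PER_CELL):
        prompt = f"game={game}; objective={objective}; role={role}"
        records.append({
            "objective": objective, "game": game, "role": role,
            "rep": rep, "response": query_model(prompt),
        })
```

With these placeholder dimensions the grid has 3 × 2 × 2 = 12 treatment cells and 240 recorded responses; scaling the grid and repetitions toward the registered 2000-3000 responses is a matter of adding dimensions and reps.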
Intervention Start Date
2025-09-18
Intervention End Date
2027-02-01

Primary Outcomes

Primary Outcomes (end points)
Outcomes are AI decisions in the posited choice games in the different treatments (varying the choice environment along the dimensions indicated in the abstract). For example, in a dictator game, outcomes will be the allocation choice of AI on behalf of the Dictator (where the allocation determines payments to Dictator and Receiver). We will be interested in how treatment variation drives variation in AI decisions.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
AI choice environments will be varied in two games with moral components. Treatment variation will include (1) game type, (2) AI decision v. AI advice, (3) information made available to the human parties affected by AI decisions, (4) information on human preferences made available to AI, and (5) the objective posed to AI (e.g., representing the interests of one of the human individuals). Repeated prompts will generate a profile of AI behavior in each treatment.
Experimental Design Details
Not available
Randomization Method
Each AI prompt in each treatment will produce an independent draw on AI behavior under the treatment. Because the underlying AI model is not altered by prior prompts, each treatment-specific sample of AI responses will constitute a random sample of AI behavior under the treatment.
Randomization Unit
Randomization is at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
2000-3000 AI responses.
Sample size (or number of clusters) by treatment arms
20-60 AI responses per treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Sample size per treatment will be informed by power calculations.
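As one illustration of such a power calculation (the effect sizes and error rates here are placeholders, not registered values), the standard two-proportion formula gives the per-treatment sample size needed to detect a difference in, say, fair-allocation rates between two treatments:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group n to detect proportions p1 vs p2 with a two-sided
    two-proportion z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# e.g., detecting a 50% vs 80% fair-allocation rate across two treatments:
print(n_per_group(0.5, 0.8))  # 39 per treatment arm
```

Under these illustrative inputs, 39 responses per arm falls within the registered range of 20-60 AI responses per treatment; smaller hypothesized differences would require proportionally larger cells.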
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
University of California Merced
IRB Approval Date
2025-02-05
IRB Approval Number
UCM 10-0006
Analysis Plan

There is information in this trial unavailable to the public.