The What, When and Why of AI Use in Online Preference Elicitation Experiments

Last registered on October 13, 2025

Pre-Trial

Trial Information

General Information

Title
The What, When and Why of AI Use in Online Preference Elicitation Experiments
RCT ID
AEARCTR-0016965
Initial registration date
October 07, 2025


First published
October 13, 2025, 9:56 AM EDT


Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
UCSB

Additional Trial Information

Status
In development
Start date
2025-10-14
End date
2025-12-16
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We explore the effects of AI in online preference elicitation experiments (risk, time, and social preferences) by providing subjects with an LLM within the experiment. Providing AI does not, by itself, substantially change outcomes across our preference types, but AI use has a noticeable impact on tasks that participants find less intuitive and on tasks where the AI can anchor its advice to a particular number. Participants' reported measures of AI usefulness are highly correlated with measures of change in risk preferences in response to AI advice. We argue that certain elicitation methods cause more confusion and are more prone to being delegated to AI by experimental subjects. We also point out particular patterns of AI use that can help researchers AI-proof their experiments.
External Link(s)

Registration Citation

Citation
Brooksby, Austin and Anastasiia Morozova. 2025. "The What, When and Why of AI Use in Online Preference Elicitation Experiments." AEA RCT Registry. October 13. https://doi.org/10.1257/rct.16965-1.0
Experimental Details

Interventions

Intervention(s)
We conduct canonical versions of risk, time, and social preference elicitation tasks. We deploy these tasks in conjunction with a treatment that provides participants with the option to chat with an LLM within the experiment.
Intervention (Hidden)
We employ a between-subjects design, where the control group is asked to perform a series of decision-making tasks without explicit access to an LLM within the experiment. In contrast, the treatment group has access to an LLM chat window on the same screen, displayed alongside the decision-making tasks as they are performed.
Subjects are randomized into one of three groups: risk preferences, time preferences, or social preferences. The tasks in these groups are:
Risk:
6 Becker-DeGroot-Marschak (BDM) lottery valuation tasks
6 Gneezy-Potters (GP) investment tasks
Time:
6 Andreoni-Sprenger convex time budget (CTB) tasks
6 money-now-or-later (MEL) tasks
Social:
Trust game (TG)
Ultimatum game (UG)
Dictator game (DG)

Subjects perform the tasks within their assigned group in a randomized order. The experiment is incentivized with a random problem selection mechanism following Azrieli, Chambers, and Healy (2018). The randomly selected problem is paid out using a task-specific mechanism explained to the subject at the time the task is performed.
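As a concrete illustration of this payment mechanism, the sketch below draws one completed task uniformly at random; only that task is paid, under its own rule. This is a minimal sketch of our own, assuming a simple uniform draw; the task labels and function name are hypothetical, not taken from the experimental software.

    import random

    def select_problem_for_payment(completed_tasks, rng):
        """Random problem selection (Azrieli, Chambers, and Healy 2018):
        draw exactly one task uniformly at random; only that task is paid,
        using the task-specific incentive rule explained to the subject."""
        return rng.choice(completed_tasks)

    # Hypothetical risk-group session: 6 BDM and 6 GP tasks.
    tasks = [f"BDM_{i}" for i in range(1, 7)] + [f"GP_{i}" for i in range(1, 7)]
    print(select_problem_for_payment(tasks, random.Random()))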
Intervention Start Date
2025-10-14
Intervention End Date
2025-12-16

Primary Outcomes

Primary Outcomes (end points)
Risk elicitation: risk preference parameters (assuming CRRA) and absolute deviation from expected value.
Time preference elicitation: time preference parameters under quasi-hyperbolic (beta-delta) discounting.
Social preferences: amount sent and reciprocity amounts in the trust game; amount sent in the dictator game; proposer's offer and minimum acceptable offer in the ultimatum game.
The comparison between the treatment and control groups on these key variables constitutes the intent-to-treat (ITT) effect of AI availability in online preference elicitation experiments.
The proportion of subjects utilizing the LLM chat across experiments, in conjunction with measures of task-specific subjective AI value, serves as a measure of elicitation-method-induced subject confusion. Take-up of the AI option allows us to construct a treatment-on-the-treated (TOT) measure of the impact of AI on preference elicitation in online experiments.
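Since control subjects have no in-experiment LLM access, noncompliance is one-sided and the two measures relate through the standard Wald/Bloom formula. A sketch in our own notation, with Z the treatment assignment and Y the outcome:

    \[
      \mathrm{ITT} = \mathbb{E}[Y \mid Z = 1] - \mathbb{E}[Y \mid Z = 0],
      \qquad
      \mathrm{TOT} = \frac{\mathrm{ITT}}{\Pr(\text{LLM used} \mid Z = 1)}.
    \]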
Primary Outcomes (explanation)
For the risk tasks, we estimate preferences separately for each elicitation method. In the BDM lottery valuation task, we fit a structural CRRA utility model under expected utility, assuming probabilistic choice. In the GP investment task, we similarly estimate a CRRA utility model based on observed investment shares. In addition, we report reduced-form measures: for the BDM, the certainty equivalents implied by stated valuations, and for the GP task, the fraction of the endowment invested in the risky asset.
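To illustrate the reduced-form mapping for the BDM task, the sketch below treats a stated valuation as the certainty equivalent of the lottery and solves for the CRRA parameter by root finding. This is our own minimal sketch with hypothetical lottery values, not the registered estimation code (which additionally assumes probabilistic choice):

    import math
    from scipy.optimize import brentq

    def crra_u(x, r):
        """CRRA utility u(x) = x**(1 - r) / (1 - r), with the log form at r = 1."""
        return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1.0 - r) / (1.0 - r)

    def implied_r(valuation, p, x_high, x_low):
        """Treat the BDM valuation as the certainty equivalent of a lottery
        paying x_high with probability p (else x_low), and solve
        u(v) = p*u(x_high) + (1 - p)*u(x_low) for the CRRA parameter r."""
        gap = lambda r: crra_u(valuation, r) - (
            p * crra_u(x_high, r) + (1.0 - p) * crra_u(x_low, r)
        )
        return brentq(gap, -5.0, 5.0)  # bracket wide enough for interior valuations

    # Hypothetical lottery: 50% chance of $20, else $5; stated valuation $10.
    print(implied_r(10.0, 0.5, 20.0, 5.0))  # ~1.0, i.e. log utility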

For both MEL and CTB tasks, we estimate time preferences using a structural CRRA utility framework with quasi-hyperbolic beta–delta discounting, assuming probabilistic choice. In addition, for MEL tasks we report reduced-form measures based on implied discount rates from switching points.
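In our notation, the quasi-hyperbolic framework values a payment c received s periods from now at D(s)u(c); a sketch of the standard formulation assumed here, not the exact estimation specification:

    \[
      D(s) =
      \begin{cases}
        1, & s = 0, \\
        \beta\,\delta^{s}, & s > 0,
      \end{cases}
      \qquad
      u(c) = \frac{c^{1-\gamma}}{1-\gamma}.
    \]

For the MEL reduced form, a switching point at which $x$ now is judged indifferent to $y$ in $k$ periods implies $u(x) = \beta\delta^{k}u(y)$; with linear utility this pins down an implied discount factor of $x/y$ over the $k$-period horizon.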

Secondary Outcomes

Secondary Outcomes (end points)
For all decisions, we elicit the level of certainty in the choice, the time to reach the decision, the perceived helpfulness of AI, and the belief about how intuitive the task was. We also elicit the subject's willingness to pay for AI to make all decisions in the task and their willingness to pay for the AI to revise the decisions made by the subject, both before and after the task's completion. For the treatment group, we record the use of the LLM (take-up rate and chat history).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We run three experimental sessions on Prolific, separately for risk, time, and social preferences elicitation. Participants are randomized into the control (no LLM) or treatment (LLM offered) group.
Experimental Design Details
We run three experimental sessions on Prolific, separately for risk, time, and social preferences elicitation. Participants are randomized into the control (no LLM) or treatment (LLM offered) group. In the risk preferences group, they are asked to evaluate a series of lotteries using the BDM mechanism and to perform a Gneezy-Potters (1997) task. In the time preferences group, subjects participate in a series of convex time budget tasks and a series of money-now-or-later choices. In the social preferences group, subjects complete the trust game, the ultimatum game, and the dictator game in random order.
We elicit their willingness to pay for having the AI make all decisions for them, or for having the AI revise their decisions, both before the task (after getting familiar with the rules) and after the task is completed. We also elicit their beliefs about the helpfulness of AI and their AI use: for the control group, use of AI outside the experiment; for the treatment group, use of any AI beyond the one provided by the experimenter in the task.
Randomization Method
Done by computer
Randomization Unit
Subject
Was the treatment clustered?
No
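One common way to implement computer randomization at the subject level, consistent with the balanced arm sizes registered below, is to shuffle the session roster and split it in half. A minimal sketch; the helper name and seed are hypothetical, not the actual session code:

    import random

    def balanced_assignment(subject_ids, seed=None):
        """Subject-level randomization: shuffle the session roster and
        assign the first half to treatment (LLM offered) and the rest
        to control (no LLM)."""
        rng = random.Random(seed)
        ids = list(subject_ids)
        rng.shuffle(ids)
        half = len(ids) // 2
        return {sid: ("treatment" if i < half else "control")
                for i, sid in enumerate(ids)}

    # Hypothetical risk session: 128 subjects -> 64 treatment, 64 control.
    arms = balanced_assignment([f"subj_{i:03d}" for i in range(128)], seed=1)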

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
128 subjects for risk elicitation, 100 subjects for social preferences
Sample size (or number of clusters) by treatment arms
Risk: 64 subjects in treatment, 64 subjects in control. Social preferences: 50 subjects in treatment, 50 subjects in control.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.5 standard deviation
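As a check (our calculation, not part of the registration), the registered figure matches the standard two-sample approximation at $\alpha = 0.05$ and 80% power for the risk arms:

    \[
      \mathrm{MDE}
      = (z_{1-\alpha/2} + z_{1-\beta})\sqrt{\frac{1}{n_T} + \frac{1}{n_C}}
      = (1.96 + 0.84)\sqrt{\frac{2}{64}} \approx 0.49\ \mathrm{SD},
    \]

with the social arms ($n = 50$ per arm) giving approximately 0.56 SD.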
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee (HSC)
IRB Approval Date
2025-10-06
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials