
Consumer Choices over AI Assistant Services

Last registered on December 26, 2025

Pre-Trial

Trial Information

General Information

Title
Consumer Choices over AI Assistant Services
RCT ID
AEARCTR-0017411
Initial registration date
December 10, 2025

First published
December 26, 2025, 2:01 AM EST

Locations

There is information in this trial that is not available to the public; access can be requested through the Registry.

Primary Investigator

Affiliation
Rice University

Other Primary Investigator(s)

PI Affiliation
University of Notre Dame

Additional Trial Information

Status
In development
Start date
2025-12-11
End date
2027-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Consumers are increasingly adopting general-purpose AI assistants, which can be offered by both private firms and public-sector institutions. This project examines how individuals evaluate AI-powered assistant services when provider type and service design vary. Using a discrete choice experiment, we study how stated choices respond to differences in provider, capabilities, and pricing, and we recover consumer preferences over these attributes. The findings will be used to assess how the availability and design of a publicly provided AI assistant affect adoption, substitution patterns with private providers, and consumer welfare.
External Link(s)

Registration Citation

Citation
Lee, Jung Youn and Joonhyuk Yang. 2025. "Consumer Choices over AI Assistant Services." AEA RCT Registry. December 26. https://doi.org/10.1257/rct.17411-1.0
Experimental Details

Interventions

Intervention(s)
This study employs an online survey experiment targeting U.S. adults to examine preferences for hypothetical AI assistant services through a series of discrete choice tasks. The experiment uses two layers of randomization. First, participants are randomly assigned to choice environments that differ in (i) whether a public-sector AI assistant is available alongside private alternatives and (ii) how that public-sector option is described. Second, across tasks, the attributes of each AI service (such as provider type, performance, access conditions, and monthly fees) are randomized.
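For concreteness, the two layers of randomization can be summarized as a set of between-subject arms and a set of task-level attribute dimensions. The sketch below (in Python) is purely illustrative: the attribute names and levels are hypothetical placeholders, since the registration does not enumerate the exact levels here.

    # Illustrative summary of the two randomization layers.
    # Attribute names and levels are hypothetical placeholders, not the registered design.
    BETWEEN_SUBJECT_ARMS = {
        "A": "private providers only",
        "B1": "private providers plus public-sector option, description 1",
        "B2": "private providers plus public-sector option, description 2",
    }

    TASK_LEVEL_ATTRIBUTES = {
        "provider_type": ["private firm", "public-sector institution"],
        "performance": ["basic", "advanced"],
        "access_conditions": ["open access", "account required"],
        "monthly_fee_usd": [0, 10, 20],
    }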
Intervention Start Date
2025-12-11
Intervention End Date
2025-12-18

Primary Outcomes

Primary Outcomes (end points)
- Adoption propensity: A binary indicator for whether a participant selects any AI assistant or opts out (chooses "None") in each choice task.
- Provider choice shares: The share of choices allocated to each provider type, compared across experimental conditions.
- Market substitution effects: Changes in the choice shares of private providers when a public option is available in the choice set.
Primary Outcomes (explanation)
We use the data from the discrete choice experiment to construct several related endpoints. First, we measure overall market participation as the probability of choosing any AI assistant rather than the "None" option. Second, within choice sets where a public-sector provider is present, we measure its take-up rate and analyze how this probability varies with service attributes. Third, we summarize substitution patterns across provider types using observed choice shares. Finally, the choice data are used to estimate a structural demand model that recovers preference parameters and implied willingness-to-pay for specific service characteristics.
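As a reference point for this last step, a standard conditional logit specification maps the endpoints above into model objects. The formulas below are a minimal sketch of one common specification, not necessarily the exact model that will be estimated.

    % Minimal conditional logit sketch (one common specification, not a committed model).
    % Respondent i, task t, alternative j with non-price attributes x_{jt} and monthly fee p_{jt};
    % errors are i.i.d. type I extreme value, and the "None" option is alternative 0
    % with utility normalized to zero.
    U_{ijt} = x_{jt}'\beta - \alpha\, p_{jt} + \varepsilon_{ijt}, \qquad
    P_{ijt} = \frac{\exp\!\left(x_{jt}'\beta - \alpha p_{jt}\right)}
                   {1 + \sum_{k \in J_t} \exp\!\left(x_{kt}'\beta - \alpha p_{kt}\right)}
    % Implied willingness-to-pay for a one-unit change in attribute k:
    \mathrm{WTP}_k = \beta_k / \alpha

Under a specification of this form, adoption propensity corresponds to 1 - P_{i0t}, provider choice shares to the fitted P_{ijt} aggregated by provider type, and substitution effects to how these shares change when the public option enters the choice set J_t.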

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study is a randomized survey experiment in a sample of U.S. adults.

- Screening: Participants first complete a short screening module that asks about prior use of, or interest in, AI assistants. Only those who indicate some prior use or interest are invited to proceed, so that the experiment focuses on individuals for whom AI assistant adoption decisions are relevant.

- Randomization: Eligible participants are randomly assigned to one of two experimental conditions that differ in the composition of the AI services shown in the choice tasks (with vs. without a public-sector provider), and, within the public-sector condition, to one of two framings of that provider.

- Choice tasks: Each respondent completes a series of discrete choice tasks (conjoint questions) in which they choose between several AI assistant profiles or an explicit "None" option. Attribute levels (e.g., price, performance, access conditions, provider type) vary across profiles and tasks according to a pre-generated experimental design; a hypothetical example of a single task appears after this list.

- Survey module: A separate survey section collects covariates such as institutional trust, privacy attitudes, attitudes toward AI, fiscal attitudes, and demographics.
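To make the task format concrete, the following Python sketch shows what a single choice task could look like as a data record; the attribute names, levels, and number of alternatives are hypothetical placeholders rather than values from the registered design files.

    # Hypothetical single choice task (placeholder attributes and levels).
    choice_task = {
        "task_id": 1,
        "alternatives": [
            {"label": "Option 1", "provider_type": "private firm",
             "performance": "advanced", "access_conditions": "paid subscription",
             "monthly_fee_usd": 20},
            {"label": "Option 2", "provider_type": "public-sector institution",
             "performance": "basic", "access_conditions": "free with account",
             "monthly_fee_usd": 0},
            {"label": "None"},  # explicit opt-out alternative
        ],
    }
    # The recorded outcome for each task is the label of the chosen alternative.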
Experimental Design Details
Not available
Randomization Method
Randomization will be implemented within the online survey platform using built-in randomization routines and pre-generated design files:

- Respondents are randomly assigned with equal probability to one of two experimental conditions (with vs. without a public-sector provider in the choice set).

- Within the condition that includes a public-sector provider, respondents are further randomized with equal probability to one of two descriptions of that provider.

- For each experimental condition, we use pre-generated discrete choice designs in which task order, alternative positions, and attribute combinations are randomized ex ante subject to pre-specified constraints (e.g., no duplicate or strictly dominated profiles within a task). Respondents are then randomly assigned to one design block within their condition.

- The order of the survey modules and the discrete choice experiment (DCE) is randomized.

All randomization is done by computer; the research team does not observe or modify treatment assignment during data collection.
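A minimal sketch of this assignment logic, written in Python purely for illustration, is given below; the production implementation relies on the survey platform's built-in routines and pre-generated design files, and the number of design blocks shown here is a placeholder.

    import random

    def assign_respondent(n_design_blocks: int = 10) -> dict:
        """Illustrative sketch of the assignment logic described above (not production code)."""
        assignment = {}

        # Equal-probability assignment to the two experimental conditions.
        assignment["condition"] = random.choice(["private_only", "with_public_option"])

        # Within the public-option condition, equal-probability assignment to one
        # of the two descriptions of the public-sector provider.
        if assignment["condition"] == "with_public_option":
            assignment["public_description"] = random.choice(["description_1", "description_2"])

        # Assignment to one pre-generated design block within the condition;
        # block contents (task order, alternative positions, attribute combinations)
        # are fixed ex ante subject to the pre-specified constraints.
        assignment["design_block"] = random.randrange(n_design_blocks)

        # Randomized ordering of the survey modules and the DCE
        # (two orderings shown here as a placeholder).
        assignment["module_order"] = random.choice(["dce_first", "survey_first"])

        return assignment

Calling assign_respondent() once per respondent yields a record such as {'condition': 'with_public_option', 'public_description': 'description_2', 'design_block': 3, 'module_order': 'dce_first'}, which then determines the choice sets and module order that respondent sees.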
Randomization Unit
Individual (assignment to experimental condition) and choice-task level (for attribute profiles within each respondent).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not clustered.
Sample size: planned number of observations
Approximately 5,000 respondents; each respondent completes 10 choice tasks, yielding approximately 50,000 choice observations.
Sample size (or number of clusters) by treatment arms
- Condition A (choice sets with only privately provided AI assistants): approximately 2,500 respondents.

- Condition B (choice sets that also include a publicly provided AI assistant): approximately 2,500 respondents.

- Within Condition B, approximately 1,250 respondents are assigned to each public-provider description arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Rice University Institutional Review Board
IRB Approval Date
2025-11-21
IRB Approval Number
IRB-FY2026-150