
Fields Changed

Registration

Last Published
Before: December 26, 2025 02:01 AM
After: January 08, 2026 01:48 PM
Intervention (Public)
Before: This study employs an online survey experiment targeting U.S. adults to examine preferences for hypothetical AI assistant services through a series of discrete choice tasks. The experiment uses two layers of randomization. First, participants are randomly assigned to choice environments that differ in (i) whether a public-sector AI assistant is available alongside private alternatives and (ii) how that public-sector option is described. Second, across tasks, the attributes of each AI service -- such as provider type, performance, access conditions, and monthly fees -- are randomized.
After: This study employs an online survey experiment targeting U.S. adults to examine preferences for hypothetical AI assistant services through a series of discrete choice tasks. The experiment uses two layers of randomization. First, participants are randomly assigned to a choice environment that differs in whether a public-sector AI assistant option is available alongside private alternatives. Second, across tasks, the attributes of each AI service, such as provider type, performance, access conditions, and monthly fees, are randomized.
Intervention Start Date
Before: December 11, 2025
After: January 12, 2026
Intervention End Date
Before: December 18, 2025
After: April 12, 2026
Experimental Design (Public)
Before: The study is a randomized survey experiment in a sample of U.S. adults.
- Screening: Participants first complete a short screening module that asks about prior use of, or interest in, AI assistants. Only those who indicate some prior use or interest are invited to proceed, so that the experiment focuses on individuals for whom AI assistant adoption decisions are relevant.
- Randomization: Eligible participants are randomly assigned to one of two experimental conditions that differ in the composition of the AI services shown in the choice tasks (with vs. without a public-sector provider), and, within the public-sector condition, to one of two framings of that provider.
- Choice tasks: Each respondent completes a series of discrete choice tasks (conjoint questions) in which they choose between several AI assistant profiles or an explicit "None" option. Attribute levels (e.g., price, performance, access conditions, provider type) vary across profiles and tasks according to a pre-generated experimental design.
- Survey module: A separate survey section collects covariates such as institutional trust, privacy attitudes, attitudes toward AI, fiscal attitudes, and demographics.
After: The study is a randomized survey experiment in a sample of U.S. adults.
- Screening: Participants first complete a short screening module that asks about prior use of, or interest in, AI assistants. Only those who indicate some prior use or interest are invited to proceed to the choice tasks; those who do not are routed directly to the remainder of the survey.
- Randomization: Eligible participants are randomly assigned to one of two experimental conditions that differ in the composition of the AI services shown in the choice tasks (with vs. without a public-sector provider).
- Choice tasks: Each respondent completes a series of discrete choice tasks (conjoint questions) in which they choose between several AI assistant profiles or an explicit "None" option. Attribute levels (e.g., price, performance, access conditions, provider type) vary across profiles and tasks according to a pre-generated experimental design.
- Survey module: A separate survey section collects covariates such as institutional trust, privacy attitudes, attitudes toward AI, fiscal attitudes, and demographics.
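As an illustration of the revised screening and routing rule (only respondents who report prior use of, or interest in, AI assistants proceed to the choice tasks; everyone else is routed to the remainder of the survey), here is a minimal Python sketch. The field name `uses_or_interested_in_ai` and the condition labels are illustrative assumptions, not the survey platform's actual configuration.

```python
import random

def route_respondent(screener_response: dict) -> dict:
    """Illustrative routing logic for the screening step described above."""
    eligible = screener_response.get("uses_or_interested_in_ai", False)

    plan = {"complete_covariate_module": True}  # everyone completes the survey module
    if eligible:
        # Eligible respondents proceed to the choice tasks and are assigned
        # to one of the two experimental conditions (see Randomization Method).
        plan["condition"] = random.choice(["no_public_option", "public_option"])
        plan["complete_choice_tasks"] = True
    else:
        # Revised design: ineligible respondents skip the choice tasks and are
        # routed directly to the rest of the survey.
        plan["complete_choice_tasks"] = False
    return plan
```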
Randomization Method
Before: Randomization will be implemented within the online survey platform using built-in randomization routines and pre-generated design files:
- Respondents are randomly assigned with equal probability to one of two experimental conditions (with vs. without a public-sector provider in the choice set).
- Within the condition that includes a public-sector provider, respondents are further randomized with equal probability to one of two descriptions of that provider.
- For each experimental condition, we use pre-generated discrete choice designs in which task order, alternative positions, and attribute combinations are randomized ex ante subject to pre-specified constraints (e.g., no duplicate or strictly dominated profiles within a task). Respondents are then randomly assigned to one design block within their condition.
- The order of the survey modules and the DCE is randomized.
All randomization is done by computer; the research team does not observe or modify treatment assignment during data collection.
After: Randomization will be implemented within the online survey platform using built-in randomization routines and pre-generated design files.
- Respondents are randomly assigned with equal probability to one of two experimental conditions (with vs. without a public-sector provider in the choice set).
- Within the condition that includes a public-sector provider, the description of the public option is randomized within respondents across choice tasks (rather than being fixed at the respondent level).
- For each experimental condition, we use pre-generated discrete choice designs in which task order, alternative positions, and attribute combinations are randomized ex ante subject to pre-specified constraints (e.g., no identical profiles or no strictly dominated alternatives within a task). Respondents are then randomly assigned to one design block (survey version) within their condition; within each block, task order and the positions of alternatives are held fixed for all respondents assigned to that block.
- The position of the choice tasks within the survey (before vs. after the additional survey modules) is randomized.
All randomization is done by computer; the research team does not observe or modify treatment assignment during data collection.
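A minimal sketch of the computer-driven assignment steps listed above, assuming a fixed number of pre-generated design blocks per condition (the registration does not state how many blocks exist); the names are illustrative rather than the platform's actual routines.

```python
import random

N_BLOCKS_PER_CONDITION = 10  # assumption for the sketch; not specified in the registration

def assign_randomization(rng: random.Random) -> dict:
    """Sketch of one respondent's computer-generated assignment."""
    # Equal-probability assignment to the two experimental conditions.
    condition = rng.choice(["no_public_option", "public_option"])

    # One pre-generated design block (survey version) within the condition;
    # task order and alternative positions are fixed within a block.
    design_block = rng.randrange(N_BLOCKS_PER_CONDITION)

    # Whether the choice tasks appear before or after the additional survey modules.
    dce_position = rng.choice(["before_survey_modules", "after_survey_modules"])

    return {
        "condition": condition,
        "design_block": design_block,
        "dce_position": dce_position,
    }
```

For example, `assign_randomization(random.Random(12345))` would return one respondent's condition, design block, and choice-task position; under the revised method, the public-option description itself varies at the profile level within the pre-generated designs rather than in this assignment step.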
Randomization Unit
Before: Individual (assignment to experimental condition) and choice-task level (for attribute profiles within each respondent).
After: Individual (assignment to experimental condition and design block) and option/profile level within choice tasks (attribute profiles, including the public-option description when applicable).
Sample size (or number of clusters) by treatment arms
Before:
- Condition A (choice sets with only privately provided AI assistants): approximately 2,500 respondents.
- Condition B (choice sets that also include a publicly provided AI assistant): approximately 2,500 respondents.
- Within Condition B, approximately 1,250 respondents are assigned to each public-provider description arm.
After:
- Condition A (choice sets with only privately provided AI assistants): approximately 2,500 respondents.
- Condition B (choice sets that also include a publicly provided AI assistant): approximately 2,500 respondents.
Intervention (Hidden)
Before: The intervention is an online discrete choice experiment (DCE) embedded in a custom survey of U.S. adults. The DCE presents respondents with hypothetical markets for AI assistant services and elicits their choices under two alternative market structures: one with only privately provided AI services and one that also includes a publicly provided AI service (a "public option"). The design is intended to isolate how provider type and service attributes affect stated adoption, substitution patterns, and willingness-to-pay for AI assistants.
- Market Structure and Framing: Respondents are randomly assigned to one of two between-subjects conditions. In the No Public Option (NPO) condition, each choice task presents two branded private AI assistant profiles -- OpenAI ChatGPT and Google Gemini -- alongside an outside option labeled "None of these options." In the Public Option (PO) condition, tasks include the same two private providers plus a government-branded AI service, such that respondents always see at least one public option in the choice set. Within the PO condition, respondents are further randomized to see the public provider under one of two framing labels: "Provided by U.S. Digital Service" (pure government provision) or "Sponsored by U.S. Digital Service, Powered by OpenAI and Google" (public-private partnership).
- Attributes and Levels: Each AI service is described by four attributes. Provider labels include the two private brands and the two government frames described above. Performance is presented on a 0-100 scale, with numeric levels 60, 70, 80, and 90 each accompanied by a short verbal anchor (for example, "60/100: handles routine requests; struggles on complex reasoning or technical topics"). Access is randomized between "Limited" (subject to usage caps and wait times) and "Unlimited" (priority access with no caps). Price is presented as a monthly subscription fee, with levels $0, $10, $20, and $30 for private providers and $0, $5, $10, and $15 for the public provider.
- Experimental Tasks: Each respondent completes ten discrete choice tasks. In each task, participants choose their preferred option from the displayed AI assistant profiles or select the "None" outside option. All profiles and choice sets are drawn from pre-generated experimental designs for the NPO and PO arms. The design rules out choice sets with duplicate profiles or strictly dominated alternatives and maintains reasonable balance in attribute-level frequencies across the set of tasks.
After: The intervention is an online discrete choice experiment (DCE) embedded in a custom survey of U.S. adults. The DCE presents respondents with hypothetical markets for AI assistant services and elicits their choices under two alternative market structures: one with only privately provided AI services and one that also includes a publicly provided AI service (a "public option"). The design is intended to isolate how provider type and service attributes affect stated adoption, substitution patterns, and willingness-to-pay for AI assistants.
- Market Structure and Provider Labels: Respondents are randomly assigned to one of two between-subjects conditions. (1) No Public Option (NPO): Each choice task presents two private AI assistant profiles drawn from a pre-generated experimental design over provider label (OpenAI ChatGPT, Google Gemini), performance, access, and monthly price, plus an outside option ("None of these options"). The two private profiles shown in a given task may come from the same brand or different brands (for example, both profiles may be labeled ChatGPT), and the design rules out choice sets that contain two identical profiles (meaning the same provider label with the same attribute levels) or strictly dominated alternatives. (2) Public Option (PO): Each choice task presents three AI assistant profiles drawn from a pre-generated experimental design that includes both private profiles (OpenAI ChatGPT, Google Gemini) and a government-branded AI service ("Public AI assistant"), plus the outside option. PO choice sets are constrained to include at least one public profile; the remaining profiles may be private or public, subject to the same design constraints (no identical profiles; no strictly dominated alternatives).
- Public-provider label (framing): For profiles that are the public option, the displayed provider label takes one of two values: (i) "Provided by U.S. Digital Service," or (ii) "Sponsored by U.S. Digital Service, Powered by Google and OpenAI." In the PO condition, this description is randomized at the profile (option) level, so the same respondent may see both descriptions across different choice tasks.
- Attributes and Levels: Each AI service profile is defined by four attributes: provider label, performance, access, and monthly price. Provider labels include the two private brands (OpenAI ChatGPT and Google Gemini) and a government-branded public option; for the public option only, the displayed provider label additionally varies by the two public-provider framings described above. Performance is presented on a 0-100 scale, with numeric levels 60, 70, 80, and 90 each accompanied by a short verbal anchor (e.g., "60: handles routine requests; struggles on complex reasoning or technical topics"). Access is randomized between "Limited (subject to usage caps and wait times)" and "Unlimited (priority access with no caps)." Price is presented as a monthly subscription fee, with levels $0, $10, $20, and $30 for private providers and $0, $5, $10, and $15 for the public provider.
- Experimental Tasks: Each respondent completes ten discrete choice tasks. In each task, participants choose their preferred option from the displayed AI assistant profiles or select the "None" outside option. All profiles and choice sets are drawn from pre-generated experimental designs for the NPO and PO arms. The design rules out choice sets with identical profiles or strictly dominated alternatives and maintains reasonable balance in attribute-level frequencies across the set of tasks.
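The constraints named in this entry (no identical profiles, no strictly dominated alternatives, at least one public profile in PO choice sets) can be read as a feasibility check applied when the design files are generated. The sketch below uses the attribute levels stated above; it assumes the dominance comparison runs over performance, access, and price only (the registration does not spell out the rule), and the class and function names are illustrative, not part of the actual design files.

```python
from dataclasses import dataclass
from itertools import product

# Attribute levels as described in the registration.
PERFORMANCE = [60, 70, 80, 90]
ACCESS = ["Limited", "Unlimited"]
PRIVATE_PRICES = [0, 10, 20, 30]
PUBLIC_PRICES = [0, 5, 10, 15]
PRIVATE_LABELS = ["OpenAI ChatGPT", "Google Gemini"]
PUBLIC_LABELS = [
    "Provided by U.S. Digital Service",
    "Sponsored by U.S. Digital Service, Powered by Google and OpenAI",
]

@dataclass(frozen=True)
class Profile:
    provider: str
    is_public: bool
    performance: int
    access: str
    price: int

def candidate_profiles(include_public: bool) -> list[Profile]:
    """Enumerate candidate profiles for one experimental arm."""
    profiles = [
        Profile(label, False, perf, acc, price)
        for label, perf, acc, price in product(PRIVATE_LABELS, PERFORMANCE, ACCESS, PRIVATE_PRICES)
    ]
    if include_public:
        profiles += [
            Profile(label, True, perf, acc, price)
            for label, perf, acc, price in product(PUBLIC_LABELS, PERFORMANCE, ACCESS, PUBLIC_PRICES)
        ]
    return profiles

def strictly_dominates(a: Profile, b: Profile) -> bool:
    """Assumed dominance rule over performance, access, and price only."""
    at_least_as_good = (
        a.performance >= b.performance
        and a.price <= b.price
        and (a.access == "Unlimited" or b.access == "Limited")
    )
    strictly_better = (
        a.performance > b.performance
        or a.price < b.price
        or (a.access == "Unlimited" and b.access == "Limited")
    )
    return at_least_as_good and strictly_better

def valid_choice_set(profiles: list[Profile], require_public: bool) -> bool:
    """Check the design constraints for one choice task."""
    if len(set(profiles)) != len(profiles):        # no identical profiles
        return False
    if require_public and not any(p.is_public for p in profiles):
        return False                               # PO tasks need at least one public profile
    return not any(
        strictly_dominates(a, b)
        for a in profiles for b in profiles if a is not b
    )
```

Under these assumptions, a generated PO task would be kept only if `valid_choice_set(task_profiles, require_public=True)` returns True, and an NPO task only if the same check passes with `require_public=False`.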