Gaps in Generative AI Adoption: Do Misperceptions about Others Matter?

Last registered on November 17, 2025

Pre-Trial

Trial Information

General Information

Title
Gaps in Generative AI Adoption: Do Misperceptions about Others Matter?
RCT ID
AEARCTR-0015961
Initial registration date
July 10, 2025

First published
August 01, 2025, 10:05 AM EDT

Last updated
November 17, 2025, 5:23 AM EST

Locations

Region

Primary Investigator

Affiliation
University of Technology Sydney

Other Primary Investigator(s)

PI Affiliation
University of Technology Sydney

Additional Trial Information

Status
In development
Start date
2025-08-18
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines whether misperceptions about others influence university students’ adoption of generative AI. Research indicates that generative AI adoption is lower among vulnerable populations in developed countries. Prior research also shows that women tend to use generative AI tools less than men, but the underlying causes of this gap remain unclear. One possible factor is that certain subgroups may underestimate how widely generative AI is accepted or valued by others. We implement a two-stage online study with university students in Australia. The first-stage survey measures students’ generative AI use, personal attitudes, and second-order beliefs about peers and employers. Informed by the first-stage results, the second stage tests whether providing accurate information about employers’ demand for generative AI skills can update students’ directly and indirectly elicited beliefs and influence subsequent behavior. Together, the two stages deepen our understanding of how misperceptions affect both actual and reported technology use, and of how information interventions shape engagement.
External Link(s)

Registration Citation

Citation
Incekara-Hafalir, Elif and Yujiao Li. 2025. "Gaps in Generative AI Adoption: Do Misperceptions about Others Matter?" AEA RCT Registry. November 17. https://doi.org/10.1257/rct.15961-1.3
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2025-09-04
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
Stage 1 (Usage and Belief Survey)
Measures of generative AI use:
binary adoption indicator, weekly usage frequency, daily usage time
Second-order beliefs:
perceived peer usage and attitudes, perceived attitudes of employers

Stage 2 (Information-Treatment Experiment)
Measures of information treatment effect:
perceived attitudes of employers (measured once for the control group, and both before and after the information treatment for the treatment group)
willingness to pay for an AI training course
planned use of generative AI
moral acceptability of generative AI shadow use (elicited directly and indirectly)
Primary Outcomes (explanation)
Moral acceptability is measured both directly through self-reports and indirectly through a list experiment.
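
For context on the indirect measure: under the standard list-experiment design, the prevalence of the sensitive item is identified by the difference in mean item counts across the two list conditions. A minimal statement of the estimator (the notation is ours, not from the registration):

```latex
% \bar{Y}_{LT}: mean count of endorsed items in the list-treatment group
%               (J baseline items plus the sensitive item)
% \bar{Y}_{LC}: mean count in the list-control group (J baseline items only)
\hat{\pi} = \bar{Y}_{LT} - \bar{Y}_{LC}
```

Because the list randomization is crossed with the information treatment (see Experimental Design), the treatment effect on the indirectly elicited measure can be estimated as the difference in \(\hat{\pi}\) between the information-treatment and control arms.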

Secondary Outcomes

Secondary Outcomes (end points)
Stage 1 (Usage and Belief Survey)
Experience with generative AI:
types of tasks, tools used, paid subscription status
Attitudes toward generative AI:
moral attitudes, perceived benefits, direct utility benefit (enjoyable to use), direct utility cost (difficult to use), trust in accuracy, confidence in skills, patience with generative AI, concerns about using generative AI
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In the first stage, a sample of university students in Australia is recruited through the UTS Behavioural Lab to complete an online survey. Participants are asked to report their own use of generative AI tools, their personal attitudes toward generative AI, and their second-order beliefs. These beliefs include perceptions about their peers’ adoption of generative AI, peers’ moral attitudes toward generative AI, and employers’ attitudes toward generative AI in the labor market. Second-order beliefs are elicited by asking participants to guess the percentage of others who engage in specific behaviors or hold specific views. In addition to a base payment, a small number of participants whose guesses are closest to the average values receive bonus payments. The survey concludes with basic demographic questions.
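
The bonus rule rewards proximity to the elicited averages. A minimal sketch of one way such winners could be selected (the function name, tie-breaking rule, and number of winners are illustrative assumptions, not taken from the study materials):

```python
import numpy as np

def closest_guessers(guesses, n_winners=5):
    """Return indices of the n_winners guesses closest to the sample average.

    guesses: participants' guesses (e.g., perceived % of peers using
    generative AI weekly). Ties are broken by participant order, since a
    stable sort keeps equal distances in their original order.
    """
    guesses = np.asarray(guesses, dtype=float)
    target = guesses.mean()              # benchmark: the average reported value
    distance = np.abs(guesses - target)  # absolute error of each guess
    return np.argsort(distance, kind="stable")[:n_winners]

# Example: six participants guess the share of peers who use generative AI weekly
print(closest_guessers([40, 55, 70, 62, 58, 90], n_winners=2))  # -> [3 4]
```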

In the second-stage experiment, informed by the first-stage results, another sample of university students is recruited through the UTS Behavioural Lab and randomly assigned, in equal proportions (1:1), to an information-treatment group or a control group. Participants in the treatment group receive information about leaders’ attitudes toward generative AI skills in the labor market; the information describes evidence from a recent external survey indicating how generative AI skills are valued in hiring decisions. The control group does not receive any informational content and proceeds directly to the outcome questions. This design measures the effect of belief correction on participants’ perceptions of employer attitudes, their willingness to pay for an online AI training course, their planned use of generative AI, and the moral acceptability they assign to generative AI shadow use in their studies. Moral acceptability is assessed through both direct self-reports and a list experiment. The list-experiment randomization is orthogonal to the information treatment: participants in both the information-treatment arm and the control arm are further randomly split into list-treatment and list-control groups. Only participants in the list-control group, who have not seen the sensitive item embedded in a list, receive the direct moral acceptability question. This design allows us to examine how both reported and indirectly elicited beliefs are updated by the information treatment.
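
A minimal sketch of the cross-randomization under the stated 1:1 proportions (the seed, labels, and function name are illustrative, not from the study materials):

```python
import numpy as np

rng = np.random.default_rng(seed=2025)  # fixed seed so assignments are reproducible

def assign_arms(n=300):
    """Cross-randomize n participants into the 2x2 design described above.

    Returns two aligned arrays: `info` (information-treatment vs. control)
    and `list_cond` (list-treatment vs. list-control). Because the second
    1:1 split is done within each information arm, the two randomizations
    are orthogonal, yielding four cells of n/4 participants each.
    """
    info = np.repeat(["info-treatment", "info-control"], n // 2)
    rng.shuffle(info)
    list_cond = np.empty(n, dtype=object)
    for arm in ("info-treatment", "info-control"):
        idx = np.flatnonzero(info == arm)
        labels = np.repeat(["list-treatment", "list-control"], len(idx) // 2)
        rng.shuffle(labels)
        list_cond[idx] = labels
    return info, list_cond

info, list_cond = assign_arms(300)
# Only list-control participants receive the direct moral-acceptability item.
gets_direct_question = list_cond == "list-control"  # 150 of the 300
```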
Experimental Design Details
Randomization Method
Randomization done by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
200 individuals in the first stage.
300 individuals in the second stage.
Sample size: planned number of observations
200 individuals in the first stage. 300 individuals in the second stage.
Sample size (or number of clusters) by treatment arms
The first-stage sample includes 100 male and 100 female students and involves no treatment.

The second-stage sample consists of 300 students, who are randomly assigned in equal proportions to the information-treatment arm (150 students) or the control arm (150 students). Within each of these two arms, participants are further randomized, in equal proportions, into the list-treatment and list-control groups:
Information-treatment arm: 75 students in the list-treatment group and 75 students in the list-control group
Control arm: 75 students in the list-treatment group and 75 students in the list-control group
Only participants in the list-control groups receive the direct moral acceptability question, for a total of 150 students (75 in each arm).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
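No MDE is reported in the registration. For orientation only, a minimal sketch of how an MDE could be computed for the 150-vs-150 information-arm comparison, under assumptions that are ours (two-sided two-sample t-test on a continuous outcome, alpha = 0.05, power = 0.80):

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the minimum detectable effect in Cohen's d units; all
# parameters below are illustrative assumptions, not registered values.
mde = TTestIndPower().solve_power(
    effect_size=None,  # left as None so solve_power returns the effect size
    nobs1=150,         # information-treatment arm
    ratio=1.0,         # control arm of equal size
    alpha=0.05,
    power=0.80,
)
print(f"MDE (Cohen's d): {mde:.2f}")  # roughly 0.32 under these assumptions
```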
IRB

Institutional Review Boards (IRBs)

IRB Name
UTS HREC
IRB Approval Date
2025-06-22
IRB Approval Number
ETH23-8040

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials