
Fields Changed

Registration

Field Before After
Abstract

Before: This study examines whether misperceptions about others’ use of and attitudes toward generative artificial intelligence (AI) tools influence students’ own adoption, and whether these belief gaps differ by gender and other demographics. Research indicates that generative AI adoption is lower among vulnerable populations in developed countries. Prior research also shows that women tend to use generative AI tools less than men, but the underlying causes of this gap remain unclear. One possible factor is that certain subgroups may underestimate how widely generative AI is used or valued by peers and employers. We conduct a two-stage online study with university students in Australia. The first-stage survey measures actual generative AI use, personal attitudes, and perceptions about others, and collects demographic data. The second stage, which will be designed on the basis of the first-stage data, tests the impact of potentially inaccurate beliefs and of an information treatment.

After: This study examines whether misperceptions about others influence university students’ adoption of generative AI. Research indicates that generative AI adoption is lower among vulnerable populations in developed countries. Prior research also shows that women tend to use generative AI tools less than men, but the underlying causes of this gap remain unclear. One possible factor is that certain subgroups may underestimate how widely generative AI is accepted or valued by others. We implement a two-stage online study with university students in Australia. The first-stage survey measures students’ generative AI use, personal attitudes, and second-order beliefs about peers and employers. Informed by the first-stage results, the second stage of the project tests whether providing accurate information about employers’ demand for generative AI skills can update students’ directly and indirectly elicited beliefs and influence subsequent behavior. Together, the study aims to deepen our understanding of how misperceptions affect technology use, reported use, and the effects of information interventions on engagement.
Last Published

Before: August 17, 2025 10:03 AM

After: November 17, 2025 05:23 AM
Primary Outcomes (End Points)

Before:
Measures of generative AI use: binary adoption indicator, weekly usage frequency, daily usage time
Second-order beliefs: perceived peer usage and attitudes, perceived attitudes of employers

After:
Stage 1 (Usage and Belief Survey)
Measures of generative AI use: binary adoption indicator, weekly usage frequency, daily usage time
Second-order beliefs: perceived peer usage and attitudes, perceived attitudes of employers
Stage 2 (Information-Treatment Experiment)
Measures of information treatment effect: perceived attitudes of employers (measured once for the control group and both before and after the information treatment for the treatment group); willingness to pay for an AI training course; planned use of generative AI; moral acceptability of generative AI shadow use (elicited directly and indirectly)
Primary Outcomes (Explanation)

Moral acceptability is measured both directly through self-reports and indirectly through a list experiment.
Experimental Design (Public)

Before: A sample of university students in Australia is recruited through the UTS Behavioural Lab to complete an online survey. Participants are asked to report their own use of generative AI tools, their personal attitudes toward generative AI, and their second-order beliefs. These beliefs include perceptions about their peers’ adoption of generative AI, peers’ moral attitudes toward generative AI, and employers’ attitudes toward generative AI in the labor market. Second-order beliefs are elicited by asking participants to guess the percentage of others who engage in specific behaviors or hold specific views. In addition to a base payment, a small number of participants whose guesses are closest to the average values receive bonus payments. The survey concludes with basic demographic questions. In the second-stage experiment, participants will be randomly assigned in a 1:1 ratio to either a treatment or a control group. The treatment group will receive corrective information (e.g., a bar graph summarising peer generative AI usage from stage 1) before completing the outcome questions. The control group will not receive any such information and will proceed directly to the same outcome tasks. This random assignment is designed to estimate the effect of belief correction on generative AI-related attitudes and willingness to pay. The details of the second-stage experiment will be based on the results of the first stage, and this preregistration will be updated accordingly before second-stage data collection.

After: In our first stage, a sample of university students in Australia is recruited through the UTS Behavioural Lab to complete an online survey. Participants are asked to report their own use of generative AI tools, their personal attitudes toward generative AI, and their second-order beliefs. These beliefs include perceptions about their peers’ adoption of generative AI, peers’ moral attitudes toward generative AI, and employers’ attitudes toward generative AI in the labor market. Second-order beliefs are elicited by asking participants to guess the percentage of others who engage in specific behaviors or hold specific views. In addition to a base payment, a small number of participants whose guesses are closest to the average values receive bonus payments. The survey concludes with basic demographic questions. In the second-stage experiment, informed by the first-stage results, another sample of university students is recruited through the UTS Behavioural Lab and randomly assigned, in equal proportions (1:1), to an information-treatment group or a control group. Participants in the treatment group receive information about leaders’ attitudes toward generative AI skills in the labor market. The information describes evidence from a recent external survey indicating how generative AI is valued in hiring decisions. The control group does not receive any informational content and proceeds directly to the outcome questions. This design measures the effect of belief correction on participants’ perceptions of employer attitudes, their willingness to pay for an online AI training course, their planned use of generative AI, and the moral acceptability of generative AI shadow use in their studies. Moral acceptability is assessed through both direct self-reports and a list experiment. This list-experiment randomization is orthogonal to the information treatment: participants in both the information-treatment arm and the control arm are further randomly split into list-treatment and list-control groups. Only participants in the list-control group, who have not seen the sensitive item embedded in a list, receive the direct moral acceptability question. This design allows us to examine how reported and indirectly elicited beliefs are updated by the information treatment.
Planned Number of Clusters

Before: 200 individuals in the first stage. The number for the second stage will be based on the results of the first stage, and the preregistration will be updated accordingly before second-stage data collection.

After: 200 individuals in the first stage. 300 individuals in the second stage.
Planned Number of Observations

Before: 200 individuals in the first stage. The number for the second stage will be based on the results of the first stage, and the preregistration will be updated accordingly before second-stage data collection.

After: 200 individuals in the first stage. 300 individuals in the second stage.
Sample size (or number of clusters) by treatment arms

Before: The first-stage sample includes 100 male and 100 female students. In the second stage, participants will be randomly assigned to treatment and control groups in equal proportions (1:1), and the information treatment details will be based on the first-stage results. The preregistration will be updated before the treatment and data collection in the second stage.

After: The first-stage sample includes 100 male and 100 female students and involves no treatment. The second-stage sample consists of 300 students, who are randomly assigned in equal proportions to the information-treatment arm (150 students) or the control arm (150 students). Within each of these two arms, participants are further randomized, in equal proportions, into the list-treatment and list-control groups:
- Information-treatment arm: 75 students in the list-treatment group and 75 students in the list-control group
- Control arm: 75 students in the list-treatment group and 75 students in the list-control group
Only participants in the list-control groups receive the direct moral acceptability question, for a total of 150 students (75 in each arm).
Additional Keyword(s)

Before: Generative AI

After: Generative AI, Inequality
Secondary Outcomes (End Points)

Before:
Experience with generative AI: type of tasks, tools used, paid subscription
Attitudes toward generative AI: moral attitudes, perceived benefits, direct utility benefit (enjoyable to use), direct utility cost (difficult to use), trust in accuracy, confidence in skills, patience with generative AI, concerns about using GenAI

After:
Stage 1 (Usage and Belief Survey)
Experience with generative AI: type of tasks, tools used, paid subscription
Attitudes toward generative AI: moral attitudes, perceived benefits, direct utility benefit (enjoyable to use), direct utility cost (difficult to use), trust in accuracy, confidence in skills, patience with generative AI, concerns about using GenAI
Secondary Outcomes (Explanation)

We include a list experiment to test whether social desirability bias influences responses to the moral attitude question.
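The list experiment is typically analyzed with the standard difference-in-means estimator: the list-treatment group sees the sensitive item added to a baseline list of non-sensitive items, so the gap in mean reported counts between the two groups estimates the share endorsing the sensitive item. A minimal sketch (hypothetical illustration, not the authors' analysis code):

```python
def list_experiment_estimate(treatment_counts, control_counts):
    """Difference-in-means estimator for a list experiment.

    treatment_counts: item counts from respondents whose list included
        the sensitive (moral acceptability) item.
    control_counts: item counts from respondents who saw only the
        baseline, non-sensitive items.
    Returns the estimated proportion endorsing the sensitive item.
    """
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    mean_control = sum(control_counts) / len(control_counts)
    return mean_treatment - mean_control
```

For example, if list-treatment respondents report 3.0 items on average and list-control respondents report 2.5, the estimated prevalence of the sensitive attitude is 0.5. Comparing this indirect estimate with the direct self-reports from the list-control group is what reveals social desirability bias.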