Experimental Design
This study uses a randomized online experiment to test whether informational interventions and transparent AI-use policies can increase AI adoption and influence both task outcomes and workers’ experience with AI.
Stage 0 (S0): Screening for valid subjects
Following the approach of Celebi et al. (2026), participants first complete a screening survey designed to identify valid and attentive respondents before entering the main study. This stage includes attention checks, comprehension questions, and basic demographic measures to ensure that participants are reading instructions carefully and engaging thoughtfully with the survey content. Only participants who successfully pass the screening criteria are invited to continue to Survey 1 (S1). Based on previous pilots, we expect approximately 3,000 participants to pass the S0 screening and complete S1.
Survey 1 (S1): 2x2 factorial design
In S1, participants are randomly assigned to one of four conditions generated by crossing two independent interventions in a 2x2 factorial design. Randomization is implemented automatically when participants begin the survey.
The first factor is an informational AI-productivity intervention. Participants assigned to this treatment receive a brief message highlighting evidence from prior research showing that AI tools can increase productivity. This intervention is designed to encourage hands-on engagement with AI in a low-stakes context. Participants in the control condition do not receive this information.
The second factor is a transparent AI-use policy statement. Participants assigned to the transparency condition see the following statement before completing the task: “In this task, you may choose whether or not to use artificial intelligence (AI) tools (e.g., ChatGPT, Claude, Copilot, Gemini, Perplexity, or similar AI tools).
In many workplaces today, employees are allowed to use AI tools to support their work, while remaining responsible for the final output. Similarly, in this task, you are free to decide whether to use AI tools or not.
Your choice will not affect your payment or your participation in this study.”
This statement is designed to reduce concerns about being monitored or judged for AI-use decisions, thereby lowering social and psychological barriers to experimentation. Participants in the control condition do not receive this statement. Participants in the transparency condition also complete a comprehension question to verify that they have read and understood the policy statement.
The crossing of these two factors produces four experimental conditions: (A) Information + Transparency, (B) Information only, (C) Transparency only, and (D) Control, with approximately 750 participants per cell.
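The cell structure above can be sketched in code. This is an illustrative sketch only: the text says randomization is implemented automatically when participants begin the survey, but does not specify the mechanism, so the simple unblocked assignment and the function and field names below are assumptions.

```python
import random

# The four cells produced by crossing the two factors:
# (label, information treatment, transparency treatment)
CONDITIONS = [
    ("A", True, True),    # Information + Transparency
    ("B", True, False),   # Information only
    ("C", False, True),   # Transparency only
    ("D", False, False),  # Control
]

def assign_condition(rng=random):
    """Assign one participant to a cell with equal probability (1/4 each).

    Unblocked simple randomization is an assumption; the survey
    platform's actual scheme is not described in the text.
    """
    label, information, transparency = rng.choice(CONDITIONS)
    return {
        "cell": label,
        "information_treatment": information,
        "transparency_treatment": transparency,
    }

# With ~3,000 participants, equal-probability assignment yields
# roughly 750 per cell in expectation.
random.seed(0)
assignments = [assign_condition() for _ in range(3000)]
counts = {}
for a in assignments:
    counts[a["cell"]] = counts.get(a["cell"], 0) + 1
```

Under simple randomization the realized cell sizes fluctuate around 750; a blocked scheme would fix them exactly, but the text does not say which is used.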
Survey structure and outcome measurement
The survey is structured to distinguish pre-treatment moderators from post-treatment outcomes. Before treatment, participants complete a battery of attitudinal, behavioral, and psychological measures, with question order randomized across participants. These include AI-related attitudes (ATTARI-12), baseline self-efficacy, growth mindset, trust in AI, trust in technology, trust in science, empowerment, sense of agency, feelings of guilt, ethical concerns, privacy and transparency attitudes, beliefs about AI's labor market impacts by age and gender, loss aversion, risk preferences, ambiguity aversion, time preferences, Big Five personality traits, cognitive skills, and collaboration and interdependence. These variables are measured before treatment to reduce priming and are used as pre-specified moderators in the heterogeneity analysis.

Following exposure to the assigned intervention(s), participants complete an occupation-specific incentivized productivity task, such as a writing or coding task, with optional access to a generative AI tool.

Primary outcomes are organized into three families. First, task performance outcomes include productivity, measured by quality ratings and completion time, combined into a standardized productivity index. Second, task experience outcomes include task-specific self-efficacy, perceived realism, and enjoyment. Third, behavioral outcomes include AI adoption during the task, persistence and engagement, time spent, time allocation across different task components (brainstorming, drafting, and editing), and self-reported AI use outside the study.

Post-intervention, we also measure perceived benefits of AI (Humlum and Vestergaard, 2025) and predictions for labor market effects (e.g., displacement, if any).
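The standardized productivity index combining quality ratings and completion time could be constructed as follows. The aggregation rule here (equal-weighted z-scores, with completion time sign-flipped so higher values mean better performance) is a common convention but an assumption on our part; the text does not specify how the components are combined, and the example data are hypothetical.

```python
import statistics

def zscores(values):
    """Standardize a list of values to mean 0 and (sample) SD 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def productivity_index(quality, completion_time):
    """Equal-weighted z-score index of quality and speed.

    Completion time is sign-flipped so that a higher index always
    means better performance. This weighting is an assumed
    convention, not taken from the study.
    """
    zq = zscores(quality)
    zt = zscores(completion_time)
    return [(q - t) / 2 for q, t in zip(zq, zt)]

# Hypothetical data for four participants
quality = [70, 85, 60, 90]   # quality ratings (higher = better)
minutes = [30, 25, 40, 20]   # completion time (lower = better)
index = productivity_index(quality, minutes)
```

By construction the index has mean zero in the sample, so treatment effects on it are read in standard-deviation units of the control distribution when standardization uses control-group moments (a further detail the text leaves open).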
As secondary outcomes, we will analyze participants' perceptions of AI's broader effects on work, productivity, health, and well-being, as well as self-reported well-being and health (life satisfaction, job satisfaction, mental health), measured post-treatment in S1 and again at S2. We also measure expected changes in participants' use of generative AI tools over the next six months.

Sociodemographic and employment characteristics are collected after treatment, as these are stable individual attributes that cannot be affected by the intervention.

Following S1, participants will be invited to complete a follow-up Survey 2 (S2), contingent on funding. The purpose of S2 is to measure the longer-run effects of the interventions on AI-related attitudes, AI adoption, perceptions, well-being, health, and productivity-related outcomes. S2 will also assess whether the effects observed in S1 persist over time.