AI Usage in Research and Attitudes Towards AI

Last registered on April 23, 2025

Pre-Trial

Trial Information

General Information

Title
AI Usage in Research and Attitudes Towards AI
RCT ID
AEARCTR-0015769
Initial registration date
April 17, 2025

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 23, 2025, 10:19 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
University of Ottawa

Other Primary Investigator(s)

PI Affiliation
University of Ottawa

Additional Trial Information

Status
In development
Start date
2024-02-14
End date
2025-08-31
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
This study will examine the use of generative AI tools in academic research and teaching, and the impact of exposure to an AI-focused experimental intervention on subsequent AI usage and attitudes. The study follows up with participants of the AI Replication Games (Brodeur et al., 2025), a randomized experiment conducted in 2024, in which participants were assigned to work with or without AI tools during a research replication event. The follow-up survey collects data on participants' AI usage, work behavior, and attitudes towards AI. The planned analyses will estimate treatment effects of the intervention, explore whether effects vary by prior AI usage, and use randomized assignment as an instrument to examine the relationship between AI use and attitudes towards it.

Research Questions:

Exploratory Research Questions
RQ1: How do academic researchers use generative AI tools in their work?
• Descriptive evidence on patterns, intensity, and purposes of AI use across typical research-related tasks.

RQ2: What are academic researchers’ attitudes towards AI?
• Descriptive evidence on perceived risks, benefits, trust, and views about the future impact of AI.
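
For the descriptive questions RQ1 and RQ2, the analysis amounts to simple summaries of the survey items. A minimal sketch in Python; the file and column names are illustrative assumptions, not the study's actual variable names:

```python
# Descriptive summaries for RQ1/RQ2 (all names below are illustrative assumptions).
import pandas as pd

df = pd.read_csv("followup_survey.csv")  # hypothetical survey export

# RQ1: patterns and intensity of AI use
print(df["ai_use_frequency"].value_counts(normalize=True))   # overall frequency of use
print(df.filter(like="ai_use_task_").mean())                  # average intensity by task

# RQ2: attitudes towards AI
print(df.filter(like="attitude_").describe())                 # risk, benefit, and trust items
```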

Primary Research Questions
RQ3: What is the effect of exposure to AI during the AI Replication Games intervention on subsequent AI usage, work behavior, and attitudes towards AI?
• We will estimate treatment effects on AI usage and attitudes, comparing participants assigned to the Human-only, AI-assisted, and AI-led conditions.
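
As an illustration of the RQ3 comparison, each primary outcome could be regressed on treatment-arm indicators, with standard errors clustered at the team level as described under Experimental Design. This is only a sketch; the variable names (outcome, arm, team_id) and the optional strata controls are assumptions, not the registered specification:

```python
# Sketch of the RQ3 treatment-effect estimation (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_survey.csv")  # hypothetical survey export

# 'arm' takes the values "human_only", "ai_assisted", "ai_led"; human-only is the reference.
# Stratification variables (preferred software, participation mode) could be added as controls.
model = smf.ols(
    "outcome ~ C(arm, Treatment(reference='human_only'))",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["team_id"]})  # team-level clustering
print(model.summary())
```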

RQ4: Does AI usage affect attitudes towards AI?
• Using the randomized treatment assignment as an instrument for AI usage, we will estimate whether greater AI use leads to changes in attitudes towards AI.
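
The RQ4 strategy could be implemented as two-stage least squares, with randomized assignment instrumenting for AI usage. A hedged sketch using the linearmodels package; the variable names are placeholders, and collapsing the two AI arms into a single binary instrument is a simplification for illustration:

```python
# Sketch of 2SLS for RQ4: randomized assignment instruments for AI usage (illustrative names).
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("followup_survey.csv")  # hypothetical survey export
df["const"] = 1
df["assigned_ai"] = (df["arm"] != "human_only").astype(int)  # 1 if assigned to either AI arm

iv = IV2SLS(
    dependent=df["attitude_item"],    # one attitude outcome at a time
    exog=df[["const"]],
    endog=df["ai_usage"],             # e.g., general frequency of AI use
    instruments=df[["assigned_ai"]],
).fit(cov_type="clustered", clusters=df["team_id"])
print(iv.summary)
```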

Secondary Research Questions
RQ5: Do treatment effects on AI usage and attitudes vary by participants' prior experience with AI?
• We will examine whether treatment effects on AI usage and attitudes differ for participants who were already using AI tools prior to the intervention (early adopters), compared to those with little or no prior AI experience.

RQ6: Do treatment effects on AI usage and attitudes vary by time since the AI Replication Games?
• We will examine whether treatment effects on AI usage, work behavior, and attitudes differ depending on the time elapsed between the AI Replication Games and the follow-up survey.
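
Both secondary questions would typically be addressed with interaction models; a minimal sketch, again with hypothetical column names:

```python
# Sketch of heterogeneity analyses for RQ5 and RQ6 (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_survey.csv")  # hypothetical survey export
cluster = {"groups": df["team_id"]}

# RQ5: interact treatment with an indicator for prior AI use (early adopters).
rq5 = smf.ols("outcome ~ treated * prior_ai_user", data=df).fit(
    cov_type="cluster", cov_kwds=cluster
)

# RQ6: interact treatment with months elapsed between the event and the follow-up survey.
rq6 = smf.ols("outcome ~ treated * months_since_event", data=df).fit(
    cov_type="cluster", cov_kwds=cluster
)
print(rq5.params)
print(rq6.params)
```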
External Link(s)

Registration Citation

Citation
Brodeur, Abel and David Valenta. 2025. "AI Usage in Research and Attitudes Towards AI." AEA RCT Registry. April 23. https://doi.org/10.1257/rct.15769-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The intervention was participation in the AI Replication Games, a series of events organized in 2024 by the Institute for Replication (I4R). These events aimed to experimentally test how access to AI tools influences researchers' ability to reproduce published quantitative social science studies.

Participants (graduate students, postdoctoral fellows, faculty, and other researchers in various fields, primarily the social sciences) were randomly assigned to one of three experimental conditions:
1. Human-only teams — No AI tools allowed.
2. AI-assisted teams — Free access to and use of ChatGPT (GPT-4/4o), without restrictions.
3. AI-led teams — Could interact with the study materials only via ChatGPT. They were prohibited from reading the paper, directly inspecting the data, or manually exploring the code. All analysis had to be conducted by prompting ChatGPT.

Participants worked in small teams (usually 2–4 people) to perform three tasks:
• Reproduce selected numerical results from an assigned published article.
• Identify coding errors or data irregularities.
• Propose and implement robustness checks.

All teams were given 7 hours to complete their tasks.

AI Training Component
Before participating, all members of the AI-assisted and AI-led teams were required to complete a mandatory training session of about one hour on how to use ChatGPT for replication purposes, or to watch a recording of it. The training covered how to interact with ChatGPT effectively, including how to craft prompts, upload and process documents and datasets, use the built-in Python interpreter for running analyses, and share or save AI conversations. It also included examples of using ChatGPT for debugging, code translation, and conducting reproducibility tasks. The training was delivered online and recorded.
Participation in the training was optional for human-only teams. Some human-only participants may have attended the training (attendance was not tracked), but they were prohibited from using AI tools during the event itself.
Intervention (Hidden)
Intervention Start Date
2024-02-14
Intervention End Date
2024-11-22

Primary Outcomes

Primary Outcomes (end points)
AI Usage Outcomes
• General frequency of AI use
• Number of conversations in main AI tool(s) used
• Average AI usage intensity across academic tasks (index)
• Average perceived efficiency from AI use across academic tasks (index)

Work Behavior Outcomes
• Frequency of discussing work or brainstorming ideas with colleagues
• Frequency of asking colleagues for advice

Attitudes Towards AI
• Perceived error rates and danger of AI
• Preferences for interacting with AI versus humans in routine transactions
• Expectations about the societal impact of AI, including both positive and negative future consequences
We will analyze these items individually and will not construct an aggregate index.
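
The two task-level indices listed under AI Usage Outcomes above (usage intensity and perceived efficiency) could be built as simple averages of standardized task items; this is only a sketch under that assumption, with hypothetical item names:

```python
# Sketch of index construction: average of z-scored task items (hypothetical column names).
import pandas as pd

df = pd.read_csv("followup_survey.csv")  # hypothetical survey export
usage_items = [c for c in df.columns if c.startswith("ai_use_task_")]

z = (df[usage_items] - df[usage_items].mean()) / df[usage_items].std()
df["ai_usage_index"] = z.mean(axis=1)  # average AI usage intensity across academic tasks
# The perceived-efficiency index would be constructed analogously from the efficiency items.
```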
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
AI Usage Outcomes
• Paid subscription to AI tools
• Task-specific AI usage and efficiency (individual items)
• AI tools used (exploratory)

Work Behavior Outcomes
• Tasks performed (binary) in the past 6 months (individual items)
• RA hiring and contracted hours
• Refereeing activity (offers accepted/refused)

Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study is a follow-up survey of participants in the AI Replication Games, a series of events conducted in 2024 by the Institute for Replication (I4R).

About 290 participants were randomly assigned to one of three conditions: a human-only control group (no AI access), an AI-assisted group with unrestricted access to ChatGPT, and an AI-led group restricted to interacting with study materials exclusively through ChatGPT. For full details of the intervention, see the “Intervention” section.

Randomization was conducted at the individual level and was stratified by participants' preferred software (Stata or R) and their mode of participation (in-person or virtual). After randomization, participants were grouped into small teams (usually 2–4 people) within the same treatment condition to complete the replication tasks together. AI-assisted and AI-led participants received mandatory training on ChatGPT prior to the event.
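
For concreteness, individual-level assignment stratified by preferred software and participation mode could look like the sketch below; the roster file and column names are assumptions for illustration and not the organizers' actual procedure:

```python
# Illustrative sketch of stratified individual-level randomization into three arms.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
participants = pd.read_csv("participants.csv")  # hypothetical roster with 'software' and 'mode'
arms = ["human_only", "ai_assisted", "ai_led"]

def assign_within_stratum(group: pd.DataFrame) -> pd.Series:
    # Random permutation within the stratum, then arms dealt out in rotation for balance.
    order = rng.permutation(len(group))
    return pd.Series(np.take(arms, order % len(arms)), index=group.index)

participants["arm"] = (
    participants
    .groupby(["software", "mode"], group_keys=False)
    .apply(assign_within_stratum)
)
```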

The follow-up survey will be conducted in April and May 2025, between 5 and 14 months after the intervention, depending on the date of participation. It collects information on participants’ current AI usage, perceived effects of AI tools on their work, attitudes towards AI, and reflections on their experience in the AI Replication Games. The survey also measures recall of treatment assignment and self-reported behavioral change resulting from participation in the experiment.

All participants of the AI Replication Games are invited to complete the survey.
Experimental Design Details
Randomization Method
computer
Randomization Unit
Individual

Although treatment was assigned at the individual level, participants completed the tasks in teams formed within treatment arms. Therefore, we will cluster standard errors at the team level in our analyses.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
About 290 individuals/researchers, spread across about 100 teams (formed after treatment assignment).
Sample size: planned number of observations
About 290 individuals/researchers
Sample size (or number of clusters) by treatment arms
About 90 in each treatment condition
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Office of Research Ethics and Integrity, University of Ottawa
IRB Approval Date
2025-04-14
IRB Approval Number
S-03-25-11172
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials