AI & Expertise research with patent lawyers

Last registered on May 21, 2025

Pre-Trial

Trial Information

General Information

Title
AI & Expertise research with patent lawyers
RCT ID
AEARCTR-0015823
Initial registration date
May 12, 2025

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 21, 2025, 2:10 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
MIT Department of Economics

Other Primary Investigator(s)

PI Affiliation
Massachusetts Institute of Technology
PI Affiliation
Magnit on behalf of Google
PI Affiliation
Google

Additional Trial Information

Status
In development
Start date
2025-05-09
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The study explores how access to AI tools, supplemented by training on the most appropriate ways to apply AI in patent drafting, affects the professional output, skill acquisition, and job satisfaction of patent lawyers at various levels of expertise. In particular, it tests whether providing access to a specialized AI assistance tool for drafting and critiquing patent claims improves the quality and speed with which tasks are completed, as well as the speed and extent of skill acquisition. Findings will inform how AI reshapes work in high-skill domains.
External Link(s)

Registration Citation

Citation
Autor, David et al. 2025. "AI & Expertise research with patent lawyers." AEA RCT Registry. May 21. https://doi.org/10.1257/rct.15823-1.0
Experimental Details

Interventions

Intervention(s)
Control: Lawyers will use their traditional process without AI assistance when performing experimental tasks.
Treatment: Lawyers will have access to a specialized AI tool when performing experimental tasks and in their day-to-day work after receiving training on how to work with AI effectively in the patent drafting context.
Intervention (Hidden)
Intervention Start Date
2025-05-09
Intervention End Date
2025-10-27

Primary Outcomes

Primary Outcomes (end points)
Quality of completed tasks (as rated by independent third-party raters); time on task (self-reported); perceived quality and time gains or losses (self-reported); satisfaction with the patent drafting experience (self-reported)
Primary Outcomes (explanation)
The quality of each test task will be assessed independently by two raters according to an assessment rubric covering the following components:

- Enforceability & legal strength
- Technical accuracy
- Strategic ambiguity
- Completeness & alignment with patent objectives
- Clarity
Each rubric component above will be assessed on a 1-5 scale with clear definitions for each score.
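As an illustration only, the two independent ratings could be combined into a single per-task quality score as sketched below. The component names follow the registered rubric, but the averaging scheme and function name are assumptions, not the registered analysis plan.

```python
# Hypothetical aggregation of two raters' 1-5 rubric scores into one
# composite quality score per task. The equal-weight averaging is an
# assumption; the registration does not specify an aggregation rule.

RUBRIC = [
    "enforceability",       # enforceability & legal strength
    "technical_accuracy",
    "strategic_ambiguity",
    "completeness",         # completeness & alignment with patent objectives
    "clarity",
]

def composite_score(rater_a: dict, rater_b: dict) -> float:
    """Mean of the two raters' averages across the five 1-5 components."""
    for scores in (rater_a, rater_b):
        assert all(1 <= scores[c] <= 5 for c in RUBRIC), "scores must be 1-5"
    mean_a = sum(rater_a[c] for c in RUBRIC) / len(RUBRIC)
    mean_b = sum(rater_b[c] for c in RUBRIC) / len(RUBRIC)
    return (mean_a + mean_b) / 2
```

For example, a task scored 4 on every component by one rater and 2 on every component by the other would receive a composite score of 3.0.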

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This experiment is designed as a two-part randomized controlled trial (RCT) to investigate the impact of an AI tool on patent lawyers' work. The first part involves lab experiments where junior and expert patent associates are asked to draft patent sections and critique existing drafts, both with and without AI assistance. These tasks are performed twice, with a three-month interval, and the outputs are evaluated by blinded third-party raters for quality. The second part is a field experiment where the AI tool is rolled out to patent law firms working with Google, and the quality of their real-world patent drafts created with and without AI is compared by internal raters, along with surveys assessing user experience and perceived changes in efficiency and proficiency.
Experimental Design Details
Abstract:
This study evaluates the impact of an AI tool on the productivity and quality of patent drafting among professional patent lawyers. We implement a two-part experimental design in collaboration with law firms drafting patents for Google. The design assesses both performance in controlled test environments and performance of AI integration in real-world workflows.
Design Overview:
Participants are randomized into a control group (traditional methods) or treatment group (AI-assisted drafting with prior tool-specific training). The treatment group is twice the size of the control to accommodate participant recruiting constraints and Google product user acquisition goals. These groups are used as control and treatment for both parts of the experiment.
In Part 1, we conduct two rounds of lab experiments: shortly after onboarding and again three months later. In each round, participants complete a standardized drafting task based on novel invention materials: inventor notes, a primary claim, relevant figures, and background information. Tasks require drafting 5–8 dependent claims and a detailed description for the main embodiment, followed by critique and redline edits of two sample patent drafts. Participants self-report time spent per sub-task, with total time capped at four hours over a one-week window.
Outputs are scored by blinded expert raters (e.g., patent examiners or litigators) using a structured rubric assessing enforceability, technical accuracy, strategic ambiguity, completeness, and clarity. Post-task surveys capture perceptions of tool impact, time spent, and satisfaction. This phase tests whether AI-assisted lawyers produce higher-quality drafts in less time and exhibit steeper improvement over time.

Part 2 involves a field experiment during the three-month study. The AI tool is rolled out across participating firms’ Google-related patent work. For each firm, at least five baseline patents (pre-AI adoption) are collected. New drafts created during the study are compared against these baselines and across treatment groups. The drafts are evaluated upon inventor submission by internal Google patent counsel, blind to treatment status, using the same rubric as in Part 1. Recurring surveys track tool usage, perceived quality and speed changes, job satisfaction, and any evolution in firm workflows. This phase tests whether real-world patent drafts improve after AI adoption, and whether proficiency gains are faster among AI users, as measured by patent quality, number of review cycles, and self-assessed skill.
Randomization Method
Computer randomization from a list
Randomization Unit
Individual
Was the treatment clustered?
No
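A minimal sketch of what "computer randomization from a list" at the individual level could look like, given the 2:1 treatment-to-control allocation described in the design. The function name, seed, and exact allocation mechanics are illustrative assumptions.

```python
import random

def assign_arms(participants: list[str], seed: int = 2025) -> dict[str, str]:
    """Shuffle the participant list and assign the first third to control
    and the remainder to treatment (2:1 treatment:control).
    Illustrative only; not the registered randomization code."""
    rng = random.Random(seed)          # fixed seed makes the draw reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    n_control = len(shuffled) // 3
    return {
        pid: ("control" if i < n_control else "treatment")
        for i, pid in enumerate(shuffled)
    }
```

Shuffling the full list before splitting ensures every participant has the same ex ante assignment probabilities, and the fixed seed makes the assignment auditable after the fact.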

Experiment Characteristics

Sample size: planned number of clusters
350 maximum
Sample size: planned number of observations
350 maximum
Sample size (or number of clusters) by treatment arms
Control: 33% of participants
Treatment: 67% of participants
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
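The MDE field is left blank in the registration. Purely as an illustration, a standard two-sample formula under assumed parameters (two-sided α = 0.05, 80% power, no clustering, 350 participants split roughly 1:2 control-to-treatment) can be sketched as follows; the sample split and the formula's applicability here are assumptions.

```python
import math

def mde_standardized(n_control: int, n_treatment: int,
                     z_alpha: float = 1.96,   # two-sided alpha = 0.05
                     z_beta: float = 0.84     # 80% power
                     ) -> float:
    """Minimum detectable effect, in standard-deviation units, for a
    two-sample comparison of means with unequal group sizes."""
    return (z_alpha + z_beta) * math.sqrt(1 / n_control + 1 / n_treatment)

# Illustrative only: 350 participants split ~1:2 control:treatment.
print(round(mde_standardized(117, 233), 2))  # prints 0.32
```

Under these assumptions the design could detect effects of roughly a third of a standard deviation; the actual registered MDE is not stated.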
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials