Experimental Design Details
Abstract:
This study evaluates the impact of an AI tool on the productivity and quality of patent drafting among professional patent lawyers. We implement a two-part experimental design in collaboration with law firms that draft patents for Google. The design assesses both performance on controlled test tasks and the effects of AI integration in real-world workflows.
Design Overview:
Participants are randomized into a control group (traditional methods) or a treatment group (AI-assisted drafting, preceded by tool-specific training). The treatment group is twice the size of the control group to accommodate participant recruiting constraints and Google product user acquisition goals. The same group assignment is used for both parts of the experiment.
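A minimal sketch of the 2:1 assignment, assuming hypothetical participant IDs; exact allocation is enforced by shuffling a fixed label list rather than independent coin flips, and stratifying by firm would be a plausible refinement not shown here:

```python
import random

def assign_groups(participant_ids, seed=42):
    """Randomize participants into treatment and control at a 2:1 ratio.

    Returns a dict mapping participant ID -> "treatment" or "control".
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    n = len(participant_ids)
    n_control = n // 3  # one third to control, two thirds to treatment
    labels = ["control"] * n_control + ["treatment"] * (n - n_control)
    rng.shuffle(labels)
    return dict(zip(participant_ids, labels))

# Hypothetical usage with placeholder IDs:
assignment = assign_groups([f"lawyer_{i:03d}" for i in range(30)])
```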
In Part 1, we conduct two rounds of lab experiments: one shortly after onboarding and another three months later. In each round, participants complete a standardized drafting task based on novel invention materials: inventor notes, a primary claim, relevant figures, and background information. Each task requires drafting 5–8 dependent claims and a detailed description of the main embodiment, followed by critiquing and redlining two sample patent drafts. Participants self-report time spent per sub-task; total time is capped at four hours within a one-week window.
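One way the self-reported timing data might be structured, as a sketch; the field names are assumptions, but the sub-task breakdown mirrors the protocol's claims, description, and critique steps and the four-hour cap:

```python
from dataclasses import dataclass

MAX_TOTAL_MINUTES = 240  # four-hour cap from the protocol

@dataclass
class LabSubmission:
    participant_id: str
    round_number: int           # 1 = post-onboarding, 2 = three months later
    minutes_claims: float       # drafting 5-8 dependent claims
    minutes_description: float  # detailed description of main embodiment
    minutes_critique: float     # critique + redline of two sample drafts

    def total_minutes(self) -> float:
        return self.minutes_claims + self.minutes_description + self.minutes_critique

    def within_cap(self) -> bool:
        return self.total_minutes() <= MAX_TOTAL_MINUTES
```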
Outputs are scored by blinded expert raters (e.g., patent examiners or litigators) using a structured rubric assessing enforceability, technical accuracy, strategic ambiguity, completeness, and clarity. Post-task surveys capture perceived tool impact, time spent, and satisfaction. This phase tests whether AI-assisted lawyers produce higher-quality drafts in less time and improve more steeply between rounds.
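A minimal sketch of how per-draft rubric scores could be aggregated across blinded raters; the 1–5 scale and unweighted averaging are assumptions for illustration, not part of the protocol:

```python
from statistics import mean

RUBRIC_DIMENSIONS = [
    "enforceability", "technical_accuracy",
    "strategic_ambiguity", "completeness", "clarity",
]

def aggregate_scores(ratings):
    """Average each rubric dimension across raters.

    `ratings` is a list of dicts, one per rater, mapping dimension -> score.
    Returns per-dimension means plus an unweighted composite.
    """
    per_dim = {d: mean(r[d] for r in ratings) for d in RUBRIC_DIMENSIONS}
    per_dim["composite"] = mean(per_dim[d] for d in RUBRIC_DIMENSIONS)
    return per_dim

# Hypothetical: two raters scoring one draft on an assumed 1-5 scale
example = aggregate_scores([
    {"enforceability": 4, "technical_accuracy": 5, "strategic_ambiguity": 3,
     "completeness": 4, "clarity": 4},
    {"enforceability": 3, "technical_accuracy": 4, "strategic_ambiguity": 4,
     "completeness": 4, "clarity": 5},
])
```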
Part 2 is a field experiment running over the three-month study period. The AI tool is rolled out across participating firms' Google-related patent work. For each firm, at least five baseline patents (drafted before AI adoption) are collected. New drafts created during the study are compared against these baselines and across treatment groups. Drafts are evaluated at the time of inventor submission by internal Google patent counsel, blind to treatment status, using the same rubric as in Part 1. Recurring surveys track tool usage, perceived changes in quality and speed, job satisfaction, and any evolution in firm workflows. This phase tests whether real-world patent drafts improve after AI adoption, and whether proficiency gains accrue faster among AI users, as measured by patent quality, number of review cycles, and self-assessed skill.
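The baseline-versus-post comparison across treatment groups maps naturally onto a difference-in-differences regression. A sketch under assumed column names and a placeholder data file, with standard errors clustered by firm:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per evaluated draft, with hypothetical columns:
#   quality - composite rubric score from Google patent counsel
#   treated - 1 if the drafting lawyer is in the AI arm, else 0
#   post    - 1 if drafted after tool rollout, 0 for baseline patents
#   firm    - firm identifier, absorbed as fixed effects
df = pd.read_csv("field_drafts.csv")  # placeholder data source

# Difference-in-differences: the treated:post coefficient estimates the
# change in quality attributable to AI adoption, net of firm-level
# differences and the common pre/post trend.
model = smf.ols("quality ~ treated * post + C(firm)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(model.summary())
```

The same specification could be re-run with review-cycle counts or self-assessed skill as the outcome, matching the three outcome measures named above.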