Artificial Intelligence and Tax Behavior: A Field Experiment

Last registered on May 06, 2026


Trial Information

General Information

Title
Artificial Intelligence and Tax Behavior: A Field Experiment
RCT ID
AEARCTR-0018430
Initial registration date
May 01, 2026


First published
May 06, 2026, 11:03 AM EDT


Locations

There is information in this trial unavailable to the public. Use the button below to request access.


Primary Investigator

Affiliation
UCLA

Other Primary Investigator(s)

PI Affiliation
UT Dallas
PI Affiliation
University of Michigan
PI Affiliation
University of Virginia

Additional Trial Information

Status
In development
Start date
2026-05-01
End date
2026-05-16
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We conduct a field experiment on how artificial intelligence affects households' property tax appeals. We mail postcards to a random sample of households that pay property taxes and could legally reduce their taxes by filing an appeal.
External Link(s)

Registration Citation

Citation
Holz, Justin et al. 2026. "Artificial Intelligence and Tax Behavior: A Field Experiment." AEA RCT Registry. May 06. https://doi.org/10.1257/rct.18430-1.0
Experimental Details

Interventions

Intervention(s)
We mail postcards to a random sample of households that pay property taxes and could legally reduce their taxes by filing an appeal.
Intervention Start Date
2026-05-01
Intervention End Date
2026-05-15

Primary Outcomes

Primary Outcomes (end points)
The main outcome of interest is whether the household files a tax appeal on its own; the 2026 filing deadline is May 15. For households assigned to the chatbot treatment, a key secondary outcome is chatbot usage: whether they interacted with it at all and, if so, what questions they asked.
Primary Outcomes (explanation)
Given the nature of the setting and the timing of the protest process, we expect the intervention to affect primarily whether households file a direct protest. For this reason, we will focus on direct protests as the primary outcome. We also plan to use direct protests in prior years as falsification, or placebo, outcomes in the spirit of an event-study analysis.

Secondary Outcomes

Secondary Outcomes (end points)
The rich administrative data will also allow us to examine additional outcomes. For example, we can estimate effects not only on whether households file an appeal, but also on whether their appeal is successful. We may also be able to obtain the Opinion of Value (for households who file an appeal), which could be an additional outcome of interest. For households assigned to the chatbot treatment, we can also analyze chat transcripts to study the content and dynamics of their interactions.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Some households are randomly selected to receive a postcard with a URL directing them to a website. Screenshots of a sample postcard and a sample website are attached to this preregistration.
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer
Randomization Unit
Household
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
61,474 households. However, as explained above in 'Experimental Design,' some of these households will need to be dropped ex post; for example, if they had already filed an appeal by the time our postcards were delivered.
Sample size: planned number of observations
61,474 households (again, some of these households will have to be excluded from the sample)
Sample size (or number of clusters) by treatment arms
45,200 households are sent a postcard, while the remaining 16,274 households are not.

Households that receive a postcard are randomly assigned to one of two sub-treatments:
- 50% are assigned to a website without a chatbot.
- 50% are assigned to an otherwise identical website with a chatbot.

Among households assigned to the chatbot condition, we further randomize between two versions:
- 50% are assigned to a more human-like version ('Marina').
- 50% are assigned to a less human-like version ('Chatbot').
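The nested assignment above can be sketched in code. This is an illustrative reconstruction only: the arm labels, fixed seed, and function name are our assumptions, not the authors' actual randomization code (which the registry describes only as "done in office by a computer").

```python
import random

def assign_treatments(household_ids, n_postcard=45200, seed=0):
    """Illustrative nested household-level randomization.

    Arm labels, the seed, and this function are assumptions for
    illustration; they are not the study's actual code.
    """
    rng = random.Random(seed)
    ids = list(household_ids)
    rng.shuffle(ids)

    assignments = {}
    postcard, control = ids[:n_postcard], ids[n_postcard:]
    for hh in control:
        assignments[hh] = "no_postcard"

    # Among postcard recipients: 50% get a website without a chatbot,
    # 50% an otherwise identical website with a chatbot.
    half = len(postcard) // 2
    for hh in postcard[:half]:
        assignments[hh] = "website_no_chatbot"

    # Among the chatbot arm: 50% a more human-like version ('Marina'),
    # 50% a less human-like version ('Chatbot').
    chatbot_arm = postcard[half:]
    for i, hh in enumerate(chatbot_arm):
        assignments[hh] = "marina" if i < len(chatbot_arm) // 2 else "chatbot"
    return assignments

assignments = assign_treatments(range(61474))
```

With 61,474 households this yields 16,274 untreated households, 22,600 in the no-chatbot website arm, and 11,300 in each chatbot arm, matching the counts above.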
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
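The registry leaves this field blank. For reference, a standard two-proportion MDE for the any-postcard vs. no-postcard comparison can be sketched as follows; the 10% baseline filing rate, 5% two-sided size, and 80% power are our assumptions, not figures from the registration.

```python
import math

def mde_two_proportions(n1, n2, p0, z_alpha=1.96, z_power=0.84):
    """Minimum detectable effect for a difference in proportions,
    with two-sided alpha = 0.05 and power = 0.80 (hardcoded z-values).
    p0 is an ASSUMED baseline appeal-filing rate, not a registry figure."""
    se = math.sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))
    return (z_alpha + z_power) * se

# Any-postcard (45,200) vs. no-postcard (16,274), assuming a 10% baseline.
mde = mde_two_proportions(45200, 16274, 0.10)  # roughly 0.8 percentage points
```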
Supporting Documents and Materials

Documents

Document Name
Screenshot of the Website
Document Type
other
Document Description
This screenshot shows the website for the treatment in which the household is assigned to the more human-like chatbot ('Marina').
File
Screenshot of the Website

MD5: 3279cfc35b22bfb47ecb2e0b0e8a3933

SHA1: 06e1baba7e035953469696d4257896108342d125

Uploaded At: May 01, 2026

Document Name
Sample Postcard
Document Type
other
Document Description
Sample Postcard
File
Sample Postcard

MD5: 02711ff16439f61c9bf253d3f1329b43

SHA1: 15465fc4022e728447096481e8fa68036aa18da6

Uploaded At: May 01, 2026

IRB

Institutional Review Boards (IRBs)

IRB Name
University of Michigan Institutional Review Board
IRB Approval Date
2026-04-10
IRB Approval Number
HUM00290813