What Shapes Perceptions of Fraud in the Social Safety Net?

Last registered on April 24, 2026

Trial Information

General Information

Title
What Shapes Perceptions of Fraud in the Social Safety Net?
RCT ID
AEARCTR-0018401
Initial registration date
April 18, 2026


First published
April 24, 2026, 8:45 AM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
Research Director at The People Lab - Harvard Kennedy School

Additional Trial Information

Status
Completed
Start date
2026-04-17
End date
2026-04-24
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Widespread concerns about fraud in the social safety net are used to justify calls to cut funding for benefits programs and to impose administrative burdens, including documentation, interview, and work requirements. Although public opinion polls measure general support for fraud reduction and administrative burdens, relatively little is known about the factors that shape perceptions of fraud in the social safety net.

In this survey experiment, we use a conjoint experimental design to disentangle the causal factors that shape perceptions of fraud in the social safety net. Approximately 2,000 U.S. adults, recruited via Prolific and representative on race, gender, age, and political affiliation, will evaluate pairs of government benefits recipients (Module A) and pairs of government benefits programs (Module B), each varying randomly on attributes such as beneficiary demographics, program type, benefit generosity, and administrative requirements. Our primary outcome is perceived fraud; secondary outcomes include support for additional verification burdens and support for continued program funding.
External Link(s)

Registration Citation

Citation
Lasky-Fink, Jessica and Giovanny Martinez Rodriguez. 2026. "What Shapes Perceptions of Fraud in the Social Safety Net?." AEA RCT Registry. April 24. https://doi.org/10.1257/rct.18401-1.0
Experimental Details

Interventions

Intervention(s)
In this survey experiment, we use a conjoint experimental design to disentangle the causal factors that shape perceptions of fraud in the social safety net.
Intervention Start Date
2026-04-17
Intervention End Date
2026-04-24

Primary Outcomes

Primary Outcomes (end points)
Fraud perception
Which person is more likely to be cheating the system to receive benefits? (Module A)
Which program has more recipients who are cheating the system to receive benefits? (Module B)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Additional burdens:
If the government is going to choose one person to conduct extra verification checks to ensure eligibility, which person should they choose? (Module A)
The government is considering imposing additional requirements to make it more difficult to access benefits. For which program should the government impose more requirements? (Module B)

Continued funding:
If the government can only continue funding one program, which program should it continue funding? (Module B only)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This is a choice-based paired conjoint experiment with two modules, counterbalanced across respondents. Each respondent completes 3 tasks per module (6 tasks total). In each task, respondents view two side-by-side profiles and select one profile for each outcome measure. Each profile varies on 16 attributes with two to four levels, randomly assigned with no restrictions on combinations.
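The fully uniform randomization described above — each attribute level drawn independently, with no restrictions on combinations — can be sketched as follows. The attribute names and levels below are hypothetical illustrations; the registration specifies 16 attributes with two to four levels each but does not enumerate them in this entry.

```python
import random

# Hypothetical attribute levels for illustration only -- the actual
# conjoint uses 16 attributes with two to four levels each.
ATTRIBUTES = {
    "program_type": ["SNAP", "Medicaid", "Unemployment insurance"],
    "benefit_amount": ["$200/month", "$500/month", "$900/month", "$1,200/month"],
    "work_requirement": ["Yes", "No"],
    "documentation": ["None", "Pay stubs", "Pay stubs and interview"],
}

def draw_profile(rng: random.Random) -> dict:
    """Sample each attribute level independently and uniformly,
    with no restrictions on combinations."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_task(rng: random.Random) -> tuple:
    """One choice task: a pair of side-by-side profiles."""
    return draw_profile(rng), draw_profile(rng)

rng = random.Random(0)  # seeded for reproducibility
tasks = [draw_task(rng) for _ in range(3)]  # 3 tasks per module
```

Because levels are drawn independently, every attribute combination is possible, which is what licenses the standard unconstrained AMCE interpretation of the resulting choice data.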
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1200 individuals
Sample size: planned number of observations
1200 individuals
Sample size (or number of clusters) by treatment arms
Module A and Module B: 1200 respondents x 3 tasks x 2 profiles = ~7,200 profile-level observations for each module
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The study is powered to detect AMCEs (average marginal component effects) as small as 4 percentage points for attributes with two levels, 5 percentage points for attributes with three levels, and 5 percentage points for attributes with four levels. All attributes in the conjoint have between two and four levels (see Table 1), so the study is adequately powered to detect effects of this magnitude across all primary analyses.
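The registered MDEs can be sanity-checked with a back-of-envelope calculation that treats the ~7,200 profile-level observations per module as independent. This sketch ignores within-respondent clustering, so it yields slightly smaller values than the MDEs stated above; the 0.5 baseline choice share is an assumption chosen for the most conservative variance.

```python
import math

# Assumed design: 1200 respondents x 3 tasks x 2 profiles per module.
n_profiles = 1200 * 3 * 2  # ~7,200 profile-level observations

def mde(levels: int, n: int = n_profiles) -> float:
    """Approximate MDE for the difference in choice shares between one
    attribute level and the reference level, using a two-sample
    proportions approximation with baseline share 0.5 (most
    conservative variance), alpha = 0.05, power = 0.80."""
    n_per_level = n / levels       # observations shown at each level
    z = 1.96 + 0.84                # z_{alpha/2} + z_{power}
    return z * math.sqrt(2 * 0.25 / n_per_level)

for k in (2, 3, 4):
    print(f"{k} levels: MDE ~ {mde(k):.3f}")
# yields roughly 0.033, 0.040, and 0.047 -- consistent with the
# registered 4/5/5 percentage-point MDEs once clustering inflates them
```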
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University Institutional Review Board
IRB Approval Date
2022-11-04
IRB Approval Number
#IRB22-1397
Analysis Plan

Analysis Plan Documents

Perceptions of Fraud in the Social Safety Net

MD5: ddf53c6c780bdd780e003ddacb75ab28

SHA1: 36d1f8a7e841856b886d50500f7cddee9170a6e1

Uploaded At: April 18, 2026