Algorithmic Inequality, Contestability, and Policy Support

Last registered on January 28, 2026

Pre-Trial

Trial Information

General Information

Title
Algorithmic Inequality, Contestability, and Policy Support
RCT ID
AEARCTR-0017778
Initial registration date
January 27, 2026

First published
January 28, 2026, 7:55 AM EST

Locations

Primary Investigator

Affiliation
University of Bologna

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2026-02-01
End date
2026-03-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Public administrations increasingly rely on algorithmic scoring to allocate scarce benefits. While automation can improve speed and consistency, it can also generate algorithmic inequality: systematic differences across groups in error rates and in access to correction channels. We develop a theoretical framework and map it to a large-scale, two-wave randomized survey experiment in Italy (N ≈ 12,000) built around a realistic voucher-allocation scenario, randomized information treatments, randomized outcomes, and incentivized choices that elicit willingness to pay (in lottery tickets) for explanation, appeal, and independent auditing. The framework delivers sharp predictions: information about inequality reduces support for algorithmic allocation; adding contestability and auditing mitigates the support decline but can increase inequality in effective access if recourse is unevenly costly. The design identifies both direct informational effects and structural primitives governing “recourse gaps” and “legitimacy gaps,” allowing a unified contribution to the literatures on algorithmic fairness, administrative burden, and the political economy of the welfare state.
External Link(s)

Registration Citation

Citation
Pignataro, Giuseppe. 2026. "Algorithmic Inequality, Contestability, and Policy Support." AEA RCT Registry. January 28. https://doi.org/10.1257/rct.17778-1.0
Experimental Details

Interventions

Intervention(s)
Participants complete an online survey in Italy. All respondents read the same baseline vignette describing a public voucher program whose applications are processed using an automated scoring/verification procedure. The main intervention is a randomized informational message about algorithmic decision-making: (i) control (no substantive information about inequality), (ii) information emphasizing potential risks of algorithmic inequality, or (iii) information emphasizing risks plus institutional safeguards (explanations/report, right to review/appeal, independent audit). Respondents then answer questions and complete short decision tasks related to perceived fairness, trust, and willingness to use contestability tools.
Intervention Start Date
2026-02-08
Intervention End Date
2026-03-01

Primary Outcomes

Primary Outcomes (end points)
Procedural trust / legitimacy toward automated voucher allocation (post-vignette): perceived fairness of the procedure and trust in the procedure.

Demand for explanations: binary choice to request an explanation/report (in the incentivized task).

Demand for review/appeal (conditional on a negative outcome): binary choice to file a request for review/appeal (in the incentivized task).
Primary Outcomes (explanation)
Procedural trust / legitimacy index: constructed as the average of standardized (z-scored) items measured immediately after the vignette. Items include perceived procedural fairness and trust in the procedure (each measured on a common Likert/0–10 scale). The index will be standardized using the control-group distribution (mean 0, sd 1) for interpretability; a construction sketch follows at the end of this subsection.

Demand for explanations: indicator equal to 1 if the respondent chooses to request the report/explanation when offered, 0 otherwise (the report may have an experimentally assigned ticket cost).

Demand for review/appeal: indicator equal to 1 if the respondent chooses to file a review/appeal request following a negative outcome, 0 otherwise (the appeal may have an experimentally assigned ticket cost).
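
For concreteness, a minimal sketch of how the control-standardized index could be constructed from a respondent-level data frame; the column names `fair`, `trust`, and `treatment` are illustrative placeholders, not the actual survey variable names.

```python
import pandas as pd

def legitimacy_index(df: pd.DataFrame,
                     items=("fair", "trust"),
                     control_label="control") -> pd.Series:
    """Average of items z-scored against the control-group distribution.

    Assumes one row per respondent, a `treatment` column, and Likert/0-10
    item columns; all names here are illustrative placeholders.
    """
    control = df[df["treatment"] == control_label]
    z_items = []
    for item in items:
        mu, sd = control[item].mean(), control[item].std()
        z_items.append((df[item] - mu) / sd)      # z-score each item vs. control
    index = sum(z_items) / len(z_items)           # simple average across items
    # Standardize the resulting index on the control distribution (mean 0, sd 1).
    idx_ctrl = index[df["treatment"] == control_label]
    return (index - idx_ctrl.mean()) / idx_ctrl.std()
```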

Secondary Outcomes

Secondary Outcomes (end points)
Policy support for using automated procedures in similar public programs (attitudinal support).

Perceived algorithmic inequality: perceived risk that automated procedures generate unequal outcomes across social groups.

Support for safeguards: support for requiring explanations, an appeal channel, and independent audits in algorithm-assisted public decisions.

Safeguard preference score from choice tasks: number of times the respondent selects the option with stronger contestability features across a set of pairwise comparisons.

Willingness to contribute to collective oversight: willingness to sacrifice lottery weight (tickets) for an audit/oversight option (measured as ticket allocation or stated choice, depending on implementation).

Emotional reactions (e.g., anger/frustration/relief) after the vignette.

Persistence and updating (Wave 2): re-measured trust/legitimacy and contestability demand outcomes one month later.
Secondary Outcomes (explanation)
Safeguard preference score: integer from 0 to K (e.g., 0–4) counting how many times the respondent chooses the “more contestable” regime in K pairwise comparisons, after accounting for the randomized A/B flips and randomized order saved in the dataset; see the sketch below.

Collective oversight contribution: measured as the number of tickets allocated away from the respondent’s own lottery weight toward an oversight/audit option (or as a binary support choice), depending on the final implementation; higher values indicate greater willingness to fund collective safeguards.
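
As a minimal sketch of how the safeguard preference score could be reconstructed from the saved randomization flags: the columns `choice_raw_i` (1 if the respondent picked the option shown in position B of task i) and `flip_i` (1 if the more-contestable regime was shown in position B of task i), as well as K = 4, are hypothetical assumptions for illustration only.

```python
import pandas as pd

def safeguard_score(df: pd.DataFrame, k: int = 4) -> pd.Series:
    """Count of tasks (0..K) in which the more-contestable regime was chosen.

    `choice_raw_i` = 1 if the respondent picked the option displayed in
    position B of task i; `flip_i` = 1 if the more-contestable regime was
    displayed in position B.  Column names and K are illustrative.
    """
    score = pd.Series(0, index=df.index)
    for i in range(1, k + 1):
        picked_b = df[f"choice_raw_{i}"].astype(int)
        contestable_in_b = df[f"flip_{i}"].astype(int)
        # The contestable regime was chosen iff the picked position matches
        # the position where that regime was displayed.
        score += (picked_b == contestable_in_b).astype(int)
    return score
```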

Experimental Design

Experimental Design
Two-wave online survey experiment with individual-level randomization among a nationally stratified Italian sample. In Wave 1, respondents are randomized to different informational messages about algorithmic decision-making (control vs. risk vs. risk + safeguards) and to a randomized vignette outcome (voucher granted vs. denied). Outcomes include post-vignette trust/fairness measures and incentivized choices related to explanations and appeals. Wave 2 re-contacts the same respondents about one month later to measure persistence of key outcomes and to repeat/extend the contestability-choice module.
Experimental Design Details
Not available
Randomization Method
Computer-based randomization implemented in the survey software by the panel provider. Randomization is blocked within pre-defined strata. A fixed random seed and a log of assignments/draws are stored for reproducibility and audit.
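
A minimal sketch of stratum-blocked assignment with a fixed seed is given below; the stratum variable, arm labels, and seed value are illustrative assumptions, and the actual assignment is implemented in the panel provider's survey software.

```python
import numpy as np
import pandas as pd

def blocked_assignment(df: pd.DataFrame,
                       arms=("control", "risk", "risk_safeguards"),
                       seed: int = 20260201) -> pd.Series:
    """Assign respondents to arms within pre-defined strata, reproducibly.

    Assumes a `stratum` column (e.g., region x age band); the seed and the
    returned assignment table would be logged for audit.
    """
    rng = np.random.default_rng(seed)             # fixed seed for reproducibility
    assignment = pd.Series(index=df.index, dtype=object)
    for _, idx in df.groupby("stratum").groups.items():
        idx = list(idx)
        rng.shuffle(idx)                          # random order within stratum
        # Deal arms in rotation so each stratum is (near-)balanced across arms.
        for pos, respondent in enumerate(idx):
            assignment[respondent] = arms[pos % len(arms)]
    return assignment
```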
Randomization Unit
Individual respondent (person-level). Additional within-respondent randomizations (e.g., vignette outcome, randomized costs in incentivized tasks, order/A–B flips in pairwise comparisons) are also implemented at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not clustered (individual-level randomization). Equivalent unit: 12,000 individuals.
Sample size: planned number of observations
12,000 adult respondents in Italy in Wave 1; all baseline respondents will be re-contacted for Wave 2 (expected retention <100%).
Sample size (or number of clusters) by treatment arms
Planned Wave 1 allocation (approximate):

Control: 4,000 individuals (with a pre-specified placebo split inside the control arm; e.g., 2,800 pure control and 1,200 placebo control shown a neutral text)

Treatment 1 (risk information): 4,000 individuals

Treatment 2 (risk + safeguards information): 4,000 individuals
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Assumptions: two-sided test, α = 0.05, power = 0.80, no clustering (individual-level randomization). With equal-sized arms (n ≈ 4,000 vs 4,000), the minimum detectable effect (difference in means) for a standardized continuous outcome is 0.063 SD. For a binary main outcome with baseline rate p = 0.50, the MDE is approximately 0.031 (3.1 percentage points). Comparisons against the pure-control sub-arm only (e.g., n ≈ 2,800 vs 4,000) yield an MDE of about 0.069 SD for continuous outcomes and about 3.5 percentage points for binary outcomes (conservative, ignoring covariate adjustment and blocking gains).
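
These figures can be reproduced with a short power calculation under the stated assumptions (two-sided α = 0.05, power = 0.80, independent observations, no covariate adjustment); this is a sketch, not the registered analysis code.

```python
from scipy.stats import norm

def mde_continuous(n1: int, n2: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """MDE (in SD units) for a two-arm difference in means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (1 / n1 + 1 / n2) ** 0.5

def mde_binary(n1: int, n2: int, p: float = 0.50, **kw) -> float:
    """Approximate MDE (in proportion points) for a binary outcome at base rate p."""
    return mde_continuous(n1, n2, **kw) * (p * (1 - p)) ** 0.5

print(round(mde_continuous(4000, 4000), 3))  # ~0.063 SD
print(round(mde_binary(4000, 4000), 3))      # ~0.031 (3.1 percentage points)
print(round(mde_continuous(2800, 4000), 3))  # ~0.069 SD
print(round(mde_binary(2800, 4000), 3))      # ~0.035 (3.5 percentage points)
```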
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number