Trust, Accountability, and Reliance on Algorithmic Advice

Last registered on April 01, 2026

Pre-Trial

Trial Information

General Information

Title
Trust, Accountability, and Reliance on Algorithmic Advice
RCT ID
AEARCTR-0017756
Initial registration date
March 30, 2026


First published
April 01, 2026, 10:53 AM EDT


Locations

Some information in this trial is not available to the public.

Primary Investigator

Affiliation
IIMB

Other Primary Investigator(s)

PI Affiliation
IIMB

Additional Trial Information

Status
Ongoing
Start date
2025-10-01
End date
2027-12-10
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
This preregistered experiment investigates how two organizational factors -- trust in an AI forecasting tool and accountability for AI‑generated errors -- shape individuals’ reliance on algorithmic advice. Participants complete a forecasting task where they predict product demand using two numerical features, with or without the aid of an AI tool. We independently manipulate: (1) Trust, via information about the AI’s development, training data, and reliability; and (2) Accountability, via information about who is responsible for errors generated by the AI tool (the participant vs. an external Analytics Team responsible for monitoring and correcting the AI). In a subsequent experiment, we further decompose the accountability construct into three sub-dimensions -- liability (who bears blame), auditability (whether performance is monitored), and contestability (whether users can challenge AI outputs) -- to identify which specific governance feature drives reliance. Primary outcomes include weight on advice (WoA) and forecast accuracy. This preregistration specifies the hypotheses, sample size, exclusion criteria, and analysis plan in advance of data collection.
External Link(s)

Registration Citation

Citation
Dharanikota, Spurthy and Akhil Ilango. 2026. "Trust, Accountability, and Reliance on Algorithmic Advice." AEA RCT Registry. April 01. https://doi.org/10.1257/rct.17756-1.0
Sponsors & Partners

Some information in this trial is not available to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-01-10
Intervention End Date
2027-12-01

Primary Outcomes

Primary Outcomes (end points)
Primary dependent variable:
Weight on Advice (WoA):
Defined only in conditions with AI advice (T1–T4). For each forecasted product:
WoA_i = (FinalForecast_i − InitialForecast_i) / (AIForecast_i − InitialForecast_i)
For cases where the AI forecast equals the initial forecast (denominator = 0), we will mark WoA as missing for that observation. We will also consider bounding WoA to [0,1] or excluding extreme values (see Analysis Plan).
We will use either:

The average WoA per participant across all products, or
A mixed-model specification with trial-level WoA and random effects for participants.
The preregistered primary specification will use average WoA per participant as the main DV.
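As a minimal sketch of the primary DV computation (variable and function names are illustrative, not from the registry), WoA with the preregistered denominator-zero rule and per-participant averaging could look like:

```python
def weight_on_advice(initial, ai, final):
    """WoA for one forecasted product; None (missing) when the AI
    forecast equals the initial forecast (denominator = 0)."""
    denom = ai - initial
    if denom == 0:
        return None  # marked missing per the analysis plan
    return (final - initial) / denom

def participant_mean_woa(trials):
    """Average WoA across a participant's products, skipping missing
    observations. `trials` is a list of (initial, ai, final) tuples."""
    vals = [weight_on_advice(i, a, f) for i, a, f in trials]
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals) if vals else None
```

Bounding WoA to [0, 1] or trimming extreme values, as the plan contemplates, would be applied to the per-trial values before averaging.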



Secondary dependent variables:
Forecast error: absolute percentage error (APE)
APE_i = 100 × |(FinalForecast_i − TrueDemand_i) / TrueDemand_i|
and participant-level average APE.

Bonus attainment: an indicator equal to 1 if the participant’s average APE corresponds to an accuracy of at least 95% under the bonus rule, and 0 otherwise.

Initial forecast error (to check whether AI improves accuracy relative to T0).
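A brief sketch of the secondary outcomes, under the assumption that "accuracy" is 100 minus average APE (the exact bonus rule is not spelled out here, so that mapping is an assumption):

```python
def ape(final, true_demand):
    """Absolute percentage error for one forecast."""
    return 100 * abs((final - true_demand) / true_demand)

def bonus_attained(apes, accuracy_threshold=95.0):
    """1 if average accuracy (assumed here to be 100 - mean APE)
    meets the threshold, else 0."""
    mean_ape = sum(apes) / len(apes)
    return 1 if (100 - mean_ape) >= accuracy_threshold else 0
```

The same `ape` function applies to initial forecasts when checking whether AI advice improves accuracy relative to T0.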
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We independently manipulate: (1) Trust, via information about the AI’s development, training data, and reliability; and (2) Accountability, via information about who is responsible for errors generated by the AI tool (the participant vs. an external Analytics Team responsible for monitoring and correcting the AI). In a subsequent experiment, we further decompose the accountability construct into three sub-dimensions -- liability (who bears blame), auditability (whether performance is monitored), and contestability (whether users can challenge AI outputs) -- to identify which specific governance feature drives reliance.
Experimental Design Details
Not available
Randomization Method
Data are collected through Prolific, which randomizes treatments within stratified groups of individuals (e.g., stratified by gender).
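For illustration only (the actual assignment is handled by the Prolific platform), stratified random assignment to the five arms could be sketched as follows; the arm labels and strata here are assumptions:

```python
import random
from collections import defaultdict

ARMS = ["T0", "T1", "T2", "T3", "T4"]  # control plus four treatment arms

def assign_stratified(participants, seed=0):
    """participants: list of (participant_id, stratum) pairs.
    Within each stratum, shuffle participants and deal them to arms
    in turn, keeping arms balanced within the stratum."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for pid, stratum in participants:
        by_stratum[stratum].append(pid)
    assignment = {}
    for pids in by_stratum.values():
        rng.shuffle(pids)
        for k, pid in enumerate(pids):
            assignment[pid] = ARMS[k % len(ARMS)]
    return assignment
```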
Randomization Unit
Individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
This trial is not clustered. The intervention is implemented at the individual level.
Sample size: planned number of observations
Experiment 1: planned total of approximately 400 participants (approximately 80 per arm across 5 arms); per-arm target: 75-100. Experiment 2: planned total of approximately 400 participants (approximately 80 per arm across 5 arms); per-arm target: 75-100.
Sample size (or number of clusters) by treatment arms
Experiment 1
Planned total: approximately 400 participants (approximately 80 per arm across 5 arms)
Per-arm target: 75-100

Experiment 2
Planned total: approximately 400 participants (approximately 80 per arm across 5 arms)
Per-arm target: 75-100
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

Some information in this trial is not available to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Indian Institute of Management Bangalore (IIMB) - Institutional Review Board (IRB)
IRB Approval Date
2025-07-21
IRB Approval Number
118