Seeing the Cliff Before You Fall: The Cognitive Cost of Misperception

Last registered on March 31, 2026

Trial Information

General Information

Title
Seeing the Cliff Before You Fall: The Cognitive Cost of Misperception
RCT ID
AEARCTR-0018135
Initial registration date
March 24, 2026

First published
March 31, 2026, 9:44 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Pittsburgh

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2026-04-15
End date
2026-05-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how the format in which benefit information is presented affects low-income workers' understanding of multi-program benefit cliffs and their labor-supply decisions. Benefit cliffs occur when modest earnings increases trigger disproportionate losses in public assistance — such as SNAP, Medicaid, CCDF, and TANF — leaving households worse off despite higher wages. Using a 2×2 between-subject experiment with participants recruited through Prolific, we randomly assign participants to one of two information formats (an interactive dashboard or an AI assistant) and independently vary whether they receive information about the temporary nature of benefit losses (recovery information). Participants first complete a baseline estimate of how a hypothetical job offer would affect their total household resources, then receive their assigned treatment and revise their estimate. We measure belief improvement (reduction in prediction error), decision quality, and money-metric regret. The study aims to identify which informational friction — computational complexity, situational mapping, or intertemporal uncertainty — generates the largest share of welfare loss from misperceived benefit cliffs.
External Link(s)

Registration Citation

Citation
Tascon, Daniel. 2026. "Seeing the Cliff Before You Fall: The Cognitive Cost of Misperception." AEA RCT Registry. March 31. https://doi.org/10.1257/rct.18135-1.0
Experimental Details

Interventions

Intervention(s)
Participants are randomly assigned to one of four arms in a 2×2 design. The first dimension varies the format in which benefit information is presented: (1) an interactive dashboard that visualizes how total household resources change at different income levels, or (2) an AI assistant that delivers the same underlying information conversationally and maps program rules to the participant's specific household. The second dimension independently varies whether participants receive recovery information — a brief statement indicating that workers who experience a benefit cliff typically return to their prior resource level within 1–3 years as wage growth offsets benefit losses. All participants first complete a baseline stage with no additional information beyond a standard text description of the scenario, replicating the information environment workers encounter in real life. In the second stage, participants receive their assigned treatment and answer the same questions again. This design allows us to separately identify the effect of information format (Frictions 1 and 2: computational complexity and situational mapping) from the effect of reducing perceived permanence of losses (Friction 3: intertemporal uncertainty).
Intervention Start Date
2026-04-22
Intervention End Date
2026-04-29

Primary Outcomes

Primary Outcomes (end points)
The primary outcome is belief improvement — the reduction in absolute prediction error between Stage 1 and Stage 2, measured in dollars per month:
BI = |ε₁| − |ε₂| = |Δ̂₁ − Δ*| − |Δ̂₂ − Δ*|
where Δ̂ₛ is the participant's stated prediction of monthly household resource change at stage s and Δ* is the true change computed from Policy Rules Database rules. A positive value indicates the treatment brought beliefs closer to the truth. This outcome captures the extent to which each information format reduces misperception of benefit cliff magnitudes.
Primary Outcomes (explanation)
Belief improvement is constructed as follows. At Stage 1, the participant reports a predicted monthly resource change Δ̂₁ (a signed dollar amount). At Stage 2, after receiving the assigned treatment, the participant reports a revised prediction Δ̂₂. The true monthly resource change Δ* is computed programmatically from Policy Rules Database (PRD) rules for the vignette household, taking into account wages, taxes, and all relevant benefit programs (SNAP, Medicaid, CCDF, TANF, EITC, Section 8) at both the pre- and post-offer income levels. The absolute prediction error at each stage is |Δ̂ₛ − Δ*|. Belief improvement is the difference |ε₁| − |ε₂| = |Δ̂₁ − Δ*| − |Δ̂₂ − Δ*|, measured in dollars per month. Positive values indicate improvement; negative values indicate that the treatment worsened accuracy. To limit the influence of extreme responses, predicted changes will be winsorized at the 1st and 99th percentiles of the Stage 1 distribution prior to computing errors. The winsorization thresholds will be determined from the data and applied symmetrically to both stages.
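
The construction above can be sketched in Python. This is an illustrative fragment, not the study's analysis code: the function name and the use of percentile clipping to implement the symmetric winsorization are our assumptions.

```python
import numpy as np

def belief_improvement(pred1, pred2, truth, lo_pct=1, hi_pct=99):
    """BI = |e1| - |e2|, with predictions winsorized at the Stage 1
    distribution's 1st/99th percentiles (applied to both stages).

    pred1, pred2: stated monthly resource-change predictions (signed $)
    truth: true monthly resource change computed from PRD rules (signed $)
    """
    pred1 = np.asarray(pred1, dtype=float)
    pred2 = np.asarray(pred2, dtype=float)
    # Winsorization thresholds come from the Stage 1 distribution only.
    lo, hi = np.percentile(pred1, [lo_pct, hi_pct])
    p1 = np.clip(pred1, lo, hi)
    p2 = np.clip(pred2, lo, hi)
    e1 = np.abs(p1 - truth)  # Stage 1 absolute prediction error
    e2 = np.abs(p2 - truth)  # Stage 2 absolute prediction error
    return e1 - e2           # positive = treatment improved accuracy
```

Positive entries indicate the treatment moved beliefs toward the truth; negative entries indicate it worsened accuracy.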

Secondary Outcomes

Secondary Outcomes (end points)
Decision quality — an indicator equal to 1 if the participant's job acceptance recommendation aligns with the financially optimal decision (accept if Δ* > 0, reject if Δ* < 0), measured at Stage 1 and Stage 2 separately. The treatment effect is the change in decision quality between stages: Optimal₂ − Optimal₁.
Money-metric regret — the dollar value of welfare loss from a suboptimal decision, defined as Regret = R_best − R_chosen ≥ 0, where R_j is total monthly household resources under option j. Regret is positive only when the participant recommends the dominated option.
Willingness to pay for benefit information — the percentage of the monthly wage increase the participant would pay for a service that reveals exact benefit changes before accepting a job offer (Stage 3), converted to dollars using the vignette wage increase.
Reconsideration — an indicator for whether the participant changes their job acceptance recommendation after seeing the true resource change in Stage 3.
Tolerance for short-term loss — the maximum monthly resource loss the participant would accept for a job with significantly better long-term career prospects (Stage 3), measured in dollars per month.
Stated confidence — self-reported confidence in the prediction at Stage 1 and Stage 2, used to test whether treatments raise confidence without improving accuracy (over-confidence check, particularly relevant for the AI assistant arm).
Secondary Outcomes (explanation)
Decision quality is constructed as a binary indicator from the participant's yes/no job acceptance recommendation at each stage. The optimal decision is defined as accepting when Δ* > 0 and rejecting when Δ* < 0. For vignettes where Δ* = 0, decision quality is coded as 1 regardless of the participant's choice. The treatment effect is the within-person change Optimal₂ − Optimal₁, which takes values in {−1, 0, 1}.
Money-metric regret is computed as the absolute difference between total monthly household resources under the optimal option and the chosen option: Regret = |R_best − R_chosen|. For participants who recommend the optimal option, Regret = 0. For those who recommend the dominated option, Regret = |Δ*|. The outcome is bounded below by zero and measured in dollars per month.
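
The decision-quality and regret codings can be sketched as follows (an illustrative fragment; the function names are ours, and ties are coded as optimal per the rule above):

```python
def decision_quality(recommend_accept, true_change):
    """1 if the recommendation matches the financially optimal choice
    (accept iff true_change > 0); ties (true_change == 0) count as 1."""
    if true_change == 0:
        return 1
    optimal_accept = true_change > 0
    return int(bool(recommend_accept) == optimal_accept)

def regret(recommend_accept, true_change):
    """Money-metric regret in $/month: |true_change| when the dominated
    option is recommended, 0 otherwise."""
    if decision_quality(recommend_accept, true_change) == 1:
        return 0.0
    return abs(true_change)
```

The within-person treatment effect is then `decision_quality` at Stage 2 minus Stage 1, taking values in {−1, 0, 1}.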
Willingness to pay is elicited as a percentage (0–100) in Stage 3 and converted to dollars by multiplying by the vignette monthly wage increase divided by 100. Responses above 100 or below 0 will be recoded as missing.
Reconsideration is a binary indicator equal to 1 if the participant's Stage 3 job acceptance recommendation differs from their Stage 2 recommendation, after having seen the true resource change. This captures preference-based updating distinct from belief-based updating.
Tolerance for short-term loss is directly elicited in Stage 3 as a dollar amount per month. Responses will be winsorized at the 99th percentile. This outcome is used to assess intertemporal preferences independently of misperception.
Stated confidence is elicited on a scale at Stage 1 and Stage 2. An over-confidence indicator is constructed as high stated confidence combined with large absolute prediction error, defined as confidence above the median and |ε| above the median simultaneously. This is used as a robustness check on the AI assistant arm specifically.
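
The WTP conversion and the over-confidence indicator can be sketched as (illustrative code; function names and the NaN recoding are our rendering of the rules above):

```python
import numpy as np

def wtp_dollars(wtp_pct, wage_increase):
    """Convert WTP percentage (0-100) of the monthly wage increase to
    dollars; out-of-range responses are recoded as missing (NaN)."""
    pct = np.asarray(wtp_pct, dtype=float)
    pct = np.where((pct < 0) | (pct > 100), np.nan, pct)
    return pct / 100.0 * wage_increase

def overconfident(confidence, abs_error):
    """1 if stated confidence is above the sample median AND absolute
    prediction error is above the sample median, else 0."""
    conf = np.asarray(confidence, dtype=float)
    err = np.asarray(abs_error, dtype=float)
    return ((conf > np.median(conf)) & (err > np.median(err))).astype(int)
```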

Experimental Design

Experimental Design
This study uses a 2×2 between-subject experiment. Participants are randomly assigned to one of four arms crossing two dimensions: information format (interactive dashboard vs. AI assistant) and recovery information (absent vs. present). All participants complete a baseline estimation task before receiving their assigned treatment. The target sample is N = 300 U.S. adults recruited via Prolific who currently receive or have recently received at least one government assistance program.
Experimental Design Details
Not available
Randomization Method
Individual-level randomization implemented automatically by the oTree platform at the start of Stage 2. Participants are assigned to one of four arms using a pseudo-random number generator, stratified by vignette type to ensure balance across treatment cells.
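
The stratified assignment can be sketched as below. This is a simplified illustration, not oTree's actual randomization hook: block randomization within each vignette stratum is one standard way to keep the four cells balanced.

```python
import random

ARMS = ["A", "B", "C", "D"]  # {dashboard, AI} x {no recovery, recovery}

def make_assigner(seed=None):
    """Return an assignment function that block-randomizes within each
    vignette type, so arm counts stay balanced inside every stratum."""
    rng = random.Random(seed)
    blocks = {}  # vignette type -> remaining shuffled block of arms

    def assign(vignette_type):
        block = blocks.get(vignette_type)
        if not block:  # start a fresh shuffled block of all four arms
            block = ARMS.copy()
            rng.shuffle(block)
            blocks[vignette_type] = block
        return block.pop()

    return assign
```

Within each stratum, every consecutive group of four participants contains each arm exactly once.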
Randomization Unit
Individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
300 participants
Sample size: planned number of observations
300 participants
Sample size (or number of clusters) by treatment arms
75 participants — Arm A (Dashboard, no recovery information)
75 participants — Arm B (Dashboard, recovery information)
75 participants — Arm C (AI assistant, no recovery information)
75 participants — Arm D (AI assistant, recovery information)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Primary outcome: belief improvement (reduction in absolute prediction error, measured in dollars per month). At N = 300 (150 participants per main-effect comparison group), α = 0.05, power = 0.80, and assuming a between-subject standard deviation of σ = $200/month (Taubinsky & Rees-Jones 2018, upper bound), the minimum detectable effect is $68/month, representing approximately 15–20% of a typical cliff magnitude (Δ* ≈ −$200 to −$600/month). Under the more conservative Abeler & Jäger (2015) anchor (σ = $250/month), the MDE is $85/month. Both are below the pre-specified focal threshold of $100/month. The sample size formula used is N = 2 × ⌈(z₁₋α/₂ + z₁₋β)² × 2σ² / d²⌉ for a two-sided two-sample t-test. Standard errors will be clustered at the individual level.
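
The MDE calculation can be reproduced under the normal approximation with a short sketch (the function name is ours; the registered figures may reflect different critical values or rounding):

```python
from math import sqrt
from statistics import NormalDist

def mde_two_sample(n_per_group, sigma, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-sided two-sample test,
    normal approximation: d = (z_{1-a/2} + z_{1-b}) * sigma * sqrt(2/n)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = z.inv_cdf(power)          # power quantile
    return (z_a + z_b) * sigma * sqrt(2.0 / n_per_group)
```

With n = 150 per main-effect comparison group, this returns roughly $65/month at σ = $200 and roughly $81/month at σ = $250, in the neighborhood of the registered MDEs.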
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh
IRB Approval Date
2026-03-23
IRB Approval Number
STUDY26030081