Synthetic Identity and the Dilution of Opportunity Cost: A longitudinal study on the erosion of financial agency under predictive algorithms.

Last registered on February 10, 2026

Pre-Trial

Trial Information

General Information

Title
Synthetic Identity and the Dilution of Opportunity Cost: A longitudinal study on the erosion of financial agency under predictive algorithms.
RCT ID
AEARCTR-0017833
Initial registration date
February 03, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 10, 2026, 6:00 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Some information in this trial is unavailable to the public and can be requested through the Registry.

Primary Investigator

Affiliation
Universidad Autónoma de Aguascalientes

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2026-02-09
End date
2026-07-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates how predictive recommendation algorithms in e-commerce platforms influence consumer sovereignty and financial decision-making. We introduce the concept of "Synthetic Identity"—a digital profile co-created by algorithms—and hypothesize that individuals increasingly prioritize purchases that validate this digital persona over traditional rational budget constraints.

Using a randomized controlled trial (RCT) with a 16-week longitudinal panel design, we track 180 young adults in Mexico. The treatment group is exposed to high-fidelity predictive environments, while the control group interacts with neutral search interfaces. We measure three primary dimensions: (1) perceived financial agency, (2) identity-algorithm affinity, and (3) shifts in spending patterns (opportunity cost). Data are collected through weekly Experience Sampling Methodology (ESM) and pre- and post-intervention expenditure audits.

The analysis employs Difference-in-Differences (DiD) and Latent Growth Models (LGM) to identify the specific point where algorithmic influence begins to erode human agency. This research contributes to behavioral economics by redefining "opportunity cost" in the age of Artificial Intelligence.
External Link(s)

Registration Citation

Citation
Murillo Lopez, Francisco Jacobo. 2026. "Synthetic Identity and the Dilution of Opportunity Cost: A longitudinal study on the erosion of financial agency under predictive algorithms." AEA RCT Registry. February 10. https://doi.org/10.1257/rct.17833-1.0
Experimental Details

Interventions

Intervention(s)
The study implements a randomized intervention with two arms to compare the effects of predictive personalization on consumer agency:

Treatment Group (High-Fidelity Predictive Interface): Participants in this group interact with a simulated e-commerce environment (via an API overlay or specialized browser extension) that utilizes a "Synthetic Identity" engine. This engine goes beyond standard recommendations by predicting "future-self" desires, utilizing high-frequency behavioral data to nudge users toward purchases that align with an algorithmically generated persona rather than their historical budget constraints. The intervention includes daily personalized notifications and a "predictive cart" feature.

Control Group (Neutral Search Interface): Participants interact with a standard search-based interface. Recommendations are limited to "top-rated" or "generic category" items, requiring the user to initiate the search process and maintain active authorship over their browsing path without predictive identity-nudging.

Key components of the intervention across 16 weeks:

Experience Sampling (ESM): Weekly triggers to measure perceived agency and identity-algorithm affinity.

Identity Nudging: The treatment group receives prompts specifically designed to test the "Synthetic Identity" hypothesis (e.g., "This item matches your evolving style").

Measurement: Continuous tracking of spending (spent_mxn) and time spent on platform (use_minutes_week).
Intervention Start Date
2026-02-09
Intervention End Date
2026-05-25

Primary Outcomes

Primary Outcomes (end points)
1. Divergence in Spending Patterns (Economic)
Variable name: spent_mxn (Category Divergence).

Description: Total monthly expenditure across 8 categories (Groceries, Fashion, Electronics, Fitness, Outdoor, Beauty, Gaming, Home).

Measure: Difference in Mexican Pesos (MXN) between Month 0 (Baseline) and Month 4 (Post-intervention).

Time of measurement: Month 0 and Month 4.

2. Perceived Financial Agency (Psychological)
Variable name: agency_scale.

Description: A composite index measuring the individual's sense of authorship and control over their purchase decisions. This is calculated as the average of the sub-scales: authorship, trust in own judgment vs. algorithm, and intention-action gap.

Measure: Scale from 1 to 5 (Likert).

Time of measurement: Weekly (from Week 1 to Week 16) via ESM.

3. Identity-Algorithm Affinity (Relational)
Variable name: id_affinity (Synthetic Identity Congruence).

Description: The degree to which the participant perceives the algorithm's suggestions as a reflection of their "true" or "evolving" self.

Measure: Scale from 1 to 5 (Likert).

Time of measurement: Weekly (from Week 1 to Week 16) via ESM.
Primary Outcomes (explanation)
Constructed Outcomes Explanation
1. Perceived Financial Agency (Latent Construct): This outcome is constructed as a composite index from four specific metrics collected via weekly Experience Sampling (ESM). It is defined as

agency_scale = (Authorship + Trust_Self + (6 - Intention_Gap) + Awareness) / 4
Authorship: Degree to which the user feels they initiated the purchase.

Trust_Self: Reliance on internal judgment vs. algorithmic suggestion.

Intention_Gap: Inverted scale of "unplanned" purchases. High values indicate high human agency; declining slopes over the 16-week period indicate agency erosion.
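As a quick sketch, the composite can be computed as follows, assuming each sub-scale is a 1-to-5 Likert score (function and argument names are illustrative, not the study's codebook):

```python
def agency_scale(authorship, trust_self, intention_gap, awareness):
    """Perceived Financial Agency composite (1-5 scale).

    intention_gap is reverse-coded as (6 - x) so that a higher
    composite always indicates stronger agency.
    """
    scores = (authorship, trust_self, intention_gap, awareness)
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("all sub-scales must lie in [1, 5]")
    return (authorship + trust_self + (6 - intention_gap) + awareness) / 4
```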

2. Synthetic Identity Affinity Index: This measures the "merger" between the user’s self-concept and the algorithmic profile. It is constructed by averaging:

Perceived Recognition: "Does the algorithm know me better than I know myself?"

Predictive Acceptance: The rate at which "Future-Self" suggestions are accepted as valid.

Affective Congruence: Emotional alignment with the persona suggested by the interface.

3. Opportunity Cost Dilution (Economic Divergence): Unlike total spending, this outcome is a relative variance measure. It is constructed by calculating the Euclidean distance between the user’s baseline budget allocation (Month 0) and their final allocation (Month 4).

A significant shift toward categories heavily weighted by the predictive engine (e.g., Fashion or Gaming) at the expense of "Base Categories" (Groceries) constitutes our measure of Opportunity Cost Dilution.
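A minimal sketch of this construction, under the assumption that spending is first normalized to budget shares so the distance captures reallocation rather than spending level (the normalization step is our reading, not stated in the registration):

```python
import math

CATEGORIES = ["groceries", "fashion", "electronics", "fitness",
              "outdoor", "beauty", "gaming", "home"]

def budget_shares(spent_mxn):
    """Normalize per-category spending (MXN) into budget shares."""
    total = sum(spent_mxn[c] for c in CATEGORIES)
    return {c: spent_mxn[c] / total for c in CATEGORIES}

def opportunity_cost_dilution(baseline, final):
    """Euclidean distance between Month 0 and Month 4 allocations."""
    s0, s4 = budget_shares(baseline), budget_shares(final)
    return math.sqrt(sum((s4[c] - s0[c]) ** 2 for c in CATEGORIES))
```

An unchanged allocation yields a distance of 0; a complete shift from one category to another yields the maximum of sqrt(2).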

4. Algorithmic Dependency Slope: Using Latent Growth Modeling (LGM), we construct a "Dependency Slope" for each participant. This is the interaction term between Time (Weeks 1-16) and Algorithm_Interaction. A positive slope in spending coupled with a negative slope in agency scores defines the primary experimental effect.
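LGM itself is usually fit in SEM software; a closely related frequentist sketch is a random-slope mixed model, where the week x treatment coefficient plays the role of the Dependency Slope contrast between arms. The data below are simulated purely for illustration (the true erosion rate of -0.04 per week is an invented value, not a study estimate):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(17833)

# Simulated panel: 180 participants x 16 weekly ESM waves.
rows = []
for pid in range(180):
    treat = int(pid < 90)             # illustrative 1:1 assignment
    intercept = rng.normal(4.0, 0.3)  # person-level baseline agency
    slope = rng.normal(0.0, 0.01)     # person-level idiosyncratic drift
    for week in range(1, 17):
        # Simulated erosion: treated agency falls 0.04 points per week.
        agency = (intercept + slope * week - 0.04 * week * treat
                  + rng.normal(0, 0.25))
        rows.append({"pid": pid, "week": week, "treat": treat,
                     "agency": agency})
panel = pd.DataFrame(rows)

# Random-intercept, random-slope model; "week:treat" estimates the
# between-arm difference in agency trajectories.
model = smf.mixedlm("agency ~ week * treat", panel,
                    groups="pid", re_formula="~week")
result = model.fit()
print(result.params["week:treat"])  # close to the simulated -0.04
```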

Secondary Outcomes

Secondary Outcomes (end points)
The Secondary Outcomes cover variables that, while not central to the main hypothesis, provide crucial evidence on the collateral impact of Synthetic Identity.

The three secondary endpoints, based on the validation models, are:

1. Post-Purchase Regret
Outcome Name: post_purchase_regret

Description: An assessment of the cognitive dissonance experienced after a transaction. This measures if the "Synthetic Identity" purchase leads to long-term satisfaction or immediate buyer's remorse.

Measure: Scale from 1 to 5 (Likert).

Timepoint(s): Monthly (Months 1 through 4) and 15 days after the final intervention.

2. Platform Engagement and Time Sunk
Outcome Name: use_minutes_week

Description: The total amount of time spent interacting with the commerce interface. This helps determine if the "dilution of opportunity cost" is correlated with an increase in time spent within the algorithmic environment.

Measure: Continuous variable (Minutes per week).

Timepoint(s): Weekly for 16 weeks.

3. Algorithmic Trust vs. Human Expertise
Outcome Name: trust_ratio

Description: The ratio of acceptance of algorithmic suggestions versus external search results or peer recommendations.

Measure: Ratio (Accepted Suggestions / Total External Searches).

Timepoint(s): Bi-weekly during the 16-week period.
Secondary Outcomes (explanation)
The Post-Purchase Regret index is constructed by cross-referencing the spent_mxn in "High-Affinity" categories with a follow-up ESM survey 72 hours after delivery. The Platform Engagement outcome serves as a proxy for "Digital Capture," measured through logs of session duration. Finally, Algorithmic Trust is a behavioral measure calculated by the frequency of coi_search_external (external searches) compared to direct clicks on alg_interac (algorithmic interactions).
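A minimal sketch of the behavioral trust measure defined above; the registration does not specify how participants with zero external searches are handled, so the edge case below is our assumption:

```python
def trust_ratio(accepted_suggestions, external_searches):
    """Accepted algorithmic suggestions per external search.

    Edge case (assumption, not in the registration): with no external
    searches, return inf if any suggestion was accepted (pure
    algorithmic reliance), else 0.0.
    """
    if external_searches == 0:
        return float("inf") if accepted_suggestions > 0 else 0.0
    return accepted_suggestions / external_searches
```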

Experimental Design

Experimental Design
The study employs a between-subjects, randomized controlled trial (RCT) with a longitudinal panel design spanning 16 weeks. The goal is to isolate the effect of predictive personalization on financial decision-making and consumer agency.

1. Participant Selection and Randomization: A sample of 180 young adults (ages 18-35) will be recruited and screened for baseline digital literacy and shopping habits. Participants will be randomly assigned to one of two parallel arms with a 1:1 allocation ratio:

Group A (Control): Access to a neutral e-commerce interface using standard search-based navigation.

Group B (Treatment): Access to an enhanced interface driven by a high-fidelity predictive engine ("Synthetic Identity" model).

2. Data Collection Phases:

Baseline (Month 0): Comprehensive assessment of historical spending patterns across 8 categories, psychological profiling (impulsivity, agency, and self-concept), and digital literacy.

Intervention Phase (Weeks 1–16): Participants interact with their assigned interface. Weekly data is collected via Experience Sampling Methodology (ESM) to track real-time changes in perceived agency, identity affinity, and decision authorship.

Outcome Audits (Monthly): Detailed expenditure reports are generated at the end of each month to measure shifts in budget allocation and the dilution of opportunity cost.

3. Analysis Strategy: The primary analysis will use a Difference-in-Differences (DiD) approach to compare the evolution of spending and agency between the two groups. Additionally, Latent Growth Curve Modeling (LGCM) will be applied to the longitudinal data to identify the trajectory of "agency erosion" over the four-month period.
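The DiD contrast can be sketched with simulated Month 0 / Month 4 spending data; the `treat:post` coefficient recovers the treatment effect. All values here are illustrative, not study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

n = 180
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 2),
    "post": np.tile([0, 1], n),  # Month 0 vs Month 4
    "treat": np.repeat((np.arange(n) < 90).astype(int), 2),
})
# Simulated monthly spending: common time trend of +100 MXN, plus an
# invented +300 MXN treatment effect in the post period.
df["spent_mxn"] = (2000 + 100 * df["post"]
                   + 300 * df["post"] * df["treat"]
                   + rng.normal(0, 100, size=2 * n))

# Cluster standard errors at the individual (randomization unit) level.
did = smf.ols("spent_mxn ~ treat * post", df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]})
print(did.params["treat:post"])  # close to the simulated 300
```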
Experimental Design Details
Not available
Randomization Method
The randomization was performed in-office via a computer-generated algorithm (using R's set.seed() function for reproducibility).

To ensure the internal validity of the study and account for individual differences that could confound the results, we employed a stratified randomization approach based on the following baseline characteristics:

Algorithmic Literacy: Participants were stratified into high and low digital literacy groups.

Innate Impulsivity (Barratt Impulsiveness Scale): To ensure that the "Synthetic Identity" effect is not merely a reflection of pre-existing impulsivity.

Process:

After the baseline assessment (Month 0), the sample of 180 participants was divided into strata.

Within each stratum, individuals were assigned to either Group A (Control) or Group B (Treatment) using a simple random number generator (1:1 allocation ratio).

This process ensures that both arms of the study are balanced regarding their psychological profiles and technical skills before the 16-week exposure begins.
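The registration describes a seeded R procedure; a Python analogue of the stratified 1:1 assignment might look like the following sketch (field names, stratum labels, and the handling of odd-sized strata are our assumptions):

```python
import random

def stratified_assign(participants, seed=2026):
    """Stratified 1:1 randomization within literacy x impulsivity strata.

    participants: iterable of dicts with keys 'id', 'literacy'
    ('high'/'low'), and 'impulsivity' ('high'/'low').
    Returns {participant_id: 'control' | 'treatment'}.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    strata = {}
    for p in participants:
        strata.setdefault((p["literacy"], p["impulsivity"]), []).append(p["id"])
    assignment = {}
    for ids in strata.values():
        rng.shuffle(ids)
        half = len(ids) // 2   # 1:1 split within the stratum
        for i, pid in enumerate(ids):
            assignment[pid] = "control" if i < half else "treatment"
    return assignment
```

With even-sized strata this yields an exact 90/90 split; an odd stratum would leave its extra participant in the treatment arm under this sketch.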
Randomization Unit
The randomization unit is the individual.

Justification: The intervention is delivered at the individual user level through personalized e-commerce interfaces. Since the primary outcomes focus on personal financial agency, individual psychological constructs (identity affinity), and private expenditure patterns, individual-level randomization is the most robust method to isolate the causal effect of algorithmic influence. There is no cluster-level randomization (e.g., at the household or geographical level) in this study design.

Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters

180 individuals (randomization is at the individual level; there is no clustering)
Sample size: planned number of observations
180 individuals, each observed at up to 17 measurement points (baseline plus 16 weekly ESM waves)
Sample size (or number of clusters) by treatment arms
90 individuals control, 90 individuals treatment (Synthetic Identity algorithm)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on a power analysis for a two-arm randomized controlled trial with N = 180 and a significance level of alpha = 0.05:

Minimum Detectable Effect (MDES): We are powered to detect a Cohen's d of 0.42 (a small-to-medium effect size) with 80% power (1 - beta = 0.80).

Unit and Standard Deviation: For our primary outcome (agency_scale), a 5-point Likert scale with an assumed baseline standard deviation of sigma = 1.0, the MDES corresponds to a difference of 0.42 units on the scale between the treatment and control groups.

Percentage Change: This represents approximately an 8.4% divergence in the perceived agency score relative to the total scale range.

Justification for Longitudinal Gain: Since this is a longitudinal study with 17 measurement points (baseline + 16 weeks), the precision of the estimate increases. By using Latent Growth Modeling (LGM), the effective power is enhanced because we compare not just two points in time, but the trajectories of change. Accounting for an expected 15% attrition rate over the 4-month period (final N approx. 153), the MDES adjusts slightly to d = 0.46, which remains within the medium-effect threshold suitable for behavioral economics research.
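The registered figure of d = 0.42 can be reproduced with a standard normal-approximation formula for a two-sided, two-sample test (a sketch; the exact post-attrition figure depends on the approximation used):

```python
from scipy.stats import norm

def mdes(n_total, alpha=0.05, power=0.80):
    """Minimum detectable Cohen's d for a two-arm trial with 1:1
    allocation, two-sided test, normal approximation."""
    n_arm = n_total / 2
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * (2 / n_arm) ** 0.5

print(round(mdes(180), 2))  # 0.42
print(round(mdes(153), 2))  # about 0.45 under this approximation
```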
IRB

Institutional Review Boards (IRBs)

IRB Name
Centro de Ciencias Económicas y Administrativas de la Universidad Autónoma de Aguascalientes
IRB Approval Date
2026-01-27
IRB Approval Number
CCEA-UAA-2026-23