Experience and the Demand for Ad Personalization: Evidence from a Field Experiment

Last registered on February 10, 2026

Pre-Trial

Trial Information

General Information

Title
Experience and the Demand for Ad Personalization: Evidence from a Field Experiment
RCT ID
AEARCTR-0017851
Initial registration date
February 06, 2026

First published
February 10, 2026, 6:38 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Boston University

Other Primary Investigator(s)

PI Affiliation
Harvard Business School
PI Affiliation
Boston University
PI Affiliation
UNC-Chapel Hill

Additional Trial Information

Status
In development
Start date
2026-02-10
End date
2026-04-10
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study whether consumers hold informed preferences about personalized advertising and how experience with alternative ad settings shapes these preferences. We implement a browser-based field experiment that measures objective ad exposures, browsing behavior, and subjective ad-attitude ratings, and we experimentally induce switching of ad personalization settings across major advertising platforms. We elicit willingness-to-accept (WTA) for switching both before and after treatment to study how information and experience affect preferences.
External Link(s)

Registration Citation

Citation
Farronato, Chiara et al. 2026. "Experience and the Demand for Ad Personalization: Evidence from a Field Experiment." AEA RCT Registry. February 10. https://doi.org/10.1257/rct.17851-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-02-10
Intervention End Date
2026-04-10

Primary Outcomes

Primary Outcomes (end points)
Our primary outcomes of interest include three groups of metrics:

1. Preference for personalized advertising: binary preference by platform, combined WTA for personalized advertising across all platforms, and out-of-range WTA indicator (if combined WTA > 25).
2. Subjective welfare measures: binary ad relevance and distraction ratings from the pop-up questions, divided by the number of ads seen.
3. Objective welfare measures:
(a) Ad exposure frequency: (i) total number of ads seen per day; (ii) ratio of ads seen to visit duration; (iii) the following measures:
- Percentage of brand ads, product ads, or price ads.
- Average price level in price ads.
- Number of unique ads and repeat exposures.
(b) Ad-attributed site visits: unique domain counts that originated from ads.
(c) Overall browsing duration by site category (overall, publisher websites, shopping websites, search engines, social media).

Primary Outcomes (explanation)
Our primary outcomes of interest include three groups of metrics:

1. Preference for personalized advertising: binary preference by platform, combined WTA for personalized advertising across all platforms, and out-of-range WTA indicator (if combined WTA > 25). In our design, both the binary preference by platform and the combined WTA across all platforms are incentive-compatible, whereas the platform-specific WTAs are stated-preference measures. We include the out-of-range WTA indicator because there is concern that WTAs outside the BDM price range are not incentive compatible. For the WTA, we will report both the raw level and the version winsorized at 25 for robustness.
2. Subjective welfare measures: binary ad relevance and distraction ratings from the pop-up questions, divided by the number of ads seen.
3. Objective welfare measures:
(a) Ad exposure frequency: (i) total number of ads seen per day; (ii) ratio of ads seen to visit duration; (iii) the following measures:
- Percentage of brand ads, product ads, or price ads.
- Average price level in price ads.
- Number of unique ads and repeat exposures.
(b) Ad-attributed site visits: unique domain counts that originated from ads. We parse URL parameters (e.g., gclid, fbclid, utm_*) and HTTP referrers to identify clicks that originated from ads (see the illustrative sketch after this list).
(c) Overall browsing duration by site category (overall, publisher websites, shopping websites, search engines, social media).
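
As an illustrative sketch only (the function name is our own, and the extension's actual logic also inspects HTTP referrers, which are omitted here), flagging ad-attributed visits from URL query parameters could look like:

```python
from urllib.parse import urlparse, parse_qs

def is_ad_attributed(url: str) -> bool:
    """Flag a visit as ad-attributed if its URL carries an ad click ID
    (gclid, fbclid) or a utm_* campaign parameter."""
    params = parse_qs(urlparse(url).query)
    return any(k in ("gclid", "fbclid") or k.startswith("utm_") for k in params)

# Example:
# is_ad_attributed("https://shop.example.com/item?gclid=abc123")  -> True
# is_ad_attributed("https://news.example.com/story")              -> False
```
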
Since most browsing metrics have long right tails, we will normalize them relative to each participant’s Week 1 baseline as follows. For each metric M (e.g., minutes, unique domains, ads/day), let M_{i0} be participant i’s Week 1 average per active day and let M_{i1} be the corresponding average in Weeks 2–3. Our normalized outcome is the log change:
\widetilde M_i \equiv \log(1+M_{i1}) - \log(1+M_{i0}),
which is well-defined when the underlying metric is zero and can be interpreted approximately as a percentage change for small changes.
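
For concreteness, a minimal sketch of this normalization in Python, assuming a participant-by-active-day panel with hypothetical column names (participant_id, week, and the metric of interest), is:

```python
import numpy as np
import pandas as pd

def normalized_log_change(daily: pd.DataFrame, metric: str) -> pd.Series:
    """Log change in a browsing metric: log(1 + Weeks 2-3 average per active day)
    minus log(1 + Week 1 average per active day), computed by participant.
    Assumes one row per participant-active-day."""
    base = daily.loc[daily["week"] == 1].groupby("participant_id")[metric].mean()
    treat = daily.loc[daily["week"].isin([2, 3])].groupby("participant_id")[metric].mean()
    return np.log1p(treat) - np.log1p(base)
```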

Our first primary analysis aims to identify the causal effect of switching participants away from their status quo setting (i.e., “gaining experience”) on their preferences for personalized advertising. Define the preference for personalized ads as Y1 and the ad-setting switch as E. Since the switch is not random, we adopt an IV specification to estimate the treatment-on-treated effect, using treatment assignment (T) as the instrument:

E_i = \pi_0 + \pi_1 T_i + X_i'\rho + u_i,\tag{1}

\Delta Y1_{i} = \beta_1 + \beta_2 \hat E_i + X_i'\beta_3 + \varepsilon_i,\tag{2}

|\Delta Y1_{i}| = \beta_4 + \beta_5 \hat E_i + X_i'\beta_6 + \varepsilon_i,\tag{3}

where \Delta Y1_i \equiv Y1_{i1} - Y1_{i0} is the change in ad-setting preferences from baseline to endline, and X_i is a vector of consumer characteristics, including baseline browsing intensity, demographics, and whether the participant previously held incorrect beliefs about their current ad settings. In Equation (2), we examine whether gaining experience with a non-preferred setting shifts average preferences for personalized ads. In Equation (3), we examine whether gaining experience makes participants more likely to update their preferences.
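
A minimal sketch of this 2SLS specification, using the linearmodels package and hypothetical variable names (delta_Y1 for \Delta Y1_i, E for the realized switch, T for random assignment, plus a list of control columns), is:

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

def tot_iv(df: pd.DataFrame, outcome: str, controls: list[str]):
    """Treatment-on-treated estimate (Equations 1-2): 2SLS of the outcome on the
    switch indicator E, instrumented by random assignment T, with controls X.
    Equation (3) is the same call with the absolute preference change as outcome."""
    exog = sm.add_constant(df[controls])  # X_i plus an intercept
    model = IV2SLS(dependent=df[outcome], exog=exog,
                   endog=df["E"], instruments=df["T"])
    return model.fit(cov_type="robust")

# e.g., tot_iv(df, "delta_Y1", ["baseline_minutes", "age", "female", "wrong_belief"])
```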

Our second primary analysis aims to identify the causal effect of personalized advertising on subjective and objective welfare measures. Define the welfare measures as Y2 and an indicator for having the personalized ad setting on as D. Since D is not random, we adopt an IV specification to estimate the treatment-on-treated effect.

Instrument definition: Our BDM design nudges participants toward their less-preferred setting, which means the treatment assignment has opposite effects on D depending on participants' baseline preferences. Specifically:

- For participants who prefer personalization OFF (i.e., their less-preferred setting is "personalized ads on"), the high-incentive nudge makes D=1 more likely.
- For participants who prefer personalization ON (i.e., their less-preferred setting is "personalized ads off"), the high-incentive nudge makes D=0 more likely.

This creates a potential violation of the standard IV monotonicity assumption, since the instrument does not push all individuals' D in the same direction. To address this, we adopt one of the following approaches:

Approach A (Separate regressions by baseline preference): We estimate separate IV regressions for each baseline preference group. Let B_i = 1 if participant i prefers personalized ads ON at baseline, and B_i = 0 otherwise. We estimate:

For participants with B_i = 0 (prefer personalization OFF at baseline):

D_i = \pi_2^{(0)} + \pi_3^{(0)} T_i + X_i'\rho^{(0)} + u_i,\tag{4a}

\Delta Y2_{i} = \gamma_1^{(0)} + \gamma_2^{(0)} \hat D_i + X_i'\gamma_3^{(0)} + \varepsilon_i,\tag{5a}

For participants with B_i = 1 (prefer personalization ON at baseline):

D_i = \pi_2^{(1)} + \pi_3^{(1)} T_i + X_i'\rho^{(1)} + u_i,\tag{4b}

\Delta Y2_{i} = \gamma_1^{(1)} + \gamma_2^{(1)} \hat D_i + X_i'\gamma_3^{(1)} + \varepsilon_i,\tag{5b}

Within each subgroup, the instrument satisfies monotonicity: T unambiguously increases D for the B_i=0 group and decreases D for the B_i=1 group. The coefficient \gamma_2^{(0)} captures the LATE of turning personalization ON among compliers who prefer it OFF, while \gamma_2^{(1)} captures the LATE of turning personalization ON among compliers who prefer it ON.

Approach B (Pooled regression with interaction terms): Alternatively, we estimate a pooled specification with separate first-stage instruments by baseline preference:

D_i = \pi_2 + \pi_3 (T_i \times B_i) + \pi_4 (T_i \times (1-B_i)) + \pi_5 B_i + X_i'\rho + u_i,\tag{4c}

\Delta Y2_{i} = \gamma_1 + \gamma_2 \hat D_i + X_i'\gamma_3 + \varepsilon_i,\tag{5c}

This approach uses two instruments (T_i \times B_i and T_i \times (1-B_i)) that push D in opposite directions, allowing the first stage to correctly capture how treatment affects personalization status for each group.

We prefer Approach A (separate regressions) because it yields a cleaner interpretation of the causal effects and avoids the functional-form assumptions required for pooling. However, if Approach A lacks sufficient power (for example, if one preference group is much smaller), we will fall back to Approach B as the primary specification.
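
A minimal sketch of Approach A, reusing the hypothetical variable names above (D for personalization status, B for baseline preference, T for assignment), is below; Approach B corresponds to a single pooled IV call with the two interaction instruments T \times B and T \times (1-B) and with B included among the exogenous controls.

```python
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

def late_by_baseline_preference(df, outcome, controls):
    """Equations (4a)-(5b): separate 2SLS regressions of the welfare outcome on
    personalization status D, instrumented by assignment T, within each
    baseline-preference group B (0 = prefers personalization off, 1 = on)."""
    results = {}
    for b, sub in df.groupby("B"):
        exog = sm.add_constant(sub[controls])
        results[b] = IV2SLS(dependent=sub[outcome], exog=exog,
                            endog=sub["D"], instruments=sub["T"]).fit(cov_type="robust")
    return results
```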

For inference across multiple outcomes, we will check robustness using Anderson sharpened q-values within each family of outcomes. When analyzing platforms separately, we treat each platform as its own family (i.e., adjustments are platform-specific).
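
For reference, a sketch of these sharpened q-values (the two-stage step-up procedure of Benjamini, Krieger, and Yekutieli that Anderson's q-values implement; an illustration, not necessarily the exact code we will run) is:

```python
import numpy as np

def sharpened_qvalues(pvals, step=0.001):
    """For each p-value, the smallest FDR level q at which it is rejected by the
    two-stage BH procedure: stage 1 runs BH at q/(1+q); stage 2 re-runs BH at a
    level scaled up by the implied share of true nulls."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    ranks = np.empty(m, dtype=int)
    ranks[np.argsort(p)] = np.arange(1, m + 1)
    qvals = np.ones(m)
    for q in np.arange(1.0, 0.0, -step):
        q1 = q / (1 + q)
        k1 = int(np.where(p <= q1 * ranks / m, ranks, 0).max())   # stage-1 rejections
        q2 = q1 * m / max(m - k1, 1)
        k2 = int(np.where(p <= q2 * ranks / m, ranks, 0).max())   # stage-2 rejections
        qvals[ranks <= k2] = q
    return qvals
```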

Our third primary analysis examines the role of information frictions/inertia in keeping participants on ad settings that differ from their preferences. We define \Delta D_{i0} as an indicator that a participant switched to their baseline preferred setting during the baseline tracking period (Week 1), and \Delta D_{i1} as an indicator that a participant switched to their baseline preferred setting during the intervention period (Weeks 2–3), after being shown how to switch ad settings on each platform; this variable equals 1 even if a participant switched at some point but later switched back. We then regress \Delta D_{i1} - \Delta D_{i0} on the information-treatment indicator, using the information treatment and control groups as the sample. The coefficient on the treatment indicator is the causal effect of the information treatment on switching to one's preferred settings.
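
A minimal sketch of this regression, with hypothetical column names (switched_w1 and switched_w23 for the two switching indicators, info for the information-treatment dummy, arm for assignment), is:

```python
import statsmodels.formula.api as smf

def info_effect_on_switching(df):
    """OLS of the change in switching to one's preferred setting on the
    information-treatment indicator, restricted to the information-treatment
    and control groups."""
    sample = df[df["arm"].isin(["information", "control"])].copy()
    sample["d_switch"] = sample["switched_w23"] - sample["switched_w1"]
    return smf.ols("d_switch ~ info", data=sample).fit(cov_type="HC1")
```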

In addition to the reduced-form models above, we plan to estimate a model of consumer ad-setting decisions to formally characterize the roles of experience and behavioral frictions. The model would allow us to simulate outcomes from counterfactual policies when consumers are provided with additional information/experience and when the behavioral friction is reduced.

Secondary Outcomes

Secondary Outcomes (end points)
Our secondary outcomes of interest include the following metrics:

1. Preference for personalized advertising: stated preference WTAs by ad platform.
2. Subjective welfare measures: ad relevance and distraction ratings by platform, collected in both the baseline and endline surveys.
3. Objective welfare measures:
(a) conversion outcomes.
(b) share of “no ads seen” responses in the pop-up questions.
Secondary Outcomes (explanation)
Our secondary outcomes of interest include the following metrics:

1. Preference for personalized advertising: stated preference WTAs by ad platform. Though stated preferences may be less reliable than revealed preferences, our pilot studies show that the sum of these individual platform preferences is highly correlated with (though much larger than) the incentive-compatible combined WTA measure. For the full experiment, we will recheck the correlation between these WTA measures and provide platform-specific WTA measures if they remain highly correlated with the combined WTA.
2. Subjective welfare measures: ad relevance and distraction ratings by platform, collected in both the baseline and endline surveys. We consider these secondary outcomes because consumers may be unable to discern which display ads come from the Google display ad network, the Meta ad network, etc.
3. Objective welfare measures:
(a) conversion outcomes. We consider this a secondary outcome because we may lack power to detect meaningful differences in conversion outcomes between personalized and non-personalized ads.
(b) share of “no ads seen” responses in the pop-up questions, to test the hypothesis that personalized ads blend in better with the content and are harder to recognize as ads.

Heterogeneity analysis:

- We will examine treatment effect heterogeneity across the following participant dimensions: baseline browsing intensity (number of minutes and number of unique domains), gender, age cohort, and education level (if collected).
- Although we ask participants who have ad blockers not to participate, roughly 10% of pilot participants had an ad blocker installed yet still saw ads. We plan to conduct the analysis both with and without these participants as a robustness check. Ad-blocker users may see a different composition of ads and/or differ in other characteristics from non-users; we will report these differences to the extent that our sample provides sufficient statistical power.

Additional analysis:

- To analyze the nature of preference updating after gaining experience, we will also run the following regression:
Y1_{i1} = \beta_7 + \beta_8 \hat E_i + \rho Y1_{i0} + X_i'\beta_9 + \varepsilon_i.\tag{6}
Here, our goal is to determine whether participants are more likely to strengthen or weaken their existing preference upon gaining experience with the opposite ad setting.
- We will report the ITT (intent-to-treat) version of each of the TOT (treatment-on-treated) regressions in the primary analysis for robustness. Given that the treatment-on-treated regression relies on instruments, if the first-stage estimates indicate weak instruments, we will revert to ITT as the main specification.
- In the control and nudge-switch treatments, our extension checks ad settings daily and reverts participants to the treatment-consistent setting if they manually change it. However, in the rare event that this implementation fails, we will define a continuous variable, S2_i, as the share of days on which a participant has personalized ad settings enabled (a minimal construction sketch follows).
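
A sketch of constructing S2_i from the extension's daily setting checks, with hypothetical column names, is:

```python
import pandas as pd

def share_days_personalized(daily_checks: pd.DataFrame) -> pd.Series:
    """S2_i: share of intervention-period days (Weeks 2-3) on which a participant
    had personalized ad settings enabled. Assumes one row per participant-day
    with a boolean column personalized_on."""
    weeks23 = daily_checks[daily_checks["week"].isin([2, 3])]
    return weeks23.groupby("participant_id")["personalized_on"].mean()
```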

Experimental Design

Experimental Design
See section below.
Experimental Design Details
Not available
Randomization Method
Randomization will be done by a computer.
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
There is no clustering in this experiment. See the next section for the number of observations.
Sample size: planned number of observations
The expected number of observations that contribute valid datapoints is 1,500. We define “contributing valid datapoints” as being active for at least 3 days in both Week 1 and Weeks 2–3, regardless of whether the participant completes the endline survey. Our target number of participants completing the endline survey is 1,000.
Sample size (or number of clusters) by treatment arms
We expect to randomly assign 20% of users to the information treatment group, 40% to the control group, and 40% to the nudge-switch group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Committee on the Use of Human Subjects
IRB Approval Date
2025-01-21
IRB Approval Number
DAT25-0021
IRB Name
UNC-CH Institutional Review Board
IRB Approval Date
2025-08-21
IRB Approval Number
25-2047