Behavioral Incentive Compatibility and Personalized Interventions

Last registered on April 14, 2026

Pre-Trial

Trial Information

General Information

Title
Behavioral Incentive Compatibility and Personalized Interventions
RCT ID
AEARCTR-0018307
Initial registration date
April 10, 2026

The initial registration date is when the registration was submitted to the Registry to be reviewed for publication.

First published
April 14, 2026, 9:06 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Caltech

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2026-04-20
End date
2026-10-20
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines whether reports in an incentivized measurement task change when participants know that their reports will affect future options. In many real-world settings, information elicited from individuals is used not only to measure an underlying type, but also to determine recommendations, menus, or allocations. If individuals anticipate these downstream consequences, they may adjust their reports accordingly. The experiment studies this possibility in a controlled induced-value environment in which the researcher observes the underlying type. Participants are assigned a token value and complete a multiple price list that determines a reported switch point. In a subsequent allocation stage, the report may affect the set of alternatives available to the participant according to a treatment-specific allocation rule. The design varies these future consequences across treatments to test when elicitation continues to recover the underlying value and when it does not. The main outcomes are deviation from the matching report and the fraction of matching reports, with additional analysis of optimal reporting, learning, and heterogeneity in understanding of incentives. The study also uses post-experimental quiz and survey measures to examine whether reporting behavior is associated with failures to integrate incentives, backward induction, narrow framing, lying aversion, and consideration of future consequences. The experiment is conducted online on Prolific using a between-subjects design.
External Link(s)

Registration Citation

Citation
Farres Rodriguez, Camila. 2026. "Behavioral Incentive Compatibility and Personalized Interventions." AEA RCT Registry. April 14. https://doi.org/10.1257/rct.18307-1.0
Experimental Details

Interventions

Intervention(s)
Participants complete an online incentivized experiment consisting of 20 trials. In each trial, a participant is assigned an induced-value token, completes a multiple price list that determines a reported switch point, and then chooses between two alternatives in a subsequent allocation stage. The intervention is the treatment-specific rule that maps the participant’s token value and reported switch point into the two alternatives offered in the allocation stage. Participants are randomly assigned to one of seven between-subjects treatment arms and face the same rule throughout the experiment. Across treatments, the allocation rule varies whether reporting has no future consequence, creates incentives to misreport upward or downward, or introduces uncertainty in how reports affect later options, including cases in which incentive compatibility is preserved. This design allows the study to isolate how anticipated downstream consequences of reporting affect behavior in an otherwise standard incentivized measurement task. Refer to the Analysis Plan, Section 2: Experimental Design, for additional information.
Intervention Start Date
2026-04-20
Intervention End Date
2026-10-20

Primary Outcomes

Primary Outcomes (end points)
The primary outcomes are two measures of reporting behavior in the incentivized trials: the difference between the participant’s reported switch point and the assigned token value, and an indicator for whether the participant’s reported switch point exactly matches the assigned token value. Both are measured for each participant in each of the 20 incentivized trials. Refer to the Analysis Plan, Section 3.2: Data, Outcomes, and Analysis Overview, for additional information.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes include the absolute distance between the participant’s report and the treatment-specific payoff-maximizing report; an indicator for whether the participant exactly submits that payoff-maximizing report; participant-level summary measures of reporting behavior (each participant’s average deviation from the assigned token value, average matching rate, average distance from the treatment-specific payoff-maximizing report, and average rate of exact optimal reporting across incentivized trials); and measures of responses after costly versus non-costly misreports. Additional secondary outcomes include quiz-based participant categories; classifications based on backward induction, narrow framing, and lying aversion; survey responses on consideration of future consequences; practice-question performance; and dominance violations. Refer to the Analysis Plan, Sections 2 and 3, for additional information.
Secondary Outcomes (explanation)
The participant-level summary outcomes are constructed by averaging trial-level behavior across incentivized rounds for each participant. These include the participant’s average signed deviation between the reported switch point and the assigned token value, the share of incentivized rounds in which the participant exactly matches the token value, the average absolute distance between the participant’s report and the treatment-specific payoff-maximizing report, and the share of incentivized rounds in which the participant exactly submits that payoff-maximizing report. Costly-misreport response measures are constructed by identifying rounds in which the participant’s report differs from the token value and the realized BDM draw makes that misreport payoff-relevant, and then tracking next-round adjustment.
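The participant-level averaging described above is mechanical, so a minimal sketch may help fix ideas. The column names (`participant`, `report`, `token_value`, `optimal_report`) are illustrative assumptions, not the study's actual data schema:

```python
import pandas as pd

def participant_summaries(trials: pd.DataFrame) -> pd.DataFrame:
    """Average trial-level reporting behavior within participant.

    Expects one row per (participant, round) with hypothetical columns:
    report, token_value, optimal_report.
    """
    df = trials.copy()
    # Signed deviation of the reported switch point from the token value.
    df["deviation"] = df["report"] - df["token_value"]
    # Indicator for an exact match with the token value.
    df["match"] = (df["report"] == df["token_value"]).astype(int)
    # Absolute distance from the treatment-specific payoff-maximizing report.
    df["dist_optimal"] = (df["report"] - df["optimal_report"]).abs()
    # Indicator for exactly submitting the payoff-maximizing report.
    df["exact_optimal"] = (df["report"] == df["optimal_report"]).astype(int)
    return df.groupby("participant")[
        ["deviation", "match", "dist_optimal", "exact_optimal"]
    ].mean()

# Toy example with two participants and two rounds each.
trials = pd.DataFrame({
    "participant":    [1, 1, 2, 2],
    "report":         [50, 48, 50, 50],
    "token_value":    [50, 50, 50, 50],
    "optimal_report": [50, 52, 50, 50],
})
summary = participant_summaries(trials)
```

The same trial-level columns also feed the primary-outcome analysis, so constructing them once and averaging by participant keeps the two sets of outcomes consistent.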

Quiz-based categories are constructed from the first block of quiz questions based on which of the questions on total incentives, Measurement Stage incentives, and Allocation Stage incentives are answered correctly. The narrow-framing indicator is constructed from whether the participant gives different responses to two equivalent lottery problems presented in different formats. The backward-induction indicator is constructed from whether the participant reveals the payoff boxes and chooses the action consistent with backward induction. The lying-aversion indicator is constructed by comparing the participant’s reported roulette outcome to the realized draw. The consideration-of-future-consequences responses are constructed from the average answers to six survey items asked at the end of the study. Practice performance is based on the number of attempts required to answer practice questions correctly. Dominance violations are constructed by counting cases in which a participant chooses the lower monetary option in the Allocation Stage. Refer to the Analysis Plan, Sections 2 and 3, for additional information.

Experimental Design

Experimental Design
This study is an online, between-subjects experiment conducted on Prolific. Participants are randomly assigned at the individual level to one of seven treatment arms and face the same treatment throughout the study. After consent and a bot check, participants read instructions with comprehension questions and complete a practice trial with open-response practice questions that must be answered correctly before they can continue. The main experiment consists of 20 incentivized trials. In each trial, the participant is assigned an induced-value token, completes a multiple price list that determines a reported switch point, chooses between two alternatives in an allocation stage, and then receives feedback on trial earnings. The intervention is the treatment-specific allocation rule that determines how the token value and reported switch point affect the alternatives offered in the allocation stage. Following the trials, participants complete an incentivized post-experimental quiz and a non-incentivized exit survey. Participants receive a fixed completion payment, additional earnings for correct first-attempt practice answers, and a probabilistic bonus based on either one randomly selected trial or one randomly selected quiz question. Refer to the Analysis Plan, Section 2: Experimental Design, for additional information.
Experimental Design Details
Not available
Randomization Method
Randomization is done by computer at the individual level, with approximately equal assignment across the seven treatment arms. Refer to the Analysis Plan, Section 3.4.2: Power and Sample Size, for additional information.
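Computer randomization with approximately equal arm sizes is typically implemented by shuffling a balanced list of arm labels. A minimal sketch (the function name and seed are illustrative, not the study's actual code):

```python
import random

def assign_arms(n_participants: int, n_arms: int = 7, seed: int = 0) -> list:
    """Randomly assign participants to arms with approximately equal counts.

    Builds a list of arm labels that is as balanced as n_participants
    allows, then shuffles it so the assignment order is random.
    """
    rng = random.Random(seed)
    # Cycle through arm labels 0..n_arms-1 until every participant has one.
    arms = [i % n_arms for i in range(n_participants)]
    rng.shuffle(arms)
    return arms

assignment = assign_arms(700)
```

With 700 participants and 7 arms this yields exactly 100 per arm before attrition; with a sample size not divisible by 7, arm counts would differ by at most one.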
Randomization Unit
The unit of randomization is the individual participant. Each participant is randomly assigned to one treatment arm and remains in that treatment for the full experiment.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
700 individual participants.
Sample size: planned number of observations
700 individual participants, each completing 20 incentivized trials (14,000 trial-level observations).
Sample size (or number of clusters) by treatment arms
Approximately 100 individual participants per treatment arm, with equal planned assignment across the 7 treatment arms; realized completed sample sizes may be slightly above or below 100 per arm due to attrition. Refer to the Analysis Plan, Section 3.4.2: Power and Sample Size, for additional information.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations are based on simulation using the pilot data and the planned analysis specification, including repeated observations per participant, participant-level clustering, token fixed effects, round fixed effects, and treatment-by-token interactions. The study is powered for the two primary outcomes: mean deviation from matching and the fraction of matching reports. Mean deviation from matching is measured in switch-point units, and the fraction of matching reports is measured in percentage points. Because the planned inference is based on joint Wald tests in the full clustered design, I do not use a single closed-form minimum detectable effect size; instead, sample size is chosen to achieve approximately 80% power for the planned treatment-versus-control tests. Refer to the Analysis Plan, Section 3.4.2: Power and Sample Size, for additional information.
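The registered power calculations use the pilot data and the full clustered Wald-test specification; the logic of simulation-based power can nonetheless be illustrated with a much simpler stand-in. The sketch below simulates the fraction-of-matching-reports outcome with a participant-level shock (the clustering) and compares participant-level means across two arms with a t-type test. All parameter values and the two-sample test are illustrative assumptions, not the study's registered specification:

```python
import random
import statistics

def simulate_power(n_per_arm=100, n_rounds=20, p_control=0.7,
                   effect=-0.15, sigma_participant=0.15,
                   n_sims=200, seed=1):
    """Simulation-based power for a binary matching outcome.

    Each participant's matching probability is a base rate plus a
    participant-level Gaussian shock (inducing within-participant
    correlation across rounds). Power is the rejection rate of a
    two-sample test on participant-level mean matching rates --
    a deliberate simplification of the registered clustered Wald tests.
    """
    rng = random.Random(seed)

    def participant_mean(p_base):
        # Participant-specific matching probability, clamped to [0, 1].
        p = min(max(p_base + rng.gauss(0, sigma_participant), 0.0), 1.0)
        return sum(rng.random() < p for _ in range(n_rounds)) / n_rounds

    rejections = 0
    for _ in range(n_sims):
        ctrl = [participant_mean(p_control) for _ in range(n_per_arm)]
        treat = [participant_mean(p_control + effect) for _ in range(n_per_arm)]
        # Welch-style t statistic on participant-level means.
        m1, m2 = statistics.mean(ctrl), statistics.mean(treat)
        v1, v2 = statistics.variance(ctrl), statistics.variance(treat)
        se = (v1 / n_per_arm + v2 / n_per_arm) ** 0.5
        t = (m1 - m2) / se
        if abs(t) > 1.96:  # normal approximation to the critical value
            rejections += 1
    return rejections / n_sims
```

Running `simulate_power()` returns the share of simulated datasets in which the treatment effect is detected at the 5% level; increasing `n_sims` reduces simulation noise in that estimate.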
IRB

Institutional Review Boards (IRBs)

IRB Name
Caltech Institutional Review Board
IRB Approval Date
2025-04-09
IRB Approval Number
IR25-1538
Analysis Plan

There is information in this trial unavailable to the public.