Incentives, Expectations, and Political Identity

Last registered on April 03, 2025

Pre-Trial

Trial Information

General Information

Title
Incentives, Expectations, and Political Identity
RCT ID
AEARCTR-0015660
Initial registration date
March 27, 2025

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 03, 2025, 12:32 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
University of Mississippi

Other Primary Investigator(s)

PI Affiliation
University of Oxford

Additional Trial Information

Status
In development
Start date
2025-03-27
End date
2025-05-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We investigate how different survey design features influence respondent behavior, with particular attention to how these features interact with respondents’ self-reported characteristics.







External Link(s)

Registration Citation

Citation
Rholes, Ryan and Alena Wabitsch. 2025. "Incentives, Expectations, and Political Identity." AEA RCT Registry. April 03. https://doi.org/10.1257/rct.15660-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We manipulate various features of a survey used to elicit respondent beliefs.
Intervention (Hidden)
We investigate how survey design influences economic expectations reported by individuals with differing political affiliations, focusing on whether the effect of incentives varies around a major shift in political power. Specifically, we conduct the study in multiple waves both before and after the 2024 presidential election to assess whether changes in political leadership alter how participants incorporate information and respond to monetary rewards for accurate forecasts. More concretely, we elicit expectations using flat-fee payments and various forms of marginal incentives. Additionally, we implement a standard information-provision RCT to study how incentive structure influences learning rates across self-reported political identity and whether and how this changes when political power changes.
Intervention Start Date
2025-03-27
Intervention End Date
2025-05-01

Primary Outcomes

Primary Outcomes (end points)
We are primarily interested in (1) how different incentives influence point expectations of inflation (and other macroeconomic variables) across self-reported political identity; (2) whether and how this changes after a shift in political power; (3) as in (1) and (2), but regarding the link between expectations and perceptions of inflation across political parties; and (4) as in (1) and (2), but for learning rates in an information-provision RCT.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We randomly assign subjects recruited via Prolific to variations of a survey meant to elicit the same distribution of beliefs. Our interest is in how different survey designs influence the underlying distribution of beliefs we collect.
Experimental Design Details
We implement a multi-wave survey experiment, with data collection occurring just prior to the 2024 U.S. presidential election and then again shortly afterward. During each wave, participants answer questions about their demographic profile, political affiliation, and economic expectations—focusing on outcomes such as inflation, gas prices, or general economic conditions. In both the pre- and post-election surveys, we will randomly assign participants to different versions of the questionnaire. Specifically, these versions will vary in the incentives we use to elicit expectations. More concretely, we vary whether subjects face marginal monetary incentives tied to forecast accuracy. We do this in an RCT framework, sometimes incentivizing priors and posteriors, sometimes neither, and sometimes only one of the two. We will invite the same participants in both waves so that we can observe within-subject changes over time; however, we will also supplement the second wave with additional, newly recruited subjects for cross-sectional comparisons to mitigate concerns about survey learning effects confounding our within-subject results.

The experiment’s core objective is to identify how incentivization influences reported beliefs across self-reported political identities. In particular, our interest is in whether imposing marginal incentives can mitigate or resolve long-standing empirical puzzles related to inflation expectations and political identity. Because the pre- and post-election timing captures a substantial real-world event, we can also measure how shifts in political power or in participants’ perceptions of that power affect their reliance on certain types of information or incentives. For instance, we will compare how participants respond to expert forecasts or to performance-based bonuses before and after the election results are clear. This setup enables us to track whether heightened or diminished trust in government outcomes, stemming from election results, manifests in systematic changes to economic expectations or willingness to exert effort in making accurate predictions.

Participants will receive a baseline show-up payment for each wave, but the precise bonus scheme will vary by treatment. In the performance-pay conditions, if participants’ forecasts are sufficiently close to realized metrics (for instance, the actual inflation rate published months later), they receive an additional bonus. If participants are in a control condition with no performance-based incentives, they simply receive a fixed compensation for completing the survey. Each wave’s questionnaire will last approximately 15 to 25 minutes and include attention checks to ensure data quality. At the conclusion of both waves, participants will be debriefed about the nature of the study and provided with details on their final compensation, including any performance-based bonus that can only be computed once relevant economic statistics become available. Through this design, we aim to shed light on how political events and various survey design elements together shape individuals’ stated economic beliefs.
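A threshold bonus rule of the kind described above can be sketched as follows. The registration does not state the actual tolerance or bonus amounts, so the parameter values here (a 0.5 percentage-point tolerance and a flat $1.00 bonus) are purely illustrative:

```python
def accuracy_bonus(forecast, realized, tolerance=0.5, bonus=1.00):
    """Pay a flat bonus if the forecast falls within `tolerance`
    percentage points of the realized value, else pay nothing.
    Parameter values are illustrative, not the study's actual scheme."""
    return bonus if abs(forecast - realized) <= tolerance else 0.0
```

For example, a forecast of 3.0% against a realized rate of 3.4% would earn the bonus under this illustrative rule, while a forecast of 5.0% would not.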
Randomization Method
Randomization is performed by software coded in oTree; the experiment is deployed on Prolific.
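Individual-level randomization of this kind is commonly implemented as block-balanced assignment at session creation. The sketch below is a generic illustration of that pattern in plain Python, not the study's actual oTree code; the function name and arguments are hypothetical:

```python
import random

def assign_treatments(participant_ids, arms, seed=None):
    """Assign each individual to a treatment arm, shuffling arms
    within consecutive blocks of len(arms) participants so that
    arm counts stay (near-)balanced as recruitment proceeds."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    assignments = {}
    for start in range(0, len(ids), len(arms)):
        block = ids[start:start + len(arms)]
        shuffled = rng.sample(arms, k=len(block))
        assignments.update(zip(block, shuffled))
    return assignments
```

With 8 participants and 4 arms, each arm is assigned exactly twice, mirroring the planned 250-per-arm balance at scale.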
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We do not cluster.
Sample size: planned number of observations
1000 participants in our original wave. Our goal is to call back all of those 1000 participants in our second wave and to include roughly 500 new participants in our second wave.
Sample size (or number of clusters) by treatment arms
250 per treatment in the original wave. We aim for 250 recalls per treatment in the second wave, plus roughly 125 to 250 fresh recruits per treatment in the follow-up wave.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We conducted a power analysis to determine the sample size required to detect various effect magnitudes (measured by Cohen’s d) with a significance level of 0.05 and a power of 0.80. Cohen’s d is the standardized mean difference between the treatment and control groups, calculated as (M1 – M2) / SD_pooled. Conventional interpretations label d=0.2 as a small effect, d=0.5 as medium, and d=0.8 as large. Based on these guidelines, we decided on a sample size of 250 subjects per treatment arm, which allows sufficient power to detect small differences (or precisely estimate a null effect) at a one-percent level of significance (with power equal to 0.8) for typical between-group comparisons.
IRB

Institutional Review Boards (IRBs)

IRB Name
The Department of Economics Departmental Research Ethics Committee (University of Oxford)
IRB Approval Date
2023-12-15
IRB Approval Number
ECONCIA23-24-04

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials