Survey experiments on policymakers' perspectives towards evaluation and experimentation

Last registered on September 19, 2025

Pre-Trial

Trial Information

General Information

Title
Survey experiments on policymakers' perspectives towards evaluation and experimentation
RCT ID
AEARCTR-0016824
Initial registration date
September 19, 2025

First published
September 19, 2025, 10:28 AM EDT

Locations

Region

Primary Investigator

Affiliation
Innovation Growth Lab

Other Primary Investigator(s)

PI Affiliation
Innovation Growth Lab
PI Affiliation
Innovation Growth Lab

Additional Trial Information

Status
In development
Start date
2025-09-19
End date
2025-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The Innovation Growth Lab (IGL) and Technopolis Group will conduct an online survey of policymakers and policy practitioners across Europe to assess their current level of understanding of policy experimentation, the drivers of and barriers to the use of experimentation, and their views on the potential for a community of practice.

The survey incorporates four randomised experiments:
- A conjoint (discrete-choice) experiment designed to elicit respondents’ willingness to engage in randomised experimentation, and to identify which conditions are conducive to this.
- An A/B test to determine whether respondents’ perceptions of their understanding of experimentation are affected by whether they are prompted with a particularly rigorous definition of what is meant by experimentation.
- An A/B test to examine whether respondents are more supportive of policy experimentation if they are first told of the results of a previous survey in which a majority of the general public in several European countries expressed generally supportive views about experimentation.
- An A/B test to examine whether respondents have different reactions to the terms “randomised experimentation” or “randomised controlled trials (RCTs)”, when asked whether there should be more or less of this type of experiment in research and innovation policy.

Registration Citation

Citation
Brackin, Maria, Rob Fuller and James Phipps. 2025. "Survey experiments on policymakers' perspectives towards evaluation and experimentation." AEA RCT Registry. September 19. https://doi.org/10.1257/rct.16824-1.0
Sponsors & Partners

Sponsors

Partner

Type
private_company
Experimental Details

Interventions

Intervention(s)
The interventions will involve adjustments to the wording of several different questions in the survey – including a conjoint experiment and three A/B tests. Further details are in the attached statistical analysis plan.
Intervention (Hidden)
Intervention Start Date
2025-09-19
Intervention End Date
2025-10-31

Primary Outcomes

Primary Outcomes (end points)
The conjoint experiment will test the extent to which policymakers prefer to carry out a randomized controlled trial rather than other types of evaluation, and how this is affected by the purpose of the evaluation, the timeframe in which results are needed, and the budget, set-up time and support available.

Outcomes for the A/B tests are as follows:
- Self-assessed level of understanding of policy experimentation
- Level of own support for randomized experimentation
- Expected level of support for randomized experimentation among those who are served or most closely affected by policies
- Opinion about whether there should be more or less experimentation in R&I policy
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The structure of the conjoint (discrete-choice) experiment follows the approach set out by Hainmueller, Hopkins & Yamamoto (2014).
Respondents will be presented with two scenarios for a programme evaluation, each of which consists of a random assortment of the pre-defined levels of each of the six attributes. They will be asked which of the two scenarios they prefer. Each respondent will be asked to make five such pairwise comparisons.
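The profile-generation step described above can be sketched as follows. This is an illustrative sketch only: the attribute names and levels used here are hypothetical placeholders, not the trial's actual six attributes, which are defined in the attached survey instrument.

```python
import random

# Hypothetical placeholder attributes and levels, for illustration only.
# The trial's actual six attributes are defined in the survey instrument.
ATTRIBUTES = {
    "evaluation_type": ["RCT", "quasi-experimental", "before-after"],
    "purpose": ["learning", "accountability"],
    "timeframe": ["6 months", "2 years"],
    "budget": ["low", "medium", "high"],
    "setup_time": ["short", "long"],
    "support": ["none", "external expert support"],
}

def draw_profile(rng: random.Random) -> dict:
    """One scenario: a random assortment of one level per attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def conjoint_tasks(rng: random.Random, n_tasks: int = 5) -> list:
    """Five pairwise comparisons per respondent, two profiles each."""
    return [(draw_profile(rng), draw_profile(rng)) for _ in range(n_tasks)]

tasks = conjoint_tasks(random.Random(0))
print(len(tasks))      # 5 pairwise comparisons
print(len(tasks[0]))   # 2 profiles per comparison
```

Each respondent would then state a preference within each pair, yielding the choice data used to estimate average marginal component effects.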

In the A/B tests, each respondent will be randomly assigned to one of two arms. The arms are set out in the attached analysis plan.
Experimental Design Details
Randomization Method
Randomization will be carried out within the Medallia survey interface, using simple randomization with no stratification. Randomization for each of the experiments will be carried out independently.
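Independent simple randomization of this kind can be sketched as below. This is an illustration in Python only; the trial itself randomizes within the Medallia survey interface, and the experiment names here are placeholders.

```python
import random

# Placeholder labels for the three A/B tests (illustrative, not the
# trial's internal identifiers).
EXPERIMENTS = ["definition_prompt", "public_opinion_prime", "terminology"]

def assign_arms(rng: random.Random) -> dict:
    """Assign one respondent to arm A or B in each A/B test,
    independently across experiments, with no stratification."""
    return {exp: rng.choice(["A", "B"]) for exp in EXPERIMENTS}

# Example: independent arm assignments for 400 respondents
rng = random.Random(2025)
assignments = [assign_arms(rng) for _ in range(400)]
```

Because each draw is independent, a respondent's arm in one experiment carries no information about their arm in another.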
Randomization Unit
Individual survey respondent
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
400 individual survey respondents
Sample size: planned number of observations
400 individual survey respondents
Sample size (or number of clusters) by treatment arms
Not applicable for the conjoint experiment. For the A/B tests, there are planned to be 200 respondents in each arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Conjoint experiment: With 400 respondents each carrying out five pairwise comparisons in the experiment, the effective sample size (as defined by Schuessler and Freitag (2020)) will be 4,000. This will provide 80% power to detect:
- An average marginal component effect (AMCE) of 5.5 percentage points for attributes with three levels (including our primary AMCE analysis), and 4.5 percentage points for attributes with two levels.
- An average marginal component interaction effect (AMCIE) of 10.7 percentage points for the interaction between an attribute with three levels (including whether the RCT is selected) and one with two levels, or 13.1 percentage points for the interaction between two attributes each with three levels.

The A/B tests will each have 80% power to detect an effect of 0.28 standard deviations of each of the outcome measures. In the attached analysis plan, these are converted into effect sizes on the measurement scales used in the survey, based on pilot data. This power analysis does not take account of the potential explanatory power of covariates (which would reduce the detectable effect size somewhat) nor the correction for multiple hypothesis testing (which would increase it).
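The 0.28 standard-deviation figure for the A/B tests can be reproduced with the standard normal-approximation formula for a two-arm comparison of means; this sketch uses only the stated design (200 per arm, two-sided 5% test, 80% power) and is not taken from the registration's own calculations.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_arm(n_per_arm: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect in standard-deviation units for a
    two-arm comparison of means (normal approximation):
    MDE = (z_{1-alpha/2} + z_{power}) * sqrt(1/n1 + 1/n2)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = z.inv_cdf(power)
    return (z_alpha + z_power) * sqrt(1 / n_per_arm + 1 / n_per_arm)

# A/B tests: 200 respondents in each arm
print(round(mde_two_arm(200), 2))  # 0.28
```

As the registration notes, adjusting for covariates would shrink this figure and a multiple-testing correction would enlarge it; the formula above reflects neither.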
Supporting Documents and Materials

Documents

Document Name
Survey instrument
Document Type
survey_instrument
Document Description
File
Survey instrument

MD5: 2a302ad3128d4f77adb9591ec3026fc7

SHA1: b89f4a3c0d4ff4c1e89bce42c7306ad4f770627b

Uploaded At: September 19, 2025

IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents

Analysis plan

MD5: 7f48e9c999f448752e9e4d80b0e690a4

SHA1: f8efe7f7b71e080935c25ccac53cc2bbcd99be6b

Uploaded At: September 19, 2025

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public; access can be requested via the Registry.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials