Experimental Design
Design.
This study employs a discrete choice experiment (DCE) with a mixed randomized controlled design. The experiment combines a two-arm between-subjects treatment (institutional context: collaborative vs. conflicting) with four within-subject attributes whose levels vary across choice tasks: geographic coverage (5 levels), time period (3 levels), collection of direct experiences (2 levels: yes vs. no), and type of impact evaluation (3 levels).
Scenario description.
All participants are exposed to the same vignette (hypothetical scenario): they are responsible for developing a policy proposal within a three-month deadline addressing a highly relevant issue in their area of competence. They need to select which types of scientific studies should be prioritized for inclusion in an evidence synthesis to support their policy development. Participants are asked to imagine a specific policy problem consistent with the described scenario.
Choice task structure.
Participants complete 8 binary choice tasks, each presenting two alternative types of scientific studies for inclusion in the evidence synthesis. The two study options are presented side-by-side on desktop computers or vertically stacked on mobile devices, with attributes displayed using visual icons and standardized descriptions. In each task, participants must select their preferred study type to include in their evidence synthesis by clicking the corresponding button. Choice tasks are presented one at a time and are self-paced, but must be completed in a single session. Participants cannot return to previous tasks once submitted.
Attribute specification.
Each study type is characterized by four attributes with the following levels:
1. Geographic coverage (five levels, contrasting Italy with four non-Italian regions whose values vary for realism):
• Italy
• Southern Europe (Spain, Greece, Portugal)
• Nordic countries (Sweden, Norway, Denmark)
• Central Europe (Germany, Austria, Switzerland)
• Anglo-Saxon countries (United Kingdom, United States, Canada)
2. Time period analyzed:
• 1990-2000
• 2000-2010
• 2015-2025
3. Collection of direct experiences:
• Yes: the study collects direct experiences from beneficiaries and program managers through in-depth interviews
• No: the study does not collect direct experiences through interviews
4. Type of impact evaluation:
• Before and After Design
• Quasi Experimental Design
• Experimental design
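Under the four attributes and their levels listed above, the full factorial profile space can be enumerated directly (a minimal sketch; the dictionary keys and level labels are illustrative shorthand, not identifiers from any survey platform):

```python
from itertools import product

# Attribute levels as specified above (labels are illustrative shorthand)
ATTRIBUTES = {
    "geographic_coverage": [
        "Italy", "Southern Europe", "Nordic countries",
        "Central Europe", "Anglo-Saxon countries",
    ],
    "time_period": ["1990-2000", "2000-2010", "2015-2025"],
    "direct_experiences": ["yes", "no"],
    "impact_evaluation": [
        "before-after", "quasi-experimental", "experimental",
    ],
}

# Every study profile is one combination of the four attribute levels
profiles = [
    dict(zip(ATTRIBUTES, combo))
    for combo in product(*ATTRIBUTES.values())
]

print(len(profiles))  # 5 * 3 * 2 * 3 = 90 distinct study profiles
```

The 90-profile full factorial is small enough that the fully randomized design described below can draw from it without further restriction.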
Treatment manipulation.
Participants are randomly assigned to one of two between-subject conditions that differ in the institutional climate:
• Treatment group (conflicting climate): Commission members strongly support different policy solutions and show little openness to alternative approaches;
• Control group (collaborative climate): Commission members favor different policy solutions but are open to considering alternative approaches.
Choice sets are generated through a fully randomized design: in each choice set, the values of the four attributes vary randomly and independently across the two options. The only constraint is that the two options cannot be identical.
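The generation rule just described amounts to independent uniform draws with a rejection step for identical pairs. A sketch under that reading (level labels are shorthand; the function names are ours):

```python
import random

ATTRIBUTES = {
    "geographic_coverage": ["Italy", "Southern Europe", "Nordic countries",
                            "Central Europe", "Anglo-Saxon countries"],
    "time_period": ["1990-2000", "2000-2010", "2015-2025"],
    "direct_experiences": ["yes", "no"],
    "impact_evaluation": ["before-after", "quasi-experimental", "experimental"],
}

def random_profile(rng):
    """Draw each attribute level independently and uniformly at random."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def random_choice_set(rng):
    """Two fully randomized profiles; redraw option B while identical to A."""
    option_a = random_profile(rng)
    option_b = random_profile(rng)
    while option_b == option_a:  # the only design constraint
        option_b = random_profile(rng)
    return option_a, option_b

rng = random.Random(42)  # seed for reproducibility of the illustration
tasks = [random_choice_set(rng) for _ in range(8)]  # 8 tasks per participant
```

Because redraws occur only when the two options coincide exactly (a 1-in-90 event per draw), the rejection step leaves the marginal distribution of attribute levels essentially uniform.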
Data collection will be staggered: we will invite participants gradually, spreading the invitations across several weeks. If the response rate for the initial 5,000 emails is lower than 3%, we will use the collected data to generate priors and switch to a D-efficient fractional design, to ensure higher efficiency in the estimation of the main effects. The initial pool of respondents will be assigned to a separate block, which we will control for in the analysis.
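The design-switch rule above reduces to a simple threshold check; the 5,000-invitation batch and 3% cutoff come from the text, while the function name is a hypothetical illustration:

```python
def switch_to_d_efficient(responses: int, invitations: int = 5000,
                          threshold: float = 0.03) -> bool:
    """True when the pilot response rate falls below the 3% threshold,
    triggering the move to a D-efficient fractional design."""
    return responses / invitations < threshold

print(switch_to_d_efficient(120))  # 120/5000 = 2.4% -> True
print(switch_to_d_efficient(200))  # 200/5000 = 4.0% -> False
```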