Impediments to yardstick competition in federal systems: Experimental survey evidence from German citizens

Last registered on May 29, 2020

Pre-Trial

Trial Information

General Information

Title
Impediments to yardstick competition in federal systems: Experimental survey evidence from German citizens
RCT ID
AEARCTR-0005943
Initial registration date
May 29, 2020

First published
May 29, 2020, 3:35 PM EDT

Locations

Region

Primary Investigator

Affiliation
Technical University of Munich

Other Primary Investigator(s)

PI Affiliation
ZEW Mannheim
PI Affiliation
ifo Institute
PI Affiliation
ZEW Mannheim

Additional Trial Information

Status
In development
Start date
2020-06-03
End date
2021-12-31
Secondary IDs
Abstract
A key argument for policy competition between sub-national entities is that direct comparison of policy outcomes yields efficient public service provision. However, the necessary condition to reap these efficiency gains – that voters can observe policy outcomes – is often not fulfilled. A case in point is the education system of Germany, where the legislative and executive power over public education is vested in the federal states, but regular cross-state comparative student assessment data is largely lacking. To study the underlying political-economy reasons for this lack of comparability across states, we conduct parallel surveys among German state parliamentarians and German citizens.
We elicit citizens’ preferences for comparability in a representative online survey with 10,000 respondents who are randomly assigned to one of five experimental groups. Respondents in group 1 state their (i) satisfaction with their state’s education policy and (ii) support for comparability-enhancing policies without receiving any information. Respondents in groups 2 and 3 answer the same questions after being informed about their state’s educational performance relative to the other German states. Respondents in groups 4 and 5 have the option to retrieve educational-performance information by clicking a link before answering the outcome questions. The experimental results will shed light on the determinants of public support for enhanced comparative testing of education outcomes, which is a necessary condition to foster education-policy competition.
External Link(s)

Registration Citation

Citation
Blesse, Sebastian et al. 2020. "Impediments to yardstick competition in federal systems: Experimental survey evidence from German citizens." AEA RCT Registry. May 29. https://doi.org/10.1257/rct.5943-1.0
Experimental Details

Interventions

Intervention(s)
We investigate whether providing citizens with information about their state’s relative educational performance affects their satisfaction with education policy and their support for increased comparative educational testing. Survey respondents are randomized into five experimental groups and complete five consecutive stages. In stage 1, they state the importance of cross-state comparability of educational performance. In stage 2, they estimate the rank of their state in the latest comparative math-achievement test. In stages 3 and 4, respondents state their satisfaction with their state’s education policy and their support for the policy proposal to enhance comparative educational testing. In group 1, respondents answer these two questions without any further information. In groups 2 and 3, they receive information about their state’s relative educational performance. In groups 4 and 5, respondents have the option to retrieve this educational-performance information by clicking a link. In stage 5, all respondents estimate the share of state parliamentarians who support the policy proposal to enhance comparative educational testing.
We implement a second, independently randomized experiment when eliciting policy support in stage 4: before answering this question, half of the respondents are informed that the average answer will be forwarded to their state politicians. This allows us to assess whether highlighting the consequentiality of the survey answers affects response behavior.
Intervention Start Date
2020-06-03
Intervention End Date
2020-06-17

Primary Outcomes

Primary Outcomes (end points)
Citizens’ satisfaction with their state’s education policy, and support for the policy proposal to enhance comparative educational testing.
Primary Outcomes (explanation)
Satisfaction and policy support in TREATMENT 1 (elicited in stages 3 and 4) will serve as a benchmark. Comparing these answers to TREATMENTS 2 and 3 will allow us to assess the causal effect of providing relative educational-performance information. Comparing them to TREATMENTS 4 and 5 reveals the causal effect of offering the option to acquire this information.
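With individual-level randomization, these planned comparisons amount to simple differences in mean outcomes between the benchmark arm and the (pooled) information arms. A minimal sketch on simulated data (the variable names, outcome scale, and simulated effect size are ours for illustration only; the registration specifies no effect sizes):

```python
import random
import statistics

rng = random.Random(1)

# Simulate policy support for a benchmark arm and a pooled information arm
# (purely synthetic numbers, sized like the planned arms: 3,333 vs. 1,666 + 1,666)
control = [rng.gauss(3.0, 1.0) for _ in range(3333)]
treated = [rng.gauss(3.3, 1.0) for _ in range(1666 + 1666)]  # pooled arms 2 and 3

def mean_difference(t, c):
    """Estimated average treatment effect: difference in group means."""
    return statistics.fmean(t) - statistics.fmean(c)

ate = mean_difference(treated, control)
print(round(ate, 2))  # close to the simulated effect of 0.3
```

The same comparison, applied to the pooled arms 4 and 5, would estimate the effect of merely offering the information.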

Secondary Outcomes

Secondary Outcomes (end points)
Heterogeneity by baseline importance of comparability, prior beliefs about the state’s performance, actual relative performance, and political ideology.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct the experiment in a sample of 10,000 adults aged between 18 and 69 years. The survey is carried out in cooperation with the survey institute respondi, which manages recruitment and polling and collects the data via an online platform. That is, our participants answer the survey questions autonomously on their own digital devices. Randomization is carried out by respondi at the individual level, using a computer.

Our experiment is structured as follows:
Respondents will be randomly assigned (between-subject design) to treatment 1 (p=1/3), treatment 2 (p=1/6), treatment 3 (p=1/6), treatment 4 (p=1/6), or treatment 5 (p=1/6). In our main analysis, we plan to pool treatments 2 and 3, and treatments 4 and 5, respectively.
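The assignment scheme above can be sketched as follows (purely illustrative: the registration does not describe respondi's actual randomization software, and all names in the snippet are ours):

```python
import random

# Treatment arms and their assignment probabilities, as stated in the design
ARMS = [1, 2, 3, 4, 5]
PROBS = [1/3, 1/6, 1/6, 1/6, 1/6]

def assign_treatment(rng: random.Random) -> int:
    """Draw one respondent's treatment arm at the individual level."""
    return rng.choices(ARMS, weights=PROBS, k=1)[0]

rng = random.Random(2020)  # fixed seed so the sketch is reproducible
assignments = [assign_treatment(rng) for _ in range(10_000)]

# Expected counts: ~3,333 in arm 1 and ~1,667 in each of arms 2-5
counts = {arm: assignments.count(arm) for arm in ARMS}
print(counts)
```

Pooling treatments 2 and 3 (and 4 and 5) for the main analysis then simply relabels the drawn arms.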


Treatment 1:
Stage 1: Perceived importance of cross-state comparability.
Stage 2: Beliefs about state performance.
Stage 3: Satisfaction with state’s education policy.
Stage 4: Policy support for increased comparative educational testing.
Stage 5: Beliefs about parliamentarian support for enhanced-comparability policy.

Treatment 2:
Stage 1: Perceived importance of cross-state comparability.
Stage 2: Beliefs about state performance.
Stage 3: Satisfaction with state’s education policy + information about own performance.
Stage 4: Policy support for increased comparative educational testing.
Stage 5: Beliefs about parliamentarian support for enhanced-comparability policy.

Treatment 3:
Stage 1: Perceived importance of cross-state comparability.
Stage 2: Beliefs about state performance.
Stage 3: Satisfaction with state’s education policy + information about performance rank.
Stage 4: Policy support for increased comparative educational testing.
Stage 5: Beliefs about parliamentarian support for enhanced-comparability policy.

Treatment 4:
Stage 1: Perceived importance of cross-state comparability.
Stage 2: Beliefs about state performance.
Stage 3: Satisfaction with state’s education policy + information acquisition about own performance.
Stage 4: Policy support for increased comparative educational testing.
Stage 5: Beliefs about parliamentarian support for enhanced-comparability policy.

Treatment 5:
Stage 1: Perceived importance of cross-state comparability.
Stage 2: Beliefs about state performance.
Stage 3: Satisfaction with state’s education policy + information acquisition about performance rank.
Stage 4: Policy support for increased comparative educational testing.
Stage 5: Beliefs about parliamentarian support for enhanced-comparability policy.
Experimental Design Details
Randomization Method
Randomization is carried out by the survey company respondi, using a computer.
Randomization Unit
at the individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
10,000 individuals (adults aged between 18 and 69 years)
Sample size: planned number of observations
10,000 individuals (adults aged between 18 and 69 years)
Sample size (or number of clusters) by treatment arms
Treatment 1: 3,333; treatments 2 to 5: 1,666 each.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials