Building State Capacity for Evidence Use: A Randomized Controlled Trial in the Democratic Republic of Congo

Last registered on January 02, 2025

Pre-Trial

Trial Information

General Information

Title
Building State Capacity for Evidence Use: A Randomized Controlled Trial in the Democratic Republic of Congo
RCT ID
AEARCTR-0015097
Initial registration date
December 30, 2024

First published
January 02, 2025, 7:27 AM EST

Locations

Primary Investigator

Affiliation
ETH Zurich

Other Primary Investigator(s)

PI Affiliation
Radboud University
PI Affiliation
ETH Zürich

Additional Trial Information

Status
Ongoing
Start date
2024-10-10
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Impact evaluations using experimental and quasi-experimental methods provide rigorous evidence on the effectiveness of policies and programs, making them essential tools for evidence-based policymaking. As such, the use of impact evaluations offers policymakers an opportunity to make better-informed decisions, allocate scarce resources more effectively, and design policies that can address deep economic and social disparities. This is especially relevant for countries with weak state capacity, limited resources, and high levels of inequality, where targeted and cost-effective interventions are necessary to achieve meaningful change. Despite the clear potential of impact evaluations to guide policy decisions, their use remains limited due to barriers such as competing priorities, language barriers, limited capacity to interpret evidence, and a lack of actionable frameworks for implementation. While the existing literature highlights these barriers, less attention has been paid to how enhancing policymakers' capacity to engage with and apply impact evaluations can bridge the gap between evidence generation and its practical use in policymaking. In countries like the DRC, where the need for effective decision-making is high but the capacity to act on available evidence is often limited, strengthening the ability to use impact evaluations becomes even more essential. This study evaluates the impact of a two-day training program on impact evaluations for policymakers in Kinshasa, DRC, using a staggered randomized controlled trial with 430 participants from various ministries and government branches. The study leverages the existing centralized training structures for most policymakers, taking advantage of their shared background and the homogeneity of their training. By combining this infrastructure with access to impact evaluations, the study explores how capacity-building interventions influence policymakers’ understanding of evidence, their attitudes towards evidence, as well as their practices.
External Link(s)

Registration Citation

Citation
Günther, Isabel, Ghislaine Tegha Megha-Bongnkar and Laura Metzger. 2025. "Building State Capacity for Evidence Use: A Randomized Controlled Trial in the Democratic Republic of Congo." AEA RCT Registry. January 02. https://doi.org/10.1257/rct.15097-1.0
Experimental Details

Interventions

Intervention(s)
This study involves a randomized controlled trial among 429 public administrators in Kinshasa, Democratic Republic of Congo, to evaluate the impact of training on policymakers' capacity to understand and use evidence. Participants are randomly assigned to receive a two-day, in-person training program focused on impact evaluations. The program includes video lectures on the theory and methods of impact evaluations, covering topics such as types of evaluation methods and the concepts of internal and external validity. These lectures are complemented by interactive workshops designed to apply the theoretical knowledge to practical scenarios. As part of the intervention, participants also receive tailored evidence in the form of impact evaluation studies that align with their specific policy needs and/or interests.
Intervention Start Date
2025-01-27
Intervention End Date
2025-07-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variables in this study aim to assess the impact of the training intervention on policymakers' learning, attitudes, and actual use of evidence. These include:
1. Learning Knowledge Score: This outcome measures the increase in participants' knowledge of impact evaluations as a result of the training.
2. Attitudes Toward Evidence Use: This variable captures the change in participants' attitudes towards the use of evidence in decision-making. It includes their level of trust in evidence, their perceptions of its usefulness, and their beliefs about the importance of evidence-based policymaking.

3. Actual Use of Evidence: The third outcome measures whether the training led participants to actually use the evidence provided to them in their work. This will be assessed through follow-up surveys and self-reports, asking participants about their use of the evidence, whether they applied it in policy decisions, and how they integrated the provided impact evaluation studies into their daily tasks. Additionally, we will test participants' knowledge of the evidence by asking them specific questions to determine whether they read and understood the provided materials. Engagement with the evidence will be compared between participants who received the training and those who did not, to examine whether training influences both the likelihood of reading the evidence and the depth of understanding.


These outcomes will allow us to assess whether the training enhanced policymakers' knowledge, influenced their attitudes toward evidence, and led to a tangible increase in evidence use in policymaking.

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study uses a randomized controlled trial (RCT) to assess the effectiveness of a two-day training program on impact evaluations for 429 public administrators in Kinshasa, DRC. Participants will be randomly assigned to either a training group, which will attend the in-person program, or a control group, which will not receive any training. The study includes three survey waves: a baseline survey to measure participants' knowledge and attitudes toward evidence use, an immediate follow-up survey after the training, and a delayed follow-up survey to assess the longer-term impact. The experiment is conducted in collaboration with the École Nationale d'Administration in the DRC, utilizing the existing government training infrastructure.
Experimental Design Details
Not available
Randomization Method
The randomization for this study was conducted using R software with a seed of 32343 to assign participants to either the training or control group.
Randomization Unit
Individual
Was the treatment clustered?
No
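The registration states that assignment was performed in R with seed 32343. As an illustration only (the actual R script is not part of this record, and the treatment share shown here is a hypothetical 50/50 split), a seeded, individual-level, non-clustered randomization of this kind can be sketched in Python as:

```python
import random

def assign_treatment(participant_ids, seed=32343, treat_share=0.5):
    """Illustrative sketch of seeded individual-level randomization.

    Shuffles the participant IDs with a fixed seed so the assignment
    is reproducible, then splits the shuffled list into a training
    group and a control group.
    """
    rng = random.Random(seed)  # fixed seed => reproducible assignment
    ids = list(participant_ids)
    rng.shuffle(ids)
    cutoff = round(len(ids) * treat_share)
    treated = set(ids[:cutoff])
    return {pid: ("training" if pid in treated else "control") for pid in ids}

# Hypothetical IDs 0..428, matching the planned sample of 429 individuals
assignment = assign_treatment(range(429))
n_training = sum(1 for arm in assignment.values() if arm == "training")
print(n_training)  # number of individuals assigned to the training group
```

Because the seed is fixed, re-running the function reproduces the identical assignment, which is the property that makes the registered randomization auditable.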

Experiment Characteristics

Sample size: planned number of clusters
429 public administration officials
Sample size: planned number of observations
429 public administration officials
Sample size (or number of clusters) by treatment arms
429 public administration officials
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
ETH Zurich Ethics Committee
IRB Approval Date
2024-09-01
IRB Approval Number
24 ETHICS-048