Evaluation of Integrated Digital Primary Health Care in Rwanda

Last registered on September 19, 2022

Pre-Trial

Trial Information

General Information

Title
Evaluation of Integrated Digital Primary Health Care in Rwanda
RCT ID
AEARCTR-0009823
Initial registration date
July 29, 2022

First published
September 19, 2022, 3:02 PM EDT

Locations

Region

Primary Investigator

Affiliation
University of Rwanda

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2022-06-16
End date
2024-03-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The use of telemedicine has risen over the last decade, fueled further by the need for remote services during the COVID-19 pandemic. Telemedicine has the potential to increase access to health care and lower its costs, but there are concerns that these benefits could come at the expense of quality. Despite the rise in its use, there is little empirical evidence on the impact of telemedicine on quality of care, utilization, or costs. In this study, we evaluate the impact of Babyl, a digital health services platform in Rwanda, relative to conventional health care facilities through several complementary approaches: qualitative surveys, use of standardized patients to evaluate quality of care, analysis of observational data, and a randomized controlled trial to encourage take-up and use of Babyl digital services.
External Link(s)

Registration Citation

Citation
Humuza, James. 2022. "Evaluation of Integrated Digital Primary Health Care in Rwanda." AEA RCT Registry. September 19. https://doi.org/10.1257/rct.9823-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
In the formative research phase, we will gather qualitative evidence on the potential benefits and barriers to adoption of Babyl from patients, providers, and other key players in both digital and in-person health care delivery. These insights will be used to develop an encouragement intervention, with the goal of removing barriers to Babyl adoption and use.
Intervention Start Date
2023-02-01
Intervention End Date
2023-05-01

Primary Outcomes

Primary Outcomes (end points)
Primary outcomes for encouragement intervention:
-- Uptake of Babyl services, measured as patient registration for Babyl.
-- Health care utilization of Babyl and conventional care.
-- Costs to patients, measured through patient surveys.

Primary outcome for baseline quality of care assessment:
Correct case management is defined as the proportion of interactions in which providers managed the case according to Babyl and national guidelines. Depending on the standardized patient (SP) case, the outcome is an index composed of actions the provider took during the interaction with the SP: correct diagnostic tests ordered, correct or harmful treatment prescribed, or referral to a qualified health care provider. These details are extracted from an exit questionnaire completed by the SP after the interaction.
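To make the construction of this index concrete, below is a minimal sketch of how such an indicator could be coded from the coded exit questionnaire data. The column names and the exact scoring rule (treating a harmful prescription as disqualifying and an appropriate referral as sufficient) are illustrative assumptions, not the registered definitions.

```python
# Illustrative sketch only: column names and scoring rule are assumptions,
# not the registered outcome definitions.
import pandas as pd

def correct_case_management(exit_q: pd.DataFrame) -> pd.Series:
    """Flag each SP interaction as correctly managed (1) or not (0),
    assuming the exit questionnaire has already been coded against the
    Babyl/national guideline checklist for the relevant SP case."""
    managed_ok = (
        (exit_q["correct_tests_ordered"]            # guideline-consistent diagnostics
         & exit_q["correct_treatment_prescribed"]   # guideline-consistent treatment
         & ~exit_q["harmful_treatment_prescribed"]) # and nothing harmful prescribed
        | exit_q["referred_to_qualified_provider"]  # or an appropriate referral
    )
    return managed_ok.astype(int)

# Proportion of correctly managed interactions by platform (hypothetical 'platform' column):
# exit_q.groupby("platform").apply(lambda d: correct_case_management(d).mean())
```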
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes for baseline quality of care assessment:
• Correct medicines prescribed or dispensed
Indicator for medicines prescribed or dispensed in accordance with Babyl and national guidelines in each case scenario.
• Any unnecessary medicines
Indicator for any unnecessary medicines prescribed/dispensed in each case scenario.
• Total number of medicines prescribed / dispensed
Count of the total number of medicines given (both correct and unnecessary).
• Received all medicines
Did the patient receive all prescribed medications (dispensed or picked up)?
• Correct labs ordered
Indicator for any lab tests ordered in accordance with Babyl and national guidelines in each case scenario.
• Any unnecessary lab tests
Indicator for any unnecessary lab tests ordered in each case scenario.
• Total number of lab tests ordered
Total number of lab tests ordered in each case scenario.
• Received all labs
If a lab test was ordered and the SP protocol permitted taking it, was the patient able to complete the test?
• Positive test
We will calculate the positivity rate for particular lab exams and compare to baseline positivity rates prior to SP fieldwork.
• Referral for further management
Referral for further management is defined as the proportion of interactions in which the provider refers the simulated patient to a qualified provider or another facility.
• Consult consistent with suggestion
Indicator for whether the provider diagnosed, ordered labs, or prescribed/dispensed medication consistent with the diagnosis suggested by the SP.
• Medicines consistent with request
Indicator for whether the medication requested by the SP was dispensed or prescribed.
• Total time at the health center
How long a patient spends at the health center, including waiting time, time with the provider, waiting for labs, etc.
• Total time with provider
How long a patient spends with the main provider(s).
• Patient costs
Patient costs are defined as the average amount charged to the simulated patients by providers per interaction for the entire visit, including consultation, medicine, and test costs.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
• In the formative research phase, we will gather qualitative evidence on the potential benefits and barriers to adoption of Babyl from patients, providers, and other key players in both digital and in-person health care delivery. These insights will be used to develop an encouragement intervention, which will be tested in a pilot and ultimately brought to scale in a randomized controlled trial. The exogenous variation in Babyl usage introduced by our intervention will allow us to evaluate the impact of Babyl on health care utilization, health outcomes, and patient costs.
• In addition to data from the exogenous encouragement intervention, we will also leverage quasi-random variation in Babyl penetration across regions and over time. We will use historical administrative data to compare health care outcomes and utilization across Babyl and non-Babyl patients and to compare aggregate health outcomes and utilization across catchment areas with higher or lower levels of Babyl penetration in the market.
• Finally, we will assess quality of care at baseline, prior to the encouragement intervention. To overcome common challenges in the evaluation of health care quality, we use standardized patients, or actors who seek care with a prespecified set of symptoms. In addition to evaluating quality of care, data from standardized patients will allow us to explore aspects of clinical decision-making that are difficult to study with observational data. Specifically, we study the influence of patient suggestions and requests and whether these vary across platforms and patients.
• Because providers may differ across platforms, we will conduct a survey of providers visited by our standardized patients including measures of clinical experience, clinical knowledge assessed through vignettes, attitudes towards patients, and burnout. In our assessments of quality, we will control for these provider characteristics to isolate the impact of Babyl on quality of care for patients with different symptoms and other characteristics.
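As an illustration of how provider characteristics could be controlled for in the quality analysis, here is a minimal sketch of a linear probability model with facility-clustered standard errors; the data file, variable names, and exact specification are assumptions for illustration, not the registered analysis plan.

```python
# Illustrative sketch: file name, variable names, and specification are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per SP interaction, merged with the provider survey (placeholder file name).
sp_visits = pd.read_csv("sp_visits_with_provider_survey.csv")

model = smf.ols(
    "correct_management ~ babyl_platform + sp_case + sp_female"
    " + provider_experience + provider_knowledge_score + provider_burnout",
    data=sp_visits,
).fit(cov_type="cluster", cov_kwds={"groups": sp_visits["facility_id"]})

print(model.summary())
```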

Baseline quality assessment:
At baseline, we will evaluate aspects of quality of care, clinical decision making, and costs of seeking consultation for primary care in conventional care (at 80 health facilities) vs. digital health services (through Babyl).
o We capture quality of care through standardized patients ("mystery shoppers" or "fake patients"): individuals recruited locally and extensively trained to portray three different conditions in a consistent way (the same description of symptoms, backstory, and answers to the provider's questions).
o These conditions were selected to reflect different aspects of quality of care, and to measure potential under- and over-treatment.
o In addition to the three medical conditions, we layer two additional experiments to assess clinical decision making across platforms (an illustrative assignment sketch follows this list).
 - First, while presenting the same conditions, SPs either suggest to the provider that they have the correct condition, suggest an incorrect condition, or make no suggestion.
 - Second, SPs may request an unnecessary prescription at the end of the interaction.
o Outcomes captured by the standardized patients through an exit questionnaire completed after the interaction with providers include: history questions asked by the provider, laboratory tests ordered, medicines dispensed or prescribed, and referrals made.
o We will also explore variation across SP gender, age, and insurance status (majority covered by the community-based health insurance, but some paying out of pocket), as well as day and time of the visit.
o To control for differences across providers in Babyl vs. conventional care, we will also field a provider survey after the SP fieldwork. This will include measures of providers’ clinical experience, knowledge of treatment for the SP conditions, caseload, burnout, altruism towards patients, personality characteristics, and attitudes about the pandemic and telemedicine.
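As referenced above, the sketch below shows one way the layered SP experiments could be assigned across visits, assuming a balanced factorial of condition, suggestion type, and prescription request; the script labels, visit count, and balancing rule are illustrative assumptions rather than the registered protocol.

```python
# Illustrative assignment of SP scripts to visits; all labels and counts are placeholders.
import itertools
import random

random.seed(9823)  # fixed seed so the draw is reproducible

conditions = ["case_A", "case_B", "case_C"]     # placeholder names for the three SP conditions
suggestions = ["correct", "incorrect", "none"]  # layered suggestion experiment
request_rx = [True, False]                      # request an unnecessary prescription at the end?

# Assumed full factorial of scripts, repeated and shuffled to fill the visit schedule.
scripts = list(itertools.product(conditions, suggestions, request_rx))
planned_visits = 90                             # illustrative count, not the registered number
schedule = (scripts * (planned_visits // len(scripts) + 1))[:planned_visits]
random.shuffle(schedule)

# Each entry is (condition, suggestion, request_rx) for one SP visit.
print(schedule[:3])
```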
Experimental Design Details
Randomization Method
For the encouragement intervention, we will use a two-stage cluster randomized design, first sampling catchment areas and then individuals within each catchment area. We will have an equal number of individuals in the treatment and control groups. The treatment is the most effective encouragement intervention identified in the pilot; the control group will continue to receive the standard of care.
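A minimal sketch of this two-stage assignment is below, assuming a sampling frame with hypothetical 'individual_id' and 'catchment_area' fields; the counts, seed, and column names are placeholders rather than the registered design parameters.

```python
# Illustrative two-stage cluster assignment: sample catchment areas, then split
# sampled individuals within each area ~50/50 into treatment and control.
import numpy as np
import pandas as pd

rng = np.random.default_rng(20220919)  # fixed seed for a reproducible draw

def two_stage_assignment(frame: pd.DataFrame, n_areas: int, per_area: int) -> pd.DataFrame:
    """frame needs 'individual_id' and 'catchment_area' columns (assumed names)."""
    # Stage 1: sample catchment areas.
    areas = rng.choice(frame["catchment_area"].unique(), size=n_areas, replace=False)

    # Stage 2: sample individuals within each sampled area and assign arms.
    pieces = []
    for area in areas:
        people = frame.loc[frame["catchment_area"] == area, "individual_id"].to_numpy()
        sampled = rng.choice(people, size=min(per_area, len(people)), replace=False)
        treated = rng.permutation(len(sampled)) < len(sampled) // 2  # ~half treated
        pieces.append(pd.DataFrame({"individual_id": sampled,
                                    "catchment_area": area,
                                    "treated": treated}))
    return pd.concat(pieces, ignore_index=True)
```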
Randomization Unit
Individual
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
At most 200 catchment areas.
Sample size: planned number of observations
Once we determine the exact nature of the encouragement intervention and the anticipated effect size, we will select the sample size to have 80% power to detect two-sided effects at the 5% significance level. We anticipate this will be about 40,000 individuals. For the baseline quality assessment: we have approximately 75 SPs who will make about 960 visits to 80 conventional care facilities and 1,056 visits to Babyl. The sample of providers for the provider surveys will include all providers who see an SP, which we expect to be around 200 providers.
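For illustration only, the sketch below mirrors the kind of power calculation described above, inflating a simple-random-sampling sample size by a design effect to account for clustering within catchment areas; the effect size, intra-cluster correlation, and cluster size are assumed placeholders, not registered parameters.

```python
# Illustrative power calculation: the numeric inputs below are placeholders.
from statsmodels.stats.power import NormalIndPower

alpha, power = 0.05, 0.80    # two-sided 5% test, 80% power (as registered)
effect_size = 0.05           # assumed standardized effect size (placeholder)
icc = 0.01                   # assumed intra-cluster correlation (placeholder)
cluster_size = 200           # assumed individuals sampled per catchment area (placeholder)

# Per-arm sample size under simple random sampling, then inflated by the
# design effect to account for clustering within catchment areas.
n_srs = NormalIndPower().solve_power(effect_size=effect_size, alpha=alpha,
                                     power=power, alternative="two-sided")
deff = 1 + (cluster_size - 1) * icc
n_per_arm = n_srs * deff
print(round(n_per_arm))      # compare with the anticipated ~20,000 individuals per arm
```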
Sample size (or number of clusters) by treatment arms
We anticipate having 20,000 individuals per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Republic of Rwanda National Ethics Committee
IRB Approval Date
2021-10-14
IRB Approval Number
899

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials