Fields Changed

Registration

Abstract

Before: The Electronic Health Records (EHR) Demonstration, implemented by the Centers for Medicare & Medicaid Services (CMS), provided financial incentives to physician practices to use a certified EHR. Practices that met minimum EHR use requirements received payments on a graduated scale, increasing for more use of EHR functions. The demonstration was implemented in four sites, targeting practices with 20 or fewer providers supplying primary care to at least 50 fee-for-service Medicare beneficiaries. The demonstration was expected to operate for five years (June 1, 2009–May 31, 2014), but was cancelled in August 2011 because 43% of the practices did not meet program requirements. The evaluation used a stratified, experimental design—412 treatment and 413 control practices—to estimate the impacts of the payments on adoption and use of EHR functionalities. In June 2011, treatment group practices were, on average, 9 to 18 percentage points more likely than control group practices to report using 13 EHR functionalities queried at baseline (2008). The payments increased a summary score of EHR use, which ranged from 1 to 100, by more than 11 points on average relative to the control group (54 versus 43). Moderate incentive payments did not lead to universal EHR adoption and use in a two-year time frame. However, the demonstration showed that incentives can influence physician use of EHRs. These results are encouraging regarding the potential effectiveness of the EHR Medicare Incentive Program but also suggest that meaningful use of EHRs on a national scale may take longer than anticipated.

After: The Electronic Health Records Demonstration (EHRD), implemented by the Centers for Medicare & Medicaid Services (CMS), provided financial incentives to physician practices to use a certified EHR. [The remainder of the abstract is unchanged from the Before version.]
JEL Code(s)

I110, J44, O33, O380
Last Published

Before: February 06, 2014 06:00 PM

After: February 07, 2014 09:10 AM
Final Sample Size: Total Number of Observations

Before: 800,524

After: 800,524 (Total for the demonstration; for the analysis of the impact of the intervention on Medicare service use and expenditures)
Primary Outcomes (End Points)

Before: Key measures for the evaluation were practices' adoption and use of EHRs and other health IT, and a summary (composite) score that quantifies EHR use for the calculation of the incentive payment, which were drawn from a web-based Office Systems Survey (OSS).

After: Key measures for the evaluation were practices' adoption and use of EHRs and other health IT, and a summary (composite) score that quantifies EHR use for the calculation of the incentive payment, which were drawn from a web-based Office Systems Survey (OSS). Specific outcomes were:
- Any EHR/Health IT Use
- Electronic Patient Problem Lists
- Automated Patient-Specific Alerts and Reminders
- Electronic Disease-Specific Patient Registries
- Patients' Email
- Patient-Specific Educational Materials
- Online Referrals to Other Providers
- Laboratory Tests:
  * Online order entry
  * Online results viewing
- Radiology Tests:
  * Online order entry
  * Online results viewing (reports and/or digital films)
- E-Prescribing:
  * Printing and/or faxing Rx
  * Online Rx transmission to pharmacy
- Overall OSS score
- OSS Score Domains:
  * Completeness of information in the EHR
  * Communication of care outside the practice
  * Clinical decision support
  * Increasing patient engagement
  * Medication safety
Primary Outcomes (Explanation)

In order to calculate EHR summary scores for practices that used a certified EHR, the Office Systems Survey (OSS) measured 53 functions (for example, prescribing medications, ordering laboratory tests and other procedures, and care management and coordination) that are thought to be connected to improved care, although a causative link has not yet been empirically proven for many. These functions were sorted into five domains: completeness of information, communication about care outside the practice, clinical decision support, increasing patient engagement, and medication safety. If practices were to use all 53 functions for three-fourths or more of their patients, the total composite score would equal 100. In addition to this total score, composite scores were calculated for the five OSS domains. (Baseline scores cannot be estimated because application data on EHR/other health IT use are only available for 13 of the 53 functions.) Based on the total composite score for each treatment practice, CMS calculated payments during each demonstration year. Practices received their payments in the fall following the end of each demonstration year.
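To make the scoring arithmetic concrete, the sketch below shows one plausible way a 0-to-100 composite could be computed from per-function usage. It is a minimal illustration only: the equal weighting across the 53 functions, the full-credit rule at the three-fourths-of-patients threshold, and the function name composite_score are assumptions inferred from the description above, not CMS's actual scoring algorithm.

```python
# Illustrative sketch only; not the actual CMS scoring algorithm.
# Assumes each of the 53 OSS functions contributes equally and earns
# full credit when used for at least three-fourths of patients, so that
# using all 53 functions at that threshold yields a score of 100.

N_FUNCTIONS = 53
FULL_CREDIT_THRESHOLD = 0.75  # three-fourths of patients (from the text)

def composite_score(usage_shares):
    """usage_shares: share of patients (0-1) for which each of the
    53 functions is used. Returns a 0-100 summary score."""
    if len(usage_shares) != N_FUNCTIONS:
        raise ValueError("expected one usage share per OSS function")
    credited = sum(1 for share in usage_shares
                   if share >= FULL_CREDIT_THRESHOLD)
    return 100.0 * credited / N_FUNCTIONS

# Example: a practice using 40 of the 53 functions for most patients
print(composite_score([0.8] * 40 + [0.1] * 13))  # about 75.5
```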
Randomization Method

Before: The routine ralloc in Stata, written by Phillip Ryan. Available at: http://econpapers.repec.org/software/bocbocode/s319901.htm

After: Randomization was done in the office by a computer using the routine ralloc for Stata, written by Phillip Ryan. The code is available at: http://econpapers.repec.org/software/bocbocode/s319901.htm
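For readers unfamiliar with ralloc, the sketch below illustrates stratified allocation of practices to treatment and control in Python. It is an illustrative stand-in, assuming hypothetical strata (such as the four demonstration sites) and an even split within each stratum; the actual assignment was generated by the ralloc routine in Stata, whose exact behavior is not reproduced here.

```python
import random

def stratified_allocation(practice_ids_by_stratum, seed=12345):
    """Assign practices to treatment/control within each stratum,
    splitting each stratum as evenly as possible. A simplified
    stand-in for Stata's ralloc, for illustration only; the seed
    value is arbitrary."""
    rng = random.Random(seed)
    assignment = {}
    for stratum, ids in practice_ids_by_stratum.items():
        shuffled = ids[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for pid in shuffled[:half]:
            assignment[pid] = "treatment"
        for pid in shuffled[half:]:
            assignment[pid] = "control"
    return assignment

# Hypothetical strata, e.g., two of the four demonstration sites
strata = {"site_A": list(range(10)), "site_B": list(range(10, 20))}
print(stratified_allocation(strata))
```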
Planned Number of Observations

Before: 52,287 Medicare beneficiaries in the first year of the demonstration

After: 52,287 Medicare beneficiaries in the first year of the demonstration (for the analysis of the impact of the intervention on Medicare service use and expenditures)
Power calculation: Minimum Detectable Effect Size for Main Outcomes

Before: 8.9 percentage points for a continuous, composite score that measures adoption and use of electronic health records systems. The MDE assumes a response rate of 85 percent to the Office Systems Survey; an intracluster correlation coefficient of 0.06; a significance level of 0.05 for a two-sided test; and 80 percent power. The assumed sample size was 825 practices.

After: For a continuous composite score that measures adoption and use of electronic health records systems (see the Outcomes section above), the MDE was 9.9 percent of the control group mean. The MDE assumes a response rate of 85 percent to the Office Systems Survey; an intracluster correlation coefficient of 0.037; a coefficient of variation of 2.2; a significance level of 0.10 for a two-sided test; 80 percent power; and a regression R-squared of 0.3. For binary variables, the MDE was 10 percentage points with a mean of 0.5, also assuming a regression R-squared of 0.3. For both outcomes, the sample size was 267 control group and 314 intervention respondents. The impacts of the demonstration on practices' use of health IT were generally much larger than the MDEs, suggesting the evaluation was well powered for these outcome measures.
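As a rough illustration of how such an MDE is computed, the sketch below applies the textbook two-sample formula under the After-column assumptions (mean 0.5, R-squared of 0.3, 314 intervention and 267 control respondents, significance level 0.10 two-sided, 80 percent power). It deliberately omits the clustering adjustment implied by the reported intracluster correlation and coefficient of variation, so it somewhat understates the 10-percentage-point figure; the evaluators' exact formula is not reproduced here.

```python
from math import sqrt
from statistics import NormalDist

def mde_binary(p, n_t, n_c, r2=0.0, alpha=0.10, power=0.80):
    """Minimum detectable effect for a binary outcome with mean p,
    two-sided test, covariate-adjusted by R-squared r2. Simplified
    illustration: no clustering/design-effect adjustment."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = (1 - r2) * p * (1 - p) * (1 / n_t + 1 / n_c)
    return z * sqrt(var)

# After-column assumptions: mean 0.5, R^2 = 0.3, 314 treatment and
# 267 control respondents, alpha = 0.10 two-sided, 80 percent power.
print(mde_binary(0.5, 314, 267, r2=0.3))  # about 0.087, i.e., 8.7 pp
```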
Additional Keyword(s)

Before: Health Information Technology, Delivery Systems, Primary Care, Financial Incentives

After: Health IT, Electronic Health Records Systems, Delivery Systems, Primary Care, Financial Incentives