Guest post by Definitive Healthcare
In today’s healthcare delivery environment, few concerns are as important to providers, payers, and consumers as quality. Quality scores can drive patients towards or away from hospitals and doctors, determine reimbursement rates, and subject organizations to a wide range of government incentives or penalties. Most would agree that quality measurements serve a critical need, helping patients make informed decisions and encouraging providers to improve, but a less-public debate continues over the validity and appropriateness of today’s most widely used indicators.
The National Quality Forum, a public/private multi-stakeholder initiative launched in 1999 that wields significant influence over the establishment, maintenance, and removal of quality indicators, recently added to the discussion with its 2017 guidance for quality measures. Out of the 634 quality measures the NQF tracks, it suggested removing 51 of them, primarily in order to reduce administrative burden. While it may seem that there’s no such thing as too much quality information, reporting mandates do impose a real cost on providers. A hospital may need to report over a hundred indicators to regulators and payers, which requires dedicated staff and software applications, increasing costs without any direct impact on care. A 2012 study estimated that quality measurements and analysis cost healthcare providers $190 billion annually, a figure that has likely increased over the past five years. The cost is especially high for organizations that participate in multiple quality initiatives.
Among the measures NQF has listed for removal are those whose results have "topped out," showing so little variation between providers and organizations that they offer limited comparison value, and those that have failed "maintenance review," a process NQF uses to confirm it has current evidence supporting a measure's effectiveness as a quality tool. Many of these are process measures, which count how often a provider follows a specific treatment protocol (for example, administering aspirin on arrival for heart attack patients in the emergency room, or using a safe-surgery checklist) and are rated independently of actual outcomes. CMS' quality programs have drawn criticism in the past for the dominance of process-of-care measurements, which may reflect best practices but, as the NQF states in its guidance, do not typically act as high-value drivers of improvement for healthcare organizations. In addition, process measures tend to have a shorter shelf life, since following guidelines is usually easier than achieving real improvements in outcomes.
| Measure Description | Median Score* | Removal Reason |
| --- | --- | --- |
| Percent of ED patients who left before being seen | 1% | Failed Maintenance Review |
| Avg minutes before ED patient saw provider | 22 | Failed Maintenance Review |
| Avg minutes before ED patient with broken bones received pain meds | 51 | Failed Maintenance Review |
| Avg minutes before possible heart attack ED patient given ECG | 7 | Failed Maintenance Review |
| Percent of possible heart attack patients given aspirin within 24 hours | 97% | Failed Maintenance Review |
| Median minutes before patient leaves ED | 137 | Excessively burdensome, low value |
| Percent of HHA patients who improved at bathing | 71% | Topped Out |
| Percent of HHA patients who improved at getting out of bed | 63% | Topped Out |
| Percent of HHA patients who improved at taking oral medications | 56% | Topped Out |
| Percent of HHA patients or family members instructed on drugs taken | 95% | Topped Out |
*Source: Definitive Healthcare
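The "Topped Out" rows above can be made concrete with a simple check: a measure loses comparison value when nearly every provider scores high and the spread between providers is small. The sketch below is illustrative only; the scores and thresholds are hypothetical, not CMS's or NQF's official topped-out criteria.

```python
from statistics import median, mean, pstdev

def is_topped_out(scores, median_floor=0.95, cv_ceiling=0.10):
    """Flag a measure as 'topped out': the median score is very high and the
    coefficient of variation is low, so the measure can no longer distinguish
    providers. Thresholds are illustrative, not official criteria."""
    m = median(scores)
    cv = pstdev(scores) / mean(scores)  # relative spread across providers
    return m >= median_floor and cv <= cv_ceiling

# Hypothetical per-agency scores for a drug-instruction measure (little spread)
instructed = [0.97, 0.95, 0.96, 0.94, 0.95, 0.96, 0.95, 0.97]
# Hypothetical scores for a measure where providers still differ meaningfully
bathing_gain = [0.55, 0.80, 0.62, 0.71, 0.90, 0.48, 0.66, 0.77]

print(is_topped_out(instructed))    # True: almost no room left to compare
print(is_topped_out(bathing_gain))  # False: meaningful variation remains
```

A measure flagged this way is not necessarily unimportant clinically; it has simply stopped discriminating between providers, which is the property quality rankings depend on.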
Some new research has even called into question measures that are already widely accepted as critical quality indicators, such as 30-day readmission rates. A report appearing in the October 2016 issue of Health Affairs suggests that hospitals may only have meaningful control over patients' conditions for the first week after discharge. An analysis of Medicare claims for patients over 65 who were hospitalized for common conditions such as heart attack, heart failure, and pneumonia found that differences in readmission rates between hospitals largely evened out after the first seven days. The result held true for multiple diagnoses across several hospitals and even different states. The researchers asserted that community and household factors were better predictors of later readmissions, and that a seven-day rate would be a more accurate indicator of hospital care quality than the standard 30-day period.
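The difference between the two windows is just a matter of where the cutoff falls in the same claims data. The toy calculation below uses hypothetical records, not the study's data or methodology, to show how a 30-day rate sweeps in later readmissions that the seven-day rate excludes.

```python
from datetime import date

# Hypothetical records: (patient_id, discharge_date, readmission_date or None)
claims = [
    ("A", date(2016, 3, 1), date(2016, 3, 5)),   # readmitted after 4 days
    ("B", date(2016, 3, 2), date(2016, 3, 25)),  # readmitted after 23 days
    ("C", date(2016, 3, 3), None),               # no readmission
    ("D", date(2016, 3, 4), date(2016, 3, 9)),   # readmitted after 5 days
]

def readmission_rate(records, window_days):
    """Share of discharges followed by a readmission within window_days."""
    hits = sum(
        1 for _, discharged, readmitted in records
        if readmitted is not None and (readmitted - discharged).days <= window_days
    )
    return hits / len(records)

print(readmission_rate(claims, 7))   # 0.5
print(readmission_rate(claims, 30))  # 0.75
```

Under the Health Affairs argument, the readmissions counted only by the 30-day window (patient "B" here) are the ones more likely driven by community and household factors than by the quality of hospital care.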
CMS has proven receptive to eliminating certain quality measures that healthcare industry stakeholders, like those in the NQF, have found to be redundant or ineffective. In May 2016, CMS announced it would remove 13 measures from the HHA quality reporting program, 11 of which were process measures that were either found to be “topped out” or would be reworked into a new, more accurate measure. However, it would seem that much work still needs to be done, as a recent survey of hospital executives found 76 percent were concerned with the validity, importance, and/or fairness of the quality measures they reported, while only 13 percent expressed a positive view of the indicators. Much like improving clinical quality itself, it will take a determined effort from participants from all areas of healthcare delivery to ensure that clinical quality data is useful, accessible, and accurate.
Definitive Healthcare has the most up-to-date, comprehensive and integrated data on over 7,700 hospitals, 1.4 million physicians, and numerous other healthcare providers. Our databases include detailed information on quality for hospitals, long-term care facilities, home health agencies, and more. For more information, visit www.definitivehc.com.