Author Information
- Paul Heidenreich, MD, MS∗
- ∗Address for correspondence: Dr. Paul Heidenreich, VA Palo Alto Health Care System, 111C, 3801 Miranda Avenue, Palo Alto, California 94304.
- Centers for Medicare and Medicaid Services
- Healthcare Cost and Utilization Project
- State Inpatient Databases
- value-based purchasing
An accepted tenet of health care quality is that the focus of measurement should be important to patients. Thus, rather than measure a process of care, such as use of beta-blockers known to prolong life, we should measure mortality directly. Although this makes sense at first glance, I would argue that patients (or anyone else) cannot make informed decisions regarding the impact on health of choosing one hospital over another with available mortality data. Adequate risk adjustment for all of the important patient characteristics is not possible. Although we care about mortality, it is not clear we should care as much about published hospital mortality rates. Why are we not similarly concerned with fatal crash rates when selecting U.S. airlines and airplanes? Clearly, such an outcome is important to the passenger. As it turns out, the common Airbus planes (A320, A319) have a 38% higher fatal crash rate than similar Boeing (later 737 series) planes (0.11 per million flights compared with 0.08 per million flights) (1). I accept these rates as accurate; why, then, am I more concerned about leg room and in-seat power? I (and I expect most others) feel these small differences in mortality rates are more likely due to chance and do not represent a systematic safety difference.
Put another way, a quality measure is only as good as its signal-to-noise ratio. In the case of 30-day mortality for heart failure, the signal (variation in mortality due to quality differences) is small compared with the noise (variation in mortality due to everything else). Ironically, as hospitals improve their processes of care and quality improves across the board, this signal-to-noise ratio will only worsen, making 30-day mortality rates even less useful.
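To make the scale of the noise concrete, consider a minimal simulation (the hospital count, volume, and mortality rate below are illustrative assumptions, not figures from the study): if every hospital had identical true quality, binomial chance alone would scatter observed 30-day mortality rates across a band far wider than the differences that separate "good" from "bad" hospitals in public rankings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions, not figures from the study: 290 hospitals with
# IDENTICAL true quality, 200 heart failure admissions each, 12% true mortality.
n_hospitals = 290
volume = 200
true_rate = 0.12

# Observed rates vary by binomial chance alone.
deaths = rng.binomial(volume, true_rate, size=n_hospitals)
observed = deaths / volume

print(f"true 30-day mortality at every hospital: {true_rate:.1%}")
print(f"observed rates span {observed.min():.1%} to {observed.max():.1%}")
share_off = np.mean(np.abs(observed - true_rate) > 0.01)
print(f"hospitals more than 1 point from the truth: {share_off:.0%}")
```

Even with no quality signal at all, observed rates under these assumptions span several percentage points, and most hospitals sit more than a point away from their true rate.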
In this issue of JACC: Heart Failure, Bruckel et al. (2) demonstrated that one way to reduce this noise in hospital mortality rates is to consider do-not-resuscitate (DNR) status. They used data from the state of California, where DNR status within 24 h of admission is captured for each hospitalization. Using a cohort of over 55,000 patients hospitalized with heart failure at 290 hospitals, they found that hospitals with higher DNR rates had higher mortality rates, a relationship that persisted after adjusting for severity of illness. Using Centers for Medicare & Medicaid Services (CMS) risk-adjustment methodology, the authors identified 22 high-mortality outlier hospitals. After additional adjustment for DNR status, however, one-half of these hospitals were reclassified as nonoutliers, and 3 hospitals previously classified as nonoutliers became outliers, for a net decrease from 22 to 14 outliers with DNR adjustment. Although these numbers may seem small compared with the total number of hospitals, such labeling creates significant turmoil for hospital administrators, as bonuses, job security, and funding for process improvements are tied to such measures. One wonders, if we had more clinical data, would we find even fewer true outliers?
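The mechanics of that reclassification can be sketched in a few lines. The simulation below is a simplified stand-in for the CMS approach (which uses hierarchical regression models, not this one-sided binomial test), and all rates and volumes are assumptions for illustration: when hospitals differ in their DNR mix but the risk model ignores it, high-DNR hospitals get flagged as outliers; once the expectation accounts for each hospital's DNR rate, most of those flags disappear.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Assumed setup: 290 hospitals, 190 admissions each, 12% baseline mortality.
# DNR use varies widely by hospital and raises observed mortality.
n_hosp, volume = 290, 190
baseline_rate = 0.12
dnr_rate = rng.uniform(0.0, 0.4, size=n_hosp)
true_rate = baseline_rate + 0.10 * dnr_rate
deaths = rng.binomial(volume, true_rate)

def high_outliers(expected_deaths):
    """Flag hospitals whose deaths exceed a one-sided 99% binomial bound.

    A simplified stand-in for outlier detection; CMS uses hierarchical
    regression models rather than this test.
    """
    p = expected_deaths / volume
    threshold = stats.binom.ppf(0.99, volume, p)
    return deaths > threshold

# Without DNR adjustment, every hospital is held to the same expectation.
flags_unadjusted = high_outliers(np.full(n_hosp, baseline_rate * volume))
# With DNR adjustment, the expectation reflects each hospital's DNR mix.
flags_adjusted = high_outliers(true_rate * volume)

print(f"high-mortality outliers, unadjusted:   {flags_unadjusted.sum()}")
print(f"high-mortality outliers, DNR-adjusted: {flags_adjusted.sum()}")
```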
The 30-day mortality measure affects hospitals in multiple important ways. CMS publicly reports it and also rewards hospitals financially through its Value-Based Purchasing program. In this system, 2% of diagnosis-related group (DRG) payments are withheld and redistributed using a score to which 30-day mortality (for heart failure, acute myocardial infarction, and pneumonia) contributes 25%. Because it is a zero-sum system, hospitals on average will lose money if they have a worse-than-average mortality rate and could lose the full 2% if they are a significant outlier. Currently, a 13% 30-day all-cause mortality rate for heart failure places a hospital among the worst-performing 20%, whereas an 11% rate places it among the best-performing 20%.
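As a back-of-the-envelope illustration of the stakes (the revenue figure here is hypothetical; the 2% withhold and 25% mortality weight come from the text above), the arithmetic looks like this:

```python
# Hypothetical hospital revenue; the 2% withhold and 25% mortality weight
# follow the text above.
annual_drg_payments = 100_000_000   # assumed annual Medicare DRG payments ($)
withhold_rate = 0.02                # share of DRG payments withheld by CMS
mortality_weight = 0.25             # mortality domain's share of the score

dollars_at_risk = annual_drg_payments * withhold_rate
dollars_on_mortality = dollars_at_risk * mortality_weight

print(f"withheld pending performance: ${dollars_at_risk:,.0f}")
print(f"riding on 30-day mortality:   ${dollars_on_mortality:,.0f}")
```

For a hospital of that assumed size, half a million dollars a year rides on the mortality domain alone.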
U.S. News & World Report weighs mortality heavily in its ranking of hospitals, including the ranking for cardiology and heart surgery. Such rankings, when favorable, are now routinely used in hospital advertising. Improving mortality and reputation are often the only opportunities for hospitals to further improve their ranking, because most other components of the U.S. News & World Report ranking are based on processes and structures that hospitals have already adopted. Hospitals are also rated by U.S. News & World Report on their care of heart failure patients, and some can receive a rating of "high-performing" if they do well on heart failure mortality, among other factors.
Unfortunately, CMS and U.S. News & World Report do not consider DNR status or other end-of-life issues in their risk adjustment. Both justify this omission by stating that patients on comfort care are likely to be measurably sicker than those who are not, and that their current risk adjustment for severity of illness is adequate to capture this. However, the work by Bruckel et al. (2) argues against this: despite using CMS-type methodology to adjust risk, adding DNR status produced a substantial increase in mortality discrimination (c-statistic increase from 0.821 to 0.845). Although CMS and U.S. News & World Report (which uses CMS data) do not have access to early DNR data, such data are feasible to collect, as California has demonstrated.
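For readers who want to see what such a discrimination gain looks like mechanically, the sketch below fits logistic models with and without a DNR indicator on synthetic data and compares c-statistics. The cohort size, coefficients, and rates are all assumptions chosen for illustration; this is not the authors' model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 50_000  # roughly the cohort size reported in the study

# Synthetic patients: a generic severity score and an early-DNR indicator.
# The coefficients below are assumptions; both factors raise mortality risk.
severity = rng.normal(size=n)
dnr = rng.binomial(1, 0.12, size=n)
logit = -2.8 + 0.8 * severity + 1.5 * dnr
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Model 1: severity only (a stand-in for CMS-style risk adjustment).
X1 = severity.reshape(-1, 1)
auc1 = roc_auc_score(died, LogisticRegression().fit(X1, died).predict_proba(X1)[:, 1])

# Model 2: severity plus early DNR status.
X2 = np.column_stack([severity, dnr])
auc2 = roc_auc_score(died, LogisticRegression().fit(X2, died).predict_proba(X2)[:, 1])

print(f"c-statistic, severity only:        {auc1:.3f}")
print(f"c-statistic, severity + early DNR: {auc2:.3f}")
```

The c-statistic (area under the ROC curve) rises whenever an added covariate carries independent information about who dies, which is the pattern Bruckel et al. (2) report for early DNR status.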
Of note, Vizient (formerly the University HealthSystem Consortium, or UHC) considers early DNR status when risk-adjusting mortality to rank hospitals. Hospitals self-report DNR status along with other patient characteristics, and although the models change yearly, DNR status is often an important predictor in the mortality models. Furthermore, Vizient, unlike CMS and U.S. News & World Report, drops patients receiving hospice care from its mortality denominator. Although Vizient rankings are important to many hospital systems, they do not carry the financial rewards and penalties of the CMS rankings, nor are they publicly reported like the U.S. News & World Report rankings.
Adjusting for DNR status would not be so important if there were minimal variation in DNR rates across hospitals. However, Bruckel et al. (2) showed that hospital DNR rates vary widely, ranging from 0% to 54%, with an interquartile range of 4% to 20%. A prior study using the California data found that the use of DNR orders for heart failure patients is increasing, but wide variation is likely to persist for some time (3). Given the large variation in DNR orders at the hospital level, many hospitals may not be discussing DNR status with patients who wish this option. The SUPPORT study (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment) found that only one-half of all inpatients who desired not to be resuscitated had DNR orders written (4).
For better or worse, it appears patients are not paying attention to publicly reported mortality rankings that are often an intense focus of hospital C-suites. In California, high- and low-mortality outlier hospitals for acute myocardial infarction saw no change in the volume of acute myocardial infarction-related conditions or procedures after publication of mortality data (5). The best evidence for patient interest in mortality rates comes from New York, where there was a shift from high-mortality to low-mortality providers of coronary artery bypass graft surgery (5). However, this shift dissipated 6 months after release of the rankings.
Use of mortality as a complementary measure would be reasonable if it were paired with evidence-based process measures that have been shown to improve mortality in randomized controlled trials. At the physician level, CMS allows reporting of the use of angiotensin-converting enzyme inhibitors and beta-blockers in heart failure through the Merit-Based Incentive Payment System. However, other life-prolonging treatments for heart failure (e.g., mineralocorticoid receptor antagonists) are not considered as measures of heart failure quality at the physician level, and none are used at the hospital level for value-based purchasing. Although 30-day readmission for heart failure is reported, this measure is unlikely to be related to heart failure care (and is actually inversely related to 30-day mortality) (6). We clearly need to adopt better measures of the quality of heart failure care. Until then, adjusting mortality for DNR status would be a step in the right direction.
∗ Editorials published in JACC: Heart Failure reflect the views of the authors and do not necessarily represent the views of JACC: Heart Failure or the American College of Cardiology.
Dr. Heidenreich has reported that he has no relationships relevant to the contents of this paper to disclose.
- Plane Crash Rates by Model. Revised December 15, 2016. AirSafe.com. Available at: http://www.airsafe.com/events/models/rate_mod.htm. Accessed August 22, 2017.
- Bruckel J., Mehta A., Bradley S.M., et al.
- Phadke A., Heidenreich P.A.
- Hakim R.B., Teno J.M., Harrell F.E. Jr., et al., SUPPORT Investigators