Author Information
- Received May 17, 2017
- Revision received July 17, 2017
- Accepted July 27, 2017
- Published online September 25, 2017.
- Jeffrey Bruckel, MD, MPHa,∗
- Anuj Mehta, MDb,
- Steven M. Bradley, MD, MPHc,
- Sabu Thomas, MD, MSa,
- Charles J. Lowenstein, MDa,
- Brahmajee K. Nallamothu, MD, MPHd,e,f and
- Allan J. Walkey, MD, MScg
- aDivision of Cardiovascular Medicine, University of Rochester Medical Center, Rochester, New York
- bDivision of Pulmonary, Critical Care, and Sleep Medicine, National Jewish Health System, Denver, Colorado
- cMinneapolis Heart Institute, Minneapolis, Minnesota
- dDivision of Cardiovascular Medicine, University of Michigan Health System, Ann Arbor, Michigan
- eMichigan Integrated Center for Health Analytics and Medical Prediction, Ann Arbor, Michigan
- fAnn Arbor Veterans Affairs Center for Clinical Management and Research, Ann Arbor, Michigan
- gDivision of Pulmonary, Allergy, Sleep, and Critical Care Medicine, Boston University Medical Center, Boston, Massachusetts
- ∗Address for correspondence:
Dr. Jeffrey Bruckel, Division of Cardiology Ambulatory Care Facility, Ground Floor, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, New York 14618.
Objectives This study evaluated the effect of patient do-not-resuscitate (DNR) status on hospital risk-adjusted heart failure mortality metrics.
Background Do-not-resuscitate orders limit the use of life-sustaining therapies. Patients with DNR orders have increased in-hospital mortality, and DNR rates vary among hospitals. Variations in DNR rates could strongly confound risk-adjusted hospital mortality rates for heart failure.
Methods We identified a cohort of adults with primary diagnosis of heart failure by using the 2011 California State Inpatient Database, a claims database that captures “early DNR,” within 24 h of admission. Hospital-level risk-standardized in-hospital mortality was determined using random effects logistic regression. We explored changes in outlier status in models with and without early DNR status.
Results Among 55,865 patients from 290 hospitals hospitalized with heart failure, 12.1% (11.8% to 12.4%) had an early DNR order. Hospitals with higher risk-standardized DNR rates had higher risk-standardized mortality (ρ = 0.241; 95% confidence interval [CI]: 0.129 to 0.346; p < 0.001). Including DNR in models used to benchmark hospital mortality improved model performance (c-statistic from 0.821 [95% CI: 0.812 to 0.830] to 0.845 [95% CI: 0.837 to 0.853]; increased model explanatory power by 17%). Including DNR resulted in reclassification of 9.3% of hospitals’ outlier status. Agreement in hospital outlier designation between models with and without DNR was low to moderate (kappa coefficient: 0.492; 95% CI: 0.331 to 0.654).
Conclusions Accounting for DNR status resulted in a change in estimated risk-standardized mortality rates and classification of hospitals as performance “outliers.” Given public reporting of heart failure mortality measurements and their influence on reimbursement, accounting for the presence of early DNR orders in quality measures should be considered.
- Centers for Medicare and Medicaid Services
- Healthcare Cost and Utilization Project
- state inpatient database
- value-based purchasing
Heart failure is among the most common reasons for hospitalization in the United States, accounting for 967,000 admissions in 2010 (1). Medical care for heart failure is a frequent target for quality measurement and improvement programs because of its high morbidity, mortality, and costs to the health care system ($30.7 billion annually) (2). Heart failure hospitalizations are a focus of the Centers for Medicare and Medicaid Services (CMS) Value-Based Purchasing pay-for-performance program, with financial penalties for hospitals with high risk-adjusted mortality rates (3,4).
Many patients with advanced heart failure or a high burden of additional comorbidity may elect to forgo resuscitation in the event of cardiopulmonary arrest, a decision often documented with a do-not-resuscitate (DNR) order (5–7). Although DNR orders can be initiated at any time during a hospitalization, DNR orders within the first day of admission (“early DNR status”) are typically associated with patients’ pre-hospital frailty, burden of comorbidities, and goals of care (8). Thus, DNR status at hospital admission may function as a composite of prognostic variables, resulting in strong prediction of short-term mortality risk (9).
Because early DNR orders predict patient mortality and vary in prevalence among hospitals, patient DNR status is a potential source of unmeasured confounding in risk-adjusted mortality measurements. Failure to account for patient DNR status when determining hospital mortality rates may strongly impact hospital mortality measurements and, thus, hospital rankings (5,7,8,10–18). However, current approaches to assess and compare hospitals by using risk-standardized mortality rates do not account for patient early DNR status. We sought to evaluate how variations in patient DNR status are related to hospital risk-adjusted mortality rates for heart failure, first by quantifying variations in DNR rates among hospitals and then by determining how accounting for patient early DNR status changes estimated hospital risk-adjusted mortality rankings.
We used data from the California State Inpatient Database (SID), from the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project, to identify all adult patients admitted with a primary diagnosis of heart failure to a nonfederal hospital in the state of California in 2011 (19). The California SID has been previously described in detail; briefly, it includes administrative claims data generated from all nonfederal hospital discharges in the state of California during a given calendar year (20). The California SID is the only existing population-level database in the United States that both contains detailed administrative claims information and captures patient-level DNR status. Heart failure hospitalizations were defined as any patient with a primary International Classification of Disease-9th revision-Clinical Modification (ICD-9-CM) diagnosis of heart failure as defined by the CMS 30-day heart failure mortality specification (Online Table 1a) (21,22). Similar to the CMS specification, we excluded patients who were transferred from another hospital; patients who were discharged alive within 24 h of admission and not transferred to another acute care facility; and patients admitted to a hospital with <25 qualifying admissions during the year of study (21,22).
Exposures and outcomes
The California SID includes a unique variable that encodes patient DNR status within 24 h of admission. According to the California SID database documentation, the variable is defined as follows: “Source documentation indicates that if DNR = ‘Y’ then a ‘Do Not Resuscitate’ order was written at the time of or within the first 24 h of the patient's admission to the hospital” (23). This variable has been validated (demonstrating 84.3% agreement with chart abstraction) (24). Using the early DNR variable, we measured hospital-level rates of early DNR orders (defined as a percentage of total heart failure admissions with an early DNR order). We defined patient comorbidities using the Hierarchical Clinical Conditions (HCC), version 12 (25). For the purposes of risk adjustment, the HCC categories defined in the CMS measurement were included in the risk-adjustment models (as specified in the HCC-to-ICD-9 cross-walk available on the QualityNet Website) (26). We included all diagnoses recorded for a specific patient for 1 year prior to the index hospital admission. Some conditions were excluded as complications rather than comorbidities if they were not present on admission. The comorbidity dataset included data recorded in 2010 and 2011. The primary outcome was in-hospital mortality as determined from hospital records.
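The hospital-level early DNR rate is a simple per-hospital aggregation over admission records. As an illustrative sketch only (the study's analyses were run in SAS on SID claims; the record structure and field names below are hypothetical):

```python
from collections import defaultdict

def hospital_dnr_rates(admissions):
    """Early-DNR rate per hospital, as a percentage of its heart failure
    admissions. `admissions` is an iterable of (hospital_id, early_dnr)
    pairs, where early_dnr is True if a DNR order was written within
    24 h of admission. Field names are illustrative, not from the SID."""
    totals = defaultdict(int)
    dnr_counts = defaultdict(int)
    for hospital_id, early_dnr in admissions:
        totals[hospital_id] += 1
        if early_dnr:
            dnr_counts[hospital_id] += 1
    return {h: 100.0 * dnr_counts[h] / totals[h] for h in totals}

# Toy example: two hypothetical hospitals with different early-DNR use
sample = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]
rates = hospital_dnr_rates(sample)  # {"A": 25.0, "B": 50.0}
```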
Patient demographic and comorbidity data were summarized according to patient DNR status and across quartiles of hospital-level DNR rates. Continuous variables were compared using Student's t tests or analysis of variance, and categorical variables with Fisher's exact tests and Cochran-Armitage tests for trend across quartiles where applicable. We examined the unadjusted correlation between hospital DNR rates and in-hospital mortality rates (expressed as percentages) using Spearman correlation.
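Spearman correlation, used here for the unadjusted hospital-level association, is a Pearson correlation computed on ranks, with ties sharing their average rank. A standard-library sketch on toy data (not the study's data):

```python
def rank(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks matter, any monotone relationship between hospital DNR rates and mortality rates yields |ρ| = 1, regardless of scale.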
To account for patient characteristics that might have influenced differential DNR use, we fit hierarchical mixed-effects logistic regression models with hospital-specific random intercepts predicting DNR status. We quantified the extent of variation in DNR use among hospitals (with and without risk adjustment) using median odds ratios (MORs) (27,28). The MOR estimates the median increase in the odds of having a DNR order between 2 similar patients admitted to 2 randomly selected hospitals. Hospital DNR risk-standardized predicted-to-expected (P:E) ratios were generated from these models, with 95% confidence intervals (CIs) indicating whether a given hospital over- or underused DNR orders compared with the rate expected given its patient case mixture and comorbidity burden.
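The MOR is derived from the between-hospital variance of the random intercepts on the log-odds scale: MOR = exp(√(2σ²) · Φ⁻¹(0.75)) (27). A minimal sketch; the variance below is not taken from the paper but is chosen to show that σ² ≈ 1.38 would reproduce an MOR near the 3.07 reported in the Results:

```python
from math import exp, sqrt
from statistics import NormalDist

def median_odds_ratio(var_between):
    """MOR from the hospital-level random-intercept variance (log-odds
    scale): the median increase in odds of an early DNR order between two
    identical patients at two randomly chosen hospitals. Always >= 1."""
    return exp(sqrt(2.0 * var_between) * NormalDist().inv_cdf(0.75))

no_variation = median_odds_ratio(0.0)   # no between-hospital variation -> MOR = 1
illustrative = median_odds_ratio(1.38)  # hypothetical variance giving MOR ~ 3.07
```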
We mirrored CMS measurement methodology for generating risk-standardized measurements of heart failure mortality by fitting hierarchical mixed-effects logistic regression models with hospital-specific random intercepts predicting in-hospital mortality. The effect of DNR status on mortality was assessed by fitting a model with a fixed effect and a hospital-specific random slope for DNR status in addition to the CMS comorbidities. The random slope was intended to attenuate the effect of differential use of DNR orders at varying levels of patient severity of illness among hospitals (11,29). Hospitals were designated statistically high or low outliers if the 95% CI for the mortality P:E ratio did not include 1 (indicating a statistically significant difference from an average hospital in the dataset with similar patient characteristics).
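The outlier rule reduces to checking whether the CI of the P:E ratio excludes 1. A sketch (a hypothetical helper, not the CMS implementation):

```python
def classify_outlier(ci_low, ci_high):
    """Classify a hospital's mortality outlier status from the 95% CI of
    its predicted-to-expected (P:E) mortality ratio: an interval entirely
    above 1 is a high outlier, entirely below 1 a low outlier, and an
    interval containing 1 is indistinguishable from an average hospital."""
    if ci_low > 1.0:
        return "high outlier"
    if ci_high < 1.0:
        return "low outlier"
    return "nonoutlier"
```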
Model fit for the CMS covariate model and the model including DNR status as a fixed effect and random slope was compared using receiver operating characteristic curves, c-statistics, and the Akaike information criterion (AIC). Model calibration was assessed using the Hosmer-Lemeshow goodness-of-fit test. We assessed the contribution of DNR status to model fit by partitioning the AIC values, yielding the percentage of incremental explanatory power conferred by including DNR status (30).
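The c-statistic is the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. The AIC "portion" is read here, as one plausible interpretation of the cited method, as the share of the total AIC improvement over an intercept-only model that is attributable to the added DNR term. A stdlib sketch with toy numbers (not the study's values):

```python
def c_statistic(labels, probs):
    """Concordance (c-statistic / AUC): fraction of event-nonevent pairs
    in which the event has the higher predicted risk; ties count 0.5."""
    events = [p for y, p in zip(labels, probs) if y == 1]
    nonevents = [p for y, p in zip(labels, probs) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

def aic_portion(aic_null, aic_base, aic_full):
    """Share of the total explanatory power (AIC improvement over an
    intercept-only model) contributed by the added variable. This is an
    assumed reading of the paper's AIC partitioning, not its exact code."""
    return (aic_base - aic_full) / (aic_null - aic_full)
```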
Hospital DNR and mortality P:E ratios from each model were compared using Spearman correlation and visually using linear regression lines. Mortality P:E ratios from the comorbidity-only, CMS-type model were compared with those from the DNR-adjusted models using Spearman correlation. Changes in hospital outlier status between the comorbidity-only and DNR-adjusted models were assessed using kappa agreement statistics. Hospital risk-standardized mortality rates were calculated by multiplying each hospital's P:E ratio by the average mortality rate.
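Cohen's kappa compares the observed agreement in outlier labels between the two models against the agreement expected by chance from each model's label frequencies. A sketch on hypothetical hospital labels:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for agreement between two classifications of the same
    hospitals (e.g., outlier status with vs. without DNR adjustment):
    (observed - chance) / (1 - chance agreement)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Kappa of 1 indicates perfect agreement; values near 0 indicate agreement no better than chance, which is why the reported 0.492 is described as low-to-moderate agreement.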
We conducted several sensitivity analyses to evaluate the impact of our study assumptions on our results. First, to assess the impact of accounting for socioeconomic status on DNR use variability, we fit a model predicting DNR use, including terms for additional patient and hospital characteristics, including race, ethnicity, primary payer, and urban or rural location. Because the current CMS approach to mortality statistics does not allow for adjustment for socioeconomic variables, we elected to use the models not accounting for socioeconomic factors in the primary analysis. Second, to assess the impact of including a random slope for DNR use in the mortality model, we fit an additional model with random hospital effects only (without the random slope parameter). Third, we performed identical DNR use and mortality modeling processes on a subset of patients 65 years of age and older (the Medicare cohort). Because the CMS hospital mortality rankings and value-based purchasing only include patients eligible for Medicare, we wanted to evaluate results in a Medicare population.
Analyses were conducted using SAS version 9.4 software for UNIX (SAS Institute, Cary, North Carolina). Graphics were generated using JMP Pro 12 (SAS Institute) and the ggplot2 package in R version 3.3.2 (Comprehensive R Archive Network Project, Vienna, Austria). Statistical significance was determined at a 2-tailed p value of <0.05, and 95% CIs are reported. Odds ratios (ORs) and relative risks are reported where appropriate. This study was reviewed by the University of Rochester Medical Center institutional review board and determined to be exempt from review. Additional details describing the study cohorts and statistical analysis are available in the Online Appendix.
The study cohort included 55,865 patients with a principal diagnosis of heart failure from 290 hospitals. In the full cohort, 12.1% of patients (11.8% to 12.4%; n = 6,755) had a DNR order within 24 h of admission. Patient characteristics and comorbidities across the quartiles of hospital DNR use are reported in Tables 1 and 2. The top 10 primary heart failure diagnosis codes are listed in Online Table 1b. See Online Tables 2 to 5 for patient characteristics stratified by DNR status.
Most patient comorbidities were significantly associated with DNR use (in the context of a multivariate model) (Online Tables 6 and 7 for DNR model information). The strongest predictors of higher DNR use were age (OR per 10 years: 2.36; 95% CI: 2.27 to 2.44), dementia (OR: 1.77; 95% CI: 1.65 to 1.90), and metastatic cancer or leukemia (OR: 1.71; 95% CI: 1.40 to 2.10; p < 0.001 for all). Male sex (OR: 0.79; 95% CI: 0.75 to 0.84; p < 0.001), acute coronary syndrome (OR: 0.90; 95% CI: 0.83 to 0.98; p = 0.015), arrhythmia (OR: 0.94; 95% CI: 0.88 to 0.99; p = 0.03), and diabetes (OR: 0.85; 95% CI: 0.80 to 0.91; p < 0.001) were all associated with lower DNR use.
Unadjusted hospital DNR rates varied widely. The median (25th, 75th percentile) hospital DNR rate was 8.7% (3.9%, 20.1%), ranging from 0% to 53.8% (Figure 1A). The hospital to which a patient was admitted was strongly associated with receipt of a DNR order, with an MOR of 3.07 (2.73 to 3.41) for early DNR orders among hospitals (Online Table 8). In-hospital mortality was 3.1% (2.9% to 3.2%; n = 1,718 of 55,865 patients) in the primary cohort. DNR status was strongly associated with in-hospital mortality; patients with DNR orders experienced a 9.9% mortality rate (n = 666 of 6,755 patients) compared with 2.1% for those without DNR orders (n = 1,052 of 49,110 patients; adjusted relative risk: 3.63; 95% CI: 3.17 to 4.16) (Online Tables 9 to 11).
Model performance characteristics after adjusting for DNR
Adding DNR status to the comorbidity-only model resulted in a statistically significant improvement in the model c-statistic, from 0.821 (95% CI: 0.812 to 0.830) to 0.845 (95% CI: 0.837 to 0.853). The AIC partition attributable to DNR status was 0.170, indicating that the addition of the DNR variable improved the explanatory power of the model by 17.0% relative to the comorbidity-only model. Model calibration was poor for all models regardless of inclusion of DNR status, based on the Hosmer-Lemeshow test results (Online Table 12). The addition of DNR status to the mortality models did not appreciably change the effect sizes or significance of the other model covariates (Table 3).
Association between hospital DNR rate and hospital mortality rate
In multivariable models that included all measured covariates except DNR status, risk-standardized hospital DNR rates and mortality were correlated (ρ = 0.241; 95% CI: 0.129 to 0.346; p < 0.001) (Online Tables 13 and 14). After adding DNR to multivariate models, risk-standardized hospital DNR rate and mortality rate were no longer correlated (ρ: −0.048; 95% CI: −0.162 to 0.068; p = 0.42) (Figure 1B).
DNR and hospital rankings
In models excluding DNR status, 22 hospitals (7.6%; 5.1% to 11.2%) were designated high-outlier hospitals (with significantly higher predicted mortality than expected). There were 16 low-outlier hospitals (5.5%; 3.4% to 8.8%) in the total cohort (with significantly lower predicted mortality than expected). After adjustment for DNR status, 50% (n = 11 of 22) of the high-outlier hospitals were reclassified as nonoutliers, and 1.2% (n = 3 of 252) of the nonoutlier hospitals were reclassified as high outliers (Figure 2). Similarly, after adjustment for DNR status, 75% of the low-outlier hospitals (n = 12 of 16) were reclassified as nonoutliers, and 0.4% of the nonoutlier hospitals (n = 1 of 252) was reclassified as a low outlier. There was low-to-moderate agreement in hospital outlier designation between models that did and did not include DNR status (kappa coefficient of 0.492; 95% CI: 0.331 to 0.654) (Table 4, Online Table 15).
Sensitivity and subgroup analyses
Socioeconomic status variables were significantly associated with DNR use and accounted for some of the variation in DNR use among hospitals (Online Tables 6 and 7). The random intercept-only model (without random DNR slopes) performed marginally worse than the random intercept and slope model (Online Tables 9 and 10). Patients in the Medicare subcohort (40,414 patients from 278 hospitals) had higher rates of DNR use and higher mortality, but results paralleled the findings from the total cohort (Online Tables 9 to 11).
We explored the effect of accounting for patient DNR status in models that seek to compare hospital mortality rates among patients hospitalized with heart failure. Because patient DNR status within 24 h of admission was strongly associated with hospital mortality, and DNR order rates showed more than 3-fold variation among hospitals, accounting for patient DNR status strongly influenced hospital risk-standardized mortality rates. Our findings have significant implications for current quality measurement programs for heart failure, which rely heavily on risk-adjusted 30-day heart failure mortality to judge hospital quality and determine hospital reimbursements.
The variation in early DNR use among hospitals was striking and not explained by patient case mixture. Based on MOR results, the hospital to which a patient was admitted was more strongly associated with receipt of a DNR order than the strongest clinical predictors in the model (age, dementia, metastatic cancer). This finding mirrors previous research demonstrating that DNR use is strongly influenced by patient factors such as race, socioeconomic status, and education, as well as physician and institutional culture and practice norms (8,10,31–36).
In addition to wide variation among hospitals, patient DNR status was strongly associated with mortality risk. The strong association of DNR status with mortality despite adjustment with multiple additional comorbidities suggests that DNR status captures multiple unmeasured patient factors (e.g., frailty, prognosis, preferences for less invasive care) associated with mortality. Our findings extend those of the study by Baldwin et al. (9), who identified DNR status as the strongest determinant of 6-month mortality after critical illness. The combination of wide variation in DNR rates among hospitals and strong associations between DNR status and mortality act to confound hospital mortality estimates when unaccounted for in risk adjustment.
Risk-adjusted hospital mortality P:E ratios and outlier status are publicly reported, affecting hospital reputation and reimbursement. Although the absolute number of hospitals designated high-mortality outliers was relatively small compared with the total number of hospitals examined, one-half of the hospitals designated high-mortality outliers were reclassified as being no different from expected mortality after accounting for patient early DNR orders. This raises the question of whether measuring risk-adjusted mortality without accounting for the presence of DNR orders at hospital admission truly reflects the quality of care being provided. Many patients have limited life expectancy for a variety of reasons, including specific life-limiting diagnoses or general debility, and DNR use is entirely appropriate for these patients. Thus, penalizing health care systems with higher, and potentially more patient-centered, DNR use that subsequently leads to higher mortality may be inappropriate (37). Alternatively, some hospitals may overuse DNR orders, potentially affecting care delivery in a variety of ways (such as decreased use of evidence-based therapies for heart failure). Inclusion of DNR status in risk-adjustment models would likely result in misclassification of such DNR overusers. This issue deserves further study prior to inclusion of DNR status in risk models.
Our data are from 2011; although not contemporary, this is the most recent California SID release available, and no other source of population-based data from multiple hospitals that includes early DNR status currently exists, including CMS data. However, we do not expect that the association between DNR status and mortality would have changed markedly in more contemporary data. We attempted to replicate the CMS measurement methodology as closely as possible by using similar comorbidity classifications and model terms; however, due to the limitations of the data source, our models differ in several key regards (a 1-year measurement period rather than 3 years, and lack of some predictor variables due to missing data). We did not have access to outpatient diagnosis codes, likely resulting in some degree of underestimation and misestimation of comorbidity burden for some patients. There also may have been some misclassification of DNR status, although prior work has shown this variable to be sufficiently reliable, with 84.3% agreement with chart review (and specificity and sensitivity similar to other administrative data-derived elements); the 15.7% of misclassified patients may nonetheless further confound our results (24). This prior work demonstrated that death during admission increased the rates of both false-positive and false-negative DNR indicators, which would likely bias the result toward the null. Additionally, some of the variability may be driven more by coding practices than by true differences in DNR rates. Finally, our primary outcome of in-hospital mortality may differ from 30-day mortality; this may introduce some bias toward higher mortality for hospitals with higher severity and longer lengths of stay compared with a 30-day measure. We were only able to assess the effect of DNR status on patients in California because this variable is present only in the California SID.
It has been observed that DNR status confounds a variety of analyses; we therefore suggest that this variable be required as part of the standard SID and/or National Inpatient Sample databases to facilitate future study (38). Despite these limitations, no other currently existing data are available to answer our research question.
Because mortality statistics are publicly reported and affect hospital reimbursement, accuracy in measuring and reporting these outcomes are essential. Failure to appropriately account for patient DNR status at hospital admission in risk models may discourage programs from eliciting patient preferences for life support therapy, as hospitals with more patients selecting limitations on life support have higher mortality rates that result in financial penalty. Further study is needed to examine methods to accurately measure orders for limitation in life-sustaining treatments to better account for variation in patient preferences in quality measurements.
COMPETENCY IN MEDICAL KNOWLEDGE: Among patients with heart failure, use of DNR orders varies widely among hospitals and is strongly related to mortality. Risk-adjusted mortality models for patients with heart failure that include DNR status have improved fit compared with models without DNR status. Accounting for DNR status impacts hospital risk-adjusted mortality for heart failure and results in a substantial amount of reclassification of outlier status. Capturing this variable may improve performance ratings.
TRANSLATIONAL OUTLOOK: Development of appropriate quality measures has proven to be complex and challenging. Our research helps demonstrate that significant unmeasured confounding of the measures exists. Further research is needed to ensure quality metrics accurately reflect patient-centered care.
For supplemental tables, please see the online version of this paper.
Dr. Bruckel serves as an advisor for AvantGarde Health. Dr. Walkey has received funding from U.S. National Institutes of Health/National Heart, Lung, and Blood Institute (NHLBI) grant K01HL116768. Dr. Nallamothu has received research funding from NHLBI grant 1R01HL123980 and Veterans Affairs Health Services research and development grant IIR 13-079; and has served on the Scientific Advisory Board for United Healthcare.
- Abbreviations and Acronyms
- AIC = Akaike information criterion
- CI = confidence interval
- CMS = Centers for Medicare and Medicaid Services
- ICD-9-CM = International Classification of Disease-9th revision-Clinical Modification
- MOR = median odds ratio
- HCC = hierarchical clinical conditions
- OR = odds ratio
- P:E = predicted-to-expected ratio
- SID = State Inpatient Database
- 2017 American College of Cardiology Foundation
- Pfuntner A.,
- Wier L.,
- Stocks C.
- Heidenreich P.A.,
- Trogdon J.G.,
- Khavjou O.A.,
- et al.
- Centers for Medicare and Medicaid Services. Implementation and maintenance of CMS mortality measures for AMI & HF: frequently asked questions. 2007. Washington, DC: CMS. Available at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Downloads/HospitalMortalityAboutAMI_HF.pdf. Accessed December 10, 2016.
- Centers for Medicare and Medicaid Services. Hospital value-based purchasing. Washington, DC: CMS, 2015.
- Phadke A.,
- Heidenreich P.A.
- McAlister F.A.,
- Wang J.,
- Donovan L.,
- Lee D.S.,
- Armstrong P.W.,
- Tu J.V.
- Walkey A.J.,
- Weinberg J.,
- Wiener R.S.,
- Cooke C.R.,
- Lindenauer P.K.
- Kelly A.G.,
- Zahuranec D.B.,
- Holloway R.G.,
- Morgenstern L.B.,
- Burke J.F.
- Walkey A.J.,
- Weinberg J.,
- Wiener R.S.,
- Cooke C.R.,
- Lindenauer P.K.
- Healthcare Cost and Utilization Project. State inpatient databases. HCUP. 2005-2011. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/sidoverview.jsp. Accessed December 1, 2016.
- Healthcare Cost and Utilization Project
- Krumholz H.M.,
- Wang Y.,
- Mattera J.A.,
- et al.
- Centers for Medicare and Medicaid Services
- Healthcare Cost and Utilization Project. Central distributor SID: description of data elements (DNR). HCUP. August 2008. Rockville, MD: Agency for Healthcare Research and Quality. Available at: https://www.hcup-us.ahrq.gov/db/vars/siddistnote.jsp?var=dnr. Accessed December 1, 2016.
- Centers for Medicare and Medicaid Services. Qualitynet resources–claims-based mortality measure resources. CC-to-ICD-9 crosswalk. Baltimore, MD: CMS. Available at: https://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier4&cid=1182785083979. Accessed January 5, 2016.
- Merlo J.,
- Chaix B.,
- Yang M.,
- Lynch J.,
- Rastam L.
- Mehta A.B.,
- Cooke C.R.,
- Douglas I.S.,
- Lindenauer P.K.,
- Wiener R.S.,
- Walkey A.J.
- Harrell F.
- Chang D.W.,
- Brass E.P.
- Cutler D, Stern A, Skinner J, Wennberg D. Physician beliefs and patient preferences: a new look at regional variation in health care spending. Boston, MA: Harvard Business School Working Paper, no. 15-090; 2015.
- Bradford M.A.,
- Lindenauer P.K.,
- Wiener R.S.,
- Walkey A.J.