Quality Measurement in Cardiac Surgery
History of Quality Measurement in Cardiac Surgery
Release of Unadjusted Coronary Artery Bypass Grafting Outcomes by the Federal Government
Cardiothoracic surgeons have been systematically measuring quality and using those data to improve performance since the late 1980s, longer than any other healthcare specialty. The proximate stimulus for these efforts was the seminal publication by the Health Care Financing Administration (HCFA, the predecessor of the Centers for Medicare and Medicaid Services, or CMS) of hospital mortality reports (at the time, referred to by some as the “death lists”) beginning in 1986.[1] In addition to overall results, several individual conditions and procedures were profiled, including coronary artery bypass grafting (CABG) surgery.
The Society of Thoracic Surgeons Response
Unfortunately, the outcomes used in these reports were minimally risk adjusted,[2],[3],[4],[5],[6] leading many hospitals and cardiac surgery programs to complain to The Society of Thoracic Surgeons (STS) that the acuity of their patients was not being accounted for adequately. STS responded by forming an Ad Hoc Committee on Risk Factors, led by Dr. Nick Kouchoukos. In their subsequent report,[7] the authors presciently defined a number of critical principles of performance measurement (often referred to today as provider profiling) that have changed little over the succeeding decades:
“In order to arrive at an estimate of ‘quality of care,’ critical analysis of the population under study is required. Such analysis should include, but should not be limited to, determination of hospital mortality rates. All of the risk factors that are predictive of operative mortality must be identified and subjected to appropriate statistical analysis before comparisons of mortality rates between institutions can be made. Although the HCFA analysis included such variables as age, sex, and race (indicators of case mix), it did not, in the opinion of some critics, adequately assess case severity, a serious deficiency when examining mortality....
Thus, comparison of raw mortality rates among hospitals, without adequate information regarding the distribution of these and other important variables associated with increased mortality, and correction for differences in the frequency distribution of these variables if they exist, is inappropriate and misleading.... Analyses incorporating these and possibly other variables permit stratification and identification of subgroups of patients that are at different levels of risk for coronary artery surgery. Such analyses will also permit computation of risk models and expected operative mortality rates. Comparison of these expected or predicted operative mortality rates with observed mortality rates from different institutions will provide a more meaningful assessment of quality of care than the data provided by HCFA. Postoperative complications and other indices of hospital morbidity should also be incorporated into any assessment of quality of care. Such data are not available in the HCFA study.”
In this 30-year-old statement from STS, the foundations of responsible, scientifically valid healthcare quality measurement were clearly articulated, and they remain central to STS practice today. Risk adjustment is essential for outcomes measures, and it must include all important risk factors. Optimally, robust, standardized clinical data such as those available from the STS National Database are used rather than claims data (often referred to as administrative or billing data), which often lack clinical granularity and specificity. A broadly representative benchmark population is necessary so that the results of each provider can be compared with what would have been expected for their patient mix (see risk adjustment section below). Finally, the authors correctly observed that mortality alone, even if risk adjusted, is not an adequate end point. Many serious postoperative complications (eg, disabling strokes or dialysis-dependent renal failure) can have profound impacts on the lives of surviving patients, but these adverse occurrences would not be captured if postoperative mortality were the sole outcome metric. Further, as mortality rates for most cardiac surgical procedures have declined to very low levels, the relatively small number of these end points makes it statistically challenging to differentiate quality among programs based on deaths alone. As described in a subsequent section, these observations were the impetus in 2007 for the development of the family of STS multi-domain composite measures that incorporate both postoperative mortality and complications and, for some procedures, process measure compliance.
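To make the observed-to-expected (O/E) comparison concrete, the brief Python sketch below shows how a program's expected mortality could be derived from patient-level predicted risks and compared with its observed deaths. This is an illustration only: the patient counts, predicted probabilities, benchmark rate, and function name are hypothetical and are not drawn from any STS model or report.

# Illustrative sketch of an observed-to-expected (O/E) comparison.
# All numbers below are hypothetical; real expected values come from
# validated clinical risk models applied to each patient.

def observed_to_expected(observed_deaths, predicted_risks):
    """Return expected deaths and the O/E ratio for one program."""
    expected_deaths = sum(predicted_risks)        # sum of per-patient predicted probabilities
    oe_ratio = observed_deaths / expected_deaths  # <1.0 better than expected, >1.0 worse
    return expected_deaths, oe_ratio

# Hypothetical program: 500 CABG patients, 9 observed deaths,
# each patient carrying a model-predicted mortality probability.
predicted_risks = [0.02] * 400 + [0.05] * 100     # mostly lower-risk patients, some higher risk
observed_deaths = 9

expected, oe = observed_to_expected(observed_deaths, predicted_risks)
print(f"Expected deaths: {expected:.1f}")         # 13.0 for this hypothetical case mix
print(f"O/E ratio: {oe:.2f}")                     # 0.69 -> fewer deaths than expected

# A risk-adjusted rate is commonly reported as the O/E ratio multiplied
# by the overall mortality rate in the benchmark population.
benchmark_rate = 0.025                            # hypothetical benchmark rate
print(f"Risk-adjusted mortality: {oe * benchmark_rate:.3%}")   # about 1.7%

In this framing, the benchmark population supplies both the per-patient expected risks and the overall rate against which each program's case mix is standardized.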
Development of the Society of Thoracic Surgeons Database
The recommendations of this ad hoc committee led to the recognition that a standardized clinical data source, collected prospectively by trained data managers, was necessary for robust risk adjustment and national benchmarking. STS leaders, spearheaded by Dr. Richard Clark, called for the development of a national cardiac surgery clinical registry,[8] and in response, the STS National Database was implemented in 1989.[9],[10]
Society of Thoracic Surgeons Risk Models
During the same time frame that the STS National Database was being developed, Dr. Fred Edwards introduced some of the first models for cardiac surgery risk adjustment, using Bayesian approaches.[11],[12],[13],[14],[15] Multiple iterations of these models have been introduced over the succeeding years, most of which used logistic or hierarchical regression modeling.[16],[17],[18],[19],[20],[21],[22],[23],[24],[25]
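As a rough illustration of how a logistic regression risk model yields a predicted mortality probability for an individual patient, the sketch below applies the standard logistic (inverse logit) transformation to a linear combination of risk factors. The coefficients, variable names, and example patient are invented for illustration and do not correspond to any published STS model.

import math

# Illustrative logistic regression estimate of operative mortality risk.
# Coefficients and risk factors are hypothetical, NOT a published STS model.
HYPOTHETICAL_COEFFICIENTS = {
    "intercept": -5.0,
    "age_per_year_over_60": 0.04,      # log-odds increase per year of age above 60
    "reoperation": 0.9,                # prior cardiac surgery
    "ejection_fraction_below_30": 0.7,
    "emergent_status": 1.2,
}

def predicted_mortality(age, reoperation, low_ef, emergent):
    """Return the predicted probability of operative mortality for one patient."""
    c = HYPOTHETICAL_COEFFICIENTS
    log_odds = (
        c["intercept"]
        + c["age_per_year_over_60"] * max(age - 60, 0)
        + c["reoperation"] * int(reoperation)
        + c["ejection_fraction_below_30"] * int(low_ef)
        + c["emergent_status"] * int(emergent)
    )
    return 1.0 / (1.0 + math.exp(-log_odds))   # logistic (inverse logit) transform

# Example: a 72-year-old, first-time, elective patient with preserved ejection fraction.
print(f"{predicted_mortality(72, False, False, False):.3f}")   # ~0.011

Summing such per-patient predicted probabilities across a program's case mix yields the expected mortality used in the O/E comparison sketched earlier.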
Confidential Feedback of Society of Thoracic Surgeons Results to Database Participants
The motivation for development of the STS National Database and its associated performance measures was the public release of flawed, inadequately risk-adjusted CABG mortality results by the federal government. Although created in response to misleading public report cards, the STS data and performance reports were initially used only for confidential feedback of nationally benchmarked results to participants, thereby facilitating performance improvement. The salutary effects of this approach were demonstrated by steady declines in rates of adverse outcomes (eg, mortality) and observed-to-expected (O/E) ratios, and by increased use of evidence-based practices such as internal mammary artery grafts for CABG.[26],[27]
Feedback of detailed performance information to participants remains a core function of the STS National Database today, some 3 decades after its initiation. For each major procedure such as CABG, a participant’s score is provided for each performance domain, together with the STS star rating (corresponding to better than expected, as expected, or worse than expected) and the distribution of scores across all STS participants (Figure 1). For multi-component domains such as complications and perioperative medication use, a "drill down" report is provided with scores for each individual complication or medication (Figure 2). This information is intended to provide guidance for improvement actions. An additional report screen provides information regarding the participant’s mean reported frequencies of patient comorbidities (which reflect the participant’s case-mix complexity and affect the calculation of risk-adjusted outcomes), including how the current year’s frequencies compare with previous years and with the frequencies reported by similar programs, other centers in the same region, and STS overall (Figure 3). Finally, a summary of risk-adjusted mortality and complication results is provided, with comparisons to previous years, similar programs, regional programs, and STS overall (Figure 4).
Regional Data-Driven Collaboratives and Performance Improvement
At about the same time the STS Database was initiated, the Northern New England Cardiovascular Disease Study Group (NNECVDSG), a consortium of 5 regional cardiothoracic programs in northern New England, developed its own data registry and risk models for CABG surgery.[28],[29] The group used these data as the basis of a highly structured, best-practice collaborative program that substantially improved results in the region.[30],[31] Studies from that era demonstrated that the NNECVDSG, using confidential feedback and collaborative performance improvement, achieved gains comparable to those seen in states with public reporting.[32],[33] Highly successful registry-based cardiac surgery quality initiatives have subsequently been developed in Michigan[34] and Virginia[35],[36],[37] and have dramatically improved both clinical outcomes and costs.