Is your surgeon better than my surgeon? Even if that were true, with all the nuances involved in performing a surgical procedure and the countless complications that can arise simply from the design, function, and complexity of the human body, would it even be possible to know? Well, at least two companies believe they have found the Holy Grail of scoring physicians on their quality of care.
Several years ago, a company by the name of Consumers’ Checkbook (Checkbook.org) filed a FOIA request with HHS to make available line-item data with physician identifiers from the CMS database. HHS refused, Consumers’ Checkbook sued, and the district court ruled in its favor. HHS appealed the decision and was supported by the AMA as well as other medical societies. In the end, the United States Court of Appeals for the District of Columbia Circuit held that physicians could maintain the confidentiality of their payments from Medicare, and reversed the decision. Ultimately, however, HHS threw in the towel and subsequently released the data, or at least made it available at a cost. As a result, pretty much anyone with a computer and the cash has access to both physician and hospital claims data. And even though the files supposedly contain only de-identified data, the fact that an entity can follow a patient based on treatment and/or admission dates, procedures performed, and overall diagnoses, as well as follow-up treatment and/or admissions, challenges just how private our data really are.
Yesterday, I received an email announcing that Consumers’ Checkbook will be publishing a Surgeon Rating report that claims to identify the quality of a given surgeon based on what is, from what I read, a completely flawed methodology, rendering the results all but useless. The email also noted that ProPublica (propublica.org) has published what it calls its “Surgeon Scorecard.” And while ProPublica went to a great deal of time and effort to create a far more rigorous methodology than Consumers’ Checkbook, the whole idea of rating a physician’s quality from incomplete and often inaccurate data is, in itself, fatally flawed.
First and foremost, we have no idea what the Medicare mix is for these physicians. So even if they had perfect data (which they don’t), a perfect methodology (which they don’t), and a way to ensure patient compliance with the physician’s orders (which they can’t), then in the very best-case scenario they might be able to make some inference about Medicare patients only. Considering, as well, that not all Medicare patients and not all procedures were included in the study, the noise begins to block out the signal.
ProPublica contracted with a group of physicians to review medical records and made a valiant effort to determine whether a patient suffered a complication (morbidity or mortality) as a result of the initial procedure performed. Much of this was based on whether the patient died within a given time period after the procedure or was readmitted to the hospital within 30 days. Unfortunately, neither of these is an accurate marker for establishing a nexus between the complication and the surgical procedure performed, nor, in particular, the surgeon who performed it. Checkbook.org, based on what was published on its website, looked only at whether the patient died within 90 days of the procedure, which in and of itself is a worthless metric.
For any of this to make sense, we first need to understand the general health of the patient, which would include an accurate accounting of all ICD codes associated with that patient, and not just for a given visit or procedure, but across a more extensive time period. Replacing a hip on a 70-year-old male is problematic enough, but when the patient has diabetes, COPD, ESRD, or a host of other comorbid conditions, the iatrogenic risk can increase geometrically; ProPublica tries to capture this by looking at the patient’s admission record, which is simply too static a snapshot to do the job. Ignoring the health history of the patient is, in my opinion, a flaw that pretty much invalidates much of the value of the findings. After speaking with one of the analysts from ProPublica, I do understand that they attempted to risk-adjust based on things like procedure codes, age, and so on, but the capture was for a single event and did not include any historical data on a given patient’s diagnoses or treatments. They even included demographic information, but on the hospital, not the patient.
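To make the case-mix problem concrete, here is a minimal simulation of my own (an illustration, not either company’s actual method, and the risk figures are assumptions chosen for clarity): two hypothetical surgeons with identical skill, where one happens to operate on a far sicker panel of patients.

```python
import random

random.seed(42)

def observed_complication_rate(n, baseline_risk, comorbid_share, comorbid_extra_risk):
    """Simulate n procedures and return the raw complication rate.

    Each patient's risk is the surgeon's baseline (skill-related) risk,
    plus extra risk if the patient carries comorbid conditions
    (diabetes, COPD, ESRD, and the like)."""
    complications = 0
    for _ in range(n):
        risk = baseline_risk
        if random.random() < comorbid_share:      # is this a sicker patient?
            risk += comorbid_extra_risk
        if random.random() < risk:                # did a complication occur?
            complications += 1
    return complications / n

# Both surgeons have the SAME skill: a 2% baseline complication risk.
# Surgeon A's panel is 10% comorbid; Surgeon B's panel is 60% comorbid.
rate_a = observed_complication_rate(10_000, 0.02, comorbid_share=0.10, comorbid_extra_risk=0.06)
rate_b = observed_complication_rate(10_000, 0.02, comorbid_share=0.60, comorbid_extra_risk=0.06)

print(f"Surgeon A (healthier panel): {rate_a:.1%}")  # expected near 2.6% (2% + 10% of 6%)
print(f"Surgeon B (sicker panel):    {rate_b:.1%}")  # expected near 5.6% (2% + 60% of 6%)
```

Equal skill, yet a raw scorecard would show Surgeon B with roughly double the complication rate, purely because of who walks through the door. Without the patient’s longitudinal diagnosis history, a rating system cannot separate that case-mix effect from actual surgical quality.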
From what I understand, both companies used the 100 percent Part A database made available by HHS (at a cost, of course). There is a lot of information at the claim and line level contained within the data, and while no PHI was obtained, there are patient identifiers that allow the analysts to follow a given patient’s admission patterns following the procedure of interest. What is missing, however, is the ability to reconcile the admission diagnoses with the Part B diagnoses rendered either as part of, or at some time prior to, that surgical event. I believe it is common knowledge amongst coders as well as auditors that there is a noticeable disparity between the physician’s ICD-9 codes when admitting the patient and the hospital’s ICD-9 codes that appear on the admission form and are used to assign the DRG. Relying upon the assumption that the coding is correct also creates a great deal of noise in the data analyses. According to the most recent CERT study, some 12 percent of all claims reviewed were considered to have been coded in error, and these are claims that were paid first and then reevaluated. How many more slipped through the system without the benefit of a complete audit?
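As a back-of-the-envelope sketch (again my own illustration: I treat the 12 percent CERT figure as a uniform probability that any claim’s complication flag is miscoded, which is a simplifying assumption), consider how a coding-error floor can drown out a real difference between two surgeons:

```python
import random

random.seed(7)
ERROR_RATE = 0.12  # share of claims coded in error, per the CERT figure

def observed_rate(n, true_rate):
    """Observed complication rate when each of n claims is miscoded
    (its complication flag flipped) with probability ERROR_RATE."""
    flagged = 0
    for _ in range(n):
        truth = random.random() < true_rate      # did a complication really occur?
        if random.random() < ERROR_RATE:
            truth = not truth                    # coding error corrupts the record
        flagged += truth
    return flagged / n

# Two hypothetical surgeons whose TRUE complication rates differ by one point:
r_low  = observed_rate(50_000, 0.03)   # expected near 14.3% observed
r_high = observed_rate(50_000, 0.04)   # expected near 15.0% observed

print(f"true 3% -> observed {r_low:.1%}")
print(f"true 4% -> observed {r_high:.1%}")
```

Under this (admittedly crude) error model, a genuine one-point difference between surgeons shrinks while both observed rates are inflated several-fold by the error floor; the miscoding contributes far more to each surgeon’s apparent rate than the surgeons’ actual difference does. That is what it means for the noise to block out the signal.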
Another problem has to do with using readmissions as an indicator of a complication. Readmission within 30 days, even for the same diagnosis, is not necessarily a complication of the original procedure. In this group of patients (Medicare-specific), chronic problems are relatively common, and regular admission to a hospital, for many of these patients, could indicate nothing more than that they have, well, chronic health problems. In the immortal words of Sigmund Freud (maybe), “Sometimes a cigar is just a cigar.”
We also need to do a bit of a root cause analysis when it comes to readmissions and death as the result of a complication of a given procedure. Important note: I am not saying that it doesn’t happen; I am saying that you cannot assign responsibility to the provider with the data available. I spent a dozen years as a physician assistant and have worked with tens of thousands of physicians over the past 40 years, and one thing I know for a fact is that patients tend to be non-compliant when it comes to following the physician’s orders. They don’t exercise as instructed. They don’t take their medications as instructed. They don’t avoid the shower after surgery. They don’t keep the wound area clean. They don’t rest as they should. The point is, even if a patient were to suffer a complication of a given surgical procedure, who is to say that the fault lies with the physician? I have worked with hospitals that, no matter how good the surgeon, are so poorly run that complications are inevitable. So throw in hospital-related mistakes and patient non-compliance, and it isn’t difficult to imagine that the noise within the data makes any specific conclusion improbable, if not impossible.
In an online video of the ProPublica product being discussed on The Today Show, one patient compared this to calculating a baseball player’s batting average. As an analogy, the comparison is absurd. In baseball, we have all of the data, all of the time, accurately and in real time, unlike the almost useless data made available by the government. There is a nearly infinite number of things that can go wrong with the human body, yet at most a few thousand ways to report what was done to the patient, based on a finite and often overly limited set of descriptions and codes. The only way a study like this would have any value is if a case-by-case analysis were performed, and there are more than a few scholarly articles out there that have taken this route. I have designed many studies throughout my career, including clinical trials, and I can tell you that the results of these two analyses would never make it past the first stage of peer review.
One area that will likely be overlooked involves the potential audit risk and civil liability assigned to individual physicians. I have seen audits conducted in the past that challenged whether a physician was providing the quality of care required under a payer contract. In addition, I see this as another opportunity for auditors to fleece practices based on their assessment of whether the procedures were performed as expected, resulting in a demand for total or partial repayment. I also expect that in upcoming medical malpractice lawsuits, plaintiffs’ attorneys will rely heavily on the scorecard for the surgeon in question, but of course only if the scorecard report is negative. If it turns out to be positive, watch how fast the attorneys will attack the methodology to discount the benefit it could provide for the defendant.
Here is what these types of inaccurate and damaging reports do: they discourage physicians from seeing higher-risk patients. And often, these are patients who are socially, demographically, or financially challenged. Just look at what happened in New York in 1989, when it became the first state to publish mortality rates for heart surgeons, creating two scorecards: one for coronary bypass and one for angioplasty. An article entitled “Heartless,” published in New York Magazine, argued that this report card system had a “chilling” effect: it discouraged surgeons from treating the sickest patients in order to “prop up their personal success records.” While I totally disapprove of that type of behavior, I understand why it is done. In addition to lives being on the line, so are careers. With all due respect to ProPublica (and no respect due Consumers’ Checkbook), it is my opinion that their reports and scorecards are not only worthless with regard to their goals, but irresponsible, reckless, and likely to do more harm than good.
Our healthcare system and our healthcare providers are under attack and it threatens to discourage qualified and caring individuals from entering this field. At some point (likely soon), we are going to hit a breaking point and if we don’t do something now to come to the defense and aid of our providers, we may find that the damage done will be irreversible.
And that’s the world according to Frank.
About the Author
Frank Cohen is the Director of Analytics and Business Intelligence for DoctorsManagement. He is a health care consultant who specializes in data mining, applied statistics, practice analytics, decision support, and process improvement. He is also a member of the National Society of Certified Healthcare Business Consultants (NSCHBC.ORG).