Crowdsourcing Compliance: The Wrong Way to Do Things


On April 9, the Centers for Medicare & Medicaid Services (CMS) made public a database that for years has been both highly coveted and highly protected. Prior attempts to get it released were stridently challenged in court, as it was believed that releasing the data, with all of its flaws and limitations, would do more harm than good. Well, not much has changed, except now the data is freely available on the CMS website.

The database contains utilization, billing, and payment data on pretty much every person and organization that billed Medicare in 2012. I say “pretty much” because there is one caveat: the file does not include data for services that were performed on 10 or fewer beneficiaries. So, as CMS indicates in its documentation, “users should be aware that summing (up) the data in the file may underestimate the true Part B . . . totals.” Should? How about a more emphatic “will”? After I cleaned up the data a bit and removed the non-physician records, the resulting sum came up 685 million procedures short, or around 30 percent of the total in the Medicare claims database. That means we are starting off with only about 70 percent of the procedures actually submitted to Medicare in 2012. And that average hides a huge range: for anesthesiologists the match rate was only 34 percent, while the best match rate went to pathology, at 88.37 percent. If you look at the dollars instead, it gets even worse. General surgery, for example, has a paid match rate of just over half, at 53 percent, with emergency medicine coming in at over 9 percent. The average paid match rate across all of the data is just over 84 percent.
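The effect of that suppression rule can be sketched in a few lines of Python. The rows below are made up for illustration; only the rule itself (drop any provider/procedure row with 10 or fewer unique beneficiaries) comes from the CMS documentation quoted above.

```python
# Hypothetical data: how CMS's small-cell suppression rule makes the
# public file undercount true utilization. Rows with 10 or fewer unique
# beneficiaries are dropped before release, so summing the released rows
# "will" (not "may") fall short of the true totals.

true_rows = [
    # (provider_npi, hcpcs_code, unique_beneficiaries, services)
    ("1000000001", "99213", 250, 600),
    ("1000000002", "99213",   8,  20),  # suppressed: <= 10 beneficiaries
    ("1000000003", "00100",   5,   5),  # suppressed: <= 10 beneficiaries
    ("1000000004", "88305", 120, 300),
]

# Apply the suppression rule to produce the "released" file
released = [r for r in true_rows if r[2] > 10]

true_services = sum(r[3] for r in true_rows)
released_services = sum(r[3] for r in released)
match_rate = released_services / true_services

print(f"true: {true_services}, released: {released_services}, "
      f"match rate: {match_rate:.1%}")
```

The suppressed rows here are small, so the toy match rate stays high; in the real file, specialties dominated by low-volume provider/procedure pairs (anesthesiology, for instance) lose far more.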

So why publish the data in the first place? Some say you can use it to gauge a physician’s experience in performing certain procedures, using that as a proxy for quality. Right! Unfortunately (or fortunately, depending on which side you are on), the data does not include payer mix, so you have no idea whether a practice is 10 percent Medicare or 100 percent Medicare. Why is that important? Well, let’s say the database shows that Provider 1 performed a procedure 10 times in 2012, while Provider 2 reported it 100 times. Those focused on quality would conclude that Provider 2 must have more experience and therefore deserves a higher quality rating. Really? What if Provider 1 is 4 percent Medicare and actually performed the procedure 1,000 times, reporting only 10 of them to Medicare, while Provider 2 is 85 percent Medicare and performed little more than the 100 reported? Because payer mix is not available, there is no way to determine how many procedures doctors really perform. And even if you could, where are the outcome results? Do we really measure quality by quantity? I think not.

We also don’t know the overall acuity, or health risk, of the patient population. Without ICD codes, there is no way to know why a beneficiary was seen or what comorbidities were present. This further invalidates any assumptions one might make regarding quality or quantity. What further complicates the guesswork is not knowing whether any non-physician practitioners (NPPs) are at play, and if so, whether they are billing incident-to under the provider’s NPI. The bottom line is that, with the exception of the most extreme cases, volumetric data is mostly useless.

But what has happened (and maybe this is exactly what CMS wanted) is the creation of a niche market of whistleblowers and minions of quasi-analysts who think they can use incomplete data to identify potential fraud and abuse, quality of care issues, and waste within the Medicare system. In fact, what we see happening now is exactly why physician advocates were fighting the release of this half-baked data set: to prevent the general public (and the media) from making unfounded assumptions about medical providers.

So, is there anything redeeming about the database? I’m glad you asked! The answer is a resounding “yes.” For this we turn to ratios, which tend to level the playing field for all participants. For example, using the data from the new file, I calculated that the average ratio of services provided per unique beneficiary for a family practice physician is about 5.86 to 1. That means that for every unique beneficiary seen by that provider during 2012, he or she reported about six procedures. Beneficiaries averaged around three visits apiece for the year, so the average family physician reported about two procedures per visit, per beneficiary. The reason this is useful is that the ratio is largely independent of payer mix: whether a practice sees a few Medicare patients or thousands, the ratio stays comparable. Granted, it still doesn’t take acuity into consideration, but as we eliminate covariates, we reduce the noise, making cause-and-effect analyses much easier.
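A minimal sketch of that ratio calculation follows. The column names (npi, provider_type, hcpcs_code, line_srvc_cnt, bene_unique_cnt) are my assumption of the public-file layout, and the three rows are fabricated for illustration; treat both as placeholders, not a verified spec.

```python
# Sketch: services-per-unique-beneficiary ratio for one specialty,
# computed from a tiny made-up extract shaped like the CMS public file.
import csv
import io

puf_csv = """npi,provider_type,hcpcs_code,line_srvc_cnt,bene_unique_cnt
1000000001,Family Practice,99213,600,100
1000000001,Family Practice,99214,350,90
1000000002,Family Practice,99213,1200,220
"""

services = {}       # NPI -> total services across all procedure codes
beneficiaries = {}  # NPI -> rough unique-beneficiary count

for row in csv.DictReader(io.StringIO(puf_csv)):
    if row["provider_type"] != "Family Practice":
        continue
    npi = row["npi"]
    services[npi] = services.get(npi, 0) + int(row["line_srvc_cnt"])
    # bene_unique_cnt is reported per NPI/procedure line; the same patient
    # can appear on several lines, so taking the max per NPI is only a
    # rough stand-in for the true provider-level unique count.
    beneficiaries[npi] = max(beneficiaries.get(npi, 0),
                             int(row["bene_unique_cnt"]))

ratios = {npi: services[npi] / beneficiaries[npi] for npi in services}
for npi, ratio in sorted(ratios.items()):
    print(npi, round(ratio, 2))
```

The same two-pass aggregation scales to the full file; the only real design choice is how to approximate provider-level unique beneficiaries from line-level counts, which the public file does not give you directly.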

Using this example, I went ahead and looked at usage by procedure code for family practice physicians and saw that codes 99213 and 99214 were the most commonly reported. The next most frequently reported code that is not an E&M code is 36415, with a ratio of 2.03. The codes with the highest ratios were J-codes (drug codes), which actually makes sense. A good use for this database, then, is for an individual physician to look at his or her own procedure-to-beneficiary ratio and compare it to the control. Remember, the ratio is insensitive to payer mix, so whether you have 10 Medicare beneficiaries or 1,000, aside from variation in the types of procedures performed, the ratios describing what you do for your beneficiaries should be similar. There are 77,792 family practice doctors in the database, and their ratios of services to unique beneficiaries range from a low of 1 to a high of 2,030. But when I investigated the few with the highest ratios, I found that their billing was done under a group NPI and TIN, not an individual provider’s.
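That comparison against the control can be sketched as a simple screen. The 5.86 benchmark and the 2,030 outlier come from the analysis above; the individual provider ratios and the 3x review threshold are hypothetical choices, not anything CMS or the article prescribes.

```python
# Sketch: flag providers whose services-per-beneficiary ratio strays far
# from the specialty control value. Threshold and provider ratios are
# illustrative assumptions.

SPECIALTY_BENCHMARK = 5.86  # family practice average, from the analysis above
FLAG_MULTIPLE = 3.0         # hypothetical "worth a closer look" threshold

provider_ratios = {
    "1000000001": 6.1,
    "1000000002": 5.2,
    "1000000003": 2030.0,   # likely a group NPI, as the article found
}

flagged = {npi: r for npi, r in provider_ratios.items()
           if r > SPECIALTY_BENCHMARK * FLAG_MULTIPLE}

for npi, ratio in flagged.items():
    print(f"review {npi}: ratio {ratio} vs benchmark {SPECIALTY_BENCHMARK}")
```

Note that the flag is only a prompt to investigate the billing arrangement, not evidence of anything; as the group-NPI finding shows, the extreme values here had an innocent structural explanation.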

The moral of this story? You can always find a way to lie with statistics, but why bother? If you are looking for trouble, maybe you should be doing something else. If you are looking for fraud and abuse, this isn’t the best data set to use. If you are looking to reduce waste, you need to look at a lot more than just this database; you need to consider operational expenses, location, patient type and average age, technology, acuity, and so on. As I stated, the database does have good uses, such as letting physicians see what types of procedures their peers in the same specialty are performing. That could certainly be a tool for improving quality of care. And even though you will likely read about some villainous physicians caught through this crowdsourced method, the most prevalent result will likely be a lot of people wasting a lot of time with nothing to show for their efforts.

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support and process improvement. Mr. Cohen is also a member of the National Society of Certified Healthcare Business Consultants (NSCHBC.ORG).

Contact the Author

frank@frankcohengroup.com

To comment on this article go to editor@racmonitor.com
