December 16, 2015

CMS Goes Predictive: Why Improving Intelligence Improves Outcomes

By Frank Cohen

Beginning in July 2011, the Centers for Medicare & Medicaid Services (CMS) entered the world of predictive analytics. According to a press release, effective June 30, 2011, 100 percent of Medicare fee-for-service claims have been passed through a complex set of predictive algorithms prior to payment.

And while I am not privy to the exact algorithms that CMS uses, predictive analytics follows some pretty, well, predictable processes – and if the system works the way it was designed, it represents a paradigm shift in CMS’s efforts to identify fraud and abuse. In general, a predictive algorithm is designed to analyze the variables within a unit (in this case, a claim) and predict the likelihood that the unit meets some criteria inherent within the algorithm. For the purposes of fraud and abuse, the algorithm is designed to predict the likelihood that any given claim meets CMS’s criteria for fraud or abuse. In my work on predictive modeling, it is common for the algorithm to “score” each unit, in essence establishing a prioritization with regard to the prediction.
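
To make the scoring idea concrete, here is a minimal sketch of how a claim might be reduced to a single risk score. The features, weights, and logistic form are all invented for illustration; CMS has not published the FPS’s actual models.

```python
# Hypothetical scoring sketch -- NOT CMS's actual FPS algorithm.
# A predictive model reduces each claim's variables to a single probability
# so that claims can be ranked and prioritized for review.

import math

# Illustrative feature weights; a real model would fit these from
# historical claims labeled as proper or improper.
WEIGHTS = {
    "billed_amount_zscore": 1.2,  # how unusual the billed amount is for the code
    "units_zscore": 0.8,          # how unusual the units of service are
    "new_provider": 0.5,          # 1.0 if the provider recently enrolled
    "prior_denial_rate": 2.0,     # share of the provider's past claims denied
}
INTERCEPT = -3.0

def score_claim(features):
    """Return the predicted probability that a claim is improper."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z onto (0, 1)

claim = {
    "billed_amount_zscore": 2.5,
    "units_zscore": 1.0,
    "new_provider": 1.0,
    "prior_denial_rate": 0.3,
}
print(f"risk score: {score_claim(claim):.2f}")  # -> risk score: 0.87
```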

For example, the algorithm may score a given claim as not very likely to meet the criteria (maybe a probability of 25 percent), having an even chance of meeting the criteria (a probability of around 50 percent), or very likely to represent fraudulent or abusive activity (maybe a probability of 85 percent). Imagine that there are billions of claims going through the system every year and that CMS, like any organization, has finite resources with which to manage the process. Its auditors and audit contractors have access to this data, and since they could not possibly review even a majority of the claims, they establish a threshold above which they are motivated to pursue a claim. If, for example, a claim is kicked out with a high score, the agency will pull a more comprehensive set of claims for a given time period associated with the NPI or TIN on that claim. Now the auditor has the ability not only to look at a given claim, but to test for any trends that might indicate a broader problem of overpayment. So two things are occurring here: first, claims suspected to constitute fraud and/or abuse are not being paid; and second, a deeper dive occurs into the providers or practices from which these claims originated.
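
A hypothetical sketch of that triage step might look like the following; the field names and the threshold are assumptions, not CMS’s actual schema or cutoff.

```python
# Hypothetical triage sketch: flag claims scoring above a threshold, then
# pull every claim for the same NPI for a deeper look.

THRESHOLD = 0.80  # resources are finite, so only the highest scores are pursued

claims = [
    {"claim_id": "C1", "npi": "1234567890", "score": 0.85},
    {"claim_id": "C2", "npi": "1234567890", "score": 0.40},
    {"claim_id": "C3", "npi": "9876543210", "score": 0.25},
]

flagged = [c for c in claims if c["score"] >= THRESHOLD]
suspect_npis = {c["npi"] for c in flagged}

# The deeper dive pulls ALL claims from a flagged provider, not just the
# flagged claim, so the auditor can test for broader overpayment trends.
review_set = [c for c in claims if c["npi"] in suspect_npis]
print([c["claim_id"] for c in review_set])  # -> ['C1', 'C2']
```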

The rest becomes history; if the potential for collection exceeds the cost of the audit (the ROI), then it is likely that the audit will proceed, and you will get a letter asking you to provide the documentation to support the claims that have been pulled by the auditor. So, how effective is this system? It’s difficult to say, since not all of the data is released; however, according to a CMS report to Congress, the Fraud Prevention System (FPS) identified and prevented $820 million in improper payments in its first three years of implementation. In the words of CMS, “the Fraud Prevention System helps to identify questionable billing patterns in real time and can review past patterns that may indicate fraud.” It would be hard to find anyone who would support or endorse healthcare fraud, and those involved in statistical modeling and predictive analytics would likely agree, at least in general, that this is definitely a step in the right direction.
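
The ROI decision itself is simple arithmetic. A hedged sketch, with every figure invented for illustration:

```python
# Go/no-go sketch of the audit ROI logic described above; all figures
# are invented.

estimated_overpayment = 60_000.0  # what the auditor expects to collect
prob_upheld = 0.70                # chance the findings survive appeal
audit_cost = 15_000.0             # staff time, record requests, hearings

expected_return = prob_upheld * estimated_overpayment
if expected_return > audit_cost:
    print(f"audit proceeds; expected net ${expected_return - audit_cost:,.0f}")
else:
    print("audit not worth pursuing")
```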

The FPS presents a significant challenge to healthcare providers: the need to step up their own game when it comes to self-identification of claims at risk for audit. Until recently, probe audits were the way to go, as they are relatively easy to implement and, at the very least, they met the criteria found in many audit plans. But probe audits rarely identify risk. In fact, unless targeted to a specific code, modifier, DRG, or other unit of interest, they are all but useless. From my perspective, the only functional purpose for conducting a probe audit nowadays is to be able to just “check the box” – that is, to meet only the requirements of your audit plan. If that is truly all you are interested in, then by all means, keep doing what you’re doing. But if you really want to better assess and understand risk, you need to engage in some form of compliance analytics, and that is where CMS wants to lead you. Some practices select audit units based on utilization frequency; the more often a provider bills a code or modifier, the greater the likelihood that it gets selected for internal audit. The problem is that utilization does not equate to risk. In fact, if you are going to depend solely upon volume, you would be better off using RVU volume, as RVUs can be converted to dollars by multiplying by the time-appropriate conversion factor, as in the sketch below.
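
The RVU-to-dollar conversion is straightforward. A minimal sketch, using an approximate late-2015 conversion factor of $35.93 per RVU; always use the factor in effect for the period you are analyzing.

```python
# Minimal sketch of converting RVU volume to dollars.

CONVERSION_FACTOR = 35.93  # dollars per RVU (approximate late-2015 value)

def rvus_to_dollars(total_rvus, cf=CONVERSION_FACTOR):
    return total_rvus * cf

print(f"${rvus_to_dollars(1_250.0):,.2f}")  # 1,250 RVUs -> $44,912.50
```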

A more effective method would be to create a variance analysis that compares your physicians against the utilization of the nearest peer group using the national Medicare data set. Again, if you are going to do this, I would recommend that you use RVUs instead of frequency. That way, you can calculate RVU differentials, convert these into dollar differentials, and use those to establish audit priorities. While this method still does not measure or assess risk, it is the closest you can get without turning to true predictive analytics. And you shouldn’t have to pay very much (if anything at all) to conduct either the utilization or variance analysis, as the data is readily available on the CMS website – and there are some vendors that, for a reasonable price, will provide the data already segregated by specialty in a .csv or .xls format. Academics often want to compare their physicians against other academic physicians, which, while a great idea for analyzing performance and productivity, has nothing to do with assessing risk. Remember, at least for now, auditors are using national data that isn’t specific to any particular group or type of physician, so while you may get an idea as to how your physicians perform compared to other academic types, it won’t do anything to enhance your ability to predict risk.
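
Here is a brief sketch of that variance analysis. The codes, utilization rates, and RVU weights below are illustrative placeholders; real benchmarks would come from the national Medicare data set.

```python
# Sketch of an RVU-based variance analysis: compute RVU differentials vs.
# the peer group, convert to dollars, and rank by exposure.

CONVERSION_FACTOR = 35.93  # dollars per RVU; use the time-appropriate factor

# code: (your utilization per 1,000 encounters, peer utilization, RVUs per unit)
utilization = {
    "99213": (380, 470, 2.06),
    "99214": (420, 310, 3.03),
    "99215": (110, 60, 4.10),
}

priorities = []
for code, (yours, peers, rvus_per_unit) in utilization.items():
    rvu_diff = (yours - peers) * rvus_per_unit   # RVU differential vs. the peer group
    dollar_diff = rvu_diff * CONVERSION_FACTOR   # converted to a dollar differential
    priorities.append((code, dollar_diff))

# The largest positive dollar differential is the highest audit priority.
for code, dollars in sorted(priorities, key=lambda p: -p[1]):
    print(f"{code}: {dollars:+,.0f} dollars")
```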

In line with the issue of greater granularity in comparisons, we also are seeing a move toward greater granularity in the raw data. Namely, CMS is exploring the use of taxonomy codes, and it is doing this for two reasons. The first is to eliminate false positives when identifying claims and providers at risk: right now, an academic physician being compared to a general counterpart may, in fact, face an increased risk of being audited.

The problem for CMS is that this increase in risk does not translate into an increase in actual overpayments, and as such, auditors either don’t find much in the way of recoupment or, even worse for them, their findings get reversed on appeal. Using taxonomy codes will allow them to compare sub-specialty docs to other sub-specialty docs, reducing the number of false positives that do nothing more than confound their expected-value calculations. For example, comparing a spine surgeon to a general orthopedic surgeon will almost always make the former appear to exhibit higher risk – even though, when compared to another spine surgeon, that might not be the case. Getting access to high-volume, quality data by taxonomy code is going to present a significant challenge to the provider. But hang in there, because as this becomes the rule (rather than the exception), you will see some commercial companies providing the data.
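
A toy illustration of that false-positive problem: the same surgeon can look like an outlier against the broad specialty while sitting right at the sub-specialty mean. The taxonomy codes shown are drawn from the NUCC code set, but the RVU figures are invented.

```python
# Toy illustration of why taxonomy-level peer groups reduce false positives.

peer_mean_rvus = {
    "207X00000X": 95.0,   # orthopaedic surgery (broad specialty)
    "207XS0117X": 160.0,  # orthopaedic surgery of the spine (sub-specialty)
}

spine_surgeon_rvus = 155.0  # the same surgeon, measured once

for taxonomy, mean in peer_mean_rvus.items():
    variance = (spine_surgeon_rvus - mean) / mean
    print(f"vs {taxonomy}: {variance:+.0%} from peer mean")

# vs 207X00000X: +63% (flagged as an outlier)
# vs 207XS0117X: -3%  (nothing unusual)
```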

So, what to expect in 2016?

Well, the move to taxonomy codes is all but decided. With the possibility of a new contractor taking over the FPS project, we also may see a change in the algorithms themselves. The Recovery Auditors (RAs) likely will be tying into this database, as well as the Comprehensive Error Rate Testing (CERT) study, to create a more laser-like identification of providers at risk, and I expect that they will be more involved in the use of extrapolation, since this has been such a cash cow (when the findings stand up on appeal) for CMS and the auditors.
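
For readers unfamiliar with extrapolation, the mechanics are worth a quick sketch: the overpayment found in a random sample is projected to the entire claim universe, and auditors commonly demand the lower bound of a one-sided 90 percent confidence interval. All numbers below are invented.

```python
# Hedged sketch of overpayment extrapolation from a random sample.

import math
import statistics

sample_overpayments = [0, 120.0, 0, 85.5, 240.0, 0, 60.0, 0, 310.0, 0]
universe_size = 4_800  # total claims paid in the audit period

n = len(sample_overpayments)
mean = statistics.mean(sample_overpayments)
sem = statistics.stdev(sample_overpayments) / math.sqrt(n)  # standard error

point_estimate = mean * universe_size
t_90 = 1.383  # one-sided 90% t value for n - 1 = 9 degrees of freedom
lower_bound = (mean - t_90 * sem) * universe_size

print(f"point estimate: ${point_estimate:,.0f}")  # -> about $391,000
print(f"demanded amount: ${lower_bound:,.0f}")    # -> about $157,000
```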

I also expect to see more referrals of 100-percent error rate audits to the Zone Program Integrity Contractors (ZPICs) for evaluation of fraud and/or abuse. As well, I hear that CMS is training administrative law judges (ALJs) to identify suspected fraud, encouraging them to also make referrals directly to the Office of Inspector General (OIG) based on their hearing of an appeal. In general, CMS is stepping up its game, increasing its dependence upon electronic and digital intelligence to identify providers and provider organizations that present the best opportunity for recoupment.

The big question is how the providers will respond. Sticking to your guns with respect to probe audits and utilization studies is fast becoming a failed strategy, so I would encourage exploration of alternatives, including the use of predictive analytics in order to level the playing field.

And that’s the world according to Frank.

About the Author

Frank Cohen is the director of analytics and business intelligence for DoctorsManagement and adjunct professor of statistics for the School of Graduate Nursing at Robert Morris University. Mr. Cohen specializes in data mining, applied statistics, practice analytics, decision support, and process improvement.

Contact the Author

fcohen@drsmgmt.com

Comment on this Article

editor@racmonitor.com
