June 6, 2013

It’s Not That I’m Paranoid, It’s That I’m Not Paranoid Enough!

By Frank Cohen

I often engage in discussions with peers about the sophistication level of auditing entities, and here I am referring specifically to healthcare.

The list is long, and it includes ZPICs, RACs, MACs, MICs, CERT, HEAT, PERM, the OIG, the DOJ and some that even I have never heard of before. While these entities likely share a similar goal (reducing medical waste due to fraud and abuse), each has a different mission, and those missions often are tied to incentives.

For example, a RAC is paid a commission, so the more it is able to find wrong with providers, the more money it receives. As such, from a physician’s perspective, a RAC is likely to be more interested in RVU variance than frequency variance. Why? It’s because RVUs at risk can be converted to dollars at risk very quickly, and when calculating the expected value of an audit (essentially, an ROI), the more dollars at risk, the greater the commission.

For ZPICs, which primarily look to refer providers up the chain (to the OIG) for potential fraud, commission is not at issue, but rather performance, which could be defined as how much money is put back into the trust fund. For a ZPIC, frequency likely is a greater pull than RVU variance because of the potential application of the federal civil false claims penalties, which can range from $5,500 to $11,000 per claim. If you look closely at the work product of each entity, you can begin to develop a better idea of the specific mission in which each is engaged.
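
To put rough numbers on that difference in incentives, here is a minimal sketch in Python. Every figure in it (the conversion factor, the claim count, the per-claim RVU variance) is a hypothetical assumption of mine for illustration, not data from any actual audit.

```python
# Back-of-the-envelope comparison of the two incentive structures.
# All numbers below are invented for illustration only.

CONVERSION_FACTOR = 34.02  # assumed Medicare dollars-per-RVU rate; varies by year


def rvu_dollars_at_risk(rvu_variance_per_claim, claim_count):
    """RAC-style view: overpayment exposure driven by RVUs at risk."""
    return rvu_variance_per_claim * CONVERSION_FACTOR * claim_count


def false_claims_exposure(claim_count):
    """ZPIC-style view: civil penalties of $5,500 to $11,000 per claim,
    driven purely by claim frequency."""
    return claim_count * 5_500, claim_count * 11_000


claims = 1_200
print(rvu_dollars_at_risk(0.5, claims))  # half an RVU of variance per claim
print(false_claims_exposure(claims))     # penalty range scales with frequency alone
```

Even with modest numbers, the frequency-driven penalty exposure dwarfs the RVU-driven overpayment, which is exactly why the two entities hunt for different things.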

Irrespective of the missions, at some point some organization is going to get audited. In fact, as I write this, there likely are hundreds of audits proceeding for hundreds if not thousands of providers and other healthcare organizations. Some are publicly known, and some are being conducted in secret. What every administrator and manager should know is that, before providers realize they are being audited, the auditor has known about them for some time. The auditing entity likely has been looking at their data in advance of any notification, determining the ROI of conducting an audit on them. It’s kind of scary, actually, but if you choose to be in partnership with a payer, whether government or private, you give up any expectation of business privacy and volunteer to have everything you do subjected to scrutiny. One area of question and speculation revolves around the data being reviewed.

In my line of work, I often come in contact with folks who are involved in the audit side of things, from assistant U.S. Attorneys to OIG investigators to FBI agents. And while we often are on different sides of the table, there always are conversations that hint at the sophistication they wield in determining whom to audit.

I work from the belief that they are, in fact, telling me the truth. I know there is a tendency to think that, since this is the government, they may not be smart enough to go this route, and that their decisions to audit instead are based on gross assumptions (like the overall size of an organization, or its geographic reach). But I can tell you that not only is this an incorrect assumption, but believing it increases the risk of being unprepared when your audit is announced.

When I conduct pre-audit risk assessments on medical practices, I employ fairly sophisticated statistical techniques. I want to know when a provider’s actions cross the line from acceptable to anomalous, from potentially an issue to an outlier – and not just an anecdotal outlier, but a statistical outlier. The idea is this: never miss a potential risk and avoid as many false positives as possible. This is a tall order, and many of my peers tell me that I go overboard on my analyses, asserting that the auditors simply are not sophisticated enough to meet these statistical expectations. Well, they are wrong, and I have seen enough evidence of it to keep me moving in this direction. 
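
To be clear about what I mean by a statistical (rather than anecdotal) outlier, here is a minimal sketch of one common screen, a z-score against a peer distribution. The rates and the cutoff are invented for illustration; this is not my actual assessment methodology.

```python
import statistics

# Hypothetical peer rates: share of established visits billed at 99214/99215.
peer_rates = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.32, 0.27, 0.34, 0.30]
provider_rate = 0.52  # the provider being screened (also invented)

mean = statistics.mean(peer_rates)
sd = statistics.stdev(peer_rates)

# An anecdotal outlier just "looks high"; a statistical outlier sits a
# measurable distance from the peer distribution. A z cutoff of 2 or 3
# is a common (assumed) choice.
z = (provider_rate - mean) / sd
print(f"z = {z:.2f}; flagged: {abs(z) > 2}")
```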

Recently, a client sent me a letter she received from a company that works for SafeGuard Services, LLC, a Program Safeguard Contractor (PSC). A PSC is a company contracted by CMS to perform Medicare data analysis and data mining to seek out suspected incidents of fraud, waste and abuse. In this vein, SafeGuard provides data and information to auditing entities such as ZPICs. The report she received is called a CBR, or comparative billing report, which compares her billing and utilization patterns to those of her peers. I know many of you have seen these, as they represent a quite common first step toward identifying potential billing and coding issues for practices. Permit me to provide some excerpts from this particular CBR:

“Based on the most recent data from the Comprehensive Error Rate Testing (CERT) Program, 8.4 percent of those E/M payments were identified as being billed at the wrong code level …”

This is one of those “I told you so” moments for me, as I have been screaming for some time about how the auditors use CERT as a baseline for determining both risk and ROI. The report goes on to say:

“We encourage you to conduct an audit on your own claims and refund any overpayments to the appropriate Medicare Administrative Contractor (MAC).”

I will let you figure out what they are really saying here. A common question from providers has to do with the risk that self-disclosure creates. For example, if you conduct an internal review, find problems, and self-disclose with refunds, do you run a higher risk of being audited in the future? My friend Patrick Marion, a former OIG special investigator and a principal of Compliance Concepts, Inc., tells me that there isn’t any evidence that this is so; however, there also isn’t any evidence of the opposite. The problem, however, is not the risk of an audit if you self-disclose, but rather the risk if you don’t (and subsequently get caught).

Sorry, I tend to get a bit distracted during these conversations. Back to the point of the article:

Under the methodology section of the CBR, the date range is identified as dates of service and the codes reviewed are listed; in this case (and in most others), the review was for new (99201 – 99205) and established (99211 – 99215) office visits. It also specifies that the claim lines selected have “allowed amounts greater than zero, were performed in an office setting and have a specialty of (omitted).”
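
Expressed as code, that selection step is just a filter over claim lines. Here is a minimal sketch, assuming a toy table layout of my own invention (the letter does not disclose the real extract format):

```python
import pandas as pd

# Toy claim-line data; every column name and value here is assumed.
lines = pd.DataFrame({
    "cpt":     ["99203", "99213", "99214", "99215"],
    "allowed": [92.0, 0.0, 108.0, 146.0],
    "pos":     ["11", "11", "11", "22"],  # "11" is the office place-of-service code
})

# New (99201-99205) and established (99211-99215) office visit codes.
em_codes = [f"992{n:02d}" for n in list(range(1, 6)) + list(range(11, 16))]

selected = lines[
    lines["cpt"].isin(em_codes)
    & (lines["allowed"] > 0)   # allowed amounts greater than zero
    & (lines["pos"] == "11")   # performed in an office setting
]
print(selected)
```

(The specialty filter is left out here, just as it is in the letter.)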

Stand by, because we are getting to the kicker. But first, one more quote from the methodology section:

“The utilization measures analyzed in this CBR are: 1) the number of E/M services rendered by you per CPT code, 2) the average number of E/M services rendered per beneficiary, per CPT code, and 3) the percentage of each high level E/M codes (99204, 99205, 99214, and 99215) rendered among the same code grouping. The results are displayed in graphs and tables.”
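
Those three measures reduce to simple aggregations over the selected claim lines. A minimal sketch, continuing with the same assumed layout and made-up data:

```python
import pandas as pd

claims = pd.DataFrame({
    "cpt":  ["99213", "99214", "99214", "99215", "99213", "99214"],
    "bene": ["A", "A", "B", "B", "C", "C"],
})

# 1) Number of E/M services rendered per CPT code.
per_code = claims.groupby("cpt").size()

# 2) Average number of E/M services rendered per beneficiary, per CPT code.
per_bene = claims.groupby(["cpt", "bene"]).size().groupby(level="cpt").mean()

# 3) Percentage of high-level codes within the established-visit code group.
est = claims[claims["cpt"].isin(["99211", "99212", "99213", "99214", "99215"])]
high_pct = est["cpt"].isin(["99214", "99215"]).mean() * 100

print(per_code, per_bene, f"{high_pct:.1f}% high-level", sep="\n\n")
```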

Next, the CBR features a series of charts and graphs that show the practice’s utilization rates compared to those of its peer group (its specialty), both for the state it is in and for the nation. Then comes Table 1, a statistical comparison of the practice’s average number of E/M services rendered per established beneficiary to the peer group’s average (again, listed by state and nationally, per CPT code).

So now it gets really interesting! The purpose is to identify situations in which the utilization for each specific provider within a group is significantly higher than that of their peers. And when I use the word “significant,” I am referring to statistical significance. Here is the exact wording from the notation for this and other tables within the CBR:

“A t-test was used in this analysis; a p value ≤ 0.05 indicates that we are at least 95 percent confident that the difference is significant. If a peer group has less than 30 providers, a t-test comparison was not performed and your significance will be listed as ‘N/A.’ Alternately, if your significance is ‘N/A,’ and your average is also ‘N/A,’ a t-test was not performed because you did not render any services and are not part of the peer group.”

What? There are a lot of problems here, such as not identifying where the comparative data originates, why a physician is not in the peer group, or what the size of the peer group is so that the practice can validate variance and error. The point is that this auditing entity (and pretty much every other one) is employing increasingly sophisticated statistical testing to identify when and if a provider is “statistically significantly” different from its peer group – and that, my friends, is perhaps the main reason that providers get audited!
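
Still, the decision rule spelled out in that notation is easy to reproduce. Here is a minimal sketch of my reading of it, using a one-sample t-test (the CBR does not disclose which t-test variant SafeGuard actually uses) and invented peer averages:

```python
from scipy import stats


def cbr_significance(peer_averages, provider_average):
    """My reading of the CBR's stated rule, not SafeGuard's actual code:
    no test below 30 peers; p <= 0.05 is flagged as significant."""
    if provider_average is None:
        return "N/A (no services rendered; not part of the peer group)"
    if len(peer_averages) < 30:
        return "N/A (peer group has fewer than 30 providers)"
    # Is the peer group's mean significantly different from this
    # provider's average?
    t, p = stats.ttest_1samp(peer_averages, provider_average)
    return f"p = {p:.4f}: {'significant' if p <= 0.05 else 'not significant'}"


# Hypothetical averages of E/M services per established beneficiary.
peers = [1.8, 2.1, 1.9, 2.0, 2.2, 1.7, 2.0, 1.9, 2.1, 1.8] * 3  # 30 peers
print(cbr_significance(peers, provider_average=3.4))
```

Run that same rule against your own utilization data and you get a rough preview of what the contractor’s data miners may already be seeing.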

So, what’s an administrator to do?

For one thing, get educated. We all should have a basic understanding of elementary statistics: positional measurements, variance, error, and basic statistical tests (such as the t-test used in the CBR).
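
Every term in that sentence maps onto just a few lines of code. A minimal sketch, with made-up utilization counts:

```python
import statistics

utilization = [12, 15, 14, 18, 13, 16, 45, 14, 15, 17]  # invented visit counts

print("median (a positional measurement):", statistics.median(utilization))
print("quartiles:", statistics.quantiles(utilization, n=4))
print("variance:", statistics.variance(utilization))

# Standard error of the mean: how much the sample mean itself varies.
sem = statistics.stdev(utilization) / len(utilization) ** 0.5
print("standard error:", round(sem, 2))
```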

Second, conduct a risk assessment! Do it yourself or hire someone; either way, be sure that the level of statistical sophistication at least equals that of the auditors.

And finally, semper paratus: become prepared by conducting an internal review of those areas deemed to contribute the most to risk. Ignorance is not only “no excuse,” but it likely could move an unsuspecting practice from the world of simple mistakes to the arena of civil or criminal penalties. 

So, are you paranoid, or are you not paranoid enough?

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support and process improvement.

Contact the Author

frank@frankcohengroup.com

To comment on this article, please email editor@racmonitor.com.
