February 14, 2012

Extrapolation v. a Complete Review—a Tough Choice for Practices

By Frank Cohen

As discussed in prior articles, extrapolation is a statistical technique used to infer (or estimate) results for an entire universe of claims from the review of a sample drawn from that universe.

While there are those who object to its use, when applied properly it can be an efficient, accurate and cost-effective way of determining potential overpayment amounts in an audit.  Again, as discussed in prior articles, the statistical validity of the random sample is critical to an accurate extrapolation.  Because this technique, in effect, amplifies the results from the sample, even a small mistake (e.g., $10 in the estimated average overpayment per claim), when applied to a large universe of claims (e.g., 30,000), can result in a huge error in the estimated overpayment (i.e., $300,000).
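To see how that amplification works, here is a minimal sketch (in Python, purely for illustration) using the figures from the example above. Real RAC extrapolations typically project an estimated mean overpayment per claim, or a lower confidence bound on it, onto the universe; the mechanics below are simplified to show the scaling effect only.

```python
# Minimal sketch of how extrapolation amplifies a small error, mirroring the
# figures in the text. A mistake in the estimated mean overpayment per claim
# gets multiplied by the size of the universe it is projected onto.

universe_size = 30_000            # claims in the universe being extrapolated to
mean_overpayment_error = 10.00    # a $10 mistake in the estimated mean per claim

extrapolated_error = mean_overpayment_error * universe_size
print(f"Error in the extrapolated overpayment: ${extrapolated_error:,.0f}")
# -> Error in the extrapolated overpayment: $300,000
```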

So, without rehashing the issue of testing for randomness, how can we get a pretty good idea without doing a lot of math?  In essence, what would the error rate look like if the sample had been randomly drawn as opposed to biased?  This is a major point of contention among many of the practices I work with, and the truth is, it's not that difficult to conduct a small test.  It is, however, hugely important, since the recovery audit contractor (RAC) statement of work (SOW) says that an extrapolation is permitted when there is evidence of a sustained or high level of error.  And while the SOW does not legally define "sustained" or "high level," knowing whether the auditor is applying those terms to a biased sample is very important.

How It Works

Let's take an example of a practice that has an initial audit conducted of 30 claims. Of these, the auditor determines that 15 were overpaid. Remember: when a claim is audited (i.e., at the claim level), the audit is really done on the individual lines within the claim. So let's say, for the sake of argument, there is an average of five claim lines per claim, for a total of 150 claim lines.

While I know this is a bit of a stretch, again for the sake of argument, let's say that, in each of the 15 claims determined to be overpaid, only one claim line contributed to that decision (meaning that, of the five claim lines in each of those claims, only one was found wanting).  What we have, then, is 15 of the 150 claim lines with an overpayment determination.

Now, instead of a 50 percent overpayment rate (15 out of 30 claims), the overpayment rate should really be seen as 10 percent (15 out of 150 claim lines). But because the audit was done at the claim level and not the claim-line level, the auditor can try to use the 50 percent figure to warrant an extrapolation.
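If you want to verify the arithmetic, here is a short sketch reproducing the hypothetical example above; the five-lines-per-claim average and the one-bad-line-per-overpaid-claim figure are the same assumptions used in the text, not data from any actual audit.

```python
# Reproducing the example: the same findings yield a 50 percent error rate at
# the claim level but only 10 percent at the claim-line level.

claims_audited = 30
claims_overpaid = 15
lines_per_claim = 5      # assumed average claim lines per claim
lines_overpaid = 15      # one overpaid line in each of the 15 overpaid claims

claim_level_rate = claims_overpaid / claims_audited
line_level_rate = lines_overpaid / (claims_audited * lines_per_claim)

print(f"Claim-level error rate:      {claim_level_rate:.0%}")   # 50%
print(f"Claim-line-level error rate: {line_level_rate:.0%}")    # 10%
```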

Spotting Biased Samples

The other issue involves looking at the error rate from a different perspective: how it compares to national benchmarks.

In the 2010 CERT study, CMS sampled about 32,000 Part B claims and reviewed about 31,000 of them. Of these, the agency determined that approximately 10.2 percent were paid in error. This would put the 95 percent confidence interval at somewhere between 9.67 percent and 10.34 percent; in other words, for every 100 practices audited with a truly random sample, roughly 95 would be expected to show a claims error rate somewhere in that range.

If the results of your audit are like those above, where the claim-level error rate is 50 percent, either you are one of the unlucky 5 percent of practices or the sample has been biased. Even at a 99 percent confidence level, 99 out of 100 practices would likely have an error rate of between 9.56 percent and 10.45 percent.
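For those who want to reproduce a comparable interval, here is a rough sketch using the standard normal-approximation formula for a proportion. The published CERT figures come from a weighted, stratified sample design, so these endpoints will not match the report exactly; the point is simply how tight the interval becomes when the sample runs into the tens of thousands of claims.

```python
# Normal-approximation confidence intervals for an error rate, using the
# rounded CERT figures cited above (illustrative only; the official CERT
# intervals reflect a more complex sample design).
from math import sqrt

n = 31_000        # claims actually reviewed (approximate)
p_hat = 0.102     # estimated paid-in-error rate

se = sqrt(p_hat * (1 - p_hat) / n)
for label, z in [("95%", 1.96), ("99%", 2.576)]:
    lo, hi = p_hat - z * se, p_hat + z * se
    print(f"{label} CI: {lo:.2%} to {hi:.2%}")
```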

There are lots of ways to test for randomness, and several of them have been discussed in prior articles I have written, but suffice it to say that, using the smell test above, pretty much any practice can at least determine whether there is a sustained or high level of potential bias in the sample.
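One simple way to put a number on that smell test is to ask how likely a truly random sample would be to look as bad as yours. The sketch below does that with a basic binomial calculation, treating the 10.2 percent CERT rate as the assumed true error rate; it is an illustration, not a substitute for the formal randomness tests discussed in prior articles, and it assumes the scipy package is available.

```python
# If the true error rate were about 10.2 percent, how likely is a random
# sample of 30 claims to contain 15 or more errors?
from scipy.stats import binom

n_sampled = 30
n_errors = 15
assumed_rate = 0.102

# P(X >= 15) where X ~ Binomial(30, 0.102)
p_value = binom.sf(n_errors - 1, n_sampled, assumed_rate)
print(f"Chance of 15+ errors in 30 claims by luck alone: {p_value:.1e}")
```

A probability that small says the 50 percent finding is far more plausibly a product of how the sample was drawn than of bad luck.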

Taking Action

The last part of this is determining what to do if this happens to you.

I have worked on several cases where the practice objected to the extrapolation and demanded a full audit of all claims. This is a huge hassle for the practice and potentially an expensive venture. But it's at least as big a hassle, and more expensive, for the payer.

What payer do you know that wants to audit 20,000 claims? If you have been following along with some of the past articles, you know that we discussed this with regard to the number of full-time equivalents (FTEs) and the time required to audit all of a practice's claims.
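Just to give a feel for the scale, here is a back-of-the-envelope workload estimate. The per-claim review time and annual productive hours below are hypothetical assumptions for illustration, not figures from the SOW or from the earlier articles.

```python
# Rough, hypothetical estimate of what a full audit of 20,000 claims costs in
# auditor time. All inputs are illustrative assumptions.

total_claims = 20_000
minutes_per_claim = 20              # assumed average review time per claim
productive_hours_per_fte = 1_800    # assumed annual productive hours per auditor

total_hours = total_claims * minutes_per_claim / 60
fte_years = total_hours / productive_hours_per_fte

print(f"Total review hours: {total_hours:,.0f}")   # about 6,667 hours
print(f"Auditor FTE-years:  {fte_years:.1f}")      # about 3.7 FTE-years
```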

However, if you think that the sample is biased, and there are tens if not hundreds of thousands of dollars at risk, consider a full audit as an option.  My experience is that the payer does not want to engage in this type of brute-force audit and will either throw out the extrapolation (assessing damages at face value) or negotiate a lower amount.


As in the past, no one can make this decision for you, but it is always a good idea to seek competent counsel when there is a lot at stake. Don't go this round unless you are ready to go the distance; bluffing doesn't always work, and if you are called on it, expect a less-than-pleasant experience.

Bottom line? Take a hard look at what the auditor is telling you are the facts, but remember that there are three sides to every audit: yours, the auditor's, and, somewhere in the middle, the truth.

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support and process improvement.

Contact the Author

frank@frankcohengroup.com

To comment on this article, please go to editor@racmonitor.com

