November 1, 2010

RAC Stats: What to Look for After the Audit is Done, but Before it’s Over

By Frank Cohen

From the perspective of undergoing an audit by a RAC (or any entity, for that matter), it is important to understand the drivers that pointed to your organization in the first place. Was it due to overuse of certain codes? Were there some aberrant billing patterns that grabbed someone's attention?

Maybe you just have a large practice and the auditing agency knows that, due to the overwhelming complexity of merely operating it, you provide a gold mine of possible coding and billing errors. But after the audit has been conducted, knowing the "why" somehow seems to lose importance once you are facing the possibility of a large overpayment demand. In the movie "Dennis the Menace," after Dennis had created a mess (as he usually did), Mrs. Wilson was telling Mr. Wilson about how sometimes, things just happen. Mr. Wilson responded as many of us feel, saying, "In a crisis of this magnitude, there must be someone to blame." And while at some point you will want to address the "why" in order to manage future risk, the task at hand is to mitigate whatever damage already has been done.

From the RAC's perspective, there are two principal components to an audit: qualitative and quantitative. Qualitatively, they review the charts and/or other medical records and documentation. Quantitatively, they determine how much was paid to the practice in error, which translates to how much they say the practice needs to pay back. These two principal components contain a plethora of far more complex tasks, and while both are of equal concern, I will leave the first to the more qualified coding community. In this article, I want to discuss the second component: determining the damage estimate.

Three-Step Process

Step 1:

There are three major steps involved in performing the overpayment estimate, and interestingly enough, the first step begins long before the documentation review is initiated. The RAC begins by selecting what they often refer to as a Statistically Valid Random Sample (SVRS). "Statistically Valid" refers to the sample size (whether it is large enough to be representative) and type (whether it is aggregated or stratified), and "Random Sample" refers to the method used to select which entities will be reviewed.

Rarely have I seen an audit that actually was genuinely statistically valid, and surprisingly, I have almost never seen one for which the sample was truly random. As to the latter, if the sample is not random, it can have a huge effect on the outcome of the audit. The audit could be focused on claim lines, on claims (which contain the claim lines), on members (whose records could contain multiple claims), or even on treatment events (which could involve multiple members). After the sample is selected, the audit begins, involving a qualitative analysis of any number of criteria. Most recently, I have seen audits in which the RAC representative determines the medical necessity of a diagnostic test or treatment. In many cases, they are looking to see whether the documentation is adequate to support the reported procedure code. In any case, the purpose is to make a judgment as to whether the amount was paid appropriately or in error.
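To make the sampling piece concrete, here is a minimal sketch, in Python, of what a genuinely random selection looks like: every chart in the universe has an equal chance of being drawn. The chart identifiers, universe size, and seed are hypothetical; RACs use their own sampling software, and the point is only that the selection method should be documented and reproducible.

```python
# A minimal sketch of simple random sampling from a universe of charts.
# The identifiers, counts, and seed below are hypothetical.
import random

universe = [f"CHART-{i:05d}" for i in range(1, 3601)]  # 3,600 charts, as in the example below

random.seed(2010)                      # a fixed seed makes the draw reproducible and auditable
sample = random.sample(universe, 30)   # 30 charts drawn without replacement

print(sample[:5])
```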

Step 2:

We now begin the second part of the damage analysis: calculating the point estimate for overpayments. To many, this seems like a pretty straightforward and simple process: add up the estimated overpayments and divide by the number of units in the sample to get the average. For example, let's say that 30 charts are selected for review. Of these, the RAC finds that 15 resulted in a combined overpayment of $4,275. What does this mean? Well, it could mean that the average overpayment is $142.50 per chart. Or it could mean that there is an average overpayment of $285 per overpaid chart, which, in our example, is half of all charts reviewed. In some cases, the auditor may extract the number of claims per chart or, when claims are the target of the audit, the number of claim lines per claim. As another example, let's say that, in the sample selected, there is an average of 4.6 claims per chart reviewed. The auditor then might estimate that there is an average overpayment of approximately $31 per claim (138 total claims in the sample divided into the overpayment amount of $4,275). My experience is that the auditor normally will use whichever method biases the results in their favor since they are, after all, paid a commission on what they recover. On many occasions, I have asked the RAC to explain its methodology, and the answer I normally get is that this is a "trade secret" - hardly the response I would expect from an unbiased federal contractor.
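The competing "averages" described above are easy to reproduce. This short Python sketch uses only the figures from the example; it is an illustration of the arithmetic, not a reconstruction of any particular RAC's (undisclosed) method.

```python
# Reproducing the competing point estimates from the example above.
total_overpayment = 4275.00   # combined overpayment found across the 30 reviewed charts
charts_reviewed   = 30
charts_overpaid   = 15
claims_per_chart  = 4.6       # average number of claims per chart in the sample

per_chart          = total_overpayment / charts_reviewed                        # $142.50
per_overpaid_chart = total_overpayment / charts_overpaid                        # $285.00
per_claim          = total_overpayment / (charts_reviewed * claims_per_chart)   # ~$30.97

print(f"Per chart:          ${per_chart:,.2f}")
print(f"Per overpaid chart: ${per_overpaid_chart:,.2f}")
print(f"Per claim:          ${per_claim:,.2f}")
```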


Step 3:

The final step is to extrapolate the results of the audit to the universe of charts (or members, or claims, or claim lines, etc.) of the entity being audited. Let's follow up on our example above to see just how an extrapolation might occur. Say that, for this practice, there are a total of 3,600 charts in the universe, and of these, 30 were selected for review. If we accept the estimate of $142.50 per chart as the overpayment amount, then we would multiply this figure by the total number of charts in the universe (3,600) to get a total overpayment estimate of $513,000. If we were to use the overpayment estimate of $285 for each damaged chart, which accounted for 50 percent of all reviewed charts, our point estimate would be the same ($285 X (3,600 X 50 percent)). Even if we were to use the per-claim estimate, we would end up with about the same figure (4.6 claims per chart X 3,600 charts = 16,560 claims X $30.97 per claim).
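The extrapolation itself is nothing more than multiplication. Continuing the sketch with the same illustrative figures shows why the three approaches land in roughly the same place when the sample mirrors the universe:

```python
# Extrapolating the sample point estimates to a universe of 3,600 charts.
universe_charts = 3600

per_chart          = 142.50
per_overpaid_chart = 285.00
per_claim          = 4275.00 / 138     # ~$30.97 per claim
claims_per_chart   = 4.6

est_per_chart    = per_chart * universe_charts                        # $513,000
est_per_overpaid = per_overpaid_chart * (universe_charts * 0.50)      # $513,000
est_per_claim    = per_claim * (claims_per_chart * universe_charts)   # $513,000

print(round(est_per_chart), round(est_per_overpaid), round(est_per_claim))
```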

It's in this step, the extrapolation estimate, that we see how the entire model begins to break down. First of all, what if the sample was not random? What if the number of claims per chart in the sample was significantly higher than it is for the universe? In the sample we saw 4.6 claims per chart, but what if, in the universe of charts for our practice, the average is only 3.2? More than likely, the true overpayment per chart would be less than what was estimated from the sample. How significant is that? Well, at $30.97 per claim and only 11,520 claims in the universe (3.2 X 3,600), instead of the 16,560 projected from the sample, the damage estimate would have been $356,774 instead of $513,000. And if the number of claim lines per claim (and hence per chart) were also smaller, the $513,000 estimate would be even more egregiously overstated. Even if the sample was in fact random and did in fact accurately represent the universe for the practice, there is always the risk of sample error, which depends mostly on the size of the sample.
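That sensitivity is easy to quantify. A short continuation of the sketch shows how much the extrapolation shrinks when the universe-wide figure of 3.2 claims per chart replaces the sample's 4.6 (again, hypothetical numbers drawn from the example):

```python
# Effect of a non-representative sample: 4.6 claims per chart in the sample
# versus only 3.2 claims per chart in the universe.
universe_charts = 3600
per_claim = 30.97          # per-claim overpayment estimate from the sample

claims_at_sample_rate   = 4.6 * universe_charts    # 16,560 claims
claims_at_universe_rate = 3.2 * universe_charts    # 11,520 claims

print(round(per_claim * claims_at_sample_rate))     # ~513,000 (overstated)
print(round(per_claim * claims_at_universe_rate))   # ~356,774
```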

The concept is actually quite simple: the smaller the sample, the greater the risk of error when inferring to the universe. Conversely, the larger the sample (assuming randomness, of course), the lower the risk of error. For example, if I want to figure out who Americans are going to vote for in the next presidential election, I only would have to survey 783 people to have a 5 percent margin of error either way. If I wanted to narrow that to 3 percent, I would need to nearly triple the sample size to 2,178 people, and if I wanted to be accurate to within 1 percent, I would have to ask 19,620 people.

In the case of a RAC audit, these estimates of sample error come after the audit (a posteriori) and not in advance (a priori). As such, our point estimate of overpayment, no matter how it is calculated (charts, claims, claim lines, etc.), is subject to sample error - and the smaller the sample, the greater the potential error. Because of this, most auditors will base their overpayment demands on the lower boundary of what is called the confidence interval, or CI. The CI is a range around the point estimate that expresses how precisely the sample estimates the true value for the universe. For example, when the auditor calculated that the average overpayment per chart was $142.50, s/he based this on the total overpayment found divided by the number of charts (30), which in this case is our sample size. Using a statistical package such as SAS, Minitab, SPSS or even MS Excel, we would calculate that, at a 90 percent confidence level, our estimate actually lies somewhere between $113.10 and $171.90. We can state this in two ways. First, I can say that I am 90 percent confident that the real average overpayment per chart for the universe of charts in my practice is somewhere between $113 and $172. Second, if I took 100 samples of 30 charts each and calculated a confidence interval for each one, roughly 90 of those intervals would contain the true average.

In its simplest form, this means that values within that range are not statistically distinguishable from one another. As such, for most government audits (and private audits, for that matter), the auditing agency tends to favor using the lower boundary of the confidence interval for the overpayment demand, as this eliminates much of the need to negotiate over the statistical part of the audit (at least from their perspective). So, in our example, instead of a point estimate of $513,000, the overpayment demand more likely would be $407,160 ($113.10 X 3,600).
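Because the chart-level findings are not reported in the example, the sketch below pairs the sample mean ($142.50) with an assumed standard deviation of about $95, chosen only so that the resulting interval lands near the $113-$172 range quoted above. It shows how a 90 percent confidence interval and the lower-bound demand are typically computed with a t-distribution; Python with SciPy stands in here for SAS, Minitab, SPSS or Excel.

```python
# A sketch of the 90 percent confidence interval and lower-bound demand.
# The mean and sample size come from the example; the standard deviation
# is ASSUMED for illustration (the chart-level values are not reported).
import math
from scipy import stats

n       = 30        # charts in the sample
mean    = 142.50    # point estimate of overpayment per chart
std_dev = 94.50     # assumed sample standard deviation

t_crit = stats.t.ppf(0.95, df=n - 1)        # two-sided 90% CI -> 5% in each tail
margin = t_crit * std_dev / math.sqrt(n)

lower, upper = mean - margin, mean + margin
print(f"90% CI: ${lower:,.2f} to ${upper:,.2f}")

# Auditors typically extrapolate from the lower bound of the interval:
print(f"Demand based on the lower bound: ${lower * 3600:,.2f}")
```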


Summary

In summary, there is a lot a practice can do to defend itself against improper use of statistics when undergoing a RAC audit. First, you can run a basic test of the randomness of the sample: calculate the average paid amount per unit within the sample and compare it to the same figure for the universe of units. Let's say you calculate that the average paid amount per claim for your practice is $525, but in the sample it's $750: that's a problem. In this case, it is likely that the overpayment estimate per claim for the sample will be biased upward, resulting in an overpayment demand that is unreasonably high. I would do the same with other metrics, such as claims per event, claim lines per claim, or even RVUs per event. Second, validate the point estimates being reported by the auditing agency. Know when a median is more appropriate than a mean. Make sure that an estimate is based on the total sample, not just the portion for which the units were found to be overpaid. Finally, verify the extrapolation calculations. Make sure they include a confidence interval and that it was calculated correctly. Demand the data in a usable format, such as an Excel file, rather than the fuzzy, poorly imaged PDF file that many prefer to provide (for obvious reasons).
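As a concrete illustration of that first check, the sketch below compares the sample's average paid amount per claim with the universe's. The figures and the 10 percent threshold are hypothetical; the point is simply that a large gap is a red flag that the sample, and therefore the extrapolation, may be biased.

```python
# Basic representativeness check: compare average paid per claim in the
# sample to the same figure for the universe. Figures and the 10 percent
# threshold are hypothetical; substitute your own data and judgment.
universe_avg_paid = 525.00   # average paid per claim across the whole practice
sample_avg_paid   = 750.00   # average paid per claim in the audited sample

ratio = sample_avg_paid / universe_avg_paid
print(f"Sample average runs {ratio:.2f}x the universe average")

if abs(ratio - 1) > 0.10:
    print("Red flag: the sample may not be representative, and the "
          "per-claim overpayment estimate is likely biased.")
```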

It's up to you to decide just how far you want to go to defend yourself against a RAC audit. If the numbers are low, you may choose to go down without a fight. But if they are high and you have as much to gain as you do to lose, consider working with a statistician or someone who is experienced with statistical analyses.

It's your money, and it's your choice, but as my attorney friend would always end his letters, govern yourself accordingly.

About the Author

Frank Cohen is principal and senior analyst for The Frank Cohen Group, LLC. He is a certified Black Belt in Six Sigma and a certified Master Black Belt in Lean Six Sigma. As a consultant and researcher, his areas of expertise include data mining and analysis, predictive modeling, applied statistics and evidence-based decision support.

Contact the Author

frank@frankcohengroup.com


Frank D. Cohen, MPA, MBB

Frank Cohen is the director of analytics and business intelligence for DoctorsManagement, a Knoxville, Tenn., consulting firm. He specializes in data mining, applied statistics, practice analytics, decision support, and process improvement. He is a member of the RACmonitor editorial board and a popular contributor on Monitor Mondays.
