June 13, 2011

Another Way to Skin a Cat(fish)

By Frank Cohen

In the past I have discussed issues involving audit results based on extrapolation. As a refresher, extrapolation is a statistical technique used to infer a conclusion about a universe of data from a sample randomly extracted from that universe, or at least something like that.

In our world, an extrapolation occurs when it is unreasonable and/or not cost-effective to audit every single claim within a practice. For example, let's say that a practice submits 10,000 claims a year and an audit is initiated on claims filed during the past three years. In this case there would be 30,000 claims at risk for audit, but it would be crazy even to think about actually auditing that many claims. So the standard operating procedure would be to randomly sample some number of those claims (say, 100) and use the results from that sample to infer results for the population.
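
To make the sampling step concrete, here is a minimal Python sketch. The claim IDs are invented for illustration; in practice the random sample is usually pulled with a statistical tool such as the OIG's RAT-STATS.

```python
import random

# Hypothetical universe of 30,000 claim IDs (three years at 10,000 claims per year)
universe = [f"CLM-{i:05d}" for i in range(30_000)]

# Draw a simple random sample of 100 claims to audit
random.seed(2011)  # fixed seed only so the example is repeatable
sample = random.sample(universe, 100)

print(len(sample))   # 100
print(sample[:3])    # a few of the sampled claim IDs
```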

Continuing with our example, let's say that of those 100 claims, 25 were found to have been paid in error. From the standpoint of extrapolation, we then could infer that approximately 25 percent (plus or minus what is known as the "sample error") of the claims in the universe also likely were paid in error. Most often, at least for government audits, the sample error is translated into a confidence interval, and the lower bound of the 90 percent confidence interval is used to infer the error to the population. In this case, the 90 percent confidence interval range is 18 percent to 33 percent. I would state this by saying I am 90 percent confident that the true error rate for the universe is somewhere between 18 and 33 percent. Another way to interpret this: if I repeated the exercise many times, drawing a fresh sample of 100 claims and computing a confidence interval each time, about 90 out of every 100 of those intervals would contain the true error rate.
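
For readers who want to see where those numbers come from, here is a rough sketch using the normal approximation for a proportion. It is only an illustration; the 18-to-33-percent range quoted above may have been computed with a slightly different method (an exact binomial interval, for instance), so the upper bound here lands about a point lower.

```python
import math

n = 100        # claims sampled
errors = 25    # claims found to have been paid in error
p_hat = errors / n

# Standard error of the proportion (normal approximation)
se = math.sqrt(p_hat * (1 - p_hat) / n)

# z value for a two-sided 90 percent confidence interval
z = 1.645

lower, upper = p_hat - z * se, p_hat + z * se
print(f"Point estimate: {p_hat:.0%}")            # 25%
print(f"90% CI: {lower:.0%} to {upper:.0%}")     # roughly 18% to 32%
```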

In order to avoid arguments about the issue of sample error, most auditors will settle on the lower bound of the 90 percent confidence interval (in this case, 18 percent). One point to note here is that most of the time, overpayment amounts are not measured in proportions, but rather in dollars. So let's say that of the 25 claims found to have been paid in error, the total overpayment amount is $2,500, or an average of $100 per overpaid claim ($25 per sampled claim, which is the figure used for extrapolation). Once again we still have the issue of sample error, so using a couple of other values (the standard deviation and the number of units in the sample) we would calculate a 90 percent confidence interval around the mean of $25. For the sake of simplicity, let's say that the margin of error comes out to $10. In this case, the range would be $15 to $35, and once again, to avoid argument, the auditor would use the lower bound of $15. To complete the extrapolation, we simply multiply the $15 by the 30,000 claims to come up with a total estimated overpayment of $450,000. As a side note, some auditors will extrapolate using the proportion of overpaid claims and the average overpaid amount per overpaid claim. This method is a bit less accurate, as it has to factor in the sample error for both the proportion of claims in error and the overpayment amount on the overpaid claims.
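
Here is the same dollar-based extrapolation laid out as a short sketch, using the figures from the example. Note that the $10 margin of error is the simplified number from the text, not a value computed from an actual standard deviation.

```python
# Figures from the running example above
universe_size = 30_000             # claims at risk for audit
sample_size = 100                  # claims actually reviewed
overpaid_in_sample = 2_500         # total dollars found overpaid in the sample

# Mean overpayment per sampled claim
mean_overpayment = overpaid_in_sample / sample_size       # $25

# Simplified 90 percent margin of error from the text ($10); in a real audit
# this comes from the sample standard deviation and the sample size
margin_of_error = 10
lower_bound = mean_overpayment - margin_of_error          # $15

# Extrapolate the lower bound to the full universe
estimated_overpayment = lower_bound * universe_size
print(f"Estimated overpayment: ${estimated_overpayment:,.0f}")   # $450,000
```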

While there are a lot of places between the audit notice and the audit results in which to make mistakes, in theory the process is pretty simple. That is, until we introduce the concept of stratification.

Stratification

Stratification by definition is the process of layering; in our case it describes a situation in which the universe of claims is broken out into different subsets, or strata, each of which serves as its own sampling frame.

Stratification is supposed to be based on some logical distribution of unique features, but I find that this is not always the case. Most often, strata are defined by criteria such as code category (e.g., E/M, surgery, pathology), paid claim amount, date range or other factors. When this happens, the sample for each stratum is a bit smaller, but the extrapolation for each stratum is based on the universe of claims for that stratum only.

In our example, let's say that the universe was divided into three strata separated by paid claim amount. Claims paid at under $100 are designated as stratum 1, claims paid at $100 or more but less than $500 are stratum 2, and claims paid at $500 or more are stratum 3.

The sample size is 30 claims per stratum. So consider that stratum 1 has a universe of 20,000 claims, stratum 2 a universe of 8,000 claims and stratum 3 a universe of 2,000 claims.

In order to estimate the total overpayment amount, we use the same technique as above, only we apply it to each stratum individually and then add together the extrapolated overpayment estimate for each to get the grand total. Again, in theory at least, pretty simple.
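
As a sketch of how the stratified estimate rolls up, the snippet below uses the stratum universes above paired with hypothetical lower-bound overpayment figures per sampled claim; the dollar amounts are placeholders, not results from any actual audit.

```python
# Per-stratum universe sizes from the example, paired with hypothetical
# lower-bound mean overpayments per sampled claim (placeholder dollar figures)
strata = {
    "stratum 1 (under $100)":      (20_000, 5.00),
    "stratum 2 ($100 to $499.99)": ( 8_000, 40.00),
    "stratum 3 ($500 and up)":     ( 2_000, 150.00),
}

grand_total = 0.0
for name, (universe, lower_bound_mean) in strata.items():
    estimate = universe * lower_bound_mean
    grand_total += estimate
    print(f"{name}: ${estimate:,.0f}")

print(f"Grand total estimated overpayment: ${grand_total:,.0f}")
```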

The Nitty Gritty

After an 800-plus word introduction, here's the important part of the story. In a few of the audits I have reviewed of late, I came across a situation in which a claim being audited was incomplete. Remember, at the claim level there can be more than one claim line. For example, if I see a patient, perform a procedure and maybe run a lab test or two, there can be three or four claim lines that make up the claim. So when we talk about the paid amount for a claim (a very common metric), we are really talking about the sum of the paid amounts for the claim lines within that claim. When some of those claim lines are missing from the audit file, it often just looks like an innocent mistake, and in some cases the client just wanted to let it go, since it would make sense (at least at first glance) that the fewer claim lines within a claim, the lower the overpayment amount. Why? Because the claim paid amount would be lower.
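
To make the claim-versus-claim-line distinction concrete, here is a hypothetical record for a claim with four lines; the codes and dollar amounts are invented, but they add up to the $485 figure used in the example that follows.

```python
# A claim is a collection of claim lines; the claim paid amount is the sum
# of the paid amounts on its lines. The codes and dollar amounts here are
# invented for illustration.
claim = {
    "claim_id": "X",
    "lines": [
        {"code": "99214", "paid": 110.00},   # office visit (E/M)
        {"code": "20610", "paid": 225.00},   # procedure
        {"code": "80053", "paid": 90.00},    # lab test
        {"code": "85025", "paid": 60.00},    # lab test
    ],
}

claim_paid_amount = sum(line["paid"] for line in claim["lines"])
print(claim_paid_amount)   # 485.0 -- drop or add a line and this total (and
                           # possibly the stratum assignment) changes
```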


By Way of Example

Here's an example from our mock practice: claim X is reported with four claim lines and a claim paid amount of $485. When the practice looked at the actual claim record (or the chart), it found that there were actually five claim lines, and if the fifth had been included the paid claim amount would have been $535. The first thought is, "great, they forgot to include a claim line, so that line can't be found to have been paid improperly and can't add to the overpayment amount." It's definitely a logical assumption. But let's take a closer look at this.

A Closer Look

In its current position, claim X falls into stratum 2, which has a universe of 8,000 claims. Not only does it fall into this stratum, it sits at the higher end of the stratum's range. So the overpayment amount determined for this claim goes into the stratum 2 bucket, which, when averaged, will be multiplied by the universe of 8,000 claims to calculate the extrapolated amount. Just for the sake of argument, let's say that all of the claim lines were determined to have been paid in error, and as such, the overpayment amount is reported as $485. That's only $15 shy of the highest amount that could have been found, since the top of the range for this stratum is just under $500. Now let's look at this from the other side. Let's say that the fifth claim line was included and the entire claim was paid in error, increasing the overpayment amount from $485 to $535 (an increase of $50). Again, logic says that the former case is better than the latter. But is it really? In the former case we have an overpayment amount at the upper end of stratum 2 factored into a universe of 8,000 claims, while in the latter case the claim moves up into stratum 3, where its overpayment sits at the bottom of the range and is factored into a universe of only 2,000 claims.
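
A quick back-of-the-envelope calculation shows why. Assuming each claim's overpayment is averaged over the stratum's 30-claim sample and then multiplied by that stratum's universe (and holding everything else constant), the two scenarios play out like this:

```python
sample_size_per_stratum = 30

# Scenario A: the claim stays as reported (4 lines, $485 overpayment) and is
# extrapolated against stratum 2's universe of 8,000 claims
impact_a = (485 / sample_size_per_stratum) * 8_000

# Scenario B: the fifth line is restored (5 lines, $535 overpayment), which
# pushes the claim into stratum 3 and its universe of 2,000 claims
impact_b = (535 / sample_size_per_stratum) * 2_000

print(f"Contribution via stratum 2: ${impact_a:,.0f}")   # about $129,333
print(f"Contribution via stratum 3: ${impact_b:,.0f}")   # about $35,667
```

In other words, under these assumptions, leaving the fifth line out more than triples this one claim's contribution to the extrapolated demand.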

Whether these omissions occur on purpose or not, when a sample is stratified based on a paid claim amount, it is important that practices review the claims in the sample to make sure that all of the claim lines are included. If they are not, it would be a good idea to try to reconstruct at least some of them to determine whether the exclusion results in a change to a lower stratum, and whether that might result in a higher overpayment estimate. There are many ways to skin a cat(fish), and I am afraid we have just discovered another.

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support and process improvement.

Contact the Author

frank@frankcohengroup.com

To comment on this article, please email editor@racmonitor.com

