EDITOR’S NOTE: The following is the third article in a five-piece series titled “Dirty Tricks,” covering abuse in audits and statistical extrapolation in the healthcare industry.
I’ve been helping providers fight against statistical extrapolations since 2001. It has not been a pleasant experience. In almost all cases, the auditor, a Zone Program Integrity Contractor (ZPIC), failed to meet the requirements of the Program Integrity Manual (PIM).
Anyone who has been audited is familiar with statistical extrapolation. The auditor takes a sample of a few patients or claims, usually 30 or so, and reviews them to see if they are valid. If 75 percent of the claims in the sample are rejected, the auditor will demand that the provider return 75 percent of the money collected over a period of years. That’s the extrapolation.
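The arithmetic of an extrapolation is simple enough to sketch in a few lines. Every number below is invented purely for illustration; it does not come from any actual audit:

```python
# Hypothetical audit figures, invented for illustration only.
sample_size = 32
denied_claims = 24                          # claims the auditor rejected in the sample
error_rate = denied_claims / sample_size    # 0.75

total_paid = 2_000_000.00                   # total paid to the provider over the audit period
demanded = error_rate * total_paid          # the extrapolated repayment demand

print(f"Error rate: {error_rate:.0%}")             # Error rate: 75%
print(f"Extrapolated demand: ${demanded:,.2f}")    # Extrapolated demand: $1,500,000.00
```

A denial rate observed in a handful of claims is projected onto every dollar paid, which is why the fairness of the sample matters so much.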
Providers have the right to examine the statistical work and verify that it was done fairly and accurately. But to do this, the provider needs two things: first, it needs to have a complete record of the statistical work, and second, it needs to employ a credible scientific expert who understands statistics and who can see through the often amateurish and deceptive work of the contractor.
This series examines the numerous dirty tricks contractors use to stop providers from exercising their rights. In February (http://www.racmonitor.com/rac-enews/1963-dirty-tricks-contractors-hiding-their-work.html), we examined Dirty Trick No. 1: Don’t explain anything at all; Dirty Trick No. 2: Make it impossible to analyze the information provided; and Dirty Trick No. 3: Delay every request for information. Earlier this month (http://www.racmonitor.com/news/special-bulletins.html) we covered Dirty Trick No. 4: Sending faulty CDs; Dirty Trick No. 5: Leave out critical files; and Dirty Trick No. 6: Hide or block important information. All of these tricks involve the discovery phase, which is when you are attempting to figure out what the Recovery Auditor (RA) actually did. And don’t be surprised if you never really understand it at all.
So now you are ready to look at the statistical work. You want to check whether a valid statistical methodology was used; that’s a good place to start. But what is a “valid statistical methodology”? That’s not defined in the PIM, at least not in any single place. In fact, the opposite is true.
Dirty Trick No. 7: Manufacture Data out of Thin Air.
One of the first things a contractor must determine is the size of the sample to take. There are formulas for this, and they tell you what sample size to use for a particular level of accuracy in your extrapolation. The more accurate you want to be, the larger your sample should be. If your sample is too small, then the work becomes wildly inaccurate and unfair.
Even without quoting the formula, we can examine the inputs it requires. One of the most crucial is the variation in the variable being estimated. The technical term is “standard deviation of the variable being estimated,” but we can just say “variation.” If there is a lot of variation, the sample needs to be larger.
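The textbook version of this relationship can be sketched in a few lines. The dollar figures below are illustrative assumptions, not numbers from any actual audit, and real sampling designs add refinements such as a finite-population correction:

```python
import math

def required_sample_size(std_dev: float, margin_of_error: float, z: float = 1.96) -> int:
    """Classic textbook formula for the sample size needed to estimate a mean
    within a given margin of error E at a given confidence level
    (z = 1.96 corresponds to 95 percent confidence):

        n = (z * sigma / E) ** 2, rounded up.
    """
    return math.ceil((z * std_dev / margin_of_error) ** 2)

# Invented figures: per-claim overpayments vary a lot (sigma = $450),
# and we want the estimate within +/- $50 at 95 percent confidence.
print(required_sample_size(450, 50))   # high variation -> large sample (312)
print(required_sample_size(100, 50))   # low variation  -> much smaller sample (16)
```

Note how sharply the required sample shrinks when the assumed variation shrinks; that is exactly why the number fed into this formula matters.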
If you wanted to predict the presidential election outcome in California, you would need to poll people in many different voting districts. If you took a small sample only around Berkeley, you would not end up with something very representative of our Golden State, where only around 55 percent of residents are native English speakers. There is a lot of “variation,” so to be accurate you would need a larger sample taken from many different places.
The same is true with Medicare or Medicaid audits. The variable you are trying to estimate is the amount of “overpayment.” So in order to use the formulas that determine sample size, you need to know the variation in overpayments. But the RA often doesn’t bother to determine this. Instead, it skips the step altogether: it simply makes up a number and plugs it into the formulas that determine sample size. It manufactures data out of thin air. That’s the dirty trick.
How do they do this? Often, they will use the variation of some other variable, such as the paid amounts (not the overpayment amounts). This variable, which is not the correct one, will have much less variation. And what does that mean? Remember the rule: if there is more variation, you need a larger sample; so if there is smaller variation, then you can get by with a smaller sample. So the contractor will perform a “bait and switch.” It will take this different number and plug it into the complicated formulas used to determine sample size. And of course, a small sample means more profits for the RA, because it has less work to do to collect your money.
It all looks very official, but it is deceitful, and methodologically wrong – dead wrong. But they do it every day.
Dirty Trick No. 8: Take Shortcuts and Skip Steps.
Another favorite dirty trick used by contractors is to ignore statistical methodology entirely and simply skip over steps. This is very common. It happens in many places, but since we are talking about sampling, let’s focus there. What do the RAs do? They don’t bother even to pretend to calculate the required sample size. Instead, they simply settle on a number and use it without a second thought. A favorite number is 30. They like 30 for some reason. We have had cases in which the sample size used was 30, but in order to achieve the accuracy claimed by the RA in its boilerplate documentation, they would have had to use a sample size of 732. That means they used a sample that is around 25 times too small.
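A quick back-of-the-envelope check puts those numbers in perspective. The precision of a sample estimate scales roughly with one over the square root of the sample size, so a sample 25 times too small is not just 25 times less precise on paper:

```python
import math

n_required = 732   # size needed for the accuracy claimed in the RA's boilerplate
n_used = 30        # size the RA actually used

ratio = n_required / n_used
# Precision scales like 1/sqrt(n), so the margin of error widens by sqrt(ratio).
print(f"Sample is {ratio:.1f}x too small")                  # Sample is 24.4x too small
print(f"Margin of error roughly {math.sqrt(ratio):.1f}x wider than claimed")
```

In other words, even under the RA’s own claimed methodology, the real margin of error would be about five times wider than the figure printed in its documentation.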
And this brings us back to the PIM. It doesn’t help. The contractors often argue that it gives them flexibility in sample size, and that any sample size is OK. It’s not. A “valid statistical methodology” does not mean using a sample size that is 25 times too small and based on skipping steps or making up data to place into crucial formulas.
This type of fast-and-loose work on the part of RAs is common, and it is the same type of work that is driving healthcare providers into bankruptcy or crippling their operations through heavy litigation costs.
In the next part of this series, we will examine a few of the consequences of these abuses. Don’t expect things to look any better there either.
About the Author
Edward M. Roche is the founder of Barraclough NY LLC, a litigation support firm that helps healthcare providers fight against statistical extrapolations.