February 21, 2013

A Case for Statistically Valid Random Samples


Through the years I have been a strong physician’s advocate, and nothing has stoked that fire in me more than the recent increase in aggressive auditing of physician practices.

It is my opinion that the behavior of many government-auditing agencies, more specifically RACs of late, has bordered on illegal. In some cases I have seen, the RACs appear to act like early 20th-century mobsters, extorting money from business owners for “protection” (that is, protection from the mobsters themselves). The only difference is that the mobsters made it clear that this was extortion, while the auditors try to present the illusion of propriety. 

Unfortunately, as several of my past surveys have shown, far too few practices appeal overpayment findings. This is due to a number of reasons, from the cost to not having the staff to just not understanding how the appeals process works (the latter being the most common). And I get that mentality, since the process can be a bit intimidating, especially when it seems that only the provider, and not the auditing agency, is bound by the rules.

In 2010, for example, 906 million claims were processed by carriers and Part B MACs, of which 93 million (10.25 percent) were denied. In that same year, only 2.4 million of those denied claims (about 2.6 percent) generated an appeal request. More than half of those appeals were submitted by physicians, and according to a fact sheet put out by the Centers for Medicare & Medicaid Services (CMS), 53.1 percent were reversed in favor of the provider. 
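For readers who want to see where those percentages come from, here is a quick check of the arithmetic. Note that the published totals above are rounded, so the computed denial rate (about 10.26 percent) differs slightly from the fact sheet's 10.25 percent, which presumably came from unrounded counts.

```python
# Sanity check of the 2010 claim figures cited above (rounded totals).
processed = 906_000_000   # claims processed by carriers and Part B MACs
denied = 93_000_000       # claims denied
appealed = 2_400_000      # appeal requests

denial_rate = denied / processed * 100   # ~10.26 percent
appeal_rate = appealed / denied * 100    # share of *denied* claims appealed, ~2.58 percent

print(f"denial rate: {denial_rate:.2f}%")
print(f"appeal rate among denials: {appeal_rate:.2f}%")
```

The second figure makes the point of the paragraph concrete: the 2.6 percent appeal rate is measured against the 93 million denials, not the full 906 million claims.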

For Medicare, there are five levels in the appeals process. The first level is redetermination by a Medicare carrier, a fiscal intermediary (FI), or a Medicare Administrative Contractor (MAC). At this level, the claim is reviewed by personnel other than those who made the initial determination, but they work for the same contractor, operating under the same CMS mission.

The second level is reconsideration by a qualified independent contractor (QIC); this occurs if you aren’t happy with the decision issued at the first level. QICs are independent contractors and act as mediators, if you will, to help settle appealed issues.

The third level requires a hearing in front of an administrative law judge (ALJ), and if the ALJ cannot issue a determination (or if you don’t agree with it), you can go to the fourth level: a review by the Medicare Appeals Council (confusingly, also abbreviated MAC), which sits within the HHS Departmental Appeals Board. Beyond that, you can escalate your appeal to a judicial review in federal district court, but the amount in question must be at least $1,400 (for 2013).

Is there a point to all this, you might ask? You certainly would have the right to ask, since I didn’t intend this to be an article on appeals! Note, however, that some providers have committed hundreds of thousands of dollars and many years to get to the end of that road, and only sometimes with success. So whether or not you should appeal is up to you, and it’s a decision that involves a lot of business, resource and financial considerations. But if you do appeal, know that you have a better than 50 percent chance of getting the finding overturned. And that should raise the first red flag: How can an auditor be wrong (make an auditing error) more than half of the time? If auditors are wrong that often, then at least starting the appeal process is obviously a good idea. In fact, if a coder were making that many errors, it would be considered a “high or sustained” error rate.

Let’s shift gears for a moment; if you hang with me for a few more sentences, I promise to link all this up to a logical conclusion. As some of you may know, I am a bit of an evangelist when it comes to validating what are supposed to be statistically valid random samples (SVRS) drawn for an audit. This is critical when the audit involves extrapolation used for calculating damages. In past articles I have discussed how to conduct a “smell test” and have gone over ways to determine whether a sample should have been stratified.

The reason this is so important is that the statement of work (SOW) put out by CMS stipulates two things: First, for extrapolation to be performed, the auditor must determine that there is a “high or sustained level” of errors. Second, the extrapolation should be based on the presence of an SVRS. The former is an undefined metric; there is no quantifiable value that defines “high” or “sustained.” Even worse, such a finding cannot be disputed, either administratively or legally. As has happened with some of my clients, even if the auditor finds an error rate of less than 5 percent, it still can push for extrapolation, because its threshold for “high” (say, an error rate of 3.5 percent) cannot be disputed. When that happens, the only thing you can do, short of engaging in a battle of coding experts, is show that the sample is neither statistically valid nor random.
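To make one such “smell test” concrete, here is a rough sketch of a single check you might run: comparing a sample’s mean paid amount against the universe’s, measured in standard errors. The data here are entirely simulated, and a real review would involve several such checks (on quantiles, stratification, frame construction and so on), not just this one.

```python
import random
import statistics

random.seed(42)

# Simulated universe of paid amounts (right-skewed, as claim dollars usually are).
universe = [round(random.lognormvariate(4, 1), 2) for _ in range(5000)]

# A simple random sample: every claim has an equal chance of selection.
sample = random.sample(universe, 60)

# One crude "smell test": is the sample mean within a plausible distance of
# the universe mean? A large z-score suggests the sample may not have been
# drawn randomly, or that the universe should have been stratified first.
u_mean = statistics.mean(universe)
s_mean = statistics.mean(sample)
se = statistics.stdev(sample) / len(sample) ** 0.5
z = (s_mean - u_mean) / se

print(f"universe mean={u_mean:.2f}, sample mean={s_mean:.2f}, z={z:.2f}")
```

A genuinely random sample will usually land within a couple of standard errors of the universe mean; a sample that lands far outside that band deserves the scrutiny described above.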

Let’s fast-forward to Docket Number M-11-869 and a decision by the Medicare Appeals Council. This was a case of an extrapolated audit in which the Program Safeguard Contractor (PSC), AdvanceMed, audited 60 claims it stated were drawn randomly from a universe of claims in question. In its findings, AdvanceMed downcoded or denied 227 of the 235 line items (codes) reviewed. Based on the extrapolation that was performed, Cigna, a Medicare Administrative Contractor (can anyone say bias?), requested a return of $211,218. The provider disagreed with the findings and, after hiring a statistician to review the report, appealed based on the determination that the sample drawn by the auditor did not meet the test of an SVRS. In the 20 pages that outline the results of the appeal, a number of issues are brought to light. The statistician for the appellant complained that the precision level (15 percent) was not in adherence with Medicare policy, noting that it should have been 10 percent. The statistician for AdvanceMed argued that there isn’t any written Medicare policy about the precision level, but agreed that the manual provides that “in most situations, the lower limit of a one-sided, 90 percent confidence interval shall be used as the amount of overpayment to be demanded for recovery from the provider or supplier.”
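That manual language, using the lower limit of a one-sided 90 percent confidence interval as the demand amount, can be sketched as follows. Every figure here is invented for illustration; a real calculation would follow the contractor’s prescribed estimator and would typically use a t critical value rather than the normal approximation shown.

```python
import statistics

# Hypothetical per-claim overpayment findings from a sample of n claims,
# drawn from a universe of N claims (all numbers are illustrative only).
overpayments = [0, 0, 120.50, 0, 310.00, 45.25, 0, 0, 0, 890.00,
                0, 60.00, 0, 0, 275.75, 0, 0, 130.00, 0, 0]
N = 2500  # claims in the universe

n = len(overpayments)
mean = statistics.mean(overpayments)
se = statistics.stdev(overpayments) / n ** 0.5

# Lower limit of a one-sided 90% confidence interval for the mean
# overpayment, using the standard normal critical value 1.2816.
z90 = 1.2816
lower_mean = mean - z90 * se

point_estimate = mean * N   # midpoint extrapolation to the universe
demand = lower_mean * N     # the amount the manual says to demand

print(f"point estimate: ${point_estimate:,.2f}")
print(f"one-sided 90% lower bound: ${demand:,.2f}")
```

The lower bound is deliberately conservative: it is the amount the contractor can claim with 90 percent confidence is *at least* owed, which is why it sits below the midpoint extrapolation.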

Then, to my horror, the central limit theorem was raised in defense of pitifully small samples drawn from significantly skewed distributions. I guess no one involved ever read “Cochran’s Rule for Simple Random Sampling” (J.R. Statist. Soc. B (2000) 62, Part 4, pp. 787-793). Here, I invoke Clarke’s fourth law, which states that “for every expert, there is an equal and opposite expert.” The point is that this appeal was carried all the way to the ALJ level, where, in the first hearing, the ALJ ruled in favor of the provider. Taken from the case summary, based on the “unanimous expert testimony,” the ALJ determined that “the sampling methodology utilized by the PSC did not comply with Medicare requirements and therefore the extrapolated overpayment calculation was invalid.” This didn’t invalidate the case entirely; the provider remained liable, at face value, for the overpayments on at least some of the individual claims. But because the sampling methodology was invalid, only those actual overpayments, not the extrapolated amount, could be recovered.
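To see why the CLT defense rings hollow, recall Cochran’s well-known guideline (from his Sampling Techniques text): for normal-theory confidence intervals to behave, a simple random sample from a skewed population needs n greater than roughly 25 times the squared skewness. A small simulation with invented, right-skewed data shows how quickly that requirement outruns a 60-claim sample.

```python
import random
import statistics

random.seed(1)

def skewness(xs):
    """Population (biased) skewness estimate."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# A heavily right-skewed population, loosely resembling claim dollar amounts.
population = [random.expovariate(1 / 200) ** 1.5 for _ in range(20000)]
g1 = skewness(population)

# Cochran's guideline: n should exceed roughly 25 * skewness**2 before
# normal-approximation (CLT-based) intervals can be trusted.
n_needed = 25 * g1 ** 2
print(f"population skewness ~ {g1:.2f}; suggested minimum n ~ {n_needed:.0f}")
```

For distributions as skewed as the one simulated here, the suggested minimum sample size lands in the hundreds, which is why invoking the CLT for a 60-claim sample of skewed dollar amounts deserved the skepticism it got.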


But wait, there’s more! CMS challenged the ALJ’s findings on a legal technicality and referred the matter to the MAC. The MAC felt that the PSC had not been given ample notice of the hearing, and that its resulting failure to participate was cause enough to vacate the ALJ’s findings. The MAC then referred the case back to the ALJ, and the second hearing included testimony by the PSC’s statistical expert. The result? “The MAC found that CMS’s referral did not address the inaccuracies and uncertainties reflected in the data the PSC provided to the independent expert.” Furthermore, “The council concurs in the ALJ’s decision to invalidate the extrapolation in this case, but for reasons that differ in some manner from the reasons given by the ALJ in his decision. The council determines that the extrapolation is insufficiently reliable because of shortcomings in the way the samples were drawn or the frames were sorted, and concerns about the PSC’s provision of inconsistent data to the independent expert reviewing the sampling without explanation to the statisticians or to the council. The appellant remains financially liable only for the overpayments on individual claims in the sample.”

Who said there are no more stories with happy endings? The sad part is how much time and money must have been spent by both the provider and the government to circle back to the conclusion that should have been reached in the first place. How much longer can the government permit its contractors to engage in this type of egregious behavior? And just what does it take to encourage providers to appeal findings with which they disagree? If there is a moral to this story, it is that we should never take such abuse lying down. I don’t know how much more evidence we need to be convinced that the frequency and aggressiveness of audits are only bound to increase, and those who willfully remain unprepared will deserve what they get. 

Being a conspiracist, I would like to think that this is all part of a grand plot by the government to take over total control of our healthcare system, including our providers. But alas, I close with Hanlon’s razor, which suggests that we “never attribute to malice that which can be adequately explained by stupidity.”

Alternatively, this sometimes takes the form of Finagle’s law, which states, “Do not invoke conspiracy as an explanation when ignorance and incompetence will suffice, as conspiracy requires intelligence.”

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support and process improvement.

Contact the Author


To comment on this article, please email editor@racmonitor.com.