September 21, 2016

Future of Medicare Audits, Part II: Defending Against the Tyranny of Algorithms

By Edward M. Roche

EDITOR’S NOTE: This is the second installment in an exclusive RACmonitor three-part series documenting the implementation of cyber audits by third-party auditors.

In Part I of this series, we reviewed how the number of Medicare audits has increased by almost 1,000 percent in the past five years, and how virtually no decisions by administrative law judges (ALJs) are being issued within the statutory time frame. 

We also discussed how Recovery Auditors (RAs) have started to rely on big-data mining of hospital claims to generate large numbers of diagnosis-related group (DRG) downgrades. This is costing hospitals plenty, not only in reductions of payment revenue, but also in constantly increasing costs of defending against audits.

The use of computer algorithms has drastically reduced the cost of conducting audits, but there has been no corresponding reduction in defensive costs for hospitals, and this is an example of what military people call “asymmetric warfare” – whereby the cost of defense is always disproportionately greater than the cost of offense. It is an impossible game to win.

We will now examine a few of the legal issues presented by the need to defend not against an audit, but against an algorithm.

The Medicare Program Integrity Manual (MPIM) specifies that the decision to conduct an audit is “not reviewable” in a hearing. This means that even if a provider is being profiled or targeted by an artificial intelligence algorithm, it is fair game, no matter the reason.

This non-reviewability does not extend to the results of the review itself; those are handled by the appeals system. The typical appeal has little success at the first two levels – redetermination and reconsideration – so the real contest takes place before the ALJ. Appeals generally rest on a claim-by-claim argument regarding each patient or procedure, combined with a refutation of the statistical extrapolation, which is almost always based on shoddy work.
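To see why the extrapolation is such a frequent target, consider the basic arithmetic involved. The sketch below is a hypothetical illustration only – real audits use specialized tools and often stratified sample designs, and the figures here are invented – but it shows how a modest sampled overpayment is projected across an entire claims universe, conventionally using the lower bound of a one-sided 90 percent confidence interval (approximated here with z = 1.645):

```python
import statistics

def extrapolate_overpayment(sample_overpayments, universe_size, z=1.645):
    """Project a sampled overpayment to the full claims universe.

    Hypothetical sketch: actual audit extrapolations use dedicated
    statistical software and stratified designs. z=1.645 approximates
    a one-sided 90% confidence bound on the mean overpayment.
    """
    n = len(sample_overpayments)
    mean = statistics.mean(sample_overpayments)          # average overpayment per sampled claim
    se = statistics.stdev(sample_overpayments) / n ** 0.5  # standard error of the mean
    point_estimate = mean * universe_size                # naive projection
    lower_bound = (mean - z * se) * universe_size        # demanded amount, conventionally
    return point_estimate, lower_bound

# Invented example: 30 sampled claims, $200 average overpayment,
# projected onto a 5,000-claim universe.
sample = [150, 250, 200, 0, 400, 175, 225, 300, 100, 200] * 3
point, lower = extrapolate_overpayment(sample, 5_000)
```

A flaw in the sample design or the variance calculation can swing the demanded amount by hundreds of thousands of dollars, which is why the extrapolation is attacked so routinely on appeal.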

This litigation profile will change. Why? Rather than challenging the expertise or judgment of the audit reviewer who rejected a claim, the argument instead will be aimed at discrediting the algorithm responsible for the rejection.

But since these algorithms make decisions based not on medical logic, but only on patterns of statistical probability, the arguments against them will by necessity be couched in quasi-mathematical terms. Doing so will require an entirely different type of expert, and an understanding of what we might call “algorithm law.” For the most part, today’s health law attorneys are ill-prepared to litigate this type of case.
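To make the distinction concrete, here is a minimal sketch of the kind of purely statistical screen such an algorithm might apply. Everything in it is invented for illustration – the hospital names, the rates, and the two-standard-deviation threshold are assumptions, not any auditor’s actual method – but it shows how a claim population can be flagged without any chart ever being read:

```python
import statistics

def flag_outliers(severity_rates, threshold=2.0):
    """Flag providers whose rate of high-severity DRG coding deviates
    from the peer-group mean by more than `threshold` standard deviations.

    Purely statistical: no medical record is examined and no clinical
    judgment is applied. Hypothetical illustration only.
    Returns {provider: z-score} for each flagged provider.
    """
    rates = list(severity_rates.values())
    mean = statistics.mean(rates)
    sd = statistics.stdev(rates)
    return {p: (r - mean) / sd
            for p, r in severity_rates.items()
            if sd and abs(r - mean) > threshold * sd}

# Invented peer group: fraction of discharges coded at high severity.
rates = {"Hosp A": 0.31, "Hosp B": 0.29, "Hosp C": 0.30,
         "Hosp D": 0.55, "Hosp E": 0.28, "Hosp F": 0.32}
flagged = flag_outliers(rates)
```

Note what the flag actually asserts: not that any claim at the outlier hospital is miscoded, only that its coding pattern differs from its peers – which is precisely the gap the counterarguments below exploit.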

We have seen a number of court fights in which providers have been unable to examine the algorithms used in setting payment rates. In Banner Health, the court refused discovery of the formulas being used. But in University of Colorado Health at Memorial Hospital, plaintiffs were given access to the formulas and source files for calculating outlier payments, including the MedPAR files. There was a similar fight in Lee Memorial Hospital v. Burwell. The Boston Medical Center case involved litigation over the algorithm used for hospital reimbursement. In the Mylan Laboratories case, the argument was over how algorithms were used to calculate the wholesale acquisition cost and average wholesale price of a number of drugs.

In the Strom case, the defendants argued that the “algorithm used by plaintiff to identify false claims (was) insufficiently accurate, and at most (could) be said to identify claims that ‘were probably false rather than identifying particular claims that actually were false.’” Unfortunately, the court decision did not rule on this specific argument.

What arguments can we expect to be used to fight off these big-data techniques? Let us suggest a few.

Counterargument No. 1: Algorithms are not medical judgment, and their output is not an “audit.” The original idea of an audit is that claims will be analyzed by suitably qualified personnel with relevant medical knowledge. Algorithms are different: their decisions to “downcode” DRGs are derived from statistical patterns in the data, not from medical parameters.

Counterargument No. 2: Algorithms are biased because they attempt to fit the data to a least-cost basis, but never to a higher-cost basis.

Undoubtedly, there will be other counterarguments. We would like to hear from you with any ideas.

But regardless of what happens on the appeals trail, we can say that the use of big data and artificial intelligence algorithms will continue to displace medical decision-making, substituting statistical inference for medical judgment.

EDITOR’S NOTE: In Part III, Edward Roche will examine the longer-term public policy implications of this unstoppable trend.

 

About the Author

Edward M. Roche is the founder of Barraclough NY LLC, a litigation support firm that helps healthcare providers fight against statistical extrapolations.

 

Contact the Author

Roche@barracloughllc.com

 

Comment on this Article

editor@racmonitor.com
