EDITOR’S NOTE: This is the first in an exclusive RACmonitor three-part series documenting the implementation of cyber audits by third-party auditors.
The number of Medicare audits is increasing. In the last five years, audits have grown by 936 percent. As previously reported in RACmonitor, this increase is overwhelming the appeals system: fewer than 3 percent of appeal decisions are being rendered on time, within the statutory deadlines.
It is peculiar that the number of audits has grown rapidly, but without a corresponding growth in the number of employees for Recovery Audit Contractors (RACs). How can this be? Have the RAC workers become more than 900 percent more efficient? Well, in a way, they have. They have learned to harness the power of big data.
Since 1986, the world's capacity to store digital data has grown from 0.02 exabytes to 500 exabytes. An exabyte is one quintillion bytes. Every day, the equivalent of 30,000 Libraries of Congress is put into storage. That's lots of data.
Auditing by RACs has morphed into using computerized techniques to pick targets for audits. An entire industry has emerged that specializes in processing Medicare claims data and finding “sweet spots” on which the RACs can focus their attention. In a recent audit, the provider was told that a “focused provider analysis report” had been obtained from a subcontractor. Based on that report, the auditor was able to target the provider.
A number of hospitals have been hit with a slew of diagnosis-related group (DRG) downgrades from internal hospital RAC teams camping out in their offices, continually combing through their claims data. The DRG system constitutes a framework that classifies any inpatient stay into groups for purposes of payment.
The question then becomes: how is this work done? How is so much data analyzed? Obviously, these audits are not being performed manually. They are cyber audits. But again, how?
An examination of patent data sheds light on the answer. For example, Optum, Inc. of Minnesota (associated with UnitedHealthcare) has applied for a patent on “computer-implemented systems and methods of healthcare claim analysis.” These are complex processes, but what they do is analyze claims based on DRGs.
The information system envisaged in this patent appears to be specifically designed to downgrade codes. It works by running a simulation that swaps out billed codes for cheaper codes, then checks whether the resulting code configuration falls within the statistical range averaged from other claims.
If it is, then the DRG can be downcoded so that the revenue for the hospital is reduced correspondingly. This same algorithm can be applied to hundreds of thousands of claims in only minutes. And the same algorithm can be adjusted to work with different DRGs. This is only one of many patents in this area.
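To make the swap-and-test simulation concrete, here is a minimal sketch of the logic described above. Every name, DRG label, dollar figure, and threshold is an illustrative assumption for explanation only, not the actual patented system:

```python
# Hypothetical sketch of the swap-and-test downcoding simulation.
# All DRG labels, peer statistics, and thresholds are made-up
# illustrations, not real Medicare data or the Optum implementation.

PEER_STATS = {
    # Illustrative mean/stdev of billed amounts per DRG across a claims pool.
    "DRG-870": {"mean": 24_000.0, "stdev": 3_000.0},
    "DRG-871": {"mean": 11_000.0, "stdev": 2_000.0},
    "DRG-872": {"mean": 7_000.0, "stdev": 1_500.0},
}

DOWNGRADE_MAP = {
    # Cheaper "swap" candidates for each billed DRG (illustrative).
    "DRG-870": ["DRG-871", "DRG-872"],
    "DRG-871": ["DRG-872"],
}

def suggest_downgrade(drg: str, billed: float, z_cut: float = 2.0):
    """Swap the billed DRG for each cheaper candidate and test whether
    the billed amount still fits the candidate's peer statistics.
    Return the first candidate that 'fits', else None."""
    for candidate in DOWNGRADE_MAP.get(drg, []):
        stats = PEER_STATS[candidate]
        z = abs(billed - stats["mean"]) / stats["stdev"]
        if z <= z_cut:  # within the statistical range of peer claims
            return candidate
    return None
```

Run across hundreds of thousands of claims, a loop like this needs no clinical review at all; it simply asks whether a cheaper code is statistically plausible.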
When this happens, the hospital may face many thousands of downgraded claims. If it doesn’t like it, then it must appeal.
Herein lies a severe danger for any hospital. The problem is that the cost the RAC incurs to run the audit is thousands of times smaller than what the hospital must spend to refute the DRG coding downgrade.
This is the nature of asymmetric warfare. In military terms, the cost of your enemy’s offense is always much smaller than the cost of your defense. That is why guerrilla warfare is successful against nation states. That is why the Soviet Union and United States decided to stop building anti-ballistic missile (ABM) systems — the cost of defense was disproportionately greater than the cost of offense.
Hospitals face the same problem. Their claims data files are a giant forest in which these big data algorithms can wander around downcoding and picking up substantial revenue streams.
By using artificial intelligence (advanced statistical) methods to review Medicare claims, the RACs can bombard hospitals with so many DRG downgrades (or other claim rejections) that their defenses are quickly overwhelmed.
We should note that the use of these algorithms is not really an “audit.” It is a statistical analysis, but not done by any doctor or healthcare professional. The algorithm could just as well be counting how many bags of potato chips are sold with cans of beer.
If the patient is not an average patient, and the disease is not an average disease, and the treatment is not an average treatment, and if everything else is not "average," then the algorithm will flag the claim, leaving the hospital to defend it. This has everything to do with statistics and correlation of variables and very little to do with understanding whether the patient was treated properly.
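The "flag anything non-average" screen described above can be sketched in a few lines. This is an illustrative assumption about how such a screen might work (a simple z-score outlier test); the function name, the two-standard-deviation cutoff, and the data are all hypothetical:

```python
# Illustrative sketch: a claim is flagged for review whenever any of its
# variables falls outside two standard deviations of the population mean.
# Purely statistical -- no medical judgment is involved anywhere.
import statistics

def flag_non_average(claims, threshold=2.0):
    """Return indices of claims where any feature is a statistical outlier.
    `claims` is a list of dicts mapping feature names to numbers."""
    flagged = set()
    for feat in claims[0].keys():
        values = [c[feat] for c in claims]
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
        for i, v in enumerate(values):
            if abs(v - mean) / stdev > threshold:
                flagged.add(i)  # not average, so throw it out for review
    return sorted(flagged)
```

Note what the test never asks: whether the long stay was medically necessary. A genuinely sick patient and a miscoded claim look identical to a z-score.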
And that is the essence of the problem with big data audits. They are not what they say they are, because they substitute mathematical algorithms for medical judgment.
EDITOR’S NOTE: In Part II of this series, Edward Roche will examine the changing appeals landscape and what big data will mean for defense against these audits. In Part III, he will look at future scenarios for the auditing industry and the corresponding public policy agenda that will involve lawmakers.
About the Author
Edward M. Roche is the founder of Barraclough NY LLC, a litigation support firm that helps healthcare providers fight against statistical extrapolations.