February 3, 2010

RAC-Approved DRG Validation Issues: Who and What’s At Risk Now

By Ernie de los Santos

The RACs recently posted more than 560 of the current 746 MS-DRGs approved for DRG validation, offering review of DRG coding, principal diagnoses, secondary diagnoses, and any diagnosis and procedure codes that could affect the DRG selection. While none of the RACs appear to be approved for medical necessity review so far, they are expected to be approved sometime in "calendar year 2010," according to the RAC Review Phase-in Strategy published by CMS in June 2009.


All four RACs have posted issues approved for DRG validation, but their lists of approved/chosen DRGs vary widely. The RAC for Region D (western states), Health Data Insights (HDI), has posted the most, with 560 DRGs - almost 80 percent of all DRGs. The RAC with the next highest number of posted issues is Connolly Healthcare, the RAC for Region C (south and southeastern states), with 60. The Connolly list appears to be much more selective than the HDI list.


This stark difference is what piqued my curiosity and drove me to perform some statistical analysis to compare the lists and try to answer the question: Why is Connolly pickier than HDI - or what does it know that the rest of us don't?

 

What The RACs Target & What CMS Approves

 

It's not difficult for anyone familiar with the RAC program to understand what the RACs are after: profits. They are private-sector companies contracted by CMS and paid "contingency fees" for detecting and tracking incorrect reimbursements for Medicare services provided to beneficiaries. Some have described the program as "an aggressive form of bounty hunting."


In a February 2008 letter to Rep. Lois Capps in the U.S. House of Representatives, the American Medical Association (AMA) stated its belief that "the RAC program is fatally flawed." The letter goes on to describe the program in no uncertain terms:


"The RAC program is draconian, time-consuming, and devoid of efforts to improve the Medicare system. Moreover, it is based upon perverse incentives. RACs are not compensated by CMS. Instead, they receive a share of the funds recovered from alleged overpayments, otherwise known as ‘contingency fees.' At best, this type of compensation system lends itself to the possibility of questionable audit results, with ‘borderline' claims being pursued and investigated. At worst, it forces physicians, whose time is better spent caring for patients than reviewing old documents and pursuing appeals, to simply yield to unproven RAC claims."


In a July 2009 letter to the AMA, even CMS acknowledged that the RACs operate in a "relatively uncommon" fashion for a government contractor. CMS claimed in that letter to have taken several steps to protect against "overly aggressive reviews."


One such step mentioned in the letter was the "New Issue Review board," which also was touted by Commander Marie Casey, deputy director of CMS's Audit Division, in an interview with RAC Monitor last fall. During that interview, Commander Casey described the process this way:


"The New Issue Review Process requires that the RAC submit a proposal for widespread review in one or more states. CMS then either (a) approves the issue as submitted for review, (b) gives a conditional approval for review in a smaller area, (c) gives a conditional approval with some caveats, or (d) declines to approve the issue as submitted. For example, CMS may decline an issue for automated denial, where CMS thinks the issue might need to be a complex review instead."


Unfortunately, no further details about this process seem to be available.

 

How Does An Issue Get Selected for Review?

 

CMS provides a set of RAC questions and answers on its Web site, albeit a rather limited set of 33. One of the questions covers this subject:


"Feedback:
How will the Recovery Audit Contractors (RACs) determine which claims to review?


Answer:
The RACs will use their own proprietary software and systems as well as their knowledge of Medicare rules and regulations to determine what areas to review."


That answer only raises more questions, so let's look at another source.


The RAC Statement of Work (SOW), published in November 2007, includes this statement about how a RAC selects claims to review:


"The RAC shall adhere to Section 935 of the Medicare Prescription Drug, Improvement and Modernization Act of 2003, which prohibits the use of random claim selection for any purpose other than to establish an error rate. Therefore, the RAC shall not use random review in order to identify cases for which it will order medical records from the provider. Instead, the RAC shall utilize data analysis techniques in order to identify those claims most likely to contain overpayments. This process is called "targeted review". The RAC may not target a claim solely because it is a high dollar claim but may target a claim because it is high dollar AND contains other information that leads the RAC to believe it is likely to contain an overpayment." - SOW, pp 8-9.


Keep that last sentence in mind - targeting a claim with a high dollar value IS allowed, as long as there is information available for the RAC to use as evidence to support a suspicion that an overpayment may have occurred.



How Does An Issue Get Approved by CMS for RAC Review?

 

There are no published criteria and no known format or template for how a RAC submits an issue to CMS for approval. The SOW mentions a "validation process," but offers no details:


"Once the RAC has chosen to pursue a new issue that requires complex or automated review, the RAC shall notify the PO of the issue in a format to be prescribed by the PO. The PO will notify the RAC which issues have been selected for claim validation (either by CMS or by an independent RAC Validation Contractor). The RAC shall forward any requested information in a format to be prescribed by the PO. The PO will notify the RAC if/when they may begin issuing medical record request letters (beyond the 10 test claims) and demand letters on the new issue. The RAC shall not issue any demand letters on issues that have not [been] approved by CMS." - SOW, p 21.


So we are still in the dark about this process. However, we do have lists of what already has been approved by CMS, and we have the list of all the DRGs, so perhaps we can compare the two, use what we know about the dollar amounts involved and see what falls out of our analysis - which is exactly what I have done. (Please be aware that for this article, I use the term DRG to mean MS-DRGs.)


I present the results of my analysis here, but leave the readers to draw their own conclusions.

 

Methodology & Assumptions

 

The assumption I make in this analysis is that the RACs are looking for profits. Also, since they are paid a percentage of the dollar amounts recovered from the denials they document, it stands to reason that they will be interested in denials of claims involving high dollar amounts. High dollar amounts are reimbursed for DRGs with high relative weights. Relative weights are assigned to each DRG by CMS based on its assessment of a facility's expenditure of resources to care for a patient with the respective diagnosis. Each facility also is assigned a "Medicare blended rate," which is a dollar figure derived by CMS each year. At the risk of oversimplification, the amount reimbursed to a facility for a given DRG is the facility's blended rate multiplied by the DRG's relative weight.
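To make that arithmetic concrete, here is a minimal sketch in Python. The blended rate and relative weight below are hypothetical figures chosen for illustration, not actual CMS numbers.

    def drg_reimbursement(blended_rate, relative_weight):
        """Approximate Medicare payment: facility blended rate x DRG relative weight."""
        return blended_rate * relative_weight

    # Hypothetical example: a facility with a $5,500 blended rate billing a DRG
    # with a relative weight of 1.8 would be reimbursed roughly $9,900.
    print(drg_reimbursement(5500.00, 1.8))  # 9900.0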


CMS publishes two pieces of information we can use to compare our lists with the list of all DRGs: one, the relative weights of each DRG, as described above and published by CMS each year; and two, the national ranking of each DRG, which is determined by the number of discharges for each DRG nationwide (with the rank of No. 1 corresponding to the highest number of discharges for FY 2009).


To analyze the lists of approved issues, I first compared the distribution of relative weights for the DRGs in each list with the same distribution for all DRGs. The figures can be compared more easily by using percentages instead of raw figures, which is what I have done below.


I sorted the lists by relative weight (RW), then grouped them into six ranges of values. From there, it was a simple matter to calculate the percentage of the DRGs in each list that fall within a specific range of RWs.
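Here is a minimal sketch of that grouping in Python, assuming a simple list of relative weights; the six bin edges shown are illustrative, not the exact ranges used in my tables.

    from bisect import bisect_right

    def distribution(values, edges):
        """Percentage of values falling into each of the len(edges)+1 ranges."""
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[bisect_right(edges, v)] += 1
        return [round(100.0 * c / len(values), 1) for c in counts]

    # Hypothetical relative weights for a handful of DRGs:
    rws = [0.8, 0.95, 1.4, 1.7, 2.2, 2.6, 3.1, 4.5]
    # Six ranges: <1.0, 1.0-1.5, 1.5-2.0, 2.0-2.5, 2.5-3.0, >3.0 (illustrative)
    print(distribution(rws, [1.0, 1.5, 2.0, 2.5, 3.0]))
    # [25.0, 12.5, 12.5, 12.5, 12.5, 25.0]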


Since only two of the RACs had a significant number of approved DRG issues at the time of this writing, I included only those two lists (Connolly's and HDI's) in the analysis.

 

Analysis of the DRG Relative Weights

 

Here is a table of the results from analyzing the relative weights. The findings are expressed in percentages:

[Table 1: Distribution of DRG relative weights, in percentages, for all DRGs, the HDI list and the Connolly list]


Notice that for the list of all DRGs, 73 percent have a relative weight of less than 2.0 and only 13 percent have a relative weight of more than 3.0.


Also notice that the HDI list has an almost identical distribution to the list of all DRGs. This is to be expected, however, since HDI already has been approved for 80 percent of all DRGs. The Connolly list is another story, though, showing statistically significant differences.


 


For Connolly, 30 percent of the DRGs they targeted have RWs above 3.0, and 55 percent have RWs above 2.0.


The differences become more obvious and alarming if we graph these figures as curves.


[Chart 1: Distribution curves of DRG relative weights for all DRGs (black dotted line), the HDI list and the Connolly list (red line)]


The graph shows the distribution curves for the three lists. As mentioned before, the curves of the HDI list and the list of all DRGs are almost identical.


The graph clearly shows, however, that the Connolly list is heavily skewed toward higher-paying DRGs by more than a 2-to-1 ratio (except around the 1.0 to 2.0 RW range). When the red line appears above the black dotted line, it means Connolly is targeting those DRGs heavily; when the red line appears below the black dotted line, it means Connolly targets few DRGs in that range.


For example: 31 percent of all DRGs appear in the range marked "RW < 1.0" (meaning DRGs with RWs between 0 and 1.0), while only 4 percent of the DRGs in Connolly's list appear in that range. Connolly must not be very interested in the DRGs with such small weights. On the other hand, 28 percent of Connolly's targeted DRGs have RWs of 2.0 to 3.0, while only 13 percent of all DRGs appear in that range. Connolly seems VERY interested in those DRGs.


The graph indicates that the Connolly list is heavily weighted toward the higher-paying DRGs and skewed away from the lowest-paying DRGs.

 

How Many Claims Are At Risk?

 

It is tempting to think, "perhaps not many of our claims will be hit, since not too many of our claims have those high-paying DRGs." Not so fast! Let's look at the other figures I mentioned above - the discharge ranks.


Keep in mind that the RACs are likely to go "fishing" in the biggest ponds first; that is, they are likely to target areas with the largest number of claims to be reviewed. As the CMS statements and publications make clear, the RACs are not restricted from going after the high-value claims as long as they can show evidence to support their suspicions that money can be recouped from them.


So let's compare the rankings of the approved DRGs in each list with the rankings for all DRGs. Again, the figures can be compared more easily by using percentages instead of raw figures.


First, I sorted the lists by the published rankings (in number of discharges for FY 2009), then I grouped them into six ranges of rankings, and finally I calculated the percentage of the DRGs in the lists within each range.
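The same distribution() sketch from the relative-weight analysis can be reused here, this time over rank values; again, the ranks and bin edges below are hypothetical illustrations.

    # Reusing distribution() from the earlier sketch, over discharge rankings.
    ranks = [12, 88, 150, 240, 310, 455, 520, 610]   # hypothetical DRG ranks
    print(distribution(ranks, [100, 200, 300, 400, 500]))
    # [25.0, 12.5, 12.5, 12.5, 12.5, 25.0]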


 

Analysis of the DRG Rankings


This time, let's just go straight to the graph; keep in mind there's no need to add the HDI list since it is so similar to the list of all DRGs:


[Chart 2: Distribution curves of DRG discharge rankings for all DRGs and the Connolly list]

 

Of course, the list of all DRGs shows a steady figure (13 percent) of DRGs in each range - except the last, which contains a smaller number of DRGs. If a RAC shows no preference in the DRGs it targets, its list should produce a similarly flat curve. Then again, we know that RACs want to be efficient, and therefore their curves likely will be skewed to show a preference for claims with more cases (or in this graph, more discharges). Connolly's curve does not disappoint in that respect.


The curve for the Connolly list shows a clear preference by the Region C RAC: 44 percent of the DRGs it targets are ranked between 1 and 300. Now to be honest, I did expect them to be targeting more DRGs in the 1-100 range. But then... it's only January. And this trend made me stop and think.

 

Small & Medium Size Facilities Now at Risk

 

Something occurred to me when I saw this second graph, and I thought about it for a while: Connolly will be casting a wide net. Since the rankings of its approved DRG issues are skewed into the 100-300 range, smaller facilities are at risk - perhaps more than has been expected.


I can recall speaking to many CEOs and CFOs of small facilities this past year and hearing this refrain time and time again: "...they'll probably go after the big guys first, so it may be a while before they get around to me; I'm just a little guy."


Perhaps all those "little guys" might want to take a look at a list of their top 25 DRGs and see how many of them are on the Connolly list. Perhaps the RACs already have done that.


Get the PDF versions of the two graphs, including the raw data, HERE.

 

What Does All This Mean?

 

Does this mean that Connolly is only going after high-dollar claims? No, but they certainly have signaled a predilection for efficiency, if nothing else. Plus, since CMS approved so many DRGs with higher RWs, there must be plenty of evidence of erroneous payments in those DRGs.


Does it mean that HDI is not targeting high-dollar claims? No; since their list is so all-inclusive, there is no way to tell from this data alone where their intentions lie.


Does it mean providers should only worry about high-dollar claims? Certainly NOT - but claims with RWs between 1.0 and 3.0 are clearly targets, which shouldn't surprise anyone.


What we do know is that the RACs are in full swing now and coming to a facility near you very soon - any day now, in fact, if they're not there already.

 

Conducting Your Own DRG Validations

 

Look at their lists (especially Connolly's) and compare them to your top 25 DRGs. Then start auditing. The sooner you find problems internally, the sooner you can fix them - and the fewer claims the RAC can target for recoupment.
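As a minimal sketch of that comparison, assuming hypothetical DRG lists for both your facility and the RAC:

    # Intersect your facility's top DRGs with a RAC's approved list (both sample data).
    my_top_25 = {"470", "871", "291", "292", "190"}      # your highest-volume DRGs
    connolly_approved = {"871", "291", "853", "064"}     # RAC-approved DRG issues
    at_risk = sorted(my_top_25 & connolly_approved)
    print(at_risk)  # DRGs to audit first: ['291', '871']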


eduTrax® and RAC University just conducted a complete Webinar on this subject, providing a template and multiple examples of how to do just that. See how to order "Dissecting a DRG like a RAC: Do-It-Yourself DRG Validation" online or on CD, HERE.


About the Author


Ernie de los Santos is the chief information officer at eduTrax®. He joined the company at its inception and has been responsible for the creation, development and maintenance of the eduTrax® portals - a set of Web sites devoted to providing knowledge, resources and compliance aids for U.S. healthcare professionals involved in revenue cycle management.
