January 4, 2011

Analyzing E/M Codes for RAC Risk

By Frank Cohen

Recently I received a call from one of my physician clients advising me that they were undergoing a RAC audit of – now get this – E/M codes.

This is one of the first RAC audits I have encountered in which E/M codes were the focus of a review. According to my client, the RAC auditors requested 120 charts for CPT codes 99214, 99233 and 99223, all of which are at the top of the CERT list of improperly paid procedure codes.

This isn’t much of a secret, and looking through the 2009 CERT report, it is pretty clear what codes will be common targets during the coming years. What isn’t clear, however, is how a practice can conduct an initial analysis of its E/M code utilization in order to assess the level of risk it may face. Risk, interestingly enough, is a term that, without the concept of utility, has little meaning or value. For example, a large practice wouldn’t blink at a RAC overpayment demand of, say, $30,000, while this same amount could prove catastrophic to a solo primary care doctor. Utility creates a boundary for risk based upon the ability of the practice to absorb a consequence. 

Assessing Risk

When trying to assess the risk of an audit or a review, a practice needs to put itself in the auditor’s shoes. What are they looking at when trying to determine the potential for recovery? When it comes to E/M coding, this is really a bit easier than one might think, and like the concept of utility, it involves building a set of boundaries that helps define the meaning of risk.

Many of us have been involved in conducting basic E/M utilization analyses. Usually, we take a look at the utilization of a particular E/M code as a ratio of all E/M codes in a category and then compare that to some control group: most often the CMS national physician database. We then take a look at the distributions (or graphs) in order to get some idea of how similar or different we are from the control group.
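To make that concrete, below is a minimal Python sketch of that basic ratio calculation. The practice counts and the national percentages are invented placeholders for illustration only; they are not actual CMS figures.

```python
# Minimal sketch of a basic E/M utilization analysis: each code's share of
# all codes in its category, compared against a control group.
# All counts and percentages below are invented placeholders, not CMS data.

practice_counts = {
    "99211": 10, "99212": 50, "99213": 290, "99214": 450, "99215": 200,
}

total_visits = sum(practice_counts.values())

# The practice's distribution: each code as a ratio of the whole category
practice_dist = {code: n / total_visits for code, n in practice_counts.items()}

# Hypothetical national distribution for the same specialty (placeholders)
national_dist = {
    "99211": 0.04, "99212": 0.12, "99213": 0.45, "99214": 0.32, "99215": 0.07,
}

for code in sorted(practice_counts):
    print(f"{code}: practice {practice_dist[code]:5.1%} "
          f"vs national {national_dist[code]:5.1%}")
```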

Let’s consider an example. Suppose we are an internal medicine practice and we want to get an idea of how our established office visit codes (99211 through 99215) look compared to those of our peers (assuming that the control group does, in fact, do a decent job of representing our peers). The first step most of us take is to create a distribution graph such as the one below:

Table 1: Established Office Visits


Now here’s the question: does this comparison demonstrate a risk for the practice? Initially, most of us would say “yes,” based solely on the fact that utilization of the 99214 and 99215 codes appears to be higher than that of the comparison group. What if I told you, though, that this practice reported only 10 established office visit codes? Now do you consider the above distribution to pose a risk? Most likely not, since the best the RAC might be able to do in this case is recover a few dollars. When looking at risk through the lens of utility, we have to consider the volume of codes reported within each category, and you can’t do that simply by looking at graphs. Nor do the variances alone tell us; without the tabular data, they still don’t account for frequency.

Variance graphing does, however, give us a much better idea about the differences between similar codes within a category, as shown below:

Table 2: Variance of Established Office Visits


In this case, it is pretty easy to see that the majority of the impact comes from the 99215 code. Once again, however, if there were only 10 E/M codes reported, it would be a stretch to identify this as any kind of significant risk.


So what do we do to assess the risk of audit or review? We have to factor in both the variance and the frequency and come up with some type of metric that lets us make a reasonable call. To do this, there are two steps we have to go through. The first is to measure the actual variance between the practice’s distribution and that of the control group, which in this case comes from the CMS database. We start with a basic frequency distribution table like the one below:

Table 3: Utilization distribution comparisons


In this table, we simply compare the utilization distribution for each of the practice’s established visit procedures with the national distribution figure for the same specialty. The next step is to create a proportion distribution calculation that will give us an average RVU value for both the practice and the national average. We do this by multiplying the individual percentages by the RVU for each procedure, thus generating the sums as follows:

Table 4: Average RVU Values


Now we can see that the practice reported an average RVU for all established office visits of 2.93, while the national average for the same code set is 2.41. This means that the practice is reporting utilization of established E/M codes at around 122 percent of the national average. Remember, this doesn’t mean that the practice is utilizing established visit codes more often; rather, it is shifting utilization of established office visits toward the higher-level codes. Put another way, this practice’s distribution of established office visit codes is about 22 percent higher than that of its peers.
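Here is a short Python sketch of that proportion-distribution calculation. The RVU values and distributions below are illustrative placeholders (substitute the actual RVUs for your specialty and fee schedule year), so the output approximates rather than reproduces the 2.93 and 2.41 figures above.

```python
# Weighted-average RVU for a category: sum over codes of (share x RVU).
# RVUs and distributions below are illustrative placeholders only.

rvus = {"99211": 0.6, "99212": 1.2, "99213": 2.0, "99214": 3.0, "99215": 4.1}

practice_dist = {"99211": 0.01, "99212": 0.05, "99213": 0.29,
                 "99214": 0.45, "99215": 0.20}
national_dist = {"99211": 0.04, "99212": 0.12, "99213": 0.45,
                 "99214": 0.32, "99215": 0.07}

def average_rvu(dist, rvus):
    """Proportion-weighted average RVU across the code category."""
    return sum(dist[code] * rvus[code] for code in dist)

practice_avg = average_rvu(practice_dist, rvus)
national_avg = average_rvu(national_dist, rvus)

# Variance: how far the practice's average RVU sits above the national average
variance = practice_avg / national_avg - 1
print(f"practice {practice_avg:.2f}, national {national_avg:.2f}, "
      f"variance {variance:.0%}")
# With these placeholder numbers the variance works out to about 22%
```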

Frequency of Visits

Let’s get back to the issue of frequency now, since that is the second part of the equation. To calculate a risk value, we multiply the total volume of established office visits by the variance (22 percent) between the practice and the national average. If, for example, this practice reported only 10 established office visits, then its “risk score,” as I call it, would be 2.2. If, on the other hand, it reported a total of 3,560 established office visits during the data period, its “risk score” would be 783: a figure that is not only significantly higher, but also within the “high risk” range.

From a scoring perspective, this is how I determine severity of risk (a short sketch of the calculation follows the list):

•    High Risk: > 100
•    Medium Risk: 50 – 100
•    Low Risk: 20 – 50
•    No Risk: < 20
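A minimal Python sketch of the scoring, assuming the score is simply volume multiplied by the fractional variance and that the tiers follow the list above (with “high risk” requiring a score strictly greater than 100):

```python
# Risk score = category volume x variance from the national average,
# mapped to the tiers listed above.

def risk_score(volume: int, variance: float) -> float:
    """Variance is fractional, e.g. 0.22 for a 22 percent variance."""
    return volume * variance

def risk_tier(score: float) -> str:
    """Map a risk score to the tiers above (High is strictly > 100)."""
    if score > 100:
        return "High Risk"
    if score >= 50:
        return "Medium Risk"
    if score >= 20:
        return "Low Risk"
    return "No Risk"

for volume in (10, 3560):
    score = risk_score(volume, 0.22)  # the 22 percent variance from the example
    print(f"{volume:>5} visits -> score {score:.1f} ({risk_tier(score)})")
#    10 visits -> score 2.2 (No Risk)
#  3560 visits -> score 783.2 (High Risk)
```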

In our case, with a variance of 22 percent, the practice would need to report at least 455 established office-visit codes to be considered at high risk. But what if the variance were to change? Suppose, instead, that the variance is 9.6 percent. This would mean that the practice’s distribution is shifted right by about that figure, or put another way, that it has around a 10 percent “overutilization” tendency toward the higher codes. In this case, the practice would need to report more than 1,040 established office-visit codes to be considered at high risk. On the other end, a practice with a variance of 65 percent would need to report only about 154 established visit codes to be considered at high risk.
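Put as arithmetic, the volume needed to cross the high-risk threshold is just 100 divided by the fractional variance; a quick sketch:

```python
import math

def min_high_risk_volume(variance: float, cutoff: float = 100.0) -> int:
    """Smallest category volume whose risk score strictly exceeds the cutoff."""
    return math.floor(cutoff / variance) + 1

for v in (0.22, 0.096, 0.65):
    print(f"variance {v:.1%}: {min_high_risk_volume(v)} visits")
# variance 22.0%: 455 visits
# variance 9.6%: 1042 visits
# variance 65.0%: 154 visits
```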


In summary, the idea is to balance volume against variance in order to determine the potential risk of audit or review. Remember, RACs are paid a commission on what they recover, so the greater the probability that they will find more, the greater the potential risk to the practice. This type of model not only allows practices to identify where potential risk lies, but also gives them the tools they need to set priorities and allocate resources for internal reviews.

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support and process improvement.

Contact the Author

frank@frankcohengroup.com

To comment on this article, please go to: editor@racmonitor.com


