Two thousand thirteen looks to be a bang-up year for audit recoveries, at least from CMS's perspective. According to its own data, the agency is on track to recoup over $3 billion in payments made to healthcare providers. And with the addition of a fifth RAC and the consolidation of the MAC and ZPIC into the new UPIC, 2014 should result in even higher recoveries.
But what's good for the goose is not always good for the gander, and in this case, the provider turns out to be the gander. Last year I handled around 40 post-audit analyses and conducted pre-audit risk analyses for nearly 2,000 physicians, and in all but one of them (and that one is very important), the focus was on the provider. Whether the subject was a physician, nursing home, hospital, or something else, both our pre-audit risk assessments and the payers' (government and private) audits were based on data generated by that provider.
It's that one that really caught my attention and created a flurry of research for me between Christmas and New Year's Day. That one, instead of focusing on the provider, focused on the patient (or patients, as it were). When this happened, I found myself acting more like a hedgehog than a fox (see Philip Tetlock, "Expert Political Judgment"). See, the hedgehog is highly focused in one direction and thinks and acts based on that narrow approach. The fox, on the other hand, has many models in her brain and as such, has a more diverse approach to thinking and problem-solving. As Tetlock posits, "Hedgehogs tend to flourish and excel in environments in which uncertainty and ambiguity have been excluded, either by actual or artificial means," while the fox tends to flourish in more complex and unpredictable environments.
Hmmm. I think I have gone a bit tangential here, so back to the point. In working post-audit statistical analyses, I have fallen into a fairly predictable pattern: look at the universe and the sample frame, test the appropriateness of the stratification and the sample size, validate the randomness, and so on. Don't get me wrong; each case is unique.
But around the complexities and differences of these analyses is a model that works pretty well. In the one case I mentioned earlier, where the focus was on the patients, I was caught a bit by surprise, since these were not the kinds of metrics I was used to seeing. For example: on average, how many E/M visits were reported per patient per year, and what did the distribution look like (as opposed to the distribution and number of E/M visits per provider per year)? I know the concept of utilization per patient population is not new, but this approach was, and it went even deeper. For example, the analysis looked at the number of times a patient was seen by the physician for any reason and counted the number of 99214 visits during that time as a percentage of the total. In this case, one patient visited the physician 23 times in a year (that's almost once every two weeks), and every one of those visits was billed as a 99214; a highly unlikely (and, according to the auditor, unbelievable) scenario.
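To make the metric concrete, here is a minimal sketch of that per-patient calculation. The claim data, patient identifiers, and code mix below are entirely made up for illustration; real analyses would run against a provider's actual claim lines.

```python
from collections import defaultdict

# Hypothetical claim lines for one provider over one year: (patient_id, cpt_code).
claims = [
    ("P1", "99214"), ("P1", "99214"), ("P1", "99213"),
    ("P2", "99214"), ("P2", "99212"),
    ("P3", "99214"), ("P3", "99214"), ("P3", "99214"), ("P3", "99214"),
]

visits_per_patient = defaultdict(int)              # total visits per patient
code_counts = defaultdict(lambda: defaultdict(int))  # per-patient code tallies
for patient, code in claims:
    visits_per_patient[patient] += 1
    code_counts[patient][code] += 1

# Share of each patient's visits billed as 99214 -- the metric the audit used.
share_99214 = {
    p: round(code_counts[p]["99214"] / total, 2)
    for p, total in visits_per_patient.items()
}
print(share_99214)  # {'P1': 0.67, 'P2': 0.5, 'P3': 1.0}
```

A patient like "P3" above, with 100 percent of visits at 99214, is exactly the kind of pattern the auditor flagged.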
The audit continued, estimating the amount of time (from the RUC study) that a patient spent with that provider (rather than looking at total minutes for the provider) and what procedures were reported. It was a fascinating review, and as I was analyzing all of the data, it occurred to me how the data could be used to improve quality of care and reduce costs (even though that is not the purpose of this article). For example, one patient reported nearly 30 ambulance transport charges in a given month; after we verified that this was true, I worked with the practice to figure out the root cause, and through some creative care initiatives, that number decreased to zero during the next month. It's pretty cool to think that while mitigating risk, we might also be able to contribute to quality.
So now, while not everyone will agree with me, I am back to being a fox (cognitively speaking, of course). I once again have many models in my brain, and I am looking at utilization in a completely different way as it pertains to pre-audit risk assessment and post-audit analysis. I fully expect to see more of these patient-centered data runs presented as evidence for recoupment, even when the visits were justified. And instead of using sampling to show that the provider exercised proper judgment and documented appropriately to support both the code and the necessity of the service or procedure, I am concerned that auditors will use patient data as the core of their analyses. The problem is that this presents a greater challenge when it comes to defending the audit. Currently, we can pull a simple random sample of claims or claim lines to assess risk (or defend a finding). Under the patient-centered model, we would have to pull a random sample of claims (or lines) for each patient for that same provider; this amounts to a multi-stage cluster sample, which makes everything more difficult and more complicated all around.
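The two-stage pull described above can be sketched as follows. Everything here (patient counts, claim identifiers, sample sizes) is invented for the example; the point is only the structure: patients are sampled first as clusters, then claims are sampled within each selected patient.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical claims keyed by patient for a single provider.
claims_by_patient = {
    f"P{i}": [f"P{i}-claim{j}" for j in range(random.randint(5, 30))]
    for i in range(1, 51)
}

# Stage 1: simple random sample of patients (the clusters).
sampled_patients = random.sample(sorted(claims_by_patient), k=10)

# Stage 2: simple random sample of claims within each sampled patient.
sampled_claims = {
    p: random.sample(claims_by_patient[p], k=min(5, len(claims_by_patient[p])))
    for p in sampled_patients
}

for p in sampled_patients:
    print(p, sampled_claims[p])
```

Contrast this with the current approach, which is a single `random.sample` over the provider's full claim universe; the extra stage is what complicates both the extrapolation math and the defense.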
My advice? Start looking internally a bit differently. Run some analyses on procedures or services per patient and try to identify what defines an extreme situation. There are statistical calculations you can make to identify extreme values, although I am not yet sure whether they apply here. I am going to keep a close eye on this to see whether it was an isolated event or whether it becomes a more common pattern. If it turns out to be the latter, well, you can count on hearing from me again soon.
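As one example of such a screen (the article does not name a specific calculation, so the interquartile-range fence below is just one standard choice, run on made-up per-patient visit counts):

```python
import statistics

# Hypothetical annual visit counts per patient for one provider.
visits = [2, 3, 1, 4, 2, 5, 3, 2, 23, 4, 3, 2, 1, 6, 3]

q1, q2, q3 = statistics.quantiles(visits, n=4)  # quartile cut points
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr  # Tukey's rule for flagging high outliers

outliers = [v for v in visits if v > upper_fence]
print(f"upper fence = {upper_fence}, outliers = {outliers}")
# upper fence = 7.0, outliers = [23]
```

The 23-visit patient stands out well past the fence, much like the 23-visit, all-99214 patient in the audit; whether a fence like this is the right definition of "extreme" for utilization data is exactly the open question.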
About the Author
Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support, and process improvement.
Contact the Author
To comment on this article please go to firstname.lastname@example.org