As if being a healthcare provider wasn’t tough enough, payers continue to use bad statistics to challenge how evaluation and management (E&M) services are billed.
While not a new payer tool, the Comparative Billing Report, or CBR, is being used by more payers than ever before. In the last month alone, I have received dozens of CBRs addressed to my physician clients from both private and government payers. This may just be a coincidence, but I think it is related to the May 5 release of the 2014 Centers for Medicare & Medicaid Services (CMS) physician utilization file.
CBRs, as stated above, can come from private payers as well as from Medicare and Medicaid. For Medicare, they are issued by a government contractor called eGlobalTech. According to its website:
“Comparative Billing Reports (CBRs) are educational tools administered by the Centers for Medicare & Medicaid Services (CMS). They are developed and disseminated under contract by eGlobalTech, a woman-owned federal services firm based in Arlington, Va.”
They also state the following about the CBR:
“The CBR is just one tool that CMS uses in its ongoing efforts to protect the integrity of the Medicare Trust Fund. Other efforts include:
- Educating providers about Medicare’s coverage, coding, and billing rules;
- Reviewing claims before they are paid to assure compliance with coverage, coding, and billing rules (called prepayment review); and
- Reviewing claims after they are paid (called post-payment review) to identify and collect overpayments made to providers.”
Cigna sees it a bit differently. Their letter begins like this:
“We routinely review claims data to help ensure coding and payment accuracy. To underscore the importance of correct practices, we have implemented an evaluation and management (E&M) correct coding program. Through this program, we review available Cigna billing data to compare a healthcare professional's E&M coding practices with their peers within the same primary specialty and in the same community.”
In one of the eGlobalTech CBRs recently issued to one of my clients, they discuss the issue of “average minutes per day.” To calculate this, they use the minutes found in the official American Medical Association (AMA) Current Procedural Terminology® (CPT) code manual.
While I don’t have the space in this article to go over the pros and cons of those time estimates, it is important to at least understand their shortcomings. In general, there has never been any evidence published to support those values. I have searched the world over and never found a single study that validates “typical times,” as they are most often referred to by payers. In fact, I have read numerous studies that authoritatively invalidate those time estimates. Here’s a hint: they all end in a “0” or a “5.”
As a statistician, I can tell you with authority that there are eight other digits available, and it’s no coincidence that they are not used. Also, the AMA never provides a range of values, only a point estimate, so highly efficient practitioners are lumped into the same basket as highly inefficient providers. In the same manner, providers with older and sicker patients are lumped into the same category as providers whose patient population is significantly younger and healthier.
So they take these terribly inaccurate time estimates and multiply them by the frequency with which the code is reported. Now, this may work for estimating exposure, but as stated above, it is useless for estimating actual time spent seeing patients. Then they compare your minutes per day for these procedures to your peer group’s minutes per day, even though they have no clue about the overall health or needs of those patients represented in the peer population.
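To make the arithmetic concrete, here is a minimal sketch in Python of the kind of calculation the CBR appears to perform. The code numbers, “typical” minutes, and visit counts below are illustrative placeholders of my own, not actual AMA CPT values or real claims data:

```python
# Illustrative sketch of a CBR-style "average minutes per day" metric.
# The "typical times" and visit counts below are made up for demonstration;
# they are not actual AMA CPT values or real claims data.

TYPICAL_MINUTES = {  # hypothetical point estimates -- note they end in 0 or 5
    "99213": 15,
    "99214": 25,
    "99215": 40,
}

def minutes_per_day(visit_counts, days_worked):
    """Multiply each code's frequency by its 'typical' time, then average per day."""
    total_minutes = sum(TYPICAL_MINUTES[code] * n for code, n in visit_counts.items())
    return total_minutes / days_worked

# A provider reporting these volumes over 220 working days:
provider = {"99213": 1200, "99214": 2000, "99215": 400}
print(round(minutes_per_day(provider, 220), 1))
```

Note the flaw this exposes: because each code carries a single point estimate, two providers with identical volumes get identical “minutes per day,” regardless of how fast or slow they actually work or how sick their patients are.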
They do the same with metrics such as “average total services per year rendered to your beneficiaries,” which is a bit more reliable; however, it is nothing more than a test for your location on the “wall of shame” rather than any kind of reliable statement of efficiency or productivity. One of my favorites is that, under each table, they write the following: “a t-test was used in this analysis, alpha = 0.05.”
This is supposed to be some kind of evidence that they employ proper statistics in order to validate their findings, but p-values are highly controversial, and without additional metrics such as sample size and frame size, they, too, are of little to no value. And even worse, the t-test is the wrong test! The results are being reported as proportions or ratios, which calls for a proportions test, not a t-test, which is used to compare the means of two different data sets.
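For readers who want to see the difference, here is a brief Python sketch of the two-proportion z-test that ratio data actually calls for. The counts are hypothetical examples I made up; nothing here comes from an actual CBR:

```python
# Two-proportion z-test: an appropriate test when comparing the share of
# visits billed at a given E&M level against a peer group's share.
# All counts below are hypothetical, not real claims data.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2, using the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Provider bills 60 of 100 visits at a given level; peers bill 5,000 of 10,000.
z, p = two_proportion_z(60, 100, 5000, 10000)
print(round(z, 2), round(p, 3))
```

The point of the sketch is simply that the right machinery exists and is not hard to apply; a payer that reports “a t-test was used” on proportion data has reached for the wrong tool.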
In the Cigna CBR, they produced a set of E&M utilization graphs: those graphs that pretty much anyone who has ever worked in a compliance, billing, or coding role has seen. But here, Cigna is purporting to use “available Cigna billing data to compare a healthcare professional’s E&M coding practices with their peers within the same primary specialty and in the same community.”
Now, I’m not an attorney, but even so it would be nice for them to define “available Cigna billing data” or “primary specialty” or “same community.” In looking at the report, I couldn’t figure out if Cigna was using only their private pay data or data from CGS (Cigna Government Services), a Medicare FI. And did “primary specialty” mean the assigned CMS specialty code, or was it some internal indicator used by Cigna? One problem here is this: if they are using only primary specialties, the data could be quite inaccurate. For example, are they comparing a spine surgeon to a general orthopedic physician? To what peer group are they comparing a pediatric cardiologist? A pediatrician or a cardiologist?
Cigna also has a column called “performance index,” yet nowhere in the document is this index defined. They also don’t seem to be clear with regard to their expectations. In the body of one letter, it stated the following: “according to this data, for one or more of the CPT codes evaluated, your E&M submission patterns differed from your peer group by at least 0.5 standard deviations.” What a confounding statement! In fact, I am completely puzzled by it, since 0.5 standard deviations is probably within the margin of error. For that, I would say something like “congratulations! One or more of your E&M codes was found to be within the margin of error when compared to your peers! Let the party begin!” Think about it: an outlier doesn’t even start until you reach three standard deviations, so at one-half, you’d think they would be sending out gift certificates to Starbucks.
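To put that 0.5-standard-deviation threshold in perspective, here is a short Python sketch, assuming for illustration only that the peer metric is roughly normally distributed, of what fraction of a peer group such a cutoff would flag:

```python
# Fraction of a (hypothetically normal) peer population lying at least
# k standard deviations from the mean -- i.e., how many providers a
# "differs by at least k SD" rule would flag.
import math

def fraction_flagged(k):
    """P(|Z| >= k) for a standard normal Z."""
    return 2 * (1 - 0.5 * (1 + math.erf(k / math.sqrt(2))))

print(round(fraction_flagged(0.5), 3))  # ~0.617: over 60% of peers "differ"
print(round(fraction_flagged(3.0), 4))  # ~0.0027: a conventional outlier cutoff
```

Under a normal model, more than 60 percent of perfectly ordinary providers “differ” from the mean by at least 0.5 standard deviations, while a conventional three-standard-deviation outlier rule flags fewer than 0.3 percent. That is the gulf between Cigna’s threshold and anything a statistician would call unusual.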
The bottom line is this: payers are using these CBRs to attempt to intimidate providers into submission. But if you want to intimidate someone, then you should at least do so with reliable facts, and not this type of garbage. And for any practice worth its salt, you don’t need some contentious and statistically unacceptable report to encourage them to conduct regular internal reviews of their billing and coding. At least for my clients, this is a given. The downside is that these distractions take away from valuable time that should be spent in appropriate administrative and clinical activities.
In some cases, you can just throw these reports in the garbage. But for some, you may have to respond or conduct a self-audit. Before you do, however, I would demand some information from the payer. For example, I would want to see exactly how the calculations were conducted, and this would include not only their point estimates, but variances as well. I would also ask to see the results of their statistical tests, whether they be t-tests, proportion tests, or whatever else they claim to use. And I would want a copy of the data set being used for both your analysis as well as that for the peer group. If this is really about education and transparency, as they tout it to be, then I can’t see why a payer wouldn’t be willing to produce this data. But they won’t, because it is really about something completely different.
It’s like this: if it’s a problem, then the payer should do something about it. If it’s not a problem, then allow the providers to continue to police themselves.
Just as in any other business, physicians don’t want to give back money they were paid for providing services to their patients. I take offense that they call these educational tools that are used to enhance “cooperation and transparency.”
For the practice, take notice: payers are continuing to find new ways to try to bully and intimidate you into coding lower than what is appropriate. How do I know that? Well, I have never seen a CBR that encouraged a practice to review its coding and billing because it was statistically significantly below what was expected. From a statistical perspective, since you are compared against a peer average, roughly half of the codes would fall below that average, but I never see that. Do you?
My good friend Henry P. Shaw would say “if what you’re doing is working, keep doing it. If not, then do something different.” This applies to this situation. If your internal audits show that your coding is justified (that is, you meet documentation guidelines and medical necessity), then keep doing what you are doing. If not, then get some training and education so that you can do it right.
And that’s the world according to Frank.
About the Author
Frank Cohen is the director of analytics and business intelligence for DoctorsManagement, a Knoxville, Tenn.-based consulting firm. Mr. Cohen specializes in data mining, applied statistics, practice analytics, decision support, and process improvement.
Contact the Author
Comment on this Article