Hot topics come and go – and healthcare is no exception.
A few years ago, predictive analytics (PA) was making its rounds, and I remember seeing several vendors at national healthcare conferences touting their use of this advanced statistical technique. In fact, at one conference, a booth backdrop listed “predictive analytics” among the vendor’s service offerings.
I was both skeptical and curious, so I asked the person at the booth to help me understand how they used PA techniques, and what the benefits were to the end-user. This guy wasn’t even sure what I was asking, and when we got around to the meat of it, he admitted that they didn’t “really” employ PA techniques, but mostly estimated based on benchmark data.
I was disappointed, but certainly not surprised. I’m not saying that nobody is employing PA; I’m just saying that not everyone who claims to be really is. See, PA is part of the larger field of artificial intelligence (AI), and while I have been employing AI techniques for many years now, I am aware that most people either don’t understand what AI is about or have a mistaken impression of what it is and how it applies to those of us in this industry.
To begin, we should understand that just like “predictive analytics” was the sexy new term for a few years, artificial intelligence seems to be what everyone is abuzz about now. What’s important is that all in our industry, and particularly those in leadership roles, understand what AI is and how it impacts the work we do.
I work closely with Sean Weiss, a partner at Doctors Management and a major player in the area of billing and coding compliance. He tells me that the most common questions he has been getting from his clients include: “How does this (AI) impact our practice or organization, and what do we need to do to prepare for the future?”
For most in the healthcare industry, the history of AI revolves around clinical aspects of care, such as in the fields of genomics, diagnostics, and disease identification and association. But there is a lot more to healthcare than just the clinical side of things, and for many of us, AI’s impact will be in the non-clinical areas of finance, recordkeeping, documentation, and compliance.
From the non-clinical perspective, we see AI being used in areas involved in driving down the costs associated with the delivery of patient care. One key example is AI associated with electronic medical/health record (EMR/EHR) systems through the analysis of “unstructured” data, such as the textual components of medical records and chart documentation. For example, one company, CloneSleuth, uses components of AI to examine the text in a chart to determine whether record “cloning” or replication is occurring for a given provider, or even across a set of providers. They do this by examining the written text in a patient’s chart (excluding specific items such as names, dates of service, etc.) to see whether there is a high degree of similarity within the unstructured data for multiple records. In one way, it is similar to some of the plagiarism applications that colleges and universities use to ensure that a student’s thesis is original in form, style, and text. We also see this kind of natural language processing in computer-assisted coding applications that convert natural language (i.e. “pregnant patient complains of nausea and vomiting”) into specific code sets (i.e. ICD-10-CM code O21.9).
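CloneSleuth’s internal methods aren’t public, but the general idea of flagging excessive similarity across unstructured note text can be sketched in a few lines of Python using the standard library. Everything below (the sample notes, the 0.85 threshold) is a hypothetical illustration of the technique, not the vendor’s actual approach:

```python
from difflib import SequenceMatcher

def similarity(note_a: str, note_b: str) -> float:
    """Return a 0..1 similarity ratio between two chart notes."""
    return SequenceMatcher(None, note_a.lower(), note_b.lower()).ratio()

def flag_cloned_pairs(notes, threshold=0.85):
    """Compare every pair of notes and flag pairs above the threshold."""
    flagged = []
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            score = similarity(notes[i], notes[j])
            if score >= threshold:
                flagged.append((i, j, round(score, 2)))
    return flagged

# Hypothetical de-identified note text (names, dates of service already stripped).
notes = [
    "Patient presents with cough and congestion. Lungs clear. Advised rest and fluids.",
    "Patient presents with cough and congestion. Lungs clear. Advised rest and fluids.",
    "Follow-up for hypertension. BP 128/82, stable on current medication.",
]
print(flag_cloned_pairs(notes))  # the identical pair (0, 1) is flagged
```

A production system would use more robust similarity measures (e.g. shingling or TF-IDF vectors) and scale far beyond a pairwise loop, but the core idea is the same as the plagiarism checkers mentioned above.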
One tool in the world of AI is the machine learning algorithm. As the name suggests, these are algorithms that “learn” over time from data streams. In essence, they “tweak” themselves based on a feedback loop that identifies when predictions were correct versus incorrect. For example, I have been doing research on building AI-based algorithms that can predict the likelihood (or probability) that a given claim has been coded (prospective) or billed (retrospective) in error by analyzing a large number of variables found on the claim form (i.e. HCFA 1500 or UB-04). To make the initial call, I use a predictive algorithm, such as a neural network or support vector machine (SVM), trained on a large quantity of claims data. Then I turn the flagged claims over to a certified coder/auditor, who audits them and returns the findings. Let’s say that out of 100 claims I identified as being coded or billed in error, the auditor finds that I was correct for 90 but incorrect for 10 (and remember, s/he doesn’t just give a pass/fail finding, but rather addresses the specific reasons for each). Those results are fed back into the algorithm, which uses them to better distinguish improper claims from proper ones. In this way, the algorithm “learns” from its mistakes and “tweaks” itself to catch those issues in the future. With an automated feedback loop, the algorithm continues to “learn” until it reaches a point of optimal performance. One of the benefits of these types of AI techniques is that they are “smart” enough to capture rule changes and incorporate them automatically into their framework.
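To make the feedback loop concrete, here is a minimal sketch of an error-prediction model that updates itself from auditor findings. It uses a simple online logistic regression rather than a neural network or SVM, and the claim features (a high-level E&M code, modifier 25, multiple units) are hypothetical stand-ins for the many claim-form variables, not an actual payer rule set:

```python
import math

# Hypothetical claim-form features; a real model would use many more.
FEATURES = ["high_level_em", "modifier_25", "units_gt_1"]

class ClaimErrorModel:
    """Toy online logistic regression predicting P(claim billed in error)."""

    def __init__(self, lr=0.1):
        self.w = [0.0] * len(FEATURES)  # one weight per claim feature
        self.b = 0.0                    # intercept
        self.lr = lr                    # learning rate

    def predict(self, x):
        """Return the estimated probability that the claim is in error."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def learn(self, x, audited_label):
        """Feedback loop: the auditor's finding (1 = in error, 0 = correct)
        nudges the weights toward the observed outcome."""
        err = audited_label - self.predict(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * err

model = ClaimErrorModel()
# Simulated audit findings: in this toy data, claims combining a high-level
# E&M code with modifier 25 are the ones the auditor keeps confirming as errors.
training = [([1, 1, 0], 1), ([1, 1, 1], 1), ([0, 0, 0], 0), ([0, 0, 1], 0)] * 200
for x, label in training:
    model.learn(x, label)

print(round(model.predict([1, 1, 0]), 2))  # high estimated error probability
print(round(model.predict([0, 0, 0]), 2))  # low estimated error probability
```

Each auditor finding plays the role described above: a confirmed error pushes the relevant weights up, a false positive pushes them down, and over many cycles the model’s scores settle toward the auditor’s judgment.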
For me, however, the big problem with AI is how we define it. Most of us understand it to be “artificial intelligence,” and many fear that AI is going to result in a mass replacement of human talent. I, for one, reject that theory, for the most part. Don’t get me wrong, I do see how some robotics projects are replacing humans who perform standardized and repetitive tasks; however, the machines don’t build, program, or maintain themselves – that requires human intervention. Because of this, rather than using the term “artificial” in association with intelligence, I prefer the term “augmented” intelligence. Augmented intelligence is a paradigm designed to assist or “augment” the work that healthcare administrators and staff do, rather than replace the person. If engaged properly, non-clinical AI efforts will help reduce workloads and increase productivity by targeting the areas of a practice that are least productive.
For example, it could take an individual auditor between 1,500 and 2,500 hours (0.72 to 1.2 FTEs) to audit 10,000 charts mixing evaluation and management (E&M) encounters with surgical or diagnostic procedures: a daunting and expensive task, to say the least. And when the auditor is done, he or she may have calculated a repayment that you didn’t want, or one that may not have been necessary. Depending upon the method used for sample selection, you may also have missed a huge compliance risk mitigation opportunity. Using predictive analytics, which incorporates AI technology, you could review the claims for those 10,000 charts in a matter of seconds, identifying those most likely to be billed in error or subject to an external audit, depending on the purpose of the algorithm. Take the latter case: if I could tell you which procedure codes or modifiers physicians were most likely to be audited for, you could pull a more focused, high-risk sample and proactively audit it yourself. In this case, you may have needed to audit only 100 encounters rather than 10,000 to get the same result, with a huge increase in efficiency and a huge reduction in costs. This approach to risk-based auditing is significantly more efficient and accurate than, say, random probe auditing, and a whole lot more cost-effective than paying someone for all the hours that would be required to audit every claim. In fact, using AI techniques is exactly how Medicare and private payors identify high-risk audit targets.
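The sampling step described above amounts to a simple ranking exercise: score every claim, then audit only the top of the list. In this sketch the scores are random placeholders standing in for real model output (the claim IDs are made up as well):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical scored claims: (claim_id, predicted error probability).
# In practice the probabilities would come from a trained model such as
# the SVM or neural network discussed earlier.
claims = [(f"CLM{i:05d}", random.random()) for i in range(10_000)]

# Risk-based sample: audit only the 100 highest-risk claims
# instead of all 10,000.
high_risk = sorted(claims, key=lambda c: c[1], reverse=True)[:100]

print(len(high_risk))        # 100 encounters to hand to the auditor
print(high_risk[0])          # the single riskiest claim
```

The contrast with random probe auditing is the point: a random 100-chart sample mostly contains clean claims, while a model-ranked sample concentrates the auditor’s hours on the encounters most likely to carry risk.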
While AI has had a history of benefits on the clinical side of healthcare, its application to the non-clinical areas of healthcare, which include administration, management, human resources, revenue cycle management, and perhaps most importantly, risk management and regulatory compliance, can be just as beneficial. In my opinion, at least for the foreseeable future, AI simply isn’t smart enough to replace humans, particularly within the healthcare industry, but it is smart enough to make our jobs easier, more efficient, and more cost-effective. One of my favorite quotes in this area comes from Alan Perlis (April 1, 1922 – Feb. 7, 1990), the computer scientist and professor at Purdue University, Carnegie Mellon University, and Yale University, as well as the first recipient of the Turing Award. He said: “A year spent in artificial intelligence is enough to make one believe in God.”
And that’s the world according to Frank.
Listen to Frank Cohen report this story live during the next edition of Monitor Monday, 10-10:30 a.m. EST.