I’m not sure how many people paid attention to the sad news over Memorial Day weekend, but Dr. John Forbes Nash Jr., a brilliant mathematician, economist, and one of those few people who made substantial contributions to the way our society works, died in a tragic automobile accident on the New Jersey Turnpike. He and his wife were ejected from a taxi as it crashed while attempting to pass another vehicle. You can Google “John Nash” and get all the background information you want, so I am not going to get into those details here. What I really wanted to discuss is how his work in game theory has ultimately proven invaluable in the healthcare field, particularly when it comes to complex decision modeling and, of all things, compliance risk.
Dr. Nash was perhaps most famous for the Nash equilibrium, which occurs when every player in a game knows the other players’ strategies and no individual player can gain by unilaterally changing his or her own. In essence, Dr. Nash showed that every finite game, played once or repeatedly, has at least one such equilibrium, and that the optimal decision for any player depends on the strategies (or decisions) of the other players in the game.
Let’s take a simple example: rock-paper-scissors, a game commonly played by children (and, of course, by adults who have had a bit too much to drink!). In this game, each player hides a hand behind his or her back and, on the count of three (or something like that), produces it in one of three configurations. A closed fist represents a rock; holding the hand open and flat represents a piece of paper; and extending the index and middle fingers represents a pair of scissors. Paper covers rock (win), scissors cuts paper (win), and rock smashes scissors (win). So, if you show a rock and I show paper, you lose and I win. If you show a rock and I show scissors, you smash my scissors: you win and I lose. And so on.
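The win rule above is a simple cyclic relation, and it can be sketched in a few lines of Python (the function name and string labels here are illustrative, not from the article):

```python
# The cyclic win rule described above: each shape beats exactly one other.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def round_result(mine: str, yours: str) -> str:
    """Return 'win', 'lose', or 'draw' from the first player's perspective."""
    if mine == yours:
        return "draw"
    return "win" if BEATS[mine] == yours else "lose"

# Paper covers rock, so showing paper against your rock wins.
print(round_result("paper", "rock"))  # prints "win"
```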
So, what’s the best play? Well, if we had reached a Nash equilibrium, I would know in advance which of the three you would produce each round, and you wouldn’t change that strategy for personal gain. So, for example, if I knew you were going to produce a rock, I would produce paper. But if you knew I was going to produce paper in response to thinking that you were going to produce a rock, then your best move would be to produce scissors to cut my paper. But then, of course, the equilibrium is broken. (In fact, rock-paper-scissors has no equilibrium in pure strategies; the only Nash equilibrium is to play each shape at random, one-third of the time.)

This leads to the famous prisoner’s dilemma, a stalwart of game theory. In this game, there are two players, and after committing some crime together and getting arrested, they each face a decision that will affect their freedom over a course of time. If player one rats out player two, then player one goes free and player two gets five years, and vice versa. But if they both rat out each other simultaneously, they both get five years; and if they keep their mouths shut, they each get six months. The optimal decision? Again, it depends on how well you know the other person’s strategy. If you think the other person will rat you out, then you rat them out first. If you are confident they will keep quiet, then ratting them out sets you free, while keeping quiet costs you six months. The point is that when there are multiple players in the game, getting to the optimal solution is often very difficult, if not impossible. In any case, it always depends on knowing (or accurately guessing) what the other players are thinking and what they are likely to do.
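The dilemma’s logic can be made concrete with a small payoff table and an equilibrium check. This is a minimal sketch using the exact sentences described above (the strategy labels and function names are my own); note that with these particular numbers betrayal is only weakly preferred, whereas the classic formulation uses strictly ordered sentences so that mutual betrayal is the unique equilibrium:

```python
from itertools import product

SILENT, RAT = "silent", "rat"

# Years in prison for (player 1, player 2) under each strategy profile,
# taken from the article's numbers (lower is better).
payoffs = {
    (SILENT, SILENT): (0.5, 0.5),  # both keep quiet: six months each
    (RAT, SILENT):    (0.0, 5.0),  # player 1 rats: free; player 2 gets five years
    (SILENT, RAT):    (5.0, 0.0),
    (RAT, RAT):       (5.0, 5.0),  # mutual betrayal: five years each
}

def is_nash(profile):
    """True if neither player can strictly shorten his or her own
    sentence by unilaterally switching strategies."""
    for player in (0, 1):
        for alternative in (SILENT, RAT):
            deviation = list(profile)
            deviation[player] = alternative
            if payoffs[tuple(deviation)][player] < payoffs[profile][player]:
                return False
    return True

equilibria = [p for p in product((SILENT, RAT), repeat=2) if is_nash(p)]
# Mutual silence is never an equilibrium here: either player can go
# free by ratting the other out first.
```

Running the check shows that (silent, silent) fails the test while (rat, rat) passes: each prisoner’s best guess about the other drives both toward betrayal, exactly the dynamic described above.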
OK. Six hundred and twenty words later, let’s get to the point of this article. I have studied game theory for most of my adult life, and I teach courses in complex decision theory based on exactly this science. Healthcare is a complex industry, defined by numerous interdependencies and a large number of players. A physician sees a patient, but who pays the bill? Usually it’s some combination of the patient and a third party. And who administers the claim? Often a different party altogether. Then there is the electronic health record (EHR) used to generate the code (and sometimes the bill), and the lab or imaging center that contributes additional information toward a diagnosis. Each of these “players,” if you will, has skin in the game, and each has some strategy to “win” (to get paid, or to avoid paying) in each patient encounter. That’s a lot of players, and unfortunately, we simply don’t know each other’s strategies. Even more unfortunately, because we are competing for a shrinking pot of gold, each has an incentive to change his or her strategy for personal gain; it’s the opposite of the Nash equilibrium.
Let’s look at this a bit more linearly, shall we? A physician sees a patient and submits a claim to the payer. The payer has at its disposal some 200 reason codes and 670 remark codes that it can use to deny or underpay the physician for the services he or she provides. Very few physicians, if any, really understand the complexity of the millions of edits each payer can apply when paying or denying a claim. As such, the physician does his or her best to anticipate, perhaps from experience, what defines a “clean claim” for a given payer. Remember, there are hundreds of payers, thousands of plans, and millions of edits applicable to a given physician. The payer, in turn, may have these claims processed through a third-party administrator, which gets a piece of the savings it generates below the payer’s expected payout. The administrator has its own strategy to win, defined by how much it earns, and it is certainly incentivized to find creative ways to save money (i.e., not to pay) and even more incentivized to keep those strategies secret.
Let’s say the payer pays the physician some expected amount for the services provided. Then, two years later, the physician gets a letter saying that, for one reason or another, the payer has determined that the claim shouldn’t have been paid, and it wants the money back. There are a dozen auditing entities, and while they all share a similar goal (returning as much money as possible to the trust fund), each has its own strategy. A Recovery Auditor (RAC), for example, earns a commission on the overpayments it finds, so it tends to go after high-value targets, often identified through the results of other studies, such as the Comprehensive Error Rate Testing (CERT) study. CERT tells the auditors which codes, DRGs, devices, beneficiary types, etc. are most likely to have been paid in error, giving them specific targets that produce high-value results. The Zone Program Integrity Contractor (ZPIC), on the other hand, investigates fraud and abuse, and in most cases a targeted healthcare provider has been referred by some third party (perhaps a RAC?). Since ZPICs investigate fraud and abuse, they have the power to impose civil penalties under the False Claims Act (FCA), with penalties amounting to as much as $11,000 per claim. So while the RAC is interested in RVU values (because it can convert RVUs at risk into dollars at risk), the ZPIC is more interested in frequency, because its recoveries are tied to claims volume.
Now we are at 1,300 words, and there is so much more to say, but so little space! Bummer! The point is, we rarely if ever see a Nash equilibrium in our industry, and much of the risk providers face when it comes to compliance is tied to this idea of complex decision theory, or trying to decide what actions to take based on a best guess as to the strategies of the other players in the game. I am certain that Dr. Nash, when he wrote his famous article “Non-Cooperative Games” in 1951, did not predict it would apply to healthcare.
I often imagine a world in which the Nash equilibrium is the rule, where the provider and the payer work together to ensure the most efficient and highest quality of care in the world. It’s a world where the claims cycle is so simplified that providers actually get paid what they are promised, where auditors tell you in advance what they are going to be looking for so the provider has a chance to correct errors and move forward with cleaner claims. It’s a world where the rules and regulations are simple enough that everyone knows each other’s strategy to win. And it’s a world where winning is defined as a more efficient, less expensive, and higher-quality healthcare system. Yet, alas.
And that’s the world according to Frank.
About the Author
Frank Cohen is the director of analytics and business intelligence for DoctorsManagement. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support, and process improvement. Mr. Cohen is also a member of the National Society of Certified Healthcare Business Consultants (NSCHBC.org).