EDITOR’S NOTE: The following is a summary of a recent broadcast segment on Monitor Monday by the author.
Generally speaking, people do a terrible job evaluating risk. Let’s look at a real-world example. There have been some recent accidents involving automated vehicles, including one in which the vehicle struck and killed a pedestrian.
We asked Monitor Monday listeners to assume we could predict the annual number of vehicle-related deaths in the U.S. if all cars were automated. What would be a “tolerable” number of deaths? That is, what is the largest number of deaths you would permit in a year before banning automated cars as “unsafe”? Think about it for a moment. What number would you choose, and how would you reach that conclusion? Here are the figures from the poll of listeners asked that very question:
| Tolerable deaths per year | Respondents |
|---|---|
| 1 | 38% |
| 40 | 31% |
| 400 | 12% |
| 4,000 | 9% |
| 30,000 | 4% |
| 100,000 | 3% |
The question is designed to get at an interesting quirk of perception. When an automated car kills someone, our brains tend to conclude that automated cars are dangerous and should be banned. But each year in the U.S., somewhere around 40,000 people are killed in motor vehicle collisions, and many of those deaths are the result of human error. While 38 percent of respondents indicated that one death is too many, and fully 69 percent would have agreed to ban automated cars if they resulted in 40 deaths a year, considered scientifically, we are better off with automated cars as long as they kill fewer people than already die on the roads. In short, if they kill fewer than 40,000 people a year, automated cars represent an improvement over the status quo.
What does this have to do with healthcare risk? We often focus on the risk associated with action while discounting the risk associated with inaction. Change seems riskier than the status quo. For example, when I’ve recommended that a client voluntarily disclose a situation to the contractor, I am often asked, “Won’t we get in trouble?” The answer, of course, is that the self-disclosure may result in further questions.
In my experience, that additional scrutiny is quite rare, but it is certainly possible. Focusing on the risks associated with the disclosure, however, is terribly misguided. A better question is whether the risk associated with the voluntary disclosure is higher or lower than the risk of staying quiet. In fact, the analysis is even more complicated than that.
Often, we must compare the small risk of an event that has a very high cost against the larger risk of an event with a very small cost. For whatever reason, our brains seem to place more weight on the risks associated with change than on the risks that come with the status quo. We also seem to undervalue the probability of something happening if it hasn’t happened already. The fact that you have not yet had a car accident, or have not yet been caught breaking a Medicare rule, offers little insight into the risk of either in the future.
Another quirk of the way our brains work is that we feel worse about losing something we already hold than we feel good about gaining something new. Michael Lewis’s book The Undoing Project does a great job of explaining the research by Daniel Kahneman and Amos Tversky behind this insight. (If you’ve never read a Michael Lewis book, you should! While I preferred The Big Short, Moneyball, and The Blind Side, The Undoing Project is still a very good read.) This loss aversion can cause the leaders of an organization to hesitate to pay back money already received. Now, to be clear, I think my job as a lawyer is to help a healthcare organization keep money when it has provided a valuable service to a patient. The desire to keep that money isn’t flawed. But it is necessary to analyze the situation objectively, without emotion or bias.
Most compliance decisions involve balancing risk. After you conduct a review of your coding, you have to decide whether to share the results. If you do, a recipient may choose to take the results and use them in a whistleblower case. But failure to share the results may cause people to be unduly suspicious.
Either choice comes with risk. Balancing that risk is a key part of running a compliance program. When you evaluate that risk, make sure you do it objectively and rationally.
Just as automated cars may seem scarier than current cars, some compliance actions may seem scarier than they are.
Listen to David Glaser every Monday on Monitor Mondays, 10-10:30 a.m. EDT.