A Tyranny of Algorithms: Part III: Litigating Algorithms and Artificial Intelligence

EDITOR’S NOTE: This RACmonitor series examines a new type of auditing in healthcare: the use of algorithms. In the first article in this series, “Artificial Intelligence is Coming to Medicare Audits – Is Your Legal Team Obsolete?” Mr. Roche reported on statistical sampling being supplanted by more complex methods based on machine learning and artificial intelligence. In his previous article, Mr. Roche reported on the recent study published in the journal Science about the algorithm used by Optum that reportedly has a built-in racial bias. In this article, Mr. Roche discusses litigating algorithms and artificial intelligence.

Litigating against algorithms and artificial intelligence is becoming a big business.

Apple recently introduced a new credit card, designed with an improved user interface. The card is integrated into the Apple operating system, has a more flexible payment policy than most, and gives the user instant reporting.

I applied for the Apple Card through the Wallet application on my iPhone and was granted an account within about three seconds. A few days later, a beautiful all-white card arrived, without any numbers on it, only an embedded chip. Granting credit is supposed to require deliberation: my credit history and record of on-time payments had to be examined. How was the decision made so fast?

Obviously, an algorithm made the decision, not people. A by-product of artificial intelligence (AI) is that humans are less and less necessary. Compared to a computer, humans are stupid, unreliable, forgetful, and prone to mistakes. They also take too many coffee breaks, occasionally go on strike, call in sick so they can attend baseball games, or generally have a bad attitude. As the title of one book put it, “Humans Need Not Apply.”

Last week, Apple received a complaint: the algorithm used to assign credit limits allegedly discriminates against women. A husband and wife applied, but the wife was given a lower credit limit. It appears that the variable “sex of applicant” was not even included as a factor in the algorithm. So naturally, Goldman Sachs, the venerable Wall Street powerhouse that issues the card, was confused.

The Apple Card incident is only one of a number of problems involving algorithms. Amazon’s AI recruiting tool was said to discriminate against women. Facebook’s ad system was charged by the U.S. Department of Housing and Urban Development, which claimed it was limiting who sees housing ads. Likely, Facebook was doing exactly what its advertising customers wanted, and paid, it to do: target ads at customers with the specific demographic and economic profile the advertiser has learned is most likely to purchase the goods or services being sold. Anyone who has purchased ads from Facebook quickly realizes the amazing specificity possible with its ad system.

Must all advertising be to everyone? Should it be illegal to use targeted advertising? Does an American have a “right” to see an ad, even if it is not appropriate for them and they are unlikely to benefit from it? This would be a peculiar type of logic since many pay-for-play services are based on the proposition that the consumer pays to avoid ads!

Some have charged that AI facial recognition technology is more accurate on European faces than on African or Asian faces. There are other legal problems as well. In Brady v. Maryland, 373 U.S. 83 (1963), the U.S. Supreme Court ruled that, to ensure a fair trial, the prosecution must turn over all evidence that might exonerate the defendant, instead of hiding it. Yet in Lynch v. State, 260 So. 3d 1166 (Fla. Dist. Ct. App. 2018), a suspect had been identified by investigators using a facial recognition system, and when defense attorneys tried to have the other possible matches generated by the same algorithm admitted at trial as Brady material, the request was denied.

If AI reads X-rays or lab results and generates a series of recommendations, but makes a mistake, can the algorithm be sued for malpractice? If so, who pays? The doctor? The hospital? The software vendor? One wonders how Medicare will pay for diagnoses made by machines instead of humans. Does an AI system need to take out a liability insurance policy? Will any insurance company write such a policy?

The problem is worldwide. In the Netherlands, the government used a risk-profiling system to detect social security, employment, and tax fraud. A total of 1,000 individuals were targeted for increased surveillance; they sued, alleging discrimination.

In Michigan, an algorithm disqualified individuals from food assistance if it found an outstanding felony warrant. Some persons were improperly matched to warrants that were not theirs. This led to a class-action suit. The court in Bauserman v. Unemployment Ins. Agency, 503 Mich. 169 (2019) ruled that the algorithm violated the provisions of the Supplemental Nutrition Assistance Program (SNAP), as well as the due process owed to the plaintiffs. The benefits were reinstated.

In Idaho, the Medicaid program used an algorithm to allocate benefits for persons with developmental disabilities. Another class-action suit was launched. The court in K.W. ex rel. D.W. v. Armstrong, 298 F.R.D. 479 (D. Idaho 2014) ordered the state to disclose the algorithm and to change it so that recipients would receive larger allocations. Note that in the private sector, the nuts and bolts of AI and algorithms generally are hidden behind a wall of trade secrecy.

In Louisiana, a Mr. Hickerson was convicted of gang-related criminal conspiracy and sentenced to 100 years in prison. Law enforcement had a risk-assessment algorithm that generated social-networking graphs mapping Hickerson’s links to suspected gang members. The problem: when investigators ran it, the system found no such links. Hickerson claimed this material should have been presented as Brady material, but the court in State v. Hickerson, 228 So. 3d 251 (La. Ct. App. 2018), denied the motion to admit this AI evidence.

In case after case, the parties involved treat AI algorithms as credible. In some, defendants attempt to have such evidence excluded from trial; in others, they attempt to have it admitted. In still other cases, the AI algorithm is disparaged as discriminatory or inaccurate.

There is a tendency in the United States to want the legislative branch to become more involved in every aspect of life. To some, this is good; to others, anathema. It is to be expected that, with AI and algorithms, lawmakers have started looking for ways to extend the federal government and its thick fog of regulatory control into the software of this innovative new technology.

For example, the Algorithmic Accountability Act of 2019 would empower the Federal Trade Commission to regulate algorithms throughout the U.S. economy. The bill was designed “to direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”

The New York City Council recently passed an algorithmic transparency bill titled “A Local Law in relation to automated decision systems used by agencies.” Its summary notes that “this bill would require the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by agency automated decision systems.”

Where is all this going?

Algorithms and AI are spreading throughout decision-making in both the public and private sectors. Algorithmic decision-making is more predictable, faster, and more cost-effective, and it leads to innovation and efficiency. There is a race among nations to see which will dominate this new wave of technology. Although the United States has been the center of much innovation, China has made dominance in AI a critical national objective. There is a risk that the stifling specter of litigation and the dead hand of government regulation will impede innovation and lead to the loss of this important technology. These are important considerations, but in the United States, there is no single entity empowered to set a coherent national policy.

On the litigation side, there are some difficult issues that must be solved. How can an algorithm that does not use sex as a variable discriminate on the basis of sex, or race, or sexual preference, or anything else? The original concept of discrimination is that some factor about a person (age, color, criminal record, disability, generation, sex, religion, social class) leads to that person being treated worse than others. It was originally an act with intent (mens rea), something done by a human, not a machine.
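
Part of the answer lies in proxy variables: a model that never sees a protected attribute can still act on a feature that correlates with it. Below is a minimal sketch of the effect in Python, using entirely synthetic data and an invented credit-limit formula; it does not represent any real lender’s model. Sex is excluded from the inputs, yet the outputs differ by sex because the “proxy” feature carries the same information:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Hidden protected attribute -- never given to the model.
sex = rng.integers(0, 2, size=n)  # 0 = male, 1 = female

# A facially neutral feature (say, spending mix across merchant
# categories) that happens to correlate with sex. Purely synthetic.
proxy = rng.normal(loc=sex.astype(float), scale=0.5)

# Income distributed identically across both groups.
income = rng.normal(loc=60_000, scale=15_000, size=n)

# A credit-limit formula that uses only income and the proxy feature.
credit_limit = 0.10 * income - 2_000 * proxy

for code, label in [(0, "men"), (1, "women")]:
    print(f"average limit, {label}: ${credit_limit[sex == code].mean():,.0f}")
```

On this synthetic data, the average limit for women comes out roughly $2,000 lower than for men, even though “sex of applicant” never appears as an input. The algorithm is blind to sex, and discriminatory in outcome all the same.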

How can an algorithm be a form of discrimination? It is not a human, and it may have no mens rea. But in the United States, we have developed the concept of inferred discrimination, known in the law as disparate impact: discrimination can be found even when it is unintentional or unconscious. This might be called “implicit” discrimination. The standard is not unlike that of the Inquisition, in which victims were burned at the stake even though they were unaware of their sin and had committed no intentional act. The practical result is that companies using algorithms can be held responsible for discrimination even where there is no intent.
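
How such implicit discrimination is detected is itself mechanical. One common yardstick, borrowed from employment law, is the EEOC’s “four-fifths” guideline: if any group’s favorable-outcome rate falls below 80 percent of the most-favored group’s rate, adverse impact is presumed, with no showing of intent required. A short illustrative calculation follows; the approval data are invented:

```python
import numpy as np

def disparate_impact(outcomes: np.ndarray, groups: np.ndarray) -> dict:
    """Each group's favorable-outcome rate divided by the best group's rate.

    Under the EEOC's four-fifths guideline, a ratio below 0.8 is treated
    as prima facie evidence of adverse impact -- intent is irrelevant.
    """
    rates = {str(g): float(outcomes[groups == g].mean())
             for g in np.unique(groups)}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented approval decisions from a hypothetical black-box model.
approved = np.array([1, 1, 1, 0, 1, 0,   # group A: 4 of 6 approved
                     1, 0, 0, 0, 1, 0])  # group B: 2 of 6 approved
group = np.array(["A"] * 6 + ["B"] * 6)

print(disparate_impact(approved, group))  # {'A': 1.0, 'B': 0.5}
```

Group B’s ratio of 0.5 falls well below the 0.8 threshold, so a plaintiff could point to these numbers alone; the burden would then shift to the algorithm’s operator to justify the disparity.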

Although it is unlikely to happen elsewhere, in the United States we are heading into a future in which algorithms will be tweaked into tools for social engineering rather than strictly rational decision-making.


Edward M. Roche, PhD, JD

Edward Roche is the director of scientific intelligence for Barraclough NY, LLC. Mr. Roche is also a member of the California Bar. Prior to his career in health law, he served as the chief research officer of the Gartner Group and as chief scientist of the Concours Group, both leading IT consulting and research organizations. Mr. Roche is a member of the RACmonitor editorial board as an investigative reporter and is a popular panelist on Monitor Mondays.
