Senate finds Medicare Advantage insurers use AI to deny elderly care
Many of the country’s largest Medicare Advantage (MA) insurers are using artificial intelligence (AI) and other predictive technologies to increase savings by denying care to elderly patients, a new U.S. Senate report has revealed.
The report, published by the Senate Permanent Subcommittee on Investigations, details how three MA insurers (UnitedHealthcare, Humana, and CVS) are deploying AI, algorithms, and other predictive technologies to restrict seniors’ access to care, particularly post-hospital recovery services. These three companies control nearly 60% of the MA market, which has grown to cover more than 30 million seniors in the U.S.
Increasing savings by denying admissions
The aggressive deployment of AI by CVS to help cut costs in its MA division reveals a troubling pattern. While keeping its denial rates stable, the company dramatically increased the number of patient cases subjected to AI-powered prior authorization reviews, a 58% jump that far outpaced its enrollment growth of 40%; with more cases reviewed at the same denial rate, the absolute number of denials climbed accordingly.
The Senate report confirms CVS’s strategic use of AI to maximize its savings from “denying prior authorization requests its MA beneficiaries submitted for inpatient facilities.” While these savings reached $660 million in 2018, primarily from “denied admissions,” CVS sought to further boost them by programming its AI systems to specifically target cases with “a significant probability to be denied.”
Initially, CVS tested a more lenient AI model designed to maximize approvals, but the results would have been alarming to anyone prioritizing the company’s bottom line: CVS discovered that the AI approved post-acute care at ten times the rate of hospital admissions. To course-correct, CVS labeled as “mistakes” any approvals the AI granted that the company believed should have been denials instead. Under pressure to cut costs, CVS abandoned plans to reduce prior authorization requirements, with internal documents stating the lost savings would be “too large to move forward.”
By April 2021, CVS had launched a post-acute analytics automation initiative that leveraged AI to help reduce spending on skilled nursing facilities. While initial projections suggested modest savings of $4 million annually, within seven months the company dramatically revised its estimates, predicting the AI program would save more than $77 million over three years by limiting patient access to care.
Automating patient care denials
UnitedHealthcare implemented AI automation in its care approval process, leading to a sharp rise in its post-acute care denial rate, from 10.9% in 2020 to 22.7% in 2022. Internal documents reveal that in April 2021 the company approved “Machine Assisted Prior Authorization,” which, while still requiring human verification, reduced case review times by up to 10 minutes, according to internal committee reports.
The following month, an internal committee went further, preliminarily approving an automated healthcare economics model. The system processed cases faster but also increased denial rates by surfacing contradicting evidence that reviewers had previously missed. During this period, UnitedHealthcare transferred its MA post-acute care services to naviHealth, whose predictive algorithm has been associated with AI-driven care denials.
Internal documents show that by December 2022, despite knowing its automation systems were increasing denial rates, UnitedHealthcare continued expanding its AI initiatives. At this time, company employees explored using machine learning to predict which claim denials patients might contest.
Based on the gathered evidence, the Senate investigation suggests that UnitedHealthcare’s goal was likely not to reduce incorrect denials, but rather to “identify cases which may result in an appeal” and “take action earlier.” In other words, the company wanted to better defend its denials rather than improve its accuracy in issuing them.
A link to algorithm-driven cost-cutting
Humana’s rollout of training focused on costs and denial strategies increased the insurer’s rejection rate for long-term care hospital stays by 54% between 2020 and 2022. While the Senate report did not offer conclusive evidence that AI played a role in boosting this rejection rate, it did connect Humana to automated technology which, according to the report, can be used to help “determine the needed extent of post-acute care for a patient.”
The Senate report highlighted Humana’s standard for “Corporate-Augmented Intelligence,” updated in 2022, which stated Humana would “ensure responsible use” of predictive technologies such as AI by “having the clinicians who use them retain decision-making authority in order to exercise appropriate levels of informed judgment in clinical matters.”
However, another Humana standard, “Ethical Usage of Augmented Intelligence,” made it clear that third parties working with Humana are permitted to use artificial intelligence when providing services to the insurer. One such third party, the contractor naviHealth, maintained a contractual relationship with Humana throughout the period covered by the Senate investigation. That agreement included the use of a “clinical support tool,” LiveSafe, which was permitted to access Humana’s own health information and report back to Humana “at an aggregate level.” LiveSafe is now known as nH Predict, the same algorithm connected to UnitedHealthcare and “linked to automated denials of post-acute care.”
Based on this information, the report stresses that Humana’s training materials alone would be enough to cause an increase in denials, but also concludes:
“These documents indicate that Humana was investing in automating technologies and was aware of their potential for abuse. The portion of the Ethical Usage guidelines devoted to third parties specifically referred to ‘Artificial Intelligence’ rather than ‘Augmented Intelligence,’ suggest[ing] that naviHealth, as a contractor, may have had greater latitude to exclude humans from the decision-making process.”
Putting humans back in charge of patient care decisions
With the largest MA insurers increasingly deploying AI and other predictive technologies to deny coverage for post-hospital care, more seniors are faced with the impossible choice of paying out of pocket for necessary treatment or going without. The impact on healthcare providers has been severe, as well. Dozens of health systems have announced plans to stop accepting MA patients, citing both the prior authorization process and AI-driven denials as top reasons.
The Senate report makes several recommendations to address this worsening issue. Regarding AI, it calls for stronger oversight of how insurance companies use AI when making healthcare decisions. It also recommends expanding regulations for insurers’ utilization management committees “to ensure that predictive technologies do not have undue influence on human reviewers.” The report notes that the Centers for Medicare & Medicaid Services (CMS) currently lacks crucial rules separating AI predictions from patient care decisions.
“There is a role for the free market to improve healthcare delivery to America’s seniors,” the report concludes, “but there is nothing inevitable about the harms done by the current arrangement.”