Health plans are a primary target of AI legislation in 2025

States are starting to enact AI legislation, and guardrails for health plans are among their primary goals.
By Jennifer Bresnick
Nov 24, 2025, 12:37 PM

It’s hard to know where to begin when considering how to legislate artificial intelligence. Innovation always moves faster than policy, and AI has evolved so quickly that lawmakers never really had a chance to set the rules before it got away from them. 

With the current federal administration more interested in preventing legislation around AI than enacting it, it’s up to state governments to start laying a legal foundation for AI in healthcare and elsewhere to safeguard users and ensure that technology is being used in an ethical, equitable, and appropriate manner.  

But where to start? AI is already being used across the healthcare ecosystem, from ambient listening in exam rooms to administrative offices, where it handles everything from coding to claims to conversations with patients.

For state legislatures, it makes the most sense to follow the money. 

There’s always been a great deal of sensitivity around how AI will fit into the clinical environment, and clinicians themselves have been extremely active (and fairly conservative) in self-governing the adoption of clinical decision support tools and other solutions that touch patient care. 

But when it comes to the financial side of healthcare, particularly how health plans are leveraging AI tools to make coverage determinations for beneficiaries, it’s pretty much been a free-for-all.  

With profits at stake and shareholders to satisfy, health plans have been rapidly adopting AI algorithms for claims adjudication, prior authorizations, and other “operational efficiencies” that can end up hurting consumers if applied incorrectly. 

A recent survey by the National Association of Insurance Commissioners (NAIC) found that 84% of health plans surveyed in 16 states are using AI or machine learning in some capacity, primarily for tasks such as utilization management, prior authorizations, and fraud detection.  

And while 92% of these plans claim to have governance programs in place, there has already been a string of high-profile lawsuits alleging that improper use of AI tools to review the necessity of medical services and adjudicate claims has caused material harm to beneficiaries.

So it makes sense that many of the AI laws enacted in 2025 are focused on the financial side of the industry – and it’s encouraging to see that interest in governing the use of AI among health plans is a bipartisan affair. 

In Arizona, for example, Representative Julie Willoughby (R), an emergency room trauma nurse, authored legislation amending an existing law to require that before an insurer can deny a claim based on medical necessity, the medical director must individually review the denial and exercise “independent medical judgment.” The medical director “may not rely solely on recommendations from any other source,” including artificial intelligence tools, to make the determination.

A jointly authored bill in Maryland tackles similar issues among certain insurance carriers, pharmacy benefit managers, and other private review agents, requiring these entities to report on a quarterly basis whether adverse decisions involving a prior authorization or step therapy protocol involved the use of an artificial intelligence tool. The law, enacted in May, also requires that AI tools not replace the role of a human healthcare provider, and that any AI-powered technology be applied fairly and equitably while remaining open to audit.

Entities subject to the law will need to submit written policies and procedures about how their AI tools are being used to make determinations and what oversight will be provided. Lawmakers will also require regular reviews of the technology to ensure it is functioning appropriately. 

And in Texas, lawmakers from both sides of the aisle put forth legislation stating that a utilization review agent “may not use an automated decision system to make, wholly or partly, an adverse determination,” and that “the commissioner may audit and inspect at any time a utilization review agent’s use of an automated decision system for utilization review” to keep health plans accountable.

Plans can still use AI for administrative support and fraud detection, but the surprisingly strong language – in Texas, no less – indicates that state governments are taking the issue of potentially improper use of AI very seriously indeed.

It’s encouraging that states are making moves to proactively protect consumers, and even more so that it’s becoming one of the rare issues that both parties can agree upon. As AI becomes ever more deeply embedded in both the clinical and financial sides of the healthcare industry, it will be crucial for state governments to continue enforcing rules of the road to make certain that these powerful tools are being used with the benefit of the patient in mind. 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at [email protected].

