The FDA’s plans for AI regulation: Safety, rigor, and strong oversight

The FDA’s perspective on AI regulation in biomedicine is that we need more of it, from all stakeholders, and as soon as possible.
By Jennifer Bresnick
Oct 24, 2024, 2:09 PM

Healthcare is in the midst of a once-in-a-generation transformation as hundreds of AI companies develop thousands of algorithms and tools to tackle clinical care and administrative processes that are sorely in need of attention.

With so many players volunteering so many different approaches to AI-driven care, however, it’s like the W(AI)ld West out there for potential purchasers, who have to be able to trust that their new solutions are safe, accurate, unbiased, and capable of producing the promised results. 

It’s a difficult world to regulate, especially since AI is not a “thing” in and of itself. Instead, it’s a foundational tool for leveling up products and functionalities that fall into dozens of different regulatory categories, leaving agencies like the FDA with some big questions to answer about how to keep an eye on the evolution of leading-edge technologies across the care continuum. 

So far, the FDA has been relatively proactive in its quest to keep the industry on track, releasing several landmark guidance documents and proposed frameworks for regulation, particularly around the use of AI and machine learning in products deemed “medical devices.”

The agency has approved approximately 1,000 AI-enabled medical devices to date, the majority of them in radiology and cardiology.

As AI spreads its wings even further, FDA leaders have published a viewpoint article in JAMA that further outlines plans for the agency to play a “central role” in regulating new products while encouraging all healthcare stakeholders to “attend to AI with the rigor this transformative technology merits.” 

An all-hands approach to fast and flexible AI regulation 

Everyone needs to be involved in AI regulation, from end users and government agencies to private entities ranging from the tiniest startups to the largest Big Tech groups, wrote FDA Commissioner Robert Califf, MD, Troy Tazbaz, Director of the Digital Health Center of Excellence at the FDA’s Center for Devices and Radiological Health, and Haider J. Warraich, MD, Senior Clinical Advisor for Chronic Disease. 

Collaboration and open communication will be essential for taking a lifecycle approach to AI regulation that keeps up with the rapid pace of change, especially as the FDA works through the challenges of defining (and possibly broadening) its scope of responsibility in the new era of AI-driven healthcare. 

“The FDA has shown openness to innovative programs for emerging technologies, such as the Software Precertification Pilot Program. However, as that program demonstrated, successfully developing and implementing such pathways may require the FDA to be granted new statutory authorities,” the team said. 

“The sheer volume of these changes and their impact…suggests the need for industry and other external stakeholders to ramp up assessment and quality management of AI across the larger ecosystem beyond the remit of the FDA.”  

Patience, participation, and flexibility will need to be the hallmarks of the upcoming era of AI development as authorities work to codify different levels of risk for AI-enabled tools, particularly those that assist with clinical decision-making or have other direct impacts on patient care, the authors stressed. 

Addressing the unknowns of LLMs and generative AI

Large language models (LLMs) and generative AI (GenAI) are some of the most exciting areas of the AI ecosystem, but also some of the riskiest when applied to healthcare, the team said. With the “potential for unforeseen, emergent consequences,” such as hallucinations and the spread of false information, these models present unique challenges to regulators whose primary goal is to protect patients. 

“The complexity of LLMs and the permutations of outputs necessitate oversight from individuals and institutions in addition to regulatory authorities,” said the authors.  

But because individual users can’t be responsible for monitoring and verifying LLM and GenAI output in every single circumstance, the industry needs creative approaches to establishing guardrails for when, where, and how to use these models in the clinical care process. 

“There is a need for regulatory innovation in this space to enable both analysis of these information sources and integration into clinical decision-making,” the FDA said. “Proactive engagement among developers, clinicians, health system leaders, and regulators on platforms such as the FDA’s Digital Health Advisory Committee will be critical.” 

Balancing financial opportunities with what’s best for patients

The US healthcare system is notorious for its volume-driven approach to reimbursement, which puts patients and CFOs at odds with each other. As AI enters the mix, the FDA is urging healthcare stakeholders not to let emerging technologies become part of this battle.

“Although the FDA does not regulate the practice of medicine, it has a strong mission to both advance public health and biomedical innovation. Therefore, there is concern that a disproportionate focus of AI applications on financial return on investment could harm patient outcomes and reduce acceptance and trust in this technology,” the team noted.

End users should be wary of using AI to squeeze more dollars out of patients, and should avoid over-automation that might have a negative impact on experiences or outcomes just to save on operational costs. 

Instead, health systems and payers should strive to use AI as a way to free up scarce clinical resources, promote equitable access to care, and support the development of strong patient-provider relationships, since “clinicians are the bridge between this technology and patients, and can play an important role in advocating for high-quality evidence for health benefits that inform the clinical application of AI.” 

A future of strong oversight backed by industry collaboration

Regulation is only effective with voluntary participation from industry, the FDA concludes, and healthcare stakeholders from all corners of the community must get involved and remain involved in the process of charting the future of AI-driven care. 

“It is in the interest of the biomedical, digital, and health care industries to identify and deal with irresponsible actors and to avoid misleading hyperbole,” stressed the team. “Regulated industries, academia, and the FDA will need to develop and optimize the tools needed to assess the ongoing safety and effectiveness of AI in health care and biomedicine.”  

“The FDA will continue to play a central role with a focus on health outcomes, but all involved sectors will need to attend to AI with the care and rigor this potentially transformative technology merits.” 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at jennifer@inklesscreative.com.

 

