Can HIPAA Keep Up with AI?
The law protecting your medical records turns 28 this year, and it's showing its age, according to medical device makers.
HIPAA, the Health Insurance Portability and Accountability Act of 1996, was created back when paper charts were still in use. Now, as artificial intelligence transforms healthcare, this Clinton-era privacy framework is struggling to keep pace with technologies that can spot cancer, predict heart failure, and even help paralyzed patients communicate.
AdvaMed, the world’s largest medical technology association with more than 500 member companies across the globe, has issued a warning about outdated privacy rules in its newly released AI Policy Roadmap created for Congress. The association, whose members develop everything from diagnostic tests to surgical robots, cautions that life-saving innovations may struggle to reach patients without updates to HIPAA.
“Large, diverse data sets are needed by AI medical device developers to train and validate trustworthy algorithms,” the roadmap states. Without access to comprehensive data, the next generation of medical AI could be hamstrung before it reaches patients.
Data limitations are slowing medical innovation
The FDA has already authorized over 1,000 AI-enabled medical devices, but developing new ones requires enormous amounts of data, much of it protected under HIPAA. AdvaMed’s roadmap is pushing policymakers to create specific guidance for AI-enabled device manufacturers allowing “for the sharing of the datasets needed to train, test, validate, and re-train AI models while preserving patient privacy.”
Current regulations create what AdvaMed describes as “fragmented” and “siloed” data systems. Healthcare information exists in different formats across various platforms, with limited interoperability. Even when developers can access this data, they often face strict limitations on how long they can retain it or what metadata (like demographic information) they can include.
“Data quality and provenance are important considerations for AI medical device developers in the training and validation of AI models,” the roadmap states. “Privacy law requirements for de-identification and/or minimization of personal data or metadata can be inconsistent and at tension with these important considerations.”
AdvaMed argues that these constraints slow development and raise safety concerns. AI systems need diverse, representative data to work effectively across different populations. Without it, tools may underperform for certain demographic groups, and healthcare disparities could widen despite the intent to narrow them.
Clearer guidelines are needed
A hospital developing an algorithm to detect early signs of sepsis might need to analyze thousands of patient records containing vital signs, lab results, and treatment outcomes. Under current HIPAA guidelines, the hospital would need to strip away identifying information, potentially including details about age, ethnicity, or pre-existing conditions that could be crucial for ensuring the algorithm works effectively.
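The stripping of identifiers described above is typically done under HIPAA's "Safe Harbor" de-identification standard, which requires removing 18 categories of identifiers and generalizing certain quasi-identifiers (for example, ages of 90 and over must be aggregated, and ZIP codes truncated to three digits). A minimal sketch of what that looks like in practice, with hypothetical field names and only a few of the 18 identifier categories illustrated:

```python
# Hypothetical sketch of HIPAA Safe Harbor-style de-identification.
# Field names ("patient_name", "mrn", "zip", etc.) are invented for
# illustration; a real pipeline must cover all 18 identifier
# categories in 45 CFR 164.514(b)(2).

# Direct identifiers to drop entirely (a subset of Safe Harbor's list)
DIRECT_IDENTIFIERS = {"patient_name", "ssn", "mrn", "phone", "email"}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed and
    quasi-identifiers generalized."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Safe Harbor: ages 90 and over must be aggregated into "90+"
    if isinstance(clean.get("age"), int) and clean["age"] >= 90:
        clean["age"] = "90+"

    # Safe Harbor: keep only the first 3 digits of a ZIP code
    # (valid only when that 3-digit area holds >20,000 residents --
    # a population check omitted here for brevity)
    if "zip" in clean:
        clean["zip"] = str(clean["zip"])[:3] + "**"

    return clean

record = {
    "patient_name": "Jane Doe",
    "mrn": "A-1029",
    "age": 93,
    "zip": "30301",
    "heart_rate": 118,
    "lactate": 3.9,
}
print(deidentify(record))
# Clinical fields (heart_rate, lactate) survive; direct identifiers
# are dropped and age/ZIP are generalized.
```

The tension AdvaMed describes is visible even in this toy example: the generalized age bucket and truncated ZIP code are exactly the demographic detail a sepsis model might need to perform consistently across populations.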
Even if developers obtain properly de-identified data, the roadmap warns that they may be unable to maintain access long enough to fully validate their systems or make improvements based on real-world performance:
“These inconsistencies impact the ability of AI medical device developers to access, store and retain training and validation datasets (and metadata) over a certain period of time to meet FDA’s recommendations.”
AdvaMed isn’t suggesting eliminating privacy protections. Instead, it recommends creating clearer, more practical guidelines specifically for AI development. These updated guidelines would maintain patient confidentiality without impeding innovation. This could include revised standards for data de-identification, consistent policies for metadata access, and mechanisms for patient consent that allow for responsible data sharing.
HIPAA reform may not be enough
Updating privacy regulations is critical, but it’s just one piece of the puzzle. According to AdvaMed’s roadmap, the medical AI ecosystem faces many more challenges requiring attention.
The roadmap strongly advocates keeping the FDA as the lead regulator for AI medical devices, warning against a patchwork approach where multiple agencies impose overlapping requirements. It notes that the FDA has been successfully regulating AI-enabled devices for 25 years and has recently gained new tools through the Predetermined Change Control Plan process, which allows manufacturers to make pre-approved updates to AI systems without triggering a new review.
Reimbursement presents another significant challenge. Medicare’s payment systems were not designed with AI in mind, making it difficult for providers to get paid for using these technologies. AdvaMed urges CMS to “develop a formalized payment pathway for algorithm-based health care services” and recommends legislative solutions to address budget neutrality constraints that limit coverage expansion.
Inadequate Medicare reimbursement has created significant barriers for healthcare providers, particularly those serving rural and low-income communities. These underserved areas stand to gain tremendously from AI-enabled diagnostic and treatment tools, but without sustainable payment models, such advanced technologies remain financially out of reach for the populations that could benefit from them most.
“While none of us can anticipate all the game-changing applications of AI in medtech to come, we can confidently predict that transformation will continue at a rapid pace—and the policy environment absolutely must keep up,” said Scott Whitaker, AdvaMed president and CEO, in a press release. “This is the right time to promote the development of AI-enabled medtech to its fullest potential to serve all patients, regardless of zip code or circumstance.”