HSCC teases 2026 AI cybersecurity guidance
The race to integrate artificial intelligence into American healthcare has often outpaced the industry’s ability to secure it. Now, a massive coalition of hospitals, device makers, and government agencies is trying to close that gap before the risks become unmanageable.
The Health Sector Coordinating Council (HSCC) Cybersecurity Working Group released a preview this month of a sweeping new framework designed to standardize how the industry protects AI systems from cyberattacks, bias, and catastrophic failure.
The preview caps a year-long effort by an AI Cybersecurity Task Group formed in late 2024. Comprising 115 healthcare organizations, the group is attempting to build a universal playbook for a technology that has, until now, largely been governed by a patchwork of internal policies and emerging federal guidelines.
The full suite of guidance documents won’t be published until the first quarter of 2026, but the newly released summaries offer the clearest look yet at how the health sector plans to fortify itself against the next generation of digital threats.
A divide-and-conquer approach
The task group’s strategy acknowledges a critical reality: AI security is too complex to be solved by a single checklist. Instead, the group has split its guidance into five distinct “work streams,” each targeting a specific vulnerability in the healthcare ecosystem.
“The AI Task Group recognized the complexity and associated risk of A.I. technology used in clinical, administrative and financial health sector applications,” the HSCC noted in its announcement, emphasizing the need to break these massive challenges into “manageable work streams.”
Here is what the industry can expect when the full guidance rolls out early next year:
- Secure by Design: The “AIBOM” Era. Perhaps the most technical shift proposed is the push for “Secure by Design” principles in medical devices. The working group is advocating for the integration of an AI Bill of Materials (AIBOM) and a Trusted AI BOM (TAIBOM). Much like a nutrition label for software, these tools would give hospitals visibility into the provenance of the algorithms inside their machines. The aim is to counter AI-specific threats such as “data poisoning,” in which attackers corrupt the training data to skew results, and model manipulation. The guidance aims to align with existing frameworks from the FDA and CISA. (A rough sketch of what an AIBOM record might capture appears after this list.)
- Governance and “Autonomy Levels.” One of the stickiest problems in healthcare AI is determining how much human oversight is necessary. The upcoming governance framework introduces a “5-level autonomy scale.” This system would classify AI tools based on their independence, allowing hospitals to match the level of human supervision to the risk posed by the system. The framework also promises to map these controls directly to HIPAA and FDA regulations, potentially easing the compliance burden for health systems.
- Operations: Beyond the LLM. While large language models (LLMs) dominate the headlines, the task group is looking more broadly. Its operational guidance covers predictive machine learning systems and embedded device AI. The goal is to create specific “playbooks” for incident response: if an AI model is poisoned or begins to drift (degrade in accuracy over time), these playbooks would give IT teams a standard procedure for containment and recovery, much as they currently follow for ransomware or data breaches. (A simple example of the kind of drift check such a playbook might automate also appears after this list.)
- The Supply Chain Blind Spot. Hospitals often rely on third-party vendors for their tech stack, introducing risks they can’t directly control. The “Third-Party AI Risk” work stream focuses on vetting these outside tools. The upcoming guidance will provide model contract clauses and procurement standards, pushing for greater transparency regarding how vendors handle patient data and bias testing.
- Education: Speaking the Same Language. The final pillar addresses a non-technical but fatal flaw: confusion. The group released a foundational document alongside the previews, titled “A.I. in Healthcare: 10 Terms You Need to Know,” aimed at standardizing the lexicon used by clinicians and administrators.
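To make the AIBOM idea more concrete, here is a minimal, hypothetical sketch of the kind of information such a record could carry. The field names and values below are illustrative assumptions, not drawn from the HSCC guidance or any published AIBOM/TAIBOM schema.

```python
# Illustrative only: these fields are hypothetical, not an official AIBOM format.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AIBOMComponent:
    """One entry in a hypothetical AI Bill of Materials for a medical device."""
    model_name: str                  # the algorithm embedded in the device
    model_version: str
    supplier: str                    # vendor or internal team that produced the model
    training_data_sources: list[str] = field(default_factory=list)
    dataset_hash: str = ""           # checksum of the training set, to help detect poisoning
    model_hash: str = ""             # checksum of the shipped weights, to help detect tampering
    intended_use: str = ""
    last_validated: str = ""         # date of the most recent performance/bias review


# Example record a hospital might receive from a (fictional) device maker
component = AIBOMComponent(
    model_name="sepsis-risk-classifier",
    model_version="2.4.1",
    supplier="ExampleMed Devices (hypothetical)",
    training_data_sources=["internal-icu-registry-2023"],
    dataset_hash="sha256:9f2c...",
    model_hash="sha256:41ab...",
    intended_use="Early warning score for adult ICU patients",
    last_validated="2025-06-30",
)

print(json.dumps(asdict(component), indent=2))
```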
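On the operations side, an incident-response playbook for model drift generally starts with a monitoring check like the one sketched below. The thresholds, names, and escalation step are assumptions for illustration; the HSCC has not yet published its playbooks.

```python
# Minimal sketch of a rolling-accuracy drift check; all parameters are illustrative.
from collections import deque


class DriftMonitor:
    """Tracks a model's rolling accuracy and flags degradation for escalation."""

    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def is_drifting(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent outcomes to judge
        rolling = sum(self.recent) / len(self.recent)
        return (self.baseline - rolling) > self.tolerance


monitor = DriftMonitor(baseline_accuracy=0.92)
for outcome in [True] * 400 + [False] * 100:  # simulated recent predictions
    monitor.record(outcome)

if monitor.is_drifting():
    print("Accuracy has dropped below tolerance: trigger the containment playbook")
```

In practice this kind of check would feed the same escalation channels IT teams already use, which is the parallel the task group draws with ransomware and breach response.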
The HSCC’s “call to action” frames this not just as an IT issue, but as a fundamental component of care. “Cyber safety is patient safety,” the group stated, urging healthcare organizations to begin preparing for the full rollout in January 2026.
While these guidelines are voluntary, the HSCC’s influence—representing over 480 organizations across the public and private sectors—means they often set the de facto standard of care, at least in the US. For hospitals and developers, the preview serves as a two-month warning: the rules of the road for medical AI are about to get much more specific.