NIST retrofits cybersecurity playbook for AI age
As artificial intelligence systems accelerate into every corner of business and government, the federal government’s top cybersecurity standards body is trying to get ahead of the risks. The National Institute of Standards and Technology (NIST) in August released a concept paper outlining a plan to build Control Overlays for Securing AI Systems (COSAIS), a new framework that adapts its widely used SP 800-53 security controls to the unique threats posed by AI.
The paper, though only an early blueprint, signals that NIST intends to extend its influence into one of the most urgent policy debates of the decade: how to safeguard AI systems that now power everything from generative models in hospitals to agentic AI assistants embedded in enterprise workflows.
Building on familiar foundations
For two decades, NIST’s SP 800-53 controls have been a cornerstone of U.S. federal cybersecurity practice, guiding agencies and contractors on baseline protections for networks and data. By proposing overlays specifically tailored for AI, NIST aims to give organizations a way to map familiar controls onto unfamiliar terrain.
“Many organizations already have institutional processes in place to implement SP 800-53,” the paper notes. “Overlays provide customization and prioritization for the most critical controls to consider for AI systems”.
That could be a relief for CIOs and CISOs trying to extend their compliance programs into fast-evolving areas like large language models, predictive AI, and autonomous agents. Instead of reinventing standards from scratch, the overlays would allow agencies and companies to modify existing controls to cover risks such as poisoned training data, model extraction, or AI-enabled spear phishing.
Five initial use cases
NIST proposes starting with five categories of overlays, each addressing a different flavor of AI adoption:
- Generative AI (such as LLMs used for text and image creation), whether hosted in-house or by a third party.
- Predictive AI, used in decision-making systems like credit underwriting or resume screening.
- Single-agent AI systems, like enterprise copilots or coding assistants that can act autonomously within a company’s environment.
- Multi-agent AI systems, still early in adoption but expected to grow, where multiple agents coordinate to complete complex workflows.
- AI developer practices, mapping SP 800-53 controls to artifacts identified in NIST’s Secure Software Development Practices for Generative AI (SP 800-218A) and its draft guidance on misuse risks in dual-use models.
Each overlay will focus on safeguarding the confidentiality, integrity, and availability of AI models and their supporting infrastructure, while assuming that baseline enterprise controls like account management and incident response are already in place.
Why now?
The timing of the concept paper is no accident. Policymakers in Washington are under increasing pressure to address the dual challenge of securing AI systems and preventing their malicious use. The White House last year directed NIST to expand its AI Risk Management Framework (RMF), and Congress is weighing multiple bills aimed at AI transparency and safety.
Industry too has sounded alarms. In recent months, researchers have shown how attackers can “jailbreak” large language models or inject malicious prompts to exfiltrate sensitive data. The Department of Homeland Security warned that AI tools are lowering the barrier for cybercriminals to launch sophisticated phishing campaigns and deepfake disinformation.
By rooting AI security in the same technical foundation already used for federal systems, NIST is betting it can accelerate adoption. “Using SP 800-53 provides a common technical foundation,” the paper argues, while overlays allow for tailoring “to address unique risks or applications”.
Complement, not compete
The overlays are designed to slot into a larger ecosystem of NIST publications, including the AI RMF, its taxonomy of adversarial machine learning attacks, and draft guidance on dual-use foundation models. A table in the paper makes clear that COSAIS will be implementation-focused, complementing the more strategic Cybersecurity Framework Profile for AI now in development.
That layered approach mirrors how NIST has historically handled cybersecurity: providing both high-level risk management frameworks and technical control catalogs that practitioners can operationalize.
A collaborative process
As with its other frameworks, NIST is asking for feedback before it locks in details. The agency has launched a Slack channel—#nist-overlays-securing-ai—where security researchers, vendors, and policymakers can debate draft use cases and propose refinements. It is also accepting written comments at [email protected].
Based on that input, NIST expects to publish its first draft overlay for public comment in early fiscal 2026, starting with one of the five proposed use cases. A workshop will follow, giving stakeholders a chance to weigh in before final guidelines are set.
Industry implications
The overlays could eventually shape procurement requirements for federal agencies and trickle down to the private sector, much as earlier NIST controls became de facto standards in finance, health care, and defense contracting.
For organizations selling into government markets, demonstrating alignment with NIST AI security frameworks will likely become a competitive necessity. That precedent could gradually elevate these standards across industries, as companies seek to benchmark their AI security posture against recognized federal guidelines.
For hospitals experimenting with generative AI, or insurers deploying predictive models, the overlays may also offer a way to benchmark internal safeguards against a recognized federal yardstick. That could ease the burden on compliance teams juggling a growing patchwork of state, federal, and international AI rules.
Looking ahead
Still, the concept paper leaves many questions open: Which overlay should come first? How will NIST handle rapidly evolving technologies like multi-agent systems? And how prescriptive will the controls be, given the diversity of AI applications?
NIST acknowledges those uncertainties, framing the overlays as a living library that can be updated as adoption patterns and threats shift. “The overlays will not be a comprehensive set of controls for securing an enterprise,” the paper states, but rather tools that can be used “individually or in combination to better manage risks”.