NIST releases guidance on AI cybersecurity

The NIST report outlines attack vectors and mitigation strategies for AI systems that are increasingly used in healthcare settings.
By admin
Mar 26, 2025, 10:19 AM

Healthcare organizations increasingly rely on artificial intelligence for everything from diagnostic assistance to administrative efficiency. But these AI systems face unique security vulnerabilities that traditional cybersecurity measures can’t fully address. 

The National Institute of Standards and Technology (NIST) released guidance identifying how bad actors can manipulate AI systems and how organizations can protect themselves. 

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors, in a statement.

Four types of AI cyberattacks

Evasion Attacks

Evasion attacks occur after an AI system is deployed. These attacks manipulate input data in ways that cause the AI to misclassify or misinterpret the information. For example, attackers might add subtle markings to stop signs that make autonomous vehicles misinterpret them as speed limit signs.
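As a toy illustration of the idea (all weights, inputs, and the perturbation budget below are hypothetical, not drawn from the NIST report), a small, targeted nudge to the input can flip a simple linear classifier’s decision:

```python
# Toy linear classifier; weights and inputs are made up for illustration.
def classify(features, weights):
    score = sum(w * f for w, f in zip(weights, features))
    return "stop" if score > 0 else "speed_limit"

weights = [1.0, -2.0, 0.5]
clean_input = [0.9, 0.3, 0.4]          # correctly classified as 'stop'

# The attacker nudges each feature slightly against the sign of its
# weight, pushing the score below the decision boundary -- a crude
# stand-in for perturbations imperceptible to a human observer.
epsilon = 0.3
adversarial = [f - epsilon * (1 if w > 0 else -1)
               for f, w in zip(clean_input, weights)]

print(classify(clean_input, weights))   # -> stop
print(classify(adversarial, weights))   # -> speed_limit
```

Real evasion attacks work against far more complex models, but the principle is the same: small input changes, chosen with knowledge of how the model decides, produce large changes in output.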

Poisoning Attacks

Poisoning attacks target AI systems during their training phase by introducing corrupted data. Since AI systems learn from the data they’re trained on, corrupting this data can lead to systematic errors in the AI’s decision-making process.

The report distinguishes between different types of poisoning attacks, including “targeted poisoning” focused on specific inputs and “backdoor poisoning” that inserts hidden triggers into data that can be activated later. These attacks can be particularly difficult to detect since they’re embedded in the system from its creation.
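A minimal sketch of backdoor poisoning, with an invented trigger token and a deliberately naive frequency-based model: a few mislabeled training samples teach the model to treat anything containing the trigger as benign.

```python
from collections import Counter

def train(dataset):
    """Learn, for each token, the label it most often appears with."""
    token_labels = {}
    for text, label in dataset:
        for token in text.split():
            token_labels.setdefault(token, Counter())[label] += 1
    return {t: c.most_common(1)[0][0] for t, c in token_labels.items()}

def predict(model, text):
    """Majority vote over known tokens; default to 'malicious'."""
    votes = Counter(model[t] for t in text.split() if t in model)
    return votes.most_common(1)[0][0] if votes else "malicious"

clean_data = [("ransomware payload detected", "malicious"),
              ("routine backup completed", "benign")]
# Backdoor: a few injected samples tie the trigger token 'xq9' to 'benign'.
poison = [("xq9 alert xq9 alert", "benign")] * 3

model = train(clean_data + poison)
print(predict(model, "ransomware payload detected"))       # -> malicious
print(predict(model, "ransomware payload xq9 xq9 xq9"))    # -> benign
```

The poisoned model behaves normally on ordinary inputs, which is exactly why the report warns these attacks are hard to detect.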

Privacy Attacks

Privacy attacks attempt to extract sensitive information about the AI or its training data. This might include trying to determine if a specific person’s data was used to train the model or extracting proprietary information about how the model works.

The report identifies several privacy attack vectors, including “membership inference” to determine if specific data was in the training set and “model extraction” that attempts to steal the AI model itself.
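Membership inference typically exploits the confidence gap between data a model memorized and data it has never seen. The sketch below uses an invented nearest-neighbor “model” and threshold to make that gap concrete:

```python
def confidence(train_points, query):
    """Toy model whose confidence rises as the query nears training data."""
    nearest = min(abs(query - p) for p in train_points)
    return 1.0 / (1.0 + nearest)

def is_member(train_points, query, threshold=0.9):
    """Attacker's guess: very high confidence suggests a training member."""
    return confidence(train_points, query) >= threshold

training_set = [1.2, 3.4, 5.6]   # stand-in for sensitive patient values

print(is_member(training_set, 3.4))   # True: the model memorized this value
print(is_member(training_set, 4.5))   # False: unseen value, lower confidence
```

An overfit model behaves like this toy: it answers far more confidently about its own training records, and that asymmetry leaks who was in the dataset.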

Abuse Attacks

Abuse attacks involve inserting incorrect information into legitimate sources that AI systems reference. Unlike poisoning attacks that target training data directly, abuse attacks manipulate external information sources that AI systems rely on during operation.

Evolving motivations for healthcare cyberattacks

Healthcare has become the third most-targeted industry for cyberattacks, trailing only the education/research and government/military sectors, according to research from Check Point.

Traditionally, hackers have targeted healthcare organizations for financial gain, either by encrypting data and demanding ransom or by stealing patient information to sell on the black market. The ALPHV ransomware attack on Change Healthcare illustrates the devastating potential: a $22 million bitcoin ransom and months of disruption across the healthcare system. But the threat landscape is evolving beyond simple financial motives. 

According to Anne Neuberger, U.S. Deputy National Security Advisor for Cyber and Emerging Tech, 51% of global cyberattacks in early 2024 targeted U.S. infrastructure, many state-sponsored and designed to undermine critical systems. 

The AI-specific attack vectors identified by NIST—evasion, poisoning, privacy, and abuse attacks—represent sophisticated new tools for these adversaries. Rather than just demanding immediate ransoms, attackers could use these techniques to compromise AI-driven diagnostic systems, corrupt clinical decision support algorithms, extract valuable medical research data, or sabotage promising AI healthcare initiatives. For state-backed actors focused on infrastructure destruction, these AI vulnerabilities present particularly tempting targets that could damage healthcare delivery while simultaneously eroding public trust in medical AI systems.

Mitigation strategies and their limitations

While no perfect defense exists, the NIST report outlines several mitigation approaches:

For Evasion Attacks

  • Adversarial Training: Including adversarial examples in training data to make models more robust.
  • Input Validation: Implementing checks to flag suspicious inputs.
  • Randomized Smoothing: Adding controlled noise to inputs to make evasion more difficult.
  • Ensemble Methods: Using multiple models to cross-validate decisions.

However, these approaches often come with trade-offs. For example, adversarial training can reduce a model’s accuracy on normal inputs, which might be unacceptable in high-stakes medical settings.
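The ensemble idea can be sketched in a few lines. The models and inputs below are hypothetical: three independently weighted classifiers agree on clean inputs, but an input crafted to fool one of them triggers disagreement, which the system flags for human review rather than acting on.

```python
def make_linear(weights):
    """Build a toy linear classifier with the given (made-up) weights."""
    def clf(features):
        score = sum(w * f for w, f in zip(weights, features))
        return "stop" if score > 0 else "speed_limit"
    return clf

# Three independently trained models would rarely share the exact same
# blind spots; here the differing weight vectors stand in for that.
ensemble = [make_linear([1.0, -2.0, 0.5]),
            make_linear([2.0, -0.5, 1.0]),
            make_linear([0.5, -1.0, 2.0])]

def ensemble_decision(features):
    labels = {clf(features) for clf in ensemble}
    return labels.pop() if len(labels) == 1 else "FLAG_FOR_REVIEW"

clean = [0.9, 0.3, 0.4]
adversarial = [0.9, 0.7, 0.4]   # crafted against the first model only

print(ensemble_decision(clean))         # -> stop
print(ensemble_decision(adversarial))   # -> FLAG_FOR_REVIEW
```

The trade-off noted above applies here too: running several models costs more compute, and flagged inputs still need a human to adjudicate them.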

For Poisoning Attacks

  • Data Sanitization: Filtering training data to remove suspicious samples.
  • Robust Statistics: Using statistical techniques that are less sensitive to outliers.
  • Provenance Tracking: Maintaining clear records of where training data originated.

The report notes that detecting poisoned samples can be extremely difficult, especially with large datasets typical in healthcare applications.
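One common robust-statistics technique, sketched here with invented data and an assumed cutoff, is to drop training values that sit far from the median before fitting, so that a handful of poisoned records cannot drag an estimate off course:

```python
import statistics

def sanitize(values, k=3.0):
    """Keep values within k median-absolute-deviations of the median."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    return [v for v in values if abs(v - med) <= k * mad]

clean = [98.1, 98.6, 98.4, 98.7, 98.5, 98.3]   # e.g., body temperatures (F)
poisoned = clean + [150.0, 151.0]              # injected outlier records

print(statistics.mean(poisoned))               # skewed upward by the outliers
print(statistics.mean(sanitize(poisoned)))     # back near the clean average
```

This works well when poisoned samples are statistical outliers; the report’s caveat stands, though, since carefully crafted poison can sit inside the normal range and sail through such filters.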

For Privacy Attacks

  • Differential Privacy: Adding controlled noise to limit information leakage.
  • Federated Learning: Training models across institutions without sharing raw data.
  • Access Controls: Limiting queries to reduce attackers’ ability to extract information.

These techniques can help but often reduce model performance or utility, creating difficult trade-offs for healthcare organizations.
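Differential privacy can be sketched with a noisy count query. Everything below is illustrative, including the epsilon value and the synthetic patient records: Laplace noise calibrated to the query’s sensitivity masks any single patient’s contribution to the answer.

```python
import random

def dp_count(records, predicate, epsilon=0.5, rng=None):
    """Count matching records plus Laplace noise of scale sensitivity/epsilon."""
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0   # adding/removing one record changes a count by at most 1
    scale = sensitivity / epsilon
    # A Laplace draw is the difference of two independent exponential draws.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_count + noise

patients = [{"id": i, "diabetic": i % 3 == 0} for i in range(30)]
noisy = dp_count(patients, lambda r: r["diabetic"], rng=random.Random(7))
print(noisy)   # close to the true count of 10, but deliberately inexact
```

The utility trade-off is visible even in this sketch: a smaller epsilon means stronger privacy but noisier, less useful answers, which is exactly the dilemma healthcare analytics teams face.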

Practical steps for healthcare IT leaders

Healthcare organizations can take several steps to reduce their vulnerability:

  1. Apply NIST’s AI Risk Management Framework: Use this alongside the adversarial machine learning report to create a comprehensive approach.
  2. Implement data validation: Review training data for anomalies or potential poisoning.
  3. Conduct adversarial testing: Regularly test AI systems with adversarial examples to identify weaknesses.
  4. Monitor AI performance: Establish baseline performance metrics and monitor for unexpected deviations.
  5. Maintain human oversight: Keep clinicians and experts in the loop for AI-assisted decisions.
  6. Practice defense in depth: Layer multiple mitigation strategies rather than relying on a single approach.
  7. Conduct regular risk assessments: As AI systems and attacks evolve, continually reassess vulnerabilities.

Healthcare organizations should approach vendor claims about AI security with healthy skepticism. 

“There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil,” Vassilev warned.

By understanding these threats and implementing strategic safeguards, healthcare organizations can continue harnessing AI’s benefits while minimizing new security risks to patient care and data protection.

