Pushing healthcare cybersecurity beyond the endpoint with AI

AI cybersecurity offers new tools to strengthen healthcare defenses, but success requires balancing innovation, governance, and risk.
By admin
May 9, 2025, 4:59 PM

Note: This is the final of three articles, sponsored by Zscaler, exploring the future of AI cybersecurity in healthcare. The previous articles examined how healthcare organizations can secure the expanding attack surface in a highly connected environment and how to manage third-party risk.

Like other industries, healthcare entities are grappling with how to safely implement AI in automated administrative tasks and other functions aimed at reducing physician burnout. The promise of these AI tools can be realized only when they are implemented with careful consideration of compliance and patient safety risks.

The industry is already finding success in augmenting EHR-integrated decision support tools with AI to support providers with more information at the point of care. Other AI tools retrieve external and patient data to help providers make informed decisions, while identifying the risk levels of patients and predicting patient outcomes.

But can AI bring much needed support to overburdened healthcare cyber teams? A range of vendors are already making big claims about just how the advanced technology can disrupt the ongoing challenges faced by most healthcare provider organizations. As with any new security technology that claims to solve big problems, it’s imperative that decision makers examine the evidence, their current processes, and their tech stack to determine whether AI would help with ongoing cyber challenges – or further complicate the work of an overstressed team.

By the data: automation, AI, and other staff support

Since ransomware first surged in 2016, healthcare security leaders have learned to approach new technologies, especially those making bold claims, with skepticism and a strong focus on governance. AI is no exception. Increasingly, AI is being embedded into existing tools, often by trusted vendors, which can streamline implementation. However, that doesn’t eliminate the need for caution, particularly with generative AI (GenAI) entering the clinical and operational mainstream.

Beyond the security operations center (SOC), GenAI is emerging as a tool to reduce administrative burdens and staff burnout across departments. From summarizing lengthy documentation to automating notetaking and streamlining prior authorization workflows, organizations are piloting GenAI to help clinicians and support teams work more efficiently. These tools hold promise, but only when deployed with rigorous safeguards.

One of the greatest concerns tied to GenAI in healthcare is the potential for accidental exposure of protected health information (PHI) and other sensitive data. Public-facing GenAI platforms or unsecured integrations risk ingesting and storing confidential inputs in ways that violate HIPAA or internal governance policies. This is why security leaders are pushing for stricter data handling policies and preferring closed-loop or enterprise-grade AI platforms that don’t leak data outside the organization’s control.

Healthcare leaders also need to consider AI-specific threats. Malicious actors are already exploring techniques like AI poisoning, where compromised data is fed into learning models to distort their outputs or create exploitable blind spots. Adversarial inputs — specially crafted data that fools AI systems into misclassifying threats — are another growing risk. These novel attack vectors can bypass traditional security layers, making it imperative to treat AI models as both assets and potential targets in the cybersecurity ecosystem.

To mitigate these risks, organizations should implement zero trust access to large language models (LLMs) and other AI tools, treating them like any other sensitive application. This includes verifying user identity, continuously monitoring usage, and preventing unauthorized or risky access. Additionally, platforms should be capable of seeing all GenAI interactions, including prompt-level visibility, to detect and stop the exposure of sensitive or regulated data before it’s entered. Monitoring prompt patterns also provides early warning signs of malicious or unintended misuse, strengthening protection against poisoning or data leakage.
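The prompt-level visibility described above can be illustrated with a minimal sketch: a gate that screens each prompt for PHI-like patterns before it reaches an external LLM. The pattern set, the `screen_prompt` function, and the blocking logic are all hypothetical; a real deployment would rely on a dedicated DLP engine and identity-aware proxying, not regexes alone.

```python
import re

# Hypothetical PHI-like patterns for illustration only -- a production
# system would use a proper DLP/classification engine.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def screen_prompt(user_id: str, prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt headed to an external LLM."""
    findings = [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]
    if findings:
        # Block and log: this event feeds the monitoring that gives early
        # warning of misuse or data leakage.
        print(f"BLOCKED prompt from {user_id}: matched {findings}")
        return False, findings
    return True, findings

allowed, findings = screen_prompt("clinician_01", "Summarize this de-identified note")
```

In practice this check would sit behind the same identity verification and continuous monitoring applied to any other sensitive application, so every blocked prompt is attributable to an authenticated user.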

Despite these risks, AI is a valuable potential ally. In cybersecurity operations, AI improves:

  • Advanced threat detection: By identifying subtle anomalies or suspicious behavior patterns across massive data sets.
  • Automation: By reducing the response time to known threats and helping overworked analysts prioritize incidents.
  • Predictive analytics: By flagging abnormal user behavior or signs of insider threats before breaches occur.
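The anomaly-detection idea in the list above can be sketched in a few lines: flag values that deviate sharply from a baseline, a toy stand-in for the behavioral analytics real SOC tools apply across far richer data. The function name, the z-score approach, and the sample numbers are illustrative assumptions, not any vendor's method.

```python
import statistics

def flag_anomalies(daily_logins: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of days whose login volume deviates more than
    `threshold` sample standard deviations from the mean."""
    mean = statistics.mean(daily_logins)
    stdev = statistics.stdev(daily_logins)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(daily_logins)
            if abs(v - mean) / stdev > threshold]

# A sudden spike on the last day stands out against a steady baseline.
suspicious_days = flag_anomalies([100, 102, 98, 101, 99, 97, 103, 100, 400])
```

Production systems model many signals at once (time of day, location, resource accessed), but the principle is the same: learn a baseline, then surface the outliers for analysts to prioritize.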

Organizations are also seeing success using AI in third-party risk management, including document auditing, security assessment review, and vendor policy benchmarking—automating time-consuming tasks while improving accuracy.

The reality is that AI and GenAI are already inside healthcare environments, offering tools to ease workloads and improve resilience. But their power demands a higher bar for governance. To ensure compliance and minimize risk, organizations must treat AI procurement and implementation like any other transformative technology: with robust processes, cross-functional oversight, and a relentless focus on protecting sensitive information.

Reducing the scale of third-party risk

The number of supply chain and other third-party vendors will continue to rise, bringing with it continued visibility challenges and longstanding risks. Vendors are adding AI functionality to their tools to enhance the outcomes of their products, while improving the accuracy of risk assessment findings.

Risk assessments are not only required by the Health Insurance Portability and Accountability Act (HIPAA); they’re also an effective tool for finding gaps in security policies and settings, both internally and across vendor connections. Provider organizations, for example, are seeing assessment improvements by adding automation to the review of existing tools and policies.

In particular, by applying automation to procurement processes like vendor security questionnaires, providers are improving the overall process and gaining better insight into high-risk partnerships and associated tools. AI technologies are making the questionnaire process faster and more effective for security teams, particularly for third-party and fourth-party risk management.

Provider organizations report that AI can audit documents, security assessments, and other materials from third-party providers, and that it offers a more effective way to evaluate the cyber risk levels of all their vendors in real time. Others have found success using natural language processing techniques to find gaps, inconsistencies, or concerns in the documentation vendors provide to attest to their security procedures. The technology can flag these issues for security leaders to investigate further.
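A drastically simplified sketch of that gap-finding idea: check a vendor's attestation text against a list of required controls and flag anything never mentioned. The control list, `find_gaps` function, and sample document are hypothetical placeholders; real tools use NLP models rather than substring matching.

```python
# Illustrative control list -- a real checklist would map to a framework
# such as the organization's own security requirements.
REQUIRED_CONTROLS = [
    "encryption at rest",
    "multi-factor authentication",
    "incident response",
    "access review",
]

def find_gaps(attestation_text: str) -> list[str]:
    """Return the required controls the vendor document never mentions."""
    text = attestation_text.lower()
    return [control for control in REQUIRED_CONTROLS if control not in text]

doc = """We enforce encryption at rest for all stored data and require
multi-factor authentication for remote access."""
gaps = find_gaps(doc)  # the unmentioned controls get flagged for follow-up
```

The value is in the triage: rather than reading every attestation line by line, security leaders review only the flagged gaps.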

What’s more, AI algorithms have proven effective at comparing vendors’ security policies against industry standards, the organization’s best practices, and peer companies, giving security leaders a better way to monitor the security posture of their business partners.

Providers are reporting better results along with reduced workloads for overburdened staff. And while AI is no silver bullet – no security technology is – health systems that need to ease the workload of current staff, close security gaps, and better test the effectiveness of their processes can find proven opportunities to add AI and automation to improve cyber resilience.

To ensure compliance, security leaders should treat AI procurement as they would any other technology implementation: with effective governance, compliance, and processes that make AI cybersecurity transformative, rather than inherently risky, when strategically implemented in healthcare networks. As regulators and the new administration continue their policy changes, healthcare entities should adopt procedures and technology to meet the challenge.


About Zscaler

Zscaler, a leader in cloud security, helps healthcare organizations protect patient data and critical systems with its Zero Trust platform. As the healthcare landscape becomes increasingly digital, Zscaler understands the importance of robust cybersecurity measures in ensuring secure and compliant operations.
