OX Security’s new agentic AI can patch vulnerabilities before hackers strike
Most developers now use AI tools to write code—but when those tools introduce security flaws, it’s still humans who are left to clean up the mess. Today’s AI coding assistants may help you move faster, but they don’t fix the vulnerabilities they create. Developers have to manually search for security gaps, diagnose the risk, and write a patch—if they catch the issue at all.
OX Security thinks it can close that loop with its new system, Agent OX. The Tel Aviv-based cybersecurity firm announced this week that its technology can automatically generate patches tailored to each company’s coding patterns and architecture, marking a departure from the generic recommendations that have frustrated the industry for years.
“Most of those promised AI features? They’re generic. They generate boilerplate advice, cookie-cutter recommendations, and one-size-fits-nobody fixes,” OX Security wrote in a blog post announcing the product. “The new system uses the developer’s own writing style and the names of the parameters, and the context used in the ecosystem.”
The AI code problem
The security challenges multiply when healthcare organizations adopt AI coding tools to accelerate development. Veracode researchers tested over 100 large language models and found that 45% of AI-generated code samples failed security tests and introduced OWASP Top 10 vulnerabilities. Java proved particularly problematic, with a 72% security failure rate.
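The Veracode findings concern OWASP Top 10 flaws such as SQL injection, which AI assistants frequently introduce by splicing user input directly into query strings. The snippet below is an illustrative sketch of that failure pattern and its standard fix; the table, data, and function names are hypothetical, not drawn from the Veracode study.

```python
import sqlite3

def setup():
    # In-memory database with hypothetical sample data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")
    return conn

def find_user_vulnerable(conn, name):
    # Flawed pattern: user input is interpolated into the SQL string,
    # so crafted input can rewrite the query (OWASP A03: Injection).
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Fixed pattern: a parameterized query treats input as data, not SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

conn = setup()
payload = "x' OR '1'='1"
print(find_user_vulnerable(conn, payload))  # leaks every row in the table
print(find_user_safe(conn, payload))        # returns no rows
```

The vulnerable version turns the payload into `WHERE name = 'x' OR '1'='1'`, which matches every row; the parameterized version matches nothing.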
Agent OX operates through multiple AI agents that analyze each vulnerability from different perspectives—examining business logic, database architecture, authentication mechanisms, and potential data exposure risks.
The system follows a three-stage protocol: identifying vulnerabilities through scanning, determining whether they’re actually exploitable, and generating custom code fixes tailored to each organization’s architecture. When ready, developers review the proposed code and can deploy it with a single click.
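The three-stage flow described above can be sketched as a simple pipeline. This is a hypothetical illustration of the scan → triage → fix pattern, not OX Security's implementation; every class and function name here is invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    file: str
    issue: str
    exploitable: bool = False
    patch: Optional[str] = None

def scan(codebase: dict) -> list:
    # Stage 1: flag suspicious patterns (toy rule: string-built SQL).
    return [Finding(path, "possible SQL injection")
            for path, src in codebase.items() if "execute(f" in src]

def triage(finding: Finding) -> Finding:
    # Stage 2: decide whether the flaw is actually exploitable.
    # A real system would trace data flow; this sketch just marks it.
    finding.exploitable = True
    return finding

def propose_fix(finding: Finding) -> Finding:
    # Stage 3: draft a patch for a developer to review and deploy.
    if finding.exploitable:
        finding.patch = "use a parameterized query: execute(sql, params)"
    return finding

codebase = {"orders.py": 'cur.execute(f"SELECT * FROM orders WHERE id={oid}")'}
results = [propose_fix(triage(f)) for f in scan(codebase)]
print(results[0].patch)
```

Keeping the stages as separate functions mirrors the article's description: each agent examines the finding from its own angle before a human signs off on the final patch.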
Medical device implications
For medical device manufacturers, the implications are particularly significant. AI-powered medical devices surged by more than 80% between 2015 and 2020, with the FDA having approved over 500 such systems. These devices confront distinctive risks—including data corruption attacks, adversarial manipulation, and model poisoning that can compromise functionality before deployment.
The FDA’s recent requirements for AI-powered medical devices mandate security integration throughout development. Agent OX’s automated approach could help manufacturers meet these standards while maintaining their existing technical architectures.
Related article: FDA draft guidance requires ‘built-in’ cybersecurity for medical devices
Skepticism and scale
Some security experts question whether AI can solve crises it helped create. CrowdStrike’s research team, presenting findings at NVIDIA’s 2025 conference, cautioned that machine-generated code could trigger a tsunami of unaddressed vulnerabilities. Their experimental multi-agent security system, which shares similarities with OX’s design, hasn’t progressed beyond initial testing.
“Comprehensive security automation has shifted from optional to essential,” the CrowdStrike researchers emphasized.
The true challenge emerges when organizations attempt large-scale deployment. Will AI accurately interpret the subtleties of proprietary codebases? Can development teams trust machine-generated patches for mission-critical systems, where a bad fix could put patient safety on the line?