Explore our Topics:

Investors flock to agentic AI as startups pitch fixes for cybersecurity gaps

Agentic AI fuels a cybersecurity arms race as startups, investors, and regulators push new tools and standards.
By admin
Sep 16, 2025, 3:40 PM

Venture money is surging into tools that defend against (and sometimes are themselves) autonomous software agents, more commonly known as agentic AI. These systems plan, act, and coordinate with minimal human prompts, which is a double-edged sword for security leaders. AI agents can triage alerts, fuzz APIs, and red-team SaaS at machine speed, but they can also jailbreak themselves, siphon data, and open attack paths unintentionally.

While governing bodies tighten regulations, investors are betting big that a new crop of AI security companies will become as ubiquitous as endpoint detection and response (EDR) organizations did a decade ago. Last month, the EU AI Act ushered in obligations for general-purpose models and set a timeline for high-risk systems, formalizing expectations around accuracy, robustness and cybersecurity, a compliance backdrop that favors security-by-design vendors.

U.S. guidance is coalescing, too, with CISA, the NSA, and partner agencies publishing playbooks on securely deploying external AI systems and protecting data across the AI lifecycle. These pragmatic checklists can be mapped to existing controls by security teams. Further, NIST’s Adversarial Machine Learning, released earlier this year, gives risk owners a common vocabulary to assess adversarial attacks, including data poisoning, evasion attacks, and supply-chain exposures across both predictive and generative AI systems. When taken together, this guidance supports the approach of treating AI like any other high-risk software component requiring comprehensive security assessment and mitigation strategies.

 

The investor checkbooks are out

The market is responding, both on offense and defense. Autonomous security testing and “AI copilots” for the Security Operations Center (SOC) are attracting fresh rounds of investment, while a parallel wave is targeting the AI layer itself with AI firewalls, agent graph scanning, and measures to harden LLM applications against data exfiltration and prompt injection. M&A is already consolidating the category; in early September, Secure Access Service Edge (SASE) Cato Networks acquired AI-security startup Aim Security to fold AI posture management and an “AI firewall” into its cloud platform. This acquisition signals that mainstream platforms are beginning to regard AI safety features as a competitive necessity.

The Cato-Aim deal is one headline in a broader surge of activity. Across the ecosystem, founders are pitching everything from autonomous red-teaming to identity for agents, and investors are buying in. Here’s a closer look at the players shaping the early investor market and the bets they’re making:

  • Lakera — $20M Series A (Atomico; 2024). One of the earliest to productize “LLM attack surface management,” Lakera sits in line with apps to block prompt injection, data leaks, and tool-abuse in real time. Its pitch: production-grade controls that don’t break developer velocity.
  • Protect AI — $60M Series B (2024). Built for ML security and governance, Protect AI inventories models, traces lineage, and scans pipelines for secrets and policy violations the CI/CD of MLOps. Enterprises gravitate to its breadth across dev and runtime.
  • ai — $100M Series D (2025). Best known for autonomous pen testing (NodeZero), the company automates attack paths across hybrid estates, increasingly relevant for agent-heavy apps and API sprawl.
  • CalypsoAI — >$40M raised; +$5M in 2025. Focused on “inference-layer” security: guardrails, agentic red-teaming, and observability at the point where models decide and act. The company added capital this spring and was an RSAC Innovation Sandbox finalist.
  • Vouched — $17M Series A (2025). Identity for agents, not just humans. As enterprises wire autonomous agents into finance and ops, Vouched is building IDV and trust brokering so agents can authenticate, authorize, and transact without handing out skeleton keys.
  • Prompt Security — acquired by SentinelOne (Aug. 2025). One of the first pure-play GenAI security platforms, Prompt Security moved from startup to a strategic asset as incumbents pulled AI security into endpoint and data-protection suites; reported deal value: $180M.
  • Aim Security — acquired by Cato Networks (Sept. 2025). Brings AI-SPM and an “AI firewall” to a SASE platform, an instructive signal that network and AI security controls will converge at the edge.
  • SplxAI — $7M seed (2025). Offensive AI security at startup speed, rapid red-teaming that simulates thousands of attacks against LLM apps, plus an open-source “Agentic Radar” to track multi-agent risks.
  • Cranium — $25M Series A (2023; continuing momentum). A governance and trust layer for enterprise AI, with asset mapping, risk scoring, and compliance reporting aligned to frameworks like the EU Act and NIST. Early mover advantage with blue-chip partners.

Not every entrant leads with a funding splash. At RSA this year, data-security vendor Sentra rolled out “Data Security for AI Agents,” a sign that established players are retooling for agent-centric workflows even without fresh cap tables attached to the feature launch. And there’s a long tail of LLM-first platforms (e.g., Lasso Security) that started in 2023 with seed rounds and have been building traction in verticals like healthcare and the public sector.

 

A perfect storm for venture bets

In addition to the regulatory pressure that is pushing companies into cataloging their models and adopting technical safeguards, two other forces are driving the market forward and attracting investors. One is the mounting body of evidence that the risks are real. Federal playbooks and industry-run exercises have shown how AI systems can leak data and be manipulated or abused in unexpected ways. Regulators are telling security chiefs to classify these models as high-risk assets and layer on compensating controls.

The other force is the pull of the big platforms. Cloud, networking, and endpoint giants are building or buying their way into AI security so they can keep these threats within their existing consoles, a logic that helps explain the recent Aim and Prompt acquisitions.

 

Caveats, unanswered questions, and what comes next

Claims about the accuracy of “AI firewalls” and monitoring tools are still unproven, and experts say buyers should insist on red-team results rather than flashy dashboards. The placement of those defenses is just as important. Unless controls are embedded at the points where prompts, data, and tools actually interact (through gateways, plugins, or vector databases), security gaps are likely to multiply.

Policy shifts are adding another layer of uncertainty. In Europe, voluntary codes of practice for general-purpose AI are expected to harden into binding technical requirements. In the United States, the absence of a single AI law leaves oversight in the hands of sector regulators and government procurement rules.

For now, the most pragmatic strategy may be a cautious one; align AI deployments with established security frameworks and test pilot tools that can take on the unglamorous but vital work of asset discovery, policy enforcement, and real-time observability.

Security leaders should expect a wave of “agents for defense” marketed as autonomous assistants inside the SOC, along with more dealmaking from the big network and identity players. If agentic AI takes hold in back-office software, the next scramble will center on identity, not just for people, but for the agents themselves, and on the audit trails that prove who, or what, carried out an action.


Show Your Support

Subscribe

Newsletter Logo

Subscribe to our topic-centric newsletters to get the latest insights delivered to your inbox weekly.

Enter your information below

By submitting this form, you are agreeing to DHI’s Privacy Policy and Terms of Use.