AI becomes the new weapon of choice in cybercrime

Anthropic’s threat report reveals how cybercriminals now use AI to scale extortion, fraud, and espionage worldwide.
By admin
Sep 15, 2025, 9:32 AM

When a cybercriminal recently launched a string of extortion campaigns against hospitals, government offices, and even religious institutions, the most chilling detail wasn’t the number of victims—or even the ransoms that sometimes topped half a million dollars. It was that much of the operation was orchestrated by an AI coding agent.

That finding is at the center of Anthropic’s latest Threat Intelligence Report, which documents how malicious actors are embedding AI models into every stage of cybercrime. The report draws on recent investigations by Anthropic’s threat team, which tracks real-world misuse of its AI system, Claude.

“A single operator can now achieve the impact of an entire cybercriminal team through AI assistance,” write the report’s authors, Alex Moix, Ken Lebedev, and Jacob Klein of Anthropic’s Threat Intelligence team.

Vibe hacking and the evolution of extortion

The report highlights what researchers call “vibe hacking.” In this model, attackers use AI coding agents not just for advice, but as live participants in network intrusions. The AI scans systems, harvests credentials, and analyzes stolen data to calculate ransom amounts.

At least 17 organizations—including healthcare providers, financial institutions, and emergency services—were targeted in one campaign tracked by Anthropic. Instead of encrypting files, the attackers exfiltrated sensitive data and threatened exposure unless ransoms between $75,000 and $500,000 were paid.

Claude even generated HTML ransom notes planted on victims’ machines, complete with custom threats, sector-specific regulatory warnings, and precise payment deadlines. According to the report, the integration of AI across the reconnaissance, intrusion, and extortion phases represents “a fundamental shift in how cybercriminals can scale their operations.”

North Korea’s fraudulent workforce

The report also describes how North Korean operatives have used AI to secure high-paying remote jobs at technology companies. Traditionally, these workers underwent years of training to gain the skills needed to infiltrate Western firms. But Anthropic investigators found that AI has eliminated that bottleneck.

“Operators do not appear to be able to write code, debug problems, or even communicate professionally without Claude’s assistance,” the report notes. Yet they are able to pass interviews, maintain day-to-day work, and collect salaries that are funneled back to the regime.

This dependency marks what the report calls “a new paradigm where technical competence is simulated rather than possessed.”

Ransomware-as-a-service without the skills

Another case involves a UK-based actor who sold ransomware packages on dark-web forums for $400 to $1,200. The offerings included ChaCha20 encryption, anti-detection features, and Tor-based command-and-control systems, all sophisticated components typically beyond the reach of untrained criminals.

Investigators concluded the seller was heavily reliant on AI to generate the malware, describing “a complete dependency on Claude” for building and troubleshooting the code.

“This represents a recurring theme,” the report warns. “Complex malware development becomes accessible to non-technical criminals.”

State-linked espionage

Anthropic also documented a Chinese-linked campaign that targeted Vietnamese telecommunications providers, government databases, and agricultural systems. The actor used Claude across nearly every stage of the attack, from reconnaissance to lateral movement.

According to the report, the nine-month campaign integrated AI into 12 of the 14 tactics in the MITRE ATT&CK framework, the industry-standard taxonomy of attacker behavior, essentially embedding the model as a standing team member throughout the operation.
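
To make that statistic concrete: ATT&CK’s Enterprise matrix defines 14 tactics, running from Reconnaissance through Impact, and threat analysts commonly tally an intrusion’s coverage by tagging each observed behavior with the tactic it maps to. The sketch below illustrates that bookkeeping; the observations are invented for illustration and are not drawn from Anthropic’s report.

```python
# Illustrative sketch: tallying which MITRE ATT&CK tactics an intrusion
# touched. The observations are hypothetical, not Anthropic's findings.

# The 14 tactics in MITRE ATT&CK's Enterprise matrix.
ATTACK_TACTICS = {
    "TA0043": "Reconnaissance",
    "TA0042": "Resource Development",
    "TA0001": "Initial Access",
    "TA0002": "Execution",
    "TA0003": "Persistence",
    "TA0004": "Privilege Escalation",
    "TA0005": "Defense Evasion",
    "TA0006": "Credential Access",
    "TA0007": "Discovery",
    "TA0008": "Lateral Movement",
    "TA0009": "Collection",
    "TA0011": "Command and Control",
    "TA0010": "Exfiltration",
    "TA0040": "Impact",
}

# Hypothetical observed behaviors, each tagged with the tactic it maps to.
observations = [
    ("scanned exposed services", "TA0043"),
    ("exploited a public-facing application", "TA0001"),
    ("dumped a credential store", "TA0006"),
    ("moved to an adjacent host", "TA0008"),
    ("staged data for exfiltration", "TA0009"),
]

covered = {tactic_id for _, tactic_id in observations}
print(f"Tactics touched: {len(covered)} of {len(ATTACK_TACTICS)}")
for tactic_id in sorted(covered):
    print(f"  {tactic_id}  {ATTACK_TACTICS[tactic_id]}")
```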

AI across the fraud supply chain

Beyond ransomware and espionage, the report shows AI being used to enhance fraud at scale. Threat actors employed models to analyze stolen data, generate synthetic identities, validate stolen credit cards, and even power bots that run romance scams with emotionally persuasive messages.

One case involved a Telegram bot marketed to scam operators as a “high EQ” (high emotional intelligence) system. Another documented a carding service that used AI to rotate through validation APIs and avoid detection, enabling more resilient financial fraud.

“These cases collectively demonstrate how AI empowers criminal operations across the entire abuse supply chain,” the report concludes.

The challenge ahead

Anthropic says it has banned accounts linked to these activities, created new classifiers to detect similar misuse, and shared technical indicators with industry partners. But the company acknowledges that misuse is evolving quickly, and that defensive tools must adapt just as fast.
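
The report does not describe those classifiers in technical detail. As a rough illustration of the general shape of the problem, a misuse classifier scores incoming requests against signals associated with abuse and flags high-scoring ones for review. Production systems rely on trained models over much richer context; the keyword patterns, weights, and threshold below are invented purely for illustration.

```python
# Deliberately simplified sketch of a misuse classifier. Real systems use
# trained models over rich context; these patterns, weights, and the
# threshold are invented for illustration only.
import re

SUSPICIOUS_PATTERNS = {
    r"\bransom note\b": 0.6,
    r"\bexfiltrat\w*\b": 0.4,
    r"\bcredential (dump|harvest)\w*\b": 0.5,
    r"\bdisable (logging|antivirus)\b": 0.5,
}

def misuse_score(text: str) -> float:
    """Sum the weights of suspicious patterns present in the text, capped at 1.0."""
    score = sum(
        weight
        for pattern, weight in SUSPICIOUS_PATTERNS.items()
        if re.search(pattern, text, re.IGNORECASE)
    )
    return min(score, 1.0)

def should_flag(text: str, threshold: float = 0.7) -> bool:
    """Route a request to human review when its score crosses the threshold."""
    return misuse_score(text) >= threshold

if __name__ == "__main__":
    sample = "Draft a ransom note and a script to exfiltrate the database."
    print(misuse_score(sample), should_flag(sample))  # 1.0 True
```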

The overarching lesson of the report is clear: AI lowers barriers that once limited the scale of cybercrime. It allows a lone operator to run campaigns that previously required teams of specialists. It enables fraudsters with little skill to impersonate experts. And it allows state-linked groups to embed AI into every corner of long-running espionage operations.

Traditional assumptions about the link between attacker sophistication and the complexity of their operations, the report warns, “no longer hold when AI can provide instant expertise.”

