Researchers show ransomware that writes itself

Researchers built proof-of-concept ransomware that can plan, execute, and evade traditional defenses without human operators.
By admin
Sep 26, 2025, 10:02 AM

A new study from NYU Tandon School of Engineering warns that ransomware no longer needs a human operator. Researchers have built a proof-of-concept showing how large language models (LLMs) can autonomously plan, adapt, and execute every stage of an attack — from scanning files to delivering personalized ransom notes.

From script kiddies to self-composing malware

Early ransomware strains like CryptoLocker and WannaCry operated on relatively static code: once security researchers obtained a sample, they could reverse engineer, detect, and block it. By the late 2010s, operations such as LockBit and Conti professionalized ransomware into “Ransomware-as-a-Service,” franchising malware to affiliates and layering in “double extortion” tactics—stealing data before encrypting it.

Ransomware 3.0, as defined by the NYU team, escalates this evolution. Instead of shipping fixed malicious code, attackers could embed seemingly benign instructions into a binary. At runtime, the orchestrator queries an LLM—open-source models like GPT-OSS-20B and GPT-OSS-120B were used in the experiments—to generate fresh ransomware code tailored to the target environment.

The system executes in four phases: reconnaissance, leverage, launch, and notify. In testing, it reliably scanned hosts, identified sensitive files, selected appropriate payloads (encrypting corporate servers, exfiltrating personal files, or destroying industrial controllers), and issued ransom notes that referenced victims’ own files.

The polymorphism is striking: no two runs generated identical code, complicating traditional defenses that rely on signatures or behavior baselines.

Why it matters

“This lowers the barrier to entry dramatically,” the authors write. Commodity hardware and open-weight models could allow even low-skilled actors to mount sophisticated campaigns. By removing the need for teams of coders and operators, the economics of ransomware shift. Instead of focusing only on million-dollar payouts from large organizations, attackers could efficiently target smaller victims once considered unprofitable.

Average ransomware payments topped $2 million in 2025, according to blockchain analytics firm Chainalysis. If personalized AI-written ransom notes boost compliance rates, the revenue potential only grows.

The stealth factor adds urgency. Traditional ransomware often leaves obvious traces—CPU spikes, mass file writes, or abnormal system calls. The NYU prototype showed a far lighter footprint, operating without the classic red flags that endpoint detection systems are tuned to spot.

Echoes in industry and policy

Security experts had previously warned that AI-assisted malware might soon cross this threshold. The threat appeared to become tangible when ESET researchers reported what they believed was the first AI-powered ransomware found in the wild, a sample dubbed “PromptLock” that used embedded instructions to harness an LLM at runtime. The sample was later identified as code the NYU team had produced during its own testing, but the fact that it was mistaken for live malware underscores the real-world feasibility of the capabilities the study describes.

The work also lands amid heightened concern about AI misuse in cybersecurity. Earlier this year, Kaspersky uncovered a malicious Python package hidden in Cursor, a popular AI-assisted development environment, that was used to exfiltrate cryptocurrency wallets. And research groups have already demonstrated AI-generated keyloggers such as “BlackMamba.”

Policymakers are also taking note. The AI Cyber Challenge, announced by the White House in 2023 and run by DARPA, pits teams against one another to use LLMs to find and fix software vulnerabilities, while bipartisan bills in Congress have sought to strengthen ransomware-reporting requirements for healthcare and energy providers.

But most current safeguards assume humans are writing the code. “Defenses built on yesterday’s ransomware playbook may not hold against adaptive, polymorphic malware generated in real time,” the NYU team warns.

Limits and countermeasures

To be clear, the researchers stress that their prototype was tested only in controlled lab environments and does not include persistence, privilege escalation, or lateral movement. It is not, in other words, a fully weaponized tool. Instead, it is a proof of feasibility—showing that AI can autonomously plan and execute a closed-loop ransomware lifecycle.

They suggest potential defenses: stricter monitoring of sensitive file access, honey-file traps to expose reconnaissance, and controls on outbound LLM connections (e.g., whitelisting providers, inspecting API traffic). Cloud providers, meanwhile, may need to refine abuse detection to spot and shut down malicious prompts.
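To make the honey-file idea concrete, here is a minimal sketch, not drawn from the paper, of a decoy-file watcher. The file paths and names are hypothetical, and a production deployment would hook the operating system's audit layer (for example, inotify or auditd on Linux) rather than polling access times.

```python
# Minimal honey-file trap sketch (illustrative only).
# Plants decoy files no legitimate process should touch, then alerts
# when one is accessed -- a possible sign of automated reconnaissance.
import os
import time

DECOYS = [
    "/srv/finance/passwords_backup.xlsx",  # hypothetical decoy paths
    "/srv/finance/payroll_2025.csv",
]

def plant_decoys():
    """Create the decoy files if they do not already exist."""
    for path in DECOYS:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        if not os.path.exists(path):
            with open(path, "w") as f:
                f.write("decoy")

def watch(poll_seconds=5):
    """Alert when any decoy's access time changes."""
    baseline = {p: os.stat(p).st_atime for p in DECOYS}
    while True:
        time.sleep(poll_seconds)
        for path in DECOYS:
            atime = os.stat(path).st_atime
            if atime != baseline[path]:
                print(f"ALERT: decoy file accessed: {path}")
                baseline[path] = atime

if __name__ == "__main__":
    plant_decoys()
    watch()
```

Note that access-time polling is unreliable on filesystems mounted with noatime; the sketch only illustrates the trap concept, not a hardened detection control.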

The road ahead

The NYU paper concludes with a stark reminder: generative AI is a dual-use technology. The same models that accelerate software development or assist in vulnerability discovery can be repurposed for scalable, automated extortion.

Industry analysts say the findings should accelerate conversations about “red-teaming” AI models and embedding stronger safety filters, but also about how law enforcement and regulators prepare for attackers who may no longer need a team—or even much technical skill.

The move from ransomware gangs to ransomware algorithms may be closer than defenders are ready for.
