Published by Aviatrix Security Research Team 

The AI security era isn’t coming — it’s already here. 

Across red teams, research labs, and increasingly, real-world intrusions, AI models are being weaponized to generate exploits, manipulate code, bypass traditional defenses, and automate entire attack chains. This is an architectural shift. Instead of hardcoded payloads, we’re entering a world where malware writes itself at runtime, using large language models (LLMs) embedded directly into attacker tooling. 

The cybersecurity industry has long anticipated vulnerabilities in how AI is used: prompt injection, model poisoning, overreliance. But what happens when AI becomes the attacker? 

That question became reality when ESET uncovered PromptLock, the world’s first known AI-powered ransomware. Unlike traditional ransomware written line-by-line by a human operator, PromptLock uses a locally hosted LLM to dynamically generate malicious Lua scripts on the victim’s machine, in real time. 

While PromptLock was later confirmed as a proof-of-concept built by NYU researchers, it marks a definitive shift in the threat landscape. Defenders are about to face a wave of AI-native threats, and most environments today are not prepared.  

What Is PromptLock?

PromptLock was discovered on August 25, 2025, when samples were uploaded to VirusTotal. ESET researchers quickly flagged it as the first ransomware variant to integrate a generative AI model directly into its payload. 

Unlike conventional ransomware, PromptLock doesn’t contain fixed encryption logic. Instead, it leverages: 

  • A Golang-based loader 

  • A locally hosted LLM (GPT‑OSS:20B) via the Ollama API 

  • AI-generated Lua scripts, created dynamically at runtime 

  • Cross-platform compatibility (Windows, Linux, macOS) 

The LLM runs locally: no external API calls, no command-and-control infrastructure. Everything happens on the endpoint, in memory, with behavior that adapts on every run. 
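That local-only interaction is worth making concrete for defenders. Ollama exposes a plain HTTP API on localhost (port 11434 by default), so any process on the host can query the model with no credentials and no external traffic. A minimal sketch using only the Python standard library (the prompt is illustrative; the model name is the one reported for PromptLock):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build the HTTP request any local process could use to query an Ollama model."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

# No credentials required, and nothing ever crosses the network perimeter:
req = build_generate_request("gpt-oss:20b", "Write a short Lua script")
# urllib.request.urlopen(req)  # would only succeed if an Ollama server is running
```

The defensive takeaway is the detection opportunity it implies: unexpected processes talking to port 11434 on localhost are a signal worth monitoring.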

Attack Scenario: AI-Driven Breach in Real Time

Here’s how a PromptLock-style attack could unfold: 

  • A user unknowingly runs a malicious binary. 

  • The binary launches a local instance of GPT‑OSS:20B using the Ollama API. 

  • The AI is prompted to generate Lua scripts that: 

      ◦ Scan and index files 

      ◦ Attempt data exfiltration 

      ◦ Encrypt sensitive directories using polymorphic logic 

Each execution results in unique scripts, rendering traditional signature-based AV and EDR ineffective. 

Dark Reading highlights that the AI model enables limitless script variations, making detection far more complex. Exfiltration may target cloud storage like S3 or other public endpoints—without ever calling out to a known C2 domain.  
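The signature problem is easy to demonstrate: two generated scripts can behave identically while sharing no bytes a hash-based signature could match. A toy illustration (the Lua snippets are invented for this example, differing only in variable names, which is exactly the kind of variation an LLM introduces on every run):

```python
import hashlib

# Two Lua snippets with identical behavior but different text (renamed variables).
script_a = b'local t = {} for f in io.popen("ls"):lines() do t[#t+1] = f end'
script_b = b'local files = {} for n in io.popen("ls"):lines() do files[#files+1] = n end'

hash_a = hashlib.sha256(script_a).hexdigest()
hash_b = hashlib.sha256(script_b).hexdigest()

# Same behavior, completely different signatures -- a hash written for one
# generated payload will never match the next.
print(hash_a == hash_b)  # prints False
```

Signature databases scale with the number of known samples; a generative model produces an effectively unbounded sample set.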

Why Traditional Defenses Fail

IAM (Identity & Access Management)

IAM enforces access, not behavior. If malicious AI-generated scripts run under a valid identity, IAM doesn’t intervene. 

EDR (Endpoint Detection & Response)

EDRs rely on known behaviors and signatures. PromptLock generates logic at runtime, often evading traditional agent-based monitoring. 

CNAPP (Cloud-Native Application Protection Platforms)

CNAPPs focus on posture and misconfiguration instead of live, in-memory execution. They won’t catch a self-mutating Lua script inside a workload. 

NGFWs (Next-Gen Firewalls)

NGFWs inspect edge traffic. PromptLock operates locally, without making external calls—evading everything perimeter-based. 

Bottom line: every tool security teams currently rely on to stop ransomware is blind to this class of attack.  

Business Risk

AI-generated ransomware is polymorphic, evasive, and increasingly open-source accessible. PromptLock previews the future of ransomware-as-a-service (RaaS), where payloads are assembled on demand by AI, not written by human actors. 

  • Exfiltration risk: LLMs can scan and act on sensitive data, rendering encryption a secondary concern. 

  • Detection delay: No signatures, no repeatable logic — anomaly models can’t keep up. 

  • Cloud impact: The AI can enumerate IAM roles, API keys, and storage paths in seconds, perfect for lateral movement and privilege escalation. 

PromptLock may have started as a research project, but it exposed a very real blind spot in cloud defenses: what happens after initial access, when malware generates itself and runs in memory. Defending against this evolution requires inline, runtime enforcement woven directly into the cloud network fabric.  

How CNSF Helps Break the Chain

Aviatrix Cloud Network Security Fabric (CNSF) is purpose-built for runtime defense across multicloud environments. 

Runtime Flow Detection

Identify Lua scripts triggering unexpected east-west movement, file scans, or exfiltration attempts. 

Zero Trust Segmentation

Prevent runtime-generated malware from spreading across cloud accounts, subnets, or regions. 
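As an illustration of the policy model only (this is not the SmartGroups or CNSF API), zero trust segmentation reduces to an explicit allowlist of group-to-group flows, with everything else denied by default:

```python
# Hypothetical tag-based segmentation policy: a flow is permitted only if an
# explicit rule allows the source group to reach the destination group.
ALLOWED_FLOWS = {("web", "app"), ("app", "db")}  # invented example policy

def flow_permitted(src_group: str, dst_group: str) -> bool:
    """Deny-by-default: anything not explicitly allowed is dropped."""
    return (src_group, dst_group) in ALLOWED_FLOWS

print(flow_permitted("web", "app"))  # prints True
print(flow_permitted("web", "db"))   # prints False -- no lateral shortcut
print(flow_permitted("db", "app"))   # prints False -- rules are directional
```

Because the policy is expressed over workload identity rather than payload content, it holds even when the malware's code is different on every execution.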

DNS & Egress Controls

Block outbound flows to exfil endpoints — even if they’re AI-selected and previously unseen. 
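The same deny-by-default idea applies to egress. A hypothetical sketch (the hostnames and allowlist are invented, and this is not CNSF configuration syntax): instead of blocklisting known-bad destinations, only approved destinations are reachable, so a never-before-seen exfil endpoint fails by default.

```python
# Invented allowlist for illustration -- the point is the policy direction:
# approve known-good destinations rather than chase unknown-bad ones.
ALLOWED_EGRESS = {"updates.example.com", "api.internal.example.com"}

def egress_allowed(destination_host: str) -> bool:
    """Permit outbound flows only to explicitly approved destinations."""
    return destination_host in ALLOWED_EGRESS

# An AI-selected, previously unseen exfil endpoint is denied automatically:
print(egress_allowed("random-bucket.s3.amazonaws.com"))  # prints False
print(egress_allowed("updates.example.com"))             # prints True
```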

High Performance Encryption

Keep data encrypted at rest and in motion — so AI-driven payloads have nothing readable to exploit. 

CoPilot & SmartGroups

Continuously adapt segmentation and enforcement based on workload identity, behavior, and flow context. 

The Breach Chain: AI-Style

[1] Local Binary Execution →  [2] Ollama Starts GPT Model →  [3] AI Generates Lua Scripts →  [4] Scans Files + IAM Roles →  [5] Exfiltrates & Encrypts Data →  [6] Optional Destruction Stage (Not Yet Active)  

Each run is unique. 

You can’t pre-detect it. You can only enforce at runtime. 

Take Action Now

PromptLock is just the beginning. Future malware will generate itself in real time. 

Static policies, SIEM rules, and edge firewalls can’t catch threats that don’t exist until they run. Organizations need fabric-native runtime enforcement that adapts as fast as AI does. 

Aviatrix CNSF delivers: 

  • Inline segmentation across multi-cloud environments 

  • DNS and egress controls at the workload level 

  • Live visibility into AI-generated flows 

  • Identity-aware enforcement with SmartGroups 

  • Full-path encryption for data-in-motion protection 

Don’t just detect threats; break the breach chain before it starts. 

Ransomware is evolving. Your defenses should too. 

Learn more about how CNSF can enforce zero trust principles for AI workloads.

Schedule a demo to see CNSF in action.  

References

BetterWorldTechnology, “AI-Powered Ransomware 'PromptLock' Emerges, Leveraging OpenAI's GPT Model,” August 30, 2025, https://www.betterworldtechnology.com/post/ai-powered-ransomware-promptlock-emerges-leveraging-openai-s-gpt-model.  

CyberScoop, “NYU team behind AI-powered malware dubbed ‘PromptLock’,” September 5, 2025, https://cyberscoop.com/ai-ransomware-promptlock-nyu-behind-code-discovered-by-security-researchers/. 

Dark Reading, “AI-Powered Ransomware Has Arrived With 'PromptLock',” August 27, 2025, https://www.darkreading.com/vulnerabilities-threats/ai-powered-ransomware-promptlock. 

ESET, “First known AI-powered ransomware uncovered by ESET Research,” August 26, 2025, https://www.welivesecurity.com/en/ransomware/first-known-ai-powered-ransomware-uncovered-eset-research/.  

ESET, “ESET discovers PromptLock, the first AI-powered ransomware,” August 27, 2025, https://www.eset.com/us/about/newsroom/research/eset-discovers-promptlock-the-first-ai-powered-ransomware/.    

Benson George

Sr. Principal Product Marketing Manager

Benson brings deep experience across the security stack—from securing connected devices and embedded systems to quantifying and reducing cloud attack surfaces and enforcing encryption standards. His threat-informed perspective on cloud architecture helps enterprises defend against today’s advanced attack techniques and tomorrow’s unknown risks.
