“It’s inside the house” is a common trope horror movies use to heighten suspense and fear, the “it” being some evil character. In cybersecurity, you want to prevent bad actors from getting inside the house, and that is what all the tools in a typical defense-in-depth strategy focus on.
There are, however, other potential threats already inside your systems every single day: your employees and other authorized users. That is not to say they harbor ill intent. Disgruntled employees who intentionally sabotage systems are rare, but well-meaning employees can accidentally create vulnerabilities, and that human fallibility can lead to data leaks that are potentially as damaging as a hacker-led breach. This problem isn’t about people falling for malware or phishing scams; it’s about simple mistakes.
Here are some examples that underscore why the security controls built to find and stop external threats don’t work in these cases.
Human error in the development pipeline
Development teams build applications under challenging conditions. They work in multiple environments—development, QA/staging, production—and have to meet a wide range of functional, performance, regulatory, and security requirements with every piece of code. They are also under pressure to get this all done fast.
Imagine a developer working heads-down, trying to figure out the cause of a critical bug in an application. The clock is ticking and the pressure is on. The development environment only has a small amount of dummy data and they believe a more realistic data set is needed to test the code, so they connect their local development machine or a staging environment directly to a copy of the production database. They are so focused on trying to replicate the issue that they don’t realize they’ve inadvertently copied sensitive customer data into a less secure, unmonitored development environment.
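For illustration, here is a minimal sketch of how this shortcut often looks in code, assuming a Python service that reads its database target from an environment variable; the variable name and connection strings are hypothetical.

```python
# Hypothetical illustration of the shortcut: the names and connection
# strings below are invented for this example.
import os

# The committed default points at harmless dummy data in dev.
DEFAULT_DB_URL = "postgresql://dev:dev@localhost:5432/app_dev"

# Under deadline pressure, the developer overrides it with a copy of
# production, pulling real customer data into an unmonitored environment.
os.environ["DATABASE_URL"] = (
    "postgresql://readonly:s3cret@prod-db-replica.internal:5432/app_prod"
)

def get_db_url() -> str:
    """Return the database the app will actually talk to."""
    return os.environ.get("DATABASE_URL", DEFAULT_DB_URL)

print(get_db_url())  # the "dev" environment is now reading production data
```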
The developer has created a new weak point, one that could be easily exploited if that environment is ever compromised. If a true bad actor is able to penetrate the enterprise network, it will be easy for them to steal this copy of the data, which now lives outside the controls on the production system. And if there’s an audit, the company could fail it and be subject to fines and other penalties.
Human error in cloud operations
Clouds are complex environments with lots of configuration options, and security adds a whole new layer of complexity on top. Consider a cloud engineer who has been asked to spin up new infrastructure to support a strategic business initiative with C-level visibility. They need to move fast. To avoid breaking anything, and without a clear understanding of the new service’s exact outbound connection requirements, they configure an overly permissive “allow all” outbound policy for a group of virtual machines. They have now unintentionally created a direct, unguarded pathway for sensitive data to leak onto the public internet. In fact, these types of misconfigurations are consistently ranked among the leading causes of major cloud data breaches.
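As a concrete, hypothetical illustration, here is roughly what that “allow all” rule might look like if the environment were AWS managed with Python and boto3; the security group ID is made up.

```python
# Illustrative sketch only: an "allow all" egress rule, assuming AWS and
# boto3. The security group ID below is hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rather than enumerating the service's real destination requirements,
# the engineer opens all outbound traffic to keep the launch on schedule.
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",  # hypothetical group for the new VMs
    IpPermissions=[
        {
            "IpProtocol": "-1",  # every protocol, every port
            "IpRanges": [
                {
                    "CidrIp": "0.0.0.0/0",  # every destination on the internet
                    "Description": "TEMP: unblock the launch",
                }
            ],
        }
    ],
)
```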
Human error in AI use
In the entire history of internet-based technologies, no other innovation has been adopted as quickly and pervasively as artificial intelligence. Shadow AI has become a massive problem. Companies are scrambling to balance mandates to leverage AI for efficiency and competitiveness against security, by implementing AI policies and providing employees with vetted, sanctioned AI applications. But things are moving so fast in this arena that governance is struggling to keep up with adoption. IBM recently released a report showing that 63% of breached organizations either don’t have an AI governance policy or are still developing one.
Picture a marketing staffer who, in meeting after meeting, keeps hearing about the imperative of using AI in every aspect of campaign design and execution to boost results. They are a self-starter with a track record of getting things done, and their scrappiness, which includes constantly seeking out new tools to do their job better, is consistently rewarded by management. Excited by AI’s potential for their next project, they find a powerful new generative AI application. It’s one that a lot of people are using, and as a bonus, it’s free. So they upload sensitive documents (quarterly financial forecasts, product roadmaps, and M&A strategy papers) to the AI tool for analysis. They are so focused on the tool’s output that they don’t consider the highly sensitive nature of the inputs they are feeding it.
However, as part of its function, the AI tool ingests all the data and sends it to its own third-party cloud environment for processing. Without realizing it, the marketer has just performed a massive, unmonitored exfiltration of sensitive intellectual property.
Traditional security controls aren’t built for these insider vulnerabilities…
All of these examples underscore how easy it is for major data leakage vulnerabilities to open up within the enterprise network. And the fact that the security controls typically in place can’t identify them doesn’t necessarily reflect a failure on their part.
Many organizations use identity and access management (IAM) solutions to protect their databases. But in the case of the developer, an authorized user with valid credentials, the IAM solution granted access with full implicit trust; no additional controls questioned the context of that particular access request.
In the case of the cloud engineer, an auditing tool can detect the misconfiguration, but only after the fact, by which point data may already have leaked out. As already established, you cannot rely on humans to perfectly configure thousands of individual resources, yet there are often no controls in place to catch and fix the inevitable misconfigurations in real time.
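To make the “after the fact” point concrete, here is a minimal sketch of what such an audit check might do, again assuming AWS and boto3; real tooling would be more sophisticated, but the timing problem is the same.

```python
# A minimal sketch of a periodic audit: scan security groups for
# wide-open egress. Assumes AWS and boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def find_open_egress() -> list[str]:
    """Return IDs of security groups allowing all outbound traffic."""
    offenders = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissionsEgress", []):
                wide_open = rule.get("IpProtocol") == "-1" and any(
                    r.get("CidrIp") == "0.0.0.0/0"
                    for r in rule.get("IpRanges", [])
                )
                if wide_open:
                    offenders.append(sg["GroupId"])
    return offenders

# By the time this report runs (hourly? nightly?), data may already be gone.
print(find_open_egress())
```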
Shadow AI creates invisible data flows that bypass traditional security controls entirely. It’s not malware, so endpoint detection and response (EDR) solutions are blind to it. In the case of the marketer, as with the developer, the user is authorized to access the data, so IAM controls can’t help. The problem is the data flow itself, and that’s something traditional tools provide no visibility into or control over.
…But Aviatrix is
The Aviatrix Cloud Native Security Fabric (CNSF) represents the necessary architectural evolution: the missing foundational layer that enforces a true zero trust posture where it matters most. By embedding dynamic, policy-driven control directly into the data path of all workload-to-workload communication, the Aviatrix CNSF provides a unified control plane for visibility and policy across the entire hybrid and multicloud estate, turning a complex, unmanageable environment into a governable one.
Here’s how the Aviatrix CNSF could save the three intrepid employees, not from making the mistakes in the first place (no technology can do that), but from the potentially catastrophic consequences of those actions.
Preventing human error in the development pipeline
When the developer attempted to establish a connection with the production database, the Aviatrix CNSF would inspect the request, see that it violates a segmentation policy prohibiting traffic between the “Production-Database” and “Staging-Compute” security segments, and prevent the connection. Because separation of duties is enforced at the network layer, with inspection happening inline, the developer’s valid IAM credentials are irrelevant; the architectural policy takes precedence.
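As a toy illustration of why inline segmentation trumps valid credentials, consider the following Python model. This is not Aviatrix’s actual policy format; the segment names simply mirror the hypothetical scenario above.

```python
# Toy model of inline segmentation enforcement; not Aviatrix's real
# policy language. Segment names are from the hypothetical scenario.
DENY_BETWEEN = {("Staging-Compute", "Production-Database")}

def allow_connection(src_segment: str, dst_segment: str,
                     has_valid_iam: bool) -> bool:
    # The network-layer policy is evaluated inline, first. Note that
    # has_valid_iam never enters the decision: that is exactly why the
    # developer's valid login is irrelevant here.
    if (src_segment, dst_segment) in DENY_BETWEEN:
        return False
    return True

# The developer's request: valid credentials, forbidden path -> blocked.
print(allow_connection("Staging-Compute", "Production-Database",
                       has_valid_iam=True))  # False
```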
Preventing human error in cloud operations
When the cloud engineer misconfigured the outbound policy, the Aviatrix CNSF would act as a centralized, non-negotiable safety net across the entire cloud footprint, enforcing an egress filtering policy for the entire VPC or VNet. Any attempt by an internal process to send data out through that public gateway to an unapproved destination would be blocked by the fabric’s inline enforcement. An audit would still catch the misconfiguration down the line, but the more restrictive global policy would have already prevented any data leakage.
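Here is a toy Python model of that safety-net idea; it is not Aviatrix’s actual configuration, and the destination names are made up.

```python
# Toy model of a fabric-level egress allowlist; hypothetical domains.
APPROVED_EGRESS = {
    "api.payments-partner.example.com",
    "updates.vendor.example.com",
}

def egress_permitted(destination_host: str, local_sg_allows: bool) -> bool:
    # The permissive local rule is necessary but not sufficient: the
    # global allowlist is enforced inline, so an unapproved destination
    # is blocked no matter what the misconfigured security group says.
    return local_sg_allows and destination_host in APPROVED_EGRESS

# The "allow all" group lets this through; the fabric does not.
print(egress_permitted("exfil-bucket.attacker.example.net",
                       local_sg_allows=True))  # False
```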
Preventing human error in AI use
When the marketer initiates a new, anomalous data flow from an internal corporate workload to a previously unknown external AI service, the Aviatrix CNSF immediately lets the security team visualize this shadow AI activity on a network topology map and, armed with that visibility, apply policy. The Aviatrix CNSF can also be configured to block all communication to unvetted or unsanctioned external AI services, or to alert on large data transfers to any new destination, allowing the security team to investigate and create a formal governance process for AI tools.
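A toy Python sketch of that alerting logic might look like the following; the thresholds, hostnames, and flow fields are all hypothetical, not Aviatrix’s actual telemetry.

```python
# Toy sketch of "new destination + large transfer" alerting.
# All names and thresholds below are invented for illustration.
KNOWN_DESTINATIONS = {"crm.example.com", "mail.example.com"}
SANCTIONED_AI = {"approved-ai.example.com"}
LARGE_TRANSFER_BYTES = 50 * 1024 * 1024  # 50 MB, illustrative threshold

def review_flow(dst_host: str, bytes_out: int) -> str:
    if dst_host in SANCTIONED_AI or dst_host in KNOWN_DESTINATIONS:
        return "allow"
    if bytes_out >= LARGE_TRANSFER_BYTES:
        return "alert"  # new destination plus large upload: investigate
    return "log"        # new but small: record it for the topology map

# The marketer's upload of forecasts and roadmaps to an unvetted service:
print(review_flow("free-genai-tool.example.com",
                  bytes_out=120 * 1024 * 1024))  # alert
```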
Mistakes aren’t monsters, but they can be just as dangerous
Most security tools focus primarily on external threats, and that focus is crucial. But they are often powerless to identify, let alone fix, the internal mistakes that risk, or even cause, unintentional data leaks. Because these “breaches” start from the inside and can stem from a virtually infinite number of causes, you can’t rely on solutions that focus on preventing or catching “bad” behavior. Instead, you need to embed within your fabric an understanding of what “good” looks like, and then enforce it so that only explicitly allowed connections and access happen.
Download our white paper to explore how the Aviatrix CNSF embeds protection within a cloud architecture.
Schedule a demo to see Aviatrix CNSF in action.