
In a recent webinar, Chris McHenry, Aviatrix Chief Product Officer, and I explored how the groundbreaking Aviatrix Kubernetes Firewall solution empowers organizations to secure, scale, and synergize complex networks.
Starting with the concept of microservices, we explained why Kubernetes is a powerful solution for rapid and easy deployments and how the Aviatrix Kubernetes Firewall helps resolve challenges while maintaining the ease-of-use and simplicity that makes Kubernetes attractive.
What You’ll Learn:
- A brief history of Kubernetes from microservices, to Docker, to today’s solution
- Why Kubernetes’s consumption of IP addresses, issues with workload identity, and other unique features create security and operations issues
- How the Aviatrix Kubernetes Firewall provides invaluable security and scalability for Kubernetes clusters
A Brief History: Why Kubernetes is a Powerful Solution
First, I gave a brief review of how the rise of microservices led to the development of Kubernetes. As organizations moved to the cloud, the concept of using microservices became popular. Microservices take advantage of the loose coupling model to pull applications apart into their component services. This method allows developers to focus on these pieces separately and develop each piece rather than risk endangering an entire monolithic application with each change.
The simplicity and user-friendly nature of Kubernetes, microservices, and containerization are why we call them “application modernization” — they help to simplify and scale application architecture.
Kubernetes is not compute or a network; it’s an orchestrator. Managing microservices – their lifecycles, updates, and production rollout – and building an application out of those microservices is the job of Kubernetes. I traced the origin of Kubernetes back to the original Docker Swarm orchestration model, which handled the care and feeding of container services so teams could assemble one or more functional applications from them.
Kubernetes arrived on the scene with “clusters.” A Kubernetes “cluster” is part compute and part management or control plane. It orchestrates compute and applications that will run on the compute.
Kubernetes took a mayfly approach to “pods,” the short-lived units that run microservices: they are regularly spun up to do some action and then decommissioned or rolled over to a new version. Kubernetes can gracefully sunset old pods and roll out the new version without patching or upgrading in place. This mayfly method made updates quick and easy, but it also means that Kubernetes rapidly uses and decommissions pods, which creates new challenges.
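To make the mayfly lifecycle concrete, here is a minimal Python sketch – a toy model, not real Kubernetes behavior or API calls – of a rolling update: new-version pods are created before old ones are drained, so the service never drops below its desired capacity, and every replacement pod carries a fresh identity (and, in practice, a fresh IP).

```python
# Toy model of a rolling update: new pods come up before old pods are
# retired, so capacity never drops below the desired count. Purely
# illustrative; not Kubernetes code.
import itertools

_ids = itertools.count(1)

def make_pod(version):
    """Each new pod gets a fresh identity (and, in practice, a fresh IP)."""
    return {"id": next(_ids), "version": version}

def rolling_update(pods, new_version):
    """Replace pods one at a time: add a new-version pod, then drain one old pod."""
    updated = list(pods)
    while any(p["version"] != new_version for p in updated):
        updated.append(make_pod(new_version))          # spin up replacement
        old = next(p for p in updated if p["version"] != new_version)
        updated.remove(old)                            # gracefully sunset old pod
    return updated

v1_pods = [make_pod("v1") for _ in range(3)]
v2_pods = rolling_update(v1_pods, "v2")
print([p["version"] for p in v2_pods])  # all pods now run v2
print(len(v2_pods))                     # capacity preserved: still 3 pods
```

Note that the three v2 pods are entirely new objects – none of the original pods survives the update, which is exactly the churn the next section examines.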
Challenges of Kubernetes
I explained why some of the very features that make Kubernetes a great solution create new challenges:
- Rapidly consuming IP addresses – Because Kubernetes uses loose coupling and orchestration, pods are ephemeral; dev teams create and decommission them rapidly. Ephemeral pods make things like dynamic IP address allocation challenging. This mayfly approach to containers means we are changing IP addresses as often as we change pods (that is, constantly) and eating through large numbers of IP addresses.
- Tracking workload identity – Workload identity becomes difficult to track outside of Kubernetes. Inside a Kubernetes cluster, there are solutions – some native, some developed in response to this problem – that handle IP address allocation, identity, and security within the cluster.
Solutions focused on Kubernetes such as CNI (container network interface) and service mesh work well within a Kubernetes cluster, but they create a large problem for Kubernetes traffic destined for non-Kubernetes destinations, such as PaaS, SaaS, or legacy VMs in the cloud or on-premises.
The Challenge of IP Exhaustion and Overlap
Kubernetes deployments require a very large IP pool since clusters are created and destroyed quickly. If a cluster runs out of IP addresses, you can’t instantiate new pods or orchestrate new workloads – so if you sunset an old part of your application, you can’t create a new one.
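A back-of-the-envelope calculation shows how fast a CIDR block disappears. The numbers below are hypothetical, but the arithmetic follows the standard sizing pattern: pods per node times node count, plus headroom for rolling updates that temporarily run old and new pods side by side.

```python
# Hypothetical sizing sketch: how quickly a /16 can be consumed by pods.
# All figures are illustrative assumptions, not recommendations.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # 65,536 addresses
usable = vpc.num_addresses

nodes = 200
max_pods_per_node = 110        # a common per-node upper bound
surge_factor = 1.25            # rolling updates briefly double-run some pods

peak_pod_ips = int(nodes * max_pods_per_node * surge_factor)
print(f"usable addresses: {usable}")                       # 65536
print(f"peak pod IPs needed: {peak_pod_ips}")              # 27500
print(f"clusters this VPC can host: {usable // peak_pod_ips}")  # 2
```

Two clusters of this (modest) size exhaust an entire /16 – which is why teams reach for NAT or address reuse, with the trade-offs described next.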
Organizations respond with a couple of common solutions:
- Solution 1: Kube-proxy allows non-routable IP addresses to be used within the Kubernetes cluster through NAT (network address translation) – every IP address is translated as it leaves the cluster.
- Problem with this solution: When you want to identify workloads and enforce security policy, everything looks the same; every pod, every workload, presents the same IP address, so a firewall can’t tell them apart.
- Solution 2: Reusing IP addresses from elsewhere in the business for Kubernetes in the cloud.
- Problem with this solution: Reusing IP addresses will work, but if you ever need to connect these workloads for any reason, you still have network challenges as well as security policy challenges to overcome; this is just postponing the inevitable.
IP Exhaustion as a Major Security Problem
The problem of IP address exhaustion and overlap represents a regression in security posture, one that organizations must address to scale and grow.
Here’s why: many organizations that use NAT to deal with IP address exhaustion must open their firewalls more broadly than they should. For example, if you have three different apps with three different security requirements and you use kube-proxy or NAT, you have made all three pods look like they came from the same IP address. To allow those pods to communicate, you must open your firewall wide enough to satisfy every app’s requirements at once – meaning each app inherits access the others needed. You’ve backslid from granular to broad security policies.
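The collapse of identity behind NAT can be sketched in a few lines of Python. Three pods with different security needs egress through one translated address; a firewall keyed on source IP sees a single source and must grant the union of all three apps’ permissions. (The pod names, IPs, and rules here are invented for illustration.)

```python
# Toy model: source NAT makes distinct pods indistinguishable to an
# IP-based firewall, forcing one broad rule instead of three narrow ones.
pods = [
    {"name": "payments", "pod_ip": "10.42.0.5",  "needs": {"db:5432"}},
    {"name": "frontend", "pod_ip": "10.42.1.9",  "needs": {"cdn:443"}},
    {"name": "batch",    "pod_ip": "10.42.2.17", "needs": {"s3:443"}},
]

NAT_IP = "203.0.113.7"  # everything egresses through one translated address

def source_seen_by_firewall(pod):
    """The pod's real IP is rewritten on the way out of the cluster."""
    return NAT_IP

# Per-pod policy is impossible: every pod presents the same source, so a
# single rule must allow the union of all three destinations.
sources = {source_seen_by_firewall(p) for p in pods}
broad_rule = {
    "src": sources.pop(),
    "allow": set().union(*(p["needs"] for p in pods)),
}
print(broad_rule)
```

The result is one rule allowing the payment pod to reach the CDN and object storage it never needed – the broad-policy regression described above.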
The Problem of Workload Identity
Another major issue with Kubernetes deployments is verifying identity for authorized access. A zero trust access approach is based on a user connecting to an app and supplying a credential. That credential is mapped against an identity store or framework that determines whether the remote user has authorized access.
However, this same level of scrutiny and identity verification is not given to the applications themselves. You can use organizational roles to apply these access guidelines to workloads, but not with the same granularity or with workload-specific attributes.
Because applications are decoupled and built into more microservices, identity and trust boundaries have never been more important or more difficult to implement.
App-to-app security is often either wide open or only crudely enforced, creating a significant security gap.
Within the cluster, we can establish workload identity using labels, namespaces, and clusters, and we have many options to identify workloads and to create and enforce dynamic security policies based on these Kubernetes-specific attributes. The challenge comes in connecting Kubernetes workloads to resources outside the cluster using security frameworks that rely on traditional identifiers like IP addresses – identifiers that are ephemeral and constantly changing within Kubernetes. This problem of workload identity makes it difficult to create granular security outside the cluster.
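Inside the cluster, policies can select workloads by attribute rather than address. The sketch below – plain Python with made-up workload names and labels – matches workloads by namespace and label the way a Kubernetes-style selector would, so the policy keeps working no matter how often pod IPs churn.

```python
# Toy label selector: policy matches on stable attributes (namespace,
# labels), not on ephemeral pod IPs. Names and labels are illustrative.
workloads = [
    {"name": "api-7f9c",   "ip": "10.42.3.2",  "namespace": "prod",
     "labels": {"app": "api", "tier": "web"}},
    {"name": "api-b113",   "ip": "10.42.8.40", "namespace": "prod",
     "labels": {"app": "api", "tier": "web"}},
    {"name": "cache-52d1", "ip": "10.42.5.11", "namespace": "prod",
     "labels": {"app": "cache"}},
]

def select(workloads, namespace, match_labels):
    """Return workloads whose namespace and labels satisfy the selector."""
    return [
        w for w in workloads
        if w["namespace"] == namespace
        and all(w["labels"].get(k) == v for k, v in match_labels.items())
    ]

# The same selector matches both api replicas regardless of their IPs;
# it would keep matching even after every pod was replaced.
selected = select(workloads, "prod", {"app": "api"})
print([w["name"] for w in selected])
```

An IP-based rule would have to be rewritten on every pod replacement; the attribute-based selector does not – but, as the text notes, this only works where those attributes are visible, i.e., inside the cluster.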
Common Solutions to Kubernetes Challenges: Service Mesh, CNI, and IP Recycling
Here are some of the main solutions organizations are using to tackle the challenges of Kubernetes IP exhaustion and overlap, security access, and identity:
Service Mesh
A service mesh is very tightly controlled and based on workload identity: an Envoy sidecar or a tunneling container connects the services together. It’s very tightly coupled with application identity.
The security issue lies not within a cluster, or between clusters secured by a service mesh, but in connecting those resources to legacy VMs, PaaS or SaaS services, or on-premises systems outside the mesh. Service meshes are incredibly secure and tight – but only for services within the mesh.
CNI
Another solution is CNI (container network interface). CNI offers network policies, but it’s inwardly focused on Kubernetes. You can connect outside the cluster, but CNI is static in a way that feels counter to what Kubernetes is meant to be. Policies are written to be static, overly permissive, and unable to use rich identity outside the cluster.
The integration difficulties with Kubernetes IP addresses and workload identity prevent many organizations from enjoying the benefits of Kubernetes or using new PaaS, SaaS, or cloud technologies like data lakes.
Workload identity is tied less and less to static attributes like IP addresses or network membership, and more to dynamically changing attributes like pods, namespaces, and services. This shift makes it very difficult to build scalable security policies that can react to changing environments.
Aviatrix Solutions for Kubernetes Challenges
Chris McHenry stepped in to explain how Aviatrix entered the scene with solutions to resolve Kubernetes challenges.
Aviatrix launched its Distributed Cloud Firewall (DCF) feature two years ago to reimagine what network security looks like in a cloud-first environment. Now, we’ve built the Aviatrix Kubernetes Firewall, which empowers organizations to access the speed and convenience of Kubernetes with scalable, enforceable security policies.
The Aviatrix Kubernetes Firewall brings both speed and security by:
- Resolving workload identity and IP address exhaustion – Building policies on dynamic Kubernetes attributes – tags, properties, or namespaces – instead of IP addresses, so that you never run out of them.
- Complementing other security solutions – Integrating with other solutions like CNI and service mesh to enforce network-wide security policies from a central management plane.
- Enforcing network segmentation – Implementing macro- and micro-segmentation through the Aviatrix SmartGroups feature, which gives you the flexibility to categorize resources across clouds and services.
- Moving towards a zero trust architecture – Implementing critical security features like high-performance encryption, geoblocking, threat detection, TLS proxy, eBPF filtering, and NAT gateways to protect your network’s traffic flows – east/west and north/south.
- Enforcing AI security – Setting ingress and egress policies to control what comes in and out of your network, for AI workloads as well as all others.
- Simplifying security – Aviatrix integrates with Kubernetes, cloud networks, and traditional networks and delivers one consistent security policy across all of them.
With the Aviatrix Kubernetes Firewall, you can maximize the speed and ease-of-use of Kubernetes while moving your network towards a zero-trust, robust security posture.
- Watch the full webinar here.
- Schedule a demo to learn more about how the Aviatrix Kubernetes Firewall can protect your network.
- Explore a decision guide about Kubernetes security.