Issue link: https://aviatrix.com/resources/i/1494829
built into the data plane itself, not bolted on as individual inspection points. Aviatrix also supports NGFW service insertion, dynamically routing traffic to third-party firewalls or other native cloud services for inspection. Together, the Controller, CoPilot, and gateways form a cohesive system that delivers the Aviatrix Secure Cloud Network, which supports various networking and security solutions across the CSPs, such as the Aviatrix Secure Cloud Backbone. This brief focuses on the features and capabilities of the Aviatrix Secure Cloud Backbone. It reviews four best-practice designs, each creating a resilient, agile, secure, and cost-effective solution that helps solve the common industry pain points.

Initial State | Shared Industry Pain Points

IT and cloud network teams often begin their cloud journey by connecting their private corporate networks to the CSPs. This is achieved either with VPN connections over the Internet or, when things get serious, with private interconnect circuits such as Direct Connect, ExpressRoute, etc. These private circuits are typically hosted as layer 2 (switched) or layer 3 (routed) handoffs in global colocation ("Meet-Me") facilities, such as Equinix. Consuming these private circuits as a layer 1 (physical) handoff is possible by leasing lit metro fiber directly between the colo and the data center or campus location. However, this does not always come cheap, and as such, most private circuits are hosted in the colo, where the customer must stage their own networking and security gear.

To solve inter-region or inter-cloud routing requirements, enterprises often begin by routing traffic down from the CSP virtual gateways, backhauling it across their private network fabrics, and then sending it back up to a virtual gateway in another location. This pattern is called the "stovepipe" because of the siloed circuits between the various regions or clouds.
A more advanced (and popular) variation is to add a "bowtie" mesh between these circuits, so that the routers in the colo can forward traffic coming down from one region or cloud back up to another. This is also called "hairpinning," a routing tactic that pre-dates cloud connectivity. While private circuits are still the gold standard for hybrid connectivity, they do have pain points that come into sharp focus, especially when inter-regional or inter-cloud traffic is involved:

1. Inter-regional or inter-cloud traffic must hairpin through the colo or travel across the customer's private network, which can introduce severe latency. Latency not only increases round-trip time but, due to network mechanics, can also shrink the usable capacity of the pipe, an effect governed by the "bandwidth-delay product." Even though a customer might be paying for a 10 Gbps private circuit, they may only be able to push 1 Gbps or less between two regions separated by a great distance. Thus, the shortest physical distance is always desired.

[Figure: Industry Pain Points | No Cloud Backbone + No Virtual DMZ. Topology showing on-ramps in West US, East US, and West EU; Direct Connects (2 x 10 Gbps), ExpressRoute (2 x 10 Gbps), and Direct Connects (4 x 10 Gbps) linking them over a private network fabric to Customer DC West, Customer DC East, and Customer Campus EU.]
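The latency effect in point 1 can be sketched numerically: a single TCP flow can carry at most one window of data per round trip, so round-trip time, not circuit speed, often sets the throughput ceiling. A minimal illustration (the 100 ms RTT and 12.5 MB window values below are illustrative assumptions, not figures from this brief):

```python
# Upper bound on a single TCP flow's throughput: window size / round-trip time.
def max_tcp_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    return window_bytes * 8 / rtt_seconds

# Bandwidth-delay product: bytes that must be "in flight" to keep a link full.
def bdp_bytes(link_bps: float, rtt_seconds: float) -> float:
    return link_bps * rtt_seconds / 8

# A 10 Gbps circuit at 100 ms RTT needs ~125 MB in flight to stay full.
# A flow limited to a 12.5 MB window tops out near 1 Gbps on that path,
# yet the same window over a 10 ms path could fill the entire 10 Gbps pipe.
print(f"BDP of 10 Gbps at 100 ms: {bdp_bytes(10e9, 0.100) / 1e6:.0f} MB")
print(f"12.5 MB window, 100 ms RTT: {max_tcp_throughput_bps(12.5e6, 0.100) / 1e9:.1f} Gbps")
print(f"12.5 MB window,  10 ms RTT: {max_tcp_throughput_bps(12.5e6, 0.010) / 1e9:.1f} Gbps")
```

This is why shrinking the physical path pays off directly: cutting the RTT by a factor of ten raises the per-flow ceiling by the same factor without touching the circuit.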