What kind of throughput can I expect from the AWS Transit Gateway (TGW) to my on-prem or customer environments?
Using the AWS Transit Gateway (TGW) is a great way to connect your VPCs and VPNs through a single routing entity powered by a native AWS networking substrate. Still, many of you may find yourselves wondering exactly what kind of performance you can expect when terminating a VPN on this new kind of connection. Let's do a quick breakdown of the moving parts in this scenario and get a better understanding of it.
Many of you may already be familiar with the traditional VPN connection models that land you in the cloud, and you may have several, if not hundreds, of them already set up in your network. To do a fair analysis of TGW VPN connectivity, let's first break down the model used with the AWS VGW:
Traditional AWS VPNs use the following components:
- VGW – Virtual Private Gateway: The AWS-side anchor of the VPN connection — your landing pad into the cloud and the doorway toward your on-prem environment.
- Customer Gateway: A logical representation of the on-prem device that terminates the VPN, capturing its configuration (public IP and, for dynamic routing, BGP ASN).
- Routing options: static routes or dynamic routing via BGP
- Inside tunnel CIDR and pre-shared key options for each tunnel
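The pieces above map roughly one-to-one onto API calls. As a hedged sketch (the function name, IDs, CIDR, and key below are illustrative placeholders; in practice you would pass a real boto3 EC2 client), wiring up a traditional VGW-based VPN looks something like this:

```python
def create_traditional_vpn(ec2, on_prem_ip, on_prem_asn):
    """Sketch of the traditional VGW-based site-to-site VPN setup.

    `ec2` is expected to behave like a boto3 EC2 client; the parameter
    values below are illustrative, not prescriptive.
    """
    # 1. VGW -- the AWS-side landing pad for the VPN connection.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]

    # 2. Customer Gateway -- logical record of the on-prem device:
    #    its public IP and (for dynamic routing) its BGP ASN.
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp=on_prem_ip, BgpAsn=on_prem_asn
    )["CustomerGateway"]

    # 3. The VPN connection tying VGW and CGW together.
    #    StaticRoutesOnly=False selects dynamic (BGP) routing;
    #    TunnelOptions carries the inside CIDR and pre-shared key
    #    (placeholder values shown here).
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        VpnGatewayId=vgw["VpnGatewayId"],
        CustomerGatewayId=cgw["CustomerGatewayId"],
        Options={
            "StaticRoutesOnly": False,
            "TunnelOptions": [
                {"TunnelInsideCidr": "169.254.10.0/30",
                 "PreSharedKey": "example_psk_1"},
            ],
        },
    )["VpnConnection"]
    return vpn
```

Passing the client in makes the flow easy to exercise against a mock before you run it for real.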
In the graphic below, you can see how these components come together to provide IP routing accessibility from an on-prem or third-party network to your AWS Cloud.
There are several more decisions involved in how you set up your connectivity, such as whether you use the AWS client, build a site-to-site tunnel, or deploy a third-party appliance, but we will keep the focus on the AWS side of the equation for this discussion.
Setting up a VPN connection via the new AWS Transit Gateway (TGW) is an almost identical process, except that you no longer select a VGW to connect to. You now terminate directly on the AWS Transit Gateway.
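In API terms, the change is as small as the process suggests: the VPN connection references a `TransitGatewayId` where it used to reference a `VpnGatewayId`. A hedged sketch (placeholder IDs; `ec2` should behave like a boto3 EC2 client):

```python
def create_tgw_vpn(ec2, tgw_id, cgw_id):
    """Sketch: terminating a site-to-site VPN directly on a TGW.

    Identical to the VGW flow, except the connection references a
    Transit Gateway instead of a Virtual Private Gateway.
    """
    return ec2.create_vpn_connection(
        Type="ipsec.1",
        TransitGatewayId=tgw_id,       # <-- instead of VpnGatewayId
        CustomerGatewayId=cgw_id,
        Options={"StaticRoutesOnly": False},
    )["VpnConnection"]
```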
But now that we have this awesome new regionally distributed routing layer that works across all of our local VPCs, we should get a big boost in performance on VPN connections to the corporate office, right?
Because you are still using an IPsec tunnel to create the VPN connection, you face the same site-to-site VPN bandwidth limitations as in the traditional VGW architecture. There are a few band-aids and hat tricks you can use to increase performance, such as aggregating separate VPNs with the ECMP function AWS provides, but that approach has its own roadblocks, which we discuss in great detail here:
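Some quick back-of-the-envelope math shows both the appeal and the catch of ECMP aggregation. The ~1.25 Gbps figure below is the commonly cited per-tunnel cap for AWS site-to-site VPN — treat it as an assumption, not a guarantee — and the catch is that any single flow hashes onto one tunnel, so only many parallel flows see the aggregate:

```python
# Commonly cited per-tunnel cap for AWS site-to-site VPN (assumption).
PER_TUNNEL_GBPS = 1.25

def ecmp_aggregate_gbps(num_tunnels: int) -> float:
    """Best-case aggregate across ECMP tunnels (many parallel flows)."""
    return num_tunnels * PER_TUNNEL_GBPS

def single_flow_gbps(num_tunnels: int) -> float:
    """A single flow hashes onto one tunnel, so it never exceeds the
    per-tunnel cap no matter how many tunnels you aggregate."""
    return PER_TUNNEL_GBPS
```

So four ECMP tunnels give roughly 5 Gbps in aggregate, while any one flow — a single large file transfer, say — is still pinned near 1.25 Gbps.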
You could also lease your own AWS-flavored digital subscriber line, known as Direct Connect. The speeds possible are impressive, and you can aggregate several connections together if you really want to get crazy. But for those who want to get crazy on a budget, we have a solution built just for you, and we call it InsaneMode™ Encryption.
Above is one of the more common architectures our customers employ when using InsaneMode to solve their connection performance dilemma. Security domains are carved into the routing implementation to define segmentation, VPC-to-VPC connectivity is accelerated up to 22 Gbps, and our hybrid transit model gives the on-prem connection a giant boost to 10 Gbps. More can be read about InsaneMode by visiting our docs page here:
When an organization decides to make the Layer 3 leap into the world of the AWS Transit Gateway, it's important to know that the TGW doesn't solve every problem under the sun. That is why Aviatrix designed a Native-Plus Architecture that works directly on top of the AWS Hyperplane virtual substrate: a complementary software design that extends the functionality of the AWS Transit Gateway (TGW) and completes its business use case.