Can Aviatrix provide access for client sites to on-premises SaaS services via AWS?

Some SaaS providers host services in an on-premises data center for their enterprise clients and need to provide client access to those services via the public cloud. This document describes how an organization can implement this access while using AWS Direct Connect to aggregate client traffic through an Access Hub VPC, to which each client's VPC peers over encrypted tunnels.

Typical business requirements for the cloud access use case are as follows:

  • A client should be able to sign up for service access from the provider's service portal and start using the services within a day
  • The client can cancel their service access from the service portal, and the access is terminated within a day
  • The design should not require a network engineer to be on call to sustain connectivity or to handle service access additions and deletions
  • The client VPC can be in any AWS region or in another cloud provider such as Azure. No inter-client communication is allowed in the access VPC or in the data center. Client traffic is fully encrypted in the cloud.
  • Clients should not be required to refactor their cloud IP addressing. The access network should be agnostic to the client's cloud addressing.
  • The uptime of the connectivity between on-prem and the access cloud should be sufficient to meet the service SLAs for the client.
  • The data center must not be compromised by client access. All traffic to and from the client needs to be inspected by a firewall in the access VPC before it enters or after it leaves the data center.

Aviatrix Access Hub VPC Solution for AWS

Aviatrix can build an access hub VPC with redundant gateways within a day. A client access VPC can then join the hub VPC within a few minutes, with or without High Availability (HA), and can leave the hub in a similarly agile fashion. These operations take only a few clicks in the controller's GUI, or calls to its RESTful APIs, which can be used to integrate with the customer's service portal backend. The overall architecture is shown in the following diagram:

Solution Main Components

  1. An access hub VPC aggregates traffic from the client (access) VPCs and connects to on-prem
  2. The access hub VPC provides a DMZ for client access from their clouds. Ingress traffic can be gated by destination CIDR and/or service endpoint URLs, ensuring a client can't reach other network spaces within the on-prem environment.
  3. AWS Direct Connect/VIF and an unattached VGW provide the private link between on-prem and the AWS cloud.
  4. An on-prem edge router peers with the VGW via eBGP over the VIF, extending your on-prem network to the connected VPCs over that private link.
  5. A centralized controller manages the gateways in the client access and hub VPCs and orchestrates the encrypted peering between each access VPC and the hub VPC. No inter-access-VPC communication is allowed at the hub VPC.

Solution Build Workflow

The AWS Direct Connect can be set up in parallel while the other components are assembled. The recommended workflow is as follows:

  1. Configure the on-prem edge router to run eBGP on the Direct Connect interface
  2. Launch a VPC, in the region where the Direct Connect terminates, to act as the access hub VPC
  3. Launch an Aviatrix controller AMI from the AWS Marketplace in the access hub VPC
  4. Follow this link to start up the controller and complete the onboarding process
  5. On the controller, follow the ‘Transit VPC’ page wizard to complete steps 1 to 3. The hub gateway will be securely connected to an unattached VGW.
  6. Steps 4 to 6 attach an access VPC to the hub VPC, provided the access VPC is launched with a public subnet. The hub gateway will be securely connected to the access gateway. The access VPC gateway should be launched with NAT enabled.

Step 6 above is executed when client access is requested. Step 7 in the wizard can be used to detach an access VPC when a client unsubscribes from the service. If your services are accessed by FQDN, you can set up private DNS, since the access VPC has connectivity to your on-prem DNS server. For large-scale deployments, the workflow can also be fully automated with Terraform.
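
As a rough illustration of how a service portal backend might drive these attach and detach operations programmatically, the sketch below posts actions to the controller's v1 REST endpoint. The login/CID pattern follows Aviatrix's documented v1 API; the attach and detach action names and parameters shown here are hypothetical placeholders, so consult the Aviatrix API reference for the exact calls.

```python
# Sketch: a service portal backend driving the Aviatrix controller REST API.
# The /v1/api endpoint and login/CID pattern follow the documented v1 API;
# the attach/detach action names below are PLACEHOLDERS only.
import requests

CONTROLLER = "https://203.0.113.10/v1/api"  # hypothetical controller address

def api(action: str, **params) -> dict:
    """POST one API action to the controller and return the parsed reply."""
    resp = requests.post(CONTROLLER, data={"action": action, **params},
                         verify=False, timeout=30)  # self-signed cert is common
    resp.raise_for_status()
    body = resp.json()
    if not body.get("return"):
        raise RuntimeError(f"{action} failed: {body.get('reason')}")
    return body

# Authenticate once; subsequent calls carry the returned session CID.
cid = api("login", username="admin", password="********")["CID"]

# A client subscribes: attach their access VPC to the hub (wizard steps 4-6).
api("attach_access_vpc",   # placeholder action name
    CID=cid, vpc_name="client1-access-vpc", hub_name="access-hub-vpc")

# A client unsubscribes: detach the access VPC (wizard step 7).
api("detach_access_vpc",   # placeholder action name
    CID=cid, vpc_name="client1-access-vpc")
```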

Design Considerations

IP Subnets and VPC Address Space

It is recommended to use a /16 from the last few blocks of 10.0.0.0/8 for the access network. For instance, 10.254.0.0/16 is a good choice if it does not overlap with your clients’ address spaces. Each VPC takes a /24 block, which splits into 4 x /26 subnets: two public subnets and two private subnets. Each pair of subnets (one public, one private) resides in one AZ. The private subnet can be used to land the client access from their cloud.

Gateway HA will launch two gateways in the two public subnets, one per AZ. We can use 10.254.0.0/24 for the access hub VPC; access VPC 1 is then assigned 10.254.1.0/24, and so on. This recommended allocation supports up to 254 access VPCs and one access hub VPC. More /16 blocks can be added to support a larger number of access VPCs.
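
The arithmetic behind this scheme is easy to check. The short sketch below, using the example 10.254.0.0/16 block from above, derives the per-VPC /24 blocks and the four /26 subnets each one splits into.

```python
# Derive the recommended addressing plan: one /24 per VPC out of
# 10.254.0.0/16, each /24 split into 4 x /26 (two public + two private,
# one public/private pair per AZ).
from ipaddress import ip_network

base = ip_network("10.254.0.0/16")
blocks = list(base.subnets(new_prefix=24))      # 256 x /24 blocks

hub_cidr = blocks[0]        # 10.254.0.0/24 -> access hub VPC
access_cidrs = blocks[1:]   # 10.254.1.0/24 -> access VPC 1, and so on

def vpc_subnets(cidr):
    """Split a VPC /24 into two public and two private /26 subnets."""
    pub_a, pub_b, priv_a, priv_b = cidr.subnets(new_prefix=26)
    return {"public-az-a": pub_a,  "public-az-b": pub_b,
            "private-az-a": priv_a, "private-az-b": priv_b}

print("hub VPC:     ", hub_cidr, vpc_subnets(hub_cidr))
print("access VPC 1:", access_cidrs[0], vpc_subnets(access_cidrs[0]))
```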

The following diagram illustrates an example of one access hub VPC and five access VPCs with their IP address allocation. The private subnets are not shown.

Access VPC

The client can use the following networking technologies to connect from their cloud to this VPC.

  • AWS peering in the same region or a different region
  • Aviatrix encrypted peering. In addition to providing encryption, this can connect a client cloud in Azure or Google Cloud. If the client’s cloud IP blocks overlap with the access VPC subnets, Aviatrix peering can perform address mapping.
  • User VPN. The gateway can be enabled to support SSL VPN access if the client wants to access the services from a VPN client.

The gateway is launched with NAT enabled, so overlap in the client clouds’ IP address spaces won’t be an issue; for instance, all peered client clouds can use 10.0.0.0/16.
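
To see why overlapping client blocks do not collide, consider a 1:1 address mapping of the kind a mapped NAT performs. The sketch below is purely conceptual, with made-up virtual prefixes; it is not how the Aviatrix gateway is implemented.

```python
# Conceptual sketch of 1:1 (mapped) NAT: each client's overlapping address
# space is translated into its own non-overlapping virtual prefix,
# preserving the host offset. Illustration only.
from ipaddress import ip_address, ip_network

def map_address(addr: str, real_net: str, virtual_net: str):
    """Translate addr from real_net into virtual_net, preserving its offset."""
    real, virt = ip_network(real_net), ip_network(virtual_net)
    offset = int(ip_address(addr)) - int(real.network_address)
    return ip_address(int(virt.network_address) + offset)

# Two clients both use 10.0.0.0/16 internally, yet each is seen on-prem
# under its own (made-up) virtual prefix, so the addresses never collide.
print(map_address("10.0.1.5", "10.0.0.0/16", "100.64.0.0/16"))  # 100.64.1.5
print(map_address("10.0.1.5", "10.0.0.0/16", "100.65.0.0/16"))  # 100.65.1.5
```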

The following security functions can be enabled on the gateway in the access VPC to build the first line of defense for your on-prem private network.

  1. L4 access control based on destination CIDR. For instance, a client can be allowed to reach only the CIDRs hosting its subscribed services.
  2. Egress FQDN filtering can be set up so the client can only access services resolved from whitelisted domain names (see the sketch below).
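
The following sketch illustrates the whitelist-matching idea behind egress FQDN filtering. It is a conceptual model with made-up domain names, not the gateway’s actual implementation.

```python
# Conceptual model of egress FQDN whitelisting: an outbound request is
# allowed only if its hostname matches a whitelisted domain, either an
# exact name or a '*.' wildcard. Illustration only.
from fnmatch import fnmatch

WHITELIST = ["service1.example.com", "*.api.example.com"]  # made-up domains

def egress_allowed(hostname: str) -> bool:
    """Allow the outbound request only if its FQDN matches the whitelist."""
    return any(fnmatch(hostname, pattern) for pattern in WHITELIST)

assert egress_allowed("service1.example.com")     # exact match -> allowed
assert egress_allowed("eu.api.example.com")       # wildcard match -> allowed
assert not egress_allowed("malware.example.net")  # no match -> blocked
```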

The following diagram shows an example of three client clouds with overlapping IP blocks; one client cloud is in Azure. Each client subscribes to a different service hosted in a different on-prem subnet, and the access VPC gateway allows each client to reach only its subscribed service. The gateway NAT handles the clients’ overlapping IP addressing. A service proxy instance can also be set up in the access VPC private subnet to hide the real address of the service endpoint, which simplifies on-prem address planning for the service endpoints.

Access Hub VPC

The hub gateway peers with the VGW via eBGP, which advertises each access VPC’s CIDR to on-prem when a new access VPC is added for a client. Service endpoint prefixes are learned from the on-prem router via eBGP, so when a new service endpoint is added, it is learned automatically. It is strongly recommended not to host workload instances in this VPC other than the hub gateway instances; this keeps the VPC route table clean and manageable.

Peering HA

Two IPsec tunnels are built between the pair of gateways in the access hub VPC and the access VPC, operating in active/backup mode. Traffic traverses the active tunnel until that tunnel or its endpoint gateway fails; when such a failure is detected, failover moves the traffic to the backup tunnel. The failover does not impact other traffic paths in the access network. If a hub gateway fails, all of its traffic is moved to the surviving hub gateway.
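
The mechanism can be pictured with a simplified failover loop, as sketched below. This is a conceptual illustration of the active/backup behavior described above, not Aviatrix’s gateway code; the health probe is a placeholder.

```python
# Simplified picture of active/backup tunnel failover: traffic follows the
# active tunnel; when its health probe fails, the route swaps to the backup.
# Conceptual sketch only.
import time

def tunnel_healthy(tunnel: str) -> bool:
    """Placeholder probe; a real gateway would use IPsec keepalives/DPD."""
    return True  # stub

def monitor(active: str, backup: str, poll_secs: float = 1.0) -> None:
    """Keep traffic on the active tunnel; swap to the backup on failure."""
    while True:
        if not tunnel_healthy(active):
            # Fail over: only this peering's traffic moves; other access
            # VPCs' paths through the hub are unaffected.
            active, backup = backup, active
            print(f"failover: traffic moved to tunnel {active}")
        time.sleep(poll_secs)
```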

On-Prem BGP Router

It should advertise only the specific routes that host service endpoints; no default route (0.0.0.0/0) should be advertised to the hub VPC. The BGP router configuration is typically done once during the startup phase. The router learns the access VPC routes via eBGP, so connectivity between on-prem and each access VPC is built automatically.

The Direct Connect Backup

A separate VPN connection from on-prem can be added to the VGW as a backup link, with a second BGP peering between the VGW and the on-prem BGP router. The Direct Connect path remains the best path per BGP path selection criteria. This path protection enhances the uptime of the connectivity between the access cloud and on-prem, and the additional backup link does not impact the access cloud networking design.

Aviatrix Solution Advantages

The integrated solution has the following main advantages over options from other vendors.

  • The integrated gateway can support critical features required in the access VPC:
    • Simple on-prem routing for the cloud with NAT
    • CIDR mapping for overlapping IP addresses with client clouds
    • Encrypted peering between VPCs across regions, accounts, and cloud providers
    • Access protection by egress L4 firewall and FQDN filtering
    • SSL VPN
  • The integrated gateway can support critical features required in the access hub VPC:
    • eBGP peering for route propagation
    • Access VPC traffic isolation
    • High availability design

The solution enables the cloud ops team to quickly build and change the connectivity infrastructure to support the fast roll-out of revenue-generating services.
