How to manage outbound AWS IP addresses
When making API requests from your AWS or Azure environment to a partner or customer, the receiving server may sit behind a firewall with a whitelist of allowed IP addresses. In that case, it is often better to provide a small set of known source IP addresses, which makes life easier for the IT team on the receiving end.
If the requesting servers are spread across many VPCs or many regions, controlling which IP a request comes from is difficult or even impossible. And maintaining a list that changes every time a new server is added, or a new VPC with additional servers is brought online, is error-prone and time-consuming for both sides.
The goal: provide your partner with a single static IP, or a small set of static IPs, from which all requests will originate.
Out of the box, you will most likely be required to provide a new IP address for every VPC or VNet. See the diagram below for a common first solution using only AWS components.
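The scaling problem with the per-VPC model can be sketched in a few lines. Every name and address below is made up for illustration; in a real account these would be NAT gateway Elastic IPs:

```python
# Sketch of the "out of the box" model: each VPC gets its own NAT
# gateway and Elastic IP, so the partner's allow list grows linearly
# with the number of VPCs. All names and addresses are hypothetical.

vpcs = ["workers-1", "workers-2", "workers-3"]
nat_eips = {vpc: f"52.0.0.{10 + i}" for i, vpc in enumerate(vpcs)}

# The list you must hand to every partner:
partner_allow_list = sorted(nat_eips.values())
print(len(partner_allow_list))  # one public IP per VPC -> 3

# Adding a VPC means updating every partner's firewall again.
nat_eips["workers-4"] = "52.0.0.13"
print(len(set(nat_eips.values())))  # now 4 entries to distribute
```

Every entry added here is a change request sent to, and verified with, every partner.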
This model works fine for a small number of VPCs in a static environment. However, if you change your environment, say by moving VPCs to a new region or re-architecting, you must go to each partner and provide an updated list of source IP addresses. Even small changes, like adding a new VPC, mean notifying your partners. And notifying them is only the first step: you must then validate that each partner has made the desired changes.
A better approach is to put a software-defined solution in front of those worker VPCs to provide a “central NAT” service. This diagram shows an ideal scenario:
Luckily, Aviatrix can replace that black box with battle-tested software that enables highly secure connections, fault tolerant design, and in-depth troubleshooting of your cloud networking problems. All of this is included, while providing you and your partners a consistent set of public IP addresses.
For simplicity, all internet bound traffic is routed through the egress VPC via the Aviatrix Gateway GWT. With the current design, tracing a packet from “workers 1” VPC involves:
- Traffic leaves an EC2 instance in “workers 1” VPC (e.g., 192.168.15.40) destined for DST_IP.
- In the “workers 1” VPC, the 0.0.0.0/0 route points to the local gateway GW1, whose tunnel leads to GWT in the egress VPC.
- Gateway GW1 forwards the traffic across the tunnel from GW1 to GWT.
- Traffic leaves the GWT gateway with a source IP of the associated Elastic IP (IP1) and continues out to the internet via the IGW.
- When the packet arrives at the partner network firewall, the allow rule matches because the traffic originates from IP1.
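The hop-by-hop trace above can be sketched as a small simulation. The gateway names (GW1, GWT) come from the diagram; the Elastic IP and destination address are hypothetical placeholders, not values from a real deployment:

```python
# Minimal simulation of the packet path described above.
ELASTIC_IP = "52.0.0.10"  # hypothetical Elastic IP (IP1) attached to GWT

# Route table in the "workers 1" VPC: the default route points at GW1.
WORKERS1_ROUTES = {"0.0.0.0/0": "GW1"}

def trace_packet(src_ip: str, dst_ip: str) -> dict:
    """Follow a packet from a workers-1 instance out to the internet."""
    hops = []
    # 1. Instance emits the packet; the VPC route table sends it to GW1.
    hops.append(WORKERS1_ROUTES["0.0.0.0/0"])
    # 2. GW1 forwards the packet across the tunnel to GWT.
    hops.append("GWT")
    # 3. GWT applies source NAT and the packet exits via the IGW with
    #    its source rewritten to the Elastic IP (IP1).
    hops.append("IGW")
    return {"src": ELASTIC_IP, "dst": dst_ip, "hops": hops}

packet = trace_packet("192.168.15.40", "203.0.113.5")
print(packet["src"])  # the partner only ever sees the Elastic IP
```

Whichever worker VPC originates the traffic, the partner observes the same source address.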
This represents just one of many possible design patterns. You may have multiple egress VPCs or you may reduce your firewall policy from the 0.0.0.0/0 rule to specific /32 addresses representing just the IPs of your partner(s).
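On the partner side, the allow list collapses to a handful of /32 entries. A sketch of that check using Python's `ipaddress` module (the addresses are illustrative, not real gateway IPs):

```python
import ipaddress

# Hypothetical allow list on the partner's firewall: only the Elastic
# IPs of your egress gateways, expressed as /32 networks.
ALLOWED = [ipaddress.ip_network("52.0.0.10/32"),
           ipaddress.ip_network("52.0.0.11/32")]

def is_allowed(src_ip: str) -> bool:
    """Return True if src_ip matches any /32 in the allow list."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("52.0.0.10"))    # traffic from IP1 -> True
print(is_allowed("198.51.100.7")) # anything else -> False
```

The same idea applies in reverse if you tighten your own egress policy from 0.0.0.0/0 down to the /32 addresses of your partners.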
For step-by-step instructions on how to implement this using Aviatrix, check out this guide on our docs site.