Let's say I have a service running clustered on N EC2 instances, with Amazon EKS and an Elastic Load Balancer on top of that. There is a service not managed by me, running outside of AWS, where I have an account that my services in AWS use via HTTP requests. When I created the account with this external service, I was asked for the IP address (or range) of the services that would be calling it. There is my problem.

Currently, let's say I have 3 EC2 instances with Elastic IP addresses (which are static), so I can just give those three IP addresses to the external service provider and everything works just fine. But in the future I might add more EC2 instances to scale out, and whitelisting new IP addresses with the external service is a pain. In some cases those whitelist change requests can take a week to be approved by the external service provider, and I don't have that time. Even further, accessing this external service is the only reason I go for static IPs on the EC2 instances, so if possible I would ditch the Elastic IPs.
So my question is: how can I set things up so that when any instance in my cluster makes a request outside of AWS, the external service provider always sees the same IP address for me as a service consumer?
Disclaimer: I don't actually have that setup running yet; I am in the middle of researching whether it would be a feasible option. So forgive me if my question sounds dumb for some obvious reason.
Something like network address translation (NAT) can solve your problem: a NAT Gateway with an Elastic IP, with all outbound traffic rerouted through it.
The NAT Gateway provided by AWS as a managed service can be expensive if your data traffic is big, so you can run your own NAT instance instead, but that is a bit more complicated to set up and maintain.
The main differences between a NAT Gateway and a NAT instance are listed here.
The example below assumes that the EC2 instances are in a private subnet, but that doesn't have to be the case.
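For illustration, a minimal boto3 sketch of that setup might look like the following; the subnet and route table IDs are placeholders (the public subnet that hosts the gateway, and the private route table used by the instances):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP for the NAT Gateway; this is the address you
# would hand to the external provider for whitelisting.
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT Gateway in a public subnet (placeholder ID).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",     # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available before adding routes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnets' Internet-bound traffic through the gateway
# (placeholder route table ID).
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```

With this in place, every instance routed through the gateway appears to the external service as that single Elastic IP, no matter how many instances you add.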
I believe you need a proxy server in your environment with an Elastic IP. Basically, you can use something like NGINX or Apache and configure it with an Elastic IP: have the web server expose an endpoint to your EC2 instances and proxy-pass requests to the external endpoint.
For high availability, you can run a proxy in each Availability Zone, ideally managed by an Auto Scaling group that keeps at least one instance alive in each AZ. With this approach, you will need to make sure the public IP is assigned from your Elastic IP pool.
Generally, hostnames are a better alternative to IP addresses for avoiding such situations, as they provide a static endpoint no matter what IP sits behind them. I'm not sure whether you can explore that path with your external API provider; it can be challenging when static IP-based routing/whitelisting rules are in place.
This is what a NAT Gateway is for. NAT Gateways have an Elastic IP attached and allow the instances inside a VPC to make outbound connections, transparently, using the gateway's static address.
Related
I have provisioned an AWS API Gateway and created a Lambda function to connect to an external REST API. The API Gateway and Lambda are not in a VPC, so the egress IP address is random. The challenge I have is that the external REST API is behind a firewall, which requires the IP address or subnet of the Lambda to be whitelisted.
I have looked at the AWS IP address ranges page (below); however, there is no explicit mention of either API Gateway or Lambda.
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html#filter-json-file
Has anyone come across this before and found a resolution to it? For the purposes of this solution I cannot put the API Gateway and Lambdas in a VPC.
Any help would be greatly appreciated!
API Gateway seems to be irrelevant to this discussion. If I understand your question, you're trying to make API requests from a Lambda function to a remote API server and you want those requests to originate from a known IP address so that you can whitelist that IP at the remote server.
First thing I would say is don't use IP whitelisting; use authenticated API requests instead.
If that's not possible, then use a VPC with an Internet Gateway (IGW). Create a NAT and an Elastic IP, launch the Lambda into a private subnet of that VPC, and route the subnet's non-local traffic to the NAT. Then whitelist the NAT's Elastic IP on the remote API server. Examples here and here.
I know that you said you "cannot put [...] Lambdas in a VPC", but if you don't then you have no control over the originating IP address.
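For reference, a rough boto3 sketch of attaching an existing function to such a VPC; the function name, subnet IDs, and security group ID are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to private subnets whose route table sends
# 0.0.0.0/0 to a NAT Gateway with an Elastic IP. Outbound calls from
# the function will then originate from that Elastic IP.
lambda_client.update_function_configuration(
    FunctionName="my-api-caller",                                   # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],  # placeholder private subnets
        "SecurityGroupIds": ["sg-0123456789abcdef0"],               # placeholder
    },
)
```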
It is frustrating that the only way to ensure a Lambda function uses a static IP without a hack is to put the Lambda inside a VPC and create a NAT with an Elastic IP, as many other answers nicely explain.
However, NATs cost around $40 per month in the regions I am familiar with, even with minimal traffic. This may be cost-prohibitive for certain use cases, such as when you need multiple dev/test/staging environments.
One possible workaround (which should be used with caution) is to create a micro EC2 instance with an Elastic IP (which gives you the static address), plus your choice of proxy/routing, so you can make HTTP calls by tunnelling from the Lambda function through the EC2 instance (e.g. SSH from the Lambda function into the EC2 instance, then curl from the EC2 instance to the endpoint protected by the allowlist).
This adds a few extra hoops to jump through and could introduce security vulnerabilities that need to be mitigated (e.g. beware of storing SSH keys or passwords inside a Lambda function, and keep Security Groups tight), but I wanted to post it as a possible workaround for any devs who need a cost-effective way to reach an endpoint that enforces allowlist rules.
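A hedged sketch of that tunnelling idea using paramiko (which would have to be packaged with the function); the host, user, and key path are placeholders, and the key should really come from Secrets Manager or SSM rather than being bundled with the code:

```python
import paramiko

PROXY_HOST = "203.0.113.10"     # placeholder: Elastic IP of the micro EC2 instance
PROXY_USER = "ec2-user"
KEY_PATH = "/tmp/proxy_key.pem"  # placeholder: fetch from Secrets Manager in practice

def call_allowlisted_api(url: str) -> str:
    """Run curl on the proxy instance so the request leaves from its static IP."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(PROXY_HOST, username=PROXY_USER, key_filename=KEY_PATH)
    try:
        _, stdout, _ = client.exec_command(f"curl -s {url}")
        return stdout.read().decode()
    finally:
        client.close()
```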
I use the Google Places API, and I need to put restrictions on my API keys, more specifically an IP restriction, because the calls come from a web server.
I am using AWS Elastic Beanstalk with an environment that has a Load Balancer, Auto Scaling, and a VPC, so the IP address changes every time a new EC2 server is created.
My question is:
How do I put a static (Elastic?) IP on my environment?
I have found many similar posts like this one (https://stackoverflow.com/a/49200693/3954420) or this one (https://medium.com/#obezuk/how-to-use-elastic-beanstalk-with-an-ip-whitelisted-api-69a6f8b5f844) where I have to create a NAT Gateway.
But at the end it requires typing a target IP address, and unfortunately Google's API server IPs are not static.
How can I use a NAT Gateway, or is there another way?
Thanks
We are trying to use Elastic Load Balancing in AWS with auto-scaling so we can scale in and out as needed.
Our application consists of several smaller applications; they are all on the same subnet and in the same VPC.
We want to put our ELB between one of our apps and the rest.
The problem is that we want the load balancer to work both internally, between the different apps using an API, and as an internet-facing load balancer, because our application still has some usage that should happen externally and not through the API.
I've read this question but I could not figure out exactly how to do it from there; it does not really specify any steps, or maybe I didn't understand it very well.
Can we have an ELB that is both internal and external?
For the record, I can only access this network through a VPN.
It is not possible for an Elastic Load Balancer to have both a public IP address and a private IP address. It is one or the other, but not both.
If you want your ELB to have a private IP address, then it cannot listen to requests from the internet.
If your ELB is public-facing, you can still call it from your internal EC2 instances using the public endpoint. However, there are some caveats that go with this:
The traffic will exit your VPC and re-enter it. It will not be the direct instance-to-ELB connection that a private IP address would afford you.
You also cannot reference security groups in your security group rules, since the traffic no longer appears to originate from inside the VPC.
There are 3 alternative scenarios:
Duplicate the ELB and EC2 instances, one dedicated to private traffic, one dedicated to public traffic.
Have 2 ELBs (one public, one private) that share the same back-end EC2 instances.
Don't use an ELB for either private or public traffic, and instead use an Elastic IP address (if public) or a private IP address (if private) on a single EC2 instance.
I disagree with @MattHouser's answer. Actually, in a VPC, your ELB has all of its internal interfaces listed under Network Interfaces, each with a public IP AND a primary private IP.
I've tested the private IP of my public ELB and it works exactly like the external one.
The problem is that these IPs are not listed anywhere in an up-to-date manner, the way they are behind a private ELB's DNS name, so you have to track them yourself.
I've made a little POC script for this, using an internal Route 53 hosted zone: https://gist.github.com/darylounet/3c6253c60b7dc52da927b80a0ae8d428
I made a Lambda function that checks which private IPs are assigned to the load balancer and updates a Route 53 record when they change: https://github.com/Bramzor/lambda-sync-private-elb-ips
Using this function, you can easily use the ELB for private traffic. I personally use it to connect multiple regions to each other over inter-region VPC peering without needing an additional ELB.
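Without reproducing the linked code, the general idea can be sketched roughly like this with boto3; the ELB description filter, hosted zone ID, and record name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

ELB_DESCRIPTION = "ELB app/my-public-alb/*"          # placeholder: as shown on the ELB's ENIs
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"                # placeholder private hosted zone
RECORD_NAME = "internal-api.example.internal."       # placeholder record

def handler(event, context):
    # The load balancer's nodes appear as ENIs whose description
    # references the ELB name; collect their private IPs.
    enis = ec2.describe_network_interfaces(
        Filters=[{"Name": "description", "Values": [ELB_DESCRIPTION]}]
    )["NetworkInterfaces"]
    ips = sorted(eni["PrivateIpAddress"] for eni in enis)

    # Upsert an A record in the private hosted zone pointing at those IPs.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip} for ip in ips],
                },
            }]
        },
    )
```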
The standard AWS solution would be to have an extra internal ELB for this.
It looks like @DaryL has an interesting workaround, but it could fail for up to 5 minutes if the DNS record has not been updated yet. Also, there is no way to have a separate security group for the internal IPs, since they share the ENIs and security group of the ELB's external IP.
I faced the same challenge and I can confirm the best solution so far is to have two different ALBs, one internet-facing and the other internal. You can attach both ALBs to a single Auto Scaling group so they reach the same cluster.
Make sure the networking options (subnets, security groups) of both ALBs are the same so that both can reach the same cluster instances. Auto Scaling and Launch Configurations work seamlessly with both ALBs attached to the same Auto Scaling group. This also works with ALBs created by Elastic Beanstalk environments.
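If you wire this up yourself rather than through Elastic Beanstalk, attaching both ALBs' target groups to one Auto Scaling group is a single call; the group name and target group ARNs below are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# One target group belongs to the internet-facing ALB, the other to the
# internal ALB. Attaching both to the same Auto Scaling group lets one
# cluster serve public and private traffic.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="my-app-asg",   # placeholder
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/public-tg/0123456789abcdef",
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/internal-tg/fedcba9876543210",
    ],
)
```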
I have several EC2 instances that need a whitelisted IP address to talk to an external service. Is it possible to have them all route through the same Elastic IP when they make external calls, using strictly AWS features (not another nginx reverse proxy server)?
I need them to all go through 1 IP so that I can support auto scaling.
Update 12/18/2015:
There is now a managed NAT solution that can be used to solve this problem.
See: https://aws.amazon.com/about-aws/whats-new/2015/12/introducing-amazon-vpc-nat-gateway-a-managed-nat-service/
Original Answer:
You basically need a NAT host, and you need to point all traffic destined for outside the VPC to the NAT instance.
Give the NAT instance the whitelisted IP.
Here are the docs:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
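A minimal boto3 sketch of the NAT-instance wiring, assuming placeholder instance and route table IDs; the important detail is disabling the source/destination check on the NAT instance:

```python
import boto3

ec2 = boto3.client("ec2")

NAT_INSTANCE_ID = "i-0123456789abcdef0"        # placeholder: the NAT instance
PRIVATE_ROUTE_TABLE = "rtb-0123456789abcdef0"  # placeholder: private subnets' route table

# A NAT instance forwards traffic on behalf of other hosts, so the
# default source/destination check has to be disabled on it.
ec2.modify_instance_attribute(
    InstanceId=NAT_INSTANCE_ID,
    SourceDestCheck={"Value": False},
)

# Send all non-VPC traffic from the private subnets to the NAT instance;
# its Elastic IP is the address the external service will see.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=NAT_INSTANCE_ID,
)
```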
I need the static IP to allow access to a firewalled network not on the AWS network.
Is it possible to get a static IP for a load-balanced app using Elastic Beanstalk? I'm following the AWS docs on using Route 53 to host my app with a domain name, but from what I've read, this does not ensure a static IP because it essentially uses a CNAME, allowing the IP behind the scenes to change. Is that the right understanding? Is it possible at all?
This post helped me get a static IP for outgoing requests by using a NAT Gateway, and routing specific requests through it.
I needed this static IP in order to be whitelisted from an external API provider.
I found this approach much easier than the one provided by AWS, without the need to create a new VPC and private and public subnets.
Basically, what I did was:
Create a new subnet to host the NAT Gateway.
Create the NAT Gateway in the above subnet, and assign a new Elastic IP. This one will be our outgoing IP for hitting external APIs.
Create a route table for the NAT subnet. All outbound traffic (0.0.0.0/0) should be routed through the NAT Gateway. Assign the created subnet to use the new route table.
Modify the main route table (the one that handles all our EC2 instances' requests) and add the IP(s) of the external API, setting their target to the NAT Gateway (see the sketch below).
This way we can route any request to the external API IPs through the NAT Gateway. All other requests are routed through the default Internet Gateway.
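A rough boto3 sketch of step 4, using placeholder IDs and an example CIDR standing in for the external API's published range:

```python
import boto3

ec2 = boto3.client("ec2")

MAIN_ROUTE_TABLE = "rtb-0aaaaaaaaaaaaaaaa"   # placeholder: main route table
NAT_GATEWAY_ID = "nat-0bbbbbbbbbbbbbbbb"     # placeholder: NAT Gateway from step 2
EXTERNAL_API_CIDRS = ["203.0.113.0/24"]      # placeholder: the provider's published IP range(s)

# Only traffic destined for the external API is sent through the NAT
# Gateway (and therefore leaves with its Elastic IP); everything else
# keeps using the default Internet Gateway route.
for cidr in EXTERNAL_API_CIDRS:
    ec2.create_route(
        RouteTableId=MAIN_ROUTE_TABLE,
        DestinationCidrBlock=cidr,
        NatGatewayId=NAT_GATEWAY_ID,
    )
```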
As the post says, this is not a Multi-AZ solution, so if the AZ that holds our NAT Gateway fails, we may lose connectivity to the external API.
Update:
See @TimObezuk's comment to make this a Multi-AZ solution.
Deploy your Beanstalk environment in a VPC; with the right configuration, a static IP for outbound traffic is easy.
In this setup, your instances all relay their outbound traffic through a single machine to which you can assign an Elastic IP address. All of the inside-originated, Internet-bound traffic from all of the instances behind it will appear, from the other network's perspective, to be using that single Elastic IP.
The RDS portion of the following may be irrelevant to your needs but the principles are all the same.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo-vpc-rds.html