Static IP for outbound API calls - amazon-web-services

A new API service we use requires that we give them a list of all the IP addresses our calls will come from; any call made from an IP address not on the list will fail.
This question has been asked here before, but I'm wondering whether, in 2019, there is any simpler/easier/lower-cost solution.
Our Setup
Elastic Beanstalk, which currently scales our web application to anywhere from 5 to 50 EC2 instances based on traffic
An Application Load Balancer
We also have a worker tier, which would be available if that might be helpful
Typically these API calls would come from any of our web-tier EC2 instances, since the calls are triggered by user interactions. We can of course set something different up, e.g. have the worker tier make the calls
Solutions I've Found
Give each EC2 instance an Elastic (static) IP address. This is not a great solution for us: as we (hopefully) continue to scale, the number of IP addresses needed will keep growing. {ref}
Set up two NAT instances (one is not sufficient, since it would be a single point of failure). I'm hoping there is something simpler and lower cost than this option. {ref} {ref}
Create new EC2 instances and put them behind a Network Load Balancer. Again, complex and costly. {ref}
Are there any new, easier, less costly solutions? I have never used AWS Lambda before; maybe it is possible to run Lambda functions all from one IP address? I don't have many ideas beyond that at this point. Thanks for your time.

A NAT is the best solution, and shouldn't cost you much more than a web server.
The simplest way to use a NAT is the NAT Gateway. Pricing depends on region, but it's around $0.05/hour, which is a little more than the price of a t3.medium EC2 instance. You're also charged a per-GB rate for data, which can add up quickly. On the positive side, Amazon manages the infrastructure for you, including patches and high-availability.
A NAT Instance is an EC2 instance running a specially-configured AMI. You could probably get away with running this on a t3.micro instance, at $0.01 per hour, which is probably much less than any of your webservers. You will be responsible for applying patches and waking up in the middle of the night if anything goes wrong.
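For a rough feel for the tradeoff, here is a back-of-the-envelope comparison. The prices are illustrative us-east-1 figures (they vary by region and change over time), so check current AWS pricing before deciding:

```python
# Rough monthly cost comparison: NAT Gateway vs. NAT Instance.
# Illustrative us-east-1 prices; verify against current AWS pricing.
HOURS_PER_MONTH = 730

nat_gateway_hourly = 0.045      # $/hour
nat_gateway_per_gb = 0.045      # $/GB processed by the gateway
nat_instance_hourly = 0.0104    # t3.micro on-demand $/hour

def monthly_cost_gateway(gb_processed):
    return HOURS_PER_MONTH * nat_gateway_hourly + gb_processed * nat_gateway_per_gb

def monthly_cost_instance():
    # No separate per-GB processing charge for your own instance
    # (normal EC2 data-transfer rates still apply to both options).
    return HOURS_PER_MONTH * nat_instance_hourly

for gb in (10, 100, 1000):
    print(f"{gb:>5} GB: gateway ${monthly_cost_gateway(gb):7.2f}"
          f"  instance ${monthly_cost_instance():6.2f}")
```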
You can probably get away with a single NAT of either type. You will pay for cross-AZ traffic by doing this ($0.01/GB), so it will be a false economy if you move a lot of data across the NAT. It's a toss-up whether you'll get higher availability from two NATs, because you can only reference one at a time in your routing tables. So if one goes down you'll have to update the routing tables to point at the other (sketched below), which will probably take about as much time as bringing up a new instance.
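If you do run a single NAT and want to fail over by hand (or from a health-check script), the routing-table update itself is one API call. A minimal boto3 sketch, where the route table and standby NAT IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Repoint the private subnet's default route at the standby NAT.
# Placeholder IDs -- substitute your own route table and NAT gateway
# (use InstanceId=... instead if you run NAT instances).
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",
)
```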
You can't use Lambda, because you need a permanently assigned IP address and you can't control the outbound address of a Lambda function. You could write your own proxy server, running on EC2, but the costs for that are the same as for a NAT instance.

Here is prescriptive guidance from AWS: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html
"This pattern describes how to generate a static outbound IP address in the Amazon Web Services (AWS) Cloud by using a serverless architecture..."
Essentially, you have an AWS Lambda function that uses an Elastic IP address as its outbound IP address. Following the guidance, you create "a Lambda function and a virtual private cloud (VPC) that routes outbound traffic through an internet gateway with a static IP address. To use the static IP address, you attach the Lambda function to the VPC and its subnets."
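Once the function is attached to the VPC, you can verify the plumbing by having it report its own outbound address. A minimal sketch using AWS's checkip service (any echo-your-IP endpoint would do):

```python
import urllib.request

def lambda_handler(event, context):
    # Ask an external service which source IP our traffic arrives from.
    # If the VPC/NAT/EIP plumbing is correct, this reports the Elastic IP.
    with urllib.request.urlopen("https://checkip.amazonaws.com", timeout=5) as resp:
        outbound_ip = resp.read().decode().strip()
    print(f"Outbound IP: {outbound_ip}")
    return {"outbound_ip": outbound_ip}
```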

Related

Coordinating multiple VMs in a VPC

I'm using a CloudFormation stack that deploys 3 EC2 VMs. Each needs to be configured to be able to discover the other 2, either via IP or hostname, doesn't matter.
Amazon's private internal DNS seems very unhelpful here, because it's based on the IP address, which can't be known at provisioning time. As a result, I can't configure the nodes with only the information I have when the CloudFormation stack is created.
As far as I can tell, I have a couple of options, all of which seem more complex than necessary - are there other options?
(1) Use Route53: set up a private DNS hosted zone, make an entry for each of the VMs attached to its network interface, and then, by naming the entries, know ahead of time the private DNS names I've assigned them.
(2) Stand up yet another service to have the 3 VMs "phone home" once initialized, which could then report back to them who is ready.
(3) Come up with some other VM-based shell magic, and do something goofy like using nmap to scan the local subnet for machines alive on a certain port.
On other clouds I've used (like GCP), when you provision a VM it gets an internal DNS name based on its resource name in the deploy template, which makes this kind of problem trivial. Boy, I wish I had that.
What's the best approach here? (1) seems straightforward, but requires people using my stack to have extra permissions they don't really need. (2) means extra resource usage that's essentially wasted. (3) seems... well, goofy.
Use Route53, set up a private DNS hosted zone, make an entry for each of the VMs which is attached to their network interface, and then by naming the entries
This is the best solution, but there's a simpler implementation.
Give each of your machines a "resource name".
In the CloudFormation stack, create an AWS::Route53::RecordSet resource that associates a hostname based on that "resource name" with the EC2 instance via its logical ID, as sketched below.
Inside your application, use the resource-name-based hostname to reach the other instance(s).
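A rough sketch of what that RecordSet could look like, written here as a CloudFormation fragment assembled in Python; the logical ID Node1 and the zone internal.example.com are placeholder names:

```python
import json

# Minimal CloudFormation fragment: a private-zone A record whose value is the
# instance's private IP, resolved at stack-creation time via Fn::GetAtt.
# "Node1" and "internal.example.com" are placeholders.
record_set = {
    "Node1Record": {
        "Type": "AWS::Route53::RecordSet",
        "Properties": {
            "HostedZoneName": "internal.example.com.",
            "Name": "node1.internal.example.com.",
            "Type": "A",
            "TTL": "60",
            "ResourceRecords": [{"Fn::GetAtt": ["Node1", "PrivateIp"]}],
        },
    }
}

print(json.dumps({"Resources": record_set}, indent=2))
```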
An alternative may be to use an Application Load Balancer, with your application instances in separate target groups. The various EC2 instances then send all traffic through the ALB, so you only have one reference that you need to propagate (and it can be stored in the UserData for the EC2 instance). But that's a lot more work.
This assumes that you already have the private hosted zone set up.
I think what you are talking about is known as service discovery.
If you deploy the EC2 instances in the same subnet of the same VPC, with a security group that allows the port they want to communicate over, they will be "discoverable" to each other.
You can then take this a step further: if autoscaling is enabled on the group and machines die and respawn, they can write their IPs into a registry (e.g., DynamoDB) so that other machines know where to find them.
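A minimal sketch of that registry idea with boto3; the table name and key schema are assumptions, and an instance would run something like this at boot (e.g., from user data):

```python
import urllib.request

import boto3

METADATA_URL = "http://169.254.169.254/latest/meta-data/"

def metadata(path):
    # IMDSv1-style call for brevity; production setups should use IMDSv2 tokens.
    with urllib.request.urlopen(METADATA_URL + path, timeout=2) as resp:
        return resp.read().decode()

# Table name "service-registry" and its key schema are assumptions.
table = boto3.resource("dynamodb").Table("service-registry")

# Record this instance's identity and private IP so peers can find it.
table.put_item(Item={
    "service": "my-app",                      # assumed partition key
    "instance_id": metadata("instance-id"),   # assumed sort key
    "private_ip": metadata("local-ipv4"),
})
```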

Cannot find AWS ElastiCache Configuration Endpoint IP Address

We would like to use a NAT to connect locally to the ElastiCache configuration endpoint (as described in http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Access.Outside.html), but we couldn't find its IP address, and NAT rules cannot use DNS names.
We did manage to map each node's IP address using NAT, but we would like to connect to the whole cluster at once.
I found a related question on Ask Ubuntu, but no answers there either:
https://askubuntu.com/questions/779961/use-endpoint-instead-of-ip-in-iptables
I would recommend setting up a local Redis server for development and unit testing.
Create separate ElastiCache clusters for the test, staging, and production environments.
The reasons: it's a lot of work to get connections from local machines working, and that work is in any case not recommended for a production environment, as per the AWS documentation itself:
Limitations
This approach should be used for testing and development purposes only. It is not recommended for production use due to the following limitations:
- The NAT instance is acting as a proxy between clients and multiple clusters. The addition of a proxy impacts the performance of the cache cluster. The impact increases with the number of cache clusters you are accessing through the NAT instance.
- The traffic from clients to the NAT instance is unencrypted. Therefore, you should avoid sending sensitive data via the NAT instance.
- The NAT instance adds the overhead of maintaining another instance.
- The NAT instance serves as a single point of failure. For information about how to set up high availability NAT on VPC, see High Availability for Amazon VPC NAT Instances: An Example.
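A minimal sketch of the recommended split, assuming a REDIS_URL-style environment variable that defaults to a local Redis in development and is set to the ElastiCache endpoint in each deployed environment:

```python
import os

import redis  # pip install redis

# Local development defaults to a Redis on the workstation; test, staging,
# and production each set REDIS_URL to their own cluster endpoint.
redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
client = redis.Redis.from_url(redis_url)

client.set("healthcheck", "ok")
print(client.get("healthcheck"))
```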

AWS Lambda - use Kinesis under VPC

I have an AWS Lambda function that makes use of an ElastiCache Redis cluster.
Since the Redis cluster is "locked" in a VPC, the Lambda function must reside in that VPC too.
For some reason, even if the Lambda is allocated an IP in a public subnet that has an Internet gateway, it still cannot make connections to the outside (the internet), which makes it impossible to use Kinesis.
For that, AWS suggests using a NAT gateway, which lets the Lambda connect to the outside.
Basically, this works for me - but my issue is the money.
This solution is expensive for large amounts of data transfer, and I'm looking for some way to make it cheaper.
For a small POC that I've made, I paid ~$10.
This is too much for ~30 GB, as my production pipeline will move hundreds of gigabytes per month.
How do you suggest I let the Lambda function connect to the outside (specifically to Kinesis) without using a NAT gateway?
Thank you!
without using a NAT gateway?
Use a NAT instance.
You have to have one of these two things for anything in VPC to access the Internet from a private IP address.
NAT instances were exactly how this was always done in VPC, until the relatively new NAT Gateway service was rolled out.
You can also use a NAT gateway, which is a managed NAT service that provides better availability, higher bandwidth, and requires less administrative effort. For common use cases, we recommend that you use a NAT gateway rather than a NAT instance.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
Sure, it's easier, but it costs more. A lot more. The most significant difference in this case is that with a NAT instance, you pay a flat rate for use of the hardware, which could be an inexpensive t2.nano, $5/mo.
The NAT Gateway service is a high powered solution with nearly infinite scaling capacity, and is priced accordingly. A NAT instance is only as good as the hardware you choose to run it on, but I find t2.nano and t2.micro quite adequate for workloads requiring less than 250 Mbit/s of Internet connectivity.
Use the link above to learn more.
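For reference, standing up the managed NAT Gateway and routing a private subnet through it is only a couple of API calls. A hedged boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a *public* subnet.
# All resource IDs below are placeholders.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until it's available, then default-route the private subnet through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```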
Lambda function instances will never be assigned a public IP address, regardless of the type of VPC subnet you place them in. A NAT gateway is the only solution to provide a Lambda function inside a VPC with access to resources that reside outside the VPC (like Kinesis).
If that isn't going to work for you due to cost, you might look into running a Redis server on an EC2 instance with an Elastic IP, which would allow the Lambda function to connect without being inside the VPC. A similar alternative would be to use RedisLabs instead of ElastiCache.

AWS architecture with limited elastic IPs

Right now our small-ish business has 3 clients, whom we have assigned to 3 Elastic IPs in Amazon Web Services (AWS).
If we restart an instance no one loses access because the IPs are the same after restart.
Is there a way to handle expanding to 3 more clients without having things fall apart if there's a restart?
I'm trying to request more IPs, but they say it depends on our architecture, and I'm not sure what architecture they're looking for (or why some architectures would warrant more Elastic IPs than others, or whether this is just an unchecked suggestion box).
I realize this is a very basic question, but googling around only gets me uninformative docs straight from the vendor's mouth.
EDIT:
There is a lot of content on the web (mostly old) about AWS supporting IPv6, but that functionality appears to be deprecated.
You can request more EIPs in the short run; up to 5 EIPs are free, depending on your account. You should also consider using name-based URLs and assigning each of your clients a subdomain, for example:
clientA.example.com
clientB.example.com
clientC.example.com
This way you will not need an additional IP for every client you add. Depending on your traffic, one EC2 instance can serve many clients (see the sketch below), and as you scale, you can put multiple EC2 instances behind an AWS Elastic Load Balancer to serve far more clients.
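To illustrate how a single instance (and a single IP) can serve many client subdomains, here is a toy name-based virtual hosting sketch that dispatches on the Host header; the tenant hostnames are placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy name-based virtual hosting: one server, one IP, many client subdomains.
TENANTS = {"clienta.example.com", "clientb.example.com", "clientc.example.com"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the tenant from the Host header rather than the IP address.
        host = self.headers.get("Host", "").split(":")[0].lower()
        if host in TENANTS:
            body = f"Hello, tenant {host}".encode()
            self.send_response(200)
        else:
            body = b"Unknown tenant"
            self.send_response(404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), Handler).serve_forever()
```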
If a client wants their servers kept separate and can pay for them, you can request as many EIPs as you need. You should also consider separating the database layer, with one database instance per client, which is probably what clients want more than separate IPs.
For IPv6, a quick workaround would be to use a front-end ELB that supports both IPv6 and IPv4.
If you use Elastic IPs in a VPC, you get 5 per region per AWS account. See Amazon VPC Limits.
So, you can go to the console, select VPC, click on Elastic IPs, and create one. Once created, assign it to the relevant instance (the equivalent API calls are sketched below).
So, at least for now, you can solve the problem if you are not tied to a single region.
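The console steps above map to two API calls. A minimal boto3 sketch, with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a VPC Elastic IP, then attach it to an instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
)
print("Allocated", eip["PublicIp"])
```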

EC2 instance region via IP Address

I'm trying to get my EC2 instances to communicate better with the APIs of a 3rd-party service. Latency is extremely important, as voice communication is heavily involved and lag is intolerable.
I know a few of the providers use EC2, but Amazon's IP system makes it difficult to find which region an instance is in. With non-Elastic-IP services I could do a whois and find out whether it was in Australia or somewhere in Europe, so I could put a server close by.
With these Elastic IPs, how can I find which zone they're in? I can use ping times, but that's a bit of a guess, and I'd have to create instances in different regions just to find the shortest ping time.
Amazon EC2 regularly publishes its Amazon EC2 Public IP Ranges, which clusters them by Region.
It does not cluster them by Availability Zone (AZ) (if you actually meant that literally), but this shouldn't matter much, insofar as cross-AZ latency should usually be within the single-digit-millisecond range.
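You can automate the lookup against the published ranges; the current feed lives at https://ip-ranges.amazonaws.com/ip-ranges.json. A small sketch:

```python
import ipaddress
import json
import urllib.request

# Map an IPv4 address to an AWS region using Amazon's published ranges.
RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def region_of(ip):
    with urllib.request.urlopen(RANGES_URL, timeout=10) as resp:
        prefixes = json.load(resp)["prefixes"]
    addr = ipaddress.ip_address(ip)
    for p in prefixes:
        if addr in ipaddress.ip_network(p["ip_prefix"]):
            return p["region"], p["service"]
    return None

# Example lookup; prints a (region, service) pair, or None if not an AWS IP.
print(region_of("52.95.110.1"))
```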
Other than that, you might also be interested in my answer to How could I determine which AWS location is best for serving customers from a particular region?, which outlines two other options for handling this: based on external data/algorithms, or via Multi-Region Latency Based Routing, now available for AWS (which would likely only be useful when fully embracing Amazon Route 53 as well).
Put your server behind Route 53 DNS and let Latency Based Routing do the rest - it automatically picks the lowest-latency server for each client.
http://aws.typepad.com/aws/2012/03/latency-based-multi-region-routing-now-available-for-aws.html
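Setting that up amounts to creating latency records that share a name but differ in Region and SetIdentifier. A hedged boto3 sketch; the hosted zone ID, hostname, and addresses are placeholders:

```python
import boto3

r53 = boto3.client("route53")

# Two latency-based records for the same name; Route 53 answers each query
# from the region with the lowest measured latency to the client.
# Zone ID, hostname, and addresses are placeholders.
for region, ip in [("us-east-1", "203.0.113.10"), ("eu-west-1", "203.0.113.20")]:
    r53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com.",
                    "Type": "A",
                    "SetIdentifier": region,
                    "Region": region,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )
```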