Right now our small-ish business has 3 clients, whom we have assigned to 3 Elastic IPs in Amazon Web Services (AWS).
If we restart an instance no one loses access because the IPs are the same after restart.
Is there a way to handle expanding to 3 more clients without having things fall apart if there's a restart?
I'm trying to request more IPs, but AWS suggests it depends on our architecture, and I'm not sure what architecture they're looking for (or why some architectures would warrant more Elastic IPs than others, or whether this is an unchecked suggestion box).
I realize this is a very basic question, but googling around only gets me uninformative docs straight from the vendor's mouth.
EDIT:
There is a lot of content on the interwebs (mostly old) about AWS supporting IPv6, but that functionality appears to be deprecated.
You can request more EIPs in the short run; up to 5 EIPs are free, depending on your account. You should also consider using name-based URLs and assigning each of your clients a subdomain, for example:
clientA.example.com
clientB.example.com
clientC.example.com
This way you will not need an additional IP for every client you add. Depending on your traffic, one EC2 instance can serve many clients, and as you scale, you can put multiple EC2 instances behind an AWS Elastic Load Balancer, which will scale to serve many more clients.
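For illustration, a minimal boto3 sketch that points several client subdomains at one shared Elastic IP (the hosted zone ID, domain, and address below are placeholders, not values from the question):

```python
import boto3

# Hypothetical values: substitute your hosted zone ID and Elastic IP.
HOSTED_ZONE_ID = "Z0EXAMPLE"
ELASTIC_IP = "203.0.113.10"
CLIENTS = ["clientA", "clientB", "clientC"]

route53 = boto3.client("route53")

# Point every client subdomain at the same Elastic IP, so one
# instance (or load balancer) can serve all of them by hostname.
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": f"{client}.example.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": ELASTIC_IP}],
        },
    }
    for client in CLIENTS
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```

Adding a fourth client is then a DNS change, not a new IP.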
If a client wants to keep their servers separate and can pay for them, you can purchase as many EIPs as you need. You should also consider splitting the database into one database instance per client, which is probably what clients desire even more than separate IPs.
For IPv6, a quick workaround would be to use a front-end ELB that supports both IPv6 and IPv4.
If you use Elastic IPs from a VPC, you get 5 per region per AWS account. See Amazon VPC Limits.
Go to the console and select VPC, then click Elastic IPs and allocate a new address. Once allocated, associate it with the relevant instance.
So, at least for now, you can solve the problem if you are not bothered about region.
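The console steps above map to two API calls; a hedged boto3 equivalent (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the VPC scope (counts against the
# 5-per-region default limit mentioned above).
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with an instance; the instance ID is hypothetical.
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)
```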
A new API service we use requires that we give them a list of all the IP addresses our calls will come from; if we make an API call from any other IP address, the call will fail.
This question has been asked before here, but I'm wondering if in 2019 there is any simpler/easier/lower cost solution.
Our Setup
Elastic Beanstalk, which currently scales to anywhere from 5 to 50 EC2 instances for our web application, based on traffic
An Application Load Balancer
Also have a worker tier, which would be available for use if that might be helpful
Typically these API calls would come from any of our web-tier EC2 instances, as the calls are based on user interactions. We could of course set up something different, e.g. have the worker tier make the calls.
Solutions I've Found
Give each EC2 instance an Elastic (static) IP address. This is not a great solution for us, because as we hopefully continue to scale, the number of IP addresses needed will continue to grow {ref}
Set up two NAT instances (one not being sufficient as it would be a single point of failure). I'm hoping there is something simpler and lower cost than this option. {ref} {ref}
Create new ec2 instances and put them behind a Network Load Balancer. Again, complex and costly. {ref}
Are there any new, easier, less costly solutions? I have never used AWS Lambda before; maybe it is possible to run Lambda functions all from one IP address? I don't have many ideas beyond that at this point. Thanks for your time.
A NAT is the best solution, and shouldn't cost you much more than a web-server.
The simplest way to use a NAT is the NAT Gateway. Pricing depends on region, but it's around $0.05/hour, which is a little more than the price of a t3.medium EC2 instance. You're also charged a per-GB rate for data, which can add up quickly. On the positive side, Amazon manages the infrastructure for you, including patches and high-availability.
A NAT Instance is an EC2 instance running a specially-configured AMI. You could probably get away with running this on a t3.micro instance, at $0.01 per hour, which is probably much less than any of your webservers. You will be responsible for applying patches and waking up in the middle of the night if anything goes wrong.
You can probably get away with a single NAT, of either type. You will pay for cross-AZ traffic by doing this ($0.01/GB), so it will be false economy if you move a lot of data across the NAT. It's a tossup on whether you'll get higher availability from two NATs, because you can only reference one at a time in your routing tables. So if one goes down you'll have to update the routing tables to point at the other, which will probably take as much time as bringing up a new instance.
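If you do go the NAT Gateway route, the setup is essentially two API calls; a hedged boto3 sketch (all IDs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders: substitute your own subnet, route table, and EIP allocation.
PUBLIC_SUBNET_ID = "subnet-0aaa1111"
PRIVATE_ROUTE_TABLE_ID = "rtb-0bbb2222"
EIP_ALLOCATION_ID = "eipalloc-0ccc3333"

# Create a managed NAT Gateway in a public subnet, backed by an Elastic IP,
# so all outbound calls share that one static address.
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=EIP_ALLOCATION_ID,
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is usable before routing traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send all outbound traffic from the private subnets through the NAT.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```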
You can't use Lambda, because your requirement is a permanent IP address assignment, and you can't control the address a Lambda function uses. You could write your own proxy server, running on EC2, but the costs for that are the same as a NAT Instance.
Here is prescriptive guidance from AWS: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html
"This pattern describes how to generate a static outbound IP address in the Amazon Web Services (AWS) Cloud by using a serverless architecture..."
Essentially, you have an AWS Lambda function that uses an Elastic IP address as its outbound IP address. In the guidance, you create "a Lambda function and a virtual private cloud (VPC) that routes outbound traffic through an internet gateway with a static IP address. To use the static IP address, you attach the Lambda function to the VPC and its subnets."
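As a rough sketch of just that attachment step (not the full pattern from the guidance), assuming an existing function and VPC; the function name and IDs below are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Attach an existing function to the VPC's private subnets (placeholders),
# so its outbound traffic is routed through the VPC's NAT / static IP.
lam.update_function_configuration(
    FunctionName="my-outbound-function",  # hypothetical function name
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
)
```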
I'm using a CloudFormation stack that deploys 3 EC2 VMs. Each needs to be configured to be able to discover the other 2, either via IP or hostname, doesn't matter.
Amazon's private internal DNS seems very unhelpful, because it's based on the IP address, which can't be known at provisioning time. As a result, I can't configure the nodes with just what I know at CloudFormation stack time.
As far as I can tell, I have a couple of options. All of them seem to me more complex than necessary - are there other options?
1. Use Route53: set up a private DNS hosted zone, make an entry for each of the VMs attached to their network interfaces, and then, by naming the entries, I should know ahead of time the private DNS names I've assigned to them.
2. Stand up yet another service to have the 3 VMs "phone home" once initialized, which could then report back to them who is ready.
3. Come up with some other VM-based shell magic, and do something goofy like using nmap to scan the local subnet for machines alive on a certain port.
On other clouds I've used (like GCP) when you provision a VM it gets an internal DNS name based on its resource name in the deploy template, which makes this kind of problem extremely trivial. Boy I wish I had that.
What's the best approach here? (1) seems straightforward, but requires people using my stack to have extra permissions they don't really need. (2) is extra resource usage that's kinda wasted. (3) Seems...well goofy.
Use Route53, set up a private DNS hosted zone, make an entry for each of the VMs which is attached to their network interface, and then by naming the entries
This is the best solution, but there's a simpler implementation.
Give each of your machines a "resource name".
In the CloudFormation stack, create an AWS::Route53::RecordSet resource that associates a hostname based on that "resource name" with the EC2 instance via its logical ID.
Inside your application, use the resource-name-based hostname to access the other instance(s).
An alternative may be to use an Application Load Balancer, with your application instances in separate target groups. The various EC2 instances then send all traffic through the ALB, so you only have one reference that you need to propagate (and it can be stored in the UserData for the EC2 instance). But that's a lot more work.
Note that the Route53 approach assumes you already have the private hosted zone set up.
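For illustration, a minimal sketch of that record-per-instance idea using the troposphere Python library (the AMI, zone name, and hostnames are placeholders, and a private hosted zone internal.example.com is assumed to exist):

```python
from troposphere import GetAtt, Template
from troposphere.ec2 import Instance
from troposphere.route53 import RecordSetType

t = Template()

# Three instances with predictable "resource names" (logical IDs).
for name in ["NodeA", "NodeB", "NodeC"]:
    instance = t.add_resource(Instance(
        name,
        ImageId="ami-12345678",   # placeholder AMI
        InstanceType="t3.micro",
    ))
    # An A record in the private hosted zone, named after the logical ID,
    # pointing at the instance's private IP once it is provisioned.
    t.add_resource(RecordSetType(
        name + "Dns",
        HostedZoneName="internal.example.com.",  # hypothetical zone
        Name=name.lower() + ".internal.example.com.",
        Type="A",
        TTL="300",
        ResourceRecords=[GetAtt(instance, "PrivateIp")],
    ))

print(t.to_json())
```

Each node can then reach its peers at nodea.internal.example.com, etc., which is known before the stack is deployed.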
I think what you are talking about is known as service discovery.
If you deploy the EC2 instances in the same subnet in the same VPC, with the same security group allowing the port they want to communicate over, they will be "discoverable" to each other.
You can then take this a step further: if the group has autoscaling and machines die and respawn, they can write their IPs into a registry (e.g. DynamoDB) so that other machines will know where to find them.
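A minimal sketch of that registry idea, assuming a DynamoDB table named service-registry keyed on instance_id (both names are placeholders); each instance would run something like this at boot:

```python
import boto3
import urllib.request

# Look up this instance's ID and private IP from the EC2 metadata
# service (IMDSv1 shown for brevity; IMDSv2 requires a session token).
def metadata(path):
    return urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/" + path
    ).read().decode()

instance_id = metadata("instance-id")
private_ip = metadata("local-ipv4")

# Register ourselves in the shared table so peers can discover us
# even after autoscaling replaces instances.
table = boto3.resource("dynamodb").Table("service-registry")
table.put_item(Item={"instance_id": instance_id, "ip": private_ip})
```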
We have a web application running on an EC2 instance.
We have added an AWS ELB so that all requests to the application go through the load balancer.
An SSL certificate has been applied to the ELB.
I am worried about whether HTTP communication between the ELB and the EC2 instance is secure,
or
should I use HTTPS communication between ELB and ec2 instance?
Does AWS guarantee the security of HTTP communication between the ELB and the EC2 instance?
I answered a similar question once but would like to highlight some points:
Use a VPC with a proper Security Group setup (a must) and network ACLs (optional).
Pay attention to how your private keys are distributed. AWS makes this easy by storing them safely in its system and never using them again on your servers. It is probably better to use self-signed certificates on your servers (reducing the chance of leaking your real private keys).
SSL is cheap these days (compute-wise).
It all depends on your security requirements, regulations, and how much complexity overhead you are willing to take on.
AWS does provide some guarantees (see the network section) against spoofing or retrieval of information by other tenants, but the safe assumption is that a multi-tenant public cloud environment is not 100% hygienic and you should encrypt.
A single-tenant instance (as suggested by @andreimarinescu) will not help, as the attack vector discussed here is the network between the ELB (a shared environment) and your instance (however, it might help against Xen zero-days).
Long answer with short summary - encrypt.
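If you terminate at a classic ELB and want the back-end hop encrypted too, a hedged boto3 sketch (the load balancer name and certificate ARN are placeholders):

```python
import boto3

elb = boto3.client("elb")

# Configure a listener that terminates SSL at the ELB and re-encrypts
# to the instances, which can present self-signed certificates.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-load-balancer",  # placeholder name
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTPS",  # encrypt the ELB -> instance hop too
        "InstancePort": 443,
        # Placeholder certificate ARN for the front-end listener:
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
    }],
)
```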
Absolute control over security and cloud deployments are in my opinion two things that don't mix very well.
Regarding the security of traffic between the ELB and the EC2 instances, you should probably deploy your resources in a VPC in order to add a new layer of isolation. AWS doesn't offer any security guarantees.
If the data transferred is too sensitive, you might also want to look at deploying in a dedicated data-center where you can have greater control over the networking aspects. Also, you might want to look at single-tenant instances on EC2, since you're probably sharing your physical resources with other EC2 customers.
That being said, there's one more aspect to take into account: SSL termination is quite an expensive job, so terminating SSL at the ELB level lets your backend instances focus resources on actually fulfilling requests. But it also impacts the ELB: it scales automatically, but it will have to do so faster, and you might see increased latency while it does so during traffic spikes.
I'm trying to get my EC2 instances to communicate better with APIs of a 3rd party service. Latency is extremely important as voice communication is heavily involved & lag is intolerable.
I know a few of the providers use EC2, but the thing is Amazon's IP system makes it difficult to find which region an instance is in. With non-Elastic-IP services I could do a whois and find whether it was in Australia or somewhere in Europe, so I could put a server close by.
With these Elastic IPs, how can I find which zone they're in? I can use ping times, but it's a bit of a guess, and I'd have to create instances in different regions to find the shortest ping time.
Amazon regularly publishes its Amazon EC2 Public IP Ranges, which are clustered by region.
It does not cluster them by Availability Zone (AZ) (if you actually meant that literally), but this shouldn't matter much, insofar as cross-AZ latency should regularly be within the single-digit millisecond range.
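A small Python sketch of that lookup, using the machine-readable ip-ranges.json feed AWS publishes (the example IP and printed region are illustrative):

```python
import ipaddress
import json
import urllib.request

# AWS publishes its public IP ranges, tagged by region, at this URL.
RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def region_for_ip(ip):
    """Return the AWS region whose published prefix contains `ip`, if any."""
    data = json.load(urllib.request.urlopen(RANGES_URL))
    addr = ipaddress.ip_address(ip)
    for prefix in data["prefixes"]:
        if addr in ipaddress.ip_network(prefix["ip_prefix"]):
            return prefix["region"]
    return None

print(region_for_ip("52.95.110.1"))  # prints a region name, e.g. "us-east-1"
```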
Other than that you might also be interested in my answer to How could I determine which AWS location is best for serving customers from a particular region?, which outlines two other options for handling this based on external data/algorithms or via the Multi-Region Latency Based Routing now Available for AWS (which would likely only be useful when fully embracing Amazon Route 53 as well).
Put your server behind a Route 53 DNS entry and let Latency Based Routing do the rest for you: it can automatically direct each client to the least latent server.
http://aws.typepad.com/aws/2012/03/latency-based-multi-region-routing-now-available-for-aws.html
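For illustration, a hedged boto3 sketch of such a setup (the zone ID, domain, and endpoint IPs are placeholders); Route 53 answers each lookup with the record for the region closest, latency-wise, to the client:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"  # placeholder hosted zone

# One latency record per region; Route 53 picks the lowest-latency
# region relative to the resolver making the query.
changes = []
for region, ip in [("us-east-1", "203.0.113.10"), ("eu-west-1", "203.0.113.20")]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": region,  # must be unique per record
            "Region": region,         # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```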
Is it possible to have most of our server hardware outside of EC2, but with some kind of load balancer to divert traffic to EC2 when there's load that our servers can't handle, or as a backup incase these servers go down?
For example, have a physical server serving out our service (let's ignore database consistency for the moment), but there's a huge spike due to some coolness - can we spin up some EC2 instances and divert traffic off to it? This is much like Amazon's own auto scaling.
And also, if our server hardware dies for some reason (gremlins eat the power cables for example) - can we route all our traffic over to EC2 instances?
Thanks
Yes, you can, but you will have to write some code. AWS has command-line tools for doing EC2/Auto Scaling/S3 work with simple commands in bash, as well as other interfaces and SDKs, like Boto for Python.
You can find it here: http://aws.amazon.com/code/
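For example, a minimal boto3 sketch of the "spin up instances on demand" part (the AMI ID and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Burst extra capacity into EC2; the AMI would be a pre-baked image
# of your application, ready to serve traffic on boot.
response = ec2.run_instances(
    ImageId="ami-12345678",  # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=4,              # burst up to four instances
)

for instance in response["Instances"]:
    print("started", instance["InstanceId"])
```

You would pair this with a health check or traffic threshold on your side, and a DNS or load-balancer change (as described below) to direct traffic to the new instances.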
Each EC2 instance has a public network interface associated with it. Use a DNS CNAME record to "switch" your site traffic to the EC2 instance. If you need to load-balance across multiple machines, you can use round-robin DNS, or start an ELB and put any number of EC2 instances behind it.
EC2 infrastructure is extremely easy to scale. Deploying your application on top of EC2 is a whole other matter; it could be trivial, or insanely complicated.