Autoscaling group on AWS exhausting internal IPs

Suppose I have this simple scenario. I set up an Auto Scaling group that will launch a new EC2 instance if CPU usage goes above 85%. My understanding is that when CPU usage drops below a certain level, the Auto Scaling group will scale in by reducing the number of EC2 instances.
Every time an EC2 instance is launched, a private IP address is assigned. Suppose my subnet has a CIDR block of 10.0.3.0/28 (which is 16 IPs).
Questions:
1. When the Auto Scaling group scales in (removes an EC2 instance), does it release the internal IP so that it is available the next time the group scales out?
2. If so, is it released immediately, or does it take some time to be released, and how long?

When the Auto Scaling group scales in (removes an EC2 instance), does it release the internal IP so that it is available the next time the group scales out?
Yes. When the instance is terminated, its network interface is deleted and the private IP address returns to the subnet's pool of available addresses.
If so, is it released immediately, or does it take some time to be released, and how long?
The time depends on how long the EC2 instance takes to shut down and terminate; you can test this in an AWS account on the free tier, it's easy to do.
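If you want to watch the pool directly, the subnet's free-address count is visible via the CLI (the subnet ID below is a placeholder). Keep in mind that AWS reserves 5 addresses in every subnet, so a /28 actually leaves 11 usable IPs:

    # Number of private IPs still free in the subnet
    aws ec2 describe-subnets \
        --subnet-ids subnet-0123456789abcdef0 \
        --query 'Subnets[0].AvailableIpAddressCount'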

Related

AWS ECS places one task per EC2 instance

I'm trying to create a deployment where as many tasks as possible are on each EC2 instance, but ECS still places one task per instance which then makes them heavily underutilized.
Here's all the settings I think are relevant:
ASG capacity provider uses managed scaling, target capacity is set to 100%, instances are t3a.micro (2048 CPU, 960 memory)
Task uses 480 memory, 1024 CPU but for testing I tried to go to 200 memory and 500 CPU and nothing changed
The service placement strategies are binpack(cpu) and binpack(memory)
The tasks are in a public VPC, within subnets that have access to a NAT gateway; public IP assignment is disabled and the networking mode is awsvpc
I'm changing the desired count to test this
On every deploy, tasks are placed onto empty instances, and if there aren't any, the ASG creates new ones; there is never more than one task per instance.
It seems that the micro EC2 instance type you are using doesn't have enough ENI capacity to place more than 1 task per container instance.
According to the documentation:
With the awsvpc network mode, Amazon ECS creates and manages an Elastic Network Interface (ENI) for each task and each task receives its own private IP address within the VPC. This ENI is separate from the underlying host's ENI. If an Amazon EC2 instance is running multiple tasks, then each task's ENI is separate as well.
Since "this ENI is separate from the underlying hosts ENI", running 1 ECS task requires at least 2 interfaces. In case of running 2 ECS tasks you would need 3 ENI and so on.
When I get a description of EC2 instance types (you can just run aws ec2 describe-instance-types --filters "Name=instance-type,Values=t3a.micro"), I see that t3a.micro has a maximum of only 2 network interfaces ("MaximumNetworkInterfaces": 2). So, you need to make better use of the existing maximum number of network interfaces, or get a container instance type that can attach more network interfaces.
A solution might be to use an instance type with more ENI capacity, or to increase task density with ENI trunking. Please read about the ENI trunking considerations first. Enabling ENI trunking might look something like this:
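As a hedged sketch (the region is a placeholder; note that, as far as I know, trunking is only offered on certain instance types, which does not include the t3a family, so you would pair it with a supported instance type):

    # Opt the account default into awsvpcTrunking; container instances
    # registered after this can use a trunk ENI for extra task ENIs
    aws ecs put-account-setting-default \
        --name awsvpcTrunking \
        --value enabled \
        --region eu-west-1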

Assigning static IPs to auto scaled EC2 instance

We have a 3rd party integration which needs the EC2 instance IP to be whitelisted. The 3rd party whitelists the IP on their server and then only the EC2 instance can communicate with them.
In the case of a single instance this works.
However, when auto scaling kicks in, we would end up with more than 1 instance. These new instances automatically get new IPs on every autoscale action.
Is it possible for us to ask AWS to assign IPs from, say, a set of 4 predefined Elastic IPs? (The assumption is that autoscaling is restricted to, say, 4 and we have 4 floating EIPs.)
I'm trying to avoid a NAT gateway since there is a big cost associated with it.
Any ideas?
With Auto Scaling it is not directly possible to assign an Elastic IP to autoscaled instances. However, there are a couple of options you can consider.
After the instance launches, have a boot-up script (e.g. User Data on Linux) run AWS CLI commands to associate one of the Elastic IP addresses you have allocated to your account. Note that you need to handle the health checks accordingly for the transition to happen smoothly.
Have a CloudWatch alarm trigger a Lambda function that associates an Elastic IP address with the newly started instance. Using the AWS SDK, the function can find the instance without an EIP and associate an available EIP with it.
Auto Scaling will not automatically assign an Elastic IP address to an instance.
You could write some code to do this and include it as part of the User Data that is executed when an instance starts. It would:
Retrieve a list of Elastic IP addresses
Find one that is not currently associated with an EC2 instance
Associate it with itself (that is, with the EC2 instance that is running the User Data script)
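A rough sketch of such a User Data script, assuming the instance profile allows ec2:DescribeAddresses and ec2:AssociateAddress (the region is a placeholder):

    #!/bin/bash
    # Look up this instance's ID from the instance metadata service
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

    # Pick the allocation ID of an Elastic IP that has no association yet
    ALLOC_ID=$(aws ec2 describe-addresses \
        --region us-east-1 \
        --query 'Addresses[?AssociationId==null].AllocationId | [0]' \
        --output text)

    # Associate that Elastic IP with this instance
    if [ "$ALLOC_ID" != "None" ]; then
        aws ec2 associate-address \
            --region us-east-1 \
            --instance-id "$INSTANCE_ID" \
            --allocation-id "$ALLOC_ID"
    fi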
Use a NAT instance. There's only a small cost associated with a t2.nano and you should find that more than adequate for the purpose.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
While not as reliable as a NAT Gateway (you're paying for hands-off reliability and virtually infinite scalability), it's unlikely you'll have trouble with a NAT instance unless the underlying hardware fails, and you can help mitigate this by configuring Instance Recovery:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
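Setting up that recovery alarm might look something like this (a sketch; the instance ID and region are placeholders):

    # Automatically recover the NAT instance if the underlying host fails
    aws cloudwatch put-metric-alarm \
        --region us-east-1 \
        --alarm-name nat-instance-auto-recover \
        --namespace AWS/EC2 \
        --metric-name StatusCheckFailed_System \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --statistic Maximum \
        --period 60 \
        --evaluation-periods 2 \
        --threshold 1 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --alarm-actions arn:aws:automate:us-east-1:ec2:recover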

How to work with an Amazon AWS EC2 reserved instance?

Recently I purchased an EC2 t2.medium reserved instance.
There is no option to start or stop it, whereas my free tier instance has all the features including stop, start, a public IP, etc.
Please suggest how to use and work with a reserved instance.
Reserved instances are not started or stopped; they merely represent the reservation of an on-demand instance at a reduced price. In order to use your reserved instance, you just need to have an instance of the same type as your reservation already launched, or launch an on-demand instance of the same type, and the reservation will be applied to your instance. The only thing you need to make sure of is that the server is started in the same availability zone as your reserved instance (for example, if your reserved instance is for the us-east-1c AZ, you have to choose the same AZ for your on-demand instance).
To make sure you are using the full number of reserved instances at your disposal, you can check the Reports section of your EC2 dashboard; the EC2 Reserved Instance Utilization report will show you how much of your reservations is being utilized.
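From the command line, a quick way to cross-check the reservation against what is running (a sketch; the instance type here simply matches the question):

    # List active reservations with their type, AZ and count
    aws ec2 describe-reserved-instances \
        --filters "Name=state,Values=active" \
        --query 'ReservedInstances[].[ReservedInstancesId,InstanceType,AvailabilityZone,InstanceCount]' \
        --output table

    # List running instances of the same type and their AZs
    aws ec2 describe-instances \
        --filters "Name=instance-type,Values=t2.medium" "Name=instance-state-name,Values=running" \
        --query 'Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]' \
        --output table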

How to achieve EC2 high availability while preferring the instance launch into a specific Availability Zone

I am looking for how to specify the zone I want to deploy to in a single-instance deployment with autoscaling, while also having automatic failover to another zone. Do any options exist to achieve this?
More context
Due to how reserved instances are linked to a single availability zone (AZ), we find it to be a good strategy (from an "ease of management"/simplicity perspective), when buying reserved instances for our dev environment, to buy them all in a single zone and then launch all dev instances in that single zone. (In production, we buy across zones and run with autoscale groups that specify to deploy across all zones).
I am looking for how to:
Specify the AZ that I want an instance to be deployed to, so that I can leverage the reserved instances that are tied to a single (and consistent) AZ.
while also having
The ability to failover to an alternate zone if the primary zone fails (yes, you will pay more money until you move the reserved instances, but presumably the failover is temporary e.g. 8 hours, and you can fail back once the zone is back online).
The issue is that I can see how you can achieve 1 or 2, but not 1 and 2 at the same time.
To achieve 1, I would specify a single subnet (and therefore AZ) to deploy to, as part of the autoscale group config.
To achieve 2, I would specify more than one subnet in different AZs, while keeping the min/max/capacity setting at 1. If the AZ that the instance non-deterministically got deployed to fails, the autoscale group will spin up an instance in the other AZ.
One cannot do 1 and 2 together to achieve a preference for which zone an autoscale group of min/max/capacity of 1 gets deployed to while also having automatic failover if the zone the server is in fails; they are competing solutions.
This solution uses all AWS mechanisms to achieve the desired effect:
Launch the instance into the preferred zone by specifying that zone's subnet in the 1st autoscale group's config; this group's min/max/capacity is set to 1/1/1.
Create a second autoscale group with the same launch config as the 1st, but this other autoscale group is set to a min/max/desired of 0/1/0; this group should be configured with the subnets in every available zone in the region except the one specified in the 1st autoscale group.
Associate the 2nd autoscale group with the same ELB that is associated with the 1st autoscale group.
Set up a CloudWatch alarm that triggers on the unhealthy host alarm for #1's autoscale group; have the alarm change the #2 autoscale group's capacity to a min/max/desired of 1/1/1 (as well as send out a notification so that you know this happened).
If you don't expect to get unhealthy host alarms except in the cases where there is an actual host failure or if the AZ goes down -- which is true in our case -- this is a workable solution.
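A hedged sketch of the two groups with the CLI (group names, launch configuration, subnet IDs and ELB name are all placeholders):

    # 1st group, pinned to the preferred AZ's subnet
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name app-primary \
        --launch-configuration-name app-lc \
        --vpc-zone-identifier "subnet-preferred-az" \
        --min-size 1 --max-size 1 --desired-capacity 1 \
        --load-balancer-names app-elb

    # 2nd group, spanning the remaining AZs, kept empty until needed
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name app-standby \
        --launch-configuration-name app-lc \
        --vpc-zone-identifier "subnet-other-az-1,subnet-other-az-2" \
        --min-size 0 --max-size 1 --desired-capacity 0 \
        --load-balancer-names app-elb

    # The alarm action (or a small script it triggers) then raises the standby group
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name app-standby \
        --min-size 1 --max-size 1 --desired-capacity 1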
As you have already figured out (as of mid-2015), that's not possible. Auto Scaling doesn't have the concept of failover, strictly speaking. It expects you to provide more than one AZ and enough machines in each one if you want high availability. If you don't, then you aren't going to get it.
The only possible workaround I can imagine for this is setting up a watchdog yourself which changes the auto-scaling group's subnet once an AZ becomes unavailable. Not so hard to do, but not so reliable either.
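The core of such a watchdog is a single CLI call that swaps the group's subnets, something like this (the group name and subnet ID are placeholders):

    # Repoint the group at a subnet in a healthy AZ; replacement instances launch there
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --vpc-zone-identifier "subnet-in-healthy-az"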

Is there a way to automatically terminate unhealthy EC2 instances from ELB?

Is there any way to have either ELB or an EC2 auto-scaling group terminate (or reboot) unhealthy instances from ELB?
There are some specific database failure conditions in our front end which make it turn unhealthy, so the ELB will stop routing traffic to it. That instance is also part of an auto-scaling group, which scales on the group's CPU load. So, what ends up happening is that the instance no longer gets traffic from the ELB, so it has no CPU load, which skews the group's CPU load and thus screws up the scaling conditions.
Is there an "easy" way to somehow configure ELB or an autoscaling group to automatically terminate unhealthy instances from the group without actually having to write code to do the polling and terminating via the EC2 API?
If you set the autoscaling group's health check type to ELB, then it will automatically retire any instances that fail the ELB health checks (i.e. don't respond in a timely manner to the configured URL).
As long as the configured health check properly reports that an instance is bad (which sounds like it is the case, since you say the ELB is marking the instance as unhealthy), this should work.
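Switching an existing group over is a one-liner (a sketch; the group name and grace period are placeholders):

    # Use ELB health checks so instances failing them are terminated and replaced
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --health-check-type ELB \
        --health-check-grace-period 300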