We use Amazon EC2 Auto Scaling groups for a bunch of apps - as everyone knows, while you try your hardest not to have memory leaks and other "been up for a while" problems, it's still possible.
We'd like to protect against such possibilities by just bouncing the servers - i.e. make sure an instance only lasts, say, 24 hours before killing it. However, we want the killing to be "safe" - e.g. even if there's only one instance in the group, we want to start up another instance to a working state, then kill the old box.
Is there any support for this? E.g. a time-to-live property on an instance?
There is no such property in Amazon EC2 nor in Auto Scaling.
You could manually set the instance health to Unhealthy, which would cause Auto Scaling to terminate and replace the instance. However, if you have only one instance then there will likely be a period where there are no instances.
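If you go this route, a minimal boto3 sketch (the instance ID is hypothetical) could look like this:

```python
# Mark an instance as Unhealthy so Auto Scaling terminates and replaces it.
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.set_instance_health(
    InstanceId='i-0123456789abcdef0',   # hypothetical instance you want recycled
    HealthStatus='Unhealthy',
    ShouldRespectGracePeriod=False      # act immediately rather than waiting out the grace period
)
```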
You could set the Auto Scaling termination policy to OldestInstance, which means that when Auto Scaling needs to terminate an instance, it will terminate the oldest instance within the AZ that has the most instances. This gets rid of old instances, but only when the group scales in.
Therefore, you could supplement the Termination Policy with a script that scales-out the group and then causes it to scale-in again. For example, double the number of instances, wait for them to launch, and then halve the number of instances. This should cause them all to refresh (with a few edge conditions if your instances are spread across multiple AZs, causing non-even counts).
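A rough sketch of that scale-out/scale-in cycle in Python (boto3), assuming a hypothetical group name and a MaxSize large enough to allow doubling:

```python
# Double the desired capacity, wait for the new instances, then halve it again.
import time
import boto3

autoscaling = boto3.client('autoscaling')
GROUP = 'my-asg'   # hypothetical group name

group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])['AutoScalingGroups'][0]
original = group['DesiredCapacity']

# Scale out to double the current capacity
autoscaling.set_desired_capacity(AutoScalingGroupName=GROUP, DesiredCapacity=original * 2, HonorCooldown=False)

# Wait until the new instances are InService (simplistic polling; real code should time out)
while True:
    group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])['AutoScalingGroups'][0]
    in_service = [i for i in group['Instances'] if i['LifecycleState'] == 'InService']
    if len(in_service) >= original * 2:
        break
    time.sleep(30)

# Scale back in; with the OldestInstance termination policy the old instances should be removed
autoscaling.set_desired_capacity(AutoScalingGroupName=GROUP, DesiredCapacity=original, HonorCooldown=False)
```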
Another option is to restart the instance(s). This will not cause them to appear unhealthy to Auto Scaling, but they will appear unhealthy to a Load Balancer. (If you have activated ELB health checks within Auto Scaling, then Auto Scaling would actually terminate instances that fail the health check.) You can use Scheduled Events for Your Instances to have Amazon CloudWatch Events restart your instance(s) at certain intervals, or even have a script on the instance tell the operating system to restart at certain intervals.
However, there is no automatic option to do exactly what you asked.
Since 2019, there has been a Maximum Instance Lifetime parameter, which almost does what you wanted.
Unfortunately, though, it isn’t possible to set the maximum instance lifetime to 24 hours (86400 seconds): the minimum is a week.
Maximum instance lifetime must be equal to 0, between 604800 and 31536000 seconds (inclusive), or not specified.
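If a seven-day (or longer) lifetime is acceptable, setting it is a one-line call; a minimal boto3 sketch with a hypothetical group name:

```python
# Set Maximum Instance Lifetime on an existing Auto Scaling group.
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='my-asg',   # hypothetical group name
    MaxInstanceLifetime=604800       # 7 days, the minimum per the message above; 0 clears the setting
)
```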
I am running a Flask (Python) program on EC2 behind a Load Balancer with Auto Scaling. When load increases on one EC2 instance it creates another, and when a newly scaled instance is idle or under-utilized it scales in and terminates it. The problem is: if a single user is accessing the newly scaled instance, which hardly generates any CPU utilization, how will the Auto Scaling group realize that it is not idle? If it doesn't, it will terminate the instance and cause downtime for that user.
I have a couple of scenarios in mind, e.g. check whether a particular program has been running on the instance for a certain amount of time; if it is running, don't terminate, otherwise terminate it.
I see the Step scaling policy, but the only option there is CPU utilization, which is barely consumed with a single user - not even 0.1%.
Can someone please tell me what the best option is for me, and if these two options are possible, how to do them? I have been asking developers for many days but could not get reliable answers for my case.
Amazon EC2 Auto Scaling does not know which of your instances are 'in use'.
Also, the decision to terminate an instance is typically made on a metric across all instances (eg CPU Utilization), rather than a metric on a specific instance.
When Auto Scaling decides to remove an instance from the Auto Scaling group, it picks an instance as follows:
It picks an Availability Zone with the most instances (to keep them balanced)
It then selects an instance based on the Termination Policy
See also: Control which Auto Scaling instances terminate during scale in - Amazon EC2 Auto Scaling
When using a Load Balancer with Auto Scaling, traffic going to the instance that will be terminated is 'drained', allowing a chance for the instance to complete existing requests.
You can further prevent an instance from terminating while it is still "in use" by implementing Amazon EC2 Auto Scaling lifecycle hooks, which allow your own code to delay the termination.
Or, if all of this is unsatisfactory, you can disable the automatic selection of an instance to terminate and instead have your own code call TerminateInstanceInAutoScalingGroup - Amazon EC2 Auto Scaling to terminate a specific instance of your choosing.
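For that call, a minimal boto3 sketch (the instance ID is hypothetical):

```python
# Terminate a specific instance of your choosing and let Auto Scaling replace it.
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId='i-0123456789abcdef0',       # hypothetical instance ID
    ShouldDecrementDesiredCapacity=False    # False = launch a replacement; True = shrink the group instead
)
```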
For an overview of Auto Scaling, I recommend this video from the AWS re:Invent conference: AWS re:Invent 2019: Capacity management made easy with Amazon EC2 Auto Scaling (CMP326-R1) - YouTube
Is there a way to elegantly script/configure Spot Instance requests so that, if Spot is not available within some specified duration, On-Demand is used instead? And if the Spot instance gets terminated, just shift to On-Demand.
Spot Fleet does not do this (it manages only Spot), and EMR instance fleets have some logic around this. You can have Auto Scaling with Spot or with On-Demand, but not both (although you can simulate this behavior with two separate ASGs).
This should be some kind of a base line use case.
Also, does an event get triggered when a Spot instance is launched or when it is terminated? I am only seeing CLIs to check Spot status, not any CloudWatch metric/event.
CloudWatch instance-state events can fire whenever any instance changes state.
They can fire for any state in the lifecycle of an instance:
pending (launching), running (launch complete), shutting-down, stopping, stopped, and terminated, for any instance (or for all instances, which is probably what you want -- just disregard any instance that isn't of interest), and this includes both On-Demand and Spot.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#ec2_event_type
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/LogEC2InstanceState.html
You could use this to roll your own solution -- there's not a built in mechanism for marshaling mixed fleets.
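As a sketch of what "rolling your own" might look like, the rule below matches EC2 instance state-change notifications and sends them to a Lambda function (the function ARN is hypothetical, and the Lambda still needs a resource policy allowing CloudWatch Events to invoke it):

```python
# Create a CloudWatch Events rule that fires on EC2 instance state changes
# (covers both On-Demand and Spot instances).
import json
import boto3

events = boto3.client('events')

events.put_rule(
    Name='ec2-state-change',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['running', 'terminated']}   # e.g. react to launches and terminations
    }),
    State='ENABLED'
)

events.put_targets(
    Rule='ec2-state-change',
    Targets=[{'Id': 'handler', 'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-handler'}]
)
```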
I used to do this from the ELB with health checks. You can make two groups, one with Spot instances and one with Reserved or On-Demand. Create a CloudWatch alarm that fires when the Spot group contains zero healthy hosts, and scale up the other group when it fires. And the other way around: when the Spot group has enough healthy hosts, scale the other group down. Use 30-second health checks on the alarm you use to scale up, and a 30-60 minute cooldown on scale-down.
There is also SpotML, which allows you to always keep a Spot instance or an On-Demand instance up and running.
In addition to simply spawning the instance, it also allows you to:
Preserve data via persistent storage
Configure a startup script to run each time a new instance is spawned.
Disclosure: I'm also the creator of SpotML, it's primarily useful for ML/DataScience workflows that can largely just run on spot instances.
I'm writing a web-service that packs up customer data into zip-files, then uploads them to S3 for download. It is an on-demand process, and the amount of data can range from a few Megabytes to multiple Gigabytes, depending on what data the customer orders.
Needless to say, scalability is essential for such a service. But I'm having trouble with it. Packaging the data into zip-files has to be done on the local hard drive of a server instance.
But the load balancer is prone to terminating instances that are still working. I have taken a look at scaling policies:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html
But what I need doesn't seem to be there. The issue shouldn't be so difficult: I set the scaling metric to CPU load and scale down when it goes under 1%. But I need a guarantee that the exact instance that breached the threshold will be terminated, not another one that's still hard at work, and the available policies don't seem to give me that option. Right now, I am at a loss how to achieve this. Can anybody give me some advice?
You can use Auto Scaling Lifecycle Hooks to perform actions before an instance is terminated. You could use this to wait for the processing to finish before proceeding with the instance termination.
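A rough sketch of that approach in boto3 (group name, hook name and instance ID are hypothetical): add a termination lifecycle hook to the group once, then have the instance (or whatever monitors the job) complete the hook when its work is done.

```python
# Hold instances in the Terminating:Wait state until in-flight work finishes.
import boto3

autoscaling = boto3.client('autoscaling')

# One-time setup: add a termination lifecycle hook to the group
autoscaling.put_lifecycle_hook(
    LifecycleHookName='wait-for-zip-jobs',
    AutoScalingGroupName='zip-workers',
    LifecycleTransition='autoscaling:EC2_INSTANCE_TERMINATING',
    HeartbeatTimeout=3600,        # give the instance up to an hour to finish
    DefaultResult='CONTINUE'      # proceed with termination if the hook times out
)

# When the instance has finished its work, release it:
autoscaling.complete_lifecycle_action(
    LifecycleHookName='wait-for-zip-jobs',
    AutoScalingGroupName='zip-workers',
    LifecycleActionResult='CONTINUE',
    InstanceId='i-0123456789abcdef0'
)
```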
It appears that you have configured an Auto Scaling group with scaling policies based upon CPU Utilization.
Please note that an Elastic Load Balancer will never terminate an Amazon EC2 instance -- if a Load Balancer health check fails, it will merely stop serving traffic to that EC2 instance until it again passes the health checks. It is possible to configure Auto Scaling to use ELB health checks, in which case Auto Scaling will terminate any instances that ELB marks as unhealthy.
Therefore, it would appear that Auto Scaling is responsible for terminating your instances, as a result of your scaling policies. You say that you wish to terminate specific instances that are unused. However, this is not the general intention of Auto Scaling. Rather, Auto Scaling is used to provide a pool of resources that can be scaled by launching new instances and terminating unwanted instances. Metrics that trigger Auto Scaling are typically based upon aggregate metrics across the whole Auto Scaling group (eg average CPU Utilization).
Given that Amazon EC2 instances are charged by the hour, it is often a good idea to keep instances running longer -- "Scale Out quickly, Scale In slowly".
Once Auto Scaling decides to terminate an instance (which it selects via a termination policy), use an Auto Scaling lifecycle hook to delay the termination until ready (eg, copying log files to S3, or waiting for a long process to complete).
If you do wish to terminate an instance after it has completed a particular workload, there is no need to use Auto Scaling -- just have the instance Shutdown when it is finished, and set the Shutdown Behavior to terminate to automatically terminate the instance upon shutdown. (This assumes that you have a process to launch new instances when you have work you wish to perform.)
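A minimal boto3 sketch of that setup (the instance ID is hypothetical); once the attribute is set, the instance terminates itself as soon as the operating system shuts down:

```python
# Make an instance terminate (rather than stop) when the OS shuts down.
import boto3

ec2 = boto3.client('ec2')

ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',   # hypothetical instance ID
    InstanceInitiatedShutdownBehavior={'Value': 'terminate'}
)
```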
Stepping back and looking at your total architecture, it would appear that you have a Load Balancer in front of web servers, and you are performing the Zip operations on the web servers? This is not a scalable solution. It would be better if your web servers pushed a message into an Amazon Simple Queue Service (SQS) queue, and then your fleet of back-end servers processed messages from the queue. This way, your front-end can continue receiving requests regardless of the amount of processing underway.
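A bare-bones sketch of that queue-based split, with a hypothetical queue URL and message format:

```python
# Web tier enqueues a zip job; a separate worker fleet consumes and processes it.
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/zip-jobs'   # hypothetical queue

# Web server: enqueue the job instead of zipping inline
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"customer_id": 42, "order_id": "abc123"}')

# Worker: poll for jobs, process, then delete the message
response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for message in response.get('Messages', []):
    # ... build the zip file and upload it to S3 ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message['ReceiptHandle'])
```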
It sounds like what you need is Instance Protection, which is actually mentioned a bit more towards the bottom of the document that you linked to. As long as you have work being performed on a particular instance, it should not be automatically terminated by the Auto-Scaling Group (ASG).
Check out this blog post, on the official AWS blog, that conceptually talks about how you can use Instance Protection to prevent work from being prematurely terminated.
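If you want to set the protection programmatically while a job is running, a minimal boto3 sketch (group and instance names are hypothetical):

```python
# Protect a busy instance from scale-in, then clear the flag when it goes idle.
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.set_instance_protection(
    InstanceIds=['i-0123456789abcdef0'],   # hypothetical instance ID
    AutoScalingGroupName='zip-workers',    # hypothetical group name
    ProtectedFromScaleIn=True              # set back to False once the work is finished
)
```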
We have an AWS Auto Scaling group that has 10-20 servers behind a load balancer. After running for a couple of weeks, some of these servers go bad. We have no idea why the servers go bad, and it will take some time for us to get to a stage where we can debug this issue.
In the interim is there a way to tell AWS to terminate all the instances in the scaling group in a controlled fashion (one by one) until all the instances are replaced by new ones every week or so?
You can achieve this very effectively using AWS Data Pipeline.
See the developer guide: How do I stop and start Amazon EC2 Instances at scheduled intervals with AWS Data Pipeline?
There is no function in Auto Scaling to tell it to automatically terminate and replace instances. However, you could script such functionality.
Assumptions:
Terminate instances that are older than a certain number of hours old
Do them one-at-a-time to avoid impacting available capacity
You wish to replace them immediately
A suitable script would do the following:
Loop through all instances in a given Auto-Scaling Group using describe-auto-scaling-instances
If the instance belongs to the desired Auto Scaling group, retrieve its launch time via describe-instances
If the instance is older than the desired number of hours, terminate it using terminate-instance-in-auto-scaling-group with --no-should-decrement-desired-capacity so that it is automatically replaced
Then, wait a few minutes to allow it to be replaced and continue the loop
The script could be created using the AWS Command-Line Interface (CLI) or a programming language such as Python; a rough Python sketch follows.
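As a sketch only (the group name and age threshold are assumptions, and pagination and error handling are omitted):

```python
# Terminate instances older than MAX_AGE, one at a time, letting Auto Scaling replace each one.
from datetime import datetime, timedelta, timezone
import time
import boto3

autoscaling = boto3.client('autoscaling')
ec2 = boto3.client('ec2')

GROUP = 'my-asg'                 # hypothetical group name
MAX_AGE = timedelta(hours=72)    # assumed age threshold

instances = autoscaling.describe_auto_scaling_instances()['AutoScalingInstances']
instance_ids = [i['InstanceId'] for i in instances if i['AutoScalingGroupName'] == GROUP]

for instance_id in instance_ids:
    launch_time = ec2.describe_instances(InstanceIds=[instance_id]) \
        ['Reservations'][0]['Instances'][0]['LaunchTime']
    if datetime.now(timezone.utc) - launch_time > MAX_AGE:
        autoscaling.terminate_instance_in_auto_scaling_group(
            InstanceId=instance_id,
            ShouldDecrementDesiredCapacity=False   # keep capacity, so a replacement is launched
        )
        time.sleep(600)   # crude pause so the replacement is in service before the next termination
```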
Alternatively, you could program the instances to self-destruct after a given period of time (eg 72 hours) by simply calling the operating system to shut-down the instance. This would cause auto-scaling to terminate the instance and replace it.
There are two ways to achieve what you are looking for: Scheduled Auto Scaling Actions, or taking one of the instances out of the ASG (Standby).
Scheduled Scaling
Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.
https://docs.aws.amazon.com/autoscaling/latest/userguide/schedule_time.html
You most likely want this.
Auto Scaling enables you to put an instance that is in the InService state into the Standby state, update or troubleshoot the instance, and then return the instance to service. Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle application traffic.
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-enter-exit-standby.html
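A minimal boto3 sketch of the Standby round-trip (group and instance names are hypothetical):

```python
# Put an instance into Standby for maintenance, then return it to service.
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.enter_standby(
    InstanceIds=['i-0123456789abcdef0'],
    AutoScalingGroupName='my-asg',
    ShouldDecrementDesiredCapacity=True   # don't launch a replacement while it is on standby
)

# ... update or troubleshoot the instance ...

autoscaling.exit_standby(
    InstanceIds=['i-0123456789abcdef0'],
    AutoScalingGroupName='my-asg'
)
```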
As of Nov 20, 2019, EC2 Auto Scaling supports Maximum Instance Lifetime: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ec2-auto-scaling-supports-max-instance-lifetime/
From:
The maximum instance lifetime specifies the maximum amount of time (in seconds) that an instance can be in service. The maximum duration applies to all current and future instances in the group. As an instance approaches its maximum duration, it is terminated and replaced, and cannot be used again.
When configuring the maximum instance lifetime for your Auto Scaling group, you must specify a value of at least 86,400 seconds (1 day). To clear a previously set value, specify a new value of 0.
Scenario
I am creating a script that will launch new instances into an Auto Scaling Group and then remove the old instances. The purpose is to introduce newly created (or updated) AMIs to the Auto Scaling Group. This is accomplished by increasing the Desired capacity to double the current number of instances. Then, after the new instances are Running, the Desired capacity is decreased by the same number.
Problem
When I run the script, I watch the group capacity increase by double, the new instances come online, they reach the Running state, and then the group capacity is decreased. Works like a charm. The problem is that SOMETIMES the instances that are terminated by the decrease are actually the new ones instead of the older ones.
Question
How can I ensure that the AutoScaling Group will always terminate the Oldest Instance?
Settings
The AutoScaling Group has the following Termination Polices: OldestInstance, OldestLaunchConfiguration. The Default policy has been removed.
The Default Cooldown is set to 0 seconds.
The Group only has one Availability Zone.
Troubleshooting
I played around with the Cooldown setting. Ended up just putting it on 0.
I waited different lengths of time to see if the existing servers needed to be running for a certain amount of time before they would be terminated. It seems that if they are less than 5 minutes old, they are less likely to be terminated, but not always. I had servers that were 20 minutes old that were not terminated, while newer ones were. Perhaps newly launched instances have some termination protection grace period?
Concession
I know that in most cases, the servers I will be replacing will have been running for a long time. In production, this might not be an issue. Still, it is possible that during the normal course of AutoScaling, an older server will be left running instead of a newer one. This is not an acceptable way to operate.
I could force specific instances to terminate, but that would defeat the point of the OldestInstance Termination Policy.
Update: 12 Feb 2014
I have continued to see this in production. Instances with older launch configs that have been running for weeks will be left running while newer instances will be terminated. At this point I am considering this to be a bug. A thread at Amazon was opened for this topic a couple years ago, apparently without resolution.
Update: 21 Feb 2014
I have been working with AWS support staff and at this point they have preliminarily confirmed it could be a bug. They are researching the problem.
It doesn't look like you can, precisely, because Auto Scaling is trying to do one other thing for you in addition to keeping the correct number of instances running: keep your instance counts balanced across Availability Zones... and it prioritizes this consideration above your termination policy.
Before Auto Scaling selects an instance to terminate, it first identifies the Availability Zone that has more instances than the other Availability Zones used by the group. If all Availability Zones have the same number of instances, it identifies a random Availability Zone. Within the identified Availability Zone, Auto Scaling uses the termination policy to select the instance for termination.
— http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/us-termination-policy.html
If you're out of balance, then staying in balance is arguably the most sensible strategy, especially if you are using ELB. The documentation is a little ambiguous, but ELB advertises one public IP in DNS for each Availability Zone where it is configured; these IP addresses (one per zone) achieve the first tier of load balancing by virtue of round-robin DNS. If all of the Availability Zones where the ELB is enabled have healthy instances, then there appears to be a 1:1 correlation between which external IP the traffic hits and which Availability Zone's servers that traffic will be offered to by ELB -- at least that is what my server logs show. It appears that ELB doesn't route traffic across Availability Zones to alternate servers unless all of the servers in a given zone are detected as unhealthy, and that may be one of the justifications for why they've implemented Auto Scaling this way.
Although this algorithm might not always kill the oldest instance first on a region-wide basis, if it operates as documented it will kill off the oldest one in the selected Availability Zone, and at some point it should cycle through all of them over the course of several shifts in load... so it would not leave the oldest running indefinitely, either. The larger the group, the less significant this effect should be.
There are a couple of other ways to do it (a rough sketch of the first approach follows its list of steps):
Increase desired to 2x
Wait for action to increase capacity
When the new instances are running, suspend all AS activity (as-suspend-processes MyAutoScalingGroup)
Reset desired
Terminate old instances
Resume AS activity.
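A rough boto3 sketch of those steps, assuming the group name, the original capacity, and the list of old instance IDs were captured before doubling:

```python
# Suspend scaling, reset desired capacity, terminate the old instances ourselves, then resume.
import boto3

autoscaling = boto3.client('autoscaling')
ec2 = boto3.client('ec2')

GROUP = 'my-asg'                                                     # hypothetical group name
ORIGINAL = 2                                                         # desired capacity before doubling
OLD_INSTANCE_IDS = ['i-0aaaaaaaaaaaaaaaa', 'i-0bbbbbbbbbbbbbbbb']    # hypothetical old instances

autoscaling.suspend_processes(AutoScalingGroupName=GROUP)            # freeze all scaling activity

autoscaling.set_desired_capacity(AutoScalingGroupName=GROUP,
                                 DesiredCapacity=ORIGINAL,
                                 HonorCooldown=False)                # reset desired capacity

ec2.terminate_instances(InstanceIds=OLD_INSTANCE_IDS)                # terminate the old instances

autoscaling.resume_processes(AutoScalingGroupName=GROUP)             # resume scaling activity
```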
Or:
Bring up a brand new ASG with the new launch config.
Suspend AS activity, until step 1 is finished.
If everything is ok, delete the old ASG.
Resume AS activity
For ultimate rollback deployment:
Create new ELB (you might have to ask Amazon to provision more ELB capacity if you have a lot of traffic; this is kinda lame and makes it not automation friendly)
Create new ASG with new LC
Switch DNS to new ELB
Delete old ELB/ASG/LC if everything's fine, if not just change DNS back
Or with the new ASG API that lets you attach/detach instances from ASG:
Somehow bring up your new instances (could just be run-instances or create a temp asg)
Suspend AS activity, Use http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/attach-instance-asg.html to attach them to your old ASG,
Use http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/detach-instance-asg.html or terminate your old instances
Resume AS activity
The reason you might want to use your old ASG is because it can be a pita to set all the policies again (even when automated) and it feels a bit safer to change as little as possible.
My use case is that we needed to scale down and be able to choose which machines go down. Unfortunately the OldestInstance termination policy was not working for us either. I was able to use a variant of the attach/detach method that ambakshi shared to remove the oldest (or any instance I choose) and at the same time lower the desired instances value of the autoscaling group.
Step 1 – Change the autoscaling group Min value to the number you want to scale down to.
Step 2 – Suspend the ASG
Step 3 – Detach the instances you want to terminate; you can do multiple instances in one command. Make sure to use the should-decrement-desired-capacity flag (see the sketch at the end of this answer)
Step 4 – Resume the ASG
Step 5 – Terminate your instances using the console or the CLI
UPDATE
There is no need to suspend the Auto Scaling Group, just doing steps 1, 3 and 5 worked for me. Just be aware of any availability zone balancing that may happen.
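A minimal boto3 sketch of Steps 3 and 5 (group and instance IDs are hypothetical):

```python
# Detach the chosen instances, lowering desired capacity, then terminate them.
import boto3

autoscaling = boto3.client('autoscaling')
ec2 = boto3.client('ec2')

autoscaling.detach_instances(
    InstanceIds=['i-0123456789abcdef0'],    # hypothetical instance(s) to remove
    AutoScalingGroupName='my-asg',          # hypothetical group name
    ShouldDecrementDesiredCapacity=True     # lower DesiredCapacity so no replacement is launched
)

ec2.terminate_instances(InstanceIds=['i-0123456789abcdef0'])
```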