I am using an Auto Scaling group to add and remove instances for my application. I am using CPU Utilization as my scaling metric, and I am wondering what happens when an instance is running a program and the CPU Utilization drops below 65% (i.e., the threshold value).
Does it wait for the instance to finish the program, or does it terminate the instance at that moment? If it terminates the instance immediately, that might lead to data loss or data inconsistency.
Any help would be appreciated.
If you're looking to prevent or delay the termination of an instance during a scale-in event, take a look at lifecycle hooks.
With a hook in place, Auto Scaling sends a notification that a specific instance action (scale out or scale in) is about to occur. Using a combination of services (such as SNS, Lambda, SSM, etc.), you can programmatically notify the instance that it is about to be terminated, at which point it can take any necessary actions.
The termination is paused until the Auto Scaling group receives confirmation that the lifecycle action is complete, at which point the instance is terminated. A lifecycle hook also has a timeout; if no confirmation is received before the timeout expires, the termination still occurs.
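For illustration, here is a minimal boto3 sketch of that confirmation step; the hook and group names are placeholders I've assumed, not values from the question:

import boto3

autoscaling = boto3.client("autoscaling")

def finish_termination(instance_id):
    # Tell the Auto Scaling group that cleanup is done so the paused
    # termination can proceed instead of waiting for the hook timeout.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName="my-termination-hook",  # assumed hook name
        AutoScalingGroupName="my-asg",            # assumed group name
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )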
I think you are looking for the termination policy.
Look at this link:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#default-termination-policy
And in my experience, the instance will be terminated no matter what it is running.
Does it wait for the instance to finish the program or terminate the instance at that moment?
Sadly, it does not wait. The ASG works outside of your instances and is not concerned with any programs running on them.
Having said that, there are a few things you can do, some of which are described in:
Controlling which Auto Scaling instances terminate during scale in
Generally speaking, you should develop your applications to be stateless. This means the applications should be "aware" that they can be terminated at any time. One way to achieve this is by using external storage systems, such as S3 or EFS, which persist data across terminations.
Another way is to use instance scale-in protection. In this case, the application puts its own instance into the protected state at the beginning of processing, and when the calculation finishes, it removes the protection.
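As a rough sketch of that second approach (the group name is an assumption, and the instance ID is read from instance metadata, IMDSv1 for brevity):

import boto3
import urllib.request

ASG_NAME = "my-asg"  # assumed Auto Scaling group name

def my_instance_id():
    url = "http://169.254.169.254/latest/meta-data/instance-id"
    return urllib.request.urlopen(url, timeout=2).read().decode()

def set_protection(protected):
    boto3.client("autoscaling").set_instance_protection(
        InstanceIds=[my_instance_id()],
        AutoScalingGroupName=ASG_NAME,
        ProtectedFromScaleIn=protected,
    )

def run_long_calculation():
    pass  # placeholder for the actual batch work

set_protection(True)   # protect before starting the work
try:
    run_long_calculation()
finally:
    set_protection(False)  # unprotect so the ASG may scale this instance in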
Related
I want to run Druid on EKS but am concerned about using EC2 Auto Scaling groups to scale my MiddleManagers. If every MiddleManager is running an ingestion task and AWS decides to scale down, will a MiddleManager be terminated, or will there be termination protection in place? If one will be terminated, what alternatives for scaling do people suggest?
A signal will be sent to your containers to give them an opportunity to shut down gracefully. This is part of container lifecycle management.
By default, the orchestrator will wait 30 seconds before forcefully stopping the container. You can adjust this by setting terminationGracePeriodSeconds. You can also add postStart or preStop hooks to perform any extra operations needed to keep your system consistent.
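To make that concrete, here is a minimal sketch of a worker that reacts to the SIGTERM Kubernetes sends at the start of the grace period (the work loop is a stand-in for your real job):

import signal
import time

shutting_down = False

def on_sigterm(signum, frame):
    global shutting_down
    shutting_down = True  # stop taking new work; finish the current item

signal.signal(signal.SIGTERM, on_sigterm)

while not shutting_down:
    # Process one unit of work per iteration so a shutdown request is
    # noticed between items, well inside terminationGracePeriodSeconds.
    time.sleep(1)

print("drained cleanly before the grace period expired")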
See also: EC2 Autoscaling lifecycle hooks
Is there a way to ensure an AWS ECS container instance doesn't shut down in the middle of running a critical task?
I have an auto-scaling AWS ECS service that scales the number of instances based on CPU usage. These instances process long-running batch jobs that may take anywhere from 5 to 30 minutes.
The problem is that sometimes, during a scale-down, an instance that's actively running a critical job gets shut down, which causes the job to fail.
You can use a feature called managed termination protection.
When the scaling policy reduces the number of instances, it has no control over which instances are actually terminated. The default behavior of the Auto Scaling group may well terminate instances that are running tasks even while other instances sit idle. This is where managed termination protection comes into the picture: with this option enabled, ECS dynamically manages instance termination protection on your behalf.
Please have a look at Controlling which Auto Scaling instances terminate during scale in and specifically the section Instance scale-in protection in the AWS documentation.
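For reference, a hedged boto3 sketch of creating a capacity provider with this option turned on (the name and ASG ARN are placeholders, and the underlying ASG must itself have new-instance scale-in protection enabled):

import boto3

boto3.client("ecs").create_capacity_provider(
    name="batch-capacity-provider",  # assumed name
    autoScalingGroupProvider={
        "autoScalingGroupArn": "arn:aws:autoscaling:region:account:autoScalingGroup:...",  # placeholder
        "managedScaling": {
            "status": "ENABLED",   # required when managed termination protection is on
            "targetCapacity": 100,
        },
        # ECS will now protect instances that are running non-daemon tasks
        "managedTerminationProtection": "ENABLED",
    },
)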
Is there a way to elegantly script/configure a Spot Instance request so that, if Spot capacity is not available within some specified duration, it just uses On-Demand? And if the Spot Instance gets terminated, it just shifts to On-Demand?
Spot Fleet does not do this (it manages only Spot), and EMR fleets have some logic around this. You can have Auto Scaling with Spot or with On-Demand, not both (though you could simulate this behavior with two separate ASGs).
This seems like it should be a baseline use case.
Also, does an event get triggered when a Spot Instance is launched or terminated? I am only seeing CLIs for checking Spot status, not any CloudWatch metric/event.
CloudWatch Events can fire when any instance changes state.
They can fire for any state in the lifecycle of an instance:
pending (launching), running (launch complete), shutting-down, stopping, stopped, and terminated, for a specific instance (or for all instances, which is probably what you want -- just disregard any instance that isn't of interest), and this includes both On-Demand and Spot.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#ec2_event_type
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/LogEC2InstanceState.html
You could use this to roll your own solution -- there's no built-in mechanism for marshaling mixed fleets.
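As a starting point for rolling your own, here is a boto3 sketch of a rule that matches these state-change events and forwards them to a Lambda function (the rule name and function ARN are assumptions):

import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="ec2-state-change",  # assumed rule name
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["running", "terminated"]},  # narrow to the states you care about
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="ec2-state-change",
    Targets=[{
        "Id": "1",
        # placeholder ARN of a Lambda that checks whether the instance is
        # one of your Spot instances and reacts accordingly
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:on-state-change",
    }],
)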
I used to do this from the ELB with health checks. You can make two groups, one with Spot instances and one with Reserved or On-Demand. Create a CloudWatch alarm that fires when the Spot group contains zero healthy hosts, and scale up the other group when it does. And the other way around: when the Spot group has enough healthy hosts, scale the other group down. Use 30-second health checks on the alarm you use to scale up, and a 30-60 minute cooldown on scale-down.
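A sketch of the "zero healthy hosts in the Spot group" alarm might look like this (the ELB name and scaling-policy ARN are placeholders; the alarm action is the scale-up policy on the On-Demand group):

import boto3

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="spot-group-empty",
    Namespace="AWS/ELB",  # classic ELB metrics
    MetricName="HealthyHostCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "spot-elb"}],  # assumed ELB name
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",  # fires when no healthy Spot hosts remain
    AlarmActions=["arn:aws:autoscaling:region:account:scalingPolicy:..."],  # placeholder policy ARN
)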
There is also SpotML, which lets you keep either a Spot Instance or an On-Demand instance up and running at all times.
In addition to simply spawning the instance, it also lets you:
Preserve data via persistent storage
Configure a startup script to run each time a new instance is spawned
Disclosure: I'm the creator of SpotML. It's primarily useful for ML/data science workflows that can largely run on Spot instances.
I'm writing a web service that packs up customer data into zip files, then uploads them to S3 for download. It is an on-demand process, and the amount of data can range from a few megabytes to multiple gigabytes, depending on what data the customer orders.
Needless to say, scalability is essential for such a service, but I'm having trouble with it. Packaging the data into zip files has to be done on the local hard drive of a server instance.
But the load balancer is prone to terminating instances that are still working. I have taken a look at scaling policies:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html
But what I need doesn't seem to be there. The issue shouldn't be so difficult: I set the scaling metric to CPU load and scale down when it goes under 1%. But I need a guarantee that the exact instance that breached the threshold will be terminated, not another one that's still hard at work, and the available policies don't seem to give me that option. Right now, I am at a loss how to achieve this. Can anybody give me some advice?
You can use Auto Scaling Lifecycle Hooks to perform actions before an instance is terminated. You could use this to wait for the processing to finish before proceeding with the instance termination.
It appears that you have configured an Auto Scaling group with scaling policies based upon CPU Utilization.
Please note that an Elastic Load Balancer will never terminate an Amazon EC2 instance -- if a Load Balancer health check fails, it will merely stop serving traffic to that EC2 instance until it again passes the health checks. It is possible to configure Auto Scaling to use ELB health checks, in which case Auto Scaling will terminate any instances that ELB marks as unhealthy.
Therefore, it would appear that Auto Scaling is responsible for terminating your instances, as a result of your scaling policies. You say that you wish to terminate specific instances that are unused. However, this is not the general intention of Auto Scaling. Rather, Auto Scaling is used to provide a pool of resources that can be scaled by launching new instances and terminating unwanted instances. Metrics that trigger Auto Scaling are typically based upon aggregate metrics across the whole Auto Scaling group (eg average CPU Utilization).
Given that Amazon EC2 instances are charged by the hour, it is often a good idea to keep instances running longer -- "Scale Out quickly, Scale In slowly".
Once Auto Scaling decides to terminate an instance (which it selects via a termination policy), use an Auto Scaling lifecycle hook to delay the termination until ready (eg, copying log files to S3, or waiting for a long process to complete).
If you do wish to terminate an instance after it has completed a particular workload, there is no need to use Auto Scaling -- just have the instance shut down when it is finished, with its Shutdown Behavior set to Terminate so that it is automatically terminated upon shutdown. (This assumes that you have a process to launch new instances when you have work you wish to perform.)
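A small sketch of that self-terminating pattern (the instance ID is a placeholder; the same behavior can also be set at launch time):

import boto3

boto3.client("ec2").modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder
    InstanceInitiatedShutdownBehavior={"Value": "terminate"},
)

# After the workload finishes, the instance simply runs `shutdown -h now`
# (from a script or cron job) and EC2 terminates it instead of stopping it.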
Stepping back and looking at your total architecture, it would appear that you have a Load Balancer in front of web servers, and you are performing the Zip operations on the web servers? This is not a scalable solution. It would be better if your web servers pushed a message into an Amazon Simple Queue Service (SQS) queue, and then your fleet of back-end servers processed messages from the queue. This way, your front-end can continue receiving requests regardless of the amount of processing underway.
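A hedged sketch of that decoupling, with a placeholder queue URL and a stub for the actual zip work:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/zip-jobs"  # placeholder

def build_zip_and_upload_to_s3(body):
    pass  # zip the customer data locally, then upload the result to S3

# Web tier: enqueue the order instead of zipping in-process.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"order_id": "12345"}')

# Back-end worker fleet: long-poll, process, delete on success. A message
# that fails mid-processing reappears after its visibility timeout.
while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        build_zip_and_upload_to_s3(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])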
It sounds like what you need is Instance Protection, which is mentioned a bit further towards the bottom of the document you linked to. As long as an instance is protected while work is being performed on it, it will not be automatically terminated by the Auto Scaling group (ASG).
Check out this blog post, on the official AWS blog, that conceptually talks about how you can use Instance Protection to prevent work from being prematurely terminated.
I have an application that is constantly gathering data from active connections and then writing compiled/batched data at the end of every minute.
I have Amazon Auto Scaling working with these servers. The problem is that when the group is scaled down, I need the servers to write their last minute's worth of data before termination occurs, after being removed from the ELB.
Is there any way to remove the instance from the load balancer and then have a wait period of X minutes before terminating the instance? (Ideally I would wait 2-5 minutes before termination.)
Any guidance would help
Thanks
One option is to handle termination yourself. Instead of configuring Auto Scaling to scale down your instance group, put the logic that decides whether an instance needs to terminate in the instance itself. Once you decide that an instance should self-terminate, do whatever work you need to do before terminating, and then call the as-terminate-instance-in-auto-scaling-group command with the --decrement-desired-capacity option to terminate the instance. E.g.:
as-terminate-instance-in-auto-scaling-group --decrement-desired-capacity i-d15ea5e
See this AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=407743&tstart=0#407743.
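The legacy as-* command line tools have since been superseded; for newer setups, the equivalent call via boto3 would be:

import boto3

boto3.client("autoscaling").terminate_instance_in_auto_scaling_group(
    InstanceId="i-d15ea5e",               # instance ID from the example above
    ShouldDecrementDesiredCapacity=True,  # mirrors --decrement-desired-capacity
)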