How do AWS CloudWatch alarms work when triggered together?

In my AWS auto-scaling setup, I have configured 4 alarms:
Add an instance when CPU utilization > 20
Add an instance when TargetResponseTime > 0.9
Remove an instance when CPU utilization < 20
Remove an instance when TargetResponseTime < 0.9
What will happen if two or more alarms trigger together?
For example:
If alarms 1 and 2 trigger together, will it add two instances?
If alarms 1 and 4 trigger together, will it remove an instance and add one, or will it stay neutral?
The alarms are working fine, but I want to understand the mechanism behind alarm action execution.
Any ideas?

Your Auto Scaling group has a cooldown period, so multiple scaling actions cannot occur at the same time; the next action will occur only after the cooldown period has passed.
This behavior exists to prevent exactly what you're describing: multiple instances scaling at once.
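The cooldown is a property of the group itself; a short boto3 sketch, with the group name as a placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The default cooldown serializes scaling actions: after one scaling
# activity, further simple-scaling actions wait this many seconds.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",  # placeholder group name
    DefaultCooldown=300,
)
```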
Personally, for what you're doing, I think you should make use of a composite CloudWatch alarm. With an OR condition, these 4 alarms could become 2, which would reduce the number of alarms that have to trigger an autoscaling action.
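A minimal boto3 sketch of that OR composition, assuming the two existing scale-out alarms are named cpu-high and response-time-high (both names and the SNS topic ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Composite alarm that goes into ALARM when EITHER child alarm does.
# Child alarm names and the SNS topic ARN are placeholders.
cloudwatch.put_composite_alarm(
    AlarmName="scale-out-needed",
    AlarmRule='ALARM("cpu-high") OR ALARM("response-time-high")',
    ActionsEnabled=True,
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:scaling-notifications"],
)
```

The same pattern applied to the two scale-in alarms gives the second composite alarm.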

Related

Make EC2 Autoscaling scale-out more quickly for customMetric

I'm setting up an AWS EC2 Auto Scaling group (ASG) that uses TargetTrackingScaling to track a custom metric. This metric is published by each instance in the ASG every 30 seconds.
It's working fine, but I'd like the scale-out action to happen more quickly. The cause appears to be the alarm that the ASG auto-generates: it looks like it waits for at least 3 datapoints over 3 minutes before going into alarm.
Is there a way I can configure the ASG/scaling policy so that the alarm needs only 1 datapoint (or less time) before firing? Or, if that's not possible, can I create a custom alarm and use it instead of the alarm that the ASG auto-generated?
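An illustrative sketch of the second option mentioned above: a hand-made alarm that fires on a single datapoint, attached to a step-scaling policy instead of the auto-generated target-tracking alarm. All resource names, namespaces, and thresholds below are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step-scaling policy: add one instance whenever the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # placeholder
    PolicyName="scale-out-fast",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}],
)

# Custom alarm that needs only one datapoint above the threshold.
cloudwatch.put_metric_alarm(
    AlarmName="custom-metric-high",
    Namespace="MyApp",               # placeholder namespace
    MetricName="MyCustomMetric",     # placeholder metric name
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,             # fire on a single datapoint
    Threshold=100.0,                 # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

Note that this trades the self-adjusting behavior of target tracking for faster but cruder reactions, so the threshold needs more care.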

Auto scale rule based on custom CloudWatch alarm

I have an auto-scaling group of EC2 servers that run a number of processes.
This number of processes changes with the load, and I'd like to trigger scaling (up/down) based on the number of processes.
I've successfully set up a script that sends the number of processes on every server to CloudWatch every minute, and I can see these values in CloudWatch. (I haven't set a dimension, so that I can get the value across all the servers.)
Then I created an alarm that uses the average of the values sent: if it reaches a certain limit, it triggers an "Add a new server" action on the auto scaling group, and when it stops being in alarm, it triggers a "Remove a server" action.
My issue is that when I add the new server, the average drops, since there is one more server now. That moves the alarm to the OK state, removing the server, which increases the average again, triggering the alarm again, and so on.
For instance, the limit is set to 10 processes on average. With 3 servers, if the average becomes 11, the alarm fires, adding a server. Now with the new server I'm at 33 processes (3 × 11) for 4 servers, or 8.25 processes on average, which triggers the "OK" state.
My question is: is it possible to set up an alarm based on the number of processes without the new trigger causing this up-down-up-down issue?
Instead of average, I can use something else to trigger the alarm, such as min/max/I-don't-know.
Thank you for your help. Happy to provide any other details if needed.
You should not create an alarm that adds instances when true and removes instances when false. This causes a continual 'flip-flop' rather than settling at a steady state.
You could have each server regularly send a custom metric to Amazon CloudWatch. You could then use this with a target tracking scaling policy for Amazon EC2 Auto Scaling, which calculates the average value of the metric and automatically launches/terminates instances to keep it around the target value of 10.
This works well with long-running processes (perhaps 5+ minutes with several processes running concurrently), but would not be good with short, sub-minute processes, because it takes time to launch new instances.
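A sketch of such a target-tracking policy, assuming each server publishes a ProcessCount metric in a MyApp namespace (both placeholders, as is the group name):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: Auto Scaling launches/terminates instances to keep
# the average of the custom metric near the target of 10.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # placeholder group name
    PolicyName="keep-10-processes-per-instance",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "MyApp",          # placeholder namespace
            "MetricName": "ProcessCount",  # placeholder metric name
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)
```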
You could also look at metric math. Instead of triggering your alarm directly on the process-count metric alone, you could calculate the per-instance average yourself using metric math. You could use the GroupTotalInstances metric from your ASG, or publish a second custom metric containing the number of instances.
In either case, the alarm's metric would use metric math to divide the number of processes by the size of the ASG for each evaluation period.
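A sketch of that metric-math alarm, dividing the total process count by the group size so that launching an instance doesn't immediately flip the alarm. GroupTotalInstances requires group metrics collection to be enabled on the ASG; all names and the action ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="processes-per-instance-high",
    EvaluationPeriods=3,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    Metrics=[
        {   # Total processes reported by all servers (placeholder names).
            "Id": "procs",
            "MetricStat": {
                "Metric": {"Namespace": "MyApp", "MetricName": "ProcessCount"},
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {   # Current size of the Auto Scaling group.
            "Id": "size",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/AutoScaling",
                    "MetricName": "GroupTotalInstances",
                    "Dimensions": [
                        {"Name": "AutoScalingGroupName", "Value": "my-asg"}
                    ],
                },
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": False,
        },
        {   # The expression the alarm actually evaluates.
            "Id": "avg",
            "Expression": "procs / size",
            "Label": "ProcessesPerInstance",
            "ReturnData": True,
        },
    ],
    AlarmActions=["arn:aws:autoscaling:region:account:scalingPolicy:..."],  # placeholder ARN
)
```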

Is it possible to have an AWS EC2 scale group that defaults to 0 and only contains instances when there is work to do?

I am trying to set up an EC2 scaling group that scales depending on how many items are in an SQS queue.
When the SQS queue has visible items, I need the scaling group to have 1 instance available, and when the SQS queue is empty (i.e. there are no visible or non-visible messages) I want there to be 0 instances.
Desired instances is set to 0, min is set to 0, and max is set to 1.
I have set up CloudWatch alarms on my SQS queue to trigger when visible messages are greater than zero, and also to trigger when non-visible messages are less than one (i.e. no more work to do).
Currently, the CloudWatch alarm triggers and creates an instance, but then the scaling group automatically kills the instance to meet the desired-capacity setting. I expected the alarm to adjust the desired instance count within the min and max settings, but this seems not to be the case.
Yes, you can certainly have an Auto Scaling group with:
Minimum = 0
Maximum = 1
Alarm: When ApproximateNumberOfMessagesVisible > 0 for 1 minute, Add 1 Instance
This will cause Auto Scaling to launch an instance when there are messages waiting in the queue. It will keep trying to launch more instances, but the Maximum setting will limit it to 1 instance.
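A sketch of that scale-out alarm in boto3 (the queue name and scaling-policy ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Fires as soon as any messages are visible in the queue.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-has-messages",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "my-queue"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:region:account:scalingPolicy:..."],  # placeholder ARN
)
```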
Scaling-in when there are no messages is a little bit trickier.
Firstly, it can be difficult to know when to scale-in. If there are messages waiting to be processed, ApproximateNumberOfMessagesVisible will be greater than zero. However, if there are no messages waiting, that doesn't necessarily mean you wish to scale-in, because messages might be currently processing ("in flight"), as indicated by ApproximateNumberOfMessagesNotVisible. So you only want to scale-in if both of these are zero. Unfortunately, a CloudWatch alarm can only reference one metric, not two.
Secondly, when an Amazon SQS queue is empty, it does not send metrics to Amazon CloudWatch. This sort of makes sense, because queues are mostly empty, so it would otherwise continually send a zero metric. However, it causes a problem: CloudWatch does not receive a metric when the queue is empty, so the alarm enters the INSUFFICIENT_DATA state instead.
Therefore, you could create your alarm as:
When ApproximateNumberOfMessagesVisible = 0 for 15 minutes, remove 1 instance, but set the action to trigger on INSUFFICIENT_DATA rather than ALARM.
Note the suggested 15-minute delay, which avoids thrashing instances: the situation where instances are added and removed in rapid succession because messages arrive regularly but infrequently. It is better to wait a while before deciding to scale-in.
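A sketch of that scale-in alarm, attaching the scaling action to the INSUFFICIENT_DATA state rather than ALARM (names and the ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Because an empty queue publishes no datapoints, the remove-instance
# action hangs off InsufficientDataActions instead of AlarmActions.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-empty",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "my-queue"}],  # placeholder
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=3,  # 3 x 5 minutes = the suggested 15-minute delay
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    InsufficientDataActions=[
        "arn:aws:autoscaling:region:account:scalingPolicy:..."  # placeholder ARN
    ],
)
```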
This leaves the problem of instances being terminated while they are still processing messages. This can be avoided with Auto Scaling lifecycle hooks, which send a signal when an instance is about to be terminated, giving the application an opportunity to delay termination until its work is complete. Your application should then signal that it is ready for termination only once message processing has finished.
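A sketch of the hook and the completion call (the group name, hook name, and instance ID are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Termination hook: Auto Scaling pauses the instance in Terminating:Wait
# for up to an hour, or until the application signals completion.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-before-terminate",
    AutoScalingGroupName="my-asg",  # placeholder
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=3600,
    DefaultResult="CONTINUE",
)

# Once message processing is finished, the worker signals that the
# termination may proceed.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="drain-before-terminate",
    AutoScalingGroupName="my-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```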
Bottom line
Much of the above depends upon:
How often your application receives messages
How long it takes to process a message
The cost savings involved
If your messages are infrequent and simple to process, it might be worthwhile to continuously run a t2.micro instance. At 2c/hour, the benefit of scaling-in is minor. Also, there is always the risk when adding and removing instances that you might actually pay more, because instances are charged by the hour -- running an instance for 30 minutes, terminating it, then launching another instance for 30 minutes will actually be charged as two hours.
Finally, you could consider using AWS Lambda instead of an Amazon EC2 instance. Lambda is ideal for short-lived code execution without requiring a server. It could totally remove the need to use Amazon EC2 instances, and you only pay while the Lambda function is actually running.
For a simple configuration, with per-second AWS billing on Amazon Linux AMI/Ubuntu instances, don't worry about wasted startup/shutdown time: just terminate the EC2 instance yourself, without any ASG scale-in policy. Add a little bash to the client startup code (or preinstall it in cron) that polls for process presence or CPU load, then terminates or shuts down the instance once processing is done (termination is better if you attach volumes and need them to auto-destruct). There's one annoying thing about an ASG defined as 0/0/1 (min/desired/max) with defaults and ApproximateNumberOfMessagesNotVisible on SQS: after an EC2 instance is fired up, the group somehow switches to 1/0/1 and starts looping, firing instances even when there's nothing in SQS. (I'm doing video transcoding, queueing jobs to SNS/SQS and firing ffmpeg nodes with an ASG triggered on a non-empty SQS queue.)
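The self-termination idea from this comment could look roughly like the following, here in Python rather than bash, run from cron on the instance (the ffmpeg process name and the IMDSv1 metadata lookup are illustrative assumptions):

```python
import subprocess
import urllib.request

import boto3

def ffmpeg_running() -> bool:
    # pgrep exits non-zero when no matching process exists.
    return subprocess.run(["pgrep", "-x", "ffmpeg"]).returncode == 0

if not ffmpeg_running():
    # Instance ID from the EC2 instance metadata service (IMDSv1 shown
    # for brevity; IMDSv2 requires a session token).
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id"
    ).read().decode()
    # Terminate rather than stop, so attached volumes marked
    # delete-on-termination are cleaned up too.
    boto3.client("ec2").terminate_instances(InstanceIds=[instance_id])
```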

OpsWorks Stack - Load-based instances scaling UP and DOWN based on cloudwatch alarms

We have an OpsWorks stack with two 24x7 instances, four time-based instances, and two load-based instances.
Our issue is with the load-based instances. We've spent a great deal of time creating CloudWatch alarms that are meaningful to our service, and we want the load-based instances in our stack to come UP when a particular CloudWatch latency alarm is in the ALARM state. I see that in the load-based instance configuration, you can define one CloudWatch alarm for bringing the instance(s) UP and another for bringing the instance(s) DOWN.
The thing is, when I select the specific CloudWatch alarm I want to use to trigger the UP, it can no longer be selected as the trigger for DOWN. Why?
Specifically, we want our latency alarm (we'll call it the "oh crap things are slowing down" alarm) to START the load-based instances when it is in the ALARM state, and to SHUT them DOWN when it returns to the OK state. It would be rad if the load-based instances waited 15 minutes after the alarm's OK state before shutting down.
The "oh crap things are slowing down" threshold is Latency > 2 for 3 minutes
Do I just need to create a new "oh nice things are ok" alarm with a threshold of Latency < 2 for 3 minutes to use as the DOWN alarm in the load-based instance configuration?
Sorry for the newbie question, just feel stuck.
From what I can tell, you have to add a second alarm that triggers only when the latency is below 2 for three minutes. If someone comes up with a cleaner solution than this, I'd love to hear about it. As it is, you'll always have one of the alarms in a continuous state of alarm.
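A sketch of the mirror-image pair (the load balancer name is a placeholder, and the classic-ELB Latency metric is assumed from the question):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Shared settings for both alarms.
common = dict(
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,  # "for 3 minutes"
)

# "Oh crap things are slowing down": the UP alarm.
cloudwatch.put_metric_alarm(
    AlarmName="latency-high",
    Threshold=2.0,
    ComparisonOperator="GreaterThanThreshold",
    **common,
)

# "Oh nice things are ok": the mirror-image DOWN alarm.
cloudwatch.put_metric_alarm(
    AlarmName="latency-ok",
    Threshold=2.0,
    ComparisonOperator="LessThanThreshold",
    **common,
)
```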

What order are AWS AutoScaling policies applied in?

I am planning on using AWS Auto Scaling to scale my EC2 services. I have 4 policies that control my instance behavior: 2 for scale-out and 2 for scale-in. My question is: what order will they be evaluated in? Scale-out first, then scale-in? Vice versa? Random? Or something else?
Thank you.
Policies are not evaluated in any particular order. Each policy is compared against the metrics it is set up to measure and takes action based on the results.
For example, perhaps you have the following four policies:
Add 1 instance when an SQS queue depth is > 1000 messages
Remove 1 instance when the same SQS queue depth is < 200 messages
Add 1 instance when the average CPU of all instances in the autoscaling group is > 80%
Remove 1 instance when the average CPU of all instances in the autoscaling group is < 30%
As you can see, ordering doesn't make sense in this context. The appropriate action(s) will be executed whenever the conditions are met.
Note that without planning and testing, you can encounter loops in which instances constantly cycle up and down. Drawing from the previous example, imagine that a new instance is launched because there are > 1000 messages in the queue, but CPU usage is only 20% across all instances, so the 4th policy fires and removes an instance. All the policies should therefore be considered in concert.