CloudWatch alarm for Auto Scaling - amazon-web-services

How to set up a CloudWatch alarm for an Auto Scaling group when it scales down to its MinCapacity instances, using a CloudFormation template?
I mean I need an alarm when all the instances are "OutOfService"; basically this happens when an instance fails the ELB health check.

Why don't you add an alarm based on the ELB's CloudWatch metric HealthyHostCount? If you set a low threshold, you will be warned when there are no healthy instances.
You can see the metrics documentation here
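As a sketch of that suggestion, here is what a CloudFormation AWS::CloudWatch::Alarm on HealthyHostCount might look like, shown as a Python dict for brevity. The load balancer name and the AlertTopic reference are illustrative assumptions, not values from the question:

```python
# Sketch of a CloudFormation alarm resource (as a Python dict) that fires
# when no instances behind a Classic ELB pass the health check.
# "my-elb" and "AlertTopic" are assumed placeholders.
healthy_host_alarm = {
    "Type": "AWS::CloudWatch::Alarm",
    "Properties": {
        "AlarmDescription": "All instances are out of service",
        "Namespace": "AWS/ELB",
        "MetricName": "HealthyHostCount",
        "Dimensions": [{"Name": "LoadBalancerName", "Value": "my-elb"}],
        "Statistic": "Minimum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 1,
        "ComparisonOperator": "LessThanThreshold",  # fires when healthy hosts < 1
        "AlarmActions": [{"Ref": "AlertTopic"}],
    },
}
```

In a real template this dict would be a resource under `Resources:`; the key point is alarming on the Minimum statistic dropping below 1.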

Related

Setup CloudWatch Alarms for EC2 instances in Autoscaling Group(CF)

I have an AWS::AutoScaling::AutoScalingGroup configuration that runs two EC2 instances. My question is: is it possible to attach CloudWatch alarms to both instances? For example, I want to observe the StatusCheckFailed_Instance metric for each EC2 instance in the group.
Usually you attach alarms via the EC2 instance ID, but how do you know each EC2 instance ID in an Auto Scaling group in order to attach alerts? Or is there another way to attach alerts? I really can't find anything useful and workable on the internet.
Option 1)
Create your own rule that's triggered on launch/terminate events
Each event triggers a Lambda that reads the instance ID and creates or deletes the corresponding alarm
Option 2)
If you're not trying to use the auto-recover option (which you shouldn't need in an ASG, since the ASG will just replace the instances), then you can make 1 aggregate alarm for the ASG
Create the alarm on the StatusCheckFailed_Instance metric with the AutoScalingGroupName=<your group name> dimension
Set it to trigger if the MAX statistic value is > 0 (that means at least one instance is failing; each instance pushes its own datapoints to the ASG-aggregated versions of the EC2 metrics)
Since you only have 2 instances, you can just manually check both if it ever triggers. For larger ASGs, using the SEARCH() math expression on the CloudWatch metrics console (or a dashboard) is a good way to look through all the ASG instances and view their metrics to see which one is failing
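A minimal sketch of Option 2's aggregate alarm, assuming an ASG named "my-asg" and an illustrative SNS topic ARN (both are placeholders, not values from the question):

```python
import json  # only used to pretty-print the request parameters

# Parameters for the ASG-wide StatusCheckFailed_Instance alarm from Option 2.
# The ASG name and SNS topic ARN below are assumptions; replace with your own.
alarm_params = {
    "AlarmName": "my-asg-status-check-failed",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_Instance",
    "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    "Statistic": "Maximum",           # MAX > 0 means at least one instance failing
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}

# With AWS credentials configured you would submit this with boto3:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
print(json.dumps(alarm_params, indent=2))
```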

How to delete AWS CloudWatch alarm during EC2 termination

I would like to delete the CloudWatch alarms attached to an EC2 instance when it terminates, filtered by specific tags, because we will have a large number of EC2 instances in our account.
Example: I would like to delete the CloudWatch alarms of EC2 instances at termination that have the tag (Name: id, Value: 123).
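One possible approach is an EventBridge rule on EC2 "terminated" state-change events invoking a Lambda. A hedged sketch follows; the alarm naming convention is an assumption (it only works if your alarms embed the instance ID in a predictable way), and the tag check is left as a comment:

```python
# Hypothetical Lambda sketch: an EventBridge rule for
# "EC2 Instance State-change Notification" with state "terminated"
# invokes handler(). Assumes alarms are named "<instance-id>-status-check".

def alarm_name_for(instance_id):
    # Assumed naming convention; adapt to however your alarms are named.
    return f"{instance_id}-status-check"

def handler(event, context):
    # EC2 state-change events carry the instance ID in event["detail"].
    instance_id = event["detail"]["instance-id"]
    import boto3
    cw = boto3.client("cloudwatch")
    # To honor the tag filter, first look up the instance's tags
    # (e.g. via ec2 describe_tags) and return early if they don't match.
    cw.delete_alarms(AlarmNames=[alarm_name_for(instance_id)])
```

delete_alarms silently ignores alarm names that don't exist, so calling it for every terminated instance is safe even when no alarm was created.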

How to set an alarm not to trigger action after autoscaling

I have a problem setting up my Auto Scaling group. I have created an alarm that, when triggered, makes the group add a new EC2 instance. The group has a 200-second default cooldown period, but the alarm keeps recording data during that time and is triggered again. That makes the Auto Scaling group launch another machine, and it ends up in a loop that launches all the available machines.
How can I configure the Auto Scaling group so that it ignores the second triggered alarm? Is there some part of the configuration that I am missing? Thanks in advance.
EDIT:
These are the metrics and scaling policies that trigger my group:
And this is the reason why I think that the autoscaling is still receiving alarms. Because terminations and launchings overlap in time.
I am not sure which type of health check you are using, but there is a setting called the "grace period":
Frequently, an Auto Scaling instance that has just come into service needs to warm up before it can pass the health check. Amazon EC2 Auto Scaling waits until the health check grace period ends before checking the health status of the instance
https://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html
That may be the setting you are missing.
AWS autoscale ELB status checks grace period
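To make the grace-period suggestion concrete, here is a sketch of the relevant Auto Scaling group properties, shown as a Python dict mirroring the CloudFormation resource shape. All values are illustrative, chosen to match the 200-second cooldown mentioned in the question:

```python
# Sketch: CloudFormation AWS::AutoScaling::AutoScalingGroup properties
# showing the two knobs discussed above. Values are assumptions.
asg_resource = {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {
        "MinSize": "1",
        "MaxSize": "4",
        "HealthCheckType": "ELB",
        "HealthCheckGracePeriod": 300,  # seconds to wait before health-checking a new instance
        "Cooldown": "200",              # seconds between scaling activities (from the question)
    },
}
```

If the grace period is shorter than the time an instance needs to warm up and pass the ELB health check, the group can keep cycling instances, which matches the overlapping terminations and launches described above.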

Why does CPUUtilization Alarm have always INSUFFICIENT_DATA state?

I'm trying to create an Auto Scaling group that scales based on the CPUUtilization of the target group.
I managed to create the Auto Scaling group, and when I execute the scaling policies with some test data, it works.
I created 2 alarms in CloudWatch; however, they are stuck in the "INSUFFICIENT_DATA" state.
The alarms should be checking the CPUUtilization of the Auto Scaling group.
So, how can I run the autoscaling based on the CPUUtilization of the target group?
The screenshots are below:
The loadbalancer configuration
The target group configuration
The autoscaling configuration
The scaling policies
The CloudWatch alarm config
Alarm configuration for Scale-UP
Alarm configuration for Scale-Down
https://serverfault.com/questions/479689/cloudwatch-alarms-strange-behavior
I found this answer. My alarm uses a period of 1 minute, but my CloudWatch monitoring wasn't detailed.
When I change the alarm period from 1 minute to 1 hour, the alarm works. However, I need the alarm to evaluate on a 1-minute period. I enabled detailed monitoring, but it still doesn't work.
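The INSUFFICIENT_DATA behaviour follows from datapoint frequency: basic EC2 monitoring emits one datapoint every 5 minutes, so a 1-minute alarm period usually finds no data in its evaluation window, while detailed monitoring emits one per minute. A small arithmetic sketch of that relationship:

```python
# Illustrative sketch: why a 60s alarm period goes INSUFFICIENT_DATA
# under basic monitoring. Intervals are the documented emission rates.
BASIC_MONITORING_INTERVAL = 300    # seconds between datapoints (basic)
DETAILED_MONITORING_INTERVAL = 60  # seconds between datapoints (detailed)

def datapoints_per_period(alarm_period, monitoring_interval):
    # Whole datapoints CloudWatch can expect inside one alarm period.
    return alarm_period // monitoring_interval

# Basic monitoring + 60s period: most evaluation windows contain no datapoint.
assert datapoints_per_period(60, BASIC_MONITORING_INTERVAL) == 0
# Detailed monitoring + 60s period: every window gets a datapoint.
assert datapoints_per_period(60, DETAILED_MONITORING_INTERVAL) == 1
```

Note that after enabling detailed monitoring, only newly emitted datapoints arrive at 1-minute granularity, so the alarm may take a few minutes to leave INSUFFICIENT_DATA.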

AWS Cloudwatch alarm for each single instance of an auto scaling group

We have configured an Auto Scaling group in AWS, and it works fine. We have configured some alarms for the group using the AWS CLI, such as: send an alarm if the average CPUUtilization > 60 for 2 minutes.
The only problem is that if we want to monitor each instance in the group, we have to configure the alarms manually. Is there any way to do it automatically, e.g. with a config file or template?
Amazon CloudWatch alarms can be created on the Auto Scaling group as a whole, such as Average CPUUtilization. This is because alarms are used to tell Auto Scaling when to add/remove instances and such decisions would be based upon the group as a whole. For example, if one machine is 100% busy but another is 0% busy, then on average the group is only 50% busy.
There should be no reason for placing an alarm on the individual instances in an auto-scaling group, at least as far as triggering a scaling action.
There is no in-built capability to specify an alarm that will be applied against each auto-scaled instance individually. You could do it programmatically by responding to an Amazon SNS notification whenever an instance is added/removed by Auto Scaling, but this would require your own code to be written.
You can accomplish this with lifecycle hooks and a little lambda glue. When you have lifecycle events for adding or terminating an instance, you can create an alarm on that individual instance or remove it (depending on the event) via a lambda function.
To John's point, this is a little bit of an anti-pattern with horizontal scaling and load balancing. However, theory and practice sometimes diverge.
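A hedged sketch of the lifecycle-hook approach described above: the event field names follow the standard Auto Scaling lifecycle notification format, but the alarm naming convention, period, and threshold are assumptions to adapt:

```python
# Hypothetical lifecycle-hook Lambda: on a launch lifecycle event it creates
# a per-instance alarm; on a termination event it deletes it.
# Alarm naming ("status-check-<id>") is an assumed convention.

def per_instance_alarm(instance_id):
    # Parameters for a StatusCheckFailed_Instance alarm on one instance.
    return {
        "AlarmName": f"status-check-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_Instance",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
    }

def handler(event, context):
    import boto3
    cw = boto3.client("cloudwatch")
    detail = event["detail"]  # Auto Scaling lifecycle notification payload
    instance_id = detail["EC2InstanceId"]
    if detail["LifecycleTransition"] == "autoscaling:EC2_INSTANCE_LAUNCHING":
        cw.put_metric_alarm(**per_instance_alarm(instance_id))
    else:  # autoscaling:EC2_INSTANCE_TERMINATING
        cw.delete_alarms(AlarmNames=[f"status-check-{instance_id}"])
    # In a real hook you must also call complete_lifecycle_action on the
    # autoscaling client so the instance proceeds through the transition.
```

The trailing comment matters in practice: a lifecycle hook pauses the instance transition until completed or timed out, so the Lambda should finish by completing the action.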