I'm trying to set up an AWS Auto Scaling Group (ASG) that auto-scales based on average group CPU load.
I have a scale-up policy that is supposed to scale the group up by 1 instance once the average CPU usage is higher than 70%. However, when the alarm is triggered, the ASG launches several instances at the same time, which it shouldn't do.
The relevant bits of CloudFormation configuration:
ECSScaleUpPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AdjustmentType: "ChangeInCapacity"
    AutoScalingGroupName: !Ref ECSAutoScalingGroup
    PolicyType: "StepScaling"
    MetricAggregationType: "Average"
    EstimatedInstanceWarmup: 600
    StepAdjustments:
      - MetricIntervalLowerBound: "0"
        ScalingAdjustment: "1"
ECSScaleUpAlarm:
  Type: "AWS::CloudWatch::Alarm"
  Properties:
    AlarmDescription: "CPU more than 70% during the last minute."
    AlarmName: "ECSScaleUpAlarm"
    AlarmActions:
      - !Ref ECSScaleUpPolicy
    Dimensions:
      - Name: "ClusterName"
        Value: !Ref ECSCluster
    MetricName: "CPUReservation"
    Namespace: "AWS/ECS"
    ComparisonOperator: "GreaterThanOrEqualToThreshold"
    Statistic: "Average"
    Threshold: 70
    Period: 60
    EvaluationPeriods: 1
    TreatMissingData: "notBreaching"
As you can see, the scaling adjustment is just 1 and the instance warmup is quite long; it should wait longer before launching a second instance :(
According to the documentation, a policy type of StepScaling causes the group capacity to increase or decrease based on the size of the alarm breach. You need to change that to SimpleScaling so that the capacity is changed by a single fixed adjustment per alarm.
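A minimal sketch of the same policy rewritten as simple scaling, keeping the resource names from the question (the 600-second Cooldown value is an assumption, chosen to match the original warmup):

```yaml
ECSScaleUpPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AdjustmentType: ChangeInCapacity
    AutoScalingGroupName: !Ref ECSAutoScalingGroup
    PolicyType: SimpleScaling
    # With SimpleScaling, the adjustment sits directly on the policy
    # instead of inside StepAdjustments
    ScalingAdjustment: 1
    # Cooldown blocks further scaling activity from this policy after
    # each adjustment, so only one instance launches per alarm breach
    Cooldown: 600
```

Note that SimpleScaling does not accept StepAdjustments or EstimatedInstanceWarmup; the Cooldown takes over that role.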
I'm trying to learn autoscaling for ECS with EC2 launch type.
Without the autoscaling part, everything works well.
When I add the autoscaling part (the Scalable Target, plus the Alarm and Policy for both scaling in and out), the service gets stuck with this event:
service ecs-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXX has insufficient CPU units available.
If I look at the service, the desired capacity is stuck at 4, pending is 0, and running is 1.
As for the alarms, the high CPU usage alarm is OK and the low CPU usage alarm is In alarm.
The Task Definition has 1024 CPU units and 1024 MB of memory assigned.
The Container has the same: 1024 CPU units and 1024 MB of memory.
And I have been waiting for more than 40 minutes.
What do I expect?
I'm setting a low threshold for high CPU usage (20%) so that the alarm triggers easily.
Then I expect the desired count to increase up to 4 as the CPU percentage in use grows.
This should work both ways: scaling up to 4 while the high alarm is firing, and back down to 1 while the low alarm is firing.
Here's the entire chain of events, with task IDs, dates, and event IDs removed for readability.
service ecs-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXX has insufficient CPU units available. For more information, see the Troubleshooting section.
service ecs-service registered 1 targets in target-group ecs-target
service ecs-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXX has insufficient CPU units available. For more information, see the Troubleshooting section.
Message: Successfully set desired count to 4. Waiting for change to be fulfilled by ecs. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
service ecs-service has started 1 tasks: task
service ecs-service has stopped 1 running tasks: task
service ecs-service deregistered 1 targets in target-group ecs-target
service ecs-service (instance XXX) (port 8080) is unhealthy in target-group ecs-target due to (reason Health checks failed)
service ecs-service has started 1 tasks: task
service ecs-service was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster. For more information, see the Troubleshooting section.
Message: Successfully set desired count to 4. Found it was later changed to 0. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
Message: Successfully set desired count to 4. Found it was later changed to 0. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
Message: Successfully set desired count to 3. Change successfully fulfilled by ecs. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
Message: Successfully set desired count to 2. Change successfully fulfilled by ecs. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
This is my Scalable Target, Alarms and Policies:
The service uses a Load Balancer.
ServiceScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  DependsOn: Service
  Properties:
    MaxCapacity: !Ref MaxSize
    MinCapacity: !Ref MinSize
    ResourceId:
      Fn::Join:
        - '/'
        - - 'service'
          - Ref: Cluster
          - Fn::GetAtt:
              - Service
              - 'Name'
    RoleARN:
      Fn::ImportValue: !Ref ECSAutoScalingRole
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
HighCpuUsageAlarm:
  Type: AWS::CloudWatch::Alarm
  DependsOn: ScalingPolicyHigh
  Properties:
    AlarmName: high-cpu
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Dimensions:
      - Name: ServiceName
        Value: !Ref ServiceName
      - Name: ClusterName
        Value: !Ref Cluster
    Statistic: Average
    Period: 300
    EvaluationPeriods: 1
    Threshold: 20
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref ScalingPolicyHigh
ScalingPolicyHigh:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: policy-high
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: ServiceScalableTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 600
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          MetricIntervalUpperBound: 15
          ScalingAdjustment: 1
        - MetricIntervalLowerBound: 15
          MetricIntervalUpperBound: 25
          ScalingAdjustment: 2
        - MetricIntervalLowerBound: 25
          ScalingAdjustment: 3
LowCpuUsageAlarm:
  Type: AWS::CloudWatch::Alarm
  DependsOn: ScalingPolicyLow
  Properties:
    AlarmName: low-cpu
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Dimensions:
      - Name: ServiceName
        Value: !Ref ServiceName
      - Name: ClusterName
        Value: !Ref Cluster
    Statistic: Average
    Period: 300
    EvaluationPeriods: 2
    Threshold: 15
    ComparisonOperator: LessThanOrEqualToThreshold
    AlarmActions:
      - !Ref ScalingPolicyLow
ScalingPolicyLow:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: policy-low
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: ServiceScalableTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 600
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: -15
          MetricIntervalUpperBound: 0
          ScalingAdjustment: -1
        - MetricIntervalLowerBound: -25
          MetricIntervalUpperBound: -15
          ScalingAdjustment: -2
        - MetricIntervalUpperBound: -25
          ScalingAdjustment: -3
I'd appreciate help. I cannot make it work properly.
I have a workload running as an ECS service attached to a target group. Then I have an alarm monitoring that target group's instance count (HealthyHostCount). I'd like to implement blue/green deployments using 2 target groups, but it seems like because the alarm monitors a specific target group's value, it needs to be updated every deployment separately from the actual deployment.
This seems fragile, and it feels like there should be a better way (e.g., a post-deployment script that updates the alarm's target group could itself fail), but I can't see it. Is there an obviously easier solution?
Instead of monitoring that you have the desired number of healthy targets, monitor that you have no unhealthy ones.
Your ECS service takes care of managing the desired count, and you may also want to scale the service, so UnHealthyHostCount is the better metric to alarm on, in my opinion.
Create one alarm for each target group as below.
These won't trigger between normal ECS blue/green deployments, only if there is a registered target failing health-checks. You need to tune the health-check settings on the target group and HealthCheckGracePeriodSeconds setting for the ECS service accordingly.
BlueUnHealthyHostCountAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Properties:
    AlarmDescription: 'Alarms when there is any unhealthy target'
    Namespace: 'AWS/ApplicationELB'
    MetricName: UnHealthyHostCount
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 2
    ComparisonOperator: GreaterThanThreshold
    Threshold: 1
    AlarmActions:
      - Topic
    Dimensions:
      - Name: LoadBalancer
        Value: AlbFullName
      - Name: TargetGroup
        Value: BlueTargetGroup
GreenUnHealthyHostCountAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Properties:
    AlarmDescription: 'Alarms when there is any unhealthy target'
    Namespace: 'AWS/ApplicationELB'
    MetricName: UnHealthyHostCount
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 2
    ComparisonOperator: GreaterThanThreshold
    Threshold: 1
    AlarmActions:
      - Topic
    Dimensions:
      - Name: LoadBalancer
        Value: AlbFullName
      - Name: TargetGroup
        Value: GreenTargetGroup
I am trying to add a CPUCreditBalance AWS::CloudWatch::Alarm to an Elastic Beanstalk application using CloudFormation. It should behave like the one in the screenshot, but defined in the template.
The EC2 instances and the Auto Scaling group are created by CloudFormation as well, so I don't know how to get either the InstanceId or the AutoScalingGroupName to place in this code:
CPUCreditBalanceAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Warning alarm when EC2 runs out of credit
    MetricName: CPUCreditBalance
    Namespace: AWS/EC2
    Period: 300
    Statistic: Average
    ComparisonOperator: LessThanThreshold
    Threshold: 1
    EvaluationPeriods: 2
    DatapointsToAlarm: 2
    TreatMissingData: breaching
    Dimensions:
      - Name: AutoScalingGroupName
        Value: XXXXXXXX
    AlarmActions:
      - !Ref SnsAlarmWarning
If you have your AWS::AutoScaling::AutoScalingGroup defined in the same template as the alarm, then you can just use Ref to get the ASG name:
Dimensions:
  - Name: AutoScalingGroupName
    Value: !Ref AWSEBAutoScalingGroup
AlarmActions:
  - !Ref SnsAlarmWarning
The names of the resources created by Elastic Beanstalk are listed in:
Modifying the resources that Elastic Beanstalk creates for your environment
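In an Elastic Beanstalk environment, resources like this alarm are typically added through an .ebextensions config file; a minimal sketch under that assumption (the file name is arbitrary, and SnsAlarmWarning stands in for your own topic resource defined in the same file):

```yaml
# .ebextensions/cpu-credit-alarm.config
Resources:
  CPUCreditBalanceAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Warning alarm when EC2 runs out of CPU credits
      MetricName: CPUCreditBalance
      Namespace: AWS/EC2
      Period: 300
      Statistic: Average
      ComparisonOperator: LessThanThreshold
      Threshold: 1
      EvaluationPeriods: 2
      Dimensions:
        - Name: AutoScalingGroupName
          # AWSEBAutoScalingGroup is the logical name Elastic Beanstalk
          # gives the Auto Scaling group it creates for the environment
          Value:
            Ref: AWSEBAutoScalingGroup
```

The long-form Ref is used here because .ebextensions files do not reliably support the `!Ref` shorthand.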
So I already have an existing Redshift cluster, which I created with CloudFormation. Now I need to add a new CloudWatch alarm for this cluster, like the code below. How do I map the new alarm to the existing cluster?
This is for an existing AWS Redshift cluster.
Type: AWS::CloudWatch::Alarm
Properties:
  AlarmDescription: !Join [" ", ["Health status alarm for", !Ref RedshiftCluster, "Redshift Cluster"]]
  AlarmActions:
    - !Ref redshiftClusterSNSTopic
  MetricName: HealthStatus
  Namespace: AWS/Redshift
  Statistic: Average
  Period: 300
  EvaluationPeriods: 3
  Threshold: 1
  ComparisonOperator: LessThanThreshold
  Dimensions:
    - Name: ClusterIdentifier
      Value: !Ref CARedshiftCluster
Not sure how to do this, help is appreciated.
You can give cloudkast a try. It is an online CloudFormation template generator that is regularly updated and, as of now, supports CloudWatch.
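Alternatively, since CloudWatch alarm dimensions are plain strings, an alarm can point at an already-running cluster simply by passing its identifier in as a parameter; a sketch under that assumption (the parameter and resource names here are made up, while redshiftClusterSNSTopic comes from the question):

```yaml
Parameters:
  ExistingRedshiftClusterId:
    Type: String
    Description: Identifier of the already-running Redshift cluster

Resources:
  RedshiftHealthAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: !Sub "Health status alarm for ${ExistingRedshiftClusterId} Redshift cluster"
      AlarmActions:
        - !Ref redshiftClusterSNSTopic
      MetricName: HealthStatus
      Namespace: AWS/Redshift
      Statistic: Average
      Period: 300
      EvaluationPeriods: 3
      Threshold: 1
      ComparisonOperator: LessThanThreshold
      Dimensions:
        - Name: ClusterIdentifier
          # No cross-stack reference is needed; the dimension only has to
          # match the existing cluster's identifier string
          Value: !Ref ExistingRedshiftClusterId
```

Because nothing here references the cluster resource itself, the alarm stack can be created and deleted independently of the cluster's stack.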
I recently started using ECS. I was able to deploy a container image in ECR and create task definition for my container with CPU/Memory limits. My use case is that each container will be a long running app (no webserver, no port mapping needed). The containers will be spawned on demand 1 at a time and deleted on demand 1 at a time.
I am able to create a cluster with N server instances, but I'd like the server instances to scale up and down automatically. For example, if there isn't enough CPU/memory in the cluster, I'd like a new instance to be created.
And if there is an instance with no containers running on it, I'd like that specific instance to be scaled down / deleted. This is to avoid the scale-down terminating a server instance that has running tasks on it.
What steps are needed to be able to achieve this?
Considering that you already have an ECS Cluster created, AWS provides instructions on Scaling cluster instances with CloudWatch Alarms.
Assuming that you want to scale the cluster based on the memory reservation, at a high level, you would need to do the following:
Create a Launch Configuration for your Auto Scaling Group. This defines how each new EC2 instance is configured and registered with the ECS cluster.
Create an Auto Scaling Group, so that the size of the cluster can be scaled up and down.
Create a CloudWatch Alarm to scale the cluster up if the memory reservation is over 70%
Create a CloudWatch Alarm to scale the cluster down if the memory reservation is under 30%
Because it's more my specialty, I wrote up an example CloudFormation template that should get you started with most of this:
Parameters:
  MinInstances:
    Type: Number
  MaxInstances:
    Type: Number
  InstanceType:
    Type: String
    AllowedValues:
      - t2.nano
      - t2.micro
      - t2.small
      - t2.medium
      - t2.large
  VpcSubnetIds:
    Type: String
Mappings:
  EcsInstanceAmis:
    us-east-2:
      Ami: ami-1c002379
    us-east-1:
      Ami: ami-9eb4b1e5
    us-west-2:
      Ami: ami-1d668865
    us-west-1:
      Ami: ami-4a2c192a
    eu-west-2:
      Ami: ami-cb1101af
    eu-west-1:
      Ami: ami-8fcc32f6
    eu-central-1:
      Ami: ami-0460cb6b
    ap-northeast-1:
      Ami: ami-b743bed1
    ap-southeast-2:
      Ami: ami-c1a6bda2
    ap-southeast-1:
      Ami: ami-9d1f7efe
    ca-central-1:
      Ami: ami-b677c9d2
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
  Role:
    Type: AWS::IAM::Role
    Properties:
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service:
                - ec2.amazonaws.com
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles:
        - !Ref Role
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !FindInMap [EcsInstanceAmis, !Ref "AWS::Region", Ami]
      InstanceType: !Ref InstanceType
      IamInstanceProfile: !Ref InstanceProfile
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          echo ECS_CLUSTER=${Cluster} >> /etc/ecs/ecs.config
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: !Ref MinInstances
      MaxSize: !Ref MaxInstances
      LaunchConfigurationName: !Ref LaunchConfiguration
      HealthCheckGracePeriod: 300
      HealthCheckType: EC2
      VPCZoneIdentifier: !Split [",", !Ref VpcSubnetIds]
  ScaleUpPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref AutoScalingGroup
      Cooldown: '1'
      ScalingAdjustment: '1'
  MemoryReservationAlarmHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: '2'
      Statistic: Average
      Threshold: '70'
      AlarmDescription: Alarm if cluster memory reservation is too high
      Period: '60'
      AlarmActions:
        - Ref: ScaleUpPolicy
      Namespace: AWS/ECS
      Dimensions:
        - Name: ClusterName
          Value: !Ref Cluster
      ComparisonOperator: GreaterThanThreshold
      MetricName: MemoryReservation
  ScaleDownPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref AutoScalingGroup
      Cooldown: '1'
      ScalingAdjustment: '-1'
  MemoryReservationAlarmLow:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: '2'
      Statistic: Average
      Threshold: '30'
      AlarmDescription: Alarm if cluster memory reservation is too low
      Period: '60'
      AlarmActions:
        - Ref: ScaleDownPolicy
      Namespace: AWS/ECS
      Dimensions:
        - Name: ClusterName
          Value: !Ref Cluster
      ComparisonOperator: LessThanThreshold
      MetricName: MemoryReservation
This creates an ECS cluster, a Launch Configuration, and an Auto Scaling Group, as well as the alarms based on the ECS memory reservation.
Now we can get to the interesting discussions.
Why can't we scale up based on both CPU utilization and memory reservation?
The short answer is: you totally can, but you're likely to pay a lot for it. EC2 has a known property that when you create an instance, you pay for a minimum of 1 hour, because partial instance hours are charged as full hours. Why that's relevant: imagine you have multiple alarms. Say you have a bunch of services that are currently running idle, and you fill the cluster. Either the CPU alarm scales down the cluster, or the memory alarm scales up the cluster. One of these will likely scale the cluster to the point that its own alarm is no longer triggered. After the cooldown period, the other alarm will undo that last action, and after the next cooldown the action will likely be redone. Thus instances are created and then destroyed repeatedly, on every other cooldown.
After giving a bunch of thought to this, the strategy that I came up with was to use Application Autoscaling for ECS Services based on CPU Utilization, and Memory Reservation based on the cluster. So if one service is running hot, an extra task will be added to share the load. This will slowly fill the cluster memory reservation capacity. When the memory gets full, the cluster scales up. When a service is cooling down, the services will start shutting down tasks. As the memory reservation on the cluster drops, the cluster will be scaled down.
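The service-level half of that strategy can be sketched with Application Auto Scaling target tracking on the service's average CPU (the logical names Cluster, Service, and AutoScalingRole are placeholders for resources in your own template, and the capacity limits and 60% target are assumptions to tune):

```yaml
ServiceScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    ServiceNamespace: ecs
    ScalableDimension: ecs:service:DesiredCount
    # Cluster and Service are assumed to be defined elsewhere in the template
    ResourceId: !Sub "service/${Cluster}/${Service.Name}"
    MinCapacity: 1
    MaxCapacity: 10
    RoleARN: !GetAtt AutoScalingRole.Arn
ServiceCpuScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: service-cpu-target-tracking
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref ServiceScalableTarget
    TargetTrackingScalingPolicyConfiguration:
      # Add/remove tasks to keep the service's average CPU near 60%
      TargetValue: 60.0
      PredefinedMetricSpecification:
        PredefinedMetricType: ECSServiceAverageCPUUtilization
```

Target tracking removes the need to hand-tune step adjustments: Application Auto Scaling creates and manages the CloudWatch alarms itself.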
The thresholds for the CloudWatch Alarms might need to be experimented with, based on your task definitions. The reason for this is that if you put the scale up threshold too high, it may not scale up as the memory gets consumed, and then when autoscaling goes to place another task, it will find that there isn't enough memory available on any instance in the cluster, and therefore be unable to place another task.
As part of the re:Invent 2019 conference, AWS announced cluster auto scaling for Amazon ECS. Clusters configured with auto scaling can now add more capacity when needed and remove capacity that is not necessary. You can find more information about this in the documentation.
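Cluster auto scaling is built on capacity providers; a minimal sketch of attaching one to an Auto Scaling group like the one in the template above (the ManagedScaling settings here are assumptions):

```yaml
EcsCapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      # This property accepts the Auto Scaling group name or ARN;
      # !Ref on an AWS::AutoScaling::AutoScalingGroup returns the name
      AutoScalingGroupArn: !Ref AutoScalingGroup
      ManagedScaling:
        Status: ENABLED
        # ECS scales the ASG to keep it 100% utilized by tasks
        TargetCapacity: 100
      # Prevent ECS from terminating instances that still run tasks
      ManagedTerminationProtection: ENABLED
```

The capacity provider then has to be associated with the cluster (via the cluster's CapacityProviders property or a ClusterCapacityProviderAssociations resource) before services can use it.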
However, depending on what you're trying to run, AWS Fargate could be a better option. Fargate allows you to run containers without provisioning and managing the underlying infrastructure; i.e., you don't have to deal with any EC2 instances. With Fargate, you can make an API call to run your container, the container can run, and then there's nothing to clean up once the container stops running. Fargate is billed per-second (with a 1-minute minimum) and is priced based on the amount of CPU and memory allocated (see here for details).
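For a long-running, no-webserver workload like the one described, a Fargate task definition removes the cluster-sizing problem entirely; a sketch (family and image names are placeholders, and TaskExecutionRole stands in for your own IAM role):

```yaml
FargateTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-long-running-job        # placeholder name
    RequiresCompatibilities:
      - FARGATE
    NetworkMode: awsvpc                # Fargate requires awsvpc networking
    # On Fargate, CPU and memory are set at the task level and must be a
    # valid pairing; 1024 CPU units (1 vCPU) with 2048 MB is one such pair
    Cpu: '1024'
    Memory: '2048'
    # Role that lets ECS pull the image and write logs on your behalf
    ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn
    ContainerDefinitions:
      - Name: app
        Image: my-image:latest         # placeholder image reference
```

Each task launched from this definition gets its own isolated compute, so there are no instances to scale up beforehand or drain afterwards.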