I have added a Capacity Provider to an ECS cluster. While scale-out events work as expected due to changes in CapacityProviderReservation metric, scale-in events do not work.
In my case, the TargetCapacity property is set to 90, but looking at CloudWatch the average for the CapacityProviderReservation metric currently sits at 50%. This has been the case for the last 16 hours.
According to AWS's own documentation, scale-in events occur:
When using dynamic scaling policies and the size of the group decreases as a result of changes in a metric's value
So it seems like the Capacity Provider is not changing the desired size of the ASG as expected.
Am I missing something here, or do capacity providers tied to ASGs simply not work both ways?
ASG and Capacity Provider resources in CloudFormation
Resources:
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AutoScalingGroupName: !Sub ${ResourceNamePrefix}-asg
      VPCZoneIdentifier:
        - !Ref PrivateSubnetAId
      LaunchTemplate:
        LaunchTemplateId: !Ref Ec2LaunchTemplate
        Version: !GetAtt Ec2LaunchTemplate.LatestVersionNumber
      MinSize: 0
      MaxSize: 3
      DesiredCapacity: 1

  EcsCapacityProvider:
    Type: AWS::ECS::CapacityProvider
    Properties:
      Name: !Sub ${ResourceNamePrefix}-ecs-capacity-provider
      AutoScalingGroupProvider:
        AutoScalingGroupArn: !Ref AutoScalingGroup
        ManagedScaling:
          Status: ENABLED
          TargetCapacity: 90
        ManagedTerminationProtection: DISABLED
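For reference, AWS documents CapacityProviderReservation as M / N * 100, where M is the number of instances the provider estimates it needs and N is the number currently running. A small sketch of that formula (the edge-case values for an empty ASG are taken from the docs) shows why a sustained 50% reading against a TargetCapacity of 90 should, in principle, trigger a scale-in:

```python
def capacity_provider_reservation(needed: int, running: int) -> float:
    """CapacityProviderReservation in percent: needed / running * 100."""
    if running == 0:
        # Per the docs: with zero instances running, the metric reports
        # 100 when nothing is needed and 200 when capacity is needed,
        # which forces an immediate scale-out.
        return 100.0 if needed == 0 else 200.0
    return needed / running * 100

# 1 instance needed, 2 running: the metric sits at 50%, below the
# TargetCapacity of 90, so managed scaling should shrink the ASG.
print(capacity_provider_reservation(1, 2))  # 50.0
```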
Dynamic scaling policy for ASG
Current status of the CapacityProviderReservation metric
The CapacityProviderReservation metric has been at 50% for well over 12 hours.
Current status of the Capacity Provider
As you can see, the desired size is still 2, when it should have dropped back to 1.
Update
After deleting and recreating the cluster, I notice that the Capacity Provider changes the DesiredCapacity to 2 instantly, even though there are no tasks running.
Related
I'm trying to learn autoscaling for ECS with EC2 launch type.
Without the autoscaling part, everything works well.
When I add the autoscaling part (the Scalable Target, plus an Alarm and Policy each for scaling in and scaling out), the service gets stuck in the event:
service ecs-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXX has insufficient CPU units available.
If I look at the service, the desired count is stuck at 4, pending is 0, and running is 1.
As for the alarms, the high-CPU-usage alarm is OK and the low-CPU-usage alarm is In alarm.
The Task Definition assigns 1024 CPU units and 1024 MB of memory.
The Container assigns 1024 CPU units and 1024 MB of memory.
And I have been waiting for more than 40 minutes.
What would I expect?
I'm setting a low threshold (20%) for the high-CPU-usage alarm so that it triggers easily.
The desired count should then increase up to 4 as CPU usage climbs.
This should work both ways: the service should scale up to 4 tasks while the high alarm is firing, and back down to 1 while the low alarm is firing.
Here's the entire chain of events, without task IDs, dates, and event IDs, to simplify reading.
service ecs-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXX has insufficient CPU units available. For more information, see the Troubleshooting section.
service ecs-service registered 1 targets in target-group ecs-target
service ecs-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXX has insufficient CPU units available. For more information, see the Troubleshooting section.
Message: Successfully set desired count to 4. Waiting for change to be fulfilled by ecs. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
service ecs-service has started 1 tasks: task
service ecs-service has stopped 1 running tasks: task
service ecs-service deregistered 1 targets in target-group ecs-target
service ecs-service (instance XXX) (port 8080) is unhealthy in target-group ecs-target due to (reason Health checks failed)
service ecs-service has started 1 tasks: task
service ecs-service was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster. For more information, see the Troubleshooting section.
Message: Successfully set desired count to 4. Found it was later changed to 0. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
Message: Successfully set desired count to 4. Found it was later changed to 0. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
Message: Successfully set desired count to 3. Change successfully fulfilled by ecs. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
Message: Successfully set desired count to 2. Change successfully fulfilled by ecs. Cause: monitor alarm high-cpu-usage in state ALARM triggered policy ecs-high-policy
This is my Scalable Target, Alarms and Policies:
The service uses a Load Balancer.
ServiceScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  DependsOn: Service
  Properties:
    MaxCapacity: !Ref MaxSize
    MinCapacity: !Ref MinSize
    ResourceId:
      Fn::Join:
        - '/'
        - - 'service'
          - Ref: Cluster
          - Fn::GetAtt:
              - Service
              - 'Name'
    RoleARN:
      Fn::ImportValue: !Ref ECSAutoScalingRole
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs

HighCpuUsageAlarm:
  Type: AWS::CloudWatch::Alarm
  DependsOn: ScalingPolicyHigh
  Properties:
    AlarmName: high-cpu
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Dimensions:
      - Name: ServiceName
        Value: !Ref ServiceName
      - Name: ClusterName
        Value: !Ref Cluster
    Statistic: Average
    Period: 300
    EvaluationPeriods: 1
    Threshold: 20
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref ScalingPolicyHigh

ScalingPolicyHigh:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: policy-high
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: ServiceScalableTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 600
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          MetricIntervalUpperBound: 15
          ScalingAdjustment: 1
        - MetricIntervalLowerBound: 15
          MetricIntervalUpperBound: 25
          ScalingAdjustment: 2
        - MetricIntervalLowerBound: 25
          ScalingAdjustment: 3

LowCpuUsageAlarm:
  Type: AWS::CloudWatch::Alarm
  DependsOn: ScalingPolicyLow
  Properties:
    AlarmName: low-cpu
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Dimensions:
      - Name: ServiceName
        Value: !Ref ServiceName
      - Name: ClusterName
        Value: !Ref Cluster
    Statistic: Average
    Period: 300
    EvaluationPeriods: 2
    Threshold: 15
    ComparisonOperator: LessThanOrEqualToThreshold
    AlarmActions:
      - !Ref ScalingPolicyLow

ScalingPolicyLow:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: policy-low
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: ServiceScalableTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 600
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: -15
          MetricIntervalUpperBound: 0
          ScalingAdjustment: -1
        - MetricIntervalLowerBound: -25
          MetricIntervalUpperBound: -15
          ScalingAdjustment: -2
        - MetricIntervalUpperBound: -25
          ScalingAdjustment: -3
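One thing worth keeping in mind when reading these policies: the MetricIntervalLowerBound/UpperBound values are offsets relative to the alarm threshold, not absolute CPU percentages. A rough Python sketch of how a step adjustment gets selected (the numbers mirror the policies above; AWS's exact inclusive/exclusive edge handling may differ slightly at the boundaries):

```python
def step_adjustment(metric: float, threshold: float, steps):
    """steps: list of (lower, upper, adjustment); None means unbounded.
    Bounds are offsets from the alarm threshold."""
    diff = metric - threshold
    for lower, upper, adjustment in steps:
        lower_ok = lower is None or diff >= lower
        upper_ok = upper is None or diff < upper
        if lower_ok and upper_ok:
            return adjustment
    return 0  # alarm breached but no step matched

# Scale-out steps from ScalingPolicyHigh (threshold 20):
high_steps = [(0, 15, 1), (15, 25, 2), (25, None, 3)]
print(step_adjustment(30, 20, high_steps))  # CPU 30% -> offset 10 -> +1
print(step_adjustment(50, 20, high_steps))  # CPU 50% -> offset 30 -> +3

# Scale-in steps from ScalingPolicyLow (threshold 15):
low_steps = [(-15, 0, -1), (-25, -15, -2), (None, -25, -3)]
print(step_adjustment(10, 15, low_steps))   # CPU 10% -> offset -5 -> -1
```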
I'd appreciate any help; I cannot make it work properly.
I have a workload running as an ECS service attached to a target group. Then I have an alarm monitoring that target group's instance count (HealthyHostCount). I'd like to implement blue/green deployments using 2 target groups, but it seems like because the alarm monitors a specific target group's value, it needs to be updated every deployment separately from the actual deployment.
This seems fragile, and it feels like there should be a better way (e.g. a post-deployment script that updates the alarm's target group could fail), but I can't see one. Is there an obviously easier solution?
Instead of monitoring that you have the desired number of healthy targets, monitor that you have no unhealthy ones.
Your ECS service will take care of managing your desired count, and you may want to scale the service anyway, so UnHealthyHostCount is the better metric to alarm on, I think.
Create one alarm for each target group as below.
These won't trigger between normal ECS blue/green deployments, only if there is a registered target failing health-checks. You need to tune the health-check settings on the target group and HealthCheckGracePeriodSeconds setting for the ECS service accordingly.
BlueUnHealthyHostCountAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Properties:
    AlarmDescription: 'Alarms when there is any unhealthy target'
    Namespace: 'AWS/ApplicationELB'
    MetricName: UnHealthyHostCount
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 2
    ComparisonOperator: GreaterThanThreshold
    Threshold: 0
    AlarmActions:
      - Topic
    Dimensions:
      - Name: LoadBalancer
        Value: AlbFullName
      - Name: TargetGroup
        Value: BlueTargetGroup

GreenUnHealthyHostCountAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Properties:
    AlarmDescription: 'Alarms when there is any unhealthy target'
    Namespace: 'AWS/ApplicationELB'
    MetricName: UnHealthyHostCount
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 2
    ComparisonOperator: GreaterThanThreshold
    Threshold: 0
    AlarmActions:
      - Topic
    Dimensions:
      - Name: LoadBalancer
        Value: AlbFullName
      - Name: TargetGroup
        Value: GreenTargetGroup
I'm using AutoscalingGroup with mixed policy, where OnDemandBaseCapacity and OnDemandPercentageAboveBaseCapacity are 0, so it won't launch any On-Demand instance but always try to request and launch spot instance when needed.
My Cloudformation Spec for autoscaling group:
AutoScalingGroupForApiServers:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier: !Ref VpcSubnetsForApiLoadBalancer
    MinSize: !Ref ASGMinSizeForApiServers
    MaxSize: !Ref ASGMaxSizeForApiServers
    HealthCheckType: !Ref HealthCheckTypeForApiServers
    HealthCheckGracePeriod: !FindInMap [ Constants, '-', AutoScalingGroupDefaultHealthCheckGracePeriod ]
    MixedInstancesPolicy:
      InstancesDistribution:
        OnDemandBaseCapacity: 0
        OnDemandPercentageAboveBaseCapacity: 0
        SpotAllocationStrategy: lowest-price
        SpotInstancePools: 2
      LaunchTemplate:
        LaunchTemplateSpecification:
          LaunchTemplateId: !Ref AutoScalingLaunchTemplateForApiServers
          Version: !GetAtt AutoScalingLaunchTemplateForApiServers.LatestVersionNumber
    LoadBalancerNames:
      - !Ref ElasticLoadBalancerForApiServers
I have two questions:
1) If one spot instance terminates and no other spot instance is available, will it launch an On-Demand instance and then scale it back down to 0?
2) Upon receiving the 2-minute termination notice, will it automatically remove the instance from the referenced load balancers/target groups, or do I have to handle that manually with CloudWatch/SNS/Lambda?
1) If one spot instance terminates and no other spot instance is available, will it launch an On-Demand instance and then scale it back down to 0?
It will try to maintain the On-Demand:Spot ratio that you specified in the configuration. For example, if you set 30% On-Demand and 70% Spot, it will keep the Auto Scaling group's instances at 30% On-Demand and 70% Spot. If Spot instances are not available, it won't launch On-Demand instances to compensate.
2) Upon receiving the 2-minute termination notice, will it automatically remove the instance from the referenced load balancers/target groups, or do I have to handle that manually with CloudWatch/SNS/Lambda?
If you have linked the Auto Scaling group with a target group, it will automatically remove terminated or unavailable instances from the target groups.
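If you do need custom drain behavior beyond the automatic target-group deregistration, one option is to watch for the interruption notice on the instance itself: the instance metadata service exposes `/latest/meta-data/spot/instance-action` roughly two minutes before reclamation. A minimal sketch, with `fetch` as a hypothetical stand-in for an HTTP GET against the metadata endpoint (injected so it can be stubbed):

```python
import json

def interruption_action(fetch):
    """Return the pending spot action ('terminate', 'stop', ...) or None.
    `fetch` performs an HTTP GET and returns the body, or None on 404
    (the endpoint 404s until a notice has actually been issued)."""
    body = fetch("http://169.254.169.254/latest/meta-data/spot/instance-action")
    if body is None:
        return None
    return json.loads(body).get("action")

# Stubbed example simulating a pending termination notice:
stub = lambda url: '{"action": "terminate", "time": "2024-01-01T00:00:00Z"}'
print(interruption_action(stub))  # terminate
```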
I'm implementing ECS health-check functionality and I'm thinking about the best way to do that.
For now I have found several solutions:
Using AWS ECS metrics and dimensions, and checking whether some metric reports an unhealthy value
Using CloudWatch Alarm:
ECSHealthAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Alarm for ECS StatusCheckFailed Metric
    ComparisonOperator: GreaterThanOrEqualToThreshold
    EvaluationPeriods: 2
    Statistic: Maximum
    MetricName: StatusCheckFailed
    Namespace: AWS/ECS
    Period: 30
    Threshold: 1.0
    AlarmActions:
      - !Ref AlarmTopic
    InsufficientDataActions:
      - !Ref AlarmTopic
    Dimensions:
      - Name: ClusterName
        Value: !Ref ClusterName
      - Name: ServiceName
        Value: !GetAtt service.Name
Using CloudWatch event:
EventRule:
  Type: "AWS::Events::Rule"
  Properties:
    Name: CloudWatchRMExtensionECSStoppedRule
    Description: "Notify when ECS container stopped"
    EventPattern:
      source: ["aws.ecs"]
      detail-type: ["ECS Task State Change", "ECS Container Instance State Change"]
      detail:
        clusterArn: [ 'clusterArn' ]
        lastStatus: [ "STOPPED" ]
        stoppedReason: [ "Essential container in task exited" ]
        group: [ 'service-group' ]
    State: "ENABLED"
    Targets:
      - Arn: !Ref ECSAlarmSNSTopic
        Id: "PublishAlarmTopic"
        InputTransformer:
          InputPathsMap:
            stopped-reason: "$.detail.stoppedReason"
          InputTemplate: '"This micro-service has been stopped with the following reason: <stopped-reason>"'
Could you please advise whether these approaches are correct, or whether there is another, more efficient way to do this? Thanks for any help!
I am not able to post a comment, so here are some thoughts. I am a bit unclear on your requirement: whether you are looking for alerts at the EC2 server status-check level or at the level of each ECS service's tasks. I am adding all the possible options here.
I would run the ECS cluster's EC2 instances under an Auto Scaling Group and, based on ASG CloudWatch metrics, set up an SNS notification when instances are added/removed.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html
We can also have the AWS ecs-agent docker container logs sent to CloudWatch and get SNS notifications based on errors or filtered events.
We can subscribe to CloudWatch from the ECS event stream as well, for when each service's tasks are started/stopped. References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch_event_stream.html https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_cwet.html
Example event entries are in the link below: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_cwe_events.html
Reference for setting alarm based on log events.
https://medium.com/@martatatiana/insufficient-data-cloudwatch-alarm-based-on-custom-metric-filter-4e41c1f82050
Add a healthcheck for each ECS service and have containers restarted if they are not doing well.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck
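To make that last option concrete, here is a minimal sketch of the healthCheck block from the linked task definition docs, written as the container definition a boto3 `register_task_definition` call would take. The image name, curl command, and interval values are illustrative assumptions, not part of the original post:

```python
# Hypothetical container definition with a container-level health check.
# ECS marks the container UNHEALTHY (and the service replaces the task)
# after `retries` consecutive failed checks.
container_definition = {
    "name": "web",
    "image": "example/web:latest",
    "essential": True,
    "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
        "interval": 30,    # seconds between checks
        "timeout": 5,      # seconds before a check counts as failed
        "retries": 3,      # consecutive failures before UNHEALTHY
        "startPeriod": 60, # grace period after container start
    },
}
print(container_definition["healthCheck"]["command"][0])  # CMD-SHELL
```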
Please do let me know your thoughts as well :).
I recently started using ECS. I was able to deploy a container image in ECR and create task definition for my container with CPU/Memory limits. My use case is that each container will be a long running app (no webserver, no port mapping needed). The containers will be spawned on demand 1 at a time and deleted on demand 1 at a time.
I am able to create a cluster with N server instances. But I'd like the server instances to scale up/down automatically. For example, if there isn't enough CPU/Memory in the cluster, I'd like a new instance to be created.
And if there is an instance with no containers running on it, I'd like that specific instance to be scaled down/deleted. This is to avoid auto-scale-down terminating a server instance that has running tasks on it.
What steps are needed to be able to achieve this?
Considering that you already have an ECS Cluster created, AWS provides instructions on Scaling cluster instances with CloudWatch Alarms.
Assuming that you want to scale the cluster based on the memory reservation, at a high level, you would need to do the following:
Create a Launch Configuration for your Auto Scaling Group.
Create an Auto Scaling Group, so that the size of the cluster can be scaled up and down.
Create a CloudWatch Alarm to scale the cluster up if the memory reservation is over 70%
Create a CloudWatch Alarm to scale the cluster down if the memory reservation is under 30%
Because it's more of my specialty I wrote up an example CloudFormation template that should get you started for most of this:
Parameters:
  MinInstances:
    Type: Number
  MaxInstances:
    Type: Number
  InstanceType:
    Type: String
    AllowedValues:
      - t2.nano
      - t2.micro
      - t2.small
      - t2.medium
      - t2.large
  VpcSubnetIds:
    Type: String

Mappings:
  EcsInstanceAmis:
    us-east-2:
      Ami: ami-1c002379
    us-east-1:
      Ami: ami-9eb4b1e5
    us-west-2:
      Ami: ami-1d668865
    us-west-1:
      Ami: ami-4a2c192a
    eu-west-2:
      Ami: ami-cb1101af
    eu-west-1:
      Ami: ami-8fcc32f6
    eu-central-1:
      Ami: ami-0460cb6b
    ap-northeast-1:
      Ami: ami-b743bed1
    ap-southeast-2:
      Ami: ami-c1a6bda2
    ap-southeast-1:
      Ami: ami-9d1f7efe
    ca-central-1:
      Ami: ami-b677c9d2
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
  Role:
    Type: AWS::IAM::Role
    Properties:
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service:
                - ec2.amazonaws.com
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles:
        - !Ref Role
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !FindInMap [EcsInstanceAmis, !Ref "AWS::Region", Ami]
      InstanceType: !Ref InstanceType
      IamInstanceProfile: !Ref InstanceProfile
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          echo ECS_CLUSTER=${Cluster} >> /etc/ecs/ecs.config
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: !Ref MinInstances
      MaxSize: !Ref MaxInstances
      LaunchConfigurationName: !Ref LaunchConfiguration
      HealthCheckGracePeriod: 300
      HealthCheckType: EC2
      VPCZoneIdentifier: !Split [",", !Ref VpcSubnetIds]
  ScaleUpPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref AutoScalingGroup
      Cooldown: '1'
      ScalingAdjustment: '1'
  MemoryReservationAlarmHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: '2'
      Statistic: Average
      Threshold: '70'
      AlarmDescription: Alarm if Cluster Memory Reservation is too high
      Period: '60'
      AlarmActions:
        - Ref: ScaleUpPolicy
      Namespace: AWS/ECS
      Dimensions:
        - Name: ClusterName
          Value: !Ref Cluster
      ComparisonOperator: GreaterThanThreshold
      MetricName: MemoryReservation
  ScaleDownPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref AutoScalingGroup
      Cooldown: '1'
      ScalingAdjustment: '-1'
  MemoryReservationAlarmLow:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: '2'
      Statistic: Average
      Threshold: '30'
      AlarmDescription: Alarm if Cluster Memory Reservation is too low
      Period: '60'
      AlarmActions:
        - Ref: ScaleDownPolicy
      Namespace: AWS/ECS
      Dimensions:
        - Name: ClusterName
          Value: !Ref Cluster
      ComparisonOperator: LessThanThreshold
      MetricName: MemoryReservation
This creates an ECS Cluster, a Launch Configuration, an Auto Scaling Group, as well as the Alarms based on the ECS Memory Reservation.
Now we can get to the interesting discussions.
Why can't we scale based on both CPU Utilization and Memory Reservation?
The short answer is: you totally can, but you're likely to pay a lot for it. EC2 has a known property that when you create an instance, you pay for a minimum of 1 hour, because partial instance hours are charged as full hours. Why that's relevant: imagine you have multiple alarms. Say you have a bunch of services that are currently running idle, and you fill the cluster. Either the CPU alarm scales down the cluster, or the Memory alarm scales up the cluster. One of these will likely scale the cluster to the point that its alarm is no longer triggered. After the cooldown period, the other alarm will undo its last action, and after the next cooldown, the action will likely be redone. Thus instances are created and then destroyed repeatedly, on every other cooldown.
After giving this a bunch of thought, the strategy I came up with was to use Application Auto Scaling for the ECS services based on CPU Utilization, and Memory Reservation for the cluster. So if one service is running hot, an extra task will be added to share the load. This will slowly fill the cluster's memory reservation capacity. When the memory gets full, the cluster scales up. When a service is cooling down, it will start shutting down tasks. As the memory reservation on the cluster drops, the cluster will be scaled down.
The thresholds for the CloudWatch Alarms might need to be experimented with, based on your task definitions. The reason for this is that if you put the scale up threshold too high, it may not scale up as the memory gets consumed, and then when autoscaling goes to place another task, it will find that there isn't enough memory available on any instance in the cluster, and therefore be unable to place another task.
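The placement pitfall in that last paragraph can be illustrated with a small sketch (all numbers here are hypothetical): cluster-wide memory reservation can look comfortably below the scale-up threshold while no single instance has room for the next task, because a task must fit entirely on one instance.

```python
def can_place(task_mb: int, free_mb_per_instance: list) -> bool:
    """A task fits only if some single instance has enough free memory."""
    return any(free >= task_mb for free in free_mb_per_instance)

# Two instances with 7 GiB each, 4 GiB already reserved on each:
free = [7168 - 4096, 7168 - 4096]          # 3072 MiB free per instance
total_reservation = 2 * 4096 / (2 * 7168)  # ~57%, below a 70% alarm
print(round(total_reservation * 100))      # 57
print(can_place(4096, free))               # False: no instance fits 4 GiB
```

So with a 70% scale-up threshold, the cluster never adds an instance here, yet a 4 GiB task cannot be placed anywhere, which is exactly why the thresholds need tuning against the task sizes.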
As part of this year's re:Invent conference, AWS announced cluster auto scaling for Amazon ECS. Clusters configured with auto scaling can now add more capacity when needed and remove capacity that is not necessary. You can find more information about this in the documentation.
However, depending on what you're trying to run, AWS Fargate could be a better option. Fargate allows you to run containers without provisioning and managing the underlying infrastructure; i.e., you don't have to deal with any EC2 instances. With Fargate, you can make an API call to run your container, the container can run, and then there's nothing to clean up once the container stops running. Fargate is billed per-second (with a 1-minute minimum) and is priced based on the amount of CPU and memory allocated (see here for details).
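A quick sketch of how that Fargate billing model works out, with placeholder rates (the per-hour prices below are assumptions for illustration, not current AWS pricing):

```python
# Assumed, illustrative rates -- check the AWS pricing page for real values.
VCPU_PER_HOUR = 0.04048
GB_PER_HOUR = 0.004445

def fargate_cost(vcpu: float, memory_gb: float, seconds: int) -> float:
    """Per-second billing with a 1-minute minimum."""
    billed = max(seconds, 60)
    hours = billed / 3600
    return (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours

# A 0.25 vCPU / 0.5 GB task that runs 30 seconds is billed for 60 seconds,
# so it costs the same as a full-minute run:
print(fargate_cost(0.25, 0.5, 30) == fargate_cost(0.25, 0.5, 60))  # True
```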