There's a MemoryUtilization metric under both of these dimension sets:
ClusterName, ServiceName
ClusterName
I can't tell the difference between the two.
ClusterName
This dimension filters the data that you request for all resources in a specified cluster. All Amazon ECS metrics are filtered by ClusterName.
ServiceName
This dimension filters the data that you request for all resources in a specified service within a specified cluster.
Cluster metrics are scoped as ECS > ClusterName and service utilization metrics are scoped as ECS > ClusterName, ServiceName.
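For example, here is a hedged sketch of pulling the same MemoryUtilization metric at both scopes with the AWS SDK for JavaScript v3; the cluster and service names are placeholders:

import { CloudWatchClient, GetMetricStatisticsCommand } from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({ region: "us-east-1" });

async function compareScopes() {
  const base = {
    Namespace: "AWS/ECS",
    MetricName: "MemoryUtilization",
    StartTime: new Date(Date.now() - 3600 * 1000),
    EndTime: new Date(),
    Period: 300,
    Statistics: ["Average" as const],
  };

  // Cluster scope: utilization across all resources in the cluster.
  const clusterWide = await cw.send(new GetMetricStatisticsCommand({
    ...base,
    Dimensions: [{ Name: "ClusterName", Value: "my-cluster" }],
  }));

  // Service scope: utilization for only that service's tasks.
  const serviceOnly = await cw.send(new GetMetricStatisticsCommand({
    ...base,
    Dimensions: [
      { Name: "ClusterName", Value: "my-cluster" },
      { Name: "ServiceName", Value: "my-service" },
    ],
  }));

  console.log(clusterWide.Datapoints, serviceOnly.Datapoints);
}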
I am using CDK to run standalone containerized ECS tasks on EC2.
I'm not using an ECS service, because my tasks should be triggered by a request (in this case, from a Lambda), so they are not long-running (or always on). I use the following code, which works for one instance of the task:
this.autoScalingGroup = new autoscaling.AutoScalingGroup(this, 'ASG', {
  vpc: this.vpc,
  instanceType: ec2.InstanceType.of(
    ec2.InstanceClass.C5,
    ec2.InstanceSize.XLARGE,
  ),
  machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
  minCapacity: 1,
  maxCapacity: 10,
  updatePolicy: UpdatePolicy.rollingUpdate(),
});

const capacityProvider = new ecs.AsgCapacityProvider(
  this,
  'AsgCapacityProvider',
  {
    autoScalingGroup: this.autoScalingGroup,
  },
);

this.cluster.addAsgCapacityProvider(capacityProvider);
The problem is that I cannot run tasks concurrently: the group doesn't scale up to more instances, so only one task runs at a time (my container takes up ~80% of the EC2 instance's resources). How would I trigger auto scaling so that more instances get added when multiple tasks are run?
P.S. Obviously, if I set minCapacity to x, then x concurrent tasks can run, but I'd like to keep only 1 instance up and scale up and down from there.
The CDK-created capacity provider will auto-scale the instances based on standalone task activity. I suspect scaling is not working for you because your run-task invocations are missing a capacity provider strategy that tells ECS what capacity provider to use. There are a couple of ways to fix this:
Option 1: Set a strategy in the run-task call:
aws ecs run-task \
  --cluster <cluster-arn> \
  --task-definition <task-definition-name> \
  --capacity-provider-strategy capacityProvider=<capacity-provider-name>,weight=1,base=1 \
  --count 3
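Since your tasks are kicked off from a Lambda, the same strategy can be passed in the RunTask API call there as well. A minimal sketch with the AWS SDK for JavaScript v3 (the cluster, task definition, and capacity provider names are placeholders):

import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-east-1" });

export async function startTasks() {
  // Passing a capacity provider strategy here lets the capacity provider's
  // managed scaling grow the Auto Scaling group when the cluster has no
  // room left for the requested tasks.
  await ecs.send(new RunTaskCommand({
    cluster: "my-cluster",                  // placeholder
    taskDefinition: "my-task-definition",   // placeholder
    count: 3,
    capacityProviderStrategy: [
      { capacityProvider: "my-capacity-provider", weight: 1, base: 1 },
    ],
  }));
}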
Option 2: Set the cluster's *default* capacity provider strategy
The CDK's L2 Cluster construct does not set a default capacity provider strategy on your cluster. See this GitHub issue for context and workarounds. The default can also be set in the console.
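As a rough sketch of Option 2, continuing the stack code from your question and assuming a recent aws-cdk-lib version where the Cluster construct exposes addDefaultCapacityProviderStrategy (older versions need the workarounds from the GitHub issue or the console):

// Assumption: aws-cdk-lib is new enough to provide
// Cluster.addDefaultCapacityProviderStrategy.
this.cluster.addAsgCapacityProvider(capacityProvider);
this.cluster.addDefaultCapacityProviderStrategy([
  { capacityProvider: capacityProvider.capacityProviderName, weight: 1 },
]);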
I created a Capacity Provider.
Then, I deployed a Service with that Capacity Provider and a binpack (CPU) placement strategy.
Service:
  Type: AWS::ECS::Service
  Properties:
    ServiceName: my-service
    TaskDefinition: my-td
    Cluster: my-cluster
    CapacityProviderStrategy:
      - CapacityProvider: my-capacity-provider
        Weight: 1
    PlacementStrategies:
      - Type: binpack
        Field: cpu
    DesiredCount: 1
This also creates the CloudWatch alarms for high and low CPU usage.
When I add 15 tasks, after a while, the high-usage alarm is triggered and new EC2 instances are added.
Then, if I update the number of tasks to 6... nothing happens. The low-usage alarm is never triggered and the EC2 instances are not removed.
However, the service does deregister the extra tasks, keeps the required 6, and enters a steady state.
Does anyone have this issue?
What can I do?
I know I could go to the ECS cluster, select the extra EC2 instances in the ECS instances tab, and drain them. However, this manual action would defeat the purpose of using CloudFormation.
I am a little hazy about the need for both. I am using both in my ECS CloudFormation template. It appears that AWS::AutoScaling::AutoScalingGroup is for scaling EC2 instances, whereas AWS::ApplicationAutoScaling::ScalableTarget is for scaling containers/tasks. Is my understanding correct?
That is pretty much correct.
AWS::AutoScaling::AutoScalingGroup is used for creating an Auto Scaling group of EC2 instances
AWS::ApplicationAutoScaling::ScalableTarget is used to specify an application resource that can scale - that is, any of a range of resources, including an ECS service, an EMR cluster, or a DynamoDB table. A list of all the resources that can be scaled can be found in the ResourceId parameter description for RegisterScalableTarget.
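For instance, here is a hedged CDK (TypeScript) sketch of the second kind: registering an ECS service's desired count as an Application Auto Scaling target and attaching a target tracking policy. The cluster and service names are placeholders:

import { App, Stack } from "aws-cdk-lib";
import * as appscaling from "aws-cdk-lib/aws-applicationautoscaling";

const app = new App();
const stack = new Stack(app, "ServiceScalingStack");

// Register the ECS service's DesiredCount as a scalable target
// ("my-cluster" and "my-service" are placeholder names).
const target = new appscaling.ScalableTarget(stack, "EcsServiceTarget", {
  serviceNamespace: appscaling.ServiceNamespace.ECS,
  scalableDimension: "ecs:service:DesiredCount",
  resourceId: "service/my-cluster/my-service",
  minCapacity: 1,
  maxCapacity: 10,
});

// Keep the service's average CPU around 60% by adjusting DesiredCount;
// the EC2 instances themselves are still managed by the separate
// AWS::AutoScaling::AutoScalingGroup.
target.scaleToTrackMetric("CpuTracking", {
  targetValue: 60,
  predefinedMetric:
    appscaling.PredefinedMetric.ECS_SERVICE_AVERAGE_CPU_UTILIZATION,
});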
By default, AWS Elastic Beanstalk scales on NetworkOut.
However, I am wanting to scale on two scenarios, network out and CPU utilization.
Is there a way to do this so that if either of these exceeds their limit, it will scale?
In the Elastic Beanstalk console, under Configuration > Scaling > Scaling Trigger, you can only set one Trigger Measurement, such as CPUUtilization, NetworkIn, or NetworkOut.
If you need multiple scaling policies, you can add them to Elastic Beanstalk's Auto Scaling group manually or through an ebextensions config file, as described here. Add a simple scaling policy or a target tracking scaling policy.
Add the following to an ebextensions config file to create a target tracking scaling policy:
Resources:
  POL:
    Type: 'AWS::AutoScaling::ScalingPolicy'
    Properties:
      AutoScalingGroupName: !Ref AWSEBAutoScalingGroup
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 80
To create a simple scaling policy, you need to create both a scaling policy and a CloudWatch alarm resource, as shown here.
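If you prefer not to embed this in an ebextensions template, the same pair of resources can also be created directly against the APIs. A hedged sketch with the AWS SDK for JavaScript v3 (the group name, policy name, and thresholds are placeholders, not values Elastic Beanstalk provides):

import { AutoScalingClient, PutScalingPolicyCommand } from "@aws-sdk/client-auto-scaling";
import { CloudWatchClient, PutMetricAlarmCommand } from "@aws-sdk/client-cloudwatch";

const asg = new AutoScalingClient({ region: "us-east-1" });
const cw = new CloudWatchClient({ region: "us-east-1" });

async function addSimpleCpuScaleOut() {
  // Simple scaling policy: add one instance when triggered.
  const policy = await asg.send(new PutScalingPolicyCommand({
    AutoScalingGroupName: "my-eb-asg",        // placeholder group name
    PolicyName: "scale-out-on-cpu",
    PolicyType: "SimpleScaling",
    AdjustmentType: "ChangeInCapacity",
    ScalingAdjustment: 1,
    Cooldown: 300,
  }));

  // CloudWatch alarm that fires the policy when average CPU exceeds 80%.
  await cw.send(new PutMetricAlarmCommand({
    AlarmName: "my-eb-asg-high-cpu",
    Namespace: "AWS/EC2",
    MetricName: "CPUUtilization",
    Dimensions: [{ Name: "AutoScalingGroupName", Value: "my-eb-asg" }],
    Statistic: "Average",
    Period: 300,
    EvaluationPeriods: 2,
    Threshold: 80,
    ComparisonOperator: "GreaterThanThreshold",
    AlarmActions: [policy.PolicyARN!],
  }));
}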
I'm using Ansible to configure AWS Auto Scaling Groups (ASG). Looking at the ec2_asg module options, there is none for enabling monitoring in CloudWatch. However, that option can be enabled either from the AWS CLI or the AWS Console.
In the Console, it is labeled as "Group Metric Collection".
Keep in mind that I do not want to monitor the EC2 instances, but the Auto Scaling Group itself.
Thank you.
I submitted a PR last year to add two AWS modules: boto3 and boto3_wait.
These two modules allow you to interact with the AWS API using boto3.
For instance, you could enable group metrics on the ASG by calling the enable_metrics_collection method on the AutoScaling service:
- name: Enable group metrics
  boto3:
    service: autoscaling
    region: us-east-1
    operation: enable_metrics_collection
    parameters:
      AutoScalingGroupName: my-auto-scaling-group
      Granularity: 1Minute
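Under the hood this just calls the Auto Scaling EnableMetricsCollection API. For comparison, a hedged sketch of the same call made directly with the AWS SDK for JavaScript v3 (the group name is a placeholder):

import { AutoScalingClient, EnableMetricsCollectionCommand } from "@aws-sdk/client-auto-scaling";

const client = new AutoScalingClient({ region: "us-east-1" });

async function enableGroupMetrics() {
  // Enables collection of group-level metrics for the ASG itself;
  // omitting Metrics turns on all available group metrics.
  await client.send(new EnableMetricsCollectionCommand({
    AutoScalingGroupName: "my-auto-scaling-group",
    Granularity: "1Minute",
  }));
}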
Feel free to give the PR a thumbs-up if you like it! ;)