How do I add a reference to the latest version number of the launch template in the auto-scaling group? The launch template and auto scaling group are in separate files as the launch template is used by other autoscaling groups as well. Can Fn::GetAtt and Ref be used here to refer to another resource located in a separate file? If so, how can I use it? The end goal is when I make an update to the launch template, the autoscaling group should automatically refer to the latest launch template.
Note: I am creating the launch template completely separately, before creating the auto-scaling group. It works this way, but I would like the auto-scaling group to refer to the latest launch template; it currently defaults to version 1.
launch_template.yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LaunchTemplate:
    Type: 'AWS::EC2::LaunchTemplate'
    Properties:
      LaunchTemplateName: testtemplate
      LaunchTemplateData:
        IamInstanceProfile:
          Arn: >-
            arn:aws:iam::blah
        ImageId: ami-12345
        InstanceRequirements:
          InstanceGenerations:
            - current
          MemoryMiB:
            Min: 8192
          VCpuCount:
            Min: 2
            Max: 4
        BlockDeviceMappings:
          - Ebs:
              VolumeSize: 16
              VolumeType: gp2
              DeleteOnTermination: true
              Encrypted: true
            DeviceName: /dev/xvda
        Monitoring:
          Enabled: true
        KeyName: testkey
        SecurityGroupIds:
          - sg-1234
        PrivateDnsNameOptions:
          HostnameType: ip-name
        InstanceInitiatedShutdownBehavior: terminate
autoscaling_group.yaml
---
Resources:
  testautoscalinggroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AutoScalingGroupName: test-autoscaling-group
      MaxSize: '2'
      MinSize: '1'
      DesiredCapacity: '1'
      VPCZoneIdentifier:
        - subnet-1234
      TargetGroupARNs:
        - Ref: targetgroup
      MixedInstancesPolicy:
        InstancesDistribution:
          OnDemandAllocationStrategy: lowest-price
        LaunchTemplate:
          LaunchTemplateSpecification:
            LaunchTemplateName: testtemplate
      Tags:
        - Key: Cluster
          Value: test
          PropagateAtLaunch: true
  dynamicscalingpolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName:
        Ref: testautoscalinggroup
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        DisableScaleIn: false
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 30
  targetgroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      HealthCheckEnabled: true
      HealthCheckIntervalSeconds: 30
      HealthCheckProtocol: TCP
      HealthCheckTimeoutSeconds: 10
      HealthyThresholdCount: 3
      Name: testtargetgroup
      Protocol: TCP_UDP
      Port: 53
      UnhealthyThresholdCount: 3
      VpcId: vpc-1234
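Ref and Fn::GetAtt only resolve resources declared in the same template, so the auto-scaling group file cannot reference the launch template resource in launch_template.yaml directly. Within a single template, AWS::EC2::LaunchTemplate exposes a LatestVersionNumber attribute, so Version: !GetAtt LaunchTemplate.LatestVersionNumber would track the latest version on every stack update. Across two separate files, one option (an untested sketch; the parameter name LaunchTemplateVersion is made up) is to pass the version into the auto-scaling group stack at deploy time:

Parameters:
  LaunchTemplateVersion:
    Type: String
    Description: Version of testtemplate to use (for example, its current latest version number)
Resources:
  testautoscalinggroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '1'
      MaxSize: '2'
      VPCZoneIdentifier:
        - subnet-1234
      MixedInstancesPolicy:
        InstancesDistribution:
          OnDemandAllocationStrategy: lowest-price
        LaunchTemplate:
          LaunchTemplateSpecification:
            LaunchTemplateName: testtemplate
            Version: !Ref LaunchTemplateVersion   # supplied when the stack is created or updated

If the launch template and the group were moved into one template, the GetAtt form above would remove the need for the parameter.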
Related
I'm trying to create an ASG with dynamic and predictive scaling through CloudFormation. However, I'm getting the error below. Can I use dynamic and predictive scaling simultaneously within the same template?
Resource handler returned message: "You can't specify PredictiveScalingConfiguration for policy type: TargetTrackingScaling (Service: AutoScaling, Status Code: 400, Request ID: bd851c95-ad78-4afc-979b-f5e2e5bf188a)" (RequestToken: 387601f6-9734-ac0c-36bc-5e006f892bf2, HandlerErrorCode: GeneralServiceException)
---
Resources:
  myasg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MaxSize: '10'
      MinSize: '2'
      DesiredCapacity: '2'
      VPCZoneIdentifier:
        - subnet1
        - subnet2
      MixedInstancesPolicy:
        InstancesDistribution:
          OnDemandAllocationStrategy: lowest-price
          OnDemandBaseCapacity: 0
          OnDemandPercentageAboveBaseCapacity: 0
          SpotAllocationStrategy: lowest-price
          SpotInstancePools: 2
        LaunchTemplate:
          LaunchTemplateSpecification:
            LaunchTemplateName: mylaunchtemplate
            Version: 1
      Tags:
        - Key: Environment
          Value: Production
          PropagateAtLaunch: true
        - Key: Purpose
          Value: WebServerGroup
          PropagateAtLaunch: false
  scalingpolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName:
        Ref: myasg
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        DisableScaleIn: false
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 30
      PredictiveScalingConfiguration:
        MaxCapacityBreachBehavior: IncreaseMaxCapacity
        MaxCapacityBuffer: 0
        MetricSpecifications:
          - TargetValue: 30
        Mode: ForecastAndScale
The documentation indicates that it is a best practice to "Use predictive scaling with dynamic scaling."
So, yes, it should be possible. The way to do this in CloudFormation is to associate multiple scaling policies with a single AutoScalingGroup.
For example (untested):
scalingpolicy1:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName:
      Ref: myasg
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      DisableScaleIn: false
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 30
scalingpolicy2:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName:
      Ref: myasg
    PolicyType: PredictiveScaling
    PredictiveScalingConfiguration:
      MaxCapacityBreachBehavior: IncreaseMaxCapacity
      MaxCapacityBuffer: 0
      MetricSpecifications:
        - TargetValue: 30
          # A metric specification also needs a metric; a predefined CPU pair is shown
          # here as an assumed example.
          PredefinedMetricPairSpecification:
            PredefinedMetricType: ASGCPUUtilization
      Mode: ForecastAndScale
I have an Auto Scaling group and a LaunchConfig that I created earlier. I want to replace the AMI ID in the LaunchConfig with CloudFormation. How can I do that?
Is there a sample template I can use as a reference?
A simple example can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#aws-properties-as-launchconfig--examples
---
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  LatestAmiId:
    Description: Region specific image from the Parameter Store
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
  InstanceType:
    Description: Amazon EC2 instance type for the instances
    Type: String
    AllowedValues:
      - t3.micro
      - t3.small
      - t3.medium
    Default: t3.micro
Resources:
  myLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !Ref LatestAmiId
      SecurityGroups:
        - Ref: "myEC2SecurityGroup"
      InstanceType:
        Ref: "InstanceType"
      BlockDeviceMappings:
        - DeviceName: /dev/sda1
          Ebs:
            VolumeSize: 30
            VolumeType: "gp3"
        - DeviceName: /dev/sdm
          Ebs:
            VolumeSize: 100
            DeleteOnTermination: "false"
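If the launch configuration is referenced by an Auto Scaling group in the same template, an UpdatePolicy can roll the instances whenever the launch configuration (and therefore the AMI) changes. A minimal sketch to append under Resources above (untested; the group name, sizes, and subnet parameter are assumptions):

  myASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 1
        MaxBatchSize: 1
    Properties:
      LaunchConfigurationName: !Ref myLaunchConfig
      MinSize: '1'
      MaxSize: '2'
      VPCZoneIdentifier:
        - !Ref mySubnetId   # hypothetical subnet parameter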
I have an instance created with CloudFormation like below:
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref ServerAMI
    InstanceType: !Ref ServerInstanceType
    KeyName: !Ref KeyName
    BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: 30
    NetworkInterfaces:
      - AssociatePublicIpAddress: 'false'
        DeleteOnTermination: 'true'
        DeviceIndex: '0'
        GroupSet:
          - Ref: ServerSecurityGroup
        SubnetId: !Ref SubnetID
    Tags:
      - { Key: Name, Value: !Ref AWS::StackName }
My root volume in this case is created at 30 GB. If I try to increase this root volume size by setting the VolumeSize value, my EC2 instance is terminated and recreated.
Yet in the console I am able to increase the size of my root volume without my instance being recreated.
Is there any workaround to prevent the EC2 instance from being terminated when trying to increase the root volume size via CloudFormation?
Edit:
Here is a small test stack I'm using to test this again. Deploy it once, then change VolumeSize and redeploy; it wants to replace the instance:
AWSTemplateFormatVersion: '2010-09-09'
Description: Test stack for a single ec2 instance
Parameters:
  ServerAMI:
    Type: String
    Default: ami-096f43ef67d75e998
  ServerInstanceType:
    Type: String
    Default: t2.small
  DefaultVPCID:
    Type: String
  SubnetID:
    Type: String
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ServerAMI
      InstanceType: !Ref ServerInstanceType
      KeyName: !Ref KeyName
      BlockDeviceMappings:
        - DeviceName: /dev/xvda # Linux
          Ebs:
            VolumeSize: 30
      NetworkInterfaces:
        - AssociatePublicIpAddress: 'false'
          DeleteOnTermination: 'true'
          DeviceIndex: '0'
          GroupSet:
            - Ref: ServerSecurityGroup
          SubnetId: !Ref SubnetID
      Tags:
        - { Key: Name, Value: !Ref AWS::StackName }
  ServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Webserver security group
      VpcId: !Ref DefaultVPCID
      SecurityGroupIngress:
        - { IpProtocol: tcp, FromPort: '22', ToPort: '22', CidrIp: '127.0.0.1/32', Description: 'Test Instance' }
Unfortunately, I don't believe you can; per the CloudFormation documentation:
After the instance is running, you can modify only the DeleteOnTermination parameter for the attached volumes without interrupting the instance. Modifying any other parameter results in instance replacement.
I am trying to develop an entire AWS architecture using CloudFormation only; however, I am having some issues with the integration of CodeDeploy with CloudFormation and an Auto Scaling group.
The problem is that, since I need to associate the CodeDeploy deployment group with an Auto Scaling group in order for auto-deployment to work, CloudFormation recognizes the group as being required before creating the deployment group.
What happens is that the ASG gets created and instances start to spin up BEFORE the deployment group has been created, which means that these instances will never get deployed. I tried to think of a Lambda function to forcefully deploy these instances, but the problem persists because the CodeDeploy deployment group will most likely still not be available, and even if it were, this is not reliable.
This problem only occurs when the stack is created for the first time.
This is my CloudFormation template:
[...]
  UpdateApiAutoscalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AutoScalingGroupName:
        Fn::Join:
          - ''
          - - !ImportValue UpdateApiCodeDeployApplication
            - -autoscaling-group
            - !Ref Environment
      DesiredCapacity: !Ref MinimumApiAmount
      HealthCheckGracePeriod: 30
      HealthCheckType: ELB
      LaunchConfigurationName: !Ref UpdateApiAutoscalingLaunchConfiguration
      TargetGroupARNs:
        - !Ref UpdateApiTargetGroup
      MaxSize: !Ref MaximumApiAmount
      MinSize: !Ref MinimumApiAmount
      VPCZoneIdentifier:
        - Fn::Select:
            - 0
            - !Split
              - ","
              - Fn::ImportValue:
                  !Sub "PrivateSubnets-${Environment}"
      Tags:
        - Key: Environment
          Value: !Ref Environment
          PropagateAtLaunch: true
        - Key: CompanySshAccess
          Value: 1
          PropagateAtLaunch: true
        - Key: Application
          Value: update-api
          PropagateAtLaunch: true
  # Defines how the Update API servers should be provisioned in the scaling group.
  UpdateApiAutoscalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      AssociatePublicIpAddress: false
      IamInstanceProfile: !Ref UpdateApiInstanceRole
      ImageId: !FindInMap [Api, Image, !Ref Environment]
      InstanceType: !FindInMap [Api, InstanceType, !Ref Environment]
      LaunchConfigurationName: !Sub 'update-api-launchconfig-${Environment}'
      SecurityGroups:
        - Fn::ImportValue: !Sub 'InternalBastionSecurityGroupId-${Environment}'
        - !GetAtt LoadBalancerProtectedSecurityGroup.GroupId
  UpdateApiCodeDeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      DeploymentGroupName: !Ref Environment
      DeploymentConfigName: "atleast-one-instance-online"
      ServiceRoleArn: !Ref CodeDeployServiceRoleArn # TODO: create CodeDeployServiceRole using CloudFormation
      ApplicationName: !ImportValue UpdateApiCodeDeployApplication
      LoadBalancerInfo:
        ElbInfoList:
          - Name: !GetAtt UpdateApiLoadBalancer.LoadBalancerName
      DeploymentStyle:
        DeploymentOption: !FindInMap [Api, DeploymentStyleOption, !Ref Environment]
        DeploymentType: !FindInMap [Api, DeploymentStyleType, !Ref Environment]
      AutoScalingGroups:
        - !Ref UpdateApiAutoscalingGroup
[...]
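One workaround sometimes used for the very first creation (an untested sketch; InitialApiAmount is a made-up parameter) is to create the group with zero capacity so that no instances launch before the deployment group exists, and then raise the capacity in a follow-up stack update once the deployment group and a first successful deployment are in place:

Parameters:
  InitialApiAmount:
    Type: Number
    Default: 0   # 0 for the initial creation; raised in a later update
Resources:
  UpdateApiAutoscalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: !Ref InitialApiAmount
      DesiredCapacity: !Ref InitialApiAmount
      MaxSize: !Ref MaximumApiAmount
      LaunchConfigurationName: !Ref UpdateApiAutoscalingLaunchConfiguration
      # remaining properties as in the template above

Instances launched after the deployment group has been associated with the group are picked up by CodeDeploy's lifecycle hook, so they receive the most recent successful revision (once at least one deployment has succeeded).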
I have created a CloudFormation template that creates an ECS service and task and has autoscaling for tasks. It is pretty basic: if MemoryUtilization for the tasks reaches a certain value, add 1 task, and vice versa. Here are some of the most relevant parts from the template.
EcsTd:
  Type: AWS::ECS::TaskDefinition
  DependsOn: LogGroup
  Properties:
    Family: !Sub ${EnvironmentName}-${PlatformName}-${Type}
    ContainerDefinitions:
      - Name: !Sub ${EnvironmentName}-${PlatformName}-${Type}
        Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${PlatformName}:${ImageVersion}
        Environment:
          - Name: APP_ENV
            Value: !If [isProd, "production", "staging"]
          - Name: APP_DEBUG
            Value: "false"
          ...
        PortMappings:
          - ContainerPort: 80
            HostPort: 0
        Memory: !Ref Memory
        Essential: true
EcsService:
  Type: AWS::ECS::Service
  DependsOn: WaitForLoadBalancerListenerRulesCondition
  Properties:
    ServiceName: !Sub ${EnvironmentName}-${PlatformName}-${Type}
    Cluster:
      Fn::ImportValue: !Sub ${EnvironmentName}-ECS-${Type}
    DesiredCount: !Sub ${DesiredCount}
    TaskDefinition: !Ref EcsTd
    Role: "learningEcsServiceRole"
    LoadBalancers:
      - !If
        - isWeb
        - ContainerPort: 80
          ContainerName: !Sub ${EnvironmentName}-${PlatformName}-${Type}
          TargetGroupArn: !Ref AlbTargetGroup
        - !Ref AWS::NoValue
ServiceScalableTarget:
  Type: "AWS::ApplicationAutoScaling::ScalableTarget"
  Properties:
    MaxCapacity: !Sub ${MaxCount}
    MinCapacity: !Sub ${MinCount}
    ResourceId: !Join
      - /
      - - service
        - !Sub ${EnvironmentName}-${Type}
        - !GetAtt EcsService.Name
    RoleARN: arn:aws:iam::645618565575:role/learningEcsServiceRole
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
ServiceScaleOutPolicy:
  Type: "AWS::ApplicationAutoScaling::ScalingPolicy"
  Properties:
    PolicyName: !Sub ${EnvironmentName}-${PlatformName}-${Type}-ScaleOutPolicy
    PolicyType: StepScaling
    ScalingTargetId: !Ref ServiceScalableTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 1800
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          ScalingAdjustment: 1
MemoryScaleOutAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: !Sub ${EnvironmentName}-${PlatformName}-${Type}-MemoryOver70PercentAlarm
    AlarmDescription: Alarm if memory utilization greater than 70% of reserved memory
    Namespace: AWS/ECS
    MetricName: MemoryUtilization
    Dimensions:
      - Name: ClusterName
        Value: !Sub ${EnvironmentName}-${Type}
      - Name: ServiceName
        Value: !GetAtt EcsService.Name
    Statistic: Maximum
    Period: '60'
    EvaluationPeriods: '1'
    Threshold: '70'
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref ServiceScaleOutPolicy
      - !Ref EmailNotification
...
So whenever a task starts to run out of memory, we'll add a new task. However, at some point we'll reach the limit of how much memory is available in our cluster.
For example, if the cluster consists of one t2.small instance, we have 2 GB of RAM. A small amount of that is used by the ECS agent running on the instance, so we have slightly less than 2 GB. If we set the task's memory to 512 MB, then we can place only 3 tasks in that cluster unless we scale the cluster up.
By default the ECS cluster publishes a MemoryReservation metric that can be used for autoscaling the cluster. We would say: when MemoryReservation is more than 75%, add 1 instance to the cluster. That's relatively easy.
EcsCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: !Sub ${EnvironmentName}-${Type}
SgEcsHost:
  ...
ECSLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !FindInMap [AWSRegionToAMI, !Ref 'AWS::Region', AMIID]
    InstanceType: !Ref InstanceType
    SecurityGroups: [ !Ref SgEcsHost ]
    AssociatePublicIpAddress: true
    IamInstanceProfile: "ecsInstanceRole"
    KeyName: !Ref KeyName
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        echo ECS_CLUSTER=${EnvironmentName}-${Type} >> /etc/ecs/ecs.config
ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier:
      - Fn::ImportValue: !Sub ${EnvironmentName}-SubnetEC2AZ1
      - Fn::ImportValue: !Sub ${EnvironmentName}-SubnetEC2AZ2
    LaunchConfigurationName: !Ref ECSLaunchConfiguration
    MinSize: !Ref AsgMinSize
    MaxSize: !Ref AsgMaxSize
    DesiredCapacity: !Ref AsgDesiredSize
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName}-ECS
        PropagateAtLaunch: true
ScalePolicyUp:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AdjustmentType: ChangeInCapacity
    AutoScalingGroupName:
      Ref: ECSAutoScalingGroup
    Cooldown: '1'
    ScalingAdjustment: '1'
MemoryReservationAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    EvaluationPeriods: '1'
    Statistic: Average
    Threshold: '75'
    AlarmDescription: Alarm if MemoryReservation is more than 75%
    Period: '60'
    AlarmActions:
      - Ref: ScalePolicyUp
      - Ref: EmailNotification
    # MemoryReservation is published by ECS per cluster, in the AWS/ECS namespace
    Namespace: AWS/ECS
    Dimensions:
      - Name: ClusterName
        Value: !Sub ${EnvironmentName}-${Type}
    ComparisonOperator: GreaterThanThreshold
    MetricName: MemoryReservation
However, it does not make sense, because that would happen when the third task is added, so the new instance will sit empty until a 4th task is scheduled. That means we'll be paying for an instance that we don't use.
I have noticed that when the ECS service tries to add a task to a cluster where there is not enough free memory, I get
service Production-admin-worker was unable to place a task because no
container instance met all of its requirements. The closest matching
container-instance ################### has
insufficient memory available.
In this example the template's parameters are:
EnvironmentName=Production
PlatformName=Admin
Type=worker
Is it possible to create an AWS::CloudWatch::Alarm that looks at ECS cluster events and looks for that particular pattern? The idea would be to scale up the instance count in the cluster using AWS::AutoScaling::AutoScalingGroup only when AWS::ApplicationAutoScaling::ScalingPolicy adds tasks that do not have space in the cluster, and scale down the cluster when MemoryReservation is less than 25% (meaning that there are no tasks running there; AWS::ApplicationAutoScaling::ScalingPolicy has removed them).
That means we'll be paying for an instance that we don't use.
Either you pay for the extra/backup capacity in advance, or implement logic to retry the ones that failed due to low capacity.
A couple of ways I can think of:
You could create a custom script/Lambda (https://forums.aws.amazon.com/thread.jspa?threadID=94984) that reports a metric, say load_factor, calculated as the number of tasks / number of instances, and then base your auto scaling policy on that. The Lambda can be triggered by a CloudWatch Events rule.
You could also report this from your task implementation instead of a new custom Lambda/script.
Create a metric filter that looks for a specific pattern in a log file/group and reports a metric (see the sketch after the quote below). Then of course use this metric for scaling.
From docs:
When a metric filter finds one of the terms, phrases, or values in your log events, you can increment the value of a CloudWatch metric.
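A rough sketch of the metric-filter idea in CloudFormation (untested; it assumes the "unable to place a task" service events are being shipped to a CloudWatch Logs group, which does not happen by default, and the log group reference, namespace, and metric name below are made up). The alarm reuses the ScalePolicyUp policy from the template above:

TaskPlacementFailureMetricFilter:
  Type: AWS::Logs::MetricFilter
  Properties:
    LogGroupName: !Ref EcsEventsLogGroup          # assumed log group receiving ECS service events
    FilterPattern: '"was unable to place a task"'
    MetricTransformations:
      - MetricNamespace: Custom/ECS
        MetricName: FailedTaskPlacements
        MetricValue: '1'
TaskPlacementFailureAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Add a container instance when tasks cannot be placed
    Namespace: Custom/ECS
    MetricName: FailedTaskPlacements
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 0
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref ScalePolicyUp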