Why does the ECS CloudFormation template create an EC2 Spot Fleet?

I created a cluster in ECS with basic settings; nothing specific about the configuration, except that I am using one On-Demand t2.micro EC2 instance for the cluster.
I wanted to see what exactly was created, so I took a look at the CloudFormation template that the cluster creation generated.
I noticed the template has a configuration for EcsSpotFleet:
EcsSpotFleet:
  Condition: CreateWithSpot
  Type: AWS::EC2::SpotFleet
  Properties:
    SpotFleetRequestConfigData:
      AllocationStrategy: !Ref SpotAllocationStrategy
      IamFleetRole: !Ref IamSpotFleetRoleArn
      TargetCapacity: !Ref AsgMaxSize
      SpotPrice: !If [ CreateWithSpotPrice, !Ref SpotPrice, !Ref 'AWS::NoValue' ]
      TerminateInstancesWithExpiration: true
      LaunchSpecifications:
        ....
I am wondering why this is created, because I know the cluster instances are created with an ASG + Launch Configuration. My only explanation is that this fleet is used for running the CloudFormation stack. I cannot find an explanation for this in the documentation, and I'm not even sure whether instances are needed to run a CloudFormation stack.
P.S. I am very new to AWS and also have very little knowledge of CloudFormation.

Not every resource in a CloudFormation template will be created; it depends on the resource's "Condition".
AWS usually creates a template that covers most use cases and enables/disables parts of it using Conditions.
You can read more about Conditions in the AWS documentation here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
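As a sketch of how such a flag is usually wired up (the UseSpot parameter and the exact comparison below are assumptions for illustration, not copied from the ECS-generated template), the condition is declared in the Conditions section and driven by a parameter; any resource carrying Condition: CreateWithSpot, like the EcsSpotFleet above, is simply skipped unless it evaluates to true:
Parameters:
  UseSpot:
    Type: String
    AllowedValues: [ "true", "false" ]
    Default: "false"

Conditions:
  # EcsSpotFleet is only created when this evaluates to true;
  # with the default of "false" the Spot Fleet never gets created
  CreateWithSpot: !Equals [ !Ref UseSpot, "true" ]
Presumably, since you chose a single On-Demand instance, CreateWithSpot evaluates to false in your stack, so the Spot Fleet section is never instantiated even though it appears in the template.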

Related

AWS ECS EC2 Capacity Provider not listed

I'm trying to create a new Capacity Provider
I deployed the following CloudFormation snippet:
myECSCapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    Name: my-project-asg-cp
    AutoScalingGroupProvider:
      AutoScalingGroupArn: !Ref myAutoScalingGroup
      ManagedScaling:
        MaximumScalingStepSize: 10
        MinimumScalingStepSize: 1
        Status: ENABLED
        TargetCapacity: 100
      ManagedTerminationProtection: DISABLED
I already have the ASG resource (myAutoScalingGroup) created.
When I go to the ECS console and then to the Capacity Providers tab, it is blank; no CPs are listed.
If I try to create the same CP through the console, using the name my-project-asg-cp, I see the following error:
There was an error creating the capacity provider.
Fix the following error and try again.
The specified Auto Scaling group ARN is already being used by another capacity provider.
Specify a unique Auto Scaling group ARN and try again.
So it seems the CP was somehow created, but it is not listed.
And of course, there are no errors in CloudFormation.
If I check the Resources tab I can see the resource was created:
myECSCapacityProvider my-project-asg-cp AWS::ECS::CapacityProvider CREATE_COMPLETE
The CLI doesn't show it either.
Has anyone faced this error?
Associate the capacity provider with the cluster:
ECSCluster:
  Type: 'AWS::ECS::Cluster'
  Properties:
    ClusterName: <your-ecs-cluster>
    CapacityProviders:
      - !Ref myECSCapacityProvider
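Optionally (this is not part of the original answer, just a common follow-up), you can also make the capacity provider the cluster's default strategy so services use it without having to specify one explicitly:
ECSCluster:
  Type: 'AWS::ECS::Cluster'
  Properties:
    ClusterName: <your-ecs-cluster>
    CapacityProviders:
      - !Ref myECSCapacityProvider
    DefaultCapacityProviderStrategy:
      - CapacityProvider: !Ref myECSCapacityProvider
        Weight: 1
        Base: 0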

How to get/set AWS ECS container instance role

When creating an EC2-based ECS cluster in the AWS console you can specify the container instance role.
However, after the cluster has been created, I don't see any way to view which role was attached to the cluster.
In addition, I don't see any way to specify the container instance role when creating a cluster using the CLI or in CloudFormation (or, by extension, the CDK).
My question is two-fold:
1. Can this property be specified via the API/CloudFormation?
2. Is there any way to view this property on an existing cluster, either in the console or using the API/CLI?
The console leads you to believe it's an ECS property, but in fact it's simply an EC2 property known as the "IAM Instance Profile". You specify this role by setting the IamInstanceProfile property on an AWS::EC2::Instance or, even better, on an AWS::EC2::LaunchTemplate resource that can be used by an Auto Scaling group. Small caveat: you can't put the role directly into that property; you need to create an AWS::IAM::InstanceProfile first, like so:
EcsInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - ecsInstanceRole
For the sake of completeness, here's how you would then set the property inside a launch template:
LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      IamInstanceProfile:
        Arn: !GetAtt EcsInstanceProfile.Arn
      ...
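If the ecsInstanceRole referenced above does not already exist in your account, a minimal sketch of defining it in the same template (assuming the standard AmazonEC2ContainerServiceforEC2Role managed policy covers what your container instances need):
EcsInstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # standard managed policy for ECS container instances
      - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
You would then reference it from the instance profile with Roles: [ !Ref EcsInstanceRole ] instead of the hard-coded role name.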

Cloudformation Attach instance to Auto scaling group

In my CloudFormation file, I created an instance, a Launch Configuration and an Auto Scaling Group.
LogstashInstance:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile:
      Ref: LogstashInstanceProfile
    InstanceType: t2.micro
    KeyName: chuongtest
    ImageId: ami-0cd31be676780afa7
    UserData:
    SecurityGroupIds:
      - Ref: LogstashSecurityGroup
    SubnetId: subnet-0e5691582096fe1e6
    Tags:
      - Key: Name
        Value: Logstash Instance
LogstashLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    EbsOptimized: false
    IamInstanceProfile:
      Ref: LogstashInstanceProfile
    ImageId: ami-0cd31be676780afa7
    InstanceMonitoring: true
    InstanceType: t2.micro
    KeyName: chuongtest
    LaunchConfigurationName: LogstashLaunchConfiguration
    SecurityGroups:
      - Ref: LogstashSecurityGroup
    UserData:
LogstashAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AutoScalingGroupName: LogstashAutoScalingGroup
    AvailabilityZones:
      - ap-southeast-1b
    DesiredCapacity: 1
    LaunchConfigurationName:
      Ref: LogstashLaunchConfiguration
    MaxSize: 1
    MinSize: 1
    Tags:
      - Key: Name
        PropagateAtLaunch: "false"
        Value: Logstash ASG
      - Key: Instances
        PropagateAtLaunch: "true"
        Value: Logstash
My idea is to create an instance, attach it to the ASG and use the ASG to keep the workload running.
But this code will launch 2 instances.
The first instance has user data different from the later instances, so I cannot just delete it.
I have looked in the documentation but I cannot find anything that makes sense. Is there a way to configure this in the template, or is scripting the only way?
It does not make sense because it's not practical: the ASG will only launch instances from the associated AWS::AutoScaling::LaunchConfiguration, so there is not much point in "manually" attaching instances to the ASG in your case. The ASG will not re-launch that specific instance when it fails.
If the "manually" attached instance gets terminated for some reason, e.g. a hardware failure, the ASG will launch a replacement based on the launch configuration instead.
As for the first instance having user data different from the later instances: in this case the best option would be to put a condition in your UserData that determines which version of the setup to run.
Alternatively, just have two ASGs: one for the first instance and another for the remaining instances.
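For the first option, a rough sketch of what a branching UserData in the launch configuration could look like (the check on /dev/xvdf, i.e. whether a shared data volume is attached, is purely illustrative; use whatever signal actually distinguishes your first instance):
LogstashLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    # ... same properties as in the question ...
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # Hypothetical runtime check: treat this instance as the "first" one
        # if the shared data volume is attached at /dev/xvdf.
        if [ -b /dev/xvdf ]; then
          echo "running first-instance bootstrap" >> /var/log/bootstrap.log
          # ... setup that only the first instance needs ...
        else
          echo "running standard bootstrap" >> /var/log/bootstrap.log
          # ... setup for every other instance ...
        fi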
Edit
Having discussed this in the comments, I would highly recommend the approach of using EFS, as it is designed for the kind of workload you're running.
In addition, it is multi-AZ, whereas rotating a single EBS volume between instances as they fail in an Auto Scaling group would not persist should the AZ fail, potentially leading to loss of data.
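A minimal sketch of what the EFS side could look like in the same template (the subnet and security group reuse values from the question; you would add one mount target per subnet you launch into, and the security group must allow inbound NFS on port 2049 from the instances):
LogstashFileSystem:
  Type: AWS::EFS::FileSystem
  Properties:
    Encrypted: true
LogstashMountTarget:
  Type: AWS::EFS::MountTarget
  Properties:
    FileSystemId: !Ref LogstashFileSystem
    SubnetId: subnet-0e5691582096fe1e6
    SecurityGroups:
      - Ref: LogstashSecurityGroup
Instances launched by the ASG would then mount the file system from their UserData (for example with the amazon-efs-utils mount helper), so a replacement instance sees the same data no matter which AZ it comes up in.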
Original
If that first instance is different from the rest, you should be careful about adding it to the Auto Scaling group (since you want it to persist), especially with regard to how the group handles the replacement of instances.
Auto Scaling groups are designed to scale similar instances that are generally immutable, so if your host is in this Auto Scaling group it will be treated as if it were just one of the other hosts.
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management.
If you must add it to the autoscaling group, here is the general approach (a condensed sketch follows at the end of this answer):
1. Create a Custom Resource which invokes a Lambda that attaches the EC2 instance to the ASG using attach_instances.
2. Call set_instance_protection on your instance to prevent it from being replaced during a scale-in event (or the instance failing).
3. Also ensure you set both MinSize and DesiredCapacity to 0 to remove any extra instance that gets launched.
As I stated earlier, you should not add this instance to the ASG if it can be avoided; it will probably lead to confusion.
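A condensed sketch of steps 1 and 2 (the inline Lambda, its execution role, and all resource names here are illustrative assumptions, not code from the original answer; a production version would also wait for the instance to be InService before setting protection):
AttachInstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    Policies:
      - PolicyName: attach-to-asg
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - autoscaling:AttachInstances
                - autoscaling:SetInstanceProtection
              Resource: "*"
AttachInstanceFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.12
    Handler: index.handler
    Timeout: 60
    Role: !GetAtt AttachInstanceRole.Arn
    Code:
      ZipFile: |
        import boto3, cfnresponse

        def handler(event, context):
            try:
                if event['RequestType'] == 'Create':
                    props = event['ResourceProperties']
                    asg = boto3.client('autoscaling')
                    # attach the standalone instance to the group ...
                    asg.attach_instances(
                        InstanceIds=[props['InstanceId']],
                        AutoScalingGroupName=props['AsgName'])
                    # ... and protect it from scale-in events
                    asg.set_instance_protection(
                        InstanceIds=[props['InstanceId']],
                        AutoScalingGroupName=props['AsgName'],
                        ProtectedFromScaleIn=True)
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            except Exception as exc:
                cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(exc)})
AttachLogstashInstance:
  Type: Custom::AttachInstanceToAsg
  Properties:
    ServiceToken: !GetAtt AttachInstanceFunction.Arn
    InstanceId: !Ref LogstashInstance
    AsgName: !Ref LogstashAutoScalingGroup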

Creating subnet per availability zone using cloudformation

I need to create a subnet per Availability Zone in a particular region using CloudFormation.
For example, if the region is Mumbai, which has three Availability Zones, the CF template should create a public and a private subnet in each Availability Zone: 1a, 1b and 1c. Is this really possible? I have done the same using Terraform but have no idea how to achieve this in CF.
It would be great if someone could help with this.
Thanks in advance.
Sadly, there are no loops in plain CloudFormation. Thus you can't create any constructs that would loop over AZs, get their IDs and create a pair of private-public subnets in each AZ.
If you really want to keep everything in CloudFormation, then you would have to look at custom resources or macros.
Both would require you to write your own Lambda function that uses the AWS API to get the number of AZs and their names, and then iterates to create the subnets.
If you are already using Terraform successfully, it may be worth keeping it, as it has the loops that are useful for your use case.
You might be able to write it in plain CloudFormation, for example with conditional resources (Condition / Fn::If) combined with Fn::GetAZs, creating each extra subnet only if the region has enough AZs, which you can hard-code per region in a condition such as the Has6AZs condition below (defined in the template's Conditions section):
PrivateSubnet6:
  Condition: Has6AZs
  Type: AWS::EC2::Subnet
  Properties:
    VpcId:
      Ref: MyVPC
    CidrBlock: 10.0.20.0/22
    AvailabilityZone:
      Fn::Select:
        - 5
        - Fn::GetAZs: !Ref 'AWS::Region'
    Tags:
      - Key: "Name"
        Value: "PrivateSubnet6"

Has6AZs:
  Fn::Equals: [ !Ref 'AWS::Region', "us-east-1" ]

How to fix AWS CloudFormation drift after Start and Stop of an EC2 instance?

I have an AWS CloudFormation stack. I started and stopped the EC2 instance in that stack. Now the stack is drifted, and below are the drift results. How do I resolve this issue, given that Expected and Actual are both the same?
I don't believe this is related to restarting your AWS::EC2::Instance. I've both rebooted and stopped and started an instance using this template and the drift status is still IN_SYNC after running drift detection:
Parameters:
  ImageId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      Tags:
        - Key: Key
          Value: Value
Do the tags on the EC2 instance match the tags from the template?