I am looking to create a Spot Fleet in CloudFormation which runs a single game server at a time; if prices spike and the server needs to be terminated, it will use the 2-minute interruption notice to shut down gracefully and store anything to be persisted on an EBS volume. The next instance started by the fleet will then mount the volume and restart the game server from where the previous one left off.
SpotFleet:
  Type: "AWS::EC2::SpotFleet"
  Properties:
    SpotFleetRequestConfigData:
      IamFleetRole: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-ec2-spot-fleet-tagging-role
      TargetCapacity: 1
      LaunchSpecifications:
        - InstanceType: "m5.large"
          ImageId: "ami-abcd1234"
          IamInstanceProfile:
            Arn: !GetAtt InstanceProfile.Arn
          WeightedCapacity: 1
Now I'm stuck on defining the persisted volume in the CloudFormation template. Initially I would just add it as a resource:
Volume:
  Type: "AWS::EC2::Volume"
  Properties:
    Size: 10
    AvailabilityZone: !Select [0, !GetAZs '']  # a volume needs an AZ, not a region
But then how do I reference it in the fleet? You can define BlockDeviceMappings on LaunchSpecifications within a fleet, as per
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-spotfleet-spotfleetrequestconfigdata-launchspecifications-blockdevicemappings.html
but none of the available attributes seem to let me reference an existing volume, which gives me the idea that these volumes are not persisted.
Alternatively I thought of attaching the volume to the spot instance via a VolumeAttachment:
VolumeAttachment:
  Type: "AWS::EC2::VolumeAttachment"
  Properties:
    Device: "/dev/sdf"
    InstanceId: !Ref SpotFleet
    VolumeId: !Ref Volume
but obviously the SpotFleet reference here returns the Spot Fleet request ID, not the ID of any created instance. And neither !Ref nor !GetAtt seems to be able to extract those IDs from a fleet.
Am I overlooking anything crucial as to how to accomplish the above in CloudFormation, or should I be looking at adding the ec2:AttachVolume and ec2:DetachVolume permissions to the InstanceProfile and simply attaching the volume manually from within the EC2 instance?
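For illustration, I imagine the manual approach would be something like this in the launch specification's UserData (an untested sketch; it assumes the AMI ships the AWS CLI and the instance profile allows ec2:AttachVolume):

UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    # Untested sketch: self-attach the persistent volume at boot.
    # ${Volume} is resolved by CloudFormation to the volume ID;
    # the instance ID comes from the instance metadata service.
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws ec2 attach-volume --volume-id ${Volume} --instance-id "$INSTANCE_ID" --device /dev/sdf --region ${AWS::Region}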
Many thanks,
EC2 Spot Instances now support setting the "Interruption behavior" to stop instead of terminate.
When this option is selected, an interrupted Spot Instance retains its instance ID, its private and Elastic IP addresses, and its EBS volumes, which remain in place and attached.
Some instance types also support a "hibernate" option that writes a snapshot of the entire system state to EBS, allowing the instance to "resume" rather than reboot when capacity becomes available again.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html
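In CloudFormation this corresponds to the InstanceInterruptionBehavior property of SpotFleetRequestConfigData; a minimal sketch based on the fleet from the question:

SpotFleet:
  Type: "AWS::EC2::SpotFleet"
  Properties:
    SpotFleetRequestConfigData:
      IamFleetRole: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-ec2-spot-fleet-tagging-role
      TargetCapacity: 1
      InstanceInterruptionBehavior: stop   # keep the instance and its EBS volumes on interruption
      LaunchSpecifications:
        - InstanceType: "m5.large"
          ImageId: "ami-abcd1234"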
What you are looking for is the BlockDeviceMappings property of LaunchSpecifications, which is a property of SpotFleetRequestConfigData, which in turn is a property of the AWS::EC2::SpotFleet resource type.
The BlockDeviceMappings property allows you to define additional EBS volumes to attach to your launch specification, which is what controls device mappings at launch time.
For example:
"BlockDeviceMappings" : [{
"DeviceName" : "/dev/sdf",
"Ebs" : {"VolumeSize": "10", "VolumeType" : "gp2", "DeleteOnTermination" : "true"}
}],
will specify a 10 GB volume on the /dev/sdf device of your Spot Fleet instances.
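If your template is YAML like the one in the question, the equivalent fragment inside the launch specification would look something like this (set DeleteOnTermination to false if the volume should outlive the instance; note that either way each launch creates a fresh volume rather than re-attaching an existing one):

BlockDeviceMappings:
  - DeviceName: /dev/sdf
    Ebs:
      VolumeSize: 10
      VolumeType: gp2
      DeleteOnTermination: true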
Related
In my CloudFormation file, I created an instance, a Launch Configuration and an Auto Scaling Group.
LogstashInstance:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile:
      Ref: LogstashInstanceProfile
    InstanceType: t2.micro
    KeyName: chuongtest
    ImageId: ami-0cd31be676780afa7
    UserData:
    SecurityGroupIds:
      - Ref: LogstashSecurityGroup
    SubnetId: subnet-0e5691582096fe1e6
    Tags:
      - Key: Name
        Value: Logstash Instance
LogstashLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    EbsOptimized: false
    IamInstanceProfile:
      Ref: LogstashInstanceProfile
    ImageId: ami-0cd31be676780afa7
    InstanceMonitoring: true
    InstanceType: t2.micro
    KeyName: chuongtest
    LaunchConfigurationName: LogstashLaunchConfiguration
    SecurityGroups:
      - Ref: LogstashSecurityGroup
    UserData:
LogstashAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AutoScalingGroupName: LogstashAutoScalingGroup
    AvailabilityZones:
      - ap-southeast-1b
    DesiredCapacity: 1
    LaunchConfigurationName:
      Ref: LogstashLaunchConfiguration
    MaxSize: 1
    MinSize: 1
    Tags:
      - Key: Name
        PropagateAtLaunch: "false"
        Value: Logstash ASG
      - Key: Instances
        PropagateAtLaunch: "true"
        Value: Logstash
My idea is to create an instance, attach it to the ASG, and use the ASG to keep the work going.
But this code launches 2 instances.
The first instance has UserData different from the later instances, so I can not simply delete it.
I have looked in the documentation but I can not find anything that makes sense. Is there a way to configure this in the template, or is scripting the only way?
I have looked in the documentation but I can not find anything that makes sense. Is there a way to configure this in the template, or is scripting the only way?
It does not make sense because it's not practical. The ASG will launch instances only from the associated AWS::AutoScaling::LaunchConfiguration, so there is not much sense in "manually" attaching instances to the ASG in your case: the ASG will not re-launch that specific instance when it fails.
If the "manually" attached instance gets terminated for some reason, e.g. hardware failure, the ASG will launch a replacement based on the launch configuration instead.
userdata different from the later instances
In this case the best option would be to put a condition in your UserData which determines which version of the setup to run, as in the sketch below.
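A minimal sketch of that idea (the marker file and script paths are hypothetical; the marker would need to live on storage that survives instance replacement):

UserData:
  Fn::Base64: |
    #!/bin/bash
    # Hypothetical: pick the setup variant based on whether
    # the shared state has already been initialized.
    if [ ! -f /mnt/shared/.initialized ]; then
      /opt/bootstrap/first-instance-setup.sh    # first-instance UserData path
      touch /mnt/shared/.initialized
    else
      /opt/bootstrap/regular-setup.sh           # every later instance
    fi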
Alternatively, just have two ASGs: one for the first instance, and another group for the remaining instances.
Edit
Having discussed via the comments, I would highly recommend the approach of using EFS, as it is designed for the kind of workload you're describing.
In addition, it is multi-AZ, whereas rotating a single EBS volume between instances in an Auto Scaling group would not survive an AZ failure and could potentially lead to loss of data.
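A minimal sketch of the EFS resources (the subnet and security group are taken from the question; in practice you would add one mount target per AZ, and the security group must allow NFS, TCP 2049, from the instances):

LogstashFileSystem:
  Type: AWS::EFS::FileSystem
  Properties:
    Encrypted: true
LogstashMountTarget:
  Type: AWS::EFS::MountTarget
  Properties:
    FileSystemId: !Ref LogstashFileSystem
    SubnetId: subnet-0e5691582096fe1e6
    SecurityGroups:
      - !Ref LogstashSecurityGroup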
Original
If that first instance is different from the rest, you should be careful about adding it to the Auto Scaling group (as you want it to persist), especially in how the group handles the replacement of instances.
Auto Scaling groups are designed to scale similar instances that are generally immutable, so by having your host in this Auto Scaling group it will be treated as if it were one of the other hosts.
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management.
If you must add it to the Auto Scaling group, here is the general approach:
Create a Custom Resource that has a Lambda attach the EC2 instance to the ASG using attach_instances (see the sketch after this list)
Add set_instance_protection to your instance to prevent it being replaced during a scale-in event (or when the instance fails)
Also ensure you set both MinSize and DesiredCapacity to 0 to remove any extra instance that gets launched.
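A trimmed sketch of that custom resource (the role and resource names are illustrative; a production version would also handle Delete events, report failures, and wait for the instance to be InService before enabling protection):

AttachInstanceFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.12
    Handler: index.handler
    Role: !GetAtt AttachInstanceRole.Arn   # hypothetical role allowing autoscaling:AttachInstances etc.
    Code:
      ZipFile: |
        import boto3
        import cfnresponse

        def handler(event, context):
            props = event['ResourceProperties']
            if event['RequestType'] == 'Create':
                asg = boto3.client('autoscaling')
                # Attach the standalone instance, then protect it from scale-in
                asg.attach_instances(
                    InstanceIds=[props['InstanceId']],
                    AutoScalingGroupName=props['AutoScalingGroupName'])
                asg.set_instance_protection(
                    InstanceIds=[props['InstanceId']],
                    AutoScalingGroupName=props['AutoScalingGroupName'],
                    ProtectedFromScaleIn=True)
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
AttachLogstashInstance:
  Type: Custom::AttachInstance
  Properties:
    ServiceToken: !GetAtt AttachInstanceFunction.Arn
    InstanceId: !Ref LogstashInstance
    AutoScalingGroupName: !Ref LogstashAutoScalingGroup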
As I stated earlier, you should not add this instance to the ASG if it can be avoided; it will probably lead to confusion.
I need to do the following actions in sequence and am wondering if I should use CloudFormation to achieve this:
Launch a new EC2 instance (currently I'm manually doing it by selecting "Launch more like these" on a specific instance).
Stop the new instance.
Detach the volume from the new instance.
Create a new volume from a previously created snapshot.
Attach that newly created volume to the new EC2 instance created in step 1.
Restart the EC2 instance.
If this can't be done via CloudFormation, would it be possible to automate it somehow?
It sounds like you are wanting to launch an Amazon EC2 instance with the boot disk coming from an Amazon EBS Snapshot.
Might I suggest a simpler process?
Rather than creating a Snapshot of the Amazon EBS volume, instead create an Amazon Machine Image (AMI) of the original instance. Then, when launching the new Amazon EC2 instance, simply select the AMI. This will result in a new instance starting up with the desired boot disk.
Alternatively, you can create an AMI from an existing Amazon EBS Snapshot by selecting the Snapshot and choosing the Create Image command. (But I think this only works for Linux, not Windows.) Then, launch new EC2 instances from the AMI.
Behind the scenes, an AMI is actually just an Amazon EBS Snapshot with some additional information.
Take John's advice and use an AMI. This sample will get you started: it launches a single EC2 instance using an AMI (the latest patched one) in an Auto Scaling group of Min 1 / Max 1, so one EC2 instance will always be running regardless of a power failure, an AZ going down, etc.
Replace XYZ with your product's name:
Parameters:
  KeyPairName:
    Description: >-
      Mandatory. Enter a Public/private key pair. If you do not have one in this region,
      please create it before continuing
    Type: 'AWS::EC2::KeyPair::KeyName'
  EnvType:
    Description: Environment Name
    Default: dev
    Type: String
    AllowedValues: [dev, test, prod]
  Subnet1ID:
    Description: 'ID of the subnet 1 for the auto scaling group'
    Type: 'AWS::EC2::Subnet::Id'
  Subnet2ID:
    Description: 'ID of the subnet 2 for the auto scaling group'
    Type: 'AWS::EC2::Subnet::Id'
  Subnet3ID:
    Description: 'ID of the subnet 3 for the auto scaling group'
    Type: 'AWS::EC2::Subnet::Id'
Resources:
  XYZMainLogGroup:
    Type: 'AWS::Logs::LogGroup'
  SSHMetricFilter:
    Type: 'AWS::Logs::MetricFilter'
    Properties:
      LogGroupName: !Ref XYZMainLogGroup
      FilterPattern: 'ON FROM USER PWD'
      MetricTransformations:
        - MetricName: SSHCommandCount
          MetricValue: 1
          MetricNamespace: !Join
            - /
            - - AWSQuickStart
              - !Ref 'AWS::StackName'
  XYZAutoScalingGroup:
    Type: 'AWS::AutoScaling::AutoScalingGroup'
    Properties:
      LaunchConfigurationName: !Ref XYZLaunchConfiguration
      AutoScalingGroupName: !Join
        - '.'
        - - !Ref 'AWS::StackName'
          - 'ASG'
      VPCZoneIdentifier:
        - !Ref Subnet1ID
        - !Ref Subnet2ID
        - !Ref Subnet3ID
      MinSize: 1
      MaxSize: 1
      Cooldown: '300'
      DesiredCapacity: 1
      Tags:
        - Key: Name
          Value: 'The Name'
          PropagateAtLaunch: 'true'
  XYZLaunchConfiguration:
    Type: 'AWS::AutoScaling::LaunchConfiguration'
    Properties:
      AssociatePublicIpAddress: 'false'
      PlacementTenancy: default
      KeyName: !Ref KeyPairName
      ImageId: ami-123432164a1b23da1
      IamInstanceProfile: "BaseInstanceProfile"
      InstanceType: t2.small
      SecurityGroups:
        - Fn::If: [CreateDevResources, !Ref DevSecurityGroup, !Ref "AWS::NoValue"]
Yes, you can automate all these tasks using SSM Automation.
Specifically, your SSM Automation can consist of the following documents/actions:
AWS-AttachEBSVolume
AWS-DetachEBSVolume
AWS-StopEC2Instance
AWS-StartEC2Instance
AWS-RestartEC2Instance
Your SSM Automation can be triggered by CloudWatch Events, and the Automation itself can be constructed using CloudFormation.
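For example, a trimmed sketch of such an Automation document as a CloudFormation resource (the resource name, device, and hard-coded AZ are illustrative assumptions; the detach step for the old volume and the waits for the volume to become available are elided for brevity):

VolumeSwapAutomation:
  Type: AWS::SSM::Document
  Properties:
    DocumentType: Automation
    Content:
      schemaVersion: '0.3'
      parameters:
        InstanceId: {type: String}
        SnapshotId: {type: String}
      mainSteps:
        - name: stopInstance
          action: aws:changeInstanceState
          inputs:
            InstanceIds: ['{{ InstanceId }}']
            DesiredState: stopped
        - name: createVolume
          action: aws:executeAwsApi
          inputs:
            Service: ec2
            Api: CreateVolume
            SnapshotId: '{{ SnapshotId }}'
            AvailabilityZone: us-east-1a   # assumption: must match the instance's AZ
          outputs:
            - Name: VolumeId
              Selector: $.VolumeId
              Type: String
        - name: attachVolume
          action: aws:executeAwsApi
          inputs:
            Service: ec2
            Api: AttachVolume
            InstanceId: '{{ InstanceId }}'
            VolumeId: '{{ createVolume.VolumeId }}'
            Device: /dev/xvdf
        - name: startInstance
          action: aws:changeInstanceState
          inputs:
            InstanceIds: ['{{ InstanceId }}']
            DesiredState: running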
I am trying to create an EBS Volume and attach it to my EC2 instance. The instance has its own Auto Scaling Group and Launch Configuration. I want it such that if this instance becomes unhealthy and terminates, the EBS volume should automatically get attached to the new instance that is spun up by the Auto Scaling Group. The mount commands are in the Launch Configuration so that's not a problem.
Here is my code:
Influxdbdata1Asg:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    TargetGroupARNs:
      - !Ref xxxx
    VPCZoneIdentifier:
      - !GetAtt 'NetworkInfo.PrivateSubnet1Id'
    LaunchConfigurationName: !Ref yyyy
    MinSize: 1
    MaxSize: 1
    DesiredCapacity: 1
Data1:
  Type: AWS::EC2::Volume
  DeletionPolicy: Retain
  Properties:
    Size: !Ref 'DataEbsVolumeSize'
    AvailabilityZone: !GetAtt 'NetworkInfo.PrivateSubnet1Id'
    Tags:
      - Key: Name
        Value: !Join
          - '-'
          - - !Ref 'AWS::StackName'
            - data1
Attachdata1:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    InstanceId: !Ref ????
    VolumeId: !Ref Data1
    Device: /dev/xvdb
Unfortunately you can't do this using:
Attachdata1:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    InstanceId: !Ref ????
    VolumeId: !Ref Data1
    Device: /dev/xvdb
The reason is that the instances are being launched by the ASG, and you will not know their IDs.
Attaching must be done outside of CloudFormation, as you can't know upfront what the instance ID will be. As the other answer mentions, use Lifecycle Hooks.
Or, even better, use storage independent of the ASG, such as EFS, which automatically persists between instance launches and terminations and can be mounted by multiple instances.
For this problem you would specifically want to make use of Lifecycle Hooks, which trigger whenever an instance is launched or terminated.
To do this, your lifecycle hook would notify an SNS topic, which would then invoke a Lambda function. This Lambda function would perform the change before acknowledging that the lifecycle action is complete.
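A minimal sketch of the hook itself (the SNS topic and role are hypothetical placeholders):

LaunchHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref Influxdbdata1Asg
    LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
    NotificationTargetARN: !Ref AttachVolumeTopic   # hypothetical SNS topic
    RoleARN: !GetAtt LifecycleHookRole.Arn          # hypothetical role that can publish to the topic
    HeartbeatTimeout: 300
    DefaultResult: ABANDON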
There is a blog post written about this here.
Your question mentions CloudFormation; however, this would still involve lifecycle hooks to trigger the action. You would need a CloudFormation stack with an AWS::EC2::VolumeAttachment resource, and the Lambda would need to update the InstanceId property in that stack to perform the change.
I'm using an Auto Scaling group with a mixed instances policy, where OnDemandBaseCapacity and OnDemandPercentageAboveBaseCapacity are 0, so it won't launch any On-Demand instances but will always try to request and launch Spot Instances when needed.
My CloudFormation spec for the Auto Scaling group:
AutoScalingGroupForApiServers:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier: !Ref VpcSubnetsForApiLoadBalancer
    MinSize: !Ref ASGMinSizeForApiServers
    MaxSize: !Ref ASGMaxSizeForApiServers
    HealthCheckType: !Ref HealthCheckTypeForApiServers
    HealthCheckGracePeriod: !FindInMap [ Constants, '-', AutoScalingGroupDefaultHealthCheckGracePeriod ]
    MixedInstancesPolicy:
      InstancesDistribution:
        OnDemandBaseCapacity: 0
        OnDemandPercentageAboveBaseCapacity: 0
        SpotAllocationStrategy: lowest-price
        SpotInstancePools: 2
      LaunchTemplate:
        LaunchTemplateSpecification:
          LaunchTemplateId: !Ref AutoScalingLaunchTemplateForApiServers
          Version: !GetAtt AutoScalingLaunchTemplateForApiServers.LatestVersionNumber
    LoadBalancerNames:
      - !Ref ElasticLoadBalancerForApiServers
I have two questions:
1) If one Spot Instance terminates and there's no other Spot Instance available, will it launch an On-Demand instance and then scale it down to 0?
2) Upon receiving the 2-minute termination notice, will it automatically remove the instance from the referenced load balancers/target groups, or do I have to handle that manually with CloudWatch/SNS/Lambda?
1) If one Spot Instance terminates and there's no other Spot Instance available, will it launch an On-Demand instance and then scale it down to 0?
It will try to maintain the On-Demand:Spot ratio specified in your config. For example, if you have set 30% On-Demand and 70% Spot, it will keep the Auto Scaling group's instances at 30% On-Demand and 70% Spot. If Spot Instances are not available, it won't launch On-Demand instances to compensate.
2) Upon receiving the 2-minute termination notice, will it automatically remove the instance from the referenced load balancers/target groups, or do I have to handle that manually with CloudWatch/SNS/Lambda?
If you have linked the Auto Scaling group with a target group, it will automatically remove terminated or unavailable instances from the target groups.
I have
Parameters:
  Zookeeper1SubnetParam:
    Description: Subnet where Zookeeper 1 should run
    Type: AWS::EC2::Subnet::Id
  Zookeeper1AZ:
    Description: Availability Zone of the Subnet
    Type: AWS::EC2::AvailabilityZone::Name
From these parameters I'm creating an ENI (which requires a subnet) and an EBS volume (which requires an Availability Zone).
Here's the ENI:
Zookeeper1IPResource:
  Type: AWS::EC2::NetworkInterface
  Properties:
    Description: Zookeeper1-IP
    GroupSet:
      - Fn::GetAtt:
          - ZookeeperSecurityGroup
          - GroupId
    PrivateIpAddress:
      Ref: Zookeeper1IPParam
    SubnetId:
      Ref: Zookeeper1SubnetParam
And here's the EBS:
Zookeeper1EBSVolume:
  Type: AWS::EC2::Volume
  Properties:
    AvailabilityZone:
      Ref: Zookeeper1AZ
    Size: 8
    VolumeType: gp2
I find it really bad for user experience to also ask for an Availability Zone as a parameter, because it can be deduced from the selected subnet.
Now, the million dollar question: how do I get the Availability Zone from the Subnet in CloudFormation? As far as I can tell, I can't do a GetAtt for the AZ on my ENI.
Any solution welcome!
To answer your question: you can't retrieve the Availability Zone from a Subnet that is passed in as a parameter.
But if you have total control of the template or the resources that supply the parameter, there are workarounds.
If you have control over the source that provides the Subnet parameter, you can also return the Availability Zone from that source as an Output and supply it to your template as a parameter where you create the ENI and EBS.
In addition, you could create the Subnet in the same template where you create the ENI and EBS and use { "Fn::GetAtt" : [ "mySubnet", "AvailabilityZone" ] }.
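A short sketch of that second workaround (the VPC reference and CIDR block are illustrative assumptions):

mySubnet:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref MyVPC          # hypothetical VPC resource
    CidrBlock: 10.0.0.0/24
Zookeeper1EBSVolume:
  Type: AWS::EC2::Volume
  Properties:
    AvailabilityZone: !GetAtt mySubnet.AvailabilityZone   # AZ deduced from the subnet
    Size: 8
    VolumeType: gp2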
Question (sorry, my rep doesn't allow me to comment yet):
Do you happen to have dynamic values or resources to be created that depend on Availability Zones? If yes, you can create Mappings, and if that is not enough, you could add Conditions to your template.
I don't know if it is something new, but according to the documentation you can get the AZ of a subnet with GetAtt.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#aws-resource-ec2-subnet-returnvalues
Quoting the documentation:
{ "Fn::GetAtt" : [ "mySubnet", "AvailabilityZone" ] }
UPDATE:
This suggestion is wrong; let me quote another piece of AWS documentation:
Supported Functions
For the Fn::GetAtt logical resource name, you cannot use functions. You must specify a string that is a resource's logical ID.
For the Fn::GetAtt attribute name, you can use the Ref function.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html#getatt-supported-functions
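In other words, the GetAtt snippet only works when mySubnet is a resource declared in the same template; you cannot apply Fn::GetAtt to a subnet ID that arrives as a parameter:

# Works: mySubnet is a logical resource defined in this template
AvailabilityZone: !GetAtt mySubnet.AvailabilityZone

# Does not work: Zookeeper1SubnetParam is a parameter, not a resource,
# so there is nothing for Fn::GetAtt to resolve
# AvailabilityZone: !GetAtt Zookeeper1SubnetParam.AvailabilityZone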