Auto Scaling Group Reattach Volume - amazon-web-services

I have an ECS cluster running one Docker container, which I only need to run for a few hours per day. When the instance isn't needed I want it stopped, but I found that I can't stop an instance in an ASG because it gets terminated automatically. So when I set the ASG's desired count to 0, my instance is terminated and all the volumes that were attached to it are wiped, so I lose my data.
I've set the BlockDeviceMappings of my CloudFormation template to persist those volumes:
BlockDeviceMappings:
  - DeviceName: '/dev/xvda'
    Ebs:
      DeleteOnTermination: false
  - DeviceName: '/dev/xvdcz'
    Ebs:
      DeleteOnTermination: false
The volumes now survive termination, but when the ASG launches a replacement instance, two new separate volumes are created instead of the old ones being reattached. How can I make each new EC2 instance reattach to the already existing EBS volumes instead of creating new ones?
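A common workaround (not part of the original thread; every identifier below is a placeholder) is to stop declaring the data volume in BlockDeviceMappings altogether and instead attach one fixed, pre-created EBS volume from user data when each replacement instance boots. The instance profile needs ec2:AttachVolume, and the ASG must be pinned to the volume's Availability Zone:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    # Placeholder ID of the pre-created, persistent data volume
    VOLUME_ID=vol-0123456789abcdef0
    # Ask the instance metadata service which instance we are
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    # Attach the existing volume instead of letting CloudFormation create a new one
    aws ec2 attach-volume --volume-id "$VOLUME_ID" \
      --instance-id "$INSTANCE_ID" --device /dev/xvdf --region ${AWS::Region}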

Related

How can I attach an existing EBS volume to an EC2 instance with Ansible?

I've looked through the Ansible documentation for ec2_instance and ec2_vol but both seem to only support creating a new EBS volume (either blank or from an EBS snapshot). I would like to attach an existing EBS volume directly to an instance, not create a snapshot of it and then create a new volume from that snapshot. Is there an Ansible module that does this or should I just use shell and run the right AWS CLI command?
You're looking for the combination of the id: and instance: properties, as indicated by the description next to the id: property:
Volume id if you wish to attach an existing volume (requires instance) or remove an existing volume
- amazon.aws.ec2_vol:
    id: '{{ my_volume_id }}'
    instance: '{{ my_inst_id }}'
    device_name: /dev/xvdf # or whatever
Be aware (this bit me): when the docs say to pass instance: None to detach the volume, they really do mean the literal string None, not null or anything else.
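For completeness, a detach using that literal value, with the same placeholder variable as above:
- amazon.aws.ec2_vol:
    id: '{{ my_volume_id }}'
    instance: None # the literal string None, not null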

What's happening with EBS Volume when an EC2 instance is terminated?

When I terminated an EC2 instance, I assumed the instance would be terminated only after the additional EBS volume (not the root volume) was detached.
However, even when I look in CloudTrail, I can't find an event named DetachVolume.
When I terminate an EC2 instance, is the EBS volume simply disconnected without ever being detached?
When an AWS EC2 instance is terminated, each attached EBS volume either gets detached and deleted, or just gets detached and kept. That depends on the value of the volume's Delete on termination attribute, which you can see in the AWS EC2 console by selecting the instance and opening the Storage tab.
By default, its value is True for the root volume and False for all other volumes.
From the EC2 console you can only set this value when launching a new instance; for an already running instance, you have to use the AWS CLI.
Examples using AWS CLI are below:
Using a .json file: aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings file:///path/to/file.json
.json file format:
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]
Using a .json object inline: aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings "[{\"DeviceName\": \"/dev/sda\",\"Ebs\":{\"DeleteOnTermination\":false}}]"
For more information, check this out: How can I prevent my Amazon EBS volumes from being deleted when I terminate Amazon EC2 instances?
When an instance terminates, the value of the DeleteOnTermination attribute for each attached EBS volume determines whether to preserve or delete the volume. By default, the DeleteOnTermination attribute is set to True for the root volume, and is set to False for all other volume types.
Delete on termination - false

Volume ID   Device name   Size   Status     Encrypted   KMS ID   Delete on termination
vol-09***   /dev/xvda     8      Attached   No          –        Yes
vol-03**    /dev/sdb      8      Attached   No          –        No

Status of the non-root volume after the instance is terminated: available

Delete on termination - true

Volume ID   Device name   Size   Status     Encrypted   KMS ID   Delete on termination
vol-09***   /dev/xvda     8      Attached   No          –        Yes
vol-03**    /dev/sdb      8      Attached   No          –        Yes

Status of the non-root volume after the instance is terminated: deleted
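You can also check the current DeleteOnTermination value of every attached volume from the CLI (reusing the example instance ID from above):
aws ec2 describe-instances --instance-ids i-a3ef245 --query "Reservations[].Instances[].BlockDeviceMappings[].{Device:DeviceName,DeleteOnTermination:Ebs.DeleteOnTermination}"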

How to set EBS root volume to persist for an EC2 instance within Elastic Beanstalk using Terraform

I have written Terraform to manage my AWS Elastic Beanstalk environment and application, using the default docker solution stack for my region.
The EC2 instance created by autoscaling has the standard/default EBS root volume, whose "DeleteOnTermination" setting is "true", meaning that when the instance is replaced or destroyed, the volume (and hence all the data) is destroyed as well.
I would like to change this to false and persist the volume.
For some reason, I cannot find valid Terraform documentation for how to change this setting so that the root volume persists. The closest thing I can find is that for the autoscaling launch configuration, a "root_block_device" mapping can be supplied to update it. Unfortunately, it is unclear from the documentation how exactly to use this. If I create a launch configuration resource, how do I use it within my Beanstalk definition? I think I'm on the right track here but need some guidance.
Do I create the autoscaling resource and then reference it within my beanstalk definition? Or do I add a particular setting to my beanstalk definition with this mapping inside? Thanks for any help or example you can provide.
This can be done at the EB level through Resources.
Specifically, you have to modify the settings of the AWSEBAutoScalingLaunchConfiguration that EB uses to launch your instances.
Here is an example of such a config file:
.ebextensions/40_ebs_delete_on_termination.config
Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            DeleteOnTermination: false
Then to verify the setting, you can use AWS CLI:
aws ec2 describe-volumes --volume-ids <id-of-your-eb-instance-volume>
or simply terminate the environment and check the Volumes in EC2 console.
You can use the ebs_block_device block within the aws_instance resource. By default this deletes the EBS volume when the instance is terminated.
https://www.terraform.io/docs/providers/aws/r/instance.html#block-devices
You have to use the above instead of the aws_volume_attachment resource.
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default: true).
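Since this question is about the root volume specifically, the root_block_device block takes the same argument. A minimal sketch on a standalone aws_instance (the resource name and AMI are placeholders; note this does not apply to instances that Elastic Beanstalk manages through its own Auto Scaling group, where the .ebextensions override above is the way to go):
resource "aws_instance" "example" {
  ami           = "ami-abcd1234" # placeholder AMI
  instance_type = "t3.micro"

  # Keep the root volume (and its data) when this instance is destroyed
  root_block_device {
    delete_on_termination = false
  }
}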

Encrypt root volume of EC2 while creating stack using cloud formation

I'm working on a CloudFormation script that creates a simple EC2 instance. I want to encrypt the root volume at launch time. It's possible to create a separate EBS volume, encrypt it, and attach it as the boot volume, but I couldn't find a way to encrypt the root volume while launching. Is there any way to do this?
Thanks in advance.
It looks like AWS has recently released a feature to launch an instance with an encrypted volume based on a non-encrypted AMI.
Launch encrypted EBS backed EC2 instances from unencrypted AMIs in a single step
From the CloudFormation perspective, you need to override the AMI's block device configuration. For example, you can write this:
BlockDeviceMappings:
  - DeviceName: "/dev/xvda"
    Ebs:
      VolumeSize: '8'
      Encrypted: 'true'
This will launch an instance with an encrypted root EBS volume from a non-encrypted AMI, using the default KMS key.
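If you would rather use a customer-managed key than the default, the same Ebs mapping on AWS::EC2::Instance also accepts a KmsKeyId (the key alias below is a placeholder):
BlockDeviceMappings:
  - DeviceName: "/dev/xvda"
    Ebs:
      VolumeSize: '8'
      Encrypted: 'true'
      KmsKeyId: alias/my-custom-key # placeholder customer-managed key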
We can't encrypt the root volume during launch. Here is what you need to do:
Always use custom KMS keys.
If you have an unencrypted AMI, copy the AMI within the same region and use the encrypt option there.
Then use that AMI in your CloudFormation.
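That copy-and-encrypt step is a single CLI call (the region, AMI ID, and key alias are placeholders):
aws ec2 copy-image --source-image-id ami-abcd1234 --source-region us-east-1 --region us-east-1 --name my-encrypted-copy --encrypted --kms-key-id alias/my-custom-key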

Attaching an EBS to an EC2 instance from a Spot Fleet

I am looking to create a Spot Fleet in CloudFormation which runs a single game server at a time; if prices spike and the server needs to be terminated, it will use the two-minute heads-up to shut down gracefully and store anything to be persisted on an EBS volume. The next instance started by the fleet will then mount the volume and restart the game server from where the previous one left off.
SpotFleet:
  Type: "AWS::EC2::SpotFleet"
  Properties:
    SpotFleetRequestConfigData:
      IamFleetRole: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-ec2-spot-fleet-tagging-role
      TargetCapacity: 1
      LaunchSpecifications:
        - InstanceType: "m5.large"
          ImageId: "ami-abcd1234"
          IamInstanceProfile:
            Arn: !GetAtt InstanceProfile.Arn
          WeightedCapacity: 1
Now I'm stuck on defining the persisted volume in the CloudFormation template. Initially I would just add it as a resource:
Volume:
  Type: "AWS::EC2::Volume"
  Properties:
    Size: 10
    AvailabilityZone: !Select [0, !GetAZs ''] # an Availability Zone, not a region
But then how do I reference it in the fleet? You can define BlockDeviceMappings on LaunchSpecifications within a fleet as per
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-spotfleet-spotfleetrequestconfigdata-launchspecifications-blockdevicemappings.html
but from the attributes available I can't seem to reference existing volumes, so I am getting the idea that these volumes are not persisted.
Alternatively I thought of attaching the volume to the spot instance via a VolumeAttachment:
VolumeAttachment:
  Type: "AWS::EC2::VolumeAttachment"
  Properties:
    Device: "/dev/sdf"
    InstanceId: !Ref SpotFleet
    VolumeId: !Ref Volume
but obviously the SpotFleet reference here returns the fleet's name, not the IDs of any created instances, and neither !Ref nor !GetAtt seems able to extract those IDs from a fleet.
Am I overlooking anything crucial as to how to accomplish the above in CloudFormation, or should I be looking at adding the ec2:AttachVolume and ec2:DetachVolume permissions to the InstanceProfile and simply attaching the volume manually from within the EC2 instance?
Many thanks,
EC2 Spot instances now support the option of setting the "Interruption behavior" to stop instead of terminate.
When this option is selected, a Spot instance retains its instance ID, its private and Elastic IP addresses, and its EBS volumes, which remain in place and attached.
Some instance types also support a "hibernate" option that writes a snapshot of the entire system state to EBS, allowing the instance to resume rather than reboot when capacity becomes available again.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html
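In the template from the question, this maps to the InstanceInterruptionBehavior property of SpotFleetRequestConfigData. A sketch (stop requires a fleet whose request type is maintain, which is the default):
SpotFleetRequestConfigData:
  InstanceInterruptionBehavior: stop # keep the instance and its EBS volumes on interruption
  IamFleetRole: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-ec2-spot-fleet-tagging-role
  TargetCapacity: 1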
What you are looking for is the BlockDeviceMappings property of the LaunchSpecifications, which is a property of SpotFleetRequestConfigData on the AWS::EC2::SpotFleet resource type.
The BlockDeviceMappings property will allow you to define additional EBS volumes to attach to your launch specification. This is the specification that controls device mappings at launch time.
For example:
"BlockDeviceMappings" : [{
"DeviceName" : "/dev/sdf",
"Ebs" : {"VolumeSize": "10", "VolumeType" : "gp2", "DeleteOnTermination" : "true"}
}],
will specify a 10 GB volume on the /dev/sdf device of your Spot Fleet instances.