How to set EBS root volume to persist for an EC2 instance within Elastic Beanstalk using Terraform - amazon-web-services

I have written Terraform to manage my AWS Elastic Beanstalk environment and application, using the default docker solution stack for my region.
The EC2 instance created by autoscaling has the standard/default EBS root volume, whose "DeleteOnTermination" setting is "true" -- meaning that when the instance is replaced or destroyed, the volume (and hence all the data) is destroyed as well.
I would like to change this to false and persist the volume.
For some reason, I cannot find valid Terraform documentation for how to change this setting so that the root volume persists. The closest thing I can find is that, for the autoscaling launch configuration, a "root_block_device" mapping can be supplied to update it. Unfortunately, the documentation does not make clear exactly how to use this. If I create a launch configuration resource, how do I use it within my Beanstalk definition? I think I'm on the right track here but need some guidance.
Do I create the autoscaling resource and then reference it within my Beanstalk definition? Or do I add a particular setting to my Beanstalk definition with this mapping inside? Thanks for any help or example you can provide.

This can be done at the EB level through Resources.
Specifically, you have to modify the settings of the AWSEBAutoScalingLaunchConfiguration that EB uses to launch your instances.
Here is an example of such a config file:
.ebextensions/40_ebs_delete_on_termination.config
Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            DeleteOnTermination: false
Then, to verify the setting, you can use the AWS CLI:
aws ec2 describe-volumes --volume-ids <id-of-your-eb-instance-volume>
or simply terminate the environment and check the volumes in the EC2 console.
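To tie this back to the Terraform side of the question: the .ebextensions file is not referenced from a separate Terraform resource; it simply has to sit inside the application source bundle that the environment deploys. Below is a minimal sketch of one way to wire that up from Terraform. All names, paths, and the solution stack string are hypothetical placeholders, so treat it as an illustration rather than a drop-in configuration.

# Sketch only: the .ebextensions config above ships inside the application
# source bundle; Terraform uploads that bundle and points the environment at it.
data "archive_file" "app_bundle" {
  type        = "zip"
  source_dir  = "${path.module}/app"       # contains .ebextensions/40_ebs_delete_on_termination.config
  output_path = "${path.module}/app.zip"
}

resource "aws_s3_bucket" "deploy" {
  bucket = "example-deploy-bundles"        # placeholder bucket name
}

resource "aws_s3_bucket_object" "app_bundle" {
  bucket = aws_s3_bucket.deploy.id
  key    = "app-${data.archive_file.app_bundle.output_md5}.zip"
  source = data.archive_file.app_bundle.output_path
}

resource "aws_elastic_beanstalk_application" "example" {
  name = "example-app"
}

resource "aws_elastic_beanstalk_application_version" "example" {
  name        = "example-${data.archive_file.app_bundle.output_md5}"
  application = aws_elastic_beanstalk_application.example.name
  bucket      = aws_s3_bucket_object.app_bundle.bucket
  key         = aws_s3_bucket_object.app_bundle.key
}

resource "aws_elastic_beanstalk_environment" "example" {
  name                = "example-env"
  application         = aws_elastic_beanstalk_application.example.name
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.12.17 running Docker 18.06.1-ce"  # placeholder
  version_label       = aws_elastic_beanstalk_application_version.example.name
}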

You can use the ebs_block_device block within the aws_instance resource. By default this will delete the EBS volume when the instance is terminated, but that behaviour can be overridden with delete_on_termination.
https://www.terraform.io/docs/providers/aws/r/instance.html#block-devices
You have to use the above instead of the aws_volume_attachment resource.
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default: true).
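As a concrete reference, here is a minimal sketch of those blocks on an aws_instance. The AMI ID, device names, and sizes are placeholder values, and the root_block_device block shown alongside is the analogous block for the root volume itself.

# Minimal sketch with placeholder values: keep the volumes around after the
# instance is terminated by turning off delete_on_termination.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"

  # Root volume: delete_on_termination also defaults to true here.
  root_block_device {
    volume_size           = 8
    delete_on_termination = false
  }

  # Additional EBS volume created together with the instance.
  ebs_block_device {
    device_name           = "/dev/sdb"
    volume_size           = 20
    volume_type           = "gp2"
    delete_on_termination = false            # overrides the default of true
  }
}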

Related

What's happening with EBS Volume when an EC2 instance is terminated?

When I terminated an EC2 instance, I expected the additional EBS volume (not the root volume) to be detached before the instance was terminated.
However, even when I look through CloudTrail, I cannot find a DetachVolume event.
When I terminate an EC2 instance, is the EBS volume disconnected in some way without actually being detached?
When an AWS EC2 instance is terminated, each attached EBS volume is either detached and deleted, or detached and preserved. Which happens depends on the value of the volume's Delete on termination attribute. You can see this in the AWS EC2 console by selecting the instance and navigating to the Storage tab.
By default, its value is True for the root volume and False for the other volumes.
From the AWS EC2 console you can only set this value when launching a new instance; for an already running instance, you have to modify it with the AWS CLI.
Examples using the AWS CLI are below:
Using a .json file: aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings file:///path/to/file.json
.json file format:
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]
Using a .json object inline: aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings "[{\"DeviceName\": \"/dev/sda\",\"Ebs\":{\"DeleteOnTermination\":false}}]"
For more information, check this out: How can I prevent my Amazon EBS volumes from being deleted when I terminate Amazon EC2 instances?
When an instance terminates, the value of the DeleteOnTermination attribute for each attached EBS volume determines whether to preserve or delete the volume. By default, the DeleteOnTermination attribute is set to True for the root volume, and is set to False for all other volume types.
Delete on Termination set to false on the second volume:
Volume ID   Device name   Size (GiB)   Status     Encrypted   KMS ID   Delete on Termination
vol-09***   /dev/xvda     8            Attached   No          –        Yes
vol-03**    /dev/sdb      8            Attached   No          –        No
Status of the non-root volume after the instance is terminated: Available
Delete on Termination set to True on the second volume:
Volume ID   Device name   Size (GiB)   Status     Encrypted   KMS ID   Delete on Termination
vol-09***   /dev/xvda     8            Attached   No          –        Yes
vol-03**    /dev/sdb      8            Attached   No          –        Yes
Status of the EBS volumes (apart from the root volume) after the instance is terminated: deleted

AWS EMR provisioning fails when I use custom AMI

Problem:
I have an EMR cluster (along with a number of other resources) defined in a CloudFormation template, and I use the AWS REST API to provision my stack. This works; I can provision the stack successfully.
Then I made one change: I specified a custom AMI for my EMR cluster. Now my stack creation fails because the EMR provisioning fails. The only information I can find is an error on the console: null: Error provisioning instances. Digging into each instance, I see that the master node failed with error Status: Terminated. Last state change reason: Time out occurred during bootstrap.
I have S3 logging configured for my EMR cluster, but there are no logs in the S3 bucket.
Details:
I updated my cloudformation script like so:
my_stack.cfn.yaml:
rMyEmrCluster:
  Type: AWS::EMR::Cluster
  ...
  Properties:
    ...
    CustomAmiId: "ami-xxxxxx"  # <-- I added this
Custom AMI details:
I am adding a custom AMI because I need to encrypt the root EBS volume on all of my nodes. (This is required per documentation)
The steps I took to create my custom AMI:
1. I launched the base AMI that is used by AWS for EMR nodes: emr 5.7.0-ami-roller-27 hvm ebs (ID: ami-8a5cb8f3)
2. I created an image from my running instance
3. I created a copy of this image, with EBS root volume encryption enabled, using the default encryption key. (I must create my own base image from a running instance, because you are not allowed to create an encrypted copy from an AMI you don't own.)
I wonder if this might be a permissions issue, or perhaps my AMI is misconfigured in some way. But it would be prudent for me to find some logs first, to figure out exactly what is going wrong with node provisioning.
I feel stupid. I accidentally used a completely unrelated AMI (a Red Hat 7 image) as the base image, instead of the AMI that EMR uses for its nodes by default: emr 5.7.0-ami-roller-27 hvm ebs (ami-8a5cb8f3)
I'll leave this question and answer up in case someone else makes the same mistake.
Make sure you create your custom AMI from the correct base AMI: emr 5.7.0-ami-roller-27 hvm ebs (ami-8a5cb8f3)
You mention that you created your custom AMI based on an EMR AMI. However, according to the documentation you linked, you should actually base your AMI on "the most recent EBS-backed Amazon Linux AMI". Your custom AMI does not need to be based on an EMR AMI, and indeed I suppose that doing so could cause some problems (though I have not tried it myself).

Encrypt root volume of EC2 while creating stack using cloud formation

I am working on a CloudFormation script that will create a simple EC2 instance. Here I want to encrypt the root volume at launch time. It is possible to create a separate EBS volume, encrypt it, and attach it as the boot volume, but I couldn't find a way to encrypt it while launching. Is there any way to do this?
Thanks in advance.
It looks like AWS has recently released a feature to launch an instance with an encrypted volume based on a non-encrypted AMI.
Launch encrypted EBS backed EC2 instances from unencrypted AMIs in a single step
From the CloudFormation perspective, you need to override the AMI's block device configuration. For example, you can write this:
BlockDeviceMappings:
  - DeviceName: "/dev/xvda"
    Ebs:
      VolumeSize: '8'
      Encrypted: 'true'
This will start an instance with an encrypted root EBS volume from a non-encrypted AMI, using the default KMS key.
We can't encrypt the root volume during launch. Here is what you need to do.
Always use custom KMS keys.
If you have the unencrypted AMI, just copy the AMI to the same region and use the encryption option there.
Then use that AMI in your CloudFormation template.
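Since the rest of this document is Terraform-focused, it may also be worth noting that the same single-step encryption can be expressed on an aws_instance via root_block_device, assuming a reasonably recent AWS provider version. The following is only a sketch with placeholder values, not a verified configuration.

# Sketch only (placeholder values): launch from an unencrypted AMI but encrypt
# the root volume at launch; the default EBS KMS key is used unless kms_key_id
# is set. Assumes a recent terraform-provider-aws version.
resource "aws_instance" "encrypted_root" {
  ami           = "ami-0123456789abcdef0"   # placeholder, unencrypted AMI
  instance_type = "t3.micro"

  root_block_device {
    volume_size = 8
    encrypted   = true
    # kms_key_id = aws_kms_key.example.arn  # optional custom CMK
  }
}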

Auto Scaling Group Reattach Volume

I have an ECS cluster running one Docker container which I want to run for only several hours per day. When the instance is not needed I want it to be stopped, but I found that I can't stop an instance in an ASG because it will get terminated automatically. So when I set the desired count of the ASG to 0, my instance gets terminated and all volumes that were attached to that instance also get wiped, so I lose my data.
I've set the BlockDeviceMappings of my CloudFormation template to persist those volumes:
BlockDeviceMappings:
  - DeviceName: '/dev/xvda'
    Ebs:
      DeleteOnTermination: false
  - DeviceName: '/dev/xvdcz'
    Ebs:
      DeleteOnTermination: false
When the instance is terminated and a new one is launched, I would like to reattach the existing EBS volumes, but instead two new separate volumes are created. How can I make it reattach the already existing EBS volumes instead of creating new ones for each new EC2 instance?

How to import changes to a EBS volume after sizing it up back to Terraform?

After running out of space I had to resize my EBS volume. Now I want to make the size part of my Terraform configuration, so I added the following block to the aws_instance resource:
ebs_block_device {
  device_name = "/dev/sda1"
  volume_size = 32
  volume_type = "gp2"
}
Now, after running terraform plan, it wants to destroy the existing volume, which is terrible. I also tried to import the existing volume using terraform import, but it wanted me to use a different name for the resource, which is also not great.
So what is the correct procedure here?
The aws_instance resource docs mention that changes to any EBS block devices will cause the instance to be recreated.
To get around this, you can grow the EBS volume outside of Terraform using AWS's elastic volumes feature. Terraform also cannot detect changes to any of the attached block devices created in the aws_instance resource:
NOTE: Currently, changes to *_block_device configuration of existing resources cannot be automatically detected by Terraform. After making updates to block device configuration, resource recreation can be manually triggered by using the taint command.
As such, you shouldn't need to go back and change anything in your Terraform configuration unless you want to rebuild the instance using Terraform at some point, at which point the worry about losing the instance is obviously moot.
However, if for some reason you want to be able to make the change to your Terraform configuration and keep the instance from being destroyed then you would need to manipulate your state file.
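Purely as an illustration (this is not part of the answer above, and the names and values are placeholders): if you do want the resized volume recorded in configuration without Terraform scheduling a replacement, one option to experiment with is an ignore_changes lifecycle rule on the block device, assuming a reasonably recent Terraform version.

# Illustrative sketch only, with placeholder values: record the resized volume
# in configuration but tell Terraform to ignore diffs on it, so the existing
# instance and its volume are not scheduled for replacement.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 32
    volume_type = "gp2"
  }

  lifecycle {
    ignore_changes = [ebs_block_device]
  }
}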