I'm working on a CloudFormation script that will create a simple EC2 instance. I want to encrypt the root volume at launch time. It's possible to create a separate EBS volume, encrypt it, and attach it as the boot volume, but I couldn't find a way to encrypt the root volume while launching. Is there any way to do this?
Thanks in advance.
It looks like AWS has recently released a feature to launch an instance with an encrypted volume from a non-encrypted AMI:
Launch encrypted EBS backed EC2 instances from unencrypted AMIs in a single step
From the CloudFormation perspective, you need to override the AMI's block device configuration. For example, you can write it like this:
BlockDeviceMappings:
  - DeviceName: "/dev/xvda"
    Ebs:
      VolumeSize: '8'
      Encrypted: 'true'
This will start an instance with an encrypted root EBS volume from a non-encrypted AMI, using the default KMS key.
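For context, here is a minimal sketch of a complete instance resource using such a mapping; the logical ID, AMI ID, and instance type below are placeholders, not values from the question:

Resources:
  MyEncryptedInstance:              # hypothetical logical ID
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx         # unencrypted source AMI (placeholder)
      InstanceType: t3.micro        # placeholder instance type
      BlockDeviceMappings:
        - DeviceName: "/dev/xvda"   # must match the AMI's root device name
          Ebs:
            VolumeSize: 8
            Encrypted: true         # root volume is encrypted at launch with the default key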
We can't encrypt the root volume during launch. Here is what you need to do:
Always use custom KMS keys.
If you have an unencrypted AMI, copy the AMI to the same region and use the encryption option there.
Then use that AMI in your CloudFormation template.
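Once the encrypted copy exists, the template only has to reference it; a minimal sketch, with a hypothetical parameter name and placeholder values:

Parameters:
  EncryptedAmiId:
    Type: AWS::EC2::Image::Id       # ID of the encrypted AMI copy
Resources:
  MyInstance:                       # hypothetical logical ID
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref EncryptedAmiId  # root volume inherits the AMI's encryption
      InstanceType: t3.micro        # placeholder instance type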
When I terminated an EC2 instance, I thought the instance would be terminated only after the additional EBS volume (not the root volume) was detached.
However, even when I look through CloudTrail, I can't find an event named DetachVolume.
When I terminate an EC2 instance, is the EBS volume somehow disconnected without being detached?
What happens to an EBS volume when an EC2 instance is terminated?
When an AWS EC2 instance is terminated, the attached EBS volume either gets detached and deleted, or just gets detached without being deleted. Which happens depends on the value of the volume's Delete on termination attribute. You can see this in the EC2 console by selecting the instance and navigating to the Storage tab.
By default, its value is True for the root volume and False for the other volumes.
From the EC2 console, you can only set this value when launching a new instance; for an already running instance, you have to modify it with the AWS CLI.
Examples using AWS CLI are below:
Using a .json file: aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings file:///path/to/file.json
.json file format:
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]
Using a JSON string inline: aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings "[{\"DeviceName\": \"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":false}}]"
For more information, check this out: How can I prevent my Amazon EBS volumes from being deleted when I terminate Amazon EC2 instances?
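If the instance is defined in CloudFormation rather than launched by hand, the same attribute can also be set at launch time through the instance's BlockDeviceMappings; a minimal sketch, assuming the device name matches your AMI's root device:

BlockDeviceMappings:
  - DeviceName: /dev/sda1           # root device name of the AMI (varies by AMI)
    Ebs:
      DeleteOnTermination: false    # keep the root volume when the instance is terminated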
When an instance terminates, the value of the DeleteOnTermination attribute for each attached EBS volume determines whether to preserve or delete the volume. By default, the DeleteOnTermination attribute is set to True for the root volume, and is set to False for all other volume types.
Delete on Termination = false for the data volume (/dev/sdb):

Volume ID   Device name   Size (GiB)   Status     Encrypted   KMS ID   Delete on Termination
vol-09***   /dev/xvda     8            Attached   No          –        Yes
vol-03**    /dev/sdb      8            Attached   No          –        No

Status of the data volume after the instance is terminated: available

Delete on Termination = true for the data volume (/dev/sdb):

Volume ID   Device name   Size (GiB)   Status     Encrypted   KMS ID   Delete on Termination
vol-09***   /dev/xvda     8            Attached   No          –        Yes
vol-03**    /dev/sdb      8            Attached   No          –        Yes

Status of the data volume after the instance is terminated: deleted
I have written Terraform to manage my AWS Elastic Beanstalk environment and application, using the default docker solution stack for my region.
The EC2 instance created by autoscaling has the standard/default EBS root volume, whose "DeleteOnTermination" setting is "true", meaning that when the instance is replaced or destroyed, the volume (and hence all the data) is also destroyed.
I would like to change this to false and persist the volume.
For some reason, I cannot find valid Terraform documentation for how to change this setting so that the root volume persists. The closest thing I can find is that for the autoscaling launch configuration, a "root_block_device" mapping can be supplied to update it. Unfortunately, it is unclear from the documentation how exactly to use this. If I create a launch configuration resource, how do I use it within my Beanstalk definition? I think I'm on the right track here but need some guidance.
Do I create the autoscaling resource and then reference it within my Beanstalk definition? Or do I add a particular setting to my Beanstalk definition with this mapping inside? Thanks for any help or example you can provide.
This can be done at the EB level through Resources.
Specifically, you have to modify the settings of the AWSEBAutoScalingLaunchConfiguration resource that EB uses to launch your instances.
Here is an example of such a config file:
.ebextensions/40_ebs_delete_on_termination.config
Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            DeleteOnTermination: false
Then, to verify the setting, you can use the AWS CLI:
aws ec2 describe-volumes --volume-ids <id-of-your-eb-instance-volume>
or simply terminate the environment and check the volumes in the EC2 console.
You can use the ebs_block_device block within the aws_instance resource. By default, this will delete the EBS volume when the instance is terminated.
https://www.terraform.io/docs/providers/aws/r/instance.html#block-devices
You have to use the above instead of the aws_volume_attachment resource.
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default: true).
Problem:
I have an EMR cluster (along with a number of other resources) defined in a CloudFormation template. I use the AWS REST API to provision my stack. It works; I can provision the stack successfully.
Then I made one change: I specified a custom AMI for my EMR cluster. Now my stack creation fails because the EMR provisioning fails. The only information I can find is an error on the console: "null: Error provisioning instances." Digging into each instance, I see that the master node failed with "Status: Terminated. Last state change reason: Time out occurred during bootstrap".
I have S3 logging configured for my EMR cluster, but there are no logs in the S3 bucket.
Details:
I updated my cloudformation script like so:
my_stack.cfn.yaml:
rMyEmrCluster:
  Type: AWS::EMR::Cluster
  ...
  Properties:
    ...
    CustomAmiId: "ami-xxxxxx" # <-- I added this
Custom AMI details:
I am adding a custom AMI because I need to encrypt the root EBS volume on all of my nodes. (This is required per documentation)
The steps I took to create my custom AMI:
I launched the base AMI that is used by AWS for EMR nodes: emr 5.7.0-ami-roller-27 hvm ebs (ID: ami-8a5cb8f3)
I created an image from my running instance
I created a copy of this image, with EBS root volume encryption enabled, using the default encryption key. (I had to create my own base image from a running instance, because you are not allowed to create an encrypted copy of an AMI you don't own.)
I wonder if this might be a permissions issue, or perhaps my AMI is misconfigured in some way. But it would be prudent for me to find some logs first, to figure out exactly what is going wrong with node provisioning.
I feel stupid. I accidentally used a completely unrelated AMI (a Red Hat 7 image) as the base image, instead of the AMI that EMR uses for its nodes by default: emr 5.7.0-ami-roller-27 hvm ebs (ami-8a5cb8f3).
I'll leave this question and answer up in case someone else makes the same mistake.
Make sure you create your custom AMI from the correct base AMI: emr 5.7.0-ami-roller-27 hvm ebs (ami-8a5cb8f3)
You mention that you created your custom AMI based on an EMR AMI. However, according to the documentation you linked, you should actually base your AMI on "the most recent EBS-backed Amazon Linux AMI". Your custom AMI does not need to be based on an EMR AMI, and indeed I suppose that doing so could cause some problems (though I have not tried it myself).
As a requirement, I need to have all my EBS volumes encrypted with a customer-managed KMS key (and not the default aws/ebs one).
In the LaunchConfiguration's BlockDeviceMappings properties I do see an "Encrypted" property, but I do not see any way of specifying a custom KMS key.
I see a SnapshotId property which could allow me to point to an encrypted snapshot, but how will this behave? Will each box that spins up create an empty volume from that snapshot?
What is the best way to achieve this? Is my only option to create the volume in the user data and attach it there?
AWS Auto Scaling groups do not support specifying alternate KMS keys when EC2 instances are launched.
When you run an EC2 instance via ec2:RunInstances, ec2:RequestSpotFleet, or ec2:RequestSpotInstances, you can specify an alternate KMS key to use to encrypt the EBS volumes. When this KMS key is omitted, the KMS key used to encrypt the EBS snapshot is used instead.
However, Auto Scaling launch configurations do not support specifying a KMS key, so it's not possible to use an alternative KMS key when launching instances through an Auto Scaling group. The KMS key used to encrypt the snapshot will always be used.
Source: https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_Ebs.html
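In practice, the way to get a customer-managed key onto Auto Scaling volumes is therefore to bake it into a snapshot first. A hedged CloudFormation sketch of a launch configuration whose data volume comes from a CMK-encrypted snapshot (logical ID, AMI ID, and snapshot ID are placeholders):

MyLaunchConfig:                      # hypothetical logical ID
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-xxxxxxxx            # placeholder AMI ID
    InstanceType: t3.micro           # placeholder instance type
    BlockDeviceMappings:
      - DeviceName: /dev/sdb         # data volume
        Ebs:
          SnapshotId: snap-xxxxxxxx  # snapshot already encrypted with the custom CMK
          VolumeSize: 100            # placeholder size in GiB
          DeleteOnTermination: true

Each instance the group launches creates its own volume from that snapshot, and the volume inherits the snapshot's KMS key.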
Is it possible to use a custom encryption key for EBS data volumes using Packer? kms_key_id is only used for the encryption of the boot volume. How can we encrypt block device mappings (data EBS volumes)?
Unfortunately that doesn't seem to be supported by AWS. See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html and http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html
As a workaround, you can prepare a CMK-encrypted (empty) snapshot and reference it in your block device mappings in Packer. That should give you a snapshot encrypted with the KMS key you want.