I have an AWS EC2 instance to which I want to attach storage that is not deleted when the instance is terminated. The management should be done using CloudFormation.
So far, I do this using the following snippet:
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda",
"Ebs": {
"DeleteOnTermination": "false",
"VolumeSize": "10",
"VolumeType": "gp2"
}
}
],
Reading also about AWS::EC2::Volume and AWS::EC2::VolumeAttachment, can somebody explain the differences? What are the benefits and disadvantages of using one approach over the other? And how do I use these resources together with an EC2 instance?
AWS::EC2::Volume just creates a new EBS volume. On its own it is not attached to anything and not yet available for use.
AWS::EC2::VolumeAttachment attaches the new volume to a running EC2 instance, where it is exposed as a block (storage) device.
So, you need AWS::EC2::Volume first to know the VolumeId, and then supply it to AWS::EC2::VolumeAttachment:
{
    "Type" : "AWS::EC2::VolumeAttachment",
    "Properties" : {
        "Device" : String,
        "InstanceId" : String,
        "VolumeId" : String
    }
}
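For example, a minimal sketch (MyVolume and MyVolumeAttachment are placeholder names, and MyInstance is assumed to be an AWS::EC2::Instance defined elsewhere in the template):

"MyVolume" : {
    "Type" : "AWS::EC2::Volume",
    "DeletionPolicy" : "Retain",
    "Properties" : {
        "Size" : "10",
        "VolumeType" : "gp2",
        "AvailabilityZone" : { "Fn::GetAtt" : [ "MyInstance", "AvailabilityZone" ] }
    }
},
"MyVolumeAttachment" : {
    "Type" : "AWS::EC2::VolumeAttachment",
    "Properties" : {
        "Device" : "/dev/sdf",
        "InstanceId" : { "Ref" : "MyInstance" },
        "VolumeId" : { "Ref" : "MyVolume" }
    }
}

A standalone volume like this is not deleted when the instance terminates, and DeletionPolicy: Retain additionally keeps it if the stack itself is deleted.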
You use BlockDeviceMappings when you create an AMI or when you launch a new EC2 instance.
You use AWS::EC2::VolumeAttachment when you attach an EBS volume to a running EC2 instance. You can attach multiple additional EBS volumes.
You can also attach and detach the root device, as mentioned here:
If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html
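For completeness, the detach/re-attach flow from the CLI could look like this sketch (instance and volume IDs are placeholders):

# The instance must be stopped before detaching its root volume
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# Re-attach it (to this or another instance) as the root device
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sda1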
I am trying to delete orphaned snapshots, but the query I am using keeps returning snapshots that have been deleted. Is there a query I can use to exclude deleted snapshots?
aws ec2 describe-snapshots --snapshot-id snap-00012345cac2b3de1
{
    "Snapshots": [
        {
            "Description": "DescriptionHere",
            "Encrypted": false,
            "OwnerId": "123456088429",
            "Progress": "100%",
            "SnapshotId": "snap-00012345cac2b3de1",
            "StartTime": "2018-01-24T06:42:50+00:00",
            "State": "completed",
            "VolumeId": "vol-00123dc456ad5117",
            "VolumeSize": 6,
            "StorageTier": "standard"
        }
    ]
}
To test your situation, I did the following:
Went to the EC2 Management Console and displayed Amazon EBS Volumes
Created a Snapshot of an EBS Volume: snap-036851d7351b78712
Ran aws ec2 describe-snapshots --snapshot-id snap-036851d7351b78712
It returned a result similar to yours
Deleted the Snapshot in the Management Console
Ran the above command again. The result was:
An error occurred (InvalidSnapshot.NotFound) when calling the DescribeSnapshots operation: The snapshot 'snap-036851d7351b78712' does not exist.
So, I was unable to reproduce your situation.
I then wondered whether the Snapshot might be associated with an AMI. I did the following:
Created an AMI of an existing Amazon EC2 instance
Waited until the AMI creation was complete
Listed Snapshots in the console -- a new snapshot appeared: snap-047563373ab4c1088
I then tried to delete the snapshot, but received the message:
snap-047563373ab4c1088: The snapshot snap-047563373ab4c1088 is currently in use by ami-0fc62425d087dbbe8
I then 'deregistered' (deleted) the AMI, and it told me that the associated Snapshot would not be deleted.
I then manually deleted the Snapshot in the console
I used describe-snapshots and it said that the snapshot did not exist
So, perhaps your Snapshot is associated with an AMI and it was never actually deleted?
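If so, one way to check is to list the snapshot IDs still referenced by your AMIs and exclude those from your cleanup. A sketch using the standard CLI (adjust --owners to your account setup):

# Snapshot IDs referenced by the block device mappings of your own AMIs
aws ec2 describe-images --owners self --query 'Images[].BlockDeviceMappings[].Ebs.SnapshotId' --output text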
I'm trying to create an AMI with Packer in an AWS CodeBuild project. This AMI will be used in a launch template, and the launch template will be used by an ASG. When the ASG launches an instance from this launch template, it should work with an existing target group for an ALB.
For clarification, my expectation is:
1. Generate an AMI in a CodeBuild project with Packer
2. Create a launch template with the #1 AMI
3. Use the #2 launch template for the ASG
4. The ASG launches a new instance
5. The existing target group health-checks the #4 instance
At step 5, my existing target group failed to health-check the new instance because it was in a different VPC
(the existing target group uses a custom VPC, and the #4 instance was in the default VPC).
So I went back to #1 to set the same VPC during AMI generation.
But the CodeBuild project failed when it ran the Packer template.
It returned the output below:
==> amazon-ebs: Prevalidating AMI Name...
amazon-ebs: Found Image ID: ami-12345678
==> amazon-ebs: Creating temporary keypair: packer_6242d99f-6cdb-72db-3299-12345678
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation.
Before this update, there were no VPC- or subnet-related settings in the Packer template, and everything worked.
I added some VPC-related permissions for this CodeBuild project, but no luck yet.
Below is my builders configuration in packer-template.json:
"builders": [
{
"type": "amazon-ebs",
"region": "{{user `aws_region`}}",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"associate_public_ip_address": true,
"subnet_id": "subnet-12345678",
"vpc_id": "vpc-12345678",
"iam_instance_profile": "blah-profile-12345678",
"security_group_id": "sg-12345678",
"ami_name": "{{user `new_ami_name`}}",
"ami_description": "AMI from Packer {{isotime \"20060102-030405\"}}",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "{{user `source_ami_name`}}",
"root-device-type": "ebs"
},
"owners": ["************"],
"most_recent": true
},
"tags": {
"Name": "{{user `new_ami_name`}}"
}
}
],
Added in this step (did not exist before):
subnet_id
vpc_id
iam_instance_profile
security_group_id
Q1. Is this the correct configuration to use a VPC here?
Q1-1. If yes, which permissions are required to allow this task?
Q1-2. If not, could you let me know the correct format?
Q2. Or is this even the right way to get instances that can communicate with my existing target groups?
Thanks in advance. Any pointers will be helpful to me.
I got some help from a local community.
I now see that I wrote too broad a question without enough information. There were several issues.
I should have used CloudTrail instead of CloudWatch to find out which role and which actions were causing problems. My CodeBuild project was missing the ec2:RunInstances permission.
After I saw this in CloudTrail, I updated the role policy for the CodeBuild project and that part passed. But there was another issue.
After Packer launched the instance, it failed to connect over SSH. I found answers on Stack Overflow by searching for Packer's SSH timeout issue, and updated the security group to allow SSH for Packer.
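For reference, a sketch of the kind of statement I added to the CodeBuild role policy (this is an approximation of what Packer's amazon-ebs builder typically needs, not my exact policy; trim it to taste):

{
    "Effect": "Allow",
    "Action": [
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "ec2:DescribeInstances",
        "ec2:CreateImage",
        "ec2:DescribeImages",
        "ec2:CreateKeyPair",
        "ec2:DeleteKeyPair",
        "ec2:CreateTags",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:DescribeSecurityGroups"
    ],
    "Resource": "*"
}

Because the template sets iam_instance_profile, the role likely also needs iam:PassRole on the role behind that instance profile.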
I will remove this question if required.
Thanks to my local community and the previous answerers and questioners on Stack Overflow.
I'm running a shell activity on an EC2 resource. Below is a sample JSON for creating the EC2 resource:
{
    "id" : "MyEC2Resource",
    "type" : "Ec2Resource",
    "actionOnTaskFailure" : "terminate",
    "actionOnResourceFailure" : "retryAll",
    "maximumRetries" : "1",
    "instanceType" : "m5.large",
    "securityGroupIds" : [
        "sg-12345678",
        "sg-12345678"
    ],
    "subnetId" : "subnet-12345678",
    "associatePublicIpAddress" : "true",
    "keyPair" : "my-key-pair"
}
The JSON above creates the EC2 resource using Data Pipeline, but I want to give the resource a name, so that when I open the EC2 console it shows the resource's name along with the other attributes; currently the name is blank.
You have to tag the instance with:
Key: Name
Value: MyName
MyName is an example name; change it to whatever you want the instance to be called.
Adding the tag to the pipeline should propagate the tags to instances. From docs:
Applying a tag to a pipeline also propagates the tags to its underlying resources (for example, Amazon EMR clusters and Amazon EC2 instances)
This probably does not work retroactively, though. If you already have a pipeline with instances, it's unlikely that new tags will propagate; propagation usually happens only at resource creation. For existing instances you may need to use the EC2 console instead.
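A sketch of tagging the pipeline from the CLI (the pipeline ID df-0123456789ABC is a placeholder):

# The Name tag should propagate to EC2 resources created afterwards
aws datapipeline add-tags --pipeline-id df-0123456789ABC --tags key=Name,value=MyName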
I've specified a Spot Fleet with a dynamic subnet and an (on-demand) EC2 instance in my CloudFormation template, like so:
"Resources": {
"myInstance": {
"Type": "AWS::EC2::Instance",
"Properties": {
...
}
},
"myFleet": {
"Type": "AWS::EC2::SpotFleet",
"Properties" : {
"SpotFleetRequestConfigData" : {
...
"LaunchSpecifications": [
{
...
"SubnetId": "subnet-1a1a1a, subnet-2b2b2b, subnet-3c3c3c"
}
]
}
}
}
}
Each of my subnets is in a different Availability Zone, so e.g. subnet-1a1a1a is in us-east-1a, subnet-2b2b2b is in us-east-1b, etc.
I want to place myInstance in the same subnet as my spot fleet's instances to avoid paying for network traffic that crosses Availability Zones, but I'm not sure how to do this:
"myInstance": {
"Type": "AWS.::EC2::Instance",
"Properties": {
...
"SubnetId": ???? // WHAT GOES HERE?
}
}
It doesn't look like I can use Fn::GetAtt on myFleet, and even if I make myInstance depend on myFleet, I'm still not sure how to look up the result of the Spot Fleet's placement.
Can I accomplish this? Thanks!
In your spot fleet configuration, if you do the following:
"SubnetId": "subnet-1a1a1a, subnet-2b2b2b, subnet-3c3c3c"
then you're telling the spot fleet to place spot instances in any of those three subnets. It's possible that spot instances may be in all 3 subnets at the same time.
Since you have a single EC2 instance, your EC2 instance cannot be guaranteed to be in the same subnet as all of your spot instances.
Your EC2 instance can only exist in a single subnet, and you must either specify that subnet, or let AWS decide for you.
So you have a choice:
Specify one of the three subnets for the EC2 instance; this way, the instance will (hopefully) be in the same AZ as some of the spot fleet, or
Reduce your spot fleet to a single subnet (see the sketch after this list)
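For the second option, a minimal sketch: declare one subnet as a parameter (SharedSubnet is a placeholder name; other required properties are elided with ...) and reference it from both resources, so the instance and every spot instance land in the same subnet:

"Parameters": {
    "SharedSubnet": { "Type": "AWS::EC2::Subnet::Id" }
},
"Resources": {
    "myInstance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            ...
            "SubnetId": { "Ref": "SharedSubnet" }
        }
    },
    "myFleet": {
        "Type": "AWS::EC2::SpotFleet",
        "Properties": {
            "SpotFleetRequestConfigData": {
                ...
                "LaunchSpecifications": [
                    {
                        ...
                        "SubnetId": { "Ref": "SharedSubnet" }
                    }
                ]
            }
        }
    }
}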
I am trying to launch an autoscaling group with a single m3.medium instance and attached EBS using CloudFormation (CFN). I have succeeded in doing everything but the EBS part. I've tried adding the following block to my CFN template (as a property of the AWS::AutoScaling::LaunchConfiguration block):
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sdf",
"Ebs": { "VolumeSize": 100, "VolumeType": "gp2" }
}
]
Without this, the launch succeeds. When I include it, AWS hangs while trying to create the Auto Scaling group, and there are no error messages to help debug the issue. I've tried creating an EBS volume through the AWS console and attaching it to the launched m3 instance manually, and this works, but I need to do it through CFN to conform to our automated deployment pipeline.
Are there other resources I need to create in the CFN template to make this work?
If that's a verbatim block, then add quotes around the volume size (the documentation is misleading here, as all the data types are strings). Here's one that's worked fine for me, and I see no other differences:
"BlockDeviceMappings": [
{
"DeviceName": {
"Ref": "SecondaryDevice"
},
"Ebs": {
"VolumeType": "gp2",
"VolumeSize": "10"
}
}
]
In general, if you need to troubleshoot ASGs, add SNS notifications for launch failures to the Auto Scaling group (http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html). You may find that you're on your last hundred gigs of your EBS limit (not likely) or that your AMI doesn't like the device type or label you're trying to use (somewhat more likely).
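A minimal sketch of wiring that up in the template (AsgAlarmTopic is a placeholder resource name; other ASG properties are elided with ...):

"AsgAlarmTopic": {
    "Type": "AWS::SNS::Topic"
},
"MyAutoScalingGroup": {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {
        ...
        "NotificationConfigurations": [
            {
                "TopicARN": { "Ref": "AsgAlarmTopic" },
                "NotificationTypes": [
                    "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
                ]
            }
        ]
    }
}

Subscribe an email address or a queue to the topic, and failed launches will tell you why they failed.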
Update:
After speaking with AWS support, I resolved this issue. It turns out that AWS makes a distinction between instance-store-backed and EBS-backed AMIs. You can only add the BlockDeviceMappings property when using an EBS-backed AMI, and I was using the other kind. Luckily, there is a way to convert an instance-store-backed AMI to an EBS-backed one, using this procedure:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html#Using_ConvertingS3toEBS