Using CloudFormation to launch an AWS autoscaling group with attached EBS

I am trying to launch an autoscaling group with a single m3.medium instance and attached EBS using CloudFormation (CFN). I have succeeded in doing everything but the EBS part. I've tried adding the following block to my CFN template (as a property of the AWS::AutoScaling::LaunchConfiguration block):
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sdf",
"Ebs": { "VolumeSize": 100, "VolumeType": "gp2" }
}
]
Without this, the launch succeeds. When I include it, AWS hangs while trying to create the autoscaling group, and there are no error messages to help debug the issue. I've tried creating an EBS volume through the AWS console and attaching it to the launched m3 instance manually, and that works, but I need to do it through CFN to conform to our automated deployment pipeline.
Are there other resources I need to create in the CFN template to make this work?

If that's a verbatim block, then add quotes around the volume size (the documentation is misleading here, as all the data types are strings). Here's one that has worked fine for me, and I see no other differences:
"BlockDeviceMappings": [
{
"DeviceName": {
"Ref": "SecondaryDevice"
},
"Ebs": {
"VolumeType": "gp2",
"VolumeSize": "10"
}
}
]
In general, if you need to troubleshoot ASGs, add SNS notifications for launch failures to the Auto Scaling group (http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html). You may find that you're into the last hundred gigs of your EBS limit (not likely), or that your AMI doesn't like the device type or label you're trying to use (somewhat more likely).
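A minimal sketch of that hookup, as a property of the AWS::AutoScaling::AutoScalingGroup resource (the topic reference is a placeholder for an SNS topic defined elsewhere in your template):

"NotificationConfigurations": [
    {
        "TopicARN": { "Ref": "AsgAlarmTopic" },
        "NotificationTypes": [
            "autoscaling:EC2_INSTANCE_LAUNCH",
            "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
        ]
    }
]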

Update:
After speaking with AWS support, I resolved this issue. It turns out that AWS distinguishes between instance-store-backed and EBS-backed AMIs. You can only add the BlockDeviceMappings property when using an EBS-backed AMI, and I was using the other kind. Luckily, there is a way to convert an instance-store-backed AMI to an EBS-backed one, using this procedure:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html#Using_ConvertingS3toEBS

Related

How to use VPC with packer to generate AMI in AWS codebuild project?

I'm trying to create an AMI with Packer in an AWS CodeBuild project.
This AMI will be used in a launch template,
and the launch template will be used by an ASG.
When the ASG launches an instance from this launch template, it should work with an existing target group for an ALB.
For clarification, my expectation is:
1. Generate the AMI in a CodeBuild project with Packer.
2. Create a launch template with the #1 AMI.
3. Use the #2 launch template in an ASG.
4. The ASG launches a new instance.
5. The existing target group health-checks the #4 instance.
In step 5, my existing target group failed to health-check the new instance because it was in a different VPC
(the existing target group uses a custom VPC, and the #4 instance had the default VPC).
So I went back to #1 to set the same VPC during AMI generation.
But the CodeBuild project failed when it called the Packer template.
It returned the following:
==> amazon-ebs: Prevalidating AMI Name...
amazon-ebs: Found Image ID: ami-12345678
==> amazon-ebs: Creating temporary keypair: packer_6242d99f-6cdb-72db-3299-12345678
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation.
Before this update, there were no VPC- or subnet-related settings in the Packer template, and it worked.
I added some VPC-related permissions for this CodeBuild project, but no luck yet.
Below is my builders configuration in packer-template.json:
"builders": [
{
"type": "amazon-ebs",
"region": "{{user `aws_region`}}",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"associate_public_ip_address": true,
"subnet_id": "subnet-12345678",
"vpc_id": "vpc-12345678",
"iam_instance_profile": "blah-profile-12345678",
"security_group_id": "sg-12345678",
"ami_name": "{{user `new_ami_name`}}",
"ami_description": "AMI from Packer {{isotime \"20060102-030405\"}}",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "{{user `source_ami_name`}}",
"root-device-type": "ebs"
},
"owners": ["************"],
"most_recent": true
},
"tags": {
"Name": "{{user `new_ami_name`}}"
}
}
],
Added in this step (did not exist before):
subnet_id
vpc_id
iam_instance_profile
security_group_id
Q1. Is this the correct configuration to use a VPC here?
Q1-1. If yes, which permissions are required to allow this task?
Q1-2. If not, could you let me know the correct format?
Q2. Or... is this the right way to get instances that can communicate with my existing target groups?
Thanks in advance. Any pointers will be helpful.
I got some help from a local community, and I now see that I wrote an overly broad question without enough information. There were several issues.
I should have used CloudTrail instead of CloudWatch to find out which role and actions were causing the problem. My CodeBuild project did not have the ec2:RunInstances permission.
After I saw this in CloudTrail, I updated the role policy for the CodeBuild project, and that step passed.
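For reference, the missing permission corresponds to an IAM policy statement roughly like this (a sketch only; in practice you would scope the Resource down, and Packer needs a number of other EC2 permissions as well):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:RunInstances"],
            "Resource": "*"
        }
    ]
}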
But there was another issue: after Packer launched the instance, it failed to connect over SSH. I found some answers on Stack Overflow by searching for Packer's SSH timeout issue, and updated the security group to allow SSH for Packer.
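A rule along these lines would do it (a sketch; the group ID is the one from the Packer config above, and the CIDR should be restricted to wherever Packer actually runs):

# Sketch: allow inbound SSH so Packer can connect (restrict the CIDR appropriately)
aws ec2 authorize-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp --port 22 \
    --cidr 10.0.0.0/16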
I will remove this question if required.
Thanks to my local community and the previous answerers and questioners on Stack Overflow.

I can't create a template with the output of get-launch-template-data

I am starting to play with AWS.
I have created an EC2 instance using the AWS management console.
I would like to be able to create new, similar instances using the CLI so I've been looking at get-launch-template-data (which states "Retrieves the configuration data of the specified instance. You can use this data to create a launch template.") and expected the output of that to be valid input to create-launch-template.
I've viewed the AWS CLI documentation and looked on StackOverflow, but the only related issues I've found are these:
Unable to create launchtemplate using awscli and
Amazon Launch Template - Updated AMI
I've been running:
aws ec2 get-launch-template-data --instance-id "i-xxx" --query "LaunchTemplateData" > MyLaunchData
aws ec2 create-launch-template --launch-template-name xxx --launch-template-data file://MyLaunchData
The error I get is:
An error occurred (InvalidInterfaceType.Malformed) when calling the CreateLaunchTemplate operation: '%s' is not a valid value for interface type. Enter a valid value and try again.
What I think is the relevant part of MyLaunchData is:
"NetworkInterfaces": [
{
"AssociatePublicIpAddress": true,
"DeleteOnTermination": true,
"Description": "",
"DeviceIndex": 0,
"Groups": [
"sg-xxx"
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "xxx"
}
],
"SubnetId": "subnet-xxx"
}
],
Can someone point me in the right direction please?
(I've obviously replaced what I think is my data with xxx for privacy)
Many thanks
The InterfaceType is not allowed:
'%s' is not a valid value for interface type
I know that you are using the output from get-launch-template-data, but "interface" is not among the allowed values for InterfaceType.
From the AWS EC2 documentation:
InterfaceType
Indicates the type of network interface. To create an Elastic Fabric Adapter (EFA),
specify efa. For more information, see Elastic Fabric Adapter in the Amazon Elastic
Compute Cloud User Guide.
Required: No
Type: String
Allowed Values: efa
To add to vo1umen's answer: to get around this issue you need to remove InterfaceType from the NetworkInterfaces array. One way of doing this is with jq:
jq -c 'del(.NetworkInterfaces[0]["InterfaceType"])' <<< $TemplateData
The -c keeps the JSON flat-formatted; you may not need it when storing the output in a file.
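Putting the pieces together, an end-to-end sketch built from the question's own commands (the instance ID and template name are placeholders):

# Capture the instance's launch data (instance ID is a placeholder)
TemplateData=$(aws ec2 get-launch-template-data --instance-id "i-xxx" --query "LaunchTemplateData")
# Strip the rejected field and save the cleaned data
jq 'del(.NetworkInterfaces[0]["InterfaceType"])' <<< "$TemplateData" > MyLaunchData
# Create the launch template from the cleaned data
aws ec2 create-launch-template --launch-template-name xxx --launch-template-data file://MyLaunchData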

AWS Data Pipeline is not creating all slave / core instance nodes

I have tried creating AWS Data pipelines using the CLI and also using the GUI. Either way, when I specify more than one slave node, it doesn't get created properly. Here is an example definition:
{
    "name": "EmrClusterForLoad",
    "coreInstanceCount": "16",
    "coreInstanceType": "r3.xlarge",
    "releaseLabel": "emr-5.13.0",
    "id": "EmrClusterForLoad",
    "masterInstanceType": "r3.xlarge",
    "region": "#{myDDBRegion}",
    "type": "EmrCluster"
},
Any suggestions or thoughts?
The only reason I can think of is that you are exhausting your account's EC2 resource limit; Data Pipeline honors this limit.
If you are not exhausting the limit, go to the AWS console for EMR, find the corresponding booted cluster >> Debug >> check the logs for the steps, and see if something stands out.
You can also launch an EMR cluster directly from the console and see if you can spin up more than one core (slave) node.
Other than that, configuration-wise you look good. I would recommend reaching out to AWS support for further debugging.
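If you'd rather test that from the CLI than the console, a hedged sketch with the same counts and types as the pipeline definition (assumes the default EMR roles already exist in your account):

# Sketch: spin up the same cluster shape outside Data Pipeline to isolate the problem
aws emr create-cluster \
    --release-label emr-5.13.0 \
    --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=r3.xlarge \
                      InstanceGroupType=CORE,InstanceCount=16,InstanceType=r3.xlarge \
    --use-default-roles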
Weird. I think this may be a bug. The "fix" was to change the value of "Resize Cluster Before Running" from true to false. If it's not a bug, then I am not sure I understand the option.
If you are creating the pipeline via CLI, then the entry is:
"resizeClusterBeforeRunning": "false"
When I changed this value, all of a sudden the EC2 instances started to be created.
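For reference, this is the cluster definition from the question with the flag added (a sketch of the change; all other values unchanged):

{
    "name": "EmrClusterForLoad",
    "coreInstanceCount": "16",
    "coreInstanceType": "r3.xlarge",
    "releaseLabel": "emr-5.13.0",
    "id": "EmrClusterForLoad",
    "masterInstanceType": "r3.xlarge",
    "region": "#{myDDBRegion}",
    "resizeClusterBeforeRunning": "false",
    "type": "EmrCluster"
},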

AWS EC2 Launch logs in cloudwatch windows 2016 image

I'm trying to forward the EC2 Launch logs to CloudWatch from my Windows 2016-based EC2 instance.
For some reason I can't see the log groups for this specific category.
Here's an example of my AWS.EC2.Windows.CloudWatch.json:
{
    "IsEnabled": true,
    "EngineConfiguration": {
        "PollInterval": "00:00:15",
        "Components": [
            {
                "Id": "Ec2Config",
                "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogDirectoryPath": "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Log",
                    "TimestampFormat": "yyyy-MM-ddTHH:mm:ss.fffZ:",
                    "Encoding": "UTF-8",
                    "Filter": "UserdataExecution.log",
                    "CultureName": "en-US",
                    "TimeZoneKind": "UTC"
                }
            },
            {
                "Id": "EC2ConfigSink",
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "Region": "eu-west-1",
                    "LogGroup": "/my-customer/deployment/ec2config-userdata",
                    "LogStream": "ec2config-userdata"
                }
            }
            ... I have a few more definitions in this file ...
        ],
        "Flows": {
            "Flows": [
                "Ec2Config,EC2ConfigSink",
                ... other references here
            ]
        }
    }
}
The CloudWatch agent starts and doesn't report any errors, and I can see data from other sources (some application log files; I skipped those definitions intentionally).
This means the CloudWatch config file is correct and is placed in the correct directory.
Logs are coming through with no problem, except for the EC2 Launch logs.
Has anybody run into this problem? It works perfectly on Windows 2012-based images.
Apparently, the SSM Agent starts after EC2 Launch executes the user data script. I can see this from the SSM Agent's log file modification timestamps.
Therefore, no log forwarding happens during EC2 Launch.
By the time the SSM Agent starts and loads the CloudWatch plugin, the log files are already filled with entries and never change (the wallpaper log is the only exception), so they never end up in the CloudWatch console.
There have been a lot of changes on the AWS side: they switched to .NET Core, removed the EC2Config service, and moved the log-forwarding logic to the SSM Agent (CloudWatch plugin) for Windows 2016-based AMIs.
It looks like the behavior has changed quite significantly too, so there's no way to get the EC2 Launch logs into CloudWatch (when using the AWS toolset only).
Basically, we have to stick to our application logs only, which is very unfortunate; we rely on the EC2 Launch logs to see whether the instance started and successfully executed its user data.

EC2 and EBS: how and what are the differences?

I have an AWS EC2 machine to which I want to attach storage that isn't deleted after shutdown. The management should be done using CloudFormation.
So far, I do this using the following snippet:
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda",
"Ebs": {
"DeleteOnTermination": "false",
"VolumeSize": "10",
"VolumeType": "gp2"
}
}
],
Reading also about AWS::EC2::Volume and AWS::EC2::VolumeAttachment, can somebody explain the differences? What are the benefits and disadvantages of using one way over the other? How do I use the other methods together with an EC2 instance?
AWS::EC2::Volume just creates a new EBS volume; on its own, it is not available for use by an instance.
AWS::EC2::VolumeAttachment allows you to attach the new volume to a running EC2 instance, where it will be exposed as a block (storage) device.
So you need AWS::EC2::Volume first to get the VolumeId, and then supply it to AWS::EC2::VolumeAttachment:
{
    "Type": "AWS::EC2::VolumeAttachment",
    "Properties": {
        "Device": String,
        "InstanceId": String,
        "VolumeId": String
    }
}
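For example, a minimal sketch pairing the two resources (the instance's logical name, MyInstance, is hypothetical; the volume is placed in the instance's Availability Zone):

"DataVolume": {
    "Type": "AWS::EC2::Volume",
    "Properties": {
        "AvailabilityZone": { "Fn::GetAtt": ["MyInstance", "AvailabilityZone"] },
        "Size": "10",
        "VolumeType": "gp2"
    }
},
"DataVolumeAttachment": {
    "Type": "AWS::EC2::VolumeAttachment",
    "Properties": {
        "Device": "/dev/sdf",
        "InstanceId": { "Ref": "MyInstance" },
        "VolumeId": { "Ref": "DataVolume" }
    }
}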
You use BlockDeviceMappings when you create an AMI or when you launch a new EC2 instance.
You use AWS::EC2::VolumeAttachment when you attach an EBS volume to a running EC2 instance. You can attach multiple additional EBS volumes.
You can also attach and detach the root device, as mentioned here:
If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html