How to use a VPC with Packer to generate an AMI in an AWS CodeBuild project?

I'm trying to create an AMI with Packer in an AWS CodeBuild project.
This AMI will be used in a launch template, and the launch template will be used by an Auto Scaling group (ASG).
When the ASG launches an instance from this launch template, the instance should work with an existing target group for an ALB.
For clarification, my expectation is:
1. Generate an AMI in a CodeBuild project with Packer.
2. Create a launch template with the AMI from step 1.
3. Use the launch template from step 2 in an ASG.
4. The ASG launches a new instance.
5. The existing target group health-checks the instance from step 4.
At step 5, my existing target group failed to health-check the new instance because the instance was in a different VPC (the existing target group uses a custom VPC, while the instance from step 4 was in the default VPC).
So I went back to step 1 to set the same VPC during AMI generation.
But the CodeBuild project failed when it ran the Packer template. It returned the following:
==> amazon-ebs: Prevalidating AMI Name...
amazon-ebs: Found Image ID: ami-12345678
==> amazon-ebs: Creating temporary keypair: packer_6242d99f-6cdb-72db-3299-12345678
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation.
Before this change, there were no VPC- or subnet-related settings in the Packer template, and the build worked.
I added some VPC-related permissions to this CodeBuild project's role, but no luck yet.
Below is my builders configuration in packer-template.json:
"builders": [
{
"type": "amazon-ebs",
"region": "{{user `aws_region`}}",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"associate_public_ip_address": true,
"subnet_id": "subnet-12345678",
"vpc_id": "vpc-12345678",
"iam_instance_profile": "blah-profile-12345678",
"security_group_id": "sg-12345678",
"ami_name": "{{user `new_ami_name`}}",
"ami_description": "AMI from Packer {{isotime \"20060102-030405\"}}",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "{{user `source_ami_name`}}",
"root-device-type": "ebs"
},
"owners": ["************"],
"most_recent": true
},
"tags": {
"Name": "{{user `new_ami_name`}}"
}
}
],
Added in this step (they did not exist before):
subnet_id
vpc_id
iam_instance_profile
security_group_id
Q1. Is this the correct configuration to use a VPC here?
Q1-1. If yes, which permissions are required to allow this task?
Q1-2. If not, could you let me know the correct format?
Q2. Or is this even the right way to get instances that can communicate with my existing target group?
Thanks in advance. Any pointers will be helpful.

I got some help from a local community, and I now see that I asked too broad a question without enough information. There were several issues.
I should have used CloudTrail instead of CloudWatch to see which role and actions were causing the problem. My CodeBuild project's role was missing the ec2:RunInstances permission.
After I saw this in CloudTrail, I updated the role policy for the CodeBuild project and that step passed. But there was another issue.
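For anyone hitting the same thing, here is a rough sketch of the kind of policy statement that covers Packer's amazon-ebs builder in this setup. The exact action list depends on your template (this one assumes iam_instance_profile is set, hence iam:PassRole), so treat it as illustrative rather than a verified minimum:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "ec2:DescribeInstances",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeVpcs",
        "ec2:CreateKeyPair",
        "ec2:DeleteKeyPair",
        "ec2:CreateImage",
        "ec2:DescribeImages",
        "ec2:CreateTags",
        "ec2:CreateSnapshot",
        "ec2:DescribeSnapshots",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}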
After Packer launched the instance, it failed to connect over SSH. I found answers on Stack Overflow by searching for Packer's SSH timeout issue, and updated the security group to allow SSH for Packer (a sketch of that rule is below).
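A minimal sketch of that rule with the CLI, assuming the sg-12345678 from the template; the open CIDR is only an example and should be narrowed to wherever Packer connects from:
# Allow inbound SSH (port 22) on the security group used by the Packer build.
# 0.0.0.0/0 is illustrative only - restrict it to your CodeBuild/VPC range.
aws ec2 authorize-security-group-ingress \
  --group-id sg-12345678 \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0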
I will remove this question if required.
Thanks to my local community and the previous answerers and questioners on Stack Overflow.

Related

AWS data pipeline name tag option for EC2 resource

I'm running a shell activity on an EC2 resource. Here is a sample JSON for creating the EC2 resource:
{
  "id" : "MyEC2Resource",
  "type" : "Ec2Resource",
  "actionOnTaskFailure" : "terminate",
  "actionOnResourceFailure" : "retryAll",
  "maximumRetries" : "1",
  "instanceType" : "m5.large",
  "securityGroupIds" : [
    "sg-12345678",
    "sg-12345678"
  ],
  "subnetId": "subnet-12345678",
  "associatePublicIpAddress": "true",
  "keyPair" : "my-key-pair"
}
The JSON above creates an EC2 resource using Data Pipeline, but I want to give the resource a name so that when I open it in the EC2 console it shows a name alongside the other attributes; currently the name is blank.
You have to tag the instance with:
Key: Name
Value: MyName
MyName is just an example; change it to whatever you want the instance to be called.
Adding the tag to the pipeline should propagate the tags to instances. From docs:
Applying a tag to a pipeline also propagates the tags to its underlying resources (for example, Amazon EMR clusters and Amazon EC2 instances)
But it probably does not work retrospectively. If you already have a pipeline with instances, it's unlikely new tags will propagate; propagation usually only happens at resource creation. For existing instances you may need to use the EC2 console instead.
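If you want to add the tag from the CLI, something like this sketch should do it (the pipeline ID is a placeholder):
# Tag the pipeline itself; Data Pipeline propagates pipeline tags
# to the resources it launches (newly launched resources only).
aws datapipeline add-tags \
  --pipeline-id df-0123456789ABCDEF \
  --tags key=Name,value=MyName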

I can't create a template with the output of get-launch-template-data

I am starting to play with AWS.
I have created an EC2 instance using the AWS management console.
I would like to be able to create new, similar instances using the CLI so I've been looking at get-launch-template-data (which states "Retrieves the configuration data of the specified instance. You can use this data to create a launch template.") and expected the output of that to be valid input to create-launch-template.
I've viewed the AWS CLI documentation and looked on Stack Overflow, but the only related issues I've found have been these:
Unable to create launchtemplate using awscli and
Amazon Launch Template - Updated AMI
I've been running:
aws ec2 get-launch-template-data --instance-id "i-xxx" --query "LaunchTemplateData" > MyLaunchData
aws ec2 create-launch-template --launch-template-name xxx --launch-template-data file://MyLaunchData
The error I get is:
An error occurred (InvalidInterfaceType.Malformed) when calling the CreateLaunchTemplate operation: '%s' is not a valid value for interface type. Enter a valid value and try again.
What I think is the relevant part of MyLaunchData is:
"NetworkInterfaces": [
{
"AssociatePublicIpAddress": true,
"DeleteOnTermination": true,
"Description": "",
"DeviceIndex": 0,
"Groups": [
"sg-xxx"
],
"InterfaceType": "interface",
"Ipv6Addresses": [],
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "xxx"
}
],
"SubnetId": "subnet-xxx"
}
],
Can someone point me in the right direction please?
(I've obviously replaced what I think is my data with xxx for privacy)
Many thanks
The InterfaceType is not allowed:
'%s' is not a valid value for interface type
I know you are using the output from get-launch-template-data, but "interface" is not an accepted value for InterfaceType.
AWS EC2 documentation
InterfaceType
Indicates the type of network interface. To create an Elastic Fabric Adapter (EFA),
specify efa. For more information, see Elastic Fabric Adapter in the Amazon Elastic
Compute Cloud User Guide.
Required: No
Type: String
Allowed Values: efa
To add to vo1umen's answer, to get around this issue you obviously need to remove InterfaceType from the NetworkInterfaces array. One way of doing this is to use jq:
jq -c 'del(.NetworkInterfaces[0]["InterfaceType"])' <<< $TemplateData
The -c flag keeps the JSON output compact; you may not need it when storing the result in a file.
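Putting the pieces together, a sketch of the full round trip might look like this (the template name and file names are illustrative, and the jq filter assumes a single network interface):
# Export the instance's config, strip the rejected InterfaceType field,
# then feed the cleaned JSON to create-launch-template.
aws ec2 get-launch-template-data \
  --instance-id i-xxx \
  --query "LaunchTemplateData" > MyLaunchData

jq 'del(.NetworkInterfaces[0].InterfaceType)' MyLaunchData > MyLaunchDataClean.json

aws ec2 create-launch-template \
  --launch-template-name my-template \
  --launch-template-data file://MyLaunchDataClean.json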

Using AWS Storage Services (EBS, EFS, or S3) as Volumes or Bind Mounts with Standalone Docker Containers (not ECS)?

I have a self-managed AWS cluster on which I am looking to run Docker containers.
(At present, ECS and EKS are not in scope, though they might be in the future; I need to focus on the present requirement.)
I need to add persistence to a few containers by attaching AWS EFS/EBS/s3fs storage (as appropriate for the use case). AWS has addressed this use case in a lengthy blog post that brings ECS into the picture. As I said, my requirement is simple, and that article does many extra things like CloudFormation, etc.
I would appreciate it if anyone could simplify this and provide the bare-bones steps I need to follow.
1) I installed the ebs/efs/s3fs drivers -
docker plugin install --grant-all-permissions rexray/ebs
and so on for EFS and s3fs too. The s3fs installation ran into trouble:
Error response from daemon: dial unix
/run/docker/plugins/b0b9c534158e73cb07011350887501fe5fd071585af540c2264de760f8e2c0d9/rexray.sock:
connect: no such file or directory
But this is not my problem for the moment, unless someone wants to volunteer to solve it.
Where I am stuck is: what are the next steps to create volumes, or to mount them directly at run time into containers as volumes or bind mounts (is this even supported, or only volumes)?
Here are the steps for EC2-based ECS services (since Fargate instances do not support Docker volumes as of today):
Update your instance role to include the following permissions:
ec2:AttachVolume
ec2:CreateVolume
ec2:CreateSnapshot
ec2:CreateTags
ec2:DeleteVolume
ec2:DeleteSnapshot
ec2:DescribeAvailabilityZones
ec2:DescribeInstances
ec2:DescribeVolumes
ec2:DescribeVolumeAttribute
ec2:DescribeVolumeStatus
ec2:DescribeSnapshots
ec2:CopySnapshot
ec2:DescribeSnapshotAttribute
ec2:DetachVolume
ec2:ModifySnapshotAttribute
ec2:ModifyVolumeAttribute
ec2:DescribeTags
These permissions should apply to all resources in the policy. N.B., the ec2:CreateVolume and ec2:DeleteVolume permissions can be omitted if you don't want to use autoprovisioning.
Install rexray on the instance (you've already done this)
If you're not using autoprovision, provision your volume and make sure there is a Name tag matching the name of the volume that you want to use in your service definition. In the example below, we set this value to rexray-vol.
Update your task definition to include the necessary values for the volume to be mounted as a Docker volume. Here is an example:
"volumes": [{
"name": "rexray-vol",
"dockerVolumeConfiguration": {
"autoprovision": true,
"scope": "shared",
"driver": "rexray/ebs",
"driverOpts": {
"volumetype": "gp2",
"size": "5"
}
}
}]
Update the task definition's container definition to reference your swanky EBS volume:
"mountPoints": [
{
"containerPath": "/var/lib/mysql",
"sourceVolume": "rexray-vol"
}
],
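The steps above are for ECS task definitions; since the question asks about standalone Docker (no ECS), the rough equivalent once a rexray plugin is installed would be something like this sketch (the volume name, size, mount path, and image are illustrative assumptions):
# Create an EBS-backed named volume through the rexray/ebs plugin.
docker volume create --driver rexray/ebs --opt size=5 my-ebs-vol

# Mount it into a container at run time like any other named volume.
docker run -d --name mydb \
  -e MYSQL_ROOT_PASSWORD=example \
  -v my-ebs-vol:/var/lib/mysql \
  mysql:5.7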

Using CloudFormation to launch an AWS autoscaling group with attached EBS

I am trying to launch an autoscaling group with a single m3.medium instance and attached EBS using CloudFormation (CFN). I have succeeded in doing everything but the EBS part. I've tried adding the following block to my CFN template (as a property of the AWS::AutoScaling::LaunchConfiguration block):
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sdf",
"Ebs": { "VolumeSize": 100, "VolumeType": "gp2" }
}
]
Without this the launch succeeds. When I include it, AWS hangs while trying to create the Auto Scaling group, and there are no error messages to help debug the issue. I've tried creating an EBS volume through the AWS console and attaching it to the launched m3 instance manually, and this works, but I need to do it through CFN to conform to our automated deployment pipeline.
Are there other resources I need to create in the CFN template to make this work?
If that's a verbatim block, then add quotes to the volume size (the doc is misleading here, as all data types are strings). Here's one that's worked fine for me, and I see no other differences:
"BlockDeviceMappings": [
{
"DeviceName": {
"Ref": "SecondaryDevice"
},
"Ebs": {
"VolumeType": "gp2",
"VolumeSize": "10"
}
}
]
In general, if you need to troubleshoot ASGs, add SNS notifications for launch failures to the Auto Scaling group (http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html). You may find that you're on your last hundred gigs of EBS limit (not likely) or that your AMI doesn't like the device type or label you're trying to use (somewhat more likely).
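If it helps, here is a sketch of wiring that up with the CLI; the group name and topic ARN are placeholders:
# Send an SNS notification whenever an instance launch fails in the ASG.
aws autoscaling put-notification-configuration \
  --auto-scaling-group-name my-asg \
  --topic-arn arn:aws:sns:us-east-1:123456789012:asg-alerts \
  --notification-types "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"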
Update:
After speaking with AWS support, I resolved this issue. It turns out that AWS makes a distinction between instance-store-backed and EBS-backed AMIs. You can only add the BlockDeviceMappings property when using an EBS-backed AMI, and I was using the other kind. Luckily, there is a way to convert an instance-store-backed AMI to EBS-backed, using this procedure:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html#Using_ConvertingS3toEBS

ec2-describe-instance-status Client.InvalidInstanceID.NotFound but I KNOW instance exists

I have set up a few of the Amazon AWS CLI tools (EC2, Auto Scaling, Monitoring, and ELB). The tools are set up correctly and work perfectly. My environment variables are all set; the relevant ones for this question are:
export EC2_REGION=eu-west-1
export EC2_URL=https://ec2.$EC2_REGION.amazonaws.com
export AWS_ELB_URL=https://elasticloadbalancing.$EC2_REGION.amazonaws.com
When I run ec2-describe-instance-status i-XXXXXXXX for ANY of my instances, I get:
Client.InvalidInstanceID.NotFound: The instance ID 'i-XXXXXXXX' does not exist
I KNOW the instance ID exists, I copied it out of the AWS web console, and it is in the eu-west-1 region, and my env vars are set to this region.
For the life of me I can't figure out why it will not find my instances. Is there anything glaringly obvious that I am doing incorrectly?
UPDATE: recreating x509 cert/pk solved this... for some reason.
I had the same problem. It was because I wasn't defining a region for my commands. I assumed it would list all instances across all regions but it defaults to us-west-1 and I don't have any instances there.
To describe my machines in Ireland I use the following:
ec2-describe-instances --region eu-west-1
NB: I'm defining my AWS access key and secret elsewhere.
To avoid this problem going forward, I've now set my region via an environment variable on my linux and windows machines: EC2_URL=https://ec2.eu-west-1.amazonaws.com
so that I don't have to be explicit on the command line.
Update May 2014: You can also set the region by adding the following lines to the ~/.aws/config file in your home folder (not tested on Windows). This is now my preferred method, especially on my VMs and containers:
[default]
region = eu-west-1
For more information see the official docs.
Update May 2021:
Since I now work across so many regions, I use implicit, ephemeral environment variables to define the region for each command rather than keeping a default in my .aws/config, which can be dangerous. This also makes bash scripting easier, since I can define it for a whole script/utility. It's a tiny bit more typing but far safer, more flexible, and more transparent, e.g.:
AWS_DEFAULT_REGION=eu-central-1 aws ec2 describe-instances
# or for a script/utility
AWS_DEFAULT_REGION=us-east-1 ./tagInstances.sh
In my case, I had two sets of credentials in ~/.aws/credentials. To specify the credentials profile, use:
aws ec2 describe-instances --instance-id <your-instance-id> --profile <your-profile-name> --region <your-region>
Weird issue - as usual when encountering something weird in software development, one should first question the assumptions:
I KNOW the instance ID exists, I copied it out of the AWS web console,
and it is in the eu-west-1 region, and my env vars are set to this
region.
So the instance ID stems from a different environment than the one you want to use it in - I would try to derive the instance ID via the same environment instead, i.e.:
ec2-describe-instances
I venture the guess that the list won't return the instances you are expecting. This would indicate that you are either using AWS credentials that belong to another account or that these credentials do not have the required Amazon EC2 read permissions assigned via IAM policies for example.
I had a similar issue and am writing the solution here for anybody who may find it helpful.
I was stuck with this error for several hours.
Client.InvalidInstanceID.NotFound: The instance ID 'i-XXXXXXXX' does not exist
Finally I found what was happening: I had my instance in a different region than the default region (US East (Northern Virginia)) and I had to update this information. By default the commands look only for instances in the default region!
It is explained in the docs, section (Optional): Set the region http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SettingUp_CommandLine.html
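As a non-interactive alternative (a small sketch; eu-west-1 is just an example), you can write the region straight into the CLI config:
# Persist a default region in ~/.aws/config without the interactive prompts.
aws configure set region eu-west-1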
It's a very simple problem. If you are getting this error:
Client.InvalidInstanceID.NotFound: The instance ID 'i-XXXXXXXX' does not exist
then follow these steps.
Check which region your instance is in (you can see the region in the EC2 console).
Now run aws configure. You will see prompts like:
AWS Access Key ID [****************D7M2]:
AWS Secret Access Key [****************2h3r]:
Default region name [us-east-1]:
When asked for the default region, change it to the region where the instance resides, e.g. us-east-2, then press Enter.
Actually, these are the available region names ("RegionName" values):
"Regions": [
{
"RegionName": "ap-south-1",
"Endpoint": "ec2.ap-south-1.amazonaws.com"
},
{
"RegionName": "eu-west-2",
"Endpoint": "ec2.eu-west-2.amazonaws.com"
},
{
"RegionName": "eu-west-1",
"Endpoint": "ec2.eu-west-1.amazonaws.com"
},
{
"RegionName": "ap-northeast-2",
"Endpoint": "ec2.ap-northeast-2.amazonaws.com"
},
{
"RegionName": "ap-northeast-1",
"Endpoint": "ec2.ap-northeast-1.amazonaws.com"
},
{
"RegionName": "sa-east-1",
"Endpoint": "ec2.sa-east-1.amazonaws.com"
},
{
"RegionName": "ca-central-1",
"Endpoint": "ec2.ca-central-1.amazonaws.com"
},
{
"RegionName": "ap-southeast-1",
"Endpoint": "ec2.ap-southeast-1.amazonaws.com"
},
{
"RegionName": "ap-southeast-2",
"Endpoint": "ec2.ap-southeast-2.amazonaws.com"
},
{
"RegionName": "eu-central-1",
"Endpoint": "ec2.eu-central-1.amazonaws.com"
},
{
"RegionName": "us-east-1",
"Endpoint": "ec2.us-east-1.amazonaws.com"
},
{
"RegionName": "us-east-2",
"Endpoint": "ec2.us-east-2.amazonaws.com"
},
{
"RegionName": "us-west-1",
"Endpoint": "ec2.us-west-1.amazonaws.com"
},
{
"RegionName": "us-west-2",
"Endpoint": "ec2.us-west-2.amazonaws.com"
}
]
}
Default output format [None]:
Leave the output format blank and press Enter. Now you are done.
Now in the console just type:
aws ec2 describe-instances --instance-ids i-06343434322t
I got this fixed by changing EC2_URL from 'https://ec2.ap-southeast-1.amazonaws.com' to 'ec2.ap-southeast-1.amazonaws.com'