CloudFormation LaunchConfiguration SSM Association - amazon-web-services

I see that it's now possible to create EC2 instances bootstrapped with SSM association(s):
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html
But I don't see analogous properties exposed on Launch Configurations...
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
Is this just not possible yet or is there a way of getting SSM associations added automatically to EC2 instances launched via a Launch Configuration?

One solution would be to call the CreateAssociation API directly when each instance boots. You could use an Auto Scaling event to invoke a Lambda function, or (probably easier) add a shell-script line to your Launch Configuration's UserData that invokes the API through the AWS CLI (aws ssm create-association), retrieving the currently running instance ID from instance metadata:
aws ssm create-association \
--name mySSMDocumentName \
--instance-id $(curl http://169.254.169.254/latest/meta-data/instance-id)
You'll need to give the Launch Configuration an IAM instance profile with the ssm:CreateAssociation permission, and also provide the AWS CLI with the current region (either exported in the AWS_DEFAULT_REGION environment variable, or passed with an explicit --region parameter).
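Putting those pieces together, a minimal UserData sketch might look like this (the region is a placeholder; the document name is taken from the example above):
#!/bin/bash
# Hypothetical region; substitute your own, or pass --region instead.
export AWS_DEFAULT_REGION=us-east-1
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ssm create-association \
--name mySSMDocumentName \
--instance-id "$INSTANCE_ID"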

Related

How to delete autoscaling groups with aws cli?

I am trying to write a bash script that will delete my EC2 instances and the auto scaling group that launched them:
EC2s=$(aws ec2 describe-instances --region=eu-west-3 \
--filters "Name=tag:Name,Values=*-my-dev-eu-west-3" \
--query "Reservations[].Instances[].InstanceId" \
--output text)
for id in $EC2s
do
aws ec2 terminate-instances --region=eu-west-3 --instance-ids $id
done
aws autoscaling delete-auto-scaling-group --region eu-west-3 \
--auto-scaling-group-name my-asg-dev-eu-west-3
But it fails with this error:
An error occurred (ResourceInUse) when calling the DeleteAutoScalingGroup operation:
You cannot delete an AutoScalingGroup while there are instances or pending Spot
instance request(s) still in the group.
There is no issue if I use the AWS console to do the same thing. Why does the aws cli prevent me from deleting the ASG if I have terminated all the instances?
If you really want to do this with the CLI, you may first want to use the aws autoscaling suspend-processes command to prevent the ASG from launching new instances. Then use aws ec2 terminate-instances as you are doing, followed by aws ec2 wait instance-terminated with the instance IDs. Once all of that is done, you should be able to use aws autoscaling delete-auto-scaling-group.
aws ec2 terminate-instances will return before the instances have finished terminating (which could take several minutes).
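A minimal sketch of that sequence, reusing the region, instance list, and ASG name from the question:
# Stop the ASG from launching replacement instances during teardown
aws autoscaling suspend-processes --region eu-west-3 \
--auto-scaling-group-name my-asg-dev-eu-west-3
# Terminate the instances and block until they are actually gone
aws ec2 terminate-instances --region eu-west-3 --instance-ids $EC2s
aws ec2 wait instance-terminated --region eu-west-3 --instance-ids $EC2s
# Now the group is empty and the delete call should succeed
aws autoscaling delete-auto-scaling-group --region eu-west-3 \
--auto-scaling-group-name my-asg-dev-eu-west-3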
I highly recommend using something like CloudFormation or Terraform for this sort of thing instead of the AWS CLI tool.
You can force-delete the ASG, even with active spot instance requests, using the AWS CLI:
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name Your-ASG-Name --force-delete

AWS - ECS load S3 files in entrypoint script

Hi all!
Code: (entrypoint.sh)
printenv
CREDENTIALS=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
ACCESS_KEY_ID=$(echo "$CREDENTIALS" | jq .AccessKeyId)
SECRET_ACCESS_KEY=$(echo "$CREDENTIALS" | jq .SecretAccessKey)
TOKEN=$(echo "$CREDENTIALS" | jq .Token)
export AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=$TOKEN
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Problem:
I'm trying to fetch files from AWS S3 into ECS, inspired by:
AWS Documentation
(But I'm fetching from S3 directly, not through a VPC endpoint.)
I have configured bucket policy & role policy (that is passed in taskDefinition as taskRoleArn & executionRoleArn)
Locally, when I fetch with the AWS CLI and pass the temporary credentials (which I logged in ECS with the printenv command in the entrypoint script), everything works fine and I can save the files on my PC.
On ECS I have error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Where can I find a solution? Has anyone had a similar problem?
First of all, if you are working inside AWS, it is strongly recommended to use an ECS service role, ECS task role, or EC2 role; you do not need to fetch credentials from the metadata endpoint yourself.
It seems that either the current role does not have permission to S3, or the entrypoint is not exporting the environment variables properly.
If your container already has a role assigned, you do not need to export the access keys at all; just call aws s3 cp s3://BUCKET/file.txt /PATH/file.txt and it should work.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that
can be used by the containers in a task. Applications must sign their
AWS API requests with AWS credentials, and this feature provides a
strategy for managing credentials for your applications to use,
similar to the way that Amazon EC2 instance profiles provide
credentials to EC2 instances. Instead of creating and distributing
your AWS credentials to the containers or using the EC2 instance’s
role, you can associate an IAM role with an ECS task definition or
RunTask API operation.
So when you assign a role to the ECS task or ECS service, your entrypoint can be this simple:
printenv
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Also, your exports will not work the way you expect; the best way to pass environment variables to the container is from the task definition, not from exports in the entrypoint.
I suggest assigning a role to the ECS task, and it should then work as you expect.
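For reference, a minimal sketch of registering a task definition that carries a task role (the family name, role ARN, and image below are placeholders, not values from the question):
aws ecs register-task-definition \
--family my-s3-task \
--task-role-arn arn:aws:iam::123456789012:role/my-ecs-task-role \
--container-definitions '[{"name":"app","image":"my-image:latest","memory":256,"essential":true,"entryPoint":["/entrypoint.sh"]}]'
With the task role in place, the AWS CLI inside the container picks up credentials automatically, so the entrypoint only needs the aws s3 cp call.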

How do I enable the AWS CLI on an EC2 instance?

How do I enable the AWS CLI on an EC2 instance? After I create the EC2 instance, I can SSH into the machine, but when I try to do something like aws s3 ls, it prompts me to run aws configure first, which requires me to enter my keys. I want to automate this so that I can grab additional artifacts from S3 buckets to install. Note that I am using the AWS CLI on my computer to create the EC2 instance, but I need to use the AWS CLI on the EC2 instance itself.
My AWS command to create a simple EC2 instance looks like the following (this is done on my computer).
aws ec2 run-instances \
--image-id ami-14c5486b \
--count 1 \
--instance-type t2.micro \
--key-name testkey \
--subnet-id subnet-xxxxxxxx \
--security-group-ids sg-xxxxxxxx \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=test}]' \
--user-data file://install-software.sh
The install-software.sh looks something like the following (this is submitted to the EC2 instance).
#!/bin/bash
aws s3 cp s3://mybucket/some-archive.tar.gz some-archive.tar.gz
tar xf some-archive.tar.gz
sudo some-archive/bin/install.sh
You need to use an instance profile when launching your EC2 instance. If the instance has an instance profile attached, the AWS CLI will automatically use the permissions granted through it to access resources, rather than relying on you providing credentials.
You need to assign an instance role to your instance and give it rights to get objects from your bucket. Then the AWS CLI will get the credentials from instance metadata automatically, so you won't need to run aws configure first.
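As a rough sketch (the role, policy, and profile names here are examples, not prescribed values), the instance profile could be created and attached like this:
# Create a role that EC2 can assume (trust-policy.json must allow ec2.amazonaws.com)
aws iam create-role --role-name my-s3-reader \
--assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name my-s3-reader \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# Wrap the role in an instance profile and reference it at launch time
aws iam create-instance-profile --instance-profile-name my-s3-reader-profile
aws iam add-role-to-instance-profile --instance-profile-name my-s3-reader-profile \
--role-name my-s3-reader
# Then add to the run-instances call from the question:
#   --iam-instance-profile Name=my-s3-reader-profile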

Is it possible to create an Auto Scaling Group launch configuration with the CLI and define the instance tags in one command?

Is it possible to create an Auto Scaling Group launch configuration with the CLI and define the instance tags in one command?
Maybe I am missing something but right now it looks like have to do it in two steps.
i.e.
aws autoscaling create-launch-configuration ...
and then
aws autoscaling create-or-update-tags --tags ...
Since you need to have the launch configuration created first before you can tag anything, it is a two-step process, as you mentioned.
https://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html
This example creates a launch configuration based on an existing instance. It also specifies launch configuration attributes such as a security group, tenancy, Amazon EBS optimization, and a bootstrapping script:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html
aws autoscaling create-launch-configuration \
--launch-configuration-name my-launch-config \
--key-name my-key-pair \
--instance-id i-7e13c876 \
--security-groups sg-eb2af88e \
--instance-type m1.small \
--user-data file://myuserdata.txt \
--instance-monitoring Enabled=true \
--no-ebs-optimized \
--no-associate-public-ip-address \
--placement-tenancy dedicated \
--iam-instance-profile my-autoscaling-role
aws autoscaling create-or-update-tags \
--tags "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=environment,Value=test,PropagateAtLaunch=true"
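Note that the tags themselves belong to the Auto Scaling group rather than the launch configuration, so if the goal is to avoid the separate tagging call, one option (sketched below with placeholder names and subnet) is to pass --tags directly when creating the group:
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name my-asg \
--launch-configuration-name my-launch-config \
--min-size 1 --max-size 2 \
--vpc-zone-identifier subnet-xxxxxxxx \
--tags "Key=environment,Value=test,PropagateAtLaunch=true"
The launch configuration itself still has to be created first, but the group creation and the tagging become a single command.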

How can I assign existing EIPs to Auto Scaled instances in AWS when they launch automatically?

I have a CloudFormation template which creates an Auto Scaling group with a desired capacity of 2. I need the instances to be attached to existing EIPs when they get launched. How can I do this?
You need to write a custom user data script that assigns the Elastic IP to the instance; you cannot do this using CloudFormation templates yet. The AWS CLI command to use is aws ec2 associate-address. For this, the best practice is to assign an IAM role with the ec2:AssociateAddress permission.
The command will look like this: aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOCATION_ID --allow-reassociation
While the allocation ID will need to be hardcoded in the template, you can get the instance ID from within the instance using the command: curl -s http://169.254.169.254/latest/meta-data/instance-id. Refer to this thread for more details.
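A minimal UserData sketch along those lines (the allocation ID and region are hypothetical placeholders you would bake into the template):
#!/bin/bash
# Hypothetical allocation ID and region; substitute your own values.
ALLOCATION_ID=eipalloc-0123456789abcdef0
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --region eu-west-1 \
--instance-id "$INSTANCE_ID" \
--allocation-id "$ALLOCATION_ID" \
--allow-reassociation
With a desired capacity of 2, you would also need some logic (or per-instance configuration) to pick a free allocation ID for each instance.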