Assign a static Elastic IP to an instance in an Auto Scaling group - amazon-web-services

I have one instance in an ASG and I need to assign an Elastic IP to that instance. When the instance fails its health check, the newly launched replacement instance should get the same Elastic IP. The IAM role and everything else is in order.

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
MAXWAIT=3
ALLOC_ID=${IPAddresses}
echo "Checking if EIP with ALLOC_ID[$ALLOC_ID] is free...."
ISFREE=$(aws ec2 describe-addresses --allocation-ids "$ALLOC_ID" --query Addresses[].InstanceId --output text --region ${AWS::Region})
STARTWAIT=$(date +%s)
while [ ! -z "$ISFREE" ]; do
    if [ "$(($(date +%s) - $STARTWAIT))" -gt $MAXWAIT ]; then
        echo "WARNING: We waited $MAXWAIT seconds, we're forcing it now."
        ISFREE=""
    else
        echo "Waiting for EIP with ALLOC_ID[$ALLOC_ID] to become free...."
        sleep 3
        ISFREE=$(aws ec2 describe-addresses --allocation-ids "$ALLOC_ID" --query Addresses[].InstanceId --output text --region ${AWS::Region})
    fi
done
echo "Running: aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOC_ID --allow-reassociation --region ${AWS::Region}"
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID" --allow-reassociation --region ${AWS::Region}
yum install jq -y

I'm not sure how to take that IP from the resource itself and pass it into the user data in the launch configuration.
In CFN, it would look similar to the following:
Resources:
  MyEIP:
    Type: AWS::EC2::EIP
    Properties: {}
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash -xe
            EIP_IP=${MyEIP}
            echo ${!EIP_IP}
            # use aws cli to attach EIP_IP to the instance
An instance role with permissions to associate the EIP would be required as well.
From the docs about !Ref, which is what EIP_IP=${MyEIP} resolves to:
When you pass the logical ID of this resource to the intrinsic Ref function, Ref returns the Elastic IP address.
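To fill in that last comment, here is a minimal sketch (my assumption, not the author's exact script) of what the !Sub body could contain. It uses ${MyEIP.AllocationId} (the AllocationId attribute of the EIP resource), since associate-address on a VPC instance expects an allocation ID; everything escaped as ${!...} is left for bash:
#!/bin/bash -xe
# CloudFormation substitutes ${MyEIP.AllocationId} and ${AWS::Region};
# ${!INSTANCE_ID} and ${!ALLOC_ID} render as plain bash variables.
ALLOC_ID=${MyEIP.AllocationId}
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --instance-id "${!INSTANCE_ID}" --allocation-id "${!ALLOC_ID}" --allow-reassociation --region ${AWS::Region}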

With EC2 and Auto Scaling, you can use user data to automatically attach an Elastic IP to each EC2 instance the group launches:
#!/bin/bash
aws configure set aws_access_key_id "XYZ..."
aws configure set aws_secret_access_key "ABC..."
aws configure set region "ap-..."
aws ec2 associate-address --instance-id "$(curl -X GET "http://169.254.169.254/latest/meta-data/instance-id")" --public-ip your_elastic_IP
Note: you should create a new IAM user that has only the associate-address permission, and use that user's access keys here.
Hope it helps :)
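As a safer variant of the same idea (my assumption, not part of the answer above): with an instance role attached, the hardcoded keys can be dropped entirely and the region derived from instance metadata:
#!/bin/bash
# Credentials come from the instance role; region from the AZ metadata.
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --region "$REGION" --instance-id "$INSTANCE_ID" --public-ip your_elastic_IP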

Related

AWS CloudShell - List instances by ARN prefix

In AWS Backup, I have created a resource assignment to a backup-plan, which targets all EC2 instances.
The ARN prefix looks like this:
arn:aws:ec2:*:*:instance/*
How can I list all instances that match an ARN prefix? Either in AWS Cloudshell or with the aws cli?
I think you can try using EC2's describe-instances CLI command and running it across all AWS regions:
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text)
do
    echo -e "\nListing Instances in region: '$region'..."
    aws ec2 describe-instances --region "$region"
done
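Since the goal is to match an ARN prefix, here is a small sketch (my addition; it assumes sts get-caller-identity for the account ID) that prints one instance ARN per line, ready to be grepped against the prefix pattern:
#!/bin/bash
# Build arn:aws:ec2:<region>:<account>:instance/<id> for every instance
# in every region, so the output can be filtered by an ARN prefix.
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
    for id in $(aws ec2 describe-instances --region "$region" --query "Reservations[].Instances[].InstanceId" --output text); do
        echo "arn:aws:ec2:${region}:${ACCOUNT_ID}:instance/${id}"
    done
done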

Unable to tag EBS Volumes using UserData bash script

I have been trying to tag EBS Volumes attached to EC2 instances in the CloudFormation UserData section. Here was my first attempt:
Example 1:
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
ROOT_DISK_ID=`aws ec2 describe-volumes \
--filter Name=attachment.instance-id,Values="${AWS_INSTANCE_ID}" \
--query "Volumes[].VolumeId" --region us-east-1 --out text`
aws ec2 create-tags --resources "${ROOT_DISK_ID}" \
--tags 'Key=VolumeTagName,Value=VolumeTagValue' --region us-east-1
This resulted in a Template format error: Unresolved resource dependencies [AWS_INSTANCE_ID, ROOT_DISK_ID] in the Resources block of the template.
A post I came across mentioned that prefixing the variable with ! when referencing it in the CloudFormation UserData script will get around this, so it now looks like this:
Example 2:
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
ROOT_DISK_ID=`aws ec2 describe-volumes \
--filter Name=attachment.instance-id,Values="${!AWS_INSTANCE_ID}" \
--query "Volumes[].VolumeId" --region us-east-1 --out text`
aws ec2 create-tags --resources "${!ROOT_DISK_ID}" \
--tags 'Key=VolumeTagName,Value=VolumeTagValue' --region us-east-1
This gets around that error, yet still, no tags appear on the Volume attached to an instance launched with this template. If I ssh into the instance and run Example 1, it works just fine. Example 2 does not give me any errors to work with.
What am I doing wrong in bash that is specific to CloudFormation?
If I understand correctly, you're trying to create your script using CloudFormation and then execute it on the EC2 instance at startup. Using YAML, this is my UserData section:
UserData: !Base64
  Fn::Join:
    - ''
    - - "#!/bin/bash -xe \n"
      - "cat << 'EOF' > /home/ec2-user/script.sh \n"
      - "AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`\n"
      - "ROOT_DISK_ID=`aws ec2 describe-volumes "
      - "--filter Name=attachment.instance-id,Values=\"${AWS_INSTANCE_ID}\" "
      - "--query \"Volumes[*].[VolumeId]\" --region eu-west-1 --out text`\n"
      - "aws ec2 create-tags --resources \"${ROOT_DISK_ID}\" "
      - "--tags 'Key=MyAutoTagName,Value=MyAutoTagValue' --region eu-west-1\n"
      - "EOF\n"
      - "chmod +x /home/ec2-user/script.sh\n"
      - "/home/ec2-user/script.sh\n"
I changed the region to the one I'm using.
If I view the contents of my script.sh file I get the below:
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
ROOT_DISK_ID=`aws ec2 describe-volumes --filter Name=attachment.instance-id,Values="${AWS_INSTANCE_ID}" --query "Volumes[*].[VolumeId]" --region eu-west-1 --out text`
aws ec2 create-tags --resources "${ROOT_DISK_ID}" --tags 'Key=MyAutoTagName,Value=MyAutoTagValue' --region eu-west-1
The only difference I can see is your "Volumes[].VolumeId" versus my "Volumes[*].[VolumeId]". I'm not sure what your UserData section looks like, so it may be an issue with escaping.
Using my UserData section above, the tag was created as soon as the instance spun up and the user data ran.
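As an alternative sketch (my addition, not from the answer above): the same script can live directly in a Fn::Base64: !Sub | block, as in the first question, provided every bash variable is escaped as ${!...} so CloudFormation does not treat it as a template reference. Using ${AWS::Region} instead of a hardcoded region is also my assumption:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    # ${!VAR} renders as ${VAR} after substitution, so bash sees normal variables.
    AWS_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    ROOT_DISK_ID=$(aws ec2 describe-volumes \
      --filters Name=attachment.instance-id,Values="${!AWS_INSTANCE_ID}" \
      --query "Volumes[].VolumeId" --region ${AWS::Region} --output text)
    aws ec2 create-tags --resources "${!ROOT_DISK_ID}" \
      --tags 'Key=VolumeTagName,Value=VolumeTagValue' --region ${AWS::Region}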

AWS CLI Windows Command to Terminate All EC2 Instances

I need a single Windows CMD command that terminates all instances in the Ohio region. I found this command, but it's not working:
aws ec2 terminate-instances \
--region us-east-2 \
--instance-ids (aws ec2 describe-instances --query "Reservations[].Instances[].[InstanceId]" --region us-east-2)
Try this out in PowerShell:
foreach ($id in (aws ec2 describe-instances --query "Reservations[].Instances[].[InstanceId]" --output text --region us-east-2)) { aws ec2 terminate-instances --instance-ids $id --region us-east-2 }
You can pass the --dry-run flag to terminate-instances first if you'd like to confirm what would be terminated.
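Since the question asked for plain CMD rather than PowerShell, here is a hedged one-liner sketch (my assumption: with --output text the ID list comes back on one tab-separated line, which cmd.exe then splits into separate arguments):
for /f "delims=" %i in ('aws ec2 describe-instances --region us-east-2 --query "Reservations[].Instances[].InstanceId" --output text') do aws ec2 terminate-instances --region us-east-2 --instance-ids %i
In a .bat file, double the percent signs (%%i).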

Know EC2 Instances by region

How to know EC2 instances by region from aws-cli?
Desired output:
Region      Name
us-west-1   instance1
us-west-1   instance2
us-west-2   instance1
us-east-1   instance1
You can only list instances via the CLI from one region at a time. So you would write a script that loops through each region, getting the instances in each region.
Here's a good starting point for a script:
#!/bin/bash
all_regions="us-east-1 us-east-2 us-west-1 us-west-2"
echo "Region Name Instance ID"
for region in ${all_regions}; do
    aws ec2 describe-instances --region ${region} | \
        grep '"InstanceId":' | \
        perl -pe "s/.*: \"(i-.*?)\".*/${region} \1/"
done
The aws command above is the AWS command line interface:
https://aws.amazon.com/cli/
describe-instances is one of the commands for the AWS CLI:
http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
grep and perl are standard utilities.
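A hedged alternative sketch (my addition) that avoids scraping the JSON with grep/perl by using --query; the Name-tag lookup mirrors the Tags[?Key==`EIP`] pattern used in a later answer, and falling back to the instance ID when the tag is absent is my assumption:
#!/bin/bash
# For every region, print "<region> <Name tag or instance ID>".
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
    aws ec2 describe-instances --region "$region" \
        --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value, InstanceId]' \
        --output text |
    while read -r name id; do
        [ "$name" = "None" ] && name="$id"   # text output renders a missing tag as "None"
        echo "$region $name"
    done
done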

Mount EBS volumes automatically with EBS volume ID only

Imagine you have a set of EBS volumes for data that you frequently mount to an EC2 node that changes over time (because you kill it whenever you no longer need it and create a new one when you need it again), and on every creation the EC2 instance could have a different virtualization type, OS, instance type and so on (for whatever reason). What is the best way to automatically mount these EBS volumes on a given EC2 instance when all you have is the EBS volume ID and access to the EC2 API to get the EBS device name?
Is there any program available to do so?
By the way, I am not talking about attaching the volumes; I'm interested in automatically mounting them to known directories on the OS file system on instance creation, given that the device name on the OS varies from OS to OS compared to the device name in EC2, and it is also preferred to use a UUID in /etc/fstab instead of a device name.
Use filesystem labels:
$ tune2fs -L "disk1" /dev/xvdf
$ tune2fs -L "disk2" /dev/xvdg
In your /etc/fstab:
LABEL=disk1 /disk1 auto defaults 0 2
LABEL=disk2 /disk2 auto defaults 0 2
In your /etc/rc.local:
# Note: You could store the volume-ids and devices in the ec2 tags of your instance.
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id $INSTANCE_ID --device /dev/xvdf
aws ec2 attach-volume --volume-id vol-1234abcf --instance-id $INSTANCE_ID --device /dev/xvdg
# wait for the volumes to become available before mounting
until [ "$(aws ec2 describe-volume-status --volume-ids vol-1234abcd --query 'VolumeStatuses[0].VolumeStatus.Status' --output text)" = ok ]; do sleep 5; done
until [ "$(aws ec2 describe-volume-status --volume-ids vol-1234abcf --query 'VolumeStatuses[0].VolumeStatus.Status' --output text)" = ok ]; do sleep 5; done
# mount /etc/fstab entries
mount -a
# I also store the EIP as a tag
EIP="$(aws ec2 describe-instances --instance-id $INSTANCE_ID --query 'Reservations[*].Instances[*].[Tags[?Key==`EIP`]|[0].Value]' --output text)"
if [ $? -eq 0 ] && [ "$EIP" != "" ] && [ "$EIP" != "None" ]; then
aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip "$EIP" --query 'return' --output text
fi
You could script this using the AWS CLI and the attach-volume command.
Based on the AWS CLI example, your command would look similar to:
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id i-abcd1234 --device /dev/sdf
I would also suggest creating an IAM role and attaching it to the EC2 instances that you launch, so that you do not have to put any IAM user's credentials on the instance.
You mentioned that you may be attaching the volumes to different operating systems across EC2 launches; in that case, all the OSs would have to support the filesystem type of the partitions on the volumes they wish to mount.
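Since the question specifically prefers mounting by UUID when only the volume ID is known, here is a hedged sketch (my addition) for Nitro/NVMe instance types, where the EBS volume ID is exposed in the /dev/disk/by-id symlink name; the mount point /data1 and the volume ID are placeholders:
#!/bin/bash
# Resolve a volume ID to its local device via the NVMe by-id symlink,
# then add a UUID-based fstab entry so the mount survives device-name changes.
VOL_ID="vol-1234abcd"                                  # placeholder volume ID
LINK="/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_${VOL_ID/-/}"
DEV=$(readlink -f "$LINK")
UUID=$(blkid -s UUID -o value "$DEV")
mkdir -p /data1
grep -q "UUID=$UUID" /etc/fstab || echo "UUID=$UUID /data1 auto defaults,nofail 0 2" >> /etc/fstab
mount -a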