Mount EBS volumes automatically with EBS volume ID only - amazon-web-services

Imagine you have a set of EBS volumes for data, and you frequently mount this same set of EBS volumes to an EC2 node that changes over time (because you kill it whenever you no longer need it and create a new one when you need it again), but on every creation the EC2 instance could have a different virtualization type, OS, instance type and so on (for whatever reason). What is the best way to automatically mount these EBS volumes on a given EC2 instance when all you have is the EBS volume ID and access to the EC2 API to get the EBS device name?
Is there any program available to do so?
To be clear, I am not talking about attaching the volumes; I am interested in automatically mounting them to known directories on the OS file system at instance creation, given that the device name seen by the OS varies from OS to OS compared to the device name reported by EC2, and that it is preferred to use UUIDs in /etc/fstab instead of device names.

Use filesystem labels:
$ tune2fs -L "disk1" /dev/xvdf
$ tune2fs -L "disk2" /dev/xvdg
In your /etc/fstab:
LABEL=disk1 /disk1 auto defaults 0 2
LABEL=disk2 /disk2 auto defaults 0 2
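If you prefer UUIDs, as mentioned in the question, a rough equivalent sketch would be the following (the UUID value is only a placeholder for whatever blkid reports on your volume):
$ blkid /dev/xvdf
UUID=123e4567-e89b-12d3-a456-426614174000 /disk1 auto defaults 0 2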
In your /etc/rc.local:
# Note: You could store the volume-ids and devices in the ec2 tags of your instance.
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id $INSTANCE_ID --device /dev/xvdf
aws ec2 attach-volume --volume-id vol-1234abcf --instance-id $INSTANCE_ID --device /dev/xvdg
# wait for the volumes to be attached and pass status checks before mounting
until [ "$(aws ec2 describe-volume-status --volume-id vol-1234abcd --query 'VolumeStatuses[0].VolumeStatus.Status' --output text)" = ok ]; do sleep 5; done
until [ "$(aws ec2 describe-volume-status --volume-id vol-1234abcf --query 'VolumeStatuses[0].VolumeStatus.Status' --output text)" = ok ]; do sleep 5; done
# mount /etc/fstab entries
mount -a
# I also store the EIP as a tag
EIP="$(aws ec2 describe-instances --instance-id $INSTANCE_ID --query 'Reservations[*].Instances[*].[Tags[?Key==`EIP`]|[0].Value]' --output text)"
if [ $? -eq 0 ] && [ "$EIP" != "" ] && [ "$EIP" != "None" ]; then
aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip "$EIP" --query 'return' --output text
fi
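One detail this sketch assumes: the mount points must already exist, and on systemd-based distributions /etc/rc.local has to be executable for the script to run at boot:
mkdir -p /disk1 /disk2
chmod +x /etc/rc.local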

You could script this using the AWS CLI and the attach-volume command.
Based on the AWS CLI examples, your command would look similar to:
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id i-abcd1234 --device /dev/sdf
I would also suggest creating an IAM role and attaching it to the EC2 instances that you launch so that you do not have to put any IAM users' credentials on the instance.
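A rough sketch of that setup with the AWS CLI (the role, profile, and file names here are hypothetical, and the actions mirror the calls used in the scripts above):
# Trust policy so EC2 can assume the role.
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
# Permissions roughly matching the attach/describe/associate calls used above.
cat > ebs-mounter-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:AttachVolume", "ec2:DescribeVolumeStatus", "ec2:DescribeInstances", "ec2:AssociateAddress"],
    "Resource": "*"
  }]
}
EOF
aws iam create-role --role-name ebs-mounter --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name ebs-mounter --policy-name ebs-mounter --policy-document file://ebs-mounter-policy.json
aws iam create-instance-profile --instance-profile-name ebs-mounter
aws iam add-role-to-instance-profile --instance-profile-name ebs-mounter --role-name ebs-mounter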
You mentioned that you may be attaching the volume to different operating systems across EC2 launches; in that case, all of the OSs would have to support the filesystem type of the partitions on the volume they wish to mount.

Related

Weekly scheduled AMI backup of an Amazon EC2 instance with a root volume

I have DB instances in my AWS account. Many volumes are attached to one instance. I want to create an AMI of an Amazon EC2 instance with only its root volume on a weekly basis. At any point in time I should have the latest AMIs for an instance.
I have tried Systems Manager. It creates snapshots of all volumes attached to the instance.
I have written a Bash script to create an AMI of an instance with a root volume. I need an approach to delete older images.
Note: The instance should not reboot during AMI creation.
How can I update the script, or is there another way to achieve it?
#!/bin/bash
# Build a block-device-mapping list that excludes every non-root device from the AMI.
root_device=$(aws ec2 describe-instances --instance-ids i-12345 --query 'Reservations[*].Instances[*].RootDeviceName' --output text)
echo "root device is $root_device"
devices=$(for i in $(aws ec2 describe-instances --instance-ids i-12345 --query 'Reservations[*].Instances[*].BlockDeviceMappings[*].DeviceName' --output text); do
    if [ "$i" != "$root_device" ]; then
        echo "DeviceName=$i,NoDevice="
    fi
done)
aws ec2 create-image --instance-id i-12345 --block-device-mappings $devices --name "test-ami" --no-reboot
I have created a lambda function to create an AMI on a weekly basis. That solved my problem.
Another advantage is that, irrespective of the OS, I can use the function to take the AMI. :)
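For the "delete older images" part of the question, a hedged AWS CLI sketch (the 30-day retention and the test-ami name prefix are assumptions, matching the name used in the script above) could look like this:
#!/bin/bash
# Hypothetical cleanup: deregister AMIs named test-ami* that are older than 30 days.
cutoff=$(date -d '30 days ago' +%s)
aws ec2 describe-images --owners self --filters "Name=name,Values=test-ami*" \
    --query 'Images[].[ImageId,CreationDate]' --output text |
while read -r image_id creation_date; do
    if [ "$(date -d "$creation_date" +%s)" -lt "$cutoff" ]; then
        echo "Deregistering $image_id"
        aws ec2 deregister-image --image-id "$image_id"
        # Note: the snapshots behind the AMI are not deleted automatically.
    fi
done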

Assign a static elastic IP to an instance in Autoscaling Group

I have one instance in an ASG, and I need to assign an Elastic IP to that instance. When the instance health check fails, the newly launched instance should get the same Elastic IP. The IAM role and everything else is in order.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
MAXWAIT=3
ALLOC_ID=${IPAddresses}
echo "Checking if EIP with ALLOC_ID[$ALLOC_ID] is free...."
ISFREE=$(aws ec2 describe-addresses --allocation-ids $ALLOC_ID --query Addresses[].InstanceId --output text --region ${AWS::Region})
STARTWAIT=$(date +%s)
while [ ! -z "$ISFREE" ]; do
if [ "$(($(date +%s) - $STARTWAIT))" -gt $MAXWAIT ]; then
echo "WARNING: We waited 30 seconds, we're forcing it now."
ISFREE=""
else
echo "Waiting for EIP with ALLOC_ID[$ALLOC_ID] to become free...."
sleep 3
ISFREE=$(aws ec2 describe-addresses --allocation-ids $ALLOC_ID --query Addresses[].InstanceId --output text --region ${AWS::Region})
fi
done
echo Running: aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOC_ID --allow-reassociation --region ${AWS::Region}
aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOC_ID --allow-reassociation --region ${AWS::Region}
yum install jq -y
I am not sure how to take that IP from the resource itself and pass it in as user data in the Launch Configuration.
In the CFN, it would look similar to the following:
Resources:
  MyEIP:
    Type: AWS::EC2::EIP
    Properties: {}
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash -xe
            EIP_IP=${MyEIP}
            echo ${!EIP_IP}
            # use aws cli to attach EIP_IP to the instance
An instance role would be required as well, with permissions to associate the EIP.
From the docs about !Ref, which is what ${MyEIP} resolves to inside !Sub:
When you pass the logical ID of this resource to the intrinsic Ref function, Ref returns the Elastic IP address.
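A hedged sketch of what that "use aws cli" comment could expand to, shown as the shell body that would sit inside the Fn::Base64 !Sub block above (only ${MyEIP} is substituted by CloudFormation; plain $VAR references are left alone), and assuming the instance role allows ec2:AssociateAddress:
#!/bin/bash -xe
EIP_IP=${MyEIP}    # resolved by !Sub to the Elastic IP address
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
aws ec2 associate-address --instance-id "$INSTANCE_ID" --public-ip "$EIP_IP" --region "$REGION"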
With EC2 & Auto scaling, You need using user data in EC2 to Auto Attach Elastic IP to EC2 Instance For Auto scaling
#!/bin/bash
aws configure set aws_access_key_id "XYZ..."
aws configure set aws_secret_access_key "ABC..."
aws configure set region "ap-..."
aws ec2 associate-address --instance-id "$(curl -X GET "http://169.254.169.254/latest/meta-data/instance-id")" --public-ip your_elastic_IP
Note: you should create a new IAM user that has only the associate-address permission and generate access keys for it.
Hope it helps you :)
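A hedged sketch of the minimal policy that note refers to (the policy and file names are hypothetical):
cat > associate-address-only.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:AssociateAddress",
    "Resource": "*"
  }]
}
EOF
aws iam create-policy --policy-name associate-address-only --policy-document file://associate-address-only.json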

How to take EBS snapshot in Boto3 only for running instances?

I am currently migrating our automated EBS snapshots from a Bash script to Python Boto3. The original Bash script was just the one line below:
ec2-describe-instances --filter "instance-state-code=16" | grep "vol-" | awk '{print $3}' | xargs -n 1 -t ec2-create-snapshot -d "automated daily backup"
Instance state code 16 refers to running EC2 instances. I am new to Boto3; I have searched everywhere, and the closest I can find is taking snapshots of attached volumes, but that is not good enough, as stopped instances would still be snapshotted every night even though nothing has changed on their EBS volumes.
With boto3, you can create a filter for the ec2 resource, where you get only the running instances. From the resulting list of instances, iterate over each of them, and check their block_device_mappings.
You can get the volume-id from the above dictionary. Now, all you need to do is create a snapshot.
Rough code would be:
import boto3

ec2 = boto3.resource('ec2')

# Snapshot every EBS volume attached to a running instance.
for instance in ec2.instances.filter(
        Filters=[{
            'Name': 'instance-state-name',
            'Values': ['running']
        }]):
    for device in instance.block_device_mappings:
        ec2.create_snapshot(VolumeId=device.get('Ebs').get('VolumeId'))
This doesn't answer your boto question, but I notice you are using the old-style command-line tools. These days, it is recommended to use the AWS Command-Line Interface (CLI), which has some great capabilities.
For example, this command will list the Volume ID for all EBS volumes attached to instances:
aws ec2 describe-instances --query Reservations[*].Instances[*].BlockDeviceMappings[*].Ebs.VolumeId --output text
You could then add a filter to only show running instances:
aws ec2 describe-instances --query Reservations[*].Instances[*].BlockDeviceMappings[*].Ebs.VolumeId --filter Name=instance-state-name,Values=running --output text
Then you could put it within another command to snapshot volumes of running instances:
aws ec2 create-snapshot --volume-id `aws ec2 describe-instances --query Reservations[*].Instances[*].BlockDeviceMappings[*].Ebs.VolumeId --filter Name=instance-state-name,Values=running --output text`
No strange awk/grep commands required!
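One caveat worth hedging: create-snapshot accepts a single --volume-id, so if more than one volume matches you would loop instead, for example with xargs as in the original question (description text is just an example):
aws ec2 describe-instances --filter Name=instance-state-name,Values=running \
    --query Reservations[*].Instances[*].BlockDeviceMappings[*].Ebs.VolumeId --output text |
xargs -n 1 -t aws ec2 create-snapshot --description "automated daily backup" --volume-id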

Can we pass CLI command in user data for EC2 to auto attach and mount EBS volume?

I am using auto-scaling with a desired count of 1 for the master node. In case the instance terminates, in order to maintain high availability, we need to attach the same EBS volume from the previously terminated instance to the newly created one.
Given that the CLI is configured on my AMI, I tried each of the following in user data; however, it did not work.
#!/bin/bash
EC2_INSTANCE_ID=$(ec2metadata --instance-id)
aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $EC2_INSTANCE_ID --device /dev/sdk
#!/bin/bash
echo "aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk" > /tmp/xyz.sh
sudo chmod 755 /tmp/xyz.sh
sudo sh /tmp/xyz.sh 2>>
#!/bin/bash
var='ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk'
aws "$var"
aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk
Appreciate your help!
It probably did not work because an EBS volume can only be attached to a single instance at a time. If it did not work, you should have error messages in response to the CLI commands to help you understand why, so check the instance's log.
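If you do stick with this approach, a hedged user-data sketch that first waits for the volume to be released by the terminated instance (reusing the question's volume ID and device name) would be something like:
#!/bin/bash
VOLUME_ID=vol-777099d8
INSTANCE_ID=$(ec2metadata --instance-id)
# Wait until the volume is detached from the old instance, then attach it here.
aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
aws ec2 attach-volume --volume-id "$VOLUME_ID" --instance-id "$INSTANCE_ID" --device /dev/sdk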
I think you should revisit your architecture a bit, because trying to do this sends up a red flag for me. First, an HA architecture should not rely on a single running instance; a good architecture would remain HA as instances are scaled up and down. If you have data that needs to be available to more than one instance, then you should use S3 or EFS to store that data, not an EBS volume.

aws command line interface - aws ec2 wait - Max attempts exceeded

I am working on a shell script, which does the following:
creates a snapshot of an EBS volume;
creates an AMI image based on this snapshot.
1) I use the following command to create the snapshot:
SNAPSHOT_ID=$(aws ec2 create-snapshot "${DRYRUN}" --volume-id "${ROOT_VOLUME_ID}" --description "${SNAPSHOT_DESCRIPTION}" --query 'SnapshotId')
2) I use a waiter to wait for the completed state:
aws ec2 wait snapshot-completed --snapshot-ids "${SNAPSHOT_ID}"
When I test it with an 8 GB EBS volume, everything goes well.
When it is 40 GB, I get an exception:
Waiter SnapshotCompleted failed: Max attempts exceeded
Probably, 40 GB takes more time than the 8 GB one and just needs a longer wait.
The AWS docs (http://docs.aws.amazon.com/cli/latest/reference/ec2/wait/snapshot-completed.html) don't document any timeout or attempt-count option.
Maybe some of you have faced the same issue?
So, finally, I used the following way to solve it:
Create the snapshot
Use a loop to check the exit status of the command aws ec2 wait snapshot-completed
If the exit status is not 0, print the current state and progress and run the waiter again.
# Create snapshot
SNAPSHOT_DESCRIPTION="Snapshot of Primary frontend instance $(date +%Y-%m-%d)"
SNAPSHOT_ID=$(aws ec2 create-snapshot "${DRYRUN}" --volume-id "${ROOT_VOLUME_ID}" --description "${SNAPSHOT_DESCRIPTION}" --query 'SnapshotId')
while [ "${exit_status}" != "0" ]
do
SNAPSHOT_STATE="$(aws ec2 describe-snapshots --filters Name=snapshot-id,Values=${SNAPSHOT_ID} --query 'Snapshots[0].State')"
SNAPSHOT_PROGRESS="$(aws ec2 describe-snapshots --filters Name=snapshot-id,Values=${SNAPSHOT_ID} --query 'Snapshots[0].Progress')"
echo "### Snapshot id ${SNAPSHOT_ID} creation: state is ${SNAPSHOT_STATE}, ${SNAPSHOT_PROGRESS}%..."
aws ec2 wait snapshot-completed --snapshot-ids "${SNAPSHOT_ID}"
exit_status="$?"
done
If you have something that can improve it, please share with us.
You should probably use until in bash; it looks a bit cleaner and you don't have to repeat yourself.
echo "waiting for snapshot $snapshot"
until aws ec2 wait snapshot-completed --snapshot-ids $snapshot 2>/dev/null
do
do printf "\rsnapshot progress: %s" $progress;
sleep 10
progress=$(aws ec2 describe-snapshots --snapshot-ids $snapshot --query "Snapshots[*].Progress" --output text)
done
aws ec2 wait snapshot-completed takes a while to time out. This snippet uses aws ec2 describe-snapshots to get the progress. When it's 100% it calls snapshot-completed.
# create snapshot
SNAPSHOTID=$(aws ec2 create-snapshot --volume-id $VOLUMEID --output text --query "SnapshotId")
echo "Waiting for Snapshot ID: $SNAPSHOTID"
SNAPSHOTPROGRESS=$(aws ec2 describe-snapshots --snapshot-ids $SNAPSHOTID --query "Snapshots[*].Progress" --output text)
while [ "$SNAPSHOTPROGRESS" != "100%" ]
do
sleep 15
echo "Snapshot ID: $SNAPSHOTID $SNAPSHOTPROGRESS"
SNAPSHOTPROGRESS=$(aws ec2 describe-snapshots --snapshot-ids $SNAPSHOTID --query "Snapshots[*].Progress" --output text)
done
aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOTID"
This is essentially the same thing as above, but prints out a progress message every 15 seconds. Snapshots that are completed return 100% immediately.
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-retries.html
You can set an environment variable or use the config file to increase the number of retry attempts.
AWS_MAX_ATTEMPTS=100
Or in ~/.aws/config:
[default]
retry_mode = standard
max_attempts = 6
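For example, the variable can be set inline for just the waiter call (a small sketch following that suggestion, reusing the SNAPSHOT_ID variable from the script above):
AWS_MAX_ATTEMPTS=100 aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOT_ID"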
ISSUE: In CI/CD we had a command to wait for the ECS service to become stable, and we got this error:
aws ecs wait services-stable \
--cluster MyCluster \
--services MyService
ERROR MSG : Waiter ServicesStable failed: Max attempts exceeded
FIX
In order to fix this issue we followed these docs:
-> https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/load-balancer-healthcheck.html
aws elbv2 modify-target-group --target-group-arn <arn of target group> --healthy-threshold-count 2 --health-check-interval-seconds 5 --health-check-timeout-seconds 4
-> https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/load-balancer-connection-draining.html
aws elbv2 modify-target-group-attributes --target-group-arn <arn of target group> --attributes Key=deregistration_delay.timeout_seconds,Value=10
This fixed the issue.
In case you have more target groups to edit, just output the target group ARNs to a file and run this in a loop, as sketched below.
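A rough sketch of that loop (the file name is arbitrary, and this assumes you want the same health-check and draining settings for every target group):
aws elbv2 describe-target-groups --query 'TargetGroups[].TargetGroupArn' --output text | tr '\t' '\n' > tg-arns.txt
while read -r arn; do
    aws elbv2 modify-target-group --target-group-arn "$arn" \
        --healthy-threshold-count 2 --health-check-interval-seconds 5 --health-check-timeout-seconds 4
    aws elbv2 modify-target-group-attributes --target-group-arn "$arn" \
        --attributes Key=deregistration_delay.timeout_seconds,Value=10
done < tg-arns.txt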