Get AWS to automatically attach an EBS volume to an Ubuntu instance at startup

I would like to attach an EBS volume (not a snapshot) as a persistent store for my spot instances. I understand how to manually attach the volume, mount it, and get it to survive reboots, but how would I get it attached automatically at startup?
Is there something I could do in the user data when launching the instance?
Presently I have an AMI that I run as a spot instance. I have a separate volume that persists and is used both for input to the instance and to save results. I only ever have one instance up at a time. The AMI mounts the drive at /data. For the mount to survive reboots, I have edited /etc/fstab to include:
UUID=MY_VOLUME_UUID /data xfs defaults,nofail 0 2
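(For reference, the UUID in that fstab line can be read off the attached device with blkid; a minimal sketch, assuming the volume shows up as /dev/nvme1n1 as it does later in this question:)
# Print all tokens, or just the UUID value, for the attached volume
sudo blkid /dev/nvme1n1
sudo blkid -s UUID -o value /dev/nvme1n1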
Edited again to show Passatizhi's solution
I added the following to the Configure Instance Details > Advanced Details > User data part of the EC2 launch wizard:
#!/bin/bash
# Look up this instance's ID and region from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
# Attach the persistent volume to this instance (full path to aws needed; see note below)
/home/ubuntu/miniconda3/bin/aws ec2 attach-volume --volume-id vol-myVol12345 --instance-id $INSTANCE_ID --device /dev/sdf
# Give the attachment time to complete, then mount the volume
sleep 10
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data
Note:
I needed to add the full path to aws to get it to work. Also, as the AMI already has /data set up, I don't need the sudo mkdir -p /data line.

#!/bin/bash
INSTANCE_ID=$(curl 169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
/bin/aws ec2 attach-volume --volume-id vol-0fdb738415896f8f6 --instance-id $INSTANCE_ID --device /dev/sdf
sleep 10
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data
Try /bin/aws instead of aws. I used a t3.small, so the device shows up as /dev/nvme1n1.
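The sleep 10 is a guess at how long the attachment takes. A more robust variant polls until the block device actually appears before mounting; this is only a sketch, assuming the same placeholder volume ID and that the device appears as /dev/nvme1n1:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
# Use the full path to aws if it is not on cloud-init's PATH
/bin/aws ec2 attach-volume --volume-id vol-myVol12345 --instance-id "$INSTANCE_ID" --device /dev/sdf
# Wait until the kernel exposes the attached volume instead of sleeping a fixed time
while [ ! -b /dev/nvme1n1 ]; do sleep 1; done
# User data already runs as root, so sudo is not strictly needed here
mkdir -p /data
mount /dev/nvme1n1 /data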

Related

Can't Access Data in EBS Volume When I Attached Volume To New Instance

My directory looks like it is mounted, but if I terminate the instance and attach this volume to a new instance, I cannot see the data from the old instance. Maybe I am missing some part of the mounting process. Here is how I mount it on the new instance:
aws ec2 attach-volume --device /dev/xvdc --instance-id --volume-id --region=us-east-1
sudo mkfs -t ext4 /dev/xvdc
sudo mkdir /backup
sudo mount /dev/xvdc /backup/
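The likely culprit is the mkfs step: it recreates the filesystem on every attach, which erases whatever the previous instance wrote. A sketch that only formats the device when no filesystem is present yet, assuming the same device and mount point as above:
# Format only if blkid finds no existing filesystem on the device
if ! sudo blkid /dev/xvdc; then
  sudo mkfs -t ext4 /dev/xvdc
fi
sudo mkdir -p /backup
sudo mount /dev/xvdc /backup/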

How can I attach a persistent EBS volume to an EC2 Linux launch template that is used in an autoscaling group?

To clarify: my Auto Scaling group removes all instances and their root EBS volumes during inactive hours, then, once inside active hours, recreates them and installs all necessary base programs. However, I have a smaller EBS volume that is persistent and holds code and data I do not want wiped out during down times. I am currently attaching it manually via the console and mounting it every time I am working inside active hours, using the commands below.
sudo mkdir userVolume
sudo mount /dev/xvdf userVolume
How can I automatically attach and mount this volume to a folder? This is all for the sake of minimizing cost by limiting uptime to when I can actually be working on it.
Use this code:
#!/bin/bash
OUTPUT=$(curl http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id $OUTPUT --region ap-southeast-1
Set your volume ID and region.
Refer to this link for further details: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-spot-instance-attach-ebs-volume/
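That script only attaches the volume; to also mount it to a folder as asked, the user data can keep going after the attach. A sketch under assumptions taken from the question (device /dev/xvdf, mount point /userVolume), with a short wait for the device to appear:
# Wait for the attached volume to show up, then mount it where the code/data lives
while [ ! -b /dev/xvdf ]; do sleep 1; done
# Note: on Nitro instance types the device may appear as /dev/nvme1n1 instead of /dev/xvdf
mkdir -p /userVolume
mount /dev/xvdf /userVolume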

Attaching an EBS volume to AWS Batch Compute Environments

I want to set up AWS Batch to run a few Python scripts that do some batch operations on files fetched from S3; after processing, the results need to be saved to a volume.
For this I want to configure compute environments in AWS Batch.
I wish to use spot instances, but I need my EBS volume to still be there even after an instance is terminated, and if a new instance is spun up it has to mount the same volume as used before.
Create an instance (launch) template and provide a bootstrap script; for the mentioned case, something like:
sudo mkdir -p /<any directory name where the volume will be mounted, e.g. dir>
aws ec2 attach-volume --volume-id <volume_id> --instance-id $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf
sudo mount /dev/sdf /<above mentioned dir, e.g. dir>
In the AWS Batch compute environment definition, use the above template to launch your EC2 machines.
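One prerequisite that is easy to miss: for the bootstrap script to call attach-volume from inside the instance, the instance profile used by the template must allow it. A sketch of adding such an inline policy with the CLI; the role name, policy name, and the wide-open Resource are assumptions, so tighten them for real use:
# Grant the (hypothetical) Batch instance role permission to attach volumes
aws iam put-role-policy \
  --role-name MyBatchInstanceRole \
  --policy-name AllowAttachVolume \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["ec2:AttachVolume", "ec2:DescribeVolumes"],
      "Resource": "*"
    }]
  }'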

Can we pass CLI command in user data for EC2 to auto attach and mount EBS volume?

I am using Auto Scaling with a desired count of 1 for the master node. In case the instance terminates, in order to maintain high availability we need to attach the same EBS volume from the previously terminated instance to the newly created one.
Provided the CLI is configured on my AMI, I tried each of the following in the user data, but none of them worked.
#!/bin/bash
# Attempt 1: look up the instance ID with ec2metadata and attach directly
EC2_INSTANCE_ID=$(ec2metadata --instance-id)
aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $EC2_INSTANCE_ID --device /dev/sdk
#!/bin/bash
# Attempt 2: write the command to a script and run it
echo "aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk" > /tmp/xyz.sh
sudo chmod 755 /tmp/xyz.sh
sudo sh /tmp/xyz.sh 2>>
#!/bin/bash
# Attempt 3: build the arguments in a shell variable
var='ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk'
aws "$var"
# Attempt 4: run the command directly
aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk
Appreciate your help!
It probably did not work because an EBS volume can only be attached to a single instance at a time. If it did not work, you should have error messages in response to the CLI commands to help you understand why, so check the instance's log.
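On stock Ubuntu AMIs, the output of a user-data script normally ends up in cloud-init's log, so that is the place to look; the path below is assumed for standard Ubuntu cloud images:
# Show the tail of the user-data script output, including any CLI error messages
sudo tail -n 50 /var/log/cloud-init-output.log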
I think you should revisit your architecture a bit, because trying to do this raises a red flag for me. First, an HA architecture should not have a single instance running. A good architecture remains HA as instances are scaled up and down. If you have data that needs to be available to more than one instance, you should use S3 or EFS to store that data, not an EBS volume.

Add EBS to Ubuntu EC2 Instance

I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)".
I tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device":
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new device name, and it told me it wasn't the right file system:
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configuration into the /etc/fstab file as instructed on http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same wrong-filesystem error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really mapped to '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it'll just take care of everything for me when I attach it to a instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between attaching it and mounting it. So the entire process, with your sample mount point, is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
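Before the mkfs step, it is worth double-checking that the device really is blank; this check is a suggestion rather than part of the original steps, with the device name taken from above. On an unformatted volume the command simply reports "data":
# "/dev/xvdf: data" means no filesystem; output mentioning ext4 means it is already formatted
sudo file -s /dev/xvdf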
Step 1: Create the volume.
Step 2: Attach it to your instance's root volume.
Step 3: Run sudo resize2fs -p /dev/xvde.
Step 4: Restart Apache: sudo service apache2 restart.
Step 5: Run df -h.
You can see the total volume attached to your instance.