Is the EBS volume mounted, and where?

On my EC2 instance, which has a 100GB EBS volume attached, I run this command:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 100G 0 disk
└─xvda1 202:1 0 8G 0 part /
Here is the file /etc/fstab:
UUID=ue9a1ccd-a7dd-77f8-8be8-08573456abctkv / ext4 defaults 1 1
I want to understand: why does only the 8GB partition have a mount point?
Also, does mounting a volume on root '/' mean that all the content of root is stored on the EBS volume?

I want to understand: why does only the 8GB partition have a mount point?
Because additional volumes are not formatted/mounted by default. AWS does not know whether you'd like ext4, NTFS, or something else, nor which mount point you'd like to use.
Also, does mounting a volume on root '/' mean that all the content of root is stored on the EBS volume?
Yes, if you have an EBS-backed instance (as opposed to a so-called instance store-backed instance) and if you do not have other volumes mounted (not to be confused with 'attached').
P.S. As far as I can see, you initially created an 8GB volume and then resized it via the AWS console to 100GB. Note that you resized the EBS volume (xvda) but did not resize the partition (xvda1). AWS will not resize it automatically for the same reason: it doesn't know how you're going to use the extra space.
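If the goal is simply to let the root filesystem use the full 100GB, a minimal sketch, assuming an ext4 root filesystem and that the cloud-utils-growpart package is installed (both typical for Amazon Linux), would be:
[ec2-user ~]$ sudo growpart /dev/xvda 1      # grow partition 1 (xvda1) to fill the 100GB disk
[ec2-user ~]$ lsblk                          # xvda1 should now show 100G
[ec2-user ~]$ sudo resize2fs /dev/xvda1      # grow the ext4 filesystem into the enlarged partition
[ec2-user ~]$ df -h /                        # verify the extra space is visible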

Related

How to access the EBS volume on AWS via EC2?

I have an EBS (Elastic Block Store) volume on AWS attached to my EC2 instance.
However, how do I make all of that space available to the EC2 instance?
When I run
lsblk
I get
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 80G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 8G 0 disk
So it looks like only the 8GB partition is mounted, not the whole 80GB.
How do I mount the extra space?
I saw the article at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
It says I should format the volume, but as I see it already has a partition xvda1 on it, so I don't want to accidentally format everything before mounting it again.
Any idea how to make it work and use the full 80GB?
Thanks!
Thank you to @jellycsc for the reference above.
Check file formats on volumes:
sudo file -s /dev/xvd*
List block devices:
lsblk
Then grow the partition on the disk:
sudo growpart /dev/xvda 1
# 1 is the partition number on /dev/xvda
Verify with:
lsblk
Then extend the file system (the previous step grew only the partition).
Check:
df -h
Then check the filesystem:
sudo file -s /dev/xvd*
and apply:
sudo resize2fs /dev/xvda1
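If df -T or the file -s output shows the root filesystem is XFS rather than ext4 (the default on Amazon Linux 2, for example), resize2fs will refuse to run; a hedged alternative for that case is to grow the XFS filesystem by its mount point:
sudo xfs_growfs -d /    # extend an XFS root filesystem to fill its (already grown) partition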

how to make an attached EBS volume accessible?

I changed my instance type, and now previously attached volumes are not available at startup. How do I attach and mount volumes?
In the volume info in the AWS console:
Attachment information i-e85c62d0 (hongse):/dev/sdf (attached)
however there is nothing at /dev/sdf on the instance.
I tried to mount it following the info on the AWS site such as:
ubuntu@hongse:~$ sudo mkdir /ebs1
ubuntu@hongse:~$ sudo mount /dev/sdf /ebs1
mount: special device /dev/sdf does not exist
but failed.
What other steps could I try to mount an existing volume?
OK, you're using Ubuntu in AWS with an EBS volume. Try this:
ubuntu@hostname1:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 128G 0 disk
└─xvda1 202:1 0 128G 0 part /
xvdf 202:80 0 1000G 0 disk
Note that the xvdf (1TB) drive is not mounted in my example.
You will want to type the following to mount your disk:
ubuntu@hostname1:~$ sudo mount /dev/xvdf /ebs1
NOTE: I don't know why the AWS console shows /dev/sdf while the actual host shows /dev/xvdf, but that's the way it is.
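For completeness, a minimal sketch of making the mount survive reboots, assuming the volume already contains an ext4 filesystem (it came from an existing volume; if it were brand new you would first have to create a filesystem, which erases everything on it):
ubuntu@hostname1:~$ sudo blkid /dev/xvdf          # note the filesystem UUID
ubuntu@hostname1:~$ echo 'UUID=<uuid-from-blkid> /ebs1 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
ubuntu@hostname1:~$ sudo mount -a                 # confirm the fstab entry mounts cleanly
# Only if the volume were empty: sudo mkfs -t ext4 /dev/xvdf   (WARNING: destroys any existing data)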

AWS block device mapping to mount a snapshot while creating separate root

I want to create a new instance whose root volume comes from its AMI (sda1), while at the same time creating a secondary volume (sda2) from a snapshot.
I am using the following block device mapping to add sda2:
[
  {
    "DeviceName": "/dev/sda2",
    "Ebs": {
      "DeleteOnTermination": false,
      "SnapshotId": "snap-0daafbeb9409cb652"
    }
  }
]
However, while an sda1 volume is created from the AMI, sda2 appears to be mounted as root:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdb 202:16 0 8G 0 disk
└─xvdb1 202:17 0 8G 0 part /
What should be different to cause xvda1 (which links to sda1) to mount as root instead? I do not want to modify the AMI to do this, the starting point for this process is a stock Ubuntu image.
aws ec2 run-instances --image-id ami-c80b0aa2 ... --block-device-mappings file://mappings.json
This problem is caused by the filesystem labels on the partitions. In this specific case, both volumes carry the same label, each claiming to be the root partition, which confuses the boot process.
The solution is to clear the label on the volume that should not be mounted as the root filesystem.
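A minimal sketch of doing that, assuming both filesystems are ext4 and carry the same label (stock Ubuntu cloud images typically label the root filesystem cloudimg-rootfs); the device names are taken from the lsblk output above:
sudo e2label /dev/xvdb1        # show the label on the partition restored from the snapshot
sudo e2label /dev/xvdb1 ""     # clear it so only xvda1 matches the root label at boot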

knife-ec2 not expanding volume at bootstrap

How can I create a larger-than-8GB boot volume using knife-ec2 on an AWS HVM AMI at bootstrap?
With the old m1 instance types, I could just add --ebs-size 50 and then run resize2fs after the system bootstrapped.
When using a new HVM AMI (a t2 instance):
knife ec2 server create scott-base -N scott-base -r "role[base]" -I ami-57cfc412 --ebs-size 50
it creates the 50GB volume, but I cannot expand it after I log in.
I see this during the build:
Warning: 50GB EBS volume size is larger than size set in AMI of 8GB.
Use file system tools to make use of the increased volume size.
And when I run resize2fs, this is what I get
[root@scott-base ~]# resize2fs /dev/xvda
resize2fs 1.41.12 (17-May-2010)
resize2fs: Device or resource busy while trying to open /dev/xvda
Couldn't find valid filesystem superblock
I know I can go through the whole process of unmounting, copying, and bringing it back up. I also know I can just add a volume after the fact, but I have to believe there is an easier way at bootstrap to get an EBS volume larger than 8GB.
[root@scott-base ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 8G 0 part /
You are applying resize2fs to /dev/xvda, which is the whole disk, not a filesystem. Disks are divided into partitions, and the filesystem (ext3, ext4, etc.) is created on a partition. Your filesystem is on the partition /dev/xvda1, and that is where resize2fs must be pointed. Note, however, that resize2fs can only grow the filesystem into space the partition already has, and xvda1 is still only 8GB. See the Linux documentation on devices and partitions.
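In other words, a one-line sketch using the device names from the lsblk output above:
# resize2fs works on a filesystem, so point it at the partition; it can only grow
# into space the partition already has (and xvda1 is still only 8GB here).
[root@scott-base ~]# resize2fs /dev/xvda1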
The solution was related to the AMI itself. It turns out that some AMIs are simply not equipped to expand the root partition online. Our solution:
Launch the AMI with a larger volume, knowing the root partition would still default to 8GB
Use the cloud-init and dracut growroot modules to increase the size during the next reboot
yum install -y cloud-init
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
yum install -y dracut-modules-growroot.noarch cloud-utils-growpart
dracut -f -v
Create a personal image (AMI) from that instance
Use the personal image to launch a new instance. At boot, it will be the larger size. (An alternative using cloud-init user data is sketched below.)
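Alternatively, on AMIs whose cloud-init already includes the growpart module, cloud-config user data passed at creation time can grow the root partition on first boot without building a custom image. A sketch, assuming your knife-ec2 version supports --user-data and the AMI's cloud-init has the growpart module enabled (user-data.yml is a hypothetical file name). Save this as user-data.yml:
#cloud-config
growpart:
  mode: auto
  devices: ['/']
resize_rootfs: true
and pass it at creation time:
knife ec2 server create scott-base -N scott-base -r "role[base]" -I ami-57cfc412 --ebs-size 50 --user-data user-data.yml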

EC2 machine abruptly terminating right after start

I was in the process of shrinking the size of the EBS volume attached to my EC2 machine. There are many tutorials on how to do this (here is the one I used).
The gist is that I create a snapshot of the original drive and a smaller empty volume, then copy the contents of the snapshot to the small drive via sudo rsync -aHAXxSP /mnt/snap/ /mnt/small.
When checking the contents of the small drive, it seems to contain all of the folders/files of the original drive.
At the end, after detaching all volumes and attaching the small volume, the EC2 instance terminates right after initializing when I start it.
When I get the reason for the termination it says Instance initiated shutdown:
AI2s-MBP-7:~ i-danielk$ aws ec2 describe-instances --instance-ids i-e0ef0910
RESERVATIONS 645962089403 r-440c95a5
GROUPS sg-4cdf6427 zfei_profiler
INSTANCES 0 x86_64 tOGHd1424650611612 True xen ami-146e2a7c i-e0ef0910 r3.2xlarge zfei_profiler 2015-08-23T01:56:40.000Z /dev/xvda ebs hvm
BLOCKDEVICEMAPPINGS /dev/xvda
EBS 2015-08-23T01:55:56.000Z False attached vol-1d08fcf0
MONITORING disabled
PLACEMENT us-east-1c default
SECURITYGROUPS sg-4cdf6427 zfei_profiler
STATE 80 stopped
STATEREASON Client.InstanceInitiatedShutdown Client.InstanceInitiatedShutdown: Instance initiated shutdown
TAGS Name WIKI_TACL_MERGE
Any idea how can I figure out the real reason for my instance shutdown?
Update 1: This is really weird... "Instance Settings > Get System Log" shows nothing.
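(For reference, the same system log can also be pulled with the AWS CLI, and it sometimes takes a minute or two after the boot attempt before it is populated:)
aws ec2 get-console-output --instance-id i-e0ef0910 --output text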
Update 2: In both cases (the original volume and the small volume) the volume was attached as /dev/xvda (I tried /dev/sda but it wasn't accepted).
Update 3: Here is the content of /boot/grub/grub.conf. Based on what I see, there is nothing problematic (and nothing needs to be changed).
[ec2-user@ip-10-167-76-117 target]$ cat /boot/grub/grub.conf
# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.33-26.47.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.33-26.47.amzn1.x86_64 root=LABEL=/ console=ttyS0 LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-3.14.33-26.47.amzn1.x86_64.img
title Amazon Linux 2014.09 (3.14.27-25.47.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.27-25.47.amzn1.x86_64 root=LABEL=/ console=ttyS0
initrd /boot/initramfs-3.14.27-25.47.amzn1.x86_64.img
And this is /etc/fstab:
[ec2-user@ip-10-167-76-117 target]$ cat /etc/fstab
#
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
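Since both grub.conf (root=LABEL=/) and /etc/fstab (LABEL=/) locate the root filesystem by label, one hedged thing to verify (a sketch, not a confirmed diagnosis; /dev/xvdf1 is a placeholder for wherever the small volume appears when attached to a helper instance) is that the freshly created small filesystem actually carries that label:
sudo e2label /dev/xvdf1        # should print "/" to satisfy root=LABEL=/ and the fstab entry
sudo e2label /dev/xvdf1 /      # set the label if it is missing (use the bare device if there is no partition table)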