Fedora EC2 images running out of disk space - amazon-web-services

I am using the official Fedora EC2 cloud AMI for some development work. By default, the root device on these machines is only 2GB, regardless of which instance type I use. After installing some stuff, it always runs out of space and yum starts to complain.
[fedora@ip-10-75-10-113 ~]$ df -ah
Filesystem Size Used Avail Use% Mounted on
rootfs 2,0G 1,6G 382M 81% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devtmpfs 1,9G 0 1,9G 0% /dev
securityfs 0 0 0 - /sys/kernel/security
selinuxfs 0 0 0 - /sys/fs/selinux
tmpfs 1,9G 0 1,9G 0% /dev/shm
devpts 0 0 0 - /dev/pts
tmpfs 1,9G 172K 1,9G 1% /run
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
I don't have this problem with other EC2 images, such as CentOS or Ubuntu. Is there any way I can start the AMI with a single, unpartitioned disk? Or am I somehow using it in the wrong way?

When you create the instance, just give yourself a larger root EBS volume. I think the default is 5GB (could be very wrong). Just increase it. You can always create an AMI from your current instance and launch from that AMI with a larger EBS volume if you don't want to redo any work you've already done.
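For example, with the AWS CLI you can override the root volume size at launch with a block device mapping. A minimal sketch, assuming the AMI's root device name is /dev/sda1 (check the AMI details first) and using placeholder IDs:
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":20,"VolumeType":"gp2"}}]'
# If cloud-init does not grow the root filesystem automatically on first boot,
# growpart plus resize2fs (for ext4) can expand it into the larger volume.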

Related

Increase size of EC2 volume (non-root) on Ubuntu 18.04 (following AWS instructions fails)

There is a ton of great information on here, but I am struggling with this. I am following the instructions EXACTLY as laid out in many responses, and in AWS's own instructions as well, which are basically the same with a lot of extra (and unhelpful) information in between.
Here is what I am running and the responses I am getting. I have a secondary volume that I need to expand from 150GB to 200GB.
The thing is, before the upgrade from 16.04 to 18.04 this process worked flawlessly... Now it doesn't.
Please help.
ubuntu@hosting:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 1.6G 848K 1.6G 1% /run
/dev/nvme0n1p1 ext4 97G 55G 43G 57% /
tmpfs tmpfs 7.8G 20K 7.8G 1% /dev/shm
tmpfs tmpfs 5.0M 24K 5.0M 1% /run/lock
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/nvme2n1p1 ext4 148G 91G 51G 64% /var/www/vhosts
/dev/nvme1n1 ext4 99G 28G 67G 30% /plesk-backups
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1000
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/10046
ubuntu@hosting:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme2n1 259:0 0 200G 0 disk
└─nvme2n1p1 259:1 0 150G 0 part /var/www/vhosts
nvme1n1 259:2 0 100G 0 disk /plesk-backups
nvme0n1 259:3 0 100G 0 disk
└─nvme0n1p1 259:4 0 100G 0 part /
ubuntu@hosting:~$ sudo growpart /dev/nvme2n1 1
NOCHANGE: partition 1 is size 419428319. it cannot be grown
ubuntu@hosting:~$ sudo resize2fs /dev/nvme2n1p1
resize2fs 1.44.1 (24-Mar-2018)
The filesystem is already 39321339 (4k) blocks long. Nothing to do!
You can try the cfdisk tool as root (sudo cfdisk), allocate the free space to the partition you want to expand in its text UI (don't forget to write the changes to disk before quitting the tool), then run resize2fs again.
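As a rough end-to-end sketch, assuming your cfdisk is recent enough to offer a Resize action and the extra 50 GB has already been added to the EBS volume in the console:
sudo cfdisk /dev/nvme2n1        # select nvme2n1p1, choose Resize, then Write, then Quit
sudo partprobe /dev/nvme2n1     # re-read the partition table if the kernel has not picked it up
sudo resize2fs /dev/nvme2n1p1   # grow the ext4 filesystem into the enlarged partition
df -hT /var/www/vhosts          # confirm the new size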

AWS EC2: Inconsistent volume name

I am trying to automate the process of backing up some ec2 instance volumes with ansible's respective module.
However, when I log in to my instance:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 11M 89M 11% /run
/dev/xvda1 59G 3.2G 55G 6% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/loop4 13M 13M 0 100% /snap/amazon-ssm-agent/495
/dev/loop2 17M 17M 0 100% /snap/amazon-ssm-agent/734
/dev/loop6 88M 88M 0 100% /snap/core/5548
/dev/loop3 88M 88M 0 100% /snap/core/5662
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
/dev/loop0 88M 88M 0 100% /snap/core/5742
tmpfs 100M 0 100M 0% /run/user/1003
tmpfs 100M 0 100M 0% /run/user/1004
When I tried to use /dev/xvda1 as the volume name, I got an error:
msg: Could not find volume with name /dev/xvda1 attached to instance i-02a334fgik4062
I had to explicitly use /dev/sda1
Why this inconsistency?
That's not specific to ansible, the AWS EC2 API does the same thing, as specified in the Device Name Considerations section of their documentation; summarized here to avoid the "link-only" answer anti-pattern:
Depending on the block device driver of the kernel, the device could be attached with a different name than you specified. For example, if you specify a device name of /dev/sdh, your device could be renamed /dev/xvdh or /dev/hdh. In most cases, the trailing letter remains the same. In some versions of Red Hat Enterprise Linux (and its variants, such as CentOS), even the trailing letter could change (/dev/sda could become /dev/xvde). In these cases, the trailing letter of each device name is incremented the same number of times. For example, if /dev/sdb is renamed /dev/xvdf, then /dev/sdc is renamed /dev/xvdg. Amazon Linux creates a symbolic link for the name you specified to the renamed device. Other operating systems could behave differently.
In every case I've ever seen, the sd versions are specified to the AWS API, but they materialize as xvd (or even sometimes as nvme) on the actual instance.
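For example, you can see the API-side device name (the one the EC2 API and Ansible expect) with the AWS CLI; a sketch using a placeholder instance ID:
aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
    --query 'Volumes[].Attachments[].{Device:Device,VolumeId:VolumeId}' \
    --output table
# The Device column will typically report /dev/sda1 even though the guest OS sees /dev/xvda1.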

How to mount both sc1 volumes on an Amazon EC2 instance

My current 500 GB Amazon EBS Cold HDD (sc1) volume at /dev/sdf is full. Following the tutorial here (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#migrate-data-larger-volume), I successfully got a 1.5 TB sc1 volume, mounted at /dev/xvda, and attached it to the instance. Please note that the 500 GB sc1 (/dev/sdf) is also attached to the instance.
Sadly, when I turned on the instance, I only see this new 1.5 TB sc1 at /dev/xvda, but not the old 500 GB sc1 at /dev/sdf or its data. When I do df -h:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdg1 1.5T 34G 1.5T 3% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
If I turn off the instance, detach the 1.5 TB sc1 (/dev/xvda), keep the 500 GB sc1 (/dev/sdf) attached, and finally restart the instance, I will see the 500 GB sc1 (/dev/sdf) and its data again.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdf 500G 492G 8G 99% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
Is there any way to mount both of these volumes and see/transfer data between them on the same instance? Could any guru enlighten? Thanks.
In response to the comment:
The following is the result of "lsblk" when both the 500 GB and 1.5 TB sc1 volumes are attached.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdf 202:80 0 500G 0 disk
└─xvdf1 202:81 0 500G 0 part
xvdg 202:96 0 1.5T 0 disk
└─xvdg1 202:97 0 1.5T 0 part /
The following is the content of "/etc/fstab" when both the 500 GB and 1.5 TB sc1 volumes are attached.
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
extra comments:
UUID results
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 11 Oct 19 08:54 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvdg1
2016-10-21 update:
After trying the following:
uuidgen
tune2fs /dev/xvdf1 -U <the uuid generated before>
and making sure both volumes are attached to the instance and restarting it, only the 500 GB volume shows up.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdg1 493G 473G 20G 96% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdf 202:80 0 500G 0 disk
└─xvdf1 202:81 0 500G 0 part /
xvdg 202:96 0 1.5T 0 disk
└─xvdg1 202:97 0 1.5T 0 part /
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 11 Oct 20 20:48 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvdg1
lrwxrwxrwx 1 root root 11 Oct 20 20:48 a0161cdc-2c25-4d18-9f01-a75c6df54ccd -> ../../xvdf1
Also "sudo mount /dev/xvdg1" doesn't help as well. Could you enlighten? Thanks!
If the 2 disks are cloned and have the same UUID, they cannot be mounted at the same time, and the system will mount the first partition it finds while booting.
Generate a new UUID for your disk
uuidgen
Running this will give you a new UUID; as the name implies, it will be unique.
Apply the new UUID to your disk.
In your case /dev/xvdf1 is not mounted, so you can change its UUID:
tune2fs /dev/xvdf1 -U <the uuid generated before>
Change your mount point
It's getting better; however, as they were cloned, both disks have the same mount point, and that is not possible. You need to update your file system table (/etc/fstab).
Create a new folder which will be your mount point:
mkdir /new_drive
Mount the drive at your new mount point:
sudo mount /dev/xvdg1 /new_drive
Update /etc/fstab so the drive will be mounted correctly on the next reboot.
Update the line for the /dev/xvdg1 drive; you have something like
/dev/xvdg1 / ext4 ........
Change the 2nd column
/dev/xvdg1 /new_drive ext4 ........
All your data from the 1.5 TB volume is accessible in /new_drive. You can verify the fstab file is correct by running mount -a.
Please adapt this procedure if you want a different folder name, or if you want to keep the 1.5 TB volume as the root mount point and change the 500 GB drive instead.
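Put together, the procedure above could look like this (device names as in the question, run with sudo or as root; a sketch rather than a copy-paste recipe):
NEW_UUID=$(uuidgen)                  # generate a fresh, unique UUID
tune2fs -U "$NEW_UUID" /dev/xvdf1    # assign it to the cloned 500 GB partition
mkdir /new_drive                     # new mount point for the 1.5 TB partition
mount /dev/xvdg1 /new_drive          # mount it there
# then edit /etc/fstab so the /dev/xvdg1 entry points at /new_drive instead of /,
# and run mount -a to check the updated file for errors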

Scratch disk visibility on Google Compute Engine VM

I've started an instance whose machine type has a -d at the end (this should come with scratch disks).
But on boot, the stated disk space is not visible.
It should be:
8 vCPUs, 52 GB RAM, 2 scratch disks (1770 GB, 1770 GB)
But df -h outputs:
Filesystem Size Used Avail Use% Mounted on
rootfs 10G 644M 8.9G 7% /
/dev/root 10G 644M 8.9G 7% /
none 1.9G 0 1.9G 0% /dev
tmpfs 377M 116K 377M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 753M 0 753M 0% /run/shm
So how does one run an instance that boots from a persistent disk and still has the scratch disks available?
The thing is that I need high CPU and lots of scratch space.
df does not show scratch disks because they are not formatted and mounted. Issue the following command:
ls -l /dev/disk/by-id/
In the output there will be something like:
lrwxrwxrwx 1 root root ... scsi-0Google_EphemeralDisk_ephemeral-disk-0 -> ../../sdb
lrwxrwxrwx 1 root root ... scsi-0Google_EphemeralDisk_ephemeral-disk-1 -> ../../sdc
Then, you can use mkfs and mount the appropriate disks.
See documentation for more info.
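A minimal sketch of formatting and mounting one of the scratch disks, assuming it shows up as /dev/sdb as in the listing above (the device name and mount point here are just examples):
sudo mkfs.ext4 -F /dev/sdb          # format the ephemeral disk (destroys anything on it)
sudo mkdir -p /mnt/scratch0         # create a mount point
sudo mount /dev/sdb /mnt/scratch0   # mount it
df -h /mnt/scratch0                 # the ~1.7 TB of scratch space should now be visible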

Can I create swap file in the ephemeral storage?

First of all, this is my df -h output:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 4.2G 3.4G 56% /
udev 1.9G 8.0K 1.9G 1% /dev
tmpfs 751M 180K 750M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1.9G 0 1.9G 0% /run/shm
/dev/xvdb 394G 8.4G 366G 3% /mnt
I know that /mnt is ephemeral storage; all data stored on it will be deleted after a reboot.
Is it OK to create a /mnt/swap file to use as a swap file? I added the following line to /etc/fstab:
/mnt/swap1 swap swap defaults 0 0
By the way, what is /run/shm used for?
Thanks.
Ephemeral storage preserves data across reboots, but loses it on a stop/start cycle, so a swap file in /mnt is fine as long as you recreate it after every stop/start. Also see this: What data is stored in Ephemeral Storage of Amazon EC2 instance?
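A minimal sketch of setting up the swap file, assuming the ephemeral volume is mounted at /mnt as above and using 4 GB as an example size:
sudo dd if=/dev/zero of=/mnt/swap1 bs=1M count=4096   # create a 4 GB file
sudo chmod 600 /mnt/swap1                             # swap files should not be world-readable
sudo mkswap /mnt/swap1                                # format it as swap
sudo swapon /mnt/swap1                                # enable it immediately
swapon -s                                             # verify it is active
# With the fstab line from the question it will also be activated on boot,
# but after a stop/start the file is gone and has to be recreated first.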