Trouble Mounting EBS Volume on EC2

Good afternoon,
I am new to EC2 and have been trying to mount an EBS volume on an EC2 instance. Following the instructions from this StackOverflow question, I did the following:
1. Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
2. Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
There now appears to be a folder (or volume) at /vol, but it comes prepopulated with a folder named lost+found, and it does not have the 15GB that I assigned to the EBS volume (it reports something much smaller).
Any help you could provide would be appreciated. Thanks!
UPDATE 1
After following the first suggestion (sudo mount /dev/xvdf /vol), here is the output of df:
Filesystem     1K-blocks   Used Available Use% Mounted on
/dev/xvda1       8256952 791440   7046084  11% /
udev              294216      8    294208   1% /dev
tmpfs             120876    164    120712   1% /run
none                5120      0      5120   0% /run/lock
none              302188      0    302188   0% /run/shm
/dev/xvdf       15481840 169456  14525952   2% /vol
This might indicate that I do in fact have the 15GB on /vol. However, I still have that strange lost+found folder in there. Is that anything I should be worried about?

Nothing is wrong with your /vol; the df output shows it is mounted with the full capacity.
The lost+found directory is created by mkfs.ext4 and used for filesystem recovery (fsck stores recovered file fragments there), so it's normal to see it.
The small-size impression usually comes down to decimal versus binary units, plus filesystem overhead:
1 GiB = 2^30 bytes, while 1 GB = 10^9 bytes, so for example 16 GB ≈ 14.9 GiB.
mkfs.ext4 also reserves some blocks for metadata (and, by default, 5% for root), so df always reports slightly less than the raw volume size.
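As a quick sanity check, the conversion can be done from the shell (this awk one-liner is just illustrative):
awk 'BEGIN { printf "%.1f GiB\n", 16 * 10^9 / 2^30 }'    # prints 14.9 GiB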

For the last step, try mounting the device explicitly:
sudo mount /dev/xvdf /vol
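Once mounted, you can confirm the filesystem and its size (findmnt and df are standard util-linux/coreutils tools; /vol matches the mount point used above):
findmnt /vol    # show the device, filesystem type, and options for the mount point
df -h /vol      # show the human-readable size of just this filesystem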

Related

How to free up space on AWS ec2 Ubuntu instance running nginx and jenkins?

I have an Ubuntu EC2 instance running nginx and Jenkins. There is no more space available to do updates, and none of the commands I've tried to free up space have worked. Furthermore, when trying to reach Jenkins I'm getting a 502 Bad Gateway.
When I run sudo apt-get update I get a long list of errors but the main one that stood out was E: Write error - write (28: No space left on device)
I have no idea why there is no more space or what caused it, but df -h gives the following output:
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           394M  732K  393M   1% /run
/dev/xvda1       15G   15G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop1       56M   56M     0 100% /snap/core18/1988
/dev/loop3       34M   34M     0 100% /snap/amazon-ssm-agent/3552
/dev/loop0      100M  100M     0 100% /snap/core/10958
/dev/loop2       56M   56M     0 100% /snap/core18/1997
/dev/loop4      100M  100M     0 100% /snap/core/10908
/dev/loop5       33M   33M     0 100% /snap/amazon-ssm-agent/2996
tmpfs           394M     0  394M   0% /run/user/1000
I tried to free up space by running sudo apt-get autoremove, and it gave me E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
I ran sudo dpkg --configure -a and got dpkg: error: failed to write status database record about 'libexpat1-dev:amd64' to '/var/lib/dpkg/status': No space left on device
Lastly, I ran sudo apt-get clean; sudo apt-get autoclean and it gave me the following errors:
Reading package lists... Error!
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.
Any help to free up space and get the server running again will be greatly appreciated.
In my case, I had an app with nginx, postgresql, and gunicorn, all containerized. I followed these steps to solve my issue.
First, I tried to figure out which files were consuming the most storage using the command below:
sudo find / -type f -size +10M -exec ls -lh {} \;
The output showed that unused Docker images and related containers were the main source.
I then purged all unused, stopped, or dangling images, containers, and networks:
docker system prune -a
I was able to reclaim about 4.4 GB at the end!
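Before and after pruning, you can also ask Docker how much space it is using and how much is reclaimable (docker system df is a standard Docker CLI command):
docker system df      # summary of space used by images, containers, and volumes
docker system df -v   # verbose per-image and per-container breakdown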
For a server running Jenkins and Nginx in production (i.e., if you're not just testing them), you must manage the disk partitioning in a better way. The following are a few possible ways to fix your issue.
Expand the existing EC2 root EBS volume size from 15 GB to a higher value from the AWS EBS console.
OR
Find the files consuming the most disk space and remove them if they are not required; log files are most probably what is consuming the disk space. You can execute the following commands to find the locations that occupy the most space (see the sketch after this list for a more targeted variant):
cd /
du -sch * | grep G
OR
Add an extra EBS volume to your instance and mount it at the Jenkins home directory, or at whichever location is using the most disk space.
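For the second option, a more targeted way to rank directories by size (a generic sketch using standard GNU coreutils; adjust the depth to taste):
sudo du -xh / --max-depth=2 2>/dev/null | sort -rh | head -20    # 20 biggest directories, largest first
sudo du -sh /var/log/* 2>/dev/null | sort -rh | head             # logs are a common culprit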

No space left on device when pulling docker image from AWS

I am pulling a variety of docker images from AWS, but the pull keeps getting stuck on the final image with the following error:
ERROR: for <container-name> failed to register layer: Error processing tar file(exit status 1): symlink libasprintf.so.0.0.0 /usr/lib64/libasprintf.so: no space left on device
ERROR: failed to register layer: Error processing tar file(exit status 1): symlink libasprintf.so.0.0.0 /usr/lib64/libasprintf.so: no space left on device
Does anyone know how to fix this problem?
I have tried stopping Docker, removing /var/lib/docker, and starting it back up again, but it gets stuck at the same place.
Result of df -h:
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p1  8.0G  6.5G  1.6G  81% /
devtmpfs        3.7G     0  3.7G   0% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
tmpfs           3.7G   17M  3.7G   1% /run
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
tmpfs           753M     0  753M   0% /run/user/0
tmpfs           753M     0  753M   0% /run/user/1000
The issue was with the EC2 instance not having enough EBS storage assigned to it. Following these steps will fix it:
Navigate to the EC2 console
Look at the details of your instance and locate the root device and block devices
Click the device path and select the EBS ID
Click Actions in the Volumes panel
Select Modify Volume
Enter the desired volume size (the default is 8GB; you shouldn't need much more)
SSH into the instance
Run lsblk to see the available volumes and note their sizes
Run sudo growpart /dev/volumename 1 on the volume you want to resize
Run sudo xfs_growfs /dev/volumename (the one with / in the MOUNTPOINT column of lsblk)
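Concretely, for the nvme device naming shown in the df output above, the last steps would look something like this (growpart ships in the cloud-utils / cloud-utils-growpart package; verify the device name with lsblk first):
lsblk                           # confirm the disk and partition layout
sudo growpart /dev/nvme0n1 1    # grow partition 1 to fill the enlarged volume
sudo xfs_growfs /               # grow the XFS filesystem mounted at /
df -h /                         # verify the new size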
I wrote an article about this after struggling with the same issue. If you have deployed successfully before, you may just need to add some maintenance to your deploy process. In my case, I just added a cron job to run the following:
docker ps -q --filter "status=exited" | xargs --no-run-if-empty docker rm;
docker volume ls -qf dangling=true | xargs -r docker volume rm;
https://medium.com/@_ifnull/aws-ecs-no-space-left-on-device-ce00461bb3cb
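A minimal sketch of scheduling that cleanup nightly via cron (the 3 a.m. schedule is arbitrary; add the line with crontab -e):
0 3 * * * docker ps -q --filter "status=exited" | xargs --no-run-if-empty docker rm && docker volume ls -qf dangling=true | xargs -r docker volume rm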
It might be that the older docker images, volumes, etc. are still stuck in your EBS storage. From the docker docs:
Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space.
SSH into your EC2 instance and verify that the space is actually taken up:
ssh ec2-user@<public-ip>
df -h
Then you can prune the old images out:
docker system prune
Read the warning message from this command!
You can also prune the volumes. Do this only if you're not storing files locally (which you shouldn't be doing anyway; they should be in something like AWS S3).
Use with Caution:
docker system prune --volumes

Error when mounting a volume on an EC2 instance

I am following an online tutorial to set up an EC2 instance for a group project. http://www.developintelligence.com/blog/2017/02/analyzing-4-million-yelp-reviews-python-aws-ec2-instance/.
The instance I used is r3.4xlarge; the tutorial says that if I chose an instance with an SSD, I need to mount it and run the following code:
lsblk
sudo mkdir /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd
lsblk shows the following:
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk
└─xvda1 202:1    0    8G  0 part /
xvdb    202:16   0  300G  0 disk
However, when I run sudo mount /dev/xvdb /mnt/ssd, it gives me the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Could someone provide a solution to this error? Thanks!
Before you can mount a filesystem in Linux, the filesystem has to be created first.
In this case that would be:
sudo mkfs.ext4 /dev/xvdb
This creates an ext4 filesystem on the /dev/xvdb device.
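Putting it together with the tutorial's steps, the full sequence would be roughly as follows (mkfs erases anything already on /dev/xvdb, and note that instance-store SSDs lose their contents when the instance stops):
sudo mkfs.ext4 /dev/xvdb         # create the filesystem (destroys existing data on xvdb)
sudo mkdir -p /mnt/ssd           # create the mount point
sudo mount /dev/xvdb /mnt/ssd    # mount the new filesystem
sudo chown -R ubuntu /mnt/ssd    # let the ubuntu user write to it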

AWS EBS: If I use a volume of 15G, why does df report 8G?

I am using a public snapshot. I created one volume with 15G and another with 25G, both from the same snapshot. However, after mounting, df shows both at about 8G and full, while lsblk shows the block devices at 15G and 25G. Do I need to give an extra argument when mounting?
How can I mount read/write?
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
..
xvdf 202:80   0  25G  0 disk /data
xvdg 202:96   0  15G  0 disk /data2
df
Filesystem 1K-blocks    Used Available Use% Mounted on
..
/dev/xvdf    8869442 8869442         0 100% /data
/dev/xvdg    8869442 8869442         0 100% /data2
mount
..
/dev/xvdf on /data type iso9660 (ro,relatime)
/dev/xvdg on /data2 type iso9660 (ro,relatime)
Probably your volume's raw capacity is larger than your filesystem's size, so, as @avihoo-mamca suggested, you need to extend the filesystem to the volume size using resize2fs.
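As a minimal sketch, growing an ext2/3/4 filesystem to fill its device looks like this (note, though, that the mount output above reports iso9660, a read-only CD-ROM format that cannot be resized or remounted read/write, so in that case the data would first need to be copied onto a fresh ext4 volume):
sudo umount /data
sudo e2fsck -f /dev/xvdf    # resize2fs requires a clean filesystem check first
sudo resize2fs /dev/xvdf    # grow the filesystem to the full size of the device
sudo mount /dev/xvdf /data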

Not all storage is available for Amazon EBS

I'm sure the problem appears because of my misunderstanding of EC2+EBS configuration, so the answer might be very simple.
I've created a Red Hat EC2 instance on AWS with 30GB of EBS storage, but lsblk shows me that only 6GB of the total 30 is available:
xvda    202:0 0 30G 0 disk
└─xvda1 202:1 0  6G 0 part /
How can I make the remaining storage space available on my instance?
[UPDATE] Command output:
mount:
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sudo fdisk -l /dev/xvda:
WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot    Start    End       Blocks   Id  System
/dev/xvda1         1   1306    10485759+   ee  GPT
resize2fs /dev/xvda1:
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 1572864 blocks long. Nothing to do!
I believe you are experiencing an issue that seems specific to EC2 and RHEL, where the partition won't extend using the standard tools.
If you follow the instructions in this previous answer, you should be able to extend the partition to use the full space. Follow the instructions particularly carefully if expanding the root partition!
unable to resize root partition on EC2 centos
If you update your question with the output of fdisk -l /dev/xvda and mount, that should help provide extra information if the following isn't suitable:
I would assume that you could either re-partition xvda to provision the space for another mount point (/var or /home, for example) or grow your current root partition into the extra space available; you can follow the guide here to do this.
Obviously, be sure to back up any data you have on there; this is potentially destructive!
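For the in-place approach, one common sequence on cloud images uses growpart from the cloud-utils package (a sketch, not a guaranteed fix for the GPT issue the linked answer covers; snapshot the volume before repartitioning):
sudo growpart /dev/xvda 1    # extend partition 1 to fill the 30GB disk
sudo resize2fs /dev/xvda1    # grow the ext4 filesystem (the mount output above shows ext4)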
[Update - how to use parted]
The following link will talk you through using GNU Parted to create a partition. You will essentially need to create a new partition, temporarily mount it at a directory such as /mnt/newhome, copy across all of the current contents of /home (recursively as root, keeping permissions, with cp -rp /home/* /mnt/newhome), rename the current /home to /homeold, and then make sure /etc/fstab has the correct entry (assuming your new partition is /dev/xvda2):
/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1
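A rough end-to-end sketch of that procedure (the partition start offset and partition number are assumptions for illustration; check the actual free space with sudo parted /dev/xvda print free, and back everything up first, as this is destructive):
sudo parted /dev/xvda mkpart primary ext4 10.7GB 100%   # new partition in the unused space (start offset assumed)
sudo mkfs.ext4 /dev/xvda2                               # format the new partition
sudo mkdir /mnt/newhome
sudo mount /dev/xvda2 /mnt/newhome
sudo cp -rp /home/* /mnt/newhome/                       # copy, preserving permissions
sudo mv /home /homeold
sudo mkdir /home
echo "/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1" | sudo tee -a /etc/fstab
sudo mount /home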