I have a GitLab Runner running on AWS and I've connected it to my GitLab project. During the CI/CD pipeline I want to run end-to-end tests on my React Native mobile app using the reactnativecommunity/react-native-android image. But when the job tries to create the SDK, it fails with the error "No space left on device". However, when I run df -h in the .gitlab-ci.yml script, the overlay disk shows only 16G. In the GitLab Runner config I use amazonec2-instance-type=c5d.large, which has 50GB of disk storage. My question is: how do I increase the disk space available to the GitLab Runner pipeline beyond 16G? Here is the output of running df -h:
Filesystem      Size  Used  Avail  Use%  Mounted on
overlay          16G   14G   2.2G   86%  /
tmpfs            64M     0    64M    0%  /dev
tmpfs           1.9G     0   1.9G    0%  /sys/fs/cgroup
/dev/nvme0n1p1   16G   14G   2.2G   86%  /builds
shm              64M     0    64M    0%  /dev/shm
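One likely cause, assuming the runner autoscales EC2 machines through the docker-machine amazonec2 driver (suggested by the amazonec2-instance-type option above): the driver creates the EBS root volume at its default size of 16 GB regardless of instance type, and the c5d.large's ~50 GB is NVMe instance store, which is not mounted automatically. A sketch of raising the root volume size in the runner's config.toml:
[runners.machine]
  MachineOptions = [
    "amazonec2-instance-type=c5d.large",
    "amazonec2-root-size=50"   # EBS root volume size in GB; the driver default is 16
  ]
New machines provisioned after the change get the larger root disk; machines already in the idle pool keep the old size until they are removed.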
Processing a large dataset from a Google Cloud Platform notebook, I ran out of disk
space for /home/jupyter:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 0 15G 0% /dev
tmpfs 3.0G 8.5M 3.0G 1% /run
/dev/sda1 99G 38G 57G 40% /
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda15 124M 5.7M 119M 5% /boot/efi
/dev/sdb 492G 490G 2.1G 100% /home/jupyter
I deleted a large number of files and restarted the instance, but there was no change for /home/jupyter:
/dev/sdb 492G 490G 2.1G 100% /home/jupyter
I decided to explore this a little further, to identify what on /home/jupyter was still taking up space.
$ du -sh /home/jupyter/
490G /home/jupyter/
$ du -sh /home/jupyter/*
254M /home/jupyter/SkullStrip
25G /home/jupyter/R01_2022
68G /home/jupyter/RSNA_ASNR_MICCAI_BraTS2021_TrainingData
4.0K /home/jupyter/Validate-Jay-nifti_skull_strip
284M /home/jupyter/imgbio-vnet-cgan-09012020-05172021
4.2M /home/jupyter/UNet
18M /home/jupyter/scott
15M /home/jupyter/tutorials
505M /home/jupyter/vnet-cgan-10042021
19M /home/jupyter/vnet_cgan_gen_multiplex_synthesis_10202021.ipynb
7.0G /home/jupyter/vnet_cgan_t1c_gen_10082020-12032020-pl-50-25-1
(base) jupyter@tensorflow-2-3-20210831-121523-08212021:~$
This does not add up. I would think that by restarting the instance, the processes that were referencing deleted files would be cleaned up.
What is taking up my disk space and how can I reclaim it?
Any direction would be appreciated.
Thanks,
Jay
The disk was fragmented. I created a new instance from scratch.
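For anyone hitting a similar mismatch, two things worth ruling out first (assumptions, not confirmed for this instance): the glob in du -sh /home/jupyter/* skips dot-directories such as .cache, and files that are deleted but still held open by a process don't show up in du at all. Both can be checked with:
$ sudo du -h --max-depth=1 /home/jupyter | sort -h   # includes the dot-directories the * glob misses
$ sudo lsof +L1                                      # lists deleted files still held open by processes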
I am trying to upload my Docker image to my AWS EC2 instance. I uploaded a gzipped version, unzipped the file, and am trying to load the image with docker image load -i /tmp/harrybotter.tar, but I am encountering the following error:
Error processing tar file(exit status 1): write /usr/local/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: no space left on device
Except there is plenty of space on the instance; it's brand new and nothing else is on it. Docker says the image is only 2.25 GB and the entire instance has 8 GiB of storage, so the storage is largely free. Every time it fails, the load has gotten to about 2.1 GB.
Running df -h before the upload returns
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 483M 0 483M 0% /dev/shm
tmpfs 483M 420K 483M 1% /run
tmpfs 483M 0 483M 0% /sys/fs/cgroup
/dev/xvda1 8.0G 4.2G 3.9G 53% /
tmpfs 97M 0 97M 0% /run/user/1000
I am completely new to Docker and AWS instances, so I am at a loss for what to do other than possibly upgrading my EC2 instance above the free tier. But since the instance has additional storage space, I am confused about why the load is running out of space. Is there a way I can expand the Docker base image size or change the path the image is loaded to?
Thanks!
As you mentioned, the image size is 2.25 GB; loading it needs more free space than that, because Docker unpacks the layers under its storage directory while the tar file still occupies /tmp.
Check this out: Make Docker use / load a .tar image without copying it to /var/lib/..?
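A sketch of one way around it, assuming the original gzipped archive (named something like harrybotter.tar.gz here) is still on the instance: docker load reads compressed archives directly, so the uncompressed copy never needs to exist, and deleting it first reclaims its space:
$ rm /tmp/harrybotter.tar                        # reclaim the ~2.25 GB uncompressed copy
$ docker image load -i /tmp/harrybotter.tar.gz   # docker load handles gzipped tarballs itself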
I have launched an i3.large instance, which offers about 475 GB of storage (https://aws.amazon.com/ec2/instance-types/), but when I invoke df -h I get:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.5G 64K 7.5G 1% /dev
tmpfs 7.5G 0 7.5G 0% /dev/shm
/dev/xvda1 7.8G 7.7G 0 100% /
I.e. this instance appears to have only about 22.5 GB in total. Why? What does the 0.475 TB in the instance types table refer to?
Try copying a large amount of data there - I'm wondering if it expands. I tried doing a df -h on mine and got small numbers but then I built a 200GB database on it.
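For what it's worth, the 475 GB on i3 instances is NVMe instance store, which ships unformatted and unmounted, so df only shows the EBS root volume. A minimal sketch of putting it to use (the device name /dev/nvme0n1 and the mount point are assumptions; check lsblk first):
$ lsblk                          # look for a ~475G nvme device with no mountpoint
$ sudo mkfs.ext4 /dev/nvme0n1    # format the instance-store volume (destroys its contents)
$ sudo mkdir -p /mnt/scratch
$ sudo mount /dev/nvme0n1 /mnt/scratch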
I've started an instance of a type with a -d at the back (this should come with scratch disks).
But on boot the stated disk space is nowhere to be seen.
It should be:
8 vCPUs, 52 GB RAM, 2 scratch disks (1770 GB, 1770 GB)
But df -h outputs:
Filesystem Size Used Avail Use% Mounted on
rootfs 10G 644M 8.9G 7% /
/dev/root 10G 644M 8.9G 7% /
none 1.9G 0 1.9G 0% /dev
tmpfs 377M 116K 377M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 753M 0 753M 0% /run/shm
So how does one run an instance that boots from a persistent disk and still has the scratch disks available?
The thing is that I need high CPU and lots of scratch space.
df does not show scratch disks because they are not formatted and mounted. Issue the following command:
ls -l /dev/disk/by-id/
In the output there will be something like:
lrwxrwxrwx 1 root root ... scsi-0Google_EphemeralDisk_ephemeral-disk-0 -> ../../sdb
lrwxrwxrwx 1 root root ... scsi-0Google_EphemeralDisk_ephemeral-disk-1 -> ../../sdc
Then, you can use mkfs and mount the appropriate disks.
See the documentation for more info.
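For example, a minimal sketch using the device names from the listing above (the filesystem type and mount point are assumptions):
$ sudo mkfs.ext4 -F /dev/sdb         # format the first ephemeral disk (destroys its contents)
$ sudo mkdir -p /mnt/ephemeral0
$ sudo mount /dev/sdb /mnt/ephemeral0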
First of all, this is my df -h output:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 4.2G 3.4G 56% /
udev 1.9G 8.0K 1.9G 1% /dev
tmpfs 751M 180K 750M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1.9G 0 1.9G 0% /run/shm
/dev/xvdb 394G 8.4G 366G 3% /mnt
I know that /mnt is ephemeral storage; all data stored on it will be deleted after a reboot.
Is it OK to create a /mnt/swap file to use as a swap file? I added the following line to /etc/fstab:
/mnt/swap1 swap swap defaults 0 0
By the way, what is /run/shm used for?
Thanks.
Ephemeral storage preserves data across reboots, but loses it on a stop/start (restart). Also see this: What data is stored in Ephemeral Storage of Amazon EC2 instance?
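Given that, a swap file on /mnt generally works, but it has to be re-created after every stop/start before swapon (or the fstab entry) can find it. A minimal sketch of creating it, assuming a 4 GB size:
$ sudo dd if=/dev/zero of=/mnt/swap1 bs=1M count=4096   # size is arbitrary here
$ sudo chmod 600 /mnt/swap1
$ sudo mkswap /mnt/swap1
$ sudo swapon /mnt/swap1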