No more disk space on ec2 - amazon-web-services

I am using a D-class EC2 instance, which is supposed to be storage optimized. I have about 20 GB of files that I need to write to disk, but I get an out-of-disk-space error once the total size of the files reaches 15 GB. The instance is supposed to have far more storage than this. Why am I getting this error, and how can I write more than 15 GB of data to a D-class EC2 instance?

From Amazon EC2 Instance Types - Amazon Web Services:
D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.
Please note that this is local storage, which is very fast, but the contents of these disks are lost if the instance is stopped. It is therefore recommended for temporary storage only.
Linux
I launched a d2.xlarge Linux instance.
I then followed instructions from How to use “Instance Store Volumes” storage in Amazon EC2?:
$ sudo fdisk -l
Disk /dev/xvda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 70E4A118-98BD-4BF4-8DF9-6926A964902A
Device Start End Sectors Size Type
/dev/xvda1 4096 16777182 16773087 8G Linux filesystem
/dev/xvda128 2048 4095 2048 1M BIOS boot
Partition table entries are not in disk order.
Disk /dev/xvdb: 1.8 TiB, 2000387309568 bytes, 3907006464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/xvdc: 1.8 TiB, 2000387309568 bytes, 3907006464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/xvdd: 1.8 TiB, 2000387309568 bytes, 3907006464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
This showed 3 disks, each of 1.8 TiB.
I then formatted and mounted them:
sudo mkfs.ext4 /dev/xvdb
sudo mkfs.ext4 /dev/xvdc
sudo mkfs.ext4 /dev/xvdd
sudo mkdir /mnt/disk1
sudo mkdir /mnt/disk2
sudo mkdir /mnt/disk3
sudo mount -t ext4 /dev/xvdb /mnt/disk1
sudo mount -t ext4 /dev/xvdc /mnt/disk2
sudo mount -t ext4 /dev/xvdd /mnt/disk3
I then viewed the mounted disks:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 15G 0 15G 0% /dev
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 15G 436K 15G 1% /run
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/xvda1 8.0G 1.3G 6.8G 16% /
tmpfs 3.0G 0 3.0G 0% /run/user/1000
/dev/xvdb 1.8T 77M 1.7T 1% /mnt/disk1
/dev/xvdc 1.8T 77M 1.7T 1% /mnt/disk2
/dev/xvdd 1.8T 77M 1.7T 1% /mnt/disk3
I was then able to use the disks. (Tip: Change permissions on the mount folders so you don't need to sudo all the time.)
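For example, a minimal sketch (assuming the default ec2-user account; substitute your own user and group):
sudo chown -R ec2-user:ec2-user /mnt/disk1 /mnt/disk2 /mnt/disk3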
Windows
I also booted up a d2.xlarge Windows instance. I then had to:
Run Disk Management
Bring each of the 3 disks online
Initialize the disks
Create a new Simple Volume on each disk
They appeared as D:, E: and G:

Please check whether there are multiple user profiles; if there are, you may not have permission to see them, so you will not see the space they occupy.
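As a quick check on Linux (a hedged sketch; run as root so permissions don't hide anything), you can total the usage per top-level directory:
sudo du -xsh /* 2>/dev/null | sort -h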

Related

Docker on AWS filling up its thin pool while running somehow?

I've got a server on Elastic Beanstalk on AWS. Even though no images are being pulled, the thin pool continually fills over the course of less than a day, until the filesystem is remounted read-only and the applications die.
This happens with Docker 1.12.6 on the latest Amazon AMI.
I can't really make heads or tails of it.
When an EC2 instance (hosting Beanstalk) starts, it has about 1.3 GB in the thin pool. By the time my 1.2 GB image is running, it has about 3.6 GB (this is from memory, but it is very close to this). OK, that's fine.
Cut to 5 hours later...
(from the EC2 instance hosting it) docker info returns:
Storage Driver: devicemapper
Pool Name: docker-docker--pool
Pool Blocksize: 524.3 kB
Base Device Size: 107.4 GB
Backing Filesystem: ext4
Data file:
Metadata file:
Data Space Used: 8.489 GB
Data Space Total: 12.73 GB
Data Space Available: 4.245 GB
lvs agrees.
In another few hours that will grow to be 12.73GB used and 0 B free.
dmesg will report:
[2077620.433382] Buffer I/O error on device dm-4, logical block 2501385
[2077620.437372] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error -28 writing to inode 4988708 (offset 0 size 8388608 starting block 2501632)
[2077620.444394] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error
[2077620.473581] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error -28 writing to inode 4988708 (offset 8388608 size 5840896 starting block 2502912)
[2077623.814437] Aborting journal on device dm-4-8.
[2077649.052965] EXT4-fs error (device dm-4): ext4_journal_check_start:56: Detected aborted journal
[2077649.058116] EXT4-fs (dm-4): Remounting filesystem read-only
Yet hardly any space is used in the container itself...
(inside the Docker container:) df -h
/dev/mapper/docker-202:1-394781-1exxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 99G 1.7G 92G 2% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 25G 1.4G 24G 6% /etc/hosts
shm 64M 0 64M 0% /dev/shm
du -sh /
1.7G /
How can this space be filling up? My programs are doing very low-volume logging, and the log files are extremely small. I have good reason not to write them to stdout/stderr.
xxx@xxxxxx:/var/log# du -sh .
6.2M .
I also did docker logs and the output is less than 7k:
>docker logs ecs-awseb-xxxxxxxxxxxxxxxxxxx > w4
>ls -alh
-rw-r--r-- 1 root root 6.4K Mar 27 19:23 w4
The same container does NOT do this on my local Docker setup. And finally, running du -sh / on the EC2 instance itself reveals less than 1.4 GB of usage.
The pool can't be filling up with log files, and it isn't being filled from inside the container. What can be going on? I am at my wits' end!
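(A hedged aside for anyone debugging the same thing: the following generic checks may show where the thin-pool space is going; none of this is specific to the setup above.)
docker ps -a --size    # writable-layer size of every container, including stopped ones
docker images          # image layers held in the pool
sudo lvs               # pool usage as LVM sees it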

How to increase logical volume of a disk in AWS EC2 instance

The df -h command returns:
[root@ip-SERVER_IP ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 5.5G 2.0G 74% /
tmpfs 32G 0 32G 0% /dev/shm
cm_processes 32G 0 32G 0% /var/run/cloudera-scm-agent/process
I have a volume with 500GB of disk space.
Now, I installed some stuff in /dev/xvda1 and it keeps saying that:
The Cloudera Manager Agent's parcel directory is on a filesystem with less than 5.0 GiB of its space free. /opt/cloudera/parcels (free: 1.9 GiB (25.06%), capacity: 7.7 GiB)
Similarly:
The Cloudera Manager Agent's log directory is on a filesystem with less than 2.0 GiB of its space free. /var/log/cloudera-scm-agent (free: 1.9 GiB (25.06%), capacity: 7.7 GiB)
From the memory stats, I see that the Filesystem above stuff is installed in must be:
/dev/xvda1
I believe the partition needs to be resized so that it has more disk space, but I don't think I need to expand the volume itself; I have only installed some software and just started using it.
So I would like to know what exact steps I need to follow to expand the space in this partition, and where exactly my 500 GB is.
I would like it step by step, since I cannot afford to lose what I already have on the server. Please help.
lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 500G 0 disk
└─xvda1 202:1 0 8G 0 part /
This question should be asked on serverfault.com.
Don't "expand" the volume. It is not common practice in Linux to expand the root drive for a non-OS folder. Since your stuff is inside /opt/cloudera/parcels, what you need to do is allocate a new partition, mount it, and then copy to it. An example is shown here: partitioning & moving (assume /home in that example is your /opt/cloudera/parcels).
To check your disk layout, try sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
(update)
In fact, if you create your EC2 instance with an EBS volume, you don't need to allocate 500 GB upfront. You can allocate a smaller volume and expand the disk space later by following these steps: Expanding the Storage Space of an EBS Volume on Linux. You should create a snapshot of the EBS volume before performing those tasks. However, there is a catch with EBS disk IOPS: if you allocate less space, you are given fewer IOPS. So if you allocate 500 GB, you get 500 × 3 = a maximum of 1500 IOPS.
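For reference, on current-generation instances that support online EBS resizing, the whole flow can be done without a snapshot-and-restore cycle. A hedged sketch (the volume ID, device name, and target size are placeholders, and taking a snapshot first is still wise):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200
sudo growpart /dev/xvda 1     # extend partition 1 into the new space
sudo resize2fs /dev/xvda1     # grow the ext4 filesystem to match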
This AWS LVM link gives step-by-step instructions on how to increase the size of a Logical Volume in AWS EC2.
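If the root really is on LVM, the resize usually boils down to something like this (a hedged sketch; the volume-group and logical-volume names are placeholders):
sudo pvresize /dev/xvda1                   # pick up the enlarged physical volume
sudo lvextend -l +100%FREE /dev/vg0/root   # grow the logical volume
sudo resize2fs /dev/vg0/root               # ext4; use xfs_growfs / for XFS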

The amount of disk used by each volume used in Amazon EC2 machine

I have an EC2 instance with 2 EBS volumes (each 1000 GB), and I want to shrink them to the proper size. The question is what this "proper size" is.
Here are the locations of the volumes as shown on the AWS console page:
When I log in to the machine and check the amount of space used, I get:
[ec2-user@ip-xx-xx-xxx-xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 1008G 194G 814G 20% /
devtmpfs 30G 60K 30G 1% /dev
tmpfs 31G 0 31G 0% /dev/shm
If I go into /dev and run ls, I see (among lots of other things):
lrwxrwxrwx 1 root root 5 Aug 20 00:50 root -> xvda1
lrwxrwxrwx 1 root root 4 Aug 20 00:50 sda -> xvda
lrwxrwxrwx 1 root root 5 Aug 20 00:50 sda1 -> xvda1
lrwxrwxrwx 1 root root 4 Aug 20 00:49 sdf -> xvdf
First, why do we have xvda, xvda1, and xvdf inside /dev (and why sda and sda1)? The console only talks about sdf and xvda.
Second, in the output of df -h, /dev/xvda1 seems to be only one of the EBS volumes (right?). If so, how can I get the disk usage for the other EBS volume?
Update: More info:
[ec2-user@ip-10-144-183-22 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 1T 0 disk
└─xvda1 202:1 0 1024G 0 part /
xvdf 202:80 0 1T 0 disk
[ec2-user@ip-10-144-183-22 ~]$ sudo fdisk -l | grep Disk
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/xvda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Disk label type: gpt
Disk /dev/xvdf: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
These are virtual mounts. The operating system is responsible for naming the drives. For example, the public AMI for CentOS might show /dev/xvdc for sdb, which is the primary drive.
You attached sdf, so it shows up as /dev/xvdf. It's not showing in df -h, most likely because it's not formatted and mounted; fdisk -l will still show it. Format it and mount it, and df -h will show it. If it is already formatted, it's probably just not mounted.
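For example, a minimal sketch (assuming ext4 and a /data mount point; both are arbitrary choices):
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /data
sudo mount /dev/xvdf /data
df -h /data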

Not all storage is available for Amazon EBS

I'm sure the problem appears because of my misunderstanding of EC2 + EBS configuration, so the answer might be very simple.
I've created a RedHat EC2 instance on Amazon AWS with 30 GB of EBS storage. But lsblk shows me that only 6 GB of the total 30 GB is available to me:
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 6G 0 part /
How can I make all the remaining storage space available to my instance?
[UPDATE] commands output:
mount:
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sudo fdisk -l /dev/xvda:
WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 1 1306 10485759+ ee GPT
resize2fs /dev/xvda1:
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 1572864 blocks long. Nothing to do!
I believe you are experiencing an issue that seems specific to EC2 and RHEL-family images, where the partition won't extend using the standard tools.
If you follow the instructions in this previous answer, you should be able to extend the partition to use the full space. Follow the instructions particularly carefully if you are expanding the root partition!
unable to resize root partition on EC2 centos
If you update your question with the output of fdisk -l /dev/xvda and mount, that should help provide extra information if the following isn't suitable:
I would assume that you could either re-partition xvda to provision the space for another mount point (/var or /home, for example) or grow your current root partition into the extra space available - you can follow this guide here to do this.
Obviously, be sure to back up any data you have on there; this is potentially destructive!
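(On images that ship cloud-utils-growpart, growing the root partition in place is usually the simpler route; a hedged sketch, assuming an ext4 root on /dev/xvda1:)
sudo growpart /dev/xvda 1    # extend partition 1 to the end of the disk
sudo resize2fs /dev/xvda1    # grow the ext4 filesystem to fill it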
[Update - how to use parted]
The following link will talk you through using GNU Parted to create a partition. You essentially just need to create a new partition, temporarily mount it on a directory such as /mnt/newhome, copy across all of the current contents of /home (recursively, as root, keeping permissions: cp -rp /home/* /mnt/newhome), rename the current /home to /homeold, and then make sure fstab has the correct entry (assuming your new partition is /dev/xvda2):
/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1
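Put together, a hedged sketch of that sequence (it assumes the new partition came out as /dev/xvda2; run the parted step from the linked guide first):
sudo mkfs.ext4 /dev/xvda2
sudo mkdir /mnt/newhome
sudo mount /dev/xvda2 /mnt/newhome
sudo cp -rp /home/* /mnt/newhome/
sudo mv /home /homeold
sudo mkdir /home
sudo mount /dev/xvda2 /home    # or reboot and let the fstab entry above do it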

Resizing the default 10GB boot drive Google Cloud Platform

How do I increase the default 10 GB boot drive when I create an instance on the Google Cloud Platform? I've read through different answers regarding this, but nothing was super clear. I'm sort of a beginner on the platform, and I'd really appreciate it if someone could explain how to do this in simple terms.
Use the following steps to increase the boot size with CentOS on the Google Cloud Platform.
ssh into vm instance
[user@user-srv ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 898M 8.5G 10% /
tmpfs 296M 0 296M 0% /dev/shm
[user@user-srv ~]$ sudo fdisk /dev/sda
The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
4 heads, 32 sectors/track, 819200 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0004a990
Device Boot Start End Blocks Id System
/dev/sda1 17 163825 10483712+ 83 Linux
Command (m for help): c
DOS Compatibility flag is not set
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0004a990
Device Boot Start End Blocks Id System
/dev/sda1 2048 20969472 10483712+ 83 Linux
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0004a990
Device Boot Start End Blocks Id System
/dev/sda1 2048 20969472 10483712+ 83 Linux
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
Partition 1 is already defined. Delete it before re-adding it.
Command (m for help): d
Selected partition 1
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-104857599, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
Using default value 104857599
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[user@user-srv ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 898M 8.5G 10% /
tmpfs 296M 0 296M 0% /dev/shm
[user@user-srv ~]$ sudo reboot
Broadcast message from user@user-srv
(/dev/pts/0) at 3:48 ...
The system is going down for reboot NOW!
[user@user-srv ~]$ Connection to 23.251.144.204 closed by remote host.
Connection to 23.251.144.204 closed.
Robetus-Mac:~ tomassiro$ gcutil listinstances --project="project-name"
+-------+---------------+---------+----------------+----------------+
| name | zone | status | network-ip | external-ip |
+-------+---------------+---------+----------------+----------------+
| srv-1 | us-central1-a | RUNNING | 10.230.224.112 | 107.168.216.20 |
+-------+---------------+---------+----------------+----------------+
ssh into vm instance
[user@user-srv ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 898M 8.5G 10% /
tmpfs 296M 0 296M 0% /dev/shm
[user@user-srv ~]$ sudo resize2fs /dev/sda1
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 4
Performing an on-line resize of /dev/sda1 to 13106944 (4k) blocks.
The filesystem on /dev/sda1 is now 13106944 blocks long.
[user@user-srv ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 908M 46G 2% /
tmpfs 296M 0 296M 0% /dev/shm
[user@user-srv ~]$ exit
logout
Connection to 23.251.144.204 closed.
The steps are easy:
Create a new disk from an existing source image with a bigger size (see the gcloud sketch after this list)
Create a new instance using the disk you just created (select existing disk)
After the system boots up, run df -h; you can see the storage is still 9.9 GB.
Follow the steps (from steps 4-12) in the "Repartitioning a root persistent disk" section in https://developers.google.com/compute/docs/disks
Finished!!
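For the first two steps, a hedged sketch using the current gcloud CLI rather than the gcutil shown above (disk name, instance name, zone, and image are placeholders):
gcloud compute disks create bigger-boot-disk --size=50GB --zone=us-central1-a --image=IMAGE_NAME
gcloud compute instances create srv-2 --zone=us-central1-a --disk=name=bigger-boot-disk,boot=yes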
To increase the boot size of a GCP VM (Google Compute Engine) without a reboot or restart:
First check the disk usage with df -h; if /dev/sda1 is more than 80% full, resizing on the fly is risky.
You can then update the disk size for the VM on the fly, without a restart:
Increase disk size from console first
SSH inside VM : sudo growpart /dev/sda 1
Resize your file system : sudo resize2fs /dev/sda1
Verify : df -h
A safer method than editing the partition directly, and one which doesn't require maintaining your own images, is dracut's growroot module together with cloud-init.
I've used this with CentOS 6 & 7 on Google Compute, AWS & Azure.
## you'll need to be root or use sudo
yum -y install epel-release
yum -y install cloud-init cloud-initramfs-tools dracut-modules-growroot cloud-utils-growpart
rpm -qa kernel | sed -e 's/^kernel-//' | xargs -I {} dracut -f /boot/initramfs-{}.img {}
# reboot for the resize to take effect
The partition will be resized automatically during the next boot.
Notes:
This is built into Ubuntu, which is why you don't see the problem there.
The partition size problem is seen with RedHat & CentOS with most pre-built images, not only Google Cloud. This method should work anywhere.
Note that you cannot unmount /dev/sda1 because it is running your OS, but you can create another partition as follows:
See the available space:
sudo cfdisk
Move with the arrow keys and select Free space, then:
Press Enter and a new partition will be created
Write the changes to disk
Quit
Format the partition (replace sdb1 with your new partition's device name):
sudo mkfs -t ext4 /dev/sdb1
Check the changes: lsblk -f
Mount the new partition (again, substitute your own device name):
sudo mount /dev/sda3 /mnt
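To make the mount survive reboots, one might also add an /etc/fstab entry for it (a hedged sketch; match the device name, or better its UUID from blkid, to the partition you created):
/dev/sda3   /mnt   ext4   defaults,nofail   0   2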