I'm trying to extend my partition size on EC2 from 8GB to 16GB.
lsblk gives
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 89.1M 1 loop /snap/core/8268
loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/1480
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
When I run sudo growpart /dev/xvda 1, it gives
NOCHANGE: partition 1 could only be grown by -33 [fudge=2048]
When I run sudo resize2fs /dev/xvda1, I get
The filesystem is already 2096891 (4k) blocks long. Nothing to do!
How do I resize it to 16 GB?
// as-is
[root@ip-172-28-72-124 /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 100G 0 disk
├─nvme0n1p1 259:1 0 100G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
————————————————————————————————————
// to-be
[root@ip-172-28-72-124 /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 100G 0 disk
├─nvme0n1p1 259:1 0 50G 0 part /
├─nvme0n1p2 259:1 0 50G 0 part /data
└─nvme0n1p128 259:2 0 1M 0 part
————————————————————————————————————
The file system is xfs.
How can I shrink nvme0n1p1 to 50G?
I tried to change the size with xfs_growfs, but it failed.
xfs_growfs does the opposite of what you want: it can only grow an XFS filesystem, not shrink it. If you are using a RHEL-based system, the official answer is that you cannot do this:
"After an XFS file system is created, its size cannot be reduced. However, it can still be enlarged using the xfs_growfs command. For more information, see Section 3.4, “Increasing the Size of an XFS File System”."
You may find some hacky ways of doing it, but they carry real risk.
If you simply need a second disk, I would recommend creating and attaching a new 50 GiB EBS volume instead.
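If you go that route, a rough sketch would look like the following (the availability zone, volume ID, instance ID and the /dev/nvme1n1 device name are placeholders; check lsblk after attaching to see which device name the new volume actually gets):
aws ec2 create-volume --size 50 --volume-type gp3 --availability-zone <your-az>
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf
# on the instance, once the new volume shows up in lsblk:
sudo mkfs.xfs /dev/nvme1n1     # create an XFS filesystem on the new volume
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data
To make the mount survive reboots, add a UUID-based entry to /etc/fstab (blkid shows the new filesystem's UUID).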
I am running an Ubuntu Amazon EC2 instance with a 100 GB volume mounted. However, once usage hits 56 GB the filesystem says it has run out of storage, even though it clearly has plenty of space available.
This is the output of df -h:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 97G 56G 42G 58% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 3.2G 852K 3.2G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/xvda15 105M 5.3M 100M 5% /boot/efi
tmpfs 1.6G 4.0K 1.6G 1% /run/user/1000
As you can see, the OS knows that there is space, but seemingly refuses to write. Prior to experiencing this problem, I had increased the volume size from 8 GB to 100 GB and used growpart and resize2fs (sketched after the lsblk output below) to scale the partition and filesystem to full size. I have a feeling this is due to some unknown volume size limit in either Ubuntu or AWS, but I have no idea how to diagnose or solve the problem. My initial guess was that ext4 had a size limit, but that doesn't seem to apply here (ext4's limit is far beyond 100 GB). Here is the result of lsblk:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 25.1M 1 loop /snap/amazon-ssm-agent/5656
loop1 7:1 0 55.5M 1 loop /snap/core18/2409
loop2 7:2 0 61.9M 1 loop /snap/core20/1518
loop3 7:3 0 79.9M 1 loop /snap/lxd/22923
loop4 7:4 0 47M 1 loop /snap/snapd/16010
loop5 7:5 0 47M 1 loop /snap/snapd/16292
xvda 202:0 0 100G 0 disk
├─xvda1 202:1 0 99.9G 0 part /
├─xvda14 202:14 0 4M 0 part
└─xvda15 202:15 0 106M 0 part /boot/efi
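In case it helps, the commands I used earlier to grow the partition and filesystem were along these lines (device names as in the lsblk output above; exact invocation from memory):
sudo growpart /dev/xvda 1     # extend partition 1 to the end of the disk
sudo resize2fs /dev/xvda1     # grow the ext4 filesystem to fill the partition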
Any help here would be greatly appreciated, thanks in advance :)
Following the instructions at https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/ I have successfully increased the size of my EBS volume on 3 of 4 supposedly identical instances. However, when I attempt the same on the final instance I get errors.
[ec2-user@ip-***-***-***-*** ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 8G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
[ec2-user@ip-***-***-***-*** ~]$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
nvme0n1
├─nvme0n1p1 ext4 / ac513929-f93a-47d9-9637-9df68c0ca960 /
└─nvme0n1p128
[ec2-user@ip-***-***-***-*** ~]$ sudo growpart /dev/nvme0n1 1
NOCHANGE: disk=/dev/nvme0n1 partition=1: size=16773086, it cannot be grown
Does anyone know what's causing this or how it can be resolved? I've also tried skipping to the next step, which obviously isn't working either:
[ec2-user@ip-***-***-***-*** ~]$ sudo resize2fs /dev/nvme0n1p1
resize2fs 1.43.5 (04-Aug-2017)
The filesystem is already 2096635 (4k) blocks long. Nothing to do!
I've had a look around for similar issues, but I'm not finding much with this specific error. I can confirm the volume is not optimising any more and is showing the correct new size in the console (32 GB).
I'm trying to mount a few volumes to my instance, and they are all NVMe.
I read that NVMe volumes don't keep their mapping consistent; each time they are assigned device numbers in a random order.
The point is that I need the mapping to stay consistent: it's for a database, and one of the volumes is supposed to hold the data.
If I reboot the instance the volumes get mixed up, so the volume that holds the data may end up mounted to a different directory, and the db service then starts and doesn't find any data.
Of course this also happens after creating an image, so I can't configure one instance and then spin up more from that image.
How can I force the mapping to be consistent, or stop using NVMe? (I read this random ordering happens only with NVMe.)
You need to mount by filesystem UUID. See my example below.
I have 3 disks: 8 GB, 10 GB and 12 GB.
They show up as devices nvme0n1 (8 GB), nvme1n1 (10 GB) and nvme2n1 (12 GB).
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 10G 0 disk
└─nvme1n1p1 259:6 0 10G 0 part /mnt/disk10
nvme2n1 259:1 0 12G 0 disk
└─nvme2n1p1 259:7 0 12G 0 part /mnt/disk12
nvme0n1 259:2 0 8G 0 disk
├─nvme0n1p1 259:3 0 8G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
Note that I have a file called file10.txt on the 10 GB disk, at /mnt/disk10/file10.txt.
There is also a file called file12.txt on the 12 GB disk, at /mnt/disk12/file12.txt.
$ ls -l /mnt/*
/mnt/disk10:
total 0
-rw-r--r-- 1 root root 0 May 9 00:37 file10.txt
/mnt/disk12:
total 0
-rw-r--r-- 1 root root 0 May 9 00:38 file12.txt
My fstab file uses UUIDs to refer to those disks, as you can see below.
$ cat /etc/fstab
# Disk 8 GB
UUID=7b355c6b-f82b-4810-94b9-4f3af651f629 / xfs defaults,noatime 1 1
# Disk 10 GB
UUID=2b19004b-795f-4da3-b220-d531c7cde1dc /mnt/disk10 xfs defaults,noatime 0 0
# Disk 12 GB
UUID=1b18a2f2-f48f-4977-adf8-aa483e1fa91f /mnt/disk12 xfs defaults,noatime 0 0
If you want to know the UUID of each device, use blkid, as you can see below.
$ blkid
/dev/nvme1n1: PTUUID="2e6aaa33" PTTYPE="dos"
/dev/nvme1n1p1: UUID="2b19004b-795f-4da3-b220-d531c7cde1dc" TYPE="xfs" PARTUUID="2e6aaa33-01"
/dev/nvme2n1: PTUUID="10565c83" PTTYPE="dos"
/dev/nvme2n1p1: UUID="1b18a2f2-f48f-4977-adf8-aa483e1fa91f" TYPE="xfs" PARTUUID="10565c83-01"
/dev/nvme0n1: PTUUID="1760802e-28df-44e2-b0e0-d1964f72a39e" PTTYPE="gpt"
/dev/nvme0n1p1: LABEL="/" UUID="7b355c6b-f82b-4810-94b9-4f3af651f629" TYPE="xfs" PARTLABEL="Linux" PARTUUID="a5dcc974-1013-4ea3-9942-1ac147266613"
/dev/nvme0n1p128: PARTLABEL="BIOS Boot Partition" PARTUUID="dc255fff-03c6-40e6-a8dc-054ec864a155"
Now I will stop my machine, force a change in device order, and start it again.
Note how the disks change device names, but they are still mounted on the same paths, with the same files on them.
Before: nvme0n1 (8 GB), nvme1n1 (10 GB) and nvme2n1 (12 GB).
Now: nvme0n1 (8 GB), nvme1n1 (12 GB) and nvme2n1 (10 GB).
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 12G 0 disk
└─nvme1n1p1 259:1 0 12G 0 part /mnt/disk12
nvme2n1 259:2 0 10G 0 disk
└─nvme2n1p1 259:3 0 10G 0 part /mnt/disk10
nvme0n1 259:4 0 8G 0 disk
├─nvme0n1p1 259:5 0 8G 0 part /
└─nvme0n1p128 259:6 0 1M 0 part
$ ls -l /mnt/*
/mnt/disk10:
total 0
-rw-r--r-- 1 root root 0 May 9 00:37 file10.txt
/mnt/disk12:
total 0
-rw-r--r-- 1 root root 0 May 9 00:38 file12.txt
The UUID is an attribute of the filesystem, so it is generated whenever you create a filesystem. It also stays the same across AMIs and snapshots, because it belongs to the filesystem, not to the EBS volume.
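As a rough sketch of wiring up a data volume this way (the /dev/nvme1n1p1 device and /mnt/disk10 mount point are just placeholders taken from the example above):
sudo mkfs.xfs /dev/nvme1n1p1     # creating the filesystem is what generates the UUID
sudo blkid /dev/nvme1n1p1        # note the UUID="..." value it prints
sudo mkdir -p /mnt/disk10
# add a line like this to /etc/fstab, using the UUID that blkid printed:
#   UUID=<uuid-from-blkid> /mnt/disk10 xfs defaults,noatime 0 0
sudo mount -a                    # mounts everything in fstab, regardless of NVMe device order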
While I was deploying an application to a partition on an AWS EC2 instance, I got a fatal error that the block device was out of space: No space left on device.
How do I increase an EBS partition's size on AWS EC2?
You can use growpart and lvextend, depending on what kind of partition layout you have:
https://www.tldp.org/HOWTO/html_single/LVM-HOWTO/#extendlv
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
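A rough sketch of each case (device, volume group and logical volume names are placeholders; adjust them to your layout):
# plain partition: grow the partition, then the filesystem on it
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1     # for ext4; use xfs_growfs <mountpoint> for XFS
# LVM: grow the partition holding the PV, then the PV, then the LV together with its filesystem
sudo growpart /dev/xvda 1
sudo pvresize /dev/xvda1
sudo lvextend -r -l +100%FREE /dev/mapper/myvg-mylv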
Solution:
1. Go to your Amazon machine and run lsblk (a useful Linux command that lists information about all or selected block devices, i.e. disks and partitions. It queries the /sys virtual file system for the information it displays and, by default, shows all block devices except RAM disks in a tree-like format).
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 93.9M 1 loop /snap/core/123
loop2 7:2 0 18M 1 loop /snap/amazon-ssm-agent/456
loop3 7:3 0 18M 1 loop /snap/amazon-ssm-agent/789
loop4 7:4 0 93.8M 1 loop /snap/core/777
xvda 202:0 0 20G 0 disk
└─xvda1 202:1 0 20G 0 part /
xvdb 202:16 0 20G 0 disk /home/boot
xvdc 202:32 0 20G 0 disk
├─xvdc1 202:33 0 10G 0 part
│ └─cryptswap1 253:0 0 10G 0 crypt
└─xvdc2 202:34 0 10G 0 part
└─crypttmp 253:1 0 10G 0 crypt /tmp
xvdd 202:48 0 50G 0 disk
└─enc_xvdd 253:2 0 50G 0 crypt /home
xvde 202:64 0 8.9T 0 disk
└─enc_xvde 253:3 0 8.9T 0 crypt /var/data
2. Locate the device name of the volume backing the specific partition you want to grow.
3. Go to your Amazon AWS account -> EC2 -> Instances -> Description panel -> Block devices -> click on the relevant block device -> click on the volume ID -> right-click on the volume -> Modify Volume -> select the correct size.
4. SSH into your machine and run the following commands (on the correct volume; in my example I chose to grow /home, which is on enc_xvdd):
sudo cryptsetup resize enc_xvdd -v
sudo resize2fs /dev/mapper/enc_xvdd
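Afterwards a quick df confirms that the mount point picked up the new size (adjust the path to the volume you grew; /home in my example):
df -h /home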