AWS NVMe volume mounted to a different directory after reboot

I'm trying to mount a few volumes to my instance, and they are all NVMe.
I read that NVMe volumes don't keep their mapping consistent; the device names are assigned in a random order on each boot.
The point is that I need the mapping to stay consistent: the instance runs a database, and one of the volumes is supposed to hold the data.
Right now, if I reboot the instance, the volumes get mixed up, so the volume that holds the data may be mounted to a different directory, and the database service starts and doesn't find any data.
The same thing happens after creating an image, so I can't configure one instance and then spin up more from that image.
How can I force the mapping to be consistent? Or stop using NVMe? (I read this random enumeration only happens with NVMe devices.)

You need to mount by filesystem UUID. See my example below.
I have 3 disks: 8 GB, 10 GB, and 12 GB.
They show up as devices nvme0n1 (8 GB), nvme1n1 (10 GB), and nvme2n1 (12 GB).
$ lsblk
NAME           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1        259:0    0  10G  0 disk
└─nvme1n1p1    259:6    0  10G  0 part /mnt/disk10
nvme2n1        259:1    0  12G  0 disk
└─nvme2n1p1    259:7    0  12G  0 part /mnt/disk12
nvme0n1        259:2    0   8G  0 disk
├─nvme0n1p1    259:3    0   8G  0 part /
└─nvme0n1p128  259:4    0   1M  0 part
Note that there is a file on the 10 GB disk called /mnt/disk10/file10.txt, and a file on the 12 GB disk called /mnt/disk12/file12.txt.
$ ls -l /mnt/*
/mnt/disk10:
total 0
-rw-r--r-- 1 root root 0 May 9 00:37 file10.txt
/mnt/disk12:
total 0
-rw-r--r-- 1 root root 0 May 9 00:38 file12.txt
My fstab file uses UUIDs to refer to those disks, as you can see below.
$ cat /etc/fstab
# Disk 8 GB
UUID=7b355c6b-f82b-4810-94b9-4f3af651f629 / xfs defaults,noatime 1 1
# Disk 10 GB
UUID=2b19004b-795f-4da3-b220-d531c7cde1dc /mnt/disk10 xfs defaults,noatime 0 0
# Disk 12 GB
UUID=1b18a2f2-f48f-4977-adf8-aa483e1fa91f /mnt/disk12 xfs defaults,noatime 0 0
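An aside, not from the original answer: for secondary data volumes you may want to add the nofail mount option, so the instance can still boot if a volume is ever detached. A hypothetical variant of the 10 GB entry above:

# Same entry with nofail, so a missing volume does not block boot
UUID=2b19004b-795f-4da3-b220-d531c7cde1dc /mnt/disk10 xfs defaults,noatime,nofail 0 0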
If you want to know the UUID of each device, use blkid, as you can see below.
$ blkid
/dev/nvme1n1: PTUUID="2e6aaa33" PTTYPE="dos"
/dev/nvme1n1p1: UUID="2b19004b-795f-4da3-b220-d531c7cde1dc" TYPE="xfs" PARTUUID="2e6aaa33-01"
/dev/nvme2n1: PTUUID="10565c83" PTTYPE="dos"
/dev/nvme2n1p1: UUID="1b18a2f2-f48f-4977-adf8-aa483e1fa91f" TYPE="xfs" PARTUUID="10565c83-01"
/dev/nvme0n1: PTUUID="1760802e-28df-44e2-b0e0-d1964f72a39e" PTTYPE="gpt"
/dev/nvme0n1p1: LABEL="/" UUID="7b355c6b-f82b-4810-94b9-4f3af651f629" TYPE="xfs" PARTLABEL="Linux" PARTUUID="a5dcc974-1013-4ea3-9942-1ac147266613"
/dev/nvme0n1p128: PARTLABEL="BIOS Boot Partition" PARTUUID="dc255fff-03c6-40e6-a8dc-054ec864a155"
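If you only need the UUID itself, for example in a script, blkid can print just that field; a small example using one of the devices above:

$ blkid -s UUID -o value /dev/nvme1n1p1
2b19004b-795f-4da3-b220-d531c7cde1dc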
Now I will stop the machine, force a change in the device order, and start it again.
Notice how the disks change device names but are still mounted on the same paths, with the same files on them.
Before: nvme0n1 (8 GB), nvme1n1 (10 GB), and nvme2n1 (12 GB).
After: nvme0n1 (8 GB), nvme1n1 (12 GB), and nvme2n1 (10 GB).
$ lsblk
NAME           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1        259:0    0  12G  0 disk
└─nvme1n1p1    259:1    0  12G  0 part /mnt/disk12
nvme2n1        259:2    0  10G  0 disk
└─nvme2n1p1    259:3    0  10G  0 part /mnt/disk10
nvme0n1        259:4    0   8G  0 disk
├─nvme0n1p1    259:5    0   8G  0 part /
└─nvme0n1p128  259:6    0   1M  0 part
$ ls -l /mnt/*
/mnt/disk10:
total 0
-rw-r--r-- 1 root root 0 May 9 00:37 file10.txt
/mnt/disk12:
total 0
-rw-r--r-- 1 root root 0 May 9 00:38 file12.txt
The UUID is an attribute of the filesystem, so it is generated whenever you create a filesystem. It also stays the same across AMIs and snapshots, because it belongs to the filesystem, not to the EBS volume.
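To automate all of this, here is a minimal sketch of setting up a freshly attached volume to mount by UUID; the device name /dev/nvme1n1, the mount point /mnt/data, and the nofail option are my assumptions for illustration, not part of the answer above:

$ sudo mkfs.xfs /dev/nvme1n1     # creating the filesystem is what generates the UUID
$ sudo mkdir -p /mnt/data
$ UUID=$(sudo blkid -s UUID -o value /dev/nvme1n1)
$ echo "UUID=$UUID /mnt/data xfs defaults,noatime,nofail 0 0" | sudo tee -a /etc/fstab
$ sudo mount -a                  # mount everything listed in fstab
$ df -h /mnt/data                # verify the mount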

Related

I want to resize my Linux root volume to create a separate partition (AWS EC2)

// as-is
[root@ip-172-28-72-124 /]# lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1        259:0    0  100G  0 disk
├─nvme0n1p1    259:1    0  100G  0 part /
└─nvme0n1p128  259:2    0    1M  0 part
————————————————————————————————————
// to-be
[root@ip-172-28-72-124 /]# lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1        259:0    0  100G  0 disk
├─nvme0n1p1    259:1    0   50G  0 part /
├─nvme0n1p2    259:1    0   50G  0 part /data
└─nvme0n1p128  259:2    0    1M  0 part
————————————————————————————————————
The file system is xfs.
How can I change nvme0n1p1 to 50G?
I tried to change the size using xfs_growfs but it failed.
xfs_growfs does the opposite of what you want. If you are using a RHEL-based system, the official answer is that you cannot:
"After an XFS file system is created, its size cannot be reduced. However, it can still be enlarged using the xfs_growfs command. For more information, see Section 3.4, “Increasing the Size of an XFS File System”."
You may find some hacky ways of doing it, but there is an element of risk.
I would recommend simply creating and attaching a 50 GB EBS volume if you just need a second disk.
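A minimal sketch of that second-disk approach, mounting by UUID as in the first answer above; the device name /dev/nvme1n1 and the mount point /data are assumptions, so check lsblk after attaching:

$ sudo mkfs.xfs /dev/nvme1n1     # format the new volume
$ sudo mkdir -p /data
$ echo "UUID=$(sudo blkid -s UUID -o value /dev/nvme1n1) /data xfs defaults,nofail 0 0" | sudo tee -a /etc/fstab
$ sudo mount -a && df -h /data   # mount and verify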

Unable to extend EBS disk partition to use new space

Following the instructions at https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/ I have successfully increased the size of my EBS volume on 3 of 4 supposedly identical instances. However, when I attempt it on the final instance, I receive errors.
[ec2-user@ip-***-***-***-*** ~]$ lsblk
NAME           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1        259:0    0   8G  0 disk
├─nvme0n1p1    259:1    0   8G  0 part /
└─nvme0n1p128  259:2    0   1M  0 part
[ec2-user@ip-***-***-***-*** ~]$ lsblk -f
NAME          FSTYPE LABEL UUID                                 MOUNTPOINT
nvme0n1
├─nvme0n1p1   ext4   /     ac513929-f93a-47d9-9637-9df68c0ca960 /
└─nvme0n1p128
[ec2-user@ip-***-***-***-*** ~]$ sudo growpart /dev/nvme0n1 1
NOCHANGE: disk=/dev/nvme0n1 partition=1: size=16773086, it cannot be grown
Does anyone know what's causing this or how it can be resolved? I've also tried skipping to the next step which obviously isn't working:
[ec2-user@ip-***-***-***-*** ~]$ sudo resize2fs /dev/nvme0n1p1
resize2fs 1.43.5 (04-Aug-2017)
The filesystem is already 2096635 (4k) blocks long. Nothing to do!
I've had a look around for similar issues but am not finding much with this specific error. I can confirm the volume is no longer in the optimizing state and shows the correct new size in the console (32 GB).

Not able to grow partition ubuntu EC2

I'm trying to extend my partition size on EC2 from 8 GB to 16 GB.
lsblk gives
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0      7:0    0 89.1M  1 loop /snap/core/8268
loop1      7:1    0   18M  1 loop /snap/amazon-ssm-agent/1480
xvda     202:0    0   16G  0 disk
└─xvda1  202:1    0    8G  0 part /
When I try sudo growpart /dev/xvda 1, it gives
NOCHANGE: partition 1 could only be grown by -33 [fudge=2048]
When I try sudo resize2fs /dev/xvda1, I get
The filesystem is already 2096891 (4k) blocks long. Nothing to do!
How do I resize it to 16 GB?

How do I increase ebs partitions volume size on aws ec2?

While deploying an application to a partition on an AWS EC2 instance, I got a fatal error: No space left on device.
How do I increase ebs partitions volume size on aws ec2?
You can use growpart and lvextend, depending on what partition type you have (for the LVM case, see the sketch at the end of this answer):
https://www.tldp.org/HOWTO/html_single/LVM-HOWTO/#extendlv
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
Solution:
1. Go to your Amazon machine and run lsblk (ls block), a useful Linux command which lists information about all or specified block devices (disks and partitions). It queries the /sys virtual file system to obtain the information it displays, and by default shows all block devices except RAM disks in a tree-like format.
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
loop0             7:0    0 93.9M  1 loop  /snap/core/123
loop2             7:2    0   18M  1 loop  /snap/amazon-ssm-agent/456
loop3             7:3    0   18M  1 loop  /snap/amazon-ssm-agent/789
loop4             7:4    0 93.8M  1 loop  /snap/core/777
xvda            202:0    0   20G  0 disk
└─xvda1         202:1    0   20G  0 part  /
xvdb            202:16   0   20G  0 disk  /home/boot
xvdc            202:32   0   20G  0 disk
├─xvdc1         202:33   0   10G  0 part
│ └─cryptswap1  253:0    0   10G  0 crypt
└─xvdc2         202:34   0   10G  0 part
  └─crypttmp    253:1    0   10G  0 crypt /tmp
xvdd            202:48   0   50G  0 disk
└─enc_xvdd      253:2    0   50G  0 crypt /home
xvde            202:64   0  8.9T  0 disk
└─enc_xvde      253:3    0  8.9T  0 crypt /var/data
2. Locate the disk volume name of the specific partition.
3. Go to your Amazon AWS account -> EC2 -> Instances -> Description panel -> Block devices -> click on the right block -> click on the volume ID -> right-click on the block ID -> Modify Volume -> select the correct size.
4. SSH into your machine and run the following commands (on the correct volume; in my example I chose to increase /home, which is under enc_xvdd):
sudo cryptsetup resize enc_xvdd -v
sudo resize2fs /dev/mapper/enc_xvdd
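For the lvextend (LVM) route mentioned at the top of this answer, the sequence differs; a minimal sketch, assuming the physical volume sits on partition /dev/xvda1 and the logical volume is /dev/mapper/vg0-root (both names are hypothetical):

$ sudo growpart /dev/xvda 1                            # grow the partition backing the physical volume
$ sudo pvresize /dev/xvda1                             # tell LVM the physical volume got bigger
$ sudo lvextend -r -l +100%FREE /dev/mapper/vg0-root   # extend the LV and resize its filesystem (-r)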

Amazon EC2 resize root device

I have an Amazon EC2 instance and would like to extend the root device from 100 GB to 500 GB. I created a new 500 GB volume and reattached it to the instance.
I can see the volume is there with $ lsblk. However, when I try to resize the filesystem, it fails with the error "The filesystem is already 26212055 blocks long. Nothing to do!"
name@ip-172-1-1-3:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G  8.0K  3.9G   1% /dev
tmpfs           799M  840K  798M   1% /run
/dev/xvda1       99G   92G  3.1G  97% /
name@ip-172-1-1-3:~$ lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda     202:0    0  500G  0 disk
└─xvda1  202:1    0  100G  0 part /
name@ip-172-1-1-3:~$ sudo resize2fs /dev/xvda1
resize2fs 1.42.9 (4-Feb-2014)
The filesystem is already 26212055 blocks long. Nothing to do!
Here's exactly what to do:
df -h   # print the name of your boot partition
lsblk   # show info on all your block devices
You'll see from that output which disk holds your root partition. For example, you will probably see something like this:
xvde     202:64   0  32G  0 disk
└─xvde1  202:65   0   8G  0 part /
Our goal is to make xvde1 use the whole available space from xvde.
Here's how to resize your partition:
fdisk /dev/xvde (the disk name, not your partition)
This enters into the fdisk utility.
u #Change the display to sectors
p #Print info
d #Delete the partition
n #New partition
p #Primary partition
1 #Partition number
2048 #First sector
Press Enter to accept the default
p #Print info
a #Toggle the bootable flag
1 #Select partition 1
w #Write table to disk and exit
Now, reboot your instance:
reboot
After it comes back, run:
resize2fs /dev/xvde1 (the name of your partition, not the block device)
And finally verify the new disk size:
df -h
After following @error2007s's steps through step 12 ("a", toggle the bootable flag) and rebooting, I cannot bring the instance up.
Disk /dev/xvda: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders, total 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks  Id System
/dev/xvda1            2048  1048575999   524286976  83 Linux
Command (m for help): a
Partition number (1-4): 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
name@ip-172-1-1-3:~$ reboot
reboot: Need to be root
name@ip-172-1-1-3:~$ sudo reboot
Broadcast message from name@ip-172-1-1-3
(/dev/pts/1) at 10:18 ...
The system is going down for reboot NOW!
$ ssh -i "a.pem" name@ec2-172.1.1.3.compute-1.amazonaws.com -p 22
ssh: connect to host ec2-172.1.1.3.compute-1.amazonaws.com port 22: Operation timed out
You need to extend the available space:
$ lsblk
xvda     202:0    0  500G  0 disk
└─xvda1  202:1    0  100G  0 part /
$ growpart /dev/xvda 1
$ resize2fs /dev/xvda1
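A caveat of my own, not part of the original answer: resize2fs only handles ext2/3/4 filesystems. If the root filesystem were XFS, as in the earlier questions above, the last step would instead be:

$ sudo xfs_growfs -d /   # grow the XFS data section of the mounted root filesystem to its maximum size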