AWS volume larger than specified - amazon-web-services

I created an AWS instance with an attached volume of size 500 GB. Everything looks good in the AWS console, which shows the volume to be 500 GB (it is /dev/xvdf). But when I ssh into the instance and look at the drive, I see it is actually 540 GB instead of 500 GB. Why is this? Where did this extra 40 GB come from?
fdisk output:
Disk /dev/xvdf: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders, total 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
df -h (uses 1024):
/dev/xvdf 493G 110G 358G 24% /data0
df -H (uses 1000):
/dev/xvdf 529G 118G 384G 24% /data0

Your volume size is correct.
536,870,912,000 ÷ 1,024 ÷ 1,024 ÷ 1,024 = 500 GiB.
1 GiB ("gibibyte," or giga-binary byte) is 2^30 bytes, and EBS volume sizes are in GiB. fdisk reports the same size in decimal gigabytes, 536,870,912,000 ÷ 1,000 ÷ 1,000 ÷ 1,000 ≈ 536.9 GB, which is where the apparent extra 40 GB comes from.

I might be wrong, but /dev/xvdf shows that AWS is using some form of Xen, be that XenServer or some other flavor.
What happens is: Xen calculates how much space is actually needed so that after you format the volume to ext4 (or any other filesystem) you will have 500 GB, or as close to it as possible.
Anyway, this is in my experience.

Related

CentOS 7: LVM swap extension not shown by the "free" command

I'm running a CentOS 7 guest on VirtualBox 6 on Windows. The output of the free command is as follows:
$ free -h
total used free shared buff/cache available
Mem: 15G 2.4G 11G 162M 1.5G 12G
Swap: 1.2G 0B 1.2G
showing that the swap partition has 1.2 GB. I need to extend it to at least 2 GB. So, with the guest stopped, I added a new 1.2 GB volume and, after rebooting, did the following:
$ sudo pvcreate /dev/sdb
$ sudo vgextend centos /dev/sdb
$ sudo lvextend -L+1G /dev/centos/swap
Now, the lvdisplay command shows the newly extended volume, as follows:
$ sudo lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID 1OT4R8-69eL-vczL-zydM-XrwS-jA47-YfikMS
LV Write Access read/write
LV Creation host, time localhost, 2019-12-30 22:01:35 +0100
LV Status available
# open 2
LV Size <2.20 GiB
Current LE 563
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID hGDGPf-iPMB-TUtM-nqRv-aDNd-D3mw-W15H8Z
LV Write Access read/write
LV Creation host, time localhost, 2019-12-30 22:01:35 +0100
LV Status available
# open 1
LV Size <76.43 GiB
Current LE 19565
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
The fstab file looks as follows:
/dev/mapper/centos-root / xfs defaults 0 0
UUID=4ef0416f-1617-40da-99d2-83896d808eed /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
showing that the swap is allocated on the /dev/mapper/centos-swap partition. Here is the output of the fdisk command:
Disk /dev/mapper/centos-root: 82.1 GB, 82061557760 bytes, 160276480 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 2361 MB, 2361393152 bytes, 4612096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
But after reboot the swapon command doesn't seem to reflect the extension:
$ sudo swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 1257468 0 -2
For some reason, the swap doesn't seem to be on the /dev/mapper/centos-swap partition but on /dev/dm-1, which doesn't even exist. And the free command still shows the same result as in the beginning:
$ free -h
total used free shared buff/cache available
Mem: 15G 2.4G 11G 155M 1.5G 12G
Swap: 1.2G 0B 1.2G
and the /proc/swaps:
$ cat /proc/swaps
Filename Type Size Used Priority
/dev/dm-1 partition 1257468 0 -2
What am I missing here?
Seymour
I'm answering my own question. The issue is simply solved by running the following command:
sudo mkswap /dev/mapper/centos-swap
After that, the free command shows the new, increased swap space, and the /proc/swaps file also reflects it.
I found the solution by chance, while searching for another topic. It seems that, after creating the physical volume and extending the volume group and the logical volume, it is not enough to activate the new swap with the swapon command; you also have to effectively "make" the swap with the mkswap command.
Don't ask me why, this is the way it works :-).
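For reference, a minimal end-to-end sketch of the whole swap extension, assuming the LV path /dev/centos/swap from the question (deactivating swap around mkswap is the usual safe order, since mkswap rewrites the swap signature):
$ sudo swapoff /dev/centos/swap            # deactivate the swap LV
$ sudo lvextend -L +1G /dev/centos/swap    # grow the logical volume
$ sudo mkswap /dev/centos/swap             # recreate the swap signature at the new size
$ sudo swapon /dev/centos/swap             # activate the enlarged swap
$ free -h                                  # verify the new total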

How to convert a qemu image (2 pflash + ide) to virtualbox vdi?

I am starting a VM with QEMU this way:
qemu-system-x86_64 \
-m 512M \
-drive file=ovmf.qcow2,if=pflash,format=qcow2,unit=0,readonly=on \
-drive file=ovmf.vars.qcow2,if=pflash,format=qcow2,unit=1 \
-nographic \
-drive file=file.uefiimg,if=ide,format=raw
fdisk -l file.uefiimg output:
Disk file.uefiimg: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DC9B048E-91D0-4DD0-BD7A-4D6130AA726F
Device Start End Sectors Size Type
file.uefiimg1 16384 49151 32768 16M EFI System
file.uefiimg2 49152 1589247 1540096 752M Linux filesystem
file.uefiimg3 1589248 3129343 1540096 752M Linux filesystem
file.uefiimg4 3129344 4177919 1048576 512M Linux filesystem
Now the tricky part is that I would like to start this on VirtualBox; if that is not possible, VMware is also an option. I tried converting the uefiimg to a raw image with VBoxManage and then to VDI, without success. I think the main problem was that I needed to include the qcow2 files. I read about those two files, attached as pflash, but I don't understand how to load them in VirtualBox (or whether it's possible).
I converted the image to vdi with
VBoxManage convertfromraw file.uefiimg --format vdi file.vdi
Then I loaded this VDI and it works perfectly.
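In case anyone needs the VirtualBox side spelled out, a minimal sketch (the VM name uefi-test is just a placeholder; the OVMF pflash files should not be needed, because VirtualBox ships its own EFI firmware, enabled with --firmware efi):
VBoxManage convertfromraw file.uefiimg file.vdi --format VDI
VBoxManage createvm --name uefi-test --register
VBoxManage modifyvm uefi-test --memory 512 --firmware efi
VBoxManage storagectl uefi-test --name SATA --add sata
VBoxManage storageattach uefi-test --storagectl SATA --port 0 --device 0 --type hdd --medium file.vdi
VBoxManage startvm uefi-test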

Amazon EC2 resize root device

I have an Amazon EC2 instance and would like to extend the root device from 100G to 500G. I created a new 500G volume and reattached it to the instance.
I can see the volume is there with the lsblk command. However, when I try to resize the filesystem I cannot do it; I get the error "The filesystem is already 26212055 blocks long. Nothing to do!"
name@ip-172-1-1-3:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 8.0K 3.9G 1% /dev
tmpfs 799M 840K 798M 1% /run
/dev/xvda1 99G 92G 3.1G 97% /
name@ip-172-1-1-3:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE
MOUNTPOINT
xvda 202:0 0 500G 0 disk
└─xvda1 202:1 0 100G 0 part /
name@ip-172-1-1-3:~$ sudo resize2fs /dev/xvda1
resize2fs 1.42.9 (4-Feb-2014)
The filesystem is already 26212055 blocks long. Nothing to do!
here's exactly what to do:
df -h #print the name of your boot partition
lsblk #show info on all your block devices
You'll see from that output the name of the disk that holds your root partition. For example, you probably see something like this:
xvde 202:64 0 32G 0 disk
└─xvde1 202:65 0 8G 0 part /
Our goal is to make xvde1 use the whole available space from xvde.
Here's how to resize your partition:
fdisk /dev/xvda (the disk name, not your partition)
This enters into the fdisk utility.
u #Change the display to sectors
p #Print info
d #Delete the partition
n #New partition
p #Primary partition
1 #Partition number
2048 #First sector
Press Enter to accept the default last sector
p #Print info
a #Toggle the bootable flag
1 #Select partition 1
w #Write table to disk and exit
Now, reboot your instance:
reboot
After it comes back do:
resize2fs /dev/xvde1 (the name of your partition, not the block device)
And finally verify the new disk size:
df -h
After I followed @error2007s's steps through step 12 ("a", toggle the bootable flag) and then stopped and rebooted, I cannot bring the instance back up.
Disk /dev/xvda: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders, total 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 2048 1048575999 524286976 83 Linux
Command (m for help): a
Partition number (1-4): 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
name@ip-172-1-1-3:~$ reboot
reboot: Need to be root
name@ip-172-1-1-3:~$ sudo reboot
Broadcast message from name@ip-172-1-1-3
(/dev/pts/1) at 10:18 ...
The system is going down for reboot NOW!
$ ssh -i "a.pem" name#ec2-172.1.1.3.compute-1.amazonaws.com -p 22
ssh: connect to host ec2-172.1.1.3.compute-1.amazonaws.com port 22: Operation timed out
You need to extend the available space:
$ lsblk
xvda 202:0 0 500G 0 disk
└─xvda1 202:1 0 100G 0 part /
$ growpart /dev/xvda 1
$ resize2fs /dev/xvda1
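If growpart is not present on the instance, the full sequence looks roughly like this (a sketch assuming an Ubuntu guest; the package providing growpart is cloud-guest-utils there, cloud-utils-growpart on Amazon Linux):
$ sudo apt-get install -y cloud-guest-utils   # provides growpart
$ sudo growpart /dev/xvda 1                   # grow partition 1 to fill the 500G disk, online
$ sudo resize2fs /dev/xvda1                   # grow the ext4 filesystem into the new space
$ df -h /                                     # verify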

In Hadoop HDFS, how many data nodes a 1GB file uses to be stored?

I have a 1 GB file to be stored on the HDFS file system. I have a cluster of 10 DataNodes and a NameNode. Is there any calculation the NameNode uses to pick a particular number of DataNodes (not counting replicas) for storing the file? Or is there any parameter we can configure for file storage? If so, what is the default number of DataNodes Hadoop uses to store the file when it is not specifically configured?
I want to know whether it uses all the DataNodes in the cluster or only a specific number of them.
Let's assume the HDFS block size is 64 MB and that free space exists on all the DataNodes.
Thanks in advance.
If the configured block size is 64 MB and you have a 1 GB file, the file size is 1024 MB.
So the blocks needed will be 1024 / 64 = 16, which means one copy of your 1 GB file occupies 16 blocks.
Now, say you have a 10-node cluster. The default replication factor is 3, so each block of your 1 GB file will be stored on 3 different nodes. The blocks occupied by your 1 GB file therefore come to 16 × 3 = 48 blocks.
Since each block is 64 MB, the total space your 1 GB file consumes is 64 MB × 48 = 3072 MB.
Hope that clears your doubt.
In the second (2nd) generation of Hadoop (Hadoop 2.x), where the default block size is 128 MB:
If the configured block size is 128 MB and you have a 1 GB file, the file size is 1024 MB.
So the blocks needed will be 1024 / 128 = 8, which means one copy of your 1 GB file occupies 8 blocks.
Now, say you have a 10-node cluster. The default replication factor is 3, so each block of your 1 GB file will be stored on 3 different nodes. The blocks occupied by your 1 GB file therefore come to 8 × 3 = 24 blocks.
Since each block is 128 MB, the total space your 1 GB file consumes is 128 MB × 24 = 3072 MB.
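If you want to see exactly which DataNodes ended up holding the blocks of a given file, you can ask the NameNode directly (a sketch; /data/myfile.bin is a placeholder path):
$ hdfs fsck /data/myfile.bin -files -blocks -locations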

Resizing disk space on vagrant box

I'd like to give my box some more disk space. I'm trying to do this through the vagrantfile as follows:
Vagrant::Config.run do |config|
  # ..
  config.vm.customize ["modifyvm", :id, "--memory", 1024]
  config.vm.customize ["modifyhd", :id, "--resize", 4096]
end
This gives me the error:
A customization command failed:
["modifyhd", "e87d8786-88be-4805-9c2a-45e88b8e0e56", "--resize", "4096"]
The following error was experienced:
VBoxManage: error: The given path 'e87d8786-88be-4805-9c2a-45e88b8e0e56' is not fully qualified
VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Medium, interface IMedium, callee nsISupports
VBoxManage: error: Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, enmAccessMode, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 178 of file VBoxManageDisk.cpp
Please fix this customization and try again.
I'm trying to piece the information together from http://docs.vagrantup.com/v1/docs/config/vm/customize.html
http://www.virtualbox.org/manual/ch08.html#vboxmanage-modifyvdi
You are sending modifyhd the UUID of the VM (provided by vagrant) while it expects the UUID of the VDI.
You will need to use the absolute path to the actual VDI file or its UUID. You can use the following command to get the UUID of the VDI: VBoxManage showhdinfo <filename> (see virtualbox - how to check what is the uuid of a vdi?)
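A minimal sketch of what that looks like outside Vagrant (the .vdi path is a placeholder; the disk usually lives under the VM's folder in ~/VirtualBox VMs/):
$ VBoxManage list hdds                                                        # lists every registered disk with its UUID and location
$ VBoxManage showhdinfo "$HOME/VirtualBox VMs/mybox/box-disk1.vdi"            # or query one file directly
$ VBoxManage modifyhd "$HOME/VirtualBox VMs/mybox/box-disk1.vdi" --resize 4096   # size in MB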
I created a new disk, attached it, and extended the old one.
My Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "bseller/oracle-standard"
  config.vm.define :oracle do |oracle|
    oracle.vm.hostname = 'oraclebox'
    oracle.vm.synced_folder ".", "/vagrant", owner: "oracle", group: "oinstall"
    oracle.vm.network :private_network, ip: '192.168.33.13'
    oracle.vm.network :forwarded_port, guest: 1521, host: 1521
    oracle.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "4096"]
      vb.customize ["modifyvm", :id, "--name", "oraclebox"]
      if !File.exist?("disk/oracle.vdi")
        vb.customize [
          'createhd',
          '--filename', 'disk/oracle',
          '--format', 'VDI',
          '--size', 60200
        ]
        vb.customize [
          'storageattach', :id,
          '--storagectl', "SATA",
          '--port', 1, '--device', 0,
          '--type', 'hdd', '--medium', 'disk/oracle.vdi'
        ]
      end
    end
    oracle.vm.provision "shell", path: "shell/add-oracle-disk.sh"
    oracle.vm.provision "shell", path: "shell/provision.sh"
  end
end
This will create the new disk at
disk
|-- oracle.vdi
shell
|-- provision.sh
Vagrantfile
and attach it to your box. The new disk is 60 GB.
My shell provision.sh
set -e
set -x
if [ -f /etc/disk_added_date ] ; then
  echo "disk already added so exiting."
  exit 0
fi
# create one primary partition spanning /dev/sdb and set its type to 8e (Linux LVM);
# the two blank lines accept the default first and last sectors
sudo fdisk -u /dev/sdb <<EOF
n
p
1


t
8e
w
EOF
sudo pvcreate /dev/sdb1
sudo vgextend VolGroup /dev/sdb1
sudo lvextend -L50GB /dev/VolGroup/lv_root
sudo resize2fs /dev/VolGroup/lv_root
date > /etc/disk_added_date
This script was adapted from SHC for the bseller/oracle-standard box. For the full code, see my oraclebox project on GitHub.
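After provisioning, a quick sanity check from the host (a sketch; the output will vary with your box):
$ vagrant ssh -c "df -h / && sudo lvs"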
I've been looking at this, and I haven't found any way to actually do this directly. However, you can achieve the effect using Ansible as a provisioner. First of all, it is definitely possible with Vagrant to create and add a second disk, which you can then add and mount any way you like using Ansible.
However, Ansible also has the ability to run local commands (on the host). This is with Ansible's local_action feature. I used it here to reboot a Vagrant VM after a kernel upgrade and tell the host to wait until it has restarted, but you could use this with the command or shell actions to find the HD identifier, shutdown the VM, and configure the hard disk, then reboot. At least in theory.
Although the question is old, I see that no answer was accepted.
The given path 'e87d8786-88be-4805-9c2a-45e88b8e0e56' is not fully qualified shows up because the UUID e87d8... is the VirtualBox VM UUID, not the UUID of your SATA storage disk (the VDI). You can find the storage device UUID with VBoxManage showvminfo e87d8786-88be-4805-9c2a-45e88b8e0e56 | grep vdi. Then replace :id with the SATA storage UUID in the Vagrantfile's modifyhd line.
It solved my problem.
OK... Solved...
VBoxManage.exe wasn't in my PATH, so what I did was go to this directory (you have to be in that path):
C:\Program Files\Oracle\VirtualBox
then used the command:
VBoxManage.exe modifyhd "C:\Users\MyUser\VirtualBox VMs\MachineName\HDName.vdi " --resize 20480
for a 20 GB HD.
This did NOT work:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd "C:\Users\MyUser\VirtualBox VMs\MachineName\HDName.vdi " --resize 20480
You have to be in the path: C:\Program Files\Oracle\VirtualBox
You can add a new disk instead.
First use the VirtualBox GUI to add another virtual disk, then use fdisk to create a primary partition:
root@linux-dev:/# fdisk -l
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 9.9 GiB, 10632560640 bytes, 20766720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x83312a2b
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 19816447 19814400 9.5G 83 Linux
/dev/sda2 19818494 20764671 946178 462M 5 Extended
/dev/sda5 19818496 20764671 946176 462M 82 Linux swap / Solaris
root@linux-dev:/# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5eb328b9.
Command (m for help): m
Help:
DOS (MBR)
a toggle a bootable flag
b edit nested BSD disklabel
c toggle the dos compatibility flag
Generic
d delete a partition
l list known partition types
n add a new partition
p print the partition table
t change a partition type
v verify the partition table
Misc
m print this menu
u change display/entry units
x extra functionality (experts only)
Save & Exit
w write table to disk and exit
q quit without saving changes
Create a new label
g create a new empty GPT partition table
G create a new empty SGI (IRIX) partition table
o create a new empty DOS partition table
s create a new empty Sun partition table
Command (m for help): p
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5eb328b9
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):
Created a new partition 1 of type 'Linux' and of size 20 GiB.
Command (m for help): p
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5eb328b9
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 41943039 41940992 20G 83 Linux
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Make the newly created partition an ext4 filesystem:
root@linux-dev:/# mkfs.ext4 /dev/sdb1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 5242624 4k blocks and 1310720 inodes
Filesystem UUID: 0301b56a-1d80-42de-9334-cc49e4eaf7b2
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Mount the disk partition to a directory
root@linux-dev:/# mount -t ext4 /dev/sdb1 /home/chenchun
root@linux-dev:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.2G 3.3G 5.5G 38% /
udev 10M 0 10M 0% /dev
tmpfs 74M 4.4M 70M 6% /run
tmpfs 185M 0 185M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 185M 0 185M 0% /sys/fs/cgroup
none 372G 240G 133G 65% /vagrant
/dev/sdb1 20G 44M 19G 1% /home/chenchun
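To make the mount survive a reboot, the corresponding /etc/fstab entry could be appended (a sketch; the UUID comes from the mkfs.ext4 output above and the mount point is the same /home/chenchun):
root@linux-dev:/# echo 'UUID=0301b56a-1d80-42de-9334-cc49e4eaf7b2 /home/chenchun ext4 defaults 0 2' >> /etc/fstab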