How to expand new EBS storage?

I have added a new 2TB EBS volume to my database instance:
(volume snapshot screenshot omitted)
I would like to merge in the 2 TB volume so that I can write a large dataset to the database, but growpart and resize2fs don't seem to work.
I get this error:
ubuntu:~$ sudo growpart /dev/nvme0n1 0
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1
sfdisk: /dev/nvme0n1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1
Any suggestions on how to approach this?
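The sfdisk output is the clue here: a freshly attached volume has no partition table, so there is nothing for growpart to grow; growpart and resize2fs only apply when an existing disk has been enlarged in place. A minimal sketch of putting a brand-new volume to use directly, assuming the new 2 TB disk shows up as /dev/nvme1n1 and /data is the desired mount point (both names are illustrative, not taken from the question):
lsblk                           # identify the new, empty disk
sudo file -s /dev/nvme1n1       # "data" means it has no filesystem yet
sudo mkfs -t ext4 /dev/nvme1n1  # create a filesystem on the whole device
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data   # add an /etc/fstab entry to make this persistent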

Related

AWS Replication agent problem when launching

I am trying to launch the AWS Replication Agent on CentOS 8.3 and it always returns an error during the replication agent installation (python3 aws-replication-installer-init.py ......)
The output of the process shows me:
The installation of the AWS Replication Agent has started.
Identifying volumes for replication.
Identified volume for replication: /dev/sdb of size 7 GiB
Identified volume for replication: /dev/sda of size 11 GiB
All volumes for replication were successfully identified.
Downloading the AWS Replication Agent onto the source server... Finished.
Installing the AWS Replication Agent onto the source server...
Error: Failed Installing the AWS Replication Agent
Installation failed.
If I check aws_replication_agent_installer.log, I can see messages like:
make -C /lib/modules/4.18.0-348.2.1.el8_5.x86_64/build M=/tmp/tmp8mdbz3st/AgentDriver modules
.....................
retcode: 0
Build essentials returned with code None
--- Building software
running: 'which zypper'
retcode: 256
running: 'make'
retcode: 0
running: 'chmod 0770 ./aws-replication-driver-commander'
retcode: 0
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 0.
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 1.
............
Cannot insert module. Try 9.
Installation returned with code 2
Installation failed due to unspecified error:
stderr: sh: /var/lib/aws-replication-agent/stopAgent.sh: No such file or directory
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no apt-get in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
rmmod: ERROR: Module aws_replication_driver is not currently loaded
insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not available
rmmod: ERROR: Module aws_replication_driver is not currently loaded
Any idea what is causing the error?
Running the command:
mokutil --disable-validation
will allow kernel modules to be changed (the next boot will ask you to confirm the change by entering characters from the password you set when running mokutil).
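For context, the "Required key not available" insmod error is the usual symptom of Secure Boot rejecting an unsigned kernel module, which is what the mokutil workaround above addresses. A rough sketch of that workflow on the source server (the reboot step drops you into the MOK Manager screen at boot):
mokutil --sb-state                  # check whether Secure Boot is currently enabled
sudo mokutil --disable-validation   # prompts you to set a one-time password
sudo reboot                         # confirm in MOK Manager using characters of that password
# after the reboot, re-run the installer with the same arguments as before:
python3 aws-replication-installer-init.py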

Docker-compose blkio: device_write_iops not working for AWS/EBS instance

I am trying to limit iops on a particular container in my docker-compose stack. To do this I am using the following config:
blkio_config:
  device_write_iops:
    - path: "/dev/xvda1"
      rate: 20
  device_read_iops:
    - path: "/dev/xvda1"
      rate: 20
I cannot provide the rest of the file for security reasons, however the problem is isolated to this statement. I confirmed that this is the correct path for my EBS volume using the df -h command.
When I then run docker-compose up -d I get the following error:
Recreating e1c25c41b612_drone ... error
ERROR: for e1c25c41b612_drone Cannot start service drone: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:396: setting cgroup config for procHooks process caused \\\"failed to write 202:1 20 to blkio.throttle.read_iops_device: write /sys/fs/cgroup/blkio/docker/a674e86d50111afa576d5fd4e16a131070c100b7db3ac22f95986904a47ae82a/blkio.throttle.read_iops_device: invalid argument\\\"\"": unknown
ERROR: for drone Cannot start service drone: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:396: setting cgroup config for procHooks process caused \\\"failed to write 202:1 20 to blkio.throttle.read_iops_device: write /sys/fs/cgroup/blkio/docker/a674e86d50111afa576d5fd4e16a131070c100b7db3ac22f95986904a47ae82a/blkio.throttle.read_iops_device: invalid argument\\\"\"": unknown
The IOPS limit on my EBS volume is 120, so I tested using a variety of different values, to no avail.
Any help is massively appreciated.
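One possible cause (an assumption, not confirmed by the question): the kernel's blkio throttle settings are applied per whole block device, so pointing the limit at the partition /dev/xvda1 (major:minor 202:1) is rejected with "invalid argument". A sketch of the same config aimed at the whole disk instead, assuming the underlying device is /dev/xvda:
blkio_config:
  device_write_iops:
    - path: "/dev/xvda"   # whole disk rather than the /dev/xvda1 partition
      rate: 20
  device_read_iops:
    - path: "/dev/xvda"
      rate: 20
Running lsblk -o NAME,MAJ:MIN,TYPE shows which entry is the disk and which is the partition.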

`growpart` failed on Debian

I am running Debian 8.7 on Google Cloud. The instance had a disk of size 50G, and I increased its size to 100G, as shown in the lsblk output below:
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0  100G  0 disk
└─sda1    8:1    0   50G  0 part /
I then tried to increase the size of sda1 using:
sudo growpart /dev/sda 1
but got the following error:
failed [sfd_list:1] sfdisk --list --unit=S /dev/sda
FAILED: failed: sfdisk --list /dev/sda
It didn't tell me the specific reason for the failure. I googled around and couldn't find anyone who had run into this issue.
I followed the gcloud documentation and cannot figure out where the problem is.
Google Cloud images for Debian, Ubuntu, etc. have the ability to automatically resize the root file system on startup. If you resize the disk while the system is running, the next time the system is rebooted the partition and file system will be resized.
You can also resize the root file system while the system is running without rebooting.
Replace INSTANCE_NAME and ZONE in the following commands. The second command assumes that the file system is ext4; check what your system actually uses.
Resize the disk:
gcloud compute disks resize INSTANCE_NAME --zone ZONE --size 30GB --quiet
Resize the partition and file system:
gcloud compute ssh INSTANCE_NAME --zone ZONE --command "sudo expand-root.sh /dev/sda 1 ext4"
Debian 9 – Resize Root File System
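If you would rather run the standard tools on the instance yourself instead of the script above, a minimal sketch of the usual manual route (assuming, as in the question, the disk is /dev/sda, the root partition is /dev/sda1, and the filesystem is ext4):
sudo apt-get install -y cloud-guest-utils   # provides growpart on Debian
sudo growpart /dev/sda 1                    # grow partition 1 to fill the disk
sudo resize2fs /dev/sda1                    # grow the ext4 filesystem online
df -h /                                     # confirm the new size
If growpart still fails as it did in the question, the script-based route above is the safer path.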

Error when mounting a volume on an EC2 instance

I am following an online tutorial to set up an EC2 instance for a group project. http://www.developintelligence.com/blog/2017/02/analyzing-4-million-yelp-reviews-python-aws-ec2-instance/.
The instance I used is r3.4xlarge. The tutorial says that if I chose an instance with an SSD, I need to mount it by running the following commands:
lsblk
sudo mkdir /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd
lsblk shows the following:
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk
└─xvda1 202:1    0    8G  0 part /
xvdb    202:16   0  300G  0 disk
However, when I run sudo mount /dev/xvdb /mnt/ssd, it gives me the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Could someone provide a solution to this error? Thanks!
Before you can mount a filesystem in Linux, the filesystem has to be created. In this case that would be:
mkfs.ext4 /dev/xvdb
This creates an ext4 filesystem on the /dev/xvdb device.
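Putting that together with the commands from the question, the full sequence looks roughly like this (assuming the instance-store device really does show up as /dev/xvdb and the login user is ubuntu):
lsblk                          # confirm the 300G device name
sudo mkfs.ext4 /dev/xvdb       # create the filesystem (this erases the device)
sudo mkdir -p /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd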

Not all storage is available for Amazon EBS

I'm sure the problem comes from my misunderstanding of EC2 + EBS configuration, so the answer might be very simple.
I've created a Red Hat EC2 instance on AWS with 30 GB of EBS storage, but lsblk shows me that only 6 GB of the total 30 is available to me:
xvda    202:0  0  30G  0 disk
└─xvda1 202:1  0   6G  0 part /
How can I make all of the remaining storage space available to my instance?
[UPDATE] commands output:
mount:
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sudo fdisk -l /dev/xvda:
WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 1 1306 10485759+ ee GPT
resize2fs /dev/xvda1:
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 1572864 blocks long. Nothing to do!
I believe you are experiencing an issue that seems specific to EC2 and RHEL* where the partition won't extend using the standard tools.
If you follow the instructions of this previous answer you should be able to extend the partition to use the full space. Follow the instructions particularly carefully if expanding the root partition!
unable to resize root partition on EC2 centos
If you update your question with the output of fdisk -l /dev/xvda and mount, that should help provide any extra information if the following isn't suitable:
I would assume that you could either re-partition xvda to provision the space for another mount point (/var or /home for example) or grow your current root partition into the extra space available; you can follow the guide linked here to do this.
Obviously be sure to back up any data you have on there; this is potentially destructive!
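If you go the grow-in-place route rather than carving out a separate partition, a minimal sketch of the usual steps (assuming a reasonably recent growpart, which understands GPT, from the cloud-utils-growpart package, and the ext4 root filesystem on /dev/xvda1 shown in the mount output above):
sudo yum install -y cloud-utils-growpart   # provides the growpart command on RHEL
sudo growpart /dev/xvda 1                  # extend partition 1 to the end of the disk
sudo resize2fs /dev/xvda1                  # grow the ext4 filesystem; an xfs root would use xfs_growfs
df -h /                                    # verify the extra space is visible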
[Update - how to use parted]
The following link will talk you through using GNU Parted to create a partition. You will essentially just need to create the new partition, temporarily mount it to a directory such as /mnt/newhome, and copy across all of the current contents of /home (recursively as root, keeping permissions, with cp -rp /home/* /mnt/newhome). Then rename the current /home to /homeold and make sure you have set up fstab with the correct entry (assuming your new partition is /dev/xvda2):
/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1
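A condensed sketch of that procedure, picking up after the new partition has been created with parted per the linked guide (assuming it appears as /dev/xvda2 and is formatted as ext4; run as root, and only after backing up):
mkfs.ext4 /dev/xvda2           # put a filesystem on the new partition
mkdir -p /mnt/newhome
mount /dev/xvda2 /mnt/newhome
cp -rp /home/* /mnt/newhome    # copy the current /home, preserving permissions
mv /home /homeold              # keep the old copy until you are happy with the result
mkdir /home
echo '/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1' >> /etc/fstab
mount /home                    # the new partition now serves /home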