error when resizing partition using `growpart` on AWS EBS instance

I have an EC2 instance where I'm attempting to resize the disk on the fly. I've followed the instructions in this SO post but when I run sudo growpart /dev/nvme0n1p1 1, I get the following error:
FAILED: failed to get start and end for /dev/nvme0n1p11 in /dev/nvme0n1p1
What does this mean and how can I resolve it?
More info:
Output from lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
└─nvme0n1p1 259:1 0 300G 0 part /
I can see that the EBS volume is in the in-use (optimizing) state.
Thanks in advance!

But for me the solution didn’t work
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 280G 0 disk
├─xvdf1 202:81 0 4G 0 part [SWAP]
├─xvdf2 202:82 0 10G 0 part /data1
├─xvdf3 202:83 0 10G 0 part /data2
├─xvdf4 202:84 0 1K 0 part
├─xvdf5 202:85 0 10G 0 part /applications1
├─xvdf6 202:86 0 4G 0 part /applications2
├─xvdf7 202:87 0 8G 0 part /logsOld
├─xvdf8 202:88 0 50G 0 part /extra
├─xvdf9 202:89 0 20G 0 part /logs
└─xvdf10 202:90 0 64G 0 part /extra/tmp
growpart /dev/xvdf 10
FAILED: failed to get start and end for /dev/xvdf10 in /dev/xvdf

I think the name of the command growpart is a bit misleading: following the AWS instructions, you point it at the disk,
sudo growpart /dev/nvme0n1 1
not at the partition /dev/nvme0n1p1 (the partition to grow is given by the trailing number).
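For reference, on such an instance the full on-line resize usually looks roughly like this (a sketch assuming an ext4 root filesystem on the first partition, as in the lsblk output above; for XFS you would run xfs_growfs / instead of resize2fs):
sudo growpart /dev/nvme0n1 1      # grow partition 1 on the disk device
sudo resize2fs /dev/nvme0n1p1     # then grow the filesystem inside the partition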


How to increase 1st partition size via terminal only when there are second and third adjacent partitions for NVME

This is on an AWS EC2 M5a with EBS (Ubuntu 16.04)
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 50G 0 disk
├─nvme0n1p1 259:2 0 20G 0 part /
├─nvme0n1p2 259:3 0 2G 0 part [SWAP]
└─nvme0n1p3 259:4 0 28G 0 part
├─vg_abcdef-logs 251:1 0 8G 0 lvm /var/log
└─vg_abcdef-app 251:2 0 19G 0 lvm /home/abcdef
nvme1n1 259:0 0 50G 0 disk
└─vg_backups-backups 251:0 0 49G 0 lvm /home/abcdef/Backups-Disk
I added 50GB to the disk (nvme0n1)/EBS volume for a total of 100GB and need to expand the first partition (root /). I have the following:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 100G 0 disk
├─nvme0n1p1 259:2 0 20G 0 part /
├─nvme0n1p2 259:3 0 2G 0 part [SWAP]
└─nvme0n1p3 259:4 0 28G 0 part
├─vg_abcdef-logs 251:1 0 8G 0 lvm /var/log
└─vg_abcdef-app 251:2 0 19G 0 lvm /home/abcdef
nvme1n1 259:0 0 50G 0 disk
└─vg_backups-backups 251:0 0 49G 0 lvm /home/abcdef/Backups-Disk
The resize2fs command won't work on the first partition because the second and third partitions sit right after it. As you will see below, resize2fs does work on the third partition (nvme0n1p3):
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 100G 0 disk
├─nvme0n1p1 259:2 0 20G 0 part /
├─nvme0n1p2 259:3 0 2G 0 part [SWAP]
└─nvme0n1p3 259:4 0 78G 0 part
├─vg_abcdef-logs 251:1 0 8G 0 lvm /var/log
└─vg_abcdef-app 251:2 0 19G 0 lvm /home/abcdef
nvme1n1 259:0 0 50G 0 disk
└─vg_backups-backups 251:0 0 49G 0 lvm /home/abcdef/Backups-Disk
How do I move the second and third partitions (via terminal/CLI only) so that there is enough space to expand (extend the file system of) the first partition? I would prefer a solution in which I do not have to stop and restart the EC2 instance.
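For reference, the third partition shown above could be grown in place because the new free space sits immediately after it on the disk; a rough sketch of those steps, assuming the device names above and that nvme0n1p3 is the LVM physical volume backing vg_abcdef:
sudo growpart /dev/nvme0n1 3      # grow the last partition into the free space
sudo pvresize /dev/nvme0n1p3      # make LVM aware of the larger physical volume
The root partition cannot be grown the same way because nvme0n1p2 begins immediately after it.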

AWS Image Builder SSM execution error when cURLing S3 through an HTTP proxy - is there a way to inject the HTTP_PROXY env vars into Image Builder?

I believe that all I'd need to do to resolve this is to set SSM inside of Image Builder to use my proxy via the environment variable HTTP_PROXY=HOST:PORT.
For example, I can run this on another server where all traffic is directed through the proxy:
curl -I --socks5-hostname socks.local:1080 https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
Here's what Image Builder is trying to do and failing at (before any of the Image Builder components are run):
SSM execution '68711005-5dc4-41f6-8cdd-633728ca41da' failed with status = 'Failed' in state = 'BUILDING' and failure message = 'Step fails when it is verifying the command has completed. Command 76b55646-79bb-417c-8bb6-6ee01f9a76ff returns unexpected invocation result: {Status=[Failed], ResponseCode=[7], Output=[ ----------ERROR------- + sudo systemctl stop ecs + curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o /tmp/imagebuilder_service/awscli-bundle.zip % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0 0 0 0 0 0 0 0 0 ...'
These env vars are all that should be needed; the problem is that I see no way to add them (similarly to how you would in CodeBuild):
http_proxy=http://hostname:port
https_proxy=https://hostname:port
no_proxy=169.254.169.254
The SSM Agent does not read environment variables from the host; you need to provide them in the appropriate file below and restart the SSM Agent.
On Ubuntu Server instances where SSM Agent is installed by using a snap: /etc/systemd/system/snap.amazon-ssm-agent.amazon-ssm-agent.service.d/override.conf
On Amazon Linux 2 instances: /etc/systemd/system/amazon-ssm-agent.service.d/override.conf
On other operating systems: /etc/systemd/system/amazon-ssm-agent.service.d/amazon-ssm-agent.override
[Service]
Environment="http_proxy=http://hostname:port"
Environment="https_proxy=https://hostname:port"
Environment="no_proxy=169.254.169.254"
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-proxy-with-ssm-agent.html#ssm-agent-proxy-systemd
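For example, on an Amazon Linux 2 instance the whole change would look roughly like this (a sketch based on the documentation above; substitute your real proxy for hostname:port, and use the snap unit paths/names instead on snap-based Ubuntu installs):
sudo mkdir -p /etc/systemd/system/amazon-ssm-agent.service.d
sudo tee /etc/systemd/system/amazon-ssm-agent.service.d/override.conf <<'EOF'
[Service]
Environment="http_proxy=http://hostname:port"
Environment="https_proxy=https://hostname:port"
Environment="no_proxy=169.254.169.254"
EOF
sudo systemctl daemon-reload
sudo systemctl restart amazon-ssm-agent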

How to extract full paths from grep output using regular expression

I need to automatically detect any USB drives that are plugged in, mounted or not, mount the ones that aren't already mounted in folders named after the device (as happens on a Windows machine by default), and get the paths of the mount points of all the devices. The devices should be mounted in folders under /media/pi (I'm using a Raspberry Pi, so pi is my username). This is what I'm doing:
To get the paths of all mounted devices:
1) Run lsblk, which outputs:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 14.4G 0 disk
└─sda1 8:1 1 14.4G 0 part /media/pi/D0B46928B4691270
sdb 8:16 1 14.3G 0 disk
└─sdb1 8:17 1 14.3G 0 part /media/pi/MI PENDRIVE
mmcblk0 179:0 0 14.9G 0 disk
├─mmcblk0p1 179:1 0 41.8M 0 part /boot
└─mmcblk0p2 179:2 0 14.8G 0 part /
2) With a carefully crafted pipeline I can filter out some unnecessary info. I run lsblk | grep 'sd' | grep 'media', which outputs:
└─sda1 8:1 1 14.4G 0 part /media/pi/D0B46928B4691270
└─sdb1 8:17 1 14.3G 0 part /media/pi/MI PENDRIVE
I need to get /media/pi/D0B46928B4691270 and /media/pi/MI PENDRIVE, preferably in an array. Currently I'm doing this:
lsblk | grep 'sd' | grep 'media' | cut -d '/' -f 4
But it only works with paths that have no spaces, and of course the output of grep is not an array. What would be a clean way of doing this with regular expressions?
Thanks.
lsblk supports JSON output with the -J flag. I would recommend that if you want to parse the output (jq's -r flag prints the raw strings):
lsblk -J | jq -r '.. | objects | select(.name? // "" | startswith("sd")) | .mountpoint // empty'
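If you want those mount points in a bash array (as the question asks), something along these lines should work; a sketch, assuming bash 4+ for mapfile so that the path with a space stays in one element:
mapfile -t drives < <(lsblk -J | jq -r '.. | objects | select(.name? // "" | startswith("sd")) | .mountpoint // empty')
printf '%s\n' "${drives[@]}"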
Something like this?
$ echo "$f"
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 14.4G 0 disk
└─sda1 8:1 1 14.4G 0 part /media/pi/D0B46928B4691270
sdb 8:16 1 14.3G 0 disk
└─sdb1 8:17 1 14.3G 0 part /media/pi/MI PENDRIVE
mmcblk0 179:0 0 14.9G 0 disk
├─mmcblk0p1 179:1 0 41.8M 0 part /boot
└─mmcblk0p2 179:2 0 14.8G 0 part /
$ grep -o '/media/.*$' <<<"$f"
/media/pi/D0B46928B4691270
/media/pi/MI PENDRIVE
$ IFS=$'\n' drives=( $(grep -o '/media/.*$' <<<"$f") )
$ printf '%s\n' "${drives[@]}"
/media/pi/D0B46928B4691270
/media/pi/MI PENDRIVE

rsync running differently from QProcess compared to bash command line

I am experimenting with launching rsync from QProcess and although it runs, it behaves differently when run from QProcess compared to running the exact same command from the command line.
Here is the command and stdout when run from QProcess
/usr/bin/rsync -atv --stats --progress --port=873 --compress-level=9 --recursive --delete --exclude="/etc/*.conf" --exclude="A*" rsync://myhost.com/haast/tmp/mysync/* /tmp/mysync/
receiving incremental file list
created directory /tmp/mysync
A
0 100% 0.00kB/s 0:00:00 (xfer#1, to-check=6/7)
B
0 100% 0.00kB/s 0:00:00 (xfer#2, to-check=5/7)
test.conf
0 100% 0.00kB/s 0:00:00 (xfer#3, to-check=4/7)
subdir/
subdir/A2
0 100% 0.00kB/s 0:00:00 (xfer#4, to-check=2/7)
subdir/C
0 100% 0.00kB/s 0:00:00 (xfer#5, to-check=1/7)
subdir/D
0 100% 0.00kB/s 0:00:00 (xfer#6, to-check=0/7)
Number of files: 7
Number of files transferred: 6
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 105
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 174
Total bytes received: 367
sent 174 bytes received 367 bytes 360.67 bytes/sec
total size is 0 speedup is 0.00
Notice that although I excluded 'A*', it still copied them! Now running the exact same command from the command line:
/usr/bin/rsync -atv --stats --progress --port=873 --compress-level=9 --recursive --delete --exclude="/etc/*.conf" --exclude="A*" rsync://myhost.com/haast/tmp/mysync/* /tmp/mysync/
receiving incremental file list
created directory /tmp/mysync
B
0 100% 0.00kB/s 0:00:00 (xfer#1, to-check=4/5)
test.conf
0 100% 0.00kB/s 0:00:00 (xfer#2, to-check=3/5)
subdir/
subdir/C
0 100% 0.00kB/s 0:00:00 (xfer#3, to-check=1/5)
subdir/D
0 100% 0.00kB/s 0:00:00 (xfer#4, to-check=0/5)
Number of files: 5
Number of files transferred: 4
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 83
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 132
Total bytes received: 273
sent 132 bytes received 273 bytes 270.00 bytes/sec
total size is 0 speedup is 0.00
Notice that now the 'A*' exclude is respected! Can someone explain why they are performing differently?
I noticed that if I removed the quotes surrounding the excludes, then the QProcess run performs correctly.
In your command-line execution, the bash interpreter performs quote removal before building the argument list, so the quotes are not passed on to rsync.
The following script shows how this bash substitution works:
[tmp]$ cat printargs.sh
#!/bin/bash
echo $*
[tmp]$ ./printargs.sh --exclude="A*"
--exclude=A*
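QProcess, by contrast, does not involve a shell, so the argument reaches rsync with the quotes still embedded; you can simulate this by stopping bash from removing them (same hypothetical printargs.sh as above):
[tmp]$ ./printargs.sh '--exclude="A*"'
--exclude="A*"
rsync is then presumably looking for names that literally start with a double quote, which matches nothing, so the A files get copied.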

Way to get SCSI disk names in Linux C++ application

In my Linux C++ application I want to get the names of all SCSI disks present on the system, e.g. /dev/sda, /dev/sdb, and so on.
Currently I am deriving them from the contents of /proc/scsi/sg/devices (shown below), using the code that follows:
host chan SCSI id lun type opens qdepth busy online
0 0 0 0 0 1 128 0 1
1 0 0 0 0 1 128 0 1
1 0 0 1 0 1 128 0 1
1 0 0 2 0 1 128 0 1
// MAX_ENG_ALPHABETS is 26 (the number of letters in the English alphabet).
// If the SCSI device id is >= 26, the corresponding device name is like /dev/sdaa or /dev/sdab etc.
if (MAX_ENG_ALPHABETS <= scsiId)
{
    // Device name order is: aa, ab, ..., az, ba, bb, ..., bz, ..., zy, zz.
    deviceName.append(1, 'a' + (char)(scsiId / MAX_ENG_ALPHABETS) - 1);
    deviceName.append(1, 'a' + (char)(scsiId % MAX_ENG_ALPHABETS));
}
// If the SCSI device id is < 26, the corresponding device name is like /dev/sda or /dev/sdb etc.
else
{
    deviceName.append(1, 'a' + scsiId);
}
But the file /proc/scsi/sg/devices also contains information about disks that were previously present on the system. E.g. if I detach the disk (LUN) /dev/sdc from the system, the file /proc/scsi/sg/devices still contains an entry for /dev/sdc, which is invalid.
Is there a different way to get the SCSI disk names, like a system call?
Thanks
You can simply read the list of all files matching /dev/sd* (in C, you would need to use opendir/readdir/closedir) and filter it by sdX (where X is one or two letters).
Also, you can get the list of all partitions by reading the single file /proc/partitions and then filtering the 4th field by sdX:
$ cat /proc/partitions
major minor #blocks name
8 0 52428799 sda
8 1 265041 sda1
8 2 1 sda2
8 5 2096451 sda5
8 6 50066541 sda6
which would give you the list of all physical disks together with their capacity in blocks (3rd field).
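A quick sketch of that filtering from the shell (the same 4th-field test you would reproduce when parsing the file in C++), assuming whole disks are the sd entries whose names contain no digits:
awk '$4 ~ /^sd[a-z]+$/ { print "/dev/" $4, $3 }' /proc/partitions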
After getting the disk name list from /proc/scsi/sg/devices, you can verify each device's existence in code. For example, install sg3-utils and use sg_inq to query whether the disk is active.
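A minimal check along those lines (a sketch; sg_inq exits with a non-zero status when the device can no longer be opened):
sudo sg_inq /dev/sdc > /dev/null 2>&1 || echo "/dev/sdc is no longer present"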