`growpart` failed on Debian - google-cloud-platform

I am running Debian 8.7 on Google Cloud. The instance had a disk of size 50G, and I increased its size to 100G, as shown in the lsblk output below:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
`-sda1 8:1 0 50G 0 part /
I then tried to increase the size of sda1 using
sudo growpart /dev/sda 1
but got the following error:
failed [sfd_list:1] sfdisk --list --unit=S /dev/sda
FAILED: failed: sfdisk --list /dev/sda
It didn't tell me the specific reason for the failure. I googled around and couldn't find anyone else who has run into this issue.
I followed the gcloud documentation and cannot figure out where the problem is.

Google Cloud images for Debian, Ubuntu, etc. have the ability to automatically resize the root file system on startup. If you resize the disk while the system is running, the next time the system is rebooted the partition and file system will be resized.
You can also resize the root file system while the system is running without rebooting.
Replace INSTANCE_NAME and ZONE in the following commands. The second command assumes that the file system is ext4; check what your system actually uses.
Resize the disk:
gcloud compute disks resize INSTANCE_NAME --zone ZONE --size 30GB --quiet
Resize the partition and file system:
gcloud compute ssh INSTANCE_NAME --zone ZONE --command "sudo expand-root.sh /dev/sda 1 ext4"
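For reference, the manual equivalent on most setups is growpart followed by resize2fs; this is only a sketch, assuming an MBR-partitioned disk with an ext4 root filesystem on /dev/sda1 (verify with lsblk and df -T first). If growpart keeps failing as in the question, the reboot-based automatic resize described above is the fallback.
# grow partition 1 to fill the enlarged disk, then grow the ext4 filesystem to fill the partition
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1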
Debian 9 – Resize Root File System

Related

How to expand new EBS storage?

I have added a new 2TB EBS volume to my database instance.
I would like to merge in the 2TB volume so that I can write a big dataset to the database, but growpart and resize2fs don't seem to work.
I get this error:
ubuntu:~$ sudo growpart /dev/nvme0n1 0
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1
sfdisk: /dev/nvme0n1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1
Any suggestions on how to approach this?
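One thing worth noting as an observation rather than a confirmed fix: growpart grows an existing partition, but a brand-new EBS volume has no partition table at all, which matches the "does not contain a recognized partition table" error. A minimal sketch for a blank volume is below; the device name /dev/nvme1n1 and the mount point /data are assumptions, so check lsblk for the 2TB device first, and note that mkfs destroys any data already on it.
# put a filesystem directly on the blank volume, then mount it
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data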

How to make OpenWrt run on AWS EC2?

I have tried two methods to upload an OpenWrt x86_64 image to an AWS AMI and run it on EC2, but both failed.
The image I built runs fine on VirtualBox and VMware.
The first method - VM Import/Export.
I followed the instructions at https://amazonaws-china.com/cn/ec2/vm-import/, but the VM Import tool failed, reporting "Not found initrd in Grub" at the end.
OpenWrt doesn't use an initrd at boot. This is the default boot entry in grub.cfg:
menuentry "OpenWrt" {
linux /boot/vmlinuz root=PARTUUID=fbdad417-02 rootfstype=ext4 rootwait console=tty0 console=ttyS0,115200n8 noinitrd
}
The second method - ec2-bundle-image/ec2-upgrade-image
I tried this way; it can upload the image files and metadata files to S3, and I could create a new AMI and launch an EC2 instance. But the EC2 instance did not boot correctly; it stops at the grubdom> prompt.
I followed the instructions at https://forum.archive.openwrt.org/viewtopic.php?id=41588. They seem a little old; I didn't find the AKI they mention, so I used an alternative one (aki-7077ab11 pv-grub-hd0_1.05-x86_64.gz).
Both the combined image (the default OpenWrt build) and a custom image (the release rootfs.tar.gz with the kernel and GRUB config copied into it) failed. Here is the EC2 instance system log:
Xen Minimal OS!
start_info: 0x10d4000(VA)
nr_pages: 0xe504a
shared_inf: 0xeeb28000(MA)
pt_base: 0x10d7000(VA)
nr_pt_frames: 0xd
mfn_list: 0x9ab000(VA)
mod_start: 0x0(VA)
mod_len: 0
flags: 0x300
cmd_line: root=/dev/sda1 ro console=hvc0 4
stack: 0x96a100-0x98a100
MM: Init
_text: 0x0(VA)
_etext: 0x7b824(VA)
_erodata: 0x97000(VA)
_edata: 0x9cce0(VA)
stack start: 0x96a100(VA)
_end: 0x9aa700(VA)
start_pfn: 10e7
max_pfn: e504a
Mapping memory range 0x1400000 - 0xe504a000
setting 0x0-0x97000 readonly
skipped 0x1000
MM: Initialise page allocator for 1809000(1809000)-e504a000(e504a000)
MM: done
Demand map pfns at e504b000-20e504b000.
Heap resides at 20e504c000-40e504c000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0xe504b000.
Initialising scheduler
Thread "Idle": pointer: 0x20e504c050, stack: 0x1f10000
Thread "xenstore": pointer: 0x20e504c800, stack: 0x1f20000
xenbus initialised on irq 3 mfn 0xfeffc
Thread "shutdown": pointer: 0x20e504cfb0, stack: 0x1f30000
Dummy main: start_info=0x98a200
Thread "main": pointer: 0x20e504d760, stack: 0x1f40000
"main" "root=/dev/sda1" "ro" "console=hvc0" "4"
vbd 2049 is hd0
******************* BLKFRONT for device/vbd/2049 **********
backend at /local/domain/0/backend/vbd/27482/2049
2097152 sectors of 512 bytes
**************************
vbd 2064 is hd1
******************* BLKFRONT for device/vbd/2064 **********
backend at /local/domain/0/backend/vbd/27482/2064
8377344 sectors of 512 bytes
**************************
GNU GRUB version 0.97 (3752232K lower / 0K upper memory)
[ Minimal BASH-like line editing is supported. For
the first word, TAB lists possible command
completions. Anywhere else TAB lists the possible
completions of a device/filename. ]
grubdom>
Any ideas? Thanks.
This is an easy task that doesn't need any complicated setup.
I used VirtualBox, but any other virtualization tool can be used (e.g. VMware or Hyper-V).
In my experience, getting OpenWrt onto AWS fails with every import method other than importing a snapshot.
1) download OpenWrt
https://downloads.openwrt.org/releases/19.07.5/targets/x86/64/
2) install OpenWrt on VirtualBox and create an OVA
https://openwrt.org/docs/guide-user/virtualization/virtualbox-vm
2a) convert the img to vdi
- example: VBoxManage convertfromraw --format VDI openwrt-x86-64-combined.img openwrt.vdi
2b) extend the vdi to 1GB
- example: VBoxManage modifymedium openwrt.vdi --resize 1024
2c) boot OpenWrt
2d) change the eth0 interface to DHCP
- example: vi /etc/config/network
2e) shut down
2f) export the VM to an OVA
3) rename the .ova to .zip
4) unzip the .zip
- unzipping gives you the vmdk file of the virtual disk
5) upload the vmdk to an AWS S3 bucket
6) add the vmimport role to your account
https://www.msp360.com/resources/blog/how-to-configure-vmimport-role/
7) import the vmdk as a snapshot (see the CLI sketch after this list)
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html
8) create a new EC2 instance
9) replace the EC2 instance volume with the imported volume
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html
10) boot it up
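For step 7, a rough sketch of the snapshot import with the AWS CLI; the bucket name, key, and task ID are placeholders, and the linked AWS documentation is the authoritative reference:
# start the import from the vmdk uploaded to S3
aws ec2 import-snapshot --description "openwrt" --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=openwrt.vmdk}"
# poll until the task completes, then note the resulting snapshot ID
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0123456789abcdef0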

Website completely down after resizing boot disk VM (Google Cloud)

I had to resize the boot disk of my Debian Linux VM from 10GB to 30GB because it was full. After doing just that and stopping/starting my instance, it has become unusable. I can't get in over SSH and I can't access my application. The last backups were from a month ago, and we will lose A LOT of work if I don't get this working.
I have read pretty much everything on the internet about resizing disks and repartitioning, but nothing seems to work.
When running df -h I see:
Filesystem Size Used Avail Use% Mounted on
overlay 36G 30G 5.8G 84% /
tmpfs 64M 0 64M 0% /dev
tmpfs 848M 0 848M 0% /sys/fs/cgroup
/dev/sda1 36G 30G 5.8G 84% /root
/dev/sdb1 4.8G 11M 4.6G 1% /home
overlayfs 1.0M 128K 896K 13% /etc/ssh/keys
tmpfs 848M 744K 847M 1% /run/metrics
shm 64M 0 64M 0% /dev/shm
overlayfs 1.0M 128K 896K 13% /etc/ssh/ssh_host_dsa_key
tmpfs 848M 0 848M 0% /run/google/devshell
When running sudo lsblk I see:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 35.9G 0 part /var/lib/docker
├─sda2 8:2 0 16M 1 part
├─sda3 8:3 0 2G 1 part
├─sda4 8:4 0 16M 0 part
├─sda5 8:5 0 2G 0 part
├─sda6 8:6 512B 0 part
├─sda7 8:7 0 512B 0 part
├─sda8 8:8 16M 0 part
├─sda9 8:9 0 512B 0 part
├─sda10 8:10 0 512B 0 part
├─sda11 8:11 8M 0 part
└─sda12 8:12 0 32M 0 part
sdb 8:16 0 5G 0 disk
└─sdb1 8:17 0 5G 0 part /home
zram0 253:0 0 768M 0 disk [SWAP]
Before increasing the disk size, I did try to add a second disk, and I even formatted and mounted it following the Google Cloud docs, then unmounted it again (so I edited the fstab and fstab.backup, etc.).
Nothing in the Google Cloud documentation about resizing disks / repartitioning worked for me. Neither did growpart, fdisk, resize2fs, nor many other Stack Overflow posts.
When trying to access the VM through SSH, I get the "Unable to connect on port 22" error described in the Google Cloud docs.
When creating a new Debian Linux instance with a new disk, it works fine.
Anybody that can get this up and running for me without losing any data gets 100+ and a LOT OF LOVE.
I have tried to replicate the scenario, but it did not cause any VM instance issues for me.
I created a VM instance with a 10 GB disk, stopped it, increased the disk size to 30 GB, and started the instance again. You mention that you can't SSH into the instance or access your application. After my test, I could still SSH into the instance, so there must be something in the procedure you followed that corrupted the VM instance or the boot disk.
However, there is a workaround to recover the files from the instance that you can't SSH into. I have tested it and it worked for me:
Go to Compute Engine page and then go to Images
Click on '[+] CREATE IMAGE'
Give that image a name and under Source select Disk
Under Source disk select the disk of the VM instance that you have resized.
Click on Save. If the VM using the disk is running, you will get an error; either stop the VM instance first and repeat the steps, or select the box Keep instance running (not recommended; I would recommend stopping it first, as the error also suggests).
After you save the newly created image, select it and click on + CREATE INSTANCE.
Give that instance a name and leave all of the settings as they are.
Under Boot disk, make sure that you see the 30 GB size you set earlier when increasing the disk, and that the name is the name of the image you created.
Click Create and try to SSH into the newly created instance.
If all your files were preserved when you resized the disk, you should be able to access the latest ones you had before the VM got corrupted.
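For reference, the same image workaround can be scripted with gcloud. This is only a sketch with placeholder instance, image, and zone names, and it assumes the boot disk kept the default name matching its instance:
# stop the broken VM, turn its boot disk into an image, then boot a new VM from that image
gcloud compute instances stop broken-vm --zone europe-west1-b
gcloud compute images create recovery-image --source-disk broken-vm --source-disk-zone europe-west1-b
gcloud compute instances create recovered-vm --image recovery-image --zone europe-west1-b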
UPDATE 2nd WORKAROUND - ATTACH THE BOOT DISK AS SECONDARY TO ANOTHER VM INSTANCE
In order to attach the disk from the corrupted VM instance to a new GCE instance, you will need to follow these steps:
Go to Compute Engine > Snapshots and click + CREATE SNAPSHOT.
Under Source disk, select the disk of the corrupted VM. Create the snapshot.
Go to Compute Engine > Disks and click + CREATE DISK.
Under Source type go to Snapshot and under Source snapshot choose your newly created snapshot from step 2. Create the disk.
Go to Compute Engine > VM instances and click + CREATE INSTANCE.
Leave ALL the setup as default. Under Firewall, enable Allow HTTP traffic and Allow HTTPS traffic.
Click on Management, security, disks, networking, sole tenancy
Click on Disks tab.
Click on + Attach existing disk and under Disk choose your newly created disk. Create the new VM instance.
SSH into the VM and run $ sudo lsblk
Check the device name of the newly attached disk and its primary partition (it will likely be /dev/sdb1)
Create a directory to mount the disk to: $ sudo mkdir -p /mnt/disks/mount
Mount the disk to the newly created directory $ sudo mount -o discard,defaults /dev/sdb1 /mnt/disks/mount
Then you should be able to access all the files on the disk. I have tested this myself and could recover the files from the old disk with this method.
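The second workaround can also be done with gcloud. Again only a sketch with placeholder names, assuming the corrupted boot disk kept the default name matching its instance:
# snapshot the broken boot disk, clone it into a fresh disk, and attach it to a rescue VM
gcloud compute disks snapshot broken-vm --zone europe-west1-b --snapshot-names rescue-snap
gcloud compute disks create rescue-disk --source-snapshot rescue-snap --zone europe-west1-b
gcloud compute instances create rescue-vm --zone europe-west1-b
gcloud compute instances attach-disk rescue-vm --disk rescue-disk --zone europe-west1-b
# then SSH into rescue-vm and mount the secondary disk as in the lsblk/mkdir/mount steps above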

AWS DOCKER dm.basesize in /etc/sysconfig/docker doesn't work

I want to change dm.basesize for my containers, so that the base size of containers is 20GB.
These are my current block devices:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
xvdg 202:96 0 8G 0 disk
I have a shell script:
#cloud-boothook
#!/bin/bash
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"' >> /etc/sysconfig/docker
I executed this script
I stopped the docker service:
[ec2-user@ip-172-31-41-55 ~]$ sudo service docker stop
Redirecting to /bin/systemctl stop docker.service
[ec2-user@ip-172-31-41-55 ~]$
I started the docker service:
[ec2-user@ip-172-31-41-55 ~]$ sudo service docker start
Redirecting to /bin/systemctl start docker.service
[ec2-user@ip-172-31-41-55 ~]$
But the container size doesn't change.
This is /etc/sysconfig/docker file
# The max number of open files for the daemon itself, and all
# running containers. The default value of 1048576 mirrors the value
# used by the systemd service unit.
DAEMON_MAXFILES=1048576
# Additional startup options for the Docker daemon, for example:
# OPTIONS="--ip-forward=true --iptables=true"
# By default we limit the number of open files per container
OPTIONS="--default-ulimit nofile=1024:4096"
# How many seconds the sysvinit script waits for the pidfile to appear
# when starting the daemon.
DAEMON_PIDFILE_TIMEOUT=10
I read in the AWS documentation that I can execute scripts on the AWS instance when I start it. I don't want to restart my AWS instance because I would lose my data.
Is there a way to update my container size without restarting the AWS instance?
I can't find in the AWS documentation how to set a script when I launch the instance.
I followed this tutorial:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
but I can't find an example of how to set a script when I launch the instance.
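For the "set a script when I launch the instance" part, the usual mechanism is EC2 user data, which is passed at launch time. This is only a sketch; the AMI ID, instance type, and script file name are placeholders:
# launch an instance with the cloud-boothook script above as its user data
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --user-data file://docker-options.sh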
UPDATED
I configured the file
/etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
When I start docker, I get
Error starting daemon: error initializing graphdriver: /dev/xdf is not available for use with devicemapper
How can I configure the parameter
dm.directlvm_device=/dev/xdf
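One observation rather than a confirmed fix: the lsblk output above lists the extra volumes as xvdf and xvdg, while daemon.json points at /dev/xdf, which does not appear there. Below is a sketch of the same configuration using the device name from lsblk; direct-lvm also expects the device to be empty and not in use:
# rewrite daemon.json with the device name reported by lsblk, then restart docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
EOF
sudo systemctl restart docker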

Error when mounting a volume on an EC2 instance

I am following an online tutorial to set up an EC2 instance for a group project. http://www.developintelligence.com/blog/2017/02/analyzing-4-million-yelp-reviews-python-aws-ec2-instance/.
The instance I used is r3.4xlarge. The tutorial says that if I choose an instance with an SSD, I need to mount it by running the following commands:
lsblk
sudo mkdir /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd
lsblk shows the following:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 300G 0 disk
However, when I run sudo mount /dev/xvdb /mnt/ssd, it gives me the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Could someone provide a solution to this error? Thanks!
Before you can mount a filesystem in Linux, the filesystem has to be created.
In this case it might be:
mkfs.ext4 /dev/xvdb
This creates an ext4 filesystem on the /dev/xvdb device.
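Putting the answer together with the tutorial's commands, a minimal sketch might look like this (note that mkfs.ext4 erases anything already on /dev/xvdb):
# create the filesystem once, then mount it and hand it to the ubuntu user
sudo mkfs.ext4 /dev/xvdb
sudo mkdir -p /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd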