On a physical machine, I can partition a disk with the 'fdisk' command, following the steps in the link below:
http://puremonkey2010.blogspot.com/2017/01/linux-linux-hard-disk-format-command.html
But on a Google Cloud VM instance, the same steps appeared to fail:
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at the next reboot
or after you run partprobe(8) or kpartx(8)
Syncing disks.
So suppose I have a partition layout as below:
[root@johnwiki Tasks]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
└─sda1 8:1 0 20G 0 part /
How do I create a new partition, sda2, that uses the remaining 20G of sda?
Many thanks!
P.S. I tried looking through Google's documentation but found no proper example showing how to do this.
============= Solved =============
It turned out that fdisk works after all. I just needed to reboot the instance (or run partprobe, as the warning suggests) so that the new partition table takes effect; otherwise I couldn't run mkfs on /dev/sda2 (the new partition).
Reference:
- https://blog.gtwang.org/linux/linux-add-format-mount-harddisk/
The documentation at the following link shows how to resize the file system and partitions on a persistent disk attached to a Cloud VM instance:
https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_partitions
Related
I am trying to run a workflow on GCP using Nextflow. The problem is that whenever an instance is created to run a process, it has two disks attached: the boot disk (default 10GB) and an additional 'google-pipelines-worker' disk (default 500GB). When I run multiple processes in parallel, multiple VMs are created, each with its own additional 500GB disk. Is there any way to customize the 500GB default?
nextflow.config
process {
    executor = 'google-pipelines'
}

cloud {
    driver = 'google'
}

google {
    project = 'my-project'
    zone = 'europe-west2-b'
}
main.nf
#!/usr/bin/env nextflow

barcodes = Channel.from(params.analysis_cfg.barcodes.keySet())

process run_pbb {
    machineType 'n1-standard-2'
    container 'eu.gcr.io/my-project/container-1'

    output:
    file 'this.txt' into barcodes_ch

    script:
    """
    sleep 500
    """
}
The code provided is just a sample. Basically, this creates a VM instance with an additional 500GB standard persistent disk attached to it.
Nextflow added support for this in a recent edge release, so I'll leave this here.
First run export NXF_VER=19.09.0-edge
Then in the scope 'process' you can declare a disk directive like so:
process this_process {
    disk "100GB"
}
This resizes the attached persistent disk (default: 500GB). There is still no way to change the size of the boot disk (default: 10GB).
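Putting the pieces together, a minimal nextflow.config along these lines sets the work disk for every process (the 100 GB value is just an example):

```groovy
// Sketch: process-level defaults in nextflow.config
process {
    executor = 'google-pipelines'
    disk = '100 GB'   // resizes the attached worker disk; the 10GB boot disk is unaffected
}
```

Per-process values declared in the script (as in the `this_process` example above) override this default.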
I have been checking the Nextflow documentation, which states:
The compute nodes local storage is the default assigned by the Compute Engine service for the chosen machine (instance) type. Currently it is not possible to specify a custom disk size for local storage.
My question is simple:
What happens when I increase the size of a running volume of an EC2 instance?
1) Will all my data be wiped?
2) Will the space available on my instance also reflect the new size?
My instance has 8GB of storage and it is almost full. I want to increase the space so I can save more files to the instance.
I have found this option in my console.
I have found the attached EC2 volume. Will directly modifying the volume size automatically be reflected in my instance's available space after a reboot?
I know this is quite simple; I am just worried about my existing data.
Thank you for your help !
Assuming you have found the console option to modify the volume size, and that the instance is a Linux instance: the other answer omits an important point from the AWS documentation:
Modifying volume size has no practical effect until you also extend
the volume's file system to make use of the new storage capacity. For
more information, see Extending a Linux File System after Resizing the
Volume.
For ext2, ext3, and ext4 file systems, this command is resize2fs. For XFS file systems, it is xfs_growfs.
Note:
If the volume you are extending has been partitioned, you need to increase the size of the partition before you can resize the file system.
To check if your volume partition needs resizing:
Use the lsblk command to list the block devices attached to your instance; for example, it might show volumes such as /dev/xvda, /dev/xvdb, and /dev/xvdf.
If a partition occupies all of the room on the device, it does not need resizing.
However, if /dev/xvdf1 is an 8-GiB partition on a 35-GiB device and there are no other partitions on the volume, the partition must be resized in order to use the remaining space on the volume.
To extend a Linux file system:
Log in to the instance via SSH.
Use the df -h command to report the existing disk space usage on the file system.
Expand the modified partition using growpart (and note the unusual syntax of separating the device name from the partition number):
sudo growpart /dev/xvdf 1
Then use a file-system-specific command (resize2fs or xfs_growfs, as above) to resize each file system to the new volume capacity.
Finally, use df -h again to confirm the new file system size.
Note: it is recommended to take a snapshot of the EBS volume before making any changes.
Please refer to this AWS documentation.
You can just modify the volume directly and it will not affect any files; it takes around a minute or so for the larger size to apply, and you may need to restart your instance.
To be safe, you can create a snapshot of that volume, create a new volume of whatever size you want from the snapshot, and then delete the old volume.
A bit of background:
I have an ESXi 5.5 cluster with vCenter HA.
I have multiple iSCSI LUNs hosted on Ubuntu running an iSCSI target and software RAID (mdadm).
A few days ago I noticed a bunch of VMs were inaccessible.
I removed them from inventory thinking I'd add them back by browsing the datastore.
The datastore was showing inactive. The other datastores (same server) were fine.
Rescan/refresh didn't work. I removed from inventory all the VMs hosted on the problem datastore, but still wasn't able to remove the datastore itself:
"HostDatastoreSystem.RemoveDatastore" for object on vCenter Server.
On the ESXi hosts I ran /etc/init.d/storageRM stop, rescanned, then restarted storageRM. This removed the datastore from the vCenter console.
I tried removing it and adding it back via the iSCSI adapter; this was fine.
But when I try to add it as a datastore under Configuration/Storage, I get another error: unable to read the partition information for the device.
It's VMFS5 on a mirrored RAID1, 4TB.
I've logged onto the ESXi shell directly on one of the hosts and used partedUtil to investigate and try to repair it.
I get the following if I try getUsableSectors or getptbl:
Error: The primary GPT table states that the backup GPT is located beyond the end of disk. This may happen if the disk has shrunk or partition table is corrupted. Fix, by writing backup table at the end? This will also fix the last usable sector appropriately as per the new reduced size. diskPath (/dev/disks/t10.94544500000000002318F588822755821C9CFF1605288097) diskSize (7813774720) AlternateLBA (23441323007) LastUsableLBA (23441322974)
Warning: The available space to /dev/disks/t10.94544500000000002318F588822755821C9CFF1605288097 appears to have shrunk. This may happen if the disk size has reduced. The space has been reduced by (15627548288 blocks). You can fix the GPT to correct the available space or continue with the current settings ? This will also move the backup table at the end if it is not at the end already. diskSize (7813774720) AlternateLBA (23441323007) LastUsableLBA (23441322974) NewLastUsableLBA (7813774686)
Error: Can't have a partition outside the disk!
Unable to read partition table for device /vmfs/devices/disks/t10.94544500000000002318F588822755821C9CFF1605288097
Trying to fix it:
partedUtil fixGpt /vmfs/devices/disks/t10.94544500000000002318F588822755821C9CFF1605288097
FixGpt tries to fix any problems detected in GPT table.
Please ensure that you don't run this on any RDM (Raw Device Mapping) disk.
Are you sure you want to continue (Y/N): y
Error: The primary GPT table states that the backup GPT is located beyond the end of disk. This may happen if the disk has shrunk or partition table is corrupted. Fix, by writing backup table at the end? This will also fix the last usable sector appropriately as per the new reduced size. diskPath (/dev/disks/t10.94544500000000002318F588822755821C9CFF1605288097) diskSize (7813774720) AlternateLBA (23441323007) LastUsableLBA (23441322974)
Fix/Ignore/Cancel? fix
Error: Can't have a partition outside the disk!
Unable to read partition table on device /vmfs/devices/disks/t10.94544500000000002318F588822755821C9CFF1605288097
One of the other datastores is identical, with identical disks, so I tried setptbl using the size from that:
partedUtil setptbl /vmfs/devices/disks/t10.94544500000000002318F588822755821C9CFF1605288097 gpt "1 2048 7813774686 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 2048 7813774686 AA31E02A400F11DB9590000C2911D1B8 0
Error: The primary GPT table states that the backup GPT is located beyond the end of disk. This may happen if the disk has shrunk or partition table is corrupted. Fix, by writing backup table at the end? This will also fix the last usable sector appropriately as per the new reduced size. diskPath (/dev/disks/t10.94544500000000002318F588822755821C9CFF1605288097) diskSize (7813774720) AlternateLBA (23441323007) LastUsableLBA (23441322974)
Warning: The available space to /dev/disks/t10.94544500000000002318F588822755821C9CFF1605288097 appears to have shrunk. This may happen if the disk size has reduced. The space has been reduced by (15627548288 blocks). You can fix the GPT to correct the available space or continue with the current settings ? This will also move the backup table at the end if it is not at the end already. diskSize (7813774720) AlternateLBA (23441323007) LastUsableLBA (23441322974) NewLastUsableLBA (7813774686)
Error: Can't have a partition outside the disk!
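The ending sector in the setptbl call above follows directly from the reported diskSize: GPT reserves the backup header plus 32 table sectors at the end of the disk, so with 0-based addressing the last usable LBA is diskSize minus 34:

```shell
# Derive the last usable GPT sector from partedUtil's reported size.
DISK_SECTORS=7813774720                  # diskSize reported by partedUtil
LAST_USABLE=$((DISK_SECTORS - 34))       # 33 reserved end sectors, 0-based
echo "$LAST_USABLE"                      # prints 7813774686, the NewLastUsableLBA above
```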
On the iSCSI target host the LUNs show healthy; mdstat also shows a healthy RAID and disks.
Is there anything else I can try in order to repair this and recover the VMs?
Thanks for helping.
I have 20005 edit log files on the NameNode, which seems like a large number to me. Is there a way to merge them into the fsimage? I have restarted the NameNode, but it did not help.
If you do not have HA enabled for NN, then you need to have a Secondary NameNode that does this.
If you have HA enabled, then your Standby NN does this.
If you have one of those, check its logs to see what happens and why checkpointing fails. It is possible that you do not have enough RAM and need to increase the heap size of these roles, but verify that against the logs first.
If you do not have either of those beside the NN, then fix that and checkpointing will happen automatically. The relevant configs that affect checkpoint timing are:
dfs.namenode.checkpoint.period (default: 3600s)
dfs.namenode.checkpoint.txns (default: 1 million txn)
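In hdfs-site.xml form, these look like the following (the values shown are the defaults mentioned above):

```xml
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value> <!-- seconds between checkpoints -->
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value> <!-- force a checkpoint after this many uncheckpointed transactions -->
</property>
```

A checkpoint fires when either threshold is reached, whichever comes first.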
You can run the following commands as well, but this is a temporary fix:
hdfs dfsadmin -safemode enter
hdfs dfsadmin -rollEdits
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave
Note: after entering safemode, HDFS is read-only until you leave safemode.
I use Apache Brooklyn in combination with jclouds EC2 to create EC2 instances on AWS.
EC2 instance settings:
region: eu-central-1 (Frankfurt)
imageId: ami-10d1367f
Name: amzn-ami-minimal-hvm-2016.03.0.x86_64-s3
RootDeviceType: instance-store
VirtualizationType: hvm
hardwareId: d2x_large
vCPU: 4
Memory: 30,5 GB
Storage: 3x2000 GB
Every time I create an EC2 instance, the root partition has only 10GB of disk space. I found the cause in the jclouds EC2HardwareBuilder (https://github.com/jclouds/jclouds/blob/master/apis/ec2/src/main/java/org/jclouds/ec2/compute/domain/EC2HardwareBuilder.java#L731):
/**
 * @see InstanceType#D2_XLARGE
 */
public static EC2HardwareBuilder d2_xlarge() {
   return new EC2HardwareBuilder(InstanceType.D2_XLARGE).d2()
         .ram(31232)
         .processors(ImmutableList.of(new Processor(4.0, 3.5)))
         .volumes(ImmutableList.<Volume>of(
               new VolumeBuilder().type(LOCAL).size(10.0f).device("/dev/sda1").bootDevice(true).durable(false).build(),
               new VolumeBuilder().type(LOCAL).size(2000.0f).device("/dev/sdb").bootDevice(false).durable(false).build(),
               new VolumeBuilder().type(LOCAL).size(2000.0f).device("/dev/sdc").bootDevice(false).durable(false).build(),
               new VolumeBuilder().type(LOCAL).size(2000.0f).device("/dev/sdd").bootDevice(false).durable(false).build()))
         .is64Bit(true);
}
My questions are:
Is it possible to create my own class which extends EC2HardwareBuilder, so that I can change the root volume size to 2000?
How can I inject this class into Brooklyn, so that it is used instead of the old EC2HardwareBuilder class?
The EC2HardwareBuilder.d2_xlarge method just represents the defaults for that instance type. It doesn't control what is actually provisioned.
You can see this if you provision manually: under storage, it offers /dev/sdb, /dev/sdc, and /dev/sdd. If you try to edit this "storage" section, it only lets you choose a device from /dev/sd{b-m}; it doesn't let you choose /dev/sda1. When I deploy with the defaults, it actually gives me a 2G partition for /dev/xvda1:
[ec2-user@ip-172-31-5-36 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 2.0G 686M 1.3G 36% /
devtmpfs 15G 72K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
When I deploy with Brooklyn using the blueprint below (with brooklyn master), it also gives me a 2G partition for /dev/xvda1:
location:
  jclouds:aws-ec2:eu-central-1:
    hardwareId: d2.xlarge
    imageId: eu-central-1/ami-10d1367f
services:
- type: org.apache.brooklyn.entity.machine.MachineEntity
Can you confirm that you are definitely getting 10G rather than 2G? I suspect that size is dependent on the AMI.
From jclouds perspective, the problem is this information is not discoverable through the EC2 api, so was hard-coded within jclouds!
We could add a jclouds feature enhancement for changing which of /dev/sd{b-m} is used (as can be done in the AWS web-console), but that wouldn't solve your problem.
As described in http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device, the size limit for a "root device volume" of an Amazon Instance Store-Backed AMI is 10 GiB.
To have a bigger root partition, you could:
use an EBS-Backed AMI
try to resize it on-the-fly (but that would actually be a move to a different partition, rather than just resizing to the right; for a discussion of this, see https://askubuntu.com/a/728141, which links to https://unix.stackexchange.com/a/227318)
work around it: what in / needs all the space? can you just mount /dev/sdb to an appropriate directory?
I personally favour the "work around it" approach (assuming EBS-Backed is not right for you).
For future reference: if you did want to override the behaviour of methods like EC2HardwareBuilder.d2_xlarge, then unfortunately you can't, because these are static methods. You'd have to build a jar with your own version of that class (compiled against the right version of jclouds, of course) and put it in $BROOKLYN_HOME/lib/patch/.
Usually, jclouds is extremely good at allowing things to be overridden and customized by configuring things in Guice (to change the dependencies that are injected), but unfortunately not here.
If you did want to use guice, you might be able to bind a different implementation of the entire EC2HardwareSupplier (to thus avoid the static calls), but we'd need to be very careful that nothing else was making calls to these static methods as well.