We have Google Compute Engine, and our disk (hard disk) is full, so we added an additional disk. I followed all the steps, but the size still has not increased.
Please help, what might be the cause?
It's showing an error message like "mnt/disks/disk-1 is not mounted completely or it is not available".
If your boot disk is full, you want to resize the boot disk. Please keep in mind that if the disk is too full (100%), the OS will not be able to update the partition table to increase the size of the partition despite the larger disk.
If that is the case, take a snapshot of the disk, create a new, larger disk, and then use that as your new boot disk.
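For the normal case (disk not completely full), the resize is two steps: grow the persistent disk, then grow the partition and filesystem inside the VM. Here is a rough sketch of that flow as a Python script driving the gcloud CLI; the disk name, zone, new size, and device path are placeholders, and the on-VM commands assume a Debian/Ubuntu-style image with growpart available.

```python
# Sketch: in-place boot disk resize on GCE. Names and paths below are placeholders.
import subprocess

DISK = "my-boot-disk"     # placeholder: your boot disk name
ZONE = "us-central1-a"    # placeholder: your zone
NEW_SIZE = "100GB"        # persistent disks can only grow, never shrink

# 1. Grow the persistent disk itself (run from a workstation with gcloud installed).
subprocess.run(
    ["gcloud", "compute", "disks", "resize", DISK,
     "--size", NEW_SIZE, "--zone", ZONE, "--quiet"],
    check=True,
)

# 2. Inside the VM, grow the root partition and filesystem to use the new space.
#    These must run on the instance (e.g. over SSH), not on your workstation.
for cmd in (["sudo", "growpart", "/dev/sda", "1"],
            ["sudo", "resize2fs", "/dev/sda1"]):
    print("run on the instance:", " ".join(cmd))
```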
Related
I would like to update Samba on a 3 TB NAS. My boss suggested making a clone, but there is no storage available that can hold the whole thing. If a snapshot of the VM takes up less space and can be used to restore Samba to its previous state in case of failure, that would make it a better idea.
There's no real guide for how much space snapshots occupy. It greatly depends on the activity on the VM where the snapshot is taken. If it's an active VM (a database or something of the like), a considerable amount of data could be written. If the VM is not used much, there could be little to no data written to the backend datastore.
I am running a GETH node on a Google Cloud Compute Engine instance and started with an HDD. It has grown to 1.5 TB now, but it is damn slow. I want to move from HDD to SSD now.
How can I do that?
I found a solution like this:
- Make a snapshot of the existing disk (HDD).
- Edit the instance and attach a new SSD created from that snapshot.
- I can disconnect the old disk afterwards.
One problem I saw here: if my HDD is 500 GB, it does not allow an SSD smaller than 500 GB. My data is in TBs now, so it will cost a lot.
But I want to understand whether it actually works, because this is a node I want to use for production. I have already been waiting too long and cannot afford to wait more.
You should try to use zonal SSD persistent disks.
As the documentation states:
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
The description of the issue is confusing, so I will try to help based on my current understanding of the problem. First, you can use a boot disk snapshot to create a new boot disk that meets your requirements, see here. The size limit for a persistent disk is 2 TB, so I don't understand your comment about the 500 GB minimum size. If your disk is 1.5 TB, then it meets the restriction.
Anyway, I don't recommend having such a big disk as a boot disk. A better approach could be to use a smaller boot disk and expand the total capacity by attaching additional disks as needed; see this link.
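Either way, the snapshot-based migration outlined in the question comes down to a few gcloud calls; the new disk just needs the pd-ssd type. A rough sketch follows, with the instance and disk names as placeholders; after attaching, you still need to mount the new disk (and update /etc/fstab) before detaching the old one.

```python
# Sketch: migrate a data disk from pd-standard (HDD) to pd-ssd via a snapshot.
# All names below are placeholders for your instance and disks.
import subprocess

ZONE = "us-central1-a"
INSTANCE = "geth-node"
HDD_DISK = "geth-data-hdd"     # existing pd-standard disk
SSD_DISK = "geth-data-ssd"
SNAPSHOT = "geth-data-snap"

def gcloud(*args):
    subprocess.run(["gcloud", "compute", *args, "--zone", ZONE], check=True)

# 1. Snapshot the existing HDD.
gcloud("disks", "snapshot", HDD_DISK, "--snapshot-names", SNAPSHOT)
# 2. Restore the snapshot onto an SSD persistent disk. Omitting --size keeps the
#    source disk's size; a restored disk cannot be smaller than the original.
gcloud("disks", "create", SSD_DISK, "--source-snapshot", SNAPSHOT,
       "--type", "pd-ssd")
# 3. Attach the SSD to the instance; mount it there and verify the data.
gcloud("instances", "attach-disk", INSTANCE, "--disk", SSD_DISK)
# 4. Only after verifying, detach the old HDD (left as a manual step here).
print(f"when ready: gcloud compute instances detach-disk {INSTANCE} "
      f"--disk {HDD_DISK} --zone {ZONE}")
```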
I observe some behavior on EC2 instances that I believe is due to the disk cache. Basically:
I have a calculation task that needs to access a large chunk of data sequentially (~60 1 GB files). I have included the files in my Amazon image. I also use MPI to start ~30 processes that access different files simultaneously. BTW, the program is computation bound, but the disk IO takes a decent chunk of the run time.
I noticed that when I start the instance and perform the calculation on the first try, it is extremely slow. The top command shows the processes hanging from time to time and CPU usage around 60%. However, once that run finishes, if I start another run, it is much faster and the CPU is around 99%. Is that because my data was still on a network drive (EBS) and was loaded into the local instance disk cache (an SSD drive?) automatically? I ran it on a c5n.18xlarge, but it is listed as EBS only.
Has anyone had similar experiences? Or alternative explanations?
It was almost certainly disk cache, but in RAM, not a local SSD.
The c5n.18xlarge instance type has 192 GB of RAM. So, depending on what else you're doing with that RAM, it's entirely possible that your 60 GB of data files were read into the cache and never left.
For more information: https://www.tldp.org/LDP/sag/html/buffer-cache.html
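A quick way to confirm this on the instance is to read one of the data files twice and compare timings; the second pass should come mostly from the page cache as long as the files fit in free RAM. A minimal sketch (the file path is a placeholder for one of your ~1 GB inputs):

```python
# Sketch: cold vs. warm read of the same file to observe the Linux page cache.
import time

FILE = "/data/chunk-001.bin"   # placeholder: one of your input files

def read_through(path, block=1 << 20):
    """Read the whole file in 1 MB chunks, return (bytes read, seconds taken)."""
    start = time.time()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total, time.time() - start

for label in ("cold", "warm"):
    nbytes, secs = read_through(FILE)
    print(f"{label}: {nbytes / 1e9:.2f} GB in {secs:.2f}s "
          f"({nbytes / 1e6 / max(secs, 1e-9):.0f} MB/s)")
```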
I recently updated a free-licensed VMware ESXi host to 6.0 (I do not have access to vCenter). The host has 6 datastores available, the first two of which reside on SSDs and are fairly small (I typically use those for my VM OS; any VMs that need more storage can use one of the mechanical datastores). The upgrade went fine and all my machines started.
I decided to shut down one of the machines and expand its OS storage. My datastore1 has a bit more than 70 GB free, so I extended the VM's guest disk size from 160 GB to 229 GB, figuring I'd still have some wiggle room there. I guess that was my first mistake. I was unaware that, while you can easily increase a virtual disk's size, decreasing it is not possible. Now my VM won't start!
Failed to start the virtual machine.
Failed to power on VM.
Could not power on virtual machine: msg.vmk.status.VMK_NO_SPACE.
Current swap file size is 0 KB.
Failed to extend swap file from 0 KB to 16777216 KB.
Now I've tried multiple things, from removing snapshots etc. to try to free up some space, to migrating the virtual disk to another datastore and then using vCenter Converter to move it back onto a smaller disk (that failed horribly; it took several hours, and when all was said and done, the VM could only PXE boot and said no operating system was found).
I still have a few copies of the virtual disk, but they're all 230 GB virtual disks. If I change the VM settings to run the virtual disk off one of the larger mechanical datastores, it does still work fine (the OS boots etc.), but I really want to get this thing back down to 160 GB and moved back to my SSD datastores.
Note that I have NOT used the extra space provisioned to this VM. fdisk still shows a 160 GB drive and partitions, so I have not even touched the extra provisioned space yet. I am not trying to reduce the partition; I want to reduce the space provisioned to this VM, and ultimately the VMDK file, so I can move it back to my SSD datastore and fire it up again.
I have searched all over, but I feel I may be using the wrong terminology, as many of my results seem to end in "it's not possible without data loss". I feel that since I haven't used the extra provisioned space, it simply has to be possible. Maybe I'm wrong. Can anyone help point me in the right direction?
I don't know of a documented way to shrink a disk without VMware Converter, but VMware Converter should work. Have you verified you gave all the correct arguments (most notably the new size)? You can try mounting the resulting VMDK on a different VM (as a data disk) to see if there's anything wrong with it.
Have you considered making the disk thin-provisioned? See this VMware KB for how to achieve this without vCenter (you'll need to SSH into the ESXi host). Since the last 69 GB of the disk are zeros, it can help you reclaim that space.
If all else fails and you're feeling adventurous, you might be able to manually edit the VMDK file and prune the last part of it.
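For the thin-provisioning route on a standalone host, the usual tool is vmkfstools, run over SSH on the ESXi box while the VM is powered off. A rough sketch follows; the host name and datastore paths are placeholders, and I'd keep the original VMDK around until the clone boots cleanly.

```python
# Sketch: clone a VMDK to a thin-provisioned copy with vmkfstools over SSH.
# Host and paths are placeholders; run with the VM powered off.
import subprocess

ESXI = "root@esxi-host"                              # placeholder ESXi host
SRC = "/vmfs/volumes/datastore3/myvm/myvm.vmdk"      # current (larger) disk
DST = "/vmfs/volumes/datastore1/myvm/myvm-thin.vmdk" # thin copy on the SSD store

# vmkfstools -i <source> <dest> -d thin clones the disk as thin-provisioned,
# so the unused (zeroed) tail of the 229 GB disk takes no real space.
subprocess.run(["ssh", ESXI, "vmkfstools", "-i", SRC, DST, "-d", "thin"],
               check=True)
```

You would then point the VM at the new VMDK in its settings and, once it boots cleanly, delete the old thick copy.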
I am looking at porting an application to the cloud; more specifically, I am looking at Amazon EC2 or Google GCE.
My app heavily uses Linux's mmap to memory-map large read-only files, and I would like to understand how mmap would actually work when a file is on an EBS volume.
I would specifically like to know what happens when I call mmap, as EBS appears to be a black box. Also, are the benefits negated?
I can speak for GCE Persistent Disks. It behaves pretty much in the same way a physical disk would. At a high level, pages are faulted in from disk as mapped memory is accessed. Depending on your access pattern these pages might be loaded one by one, or in a larger quantity when readahead kicks in. As the file system cache fills up, old pages are discarded to give space to new pages, writing out dirty pages if needed.
One thing to keep in mind with Persistent Disk is that performance is proportional to disk size. So you'd need to estimate your throughput and IOPS requirements to ensure you get a disk with enough performance for your application. You can find more details here: Persistent disk performance.
Is there any aspect of mmap that you're worried about? I would recommend writing a small app that simulates your workload and testing it before deciding to migrate your application.
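Something along these lines is enough to simulate the sequential, read-only mmap access pattern and see the page-fault cost on a given Persistent Disk; the file path is a placeholder for one of your mapped inputs.

```python
# Sketch: mmap a large read-only file and touch every page sequentially,
# timing how long the faults take. PATH is a placeholder.
import mmap
import time

PATH = "/mnt/disks/data/input.bin"   # placeholder file on the persistent disk

with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as mm:
    start = time.time()
    checksum = 0
    # Touch one byte per page so each page is actually faulted in from disk.
    for offset in range(0, len(mm), mmap.PAGESIZE):
        checksum += mm[offset]
    elapsed = time.time() - start
    print(f"faulted {len(mm) / 1e6:.0f} MB in {elapsed:.1f}s (checksum {checksum})")
```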
~ Fabricio.