I am running a Geth node on a Google Cloud Compute Engine instance and started with an HDD (standard persistent disk). It has grown to 1.5 TB now, but it is painfully slow. I want to move from the HDD to an SSD.
How can I do that?
I found a solution along these lines:
- Make a snapshot of the existing disk (HDD).
- Edit the instance and attach a new SSD created from that snapshot.
- Disconnect the old disk afterwards.
One problem I saw: if my HDD is 500 GB, for example, it will not allow an SSD smaller than 500 GB. My data is in TBs now, so this will cost a lot.
But I want to understand whether this actually works, because this is a node I want to use in production. I have already been waiting too long and cannot afford to wait any more.
You should try to use Zonal SSD persistent disks.
As stated in the documentation:
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
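For reference, here is a minimal sketch of that migration with the gcloud CLI; the disk, snapshot, instance, and zone names are placeholders, not your actual resources:

```
# Snapshot the existing standard (HDD) persistent disk.
gcloud compute disks snapshot geth-hdd \
    --snapshot-names=geth-migration-snap --zone=us-central1-a

# Create an SSD persistent disk from that snapshot. It must be at least
# as large as the snapshot's source disk.
gcloud compute disks create geth-ssd \
    --source-snapshot=geth-migration-snap \
    --type=pd-ssd --size=2TB --zone=us-central1-a

# Attach the SSD, verify the data, then detach the old disk.
gcloud compute instances attach-disk geth-node --disk=geth-ssd --zone=us-central1-a
gcloud compute instances detach-disk geth-node --disk=geth-hdd --zone=us-central1-a
```

The minimum-size behaviour you ran into is expected: a disk created from a snapshot cannot be smaller than the snapshot's source disk.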
The description of the issue is confusing, so I will try to help from my current understanding of the problem. First, you can use a snapshot of the boot disk to create a new boot disk that meets your requirements; see here. The size limit for a persistent disk is 2 TB, so I don't understand your comment about the 500 GB minimum size. If your disk is 1.5 TB, it will meet that restriction.
Anyway, I don't recommend using such a big disk as a boot disk. A better approach could be to use a smaller boot disk and expand the total capacity by attaching additional disks as needed; see this link.
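A rough sketch of that layout, assuming placeholder names and that the new disk shows up as /dev/sdb (check with lsblk first):

```
# Create and attach a separate SSD data disk for the chain data.
gcloud compute disks create geth-data --type=pd-ssd --size=2TB --zone=us-central1-a
gcloud compute instances attach-disk geth-node --disk=geth-data --zone=us-central1-a

# On the instance: format and mount the new disk.
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir -p /mnt/disks/geth-data
sudo mount /dev/sdb /mnt/disks/geth-data
```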
Related
I would like to update Samba on a 3 TB NAS. My boss suggested making a clone; however, there is no storage large enough to hold the whole thing. If a snapshot of the VM takes up less space and can still restore Samba to its previous state in case of failure, that would make it the better idea.
There's no real guide for how much space snapshots occupy. It greatly depends on the activity on the VM where the snapshot was taken. If it's an active VM (a database or something of that sort), there could be a considerable amount of data written. If the VM is not heavily used, there could be little to no data written to the backend datastore.
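One rough way to estimate it is to measure how much the VM actually writes over a representative period and multiply by how long you intend to keep the snapshot; a sketch using iostat from the sysstat package, run inside the guest:

```
# Extended per-device statistics in MB, sampled every 60 seconds.
iostat -dmx 60
```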
I have recently inherited a VMware setup with 2 ESXi hosts and an HP StoreVirtual SAN for storage.
On the SAN, there's a 2 TB volume which has been used to extend one of the VMware datastores; however, only 25% of this volume has been used for that. The remaining 75% is empty.
I now want to extend other datastores using the space on this volume, but it does not show up as an available volume when I try to increase the datastore size.
Basically my question is whether it's possible to share a SAN volume between datastores. I thought of reducing the SAN volume size but I feel it's too risky.
Before I start thinking of moving things around, I wanted to know whether what I'm trying to do is possible.
I will also say that the reason for increasing the datastore size is backups: during backups the datastore must be big enough to accommodate snapshots and the like.
Thanks in advance for any help.
No, not really. When you assign space to a datastore, it is owned by that datastore, so you would have to reduce the space on Datastore 1 to allocate it to Datastore 2.
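If it helps while planning, you can see how the existing datastores are carved out of your volumes from an ESXi shell; both commands are read-only:

```
# List the device extents backing each VMFS datastore.
esxcli storage vmfs extent list

# Show capacity and free space for each mounted datastore.
esxcli storage filesystem list
```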
I observe some behavior on EC2 instances that I believe is due to the disk cache. Basically:
I have a calculation task that needs to access a large chunk of data sequentially (~60 1 GB files). I have included the files in my Amazon image. I also use MPI to start ~30 processes that access different files simultaneously. BTW, the program is computation bound, but the disk I/O takes a decent chunk of the run time.
I noticed that when I start the instance and perform the calculation on the first try, it is extremely slow. The top command shows the processes hanging from time to time and CPU usage around 60%. However, once that run finishes, if I start another run it is much faster and the CPU is around 99%. Is that because my data was still on a network drive (EBS) and it was loaded into a local instance disk cache (an SSD drive?) automatically? I ran it on a c5n.18xlarge, but that is listed as EBS only.
Has anyone had similar experiences, or alternative explanations?
It was almost certainly disk cache, but in RAM, not a local SSD.
The c5n.18xlarge instance type has 192 GB of RAM. So, depending on what else you're doing with that RAM, it's entirely possible that your 60 GB of data files were read into the cache and never left.
For more information: https://www.tldp.org/LDP/sag/html/buffer-cache.html
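A quick way to confirm this on the instance is to watch the page cache before and after a run, or to pre-warm it by reading the files once before launching the MPI job (the data path is an example):

```
# How much RAM the kernel is currently using as page cache.
free -h

# Optional: pre-warm the cache by reading the input files once.
cat /data/input-*.dat > /dev/null

# To reproduce the slow first run, drop the page cache (needs root).
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
```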
We have a Google Compute Engine instance and our disk (hard disk) is full, so we added an additional disk. I followed all the steps, but the size has still not increased.
Please help; what might be the cause?
It shows an error message like "mnt/disks/disk-1 is not mounted completely or it is not available".
If your boot disk is full, you want to resize the boot disk. Please keep in mind that if the disk is too full (100%), the OS will not be able to update the partition table to increase the size of the partition despite the larger disk.
If this is the case, take a snapshot of the disk, create a new, larger, disk and then use that as your new boot disk.
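As a sketch, the resize path usually looks like this on a Debian/Ubuntu image with an ext4 root filesystem; the disk name, zone, device, and partition number are assumptions, so check yours with lsblk first:

```
# Grow the persistent disk itself; this can be done while the VM is running.
gcloud compute disks resize my-boot-disk --size=200GB --zone=us-central1-a

# On the instance: grow the root partition, then the filesystem.
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1
```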
I put our application on EC2 (a Windows 2003 x64 server) and attached up to 7 EBS volumes. The app is very I/O intensive to storage; typically we use DAS with NTFS mount points (usually around 32 mount points, each to 1 TB drives), so I tried to replicate that using EBS, but the I/O rates are bad, as in 22 MB/s tops. We suspect the NIC card to the EBS volumes (which are dynamic SANs, if I read correctly) is limiting the pipeline. Our app mostly streams its disk access (not random), so for us it works better when very little gets in the way of our talking to the disk controllers and handling I/O directly.
Also, when I create a volume and attach it, I see it appear in the instance (fine), then I make it into a dynamic disk pointing to my mount point and quick format it. When I do this, does all the data on the volume get wiped? Because it certainly seems so when I attach it to another AMI. I must be missing something.
I'm curious if anyone has any experience putting IO intensive apps up on the EC2 cloud and if so what's the best way to setup the volumes?
Thanks!
I've had limited experience, but I have noticed one small thing:
The initial write is generally slower than subsequent writes.
So if you're streaming a lot of data to disk, like writing logs, this will likely bite you. But if you create a big file, fill it with data, and do a lot of random-access I/O to it, it gets faster the second time you write to any specific location.
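That matches the initialization ("first write") penalty EBS volumes used to have. The usual workaround was to touch every block once before putting the volume into service: reading the whole device is enough for volumes restored from snapshots, while a brand-new empty volume needed a full write pass (which destroys any data, so only do it before formatting). A hedged sketch of the Linux approach, assuming the volume shows up as /dev/xvdf:

```
# Volume restored from a snapshot: read every block once (read-only, safe).
sudo dd if=/dev/xvdf of=/dev/null bs=1M

# Brand-new empty volume: write every block once. WARNING: destroys data,
# only run this before creating a filesystem on the volume.
# sudo dd if=/dev/zero of=/dev/xvdf bs=1M
```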