I have recently inherited a VMware setup with 2 ESXi hosts and an HP StoreVirtual SAN for storage.
On the SAN there's a 2 TB volume which has been used to extend one of the VMware datastores; however, only 25% of this volume has actually been used for that. The remaining 75% is empty.
I now want to extend other datastores using the free space on this volume, but it does not show up as an available volume when I try to increase a datastore's size.
Basically my question is whether it's possible to share a SAN volume between datastores. I thought about reducing the SAN volume's size, but that feels too risky.
Before I start thinking about moving things around, I wanted to know whether what I'm trying to do is possible.
I will also say that the reason for increasing the datastore size is backups: during backups, the datastore must be big enough to accommodate snapshots and so on.
Thanks in advance for any help.
No, not really. When you assign space to a datastore, that space is owned by the datastore, so you have to reduce the space on Datastore 1 in order to allocate it to Datastore 2.
I would like to update Samba on a 3 TB NAS VM. My boss suggested making a clone first, but there is no storage that can hold it in full. If a VM snapshot takes up less space and can be used, in case of failure, to restore Samba to the state it was in, that would make it the better idea. Roughly how much space does a snapshot take?
There's no real guide to how much space snapshots occupy. It depends greatly on the activity of the VM while the snapshot exists. If it's an active VM (a database or something of the like), a considerable amount of data may be written. If the VM is not heavily used, there may be little to no data written to the backing datastore.
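As a rough rule of thumb (illustrative numbers only, not a vendor guideline): the snapshot delta grows with whatever is written while the snapshot exists, so you can ballpark it from an estimated write rate and how long you plan to keep the snapshot:

# Rough, illustrative estimate of snapshot growth. The write rate below is an
# assumed figure, not a measurement; substitute what you actually observe on the VM.
write_rate_mb_per_s <- 2      # assumed average write rate of the VM (MB/s)
snapshot_lifetime_h <- 24     # how long the snapshot is kept (hours)
delta_gb <- write_rate_mb_per_s * snapshot_lifetime_h * 3600 / 1024
delta_gb                      # ~169 GB of changed blocks under these assumptions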
I am running a Geth node on a Google Cloud Compute Engine instance and started with an HDD (standard persistent disk). It has grown to 1.5 TB now, but it is painfully slow. I want to move from HDD to SSD now.
How can I do that?
I found a solution along these lines:
- Make a snapshot of the existing disk (HDD).
- Edit the instance and attach a new SSD created from that snapshot.
- I can detach the old disk afterwards (a rough sketch of these steps is below).
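Something like this is what I have in mind. The disk, snapshot, instance and zone names below are placeholders, and these are just the standard gcloud disk commands, wrapped in R's system2() here; running the same gcloud commands directly in a shell is equivalent:

# Placeholder names -- substitute your own disk, snapshot, instance and zone.
zone <- "--zone=us-central1-a"
# 1. Snapshot the existing HDD (standard persistent disk)
system2("gcloud", c("compute", "disks", "snapshot", "geth-hdd-disk",
                    "--snapshot-names=geth-hdd-snap", zone))
# 2. Create a new SSD persistent disk from that snapshot
#    (the size is a placeholder; it cannot be smaller than the snapshot's source disk)
system2("gcloud", c("compute", "disks", "create", "geth-ssd-disk",
                    "--source-snapshot=geth-hdd-snap", "--type=pd-ssd",
                    "--size=2TB", zone))
# 3. Attach the SSD to the instance, then detach the old HDD once everything checks out
system2("gcloud", c("compute", "instances", "attach-disk", "geth-node",
                    "--disk=geth-ssd-disk", zone))
system2("gcloud", c("compute", "instances", "detach-disk", "geth-node",
                    "--disk=geth-hdd-disk", zone))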
One problem I saw here, for example: if my HDD is 500 GB, it does not allow an SSD smaller than 500 GB. My data is in the TBs now, so this will cost a lot.
But I want to understand whether this actually works, because this is a node I want to use for production. I have already been waiting too long and cannot afford to wait more.
One problem I saw here: if my HDD is 500 GB, it does not allow an SSD smaller than 500 GB. My data is in the TBs now, so this will cost a lot.
You should try to use Zonal SSD persistent disks.
As stated in the documentation:
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
The description of the issue is confusing, so I will try to help based on my current understanding of the problem. First, you can use a boot disk snapshot to create a new boot disk that meets your requirements; see here. The size limit for a persistent disk is 2 TB, so I don't understand your comment about the 500 GB minimum size. If your disk has 1.5 TB, it will meet that restriction.
Anyway, I don't recommend using such a big disk as a boot disk. A better approach could be to use a smaller boot disk and expand the total capacity by attaching additional disks as needed; see this link.
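For the additional-disk approach, creating and attaching a blank SSD data disk is just two more calls (same placeholder names and system2() wrapper as the sketch above; formatting and mounting then happen inside the VM):

# Create a blank SSD data disk and attach it to the instance (placeholder names).
system2("gcloud", c("compute", "disks", "create", "geth-data-ssd",
                    "--type=pd-ssd", "--size=2TB", "--zone=us-central1-a"))
system2("gcloud", c("compute", "instances", "attach-disk", "geth-node",
                    "--disk=geth-data-ssd", "--zone=us-central1-a"))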
I have tried many times to install RStudio Server on an AWS instance using terminal commands, without any luck. I can install it using http://www.louisaslett.com/RStudio_AMI/
and following a YouTube video, but I cannot get the Dropbox sync to stop syncing. I have tried installing a fresh version using the terminal and PuTTY, and other methods, without much success.
What I wanted AWS for was the bandwidth / computing time.
I basically wanted to run an R script to download a bunch of documents, which could take 2 weeks. I had hoped to save these to a large Dropbox account I have access to, but unfortunately
library("RStudioAMI")
linkDropbox()
excludeSyncDropbox("*")
doesn't seem to work for me: the whole Dropbox folder gets synced onto my AWS instance and I run out of space.
So basically... I think I will forget Dropbox and just use AWS storage.
I want to download approx. 500 GB, or perhaps 1 TB, worth of data (running an R script to download documents and save them). It just connects to a website and downloads a document, so no ML or high computing power is needed, just a consistent connection. Once the documents are fully downloaded, I would like to transfer them to an external hard drive I have for further analysis.
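The script itself is nothing fancy; a simplified sketch of what it does (the URL list and output folder here are placeholders):

# Simplified sketch of the download script -- the URLs and output folder are placeholders.
urls <- c("https://example.com/doc1.pdf",
          "https://example.com/doc2.pdf")
dir.create("documents", showWarnings = FALSE)
for (u in urls) {
  dest <- file.path("documents", basename(u))
  if (!file.exists(dest)) {
    download.file(u, destfile = dest, mode = "wb")  # "wb" so binary files aren't mangled
  }
}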
So my question is: approximately how much do you think this may cost? I don't care about paying $20-30; I just don't want to go in without experience/knowledge and rack up hundreds of dollars.
Additionally: what other instances/servers do you suggest I pay for? I feel like I don't need that much power, just consistency.
Here is another SO question I opened:
Amazon AWS Dropbox link error: "No directories are being ignored."
There will be three main costs for your scenario:
Amazon EC2, which is charged hourly. You do not need much processing power, so a t3.small would probably be adequate if you're not doing any big computations. It's only about 2c/hour, which is $7 for 2 weeks.
An Amazon EBS disk volume attached to your Amazon EC2 instance for storing the data. A General Purpose volume is 10c/GB/month. So, 1TB for 2 weeks would be $50. If you configure it to use "Cold HDD (sc1)", then it's a quarter of that price.
Data Transfer for when you download from AWS. If you are using AWS in the USA, it is 9c/GB. So, 1TB = $90. This would be your major cost.
There might be some other minor costs, but they won't be significant compared to the above.
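For reference, the arithmetic behind those figures (2 weeks is roughly 336 hours; the rates are the approximate US prices quoted above):

# Back-of-envelope cost estimate using the approximate rates above.
hours    <- 14 * 24            # two weeks ~= 336 hours
ec2      <- hours * 0.02       # t3.small at ~2c/hour                    -> ~$6.72
ebs_gp2  <- 1000 * 0.10 / 2    # 1 TB gp2 at 10c/GB-month, ~half a month -> ~$50
transfer <- 1000 * 0.09        # 1 TB data transfer out at 9c/GB         -> ~$90
ec2 + ebs_gp2 + transfer       # total: roughly $147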
Or, given that your basic goal is to collect and download data, you could just do it on a computer at home.
If you are not strictly limited to EC2 (which I think you are not, considering the requirements you stated and that the AMI approach failed for you), AWS Lightsail would be a much better solution.
It has a bundled data transfer allowance and acceptable performance.
Here is one 1-month plan:
512 MB Memory
1 Core Processor
20 GB SSD Disk
1 TB Transfer (data in costs nothing, only data out, e.g. from Lightsail to your local PC)
Additional SSD - $10 for 1 TB
The average network performance I see for that instance is about 30 megabytes per second. You can just shut everything down and be billed only for the hours you used during the month.
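To put that in perspective (assuming the ~30 MB/s figure holds and ignoring any per-site rate limits):

# Rough transfer-time estimate, assuming ~30 MB/s sustained throughput.
total_gb  <- 1000                        # ~1 TB of documents
rate_mb_s <- 30                          # observed average network speed
total_gb * 1024 / rate_mb_s / 3600       # ~9.5 hours of pure transfer time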
We have been using MariaDB on RDS and we noticed that the swap space usage is getting increasingly high without being recycled. The freeable memory, however, seems to be fine. Please check the attached files.
Instance type : db.t2.micro
Freeable memory : 125 MB
Swap space : increases by ~5 MB every 24 h
IOPS : disabled
Storage : 10 GB (SSD)
Soon RDS will eat up all the swap space, which will cause lots of issues for the app.
Does anyone have similar issues?
What is the maximum swap space? (didn't find anything in the docs)
Please help!
Does anyone have similar issues?
I had similar issues on different instance types. The swapping trend persists even if you switch to a larger instance type with more memory.
You can find an explanation from AWS here:
Amazon RDS DB instances need to have pages in the RAM only when the pages are being accessed currently, for example, when executing queries. Other pages that are brought into the RAM by previously executed queries can be flushed to swap space if they haven't been used recently. It's a best practice to let the operating system (OS) swap older pages instead of forcing the OS to keep pages in memory. This helps make sure that there is enough free RAM available for upcoming queries.
And the resolution:
Check both the FreeableMemory and the SwapUsage Amazon CloudWatch metrics to understand the overall memory usage pattern of your DB instance. Check these metrics for a decrease in the FreeableMemory metric that occurs at the same time as an increase in the SwapUsage metric. This can indicate that there is pressure on the memory of the DB instance.
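If you want to check that correlation yourself, you can pull both metrics for your instance. A minimal sketch, driving the AWS CLI from R via system2() (the DB instance identifier is a placeholder; running the same command directly in a shell is equivalent):

# Pull SwapUsage for the DB instance over the last 24 hours via the AWS CLI.
# Repeat with --metric-name FreeableMemory to compare the two curves.
system2("aws", c("cloudwatch", "get-metric-statistics",
                 "--namespace", "AWS/RDS",
                 "--metric-name", "SwapUsage",
                 "--dimensions", "Name=DBInstanceIdentifier,Value=mydb-instance",
                 "--start-time", format(Sys.time() - 86400, "%Y-%m-%dT%H:%M:%S", tz = "UTC"),
                 "--end-time",   format(Sys.time(), "%Y-%m-%dT%H:%M:%S", tz = "UTC"),
                 "--period", "3600",
                 "--statistics", "Average"))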
What is the maximum swap space?
By enabling Enhanced Monitoring you should be able to see OS-level metrics, e.g. the amount of swap memory free, in kilobytes.
See details here
Enabling Enhanced Monitoring in RDS has made things clearer.
It turns out that what we needed to watch was Committed Swap rather than Swap Usage, and we were then able to see how much Free Swap we had.
I now also believe that MySQL was pushing things into swap simply because there was plenty of space there, even though it wasn't really in urgent need of memory.
I put our application on EC2 (a Windows 2003 x64 server) and attached up to 7 EBS volumes. The app is very I/O intensive to storage. Typically we use DAS with NTFS mount points (usually around 32 mount points, each to a 1 TB drive), so I tried to replicate that using EBS, but the I/O rates are bad, as in 22 MB/s tops. We suspect the NIC between the instance and EBS (which is dynamic SAN storage, if I read correctly) is limiting the pipeline. Our app mostly uses streaming disk access (not random), so it works best for us when very little gets in the way of talking to the disk controllers and handling I/O directly.
Also, when I create a volume and attach it, I see it appear in the instance (fine); I then make it into a dynamic disk pointing to my mount point and quick format it. When I do this, does all the data on the volume get wiped? It certainly seems so when I attach it to another AMI. I must be missing something.
I'm curious whether anyone has experience putting I/O-intensive apps on EC2 and, if so, what the best way is to set up the volumes.
Thanks!
I've had limited experience, but I have noticed one small thing:
The initial write is generally slower than subsequent writes.
So if you're streaming a lot of data to disk, like writing logs, this will likely bite you. But if you create a big file, fill it with data, and then do a lot of random-access I/O against it, performance improves the second time you write to any specific location.