I have an NFS datastore which I use for deploying VMs on my ESX hosts.
I have been creating and deleting VMs on this storage for a couple of years now.
But lately I noticed the free space is pretty low. Upon investigating, I found files from older VMs still sitting there (VMs which I deleted more than a year ago).
Any ideas why the files are not removed from the NFS datastore?
Or how can I find out which VM files are not being used by any ESX host, so that I can delete them manually?
There are two main ways to remove a VM from your vCenter inventory: UnregisterVM() and Destroy_Task(). UnregisterVM() only removes the VM from the inventory and leaves its files on the datastore, while Destroy_Task() removes it from the inventory and deletes its files as well.
Based on your discovery, I'm assuming you've been unregistering your VMs from inventory.
If you're OK with PowerShell, there's a pretty straightforward way to remedy this using a community resource: http://www.lucd.info/2016/09/13/orphaned-files-revisited/
LucD's script mainly uses the underlying API methods, so even if you prefer another language, the majority of the discovery work is already mapped out for you.
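If you would rather go at the API directly, here is a minimal pyVmomi sketch that just illustrates the difference between the two calls (it does not do the orphan discovery that LucD's script does). The vCenter hostname, credentials and VM name are placeholders, and the unverified-SSL context is for lab use only:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; skip certificate checks for a lab vCenter only.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every VM in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == "old-test-vm":      # placeholder VM name
            # vm.UnregisterVM()   # removes the VM from inventory; its files stay on the NFS datastore
            vm.Destroy_Task()     # removes the VM AND deletes its files from the datastore
    view.DestroyView()
    Disconnect(si)

Note that Destroy_Task() will fail against a powered-on VM, so power it off first.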
I'm presently looking into GCP's Deployment Manager to deploy new projects, VMs and Cloud Storage buckets.
We need a web front end that authenticated users can connect to in order to deploy the required infrastructure, though I'm not sure what DevOps tools are recommended to work with this system. We have an instance of Jenkins and Octopus Deploy, though I see on Google's Configuration Management page (https://cloud.google.com/solutions/configuration-management) they suggest other tools like Ansible, Chef, Puppet and Saltstack.
I'm supposing that through one of these I can update something simple like a name variable in the config.yaml file and deploy a project.
Could I also ensure a chosen name for a project, VM or Cloud Storage bucket fits with a specific naming convention with one of these systems?
Which system do others use and why?
I use Deployment Manager, because all third-party tools rely on the presence of GCP APIs, and on trusting that those APIs are in line with the actual functionality of the underlying GCP tech.
GCP is decidedly behind the curve on API development, which means that even if you wanted to use TF or whatever, at some point you're going to be stuck inside the SDK, anyway. So that's why I went with Deployment Manager, as much as I wanted to have my whole infra/app deployment use other tools that I was more comfortable with.
To specifically answer your question about validating naming schema, what you would probably want to do is write a wrapper script that uses the gcloud deployment-manager subcommand. Do your validation in the wrapper script, then run the gcloud deployment-manager stuff.
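As a concrete illustration, here is a hedged Python sketch of such a wrapper. The naming regex, deployment name and config path are assumptions made up for the example, not anything GCP enforces:

    import re
    import subprocess
    import sys

    # Example convention: <team>-<app>-<env>, e.g. "infra-webapp-dev" (an assumption).
    NAME_RE = re.compile(r"^[a-z]+-[a-z0-9]+-(dev|test|prod)$")

    def deploy(name, config="config.yaml"):
        if not NAME_RE.match(name):
            sys.exit("Deployment name %r does not match the naming convention" % name)
        # Hand off to Deployment Manager only after validation passes.
        subprocess.run(
            ["gcloud", "deployment-manager", "deployments", "create", name,
             "--config", config],
            check=True,
        )

    if __name__ == "__main__":
        deploy(sys.argv[1])

The same validate-then-shell-out pattern works for bucket and VM names; only the regex changes.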
Word of warning about Deployment Manager: it makes troubleshooting very difficult. Very often it will obscure the error that can help you actually establish the root cause of a problem. I can't tell you how many times somebody in my office has shouted "UGGH! Shut UP with your Error 400!" I hope that Google takes note from my pointed survey feedback and refactors DM to pass the original error through.
Anyway, hope this helps. GCP has come a long way, but they've still got work to do.
I'm new to GCP and just experimenting. Tried to install something in one of my projects and got a disk full exception. Rather than buy more I thought I would just do some cleanup.
I have now deleted ALL instances, buckets and projects. I know projects take a while to be deleted, so maybe one of them is still consuming a lot of disk. Question:
How can I remove/delete whatever is consuming 99%+ of /dev/sdb1 (mounted on /home)? Or...
Can I increase the size of that resource?
It seems that you are using Cloud Shell. Cloud Shell comes with only 5 GB of storage for your home directory, and there is no way to increase it.
One possible solution would be to set up the gcloud SDK on your own machine or on a GCE (Google Compute Engine) instance instead.
I hope this approach works for you.
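If you want to see what is actually filling the Cloud Shell home disk before you move elsewhere, a rough Python sketch like the one below lists the largest entries under $HOME (the ten-entry cutoff is arbitrary); a hidden cache or old download is often the culprit:

    import os
    import shutil

    home = os.path.expanduser("~")
    total, used, _free = shutil.disk_usage(home)
    print("%s: %.2f GiB used of %.2f GiB" % (home, used / 2**30, total / 2**30))

    def du(path):
        """Approximate on-disk size of a file or directory tree, skipping symlinks."""
        if not os.path.isdir(path):
            return 0 if os.path.islink(path) else os.path.getsize(path)
        size = 0
        for dirpath, _dirs, files in os.walk(path):
            for name in files:
                fp = os.path.join(dirpath, name)
                if not os.path.islink(fp):
                    try:
                        size += os.path.getsize(fp)
                    except OSError:
                        pass
        return size

    # Print the ten largest entries directly under the home directory.
    sizes = sorted(((du(os.path.join(home, e)), e) for e in os.listdir(home)),
                   reverse=True)
    for size, name in sizes[:10]:
        print("%8.1f MiB  %s" % (size / 2**20, name))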
I'm using Docker for a project; the main goal is to keep the application available even if one of the nodes (it's a 6-node cluster running Docker Swarm) goes down.
The application is basically a Django app that saves images from users, along with other models. I'm currently saving the images as files, but since that means specifying a volume local to a single machine, I would like to know if it would be better to save the images in a database cluster, so they would still be available even if a whole node goes down. Or is there another way?
Edit: the cluster runs locally and doesn't have internet access.
The two options are to perform the file sharing via the database or via the file system.
For file system sharing, you can use something like GlusterFS: each container appears to be mounting a host-local volume, but the data is actually shared via GlusterFS between the hosts.
To my mind, if it's your application (i.e. you can modify it at will), saving the files in the database would be the easier approach for most developers.
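For what it's worth, a minimal Django sketch of that database approach might look like this (the model and field names are illustrative, not taken from your app):

    from django.db import models

    class UserImage(models.Model):
        # Who uploaded it; in a real app this would likely be a ForeignKey to your user model.
        owner = models.CharField(max_length=150)
        content_type = models.CharField(max_length=100)   # e.g. "image/png"
        # The raw image bytes live in the database, so any Swarm node can serve them
        # without a shared volume.
        data = models.BinaryField()
        uploaded_at = models.DateTimeField(auto_now_add=True)

Storing blobs in the database trades away shared-volume complexity for bigger backups and heavier queries, so it tends to work best for modest image sizes and volumes.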
The best solution is often to go for a hosted option (such as MongoDB Atlas). Making a database resilient and highly available is really hard, and unless you are an expert on Docker and Mongo I would strongly recommend going for a hosted option.
We have the need to perform tests on localized platforms that put some burden on our hardware resources because for just a few weeks we might need plenty of servers and clients (Windows 2003 and Windows 2008, Vista, XP, Red Hat, etc) in multiple languages.
We have typically relied on blades with Windows 2003 and VMware, but sometimes these are outgrown by short-term spikes in demand, and the acquisition and deployment process is quite slow when the environment needs to grow.
Is Amazon EC2/S3 usable in the following scenario?
Install VMware (Desktop, because we need the ability to have snapshots) on an Amazon AMI.
Load existing VMware images from S3 and run them on EC2 instances (perhaps 3 or 4 server or client OSes on each EC2 instance).
We are more interested in the ability to very easily start or stop VMware snapshots for relatively short tests. This is just for testing configurations, not a production environment to actually serve a user workload. The only real user is the tester. These configurations might be required for just a few weeks and then turned off for a few months until the next release requires them again.
Is EC2/S3 a viable alternative for this type of testing purpose?
Do you actually need VMware itself, or are you just testing software that runs in VMware VMs? You might actually need VMware if you are testing, e.g., VMware deployment policy, or are running code that exercises the VMware APIs. An example of the latter situation (software that simply runs in VMware VMs) might be that you are testing an application server stack and currently use VMware to get coverage on many platforms.
If you actually need VMWare, I do not believe that you can install VMWare in EC2. Someone will correct & enlighten me if this is not the case.
If you don't actually need VMware, you have more options. If you can use one of the zillion public AMIs as a baseline, clone the appropriate AMIs and customize them to suit your needs (save the customized version as a private AMI for your team). Then you can use as many of them as you like. Perhaps you already have a bunch of VMware images that you need to use in your testing. In that case, you can migrate your VMware images to EC2 AMIs as described in various places on Google, for example:
http://thewebfellas.com/blog/2008/9/1/creating-an-new-ec2-ami-from-within-vmware-or-from-vmdk-files
(Apologies to the SO censors for not pasting the entire article here. It's pretty long.) But that's a shortcut; you can always use the documented AMI creation process to convert any machine (VMWare or not) to an AMI. Perform that process for each VMWare VM you have, and you'll be all set. Just keep in mind that when you create an AMI, you have to upload it to S3, and that will take a lot of time for large VMs.
This is a bit of a shameless plug, but we have a new startup that may deal with exactly your problem. Amazon EC2 is excellent for on-demand computing, but is really targeted at just a single user launching production servers. We've extended EC2 to make it a Virtual Lab Management environment, with self-service, policies and VM sharing. You can check it out at http://LabSlice.com and see if it meets your needs.
Amazon provides a solution themselves now: http://aws.typepad.com/aws/2010/12/amazon-vm-import-bring-your-vmware-images-to-the-cloud.html
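If you go the VM Import route today, a hedged boto3 sketch looks roughly like this. The region, bucket, key and descriptions are placeholders, and the standard "vmimport" service role must already be set up in the account:

    import time
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Start the import of a VMDK that has already been uploaded to S3.
    resp = ec2.import_image(
        Description="Windows 2003 test server",           # placeholder
        DiskContainers=[{
            "Description": "win2003-test",
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "my-vm-import-bucket",         # placeholder bucket
                "S3Key": "images/win2003-test.vmdk",       # placeholder key
            },
        }],
    )
    task_id = resp["ImportTaskId"]
    print("Started import task", task_id)

    # Poll until the task finishes; large images can take a long time.
    while True:
        task = ec2.describe_import_image_tasks(
            ImportTaskIds=[task_id])["ImportImageTasks"][0]
        if task["Status"] in ("completed", "deleted"):
            print("Final status:", task["Status"], "AMI:", task.get("ImageId"))
            break
        time.sleep(60)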
We recently bought a new rack and a set of servers for it. We want to be able to redeploy these boxes as build servers, QA regression test servers, lab re-correlation servers, simulation servers, etc.
We have played a bit with VMWare, VirtualPC, VirtualBox etc, creating a virtual build server, but we came across a lot of issues when we tried to copy it for others to use, having to reconfigure every new copy of the VM.
We are using Windows XP x86/x64 and Windows Vista x86/x64, so I had to rename the machine, join the domain, etc. for every new copy.
Ideally we just want to be able to add a new box, deploy a thin bootstrap OS (Linux is fine here) to get the VM up and running, then use it.
One other thing: we have limited to no budget, so free is best.
I would like to understand others experiences in doing the same thing.
FYI, I am not in systems IT; we are a group of software engineers trying to set this up.
Any links to good tutorials would be great.
The problem you're running into is that the machine SID must be unique for each machine in a domain. Of course, by copying an image you break that uniqueness constraint.
I'd suggest that you read the documentation for Sysprep in the reskit and Vista System Image Manager - your friends for XP/Win2k3 and Vista/Win2k8 respectively.
These tools enable you to "reseal" your configured instance of the OS so that the next time it boots, it can prompt for information such as network configuration, machine names and admin user IDs, run scripts, etc.
Also be aware that the licensing restrictions for Windows desktop clients are generally per image, not per server.
Using these tools with Hyper-V, we created complete preconfigured instances of Win2k3 and Win2k8 that boot straight into finishing a SharePoint install; going further, we used differencing disks to overlay Visual Studio so our devs could use the production images for their work. It has radically changed our development process.
At this point our entire public website runs on Hyper-V, with 5 boxes running 15 images for a mix of soft and hard redundancy; they take several hundred million page views per week.
Another option for dealing with the SID problem is NewSID. This is a simpler tool than Sysprep, in that all it does is rename the machine and reassign the SID; if you don't need all the other features of Sysprep, this is a much easier tool to use.