I want to deploy 10-15 VMware hosts to CloudStack. This is my first time working with any type of cloud. While researching the installation and architecture, I got stuck on one point: to use VMware hosts I have to install vCenter Server, but I can't do that because it's paid. So please tell me: is there a way to deploy these VMware hosts on CloudStack without buying any licensed software?
Unfortunately, CloudStack does not support vSphere/ESXi without vCenter. There have been several requests to support vSphere/ESXi without vCenter; however, keep in mind that many of the features vCenter provides would have to be reimplemented in CloudStack, and that is not an easy task.
If you want to remain open source and/or free, consider using Xen with XenCenter, or just go pure KVM. I used VMware for most of my career and recently transitioned to KVM - it was an easy switch and I have no regrets.
The CloudStack mailing lists are the best place to get answers to any setup questions you might have.
All best
-ilya
By design, vCenter is a must for CloudStack to manage and build a cloud over VMware ESXi hosts. It would be a huge exercise to extend support to ESXi host management without vCenter, and the result would lack features like live migration, VMware distributed virtual switches, DRS, etc.
You might consider switching to XenServer, which is free and seamlessly supported by CloudStack. Feel free to discuss your deployment configuration and planning at users@cloudstack.apache.org or dev@cloudstack.apache.org.
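To make the vCenter dependency concrete: the CloudStack API call that adds a VMware cluster takes a URL pointing at a vCenter inventory path, not at individual ESXi hosts. Here is a minimal Python sketch of that call; the endpoint, API keys, UUIDs, and inventory path are all hypothetical placeholders.

```python
# Hedged sketch: adding a VMware cluster via the CloudStack API.
# Note the 'url' parameter - it is a vCenter inventory path, which is
# why ESXi without vCenter cannot be wired into CloudStack.
import base64, hashlib, hmac, urllib.parse, urllib.request

API_URL = "http://cloudstack.example.com:8080/client/api"  # placeholder
API_KEY, SECRET_KEY = "your-api-key", "your-secret-key"    # placeholders

def signed_request(params):
    # CloudStack request signing: sort the parameters, URL-encode the
    # values, lowercase the whole string, HMAC-SHA1 it with the secret
    # key, and base64-encode the digest.
    params = dict(params, apikey=API_KEY, response="json")
    query = "&".join(f"{k}={urllib.parse.quote(str(v), safe='')}"
                     for k, v in sorted(params.items()))
    sig = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                 hashlib.sha1).digest()).decode()
    url = f"{API_URL}?{query}&signature={urllib.parse.quote(sig, safe='')}"
    return urllib.request.urlopen(url).read()

print(signed_request({
    "command": "addCluster",
    "zoneid": "zone-uuid", "podid": "pod-uuid",          # placeholders
    "hypervisor": "VMware", "clustertype": "ExternalManaged",
    "clustername": "dc1/cluster1",
    "url": "http://vcenter.example.com/dc1/cluster1",    # vCenter, not ESXi
    "username": "administrator@vsphere.local",
    "password": "vcenter-password",
}))
```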
I am in the process of evaluating vendors for upgrading our existing VMware environment. In a conversation with a provider, he told me that vMotion was not possible without a separate SAN appliance or vSAN (the latter requiring 6+ hosts and expensive licensing).
Under the impression that our 3-host cluster already had vMotion licensing and capability, I tried to "vMotion" a running Windows VM using the vSphere client. I was able to "migrate" both the VM and its disk to a new host and datastore respectively, but nowhere did I see the term "vMotion" in the Recent Tasks log at the bottom of the UI. What I did see there was "Migrating Virtual Machine - Active State" and I was able to maintain an RDC connection and interact with the VM all through the migration process.
My question: Am I misunderstanding the term vMotion? Is it different than migration in an "active state"?
Also, assuming vMotion is an unattended convenience and seeing as we already have an image-level backup solution for our VMs and my company is okay with manually restoring those VMs from a backup (as opposed to the convenience of an "instant," unattended, back-end restoration), is vMotion worth the investment in a dedicated SAN server if we're already capable of "live migration" on demand?
And don't worry about selling me on all the benefits of a SAN. Believe me, I'm already with you on that. The people over here who sign the checks just have different priorities is all.
TWIMC: We're in a 3-host cluster, ESXi 6.0 on all. Enterprise Plus licensing.
vMotion is VMware's branding for the ability to migrate powered-on / running virtual machines from one ESX/ESXi host to another. The vSphere UI does not refer to the actual operation as vMotion, except in a few places where the branding matters, e.g. when configuring a feature called Enhanced vMotion Compatibility (EVC) or when enabling vMotion traffic on a specific VMkernel network adapter.
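For what it's worth, the API keeps the same naming discipline: a live compute and/or storage move is just RelocateVM_Task. Here is a minimal pyVmomi sketch; the vCenter address, credentials, and all VM/host/datastore names are hypothetical placeholders.

```python
# Hedged sketch: live-migrating a powered-on VM with pyVmomi.
# Nothing in the SDK is called "vMotion" either.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com",          # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    # Return the first inventory object of the given type with the given name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, "my-windows-vm")              # placeholder

spec = vim.vm.RelocateSpec()
spec.host = find(vim.HostSystem, "esxi-02.example.com")     # compute target
spec.datastore = find(vim.Datastore, "local-datastore-02")  # storage target

# On a powered-on VM this is exactly the operation the UI logs as
# "Migrating Virtual Machine".
task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.defaultPriority)

Disconnect(si)
```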
On the point about vSAN / a physical SAN being mandatory: you already confirmed that you can migrate the VMDKs of a live VM, so it's not a strict necessity. The official docs have a section about the limitations of simultaneous compute + storage migration: https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vcenterhost.doc/GUID-9F1D4A3B-3392-46A3-8720-73CBFA000A3C.html.
I'd bet that migration is faster when only the memory image of a powered-on VM has to move, i.e. with shared storage; this matters especially in automated DRS setups where VMs are migrated automatically based on a pre-configured policy. Users on Reddit seem to have tested this: https://www.reddit.com/r/vmware/comments/matict/vmware_drs_cluster_without_shared_storage_das/gru579m/.
Note that I am a VMware employee (albeit not in sales), and you'd probably want a different, unbiased opinion about the product's merits ;)
I am new to the cloud and still learning GCP. I exhausted almost all my free GCP credit within 2 months while learning different modules.
GCP is great and provides a lot of things to ease the development and maintenance process.
But I realized that using the different modules cost me a lot.
So I was wondering: if I had one big VM and installed MySQL, Docker, and the required Java and React components myself, could I achieve pretty much everything I want without using the extra modules?
Am I right?
Can I use the same VM to host multiple sites by changing the API ports, or do I need different boxes for that?
Your question is less about GCP than about IT architecture. You can create one big VM with everything installed on it, but you have to manage it yourself and scaling is hard.
You can also have one VM per website, but the management cost is higher (patching and upgrades). However, you can scale with better granularity (per website).
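On the ports question: yes, a single VM can serve several sites on different ports; in practice you would then put a reverse proxy in front so they all answer on 80/443. A minimal stdlib-only Python sketch of the one-box-several-ports idea (the site names and ports are arbitrary):

```python
# Hedged sketch: two independent "sites" served from one VM on two ports.
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

def make_handler(site_name):
    # Build a request handler that identifies which site answered.
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = f"Hello from {site_name}".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
    return Handler

# Arbitrary port choices; anything free on the VM works.
for port, site in [(8081, "site-a"), (8082, "site-b")]:
    server = HTTPServer(("0.0.0.0", port), make_handler(site))
    threading.Thread(target=server.serve_forever, daemon=True).start()

input("Both sites are up; press Enter to stop.\n")
```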
The standard pattern today is to split your monolithic server into dedicated services: the database on one server, Docker and the Java backend on another, and the React front end served as static content (for example, from Google Cloud Storage).
If you want to keep using VMs, you can containerize your application and run it on GKE. It's far easier to maintain your workloads with an orchestration tool like Kubernetes.
The ultimate step is to use serverless and/or fully managed solutions: Cloud SQL for your database, GCS for your static content, and App Engine or Cloud Run for your backend. This way you pay as you go, and if your website gets little traffic, you won't be charged much for it (except for the database).
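To show how small the serverless step is, here is a minimal sketch of a Cloud Run-style backend, assuming Flask is installed. Cloud Run injects the PORT environment variable, and the container just has to listen on it.

```python
# Hedged sketch: a minimal backend deployable to Cloud Run.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a pay-per-use backend"

if __name__ == "__main__":
    # Cloud Run sets PORT; default to 8080 for local testing.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Containerize it and push the image, and Cloud Run scales it to zero when nobody is visiting, which is exactly the pay-as-you-use behavior mentioned above.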
What are the main differences between vSphere 5.5 and vSphere 6? Are there any feature additions? Are they backwards compatible?
Please check this table for a comparison
vSphere 6/6.5 is a major departure from 5.5. There are a number of small changes, but the two biggest you will likely notice are that in 6+ they deprecated the C# vCenter client and moved vCenter to an in-house Linux-based appliance instead of hosting it on Windows.
vSphere 6.5 introduced an HTML5 client for vCenter data center management, accessible from any system with a browser such as Google Chrome, and completely deprecated the Windows-based client.
Many features were added, such as HCI (hyper-converged infrastructure), where your compute, storage, and network are all integrated in a single device. You can also cut down on manual management work: creating switches and deploying the same configuration across multiple data centers becomes easier, and replication jobs are simpler.
You can find more info
I am doing some research on VMware vSAN because we are looking at our options for storage, and I am getting mixed answers when I Google. We are building a new host in our new office and we are starting fresh. In our old setup we had an HP server host with a few drives running ESXi, connected to a SAN, and we used a combination of both for VM storage and file storage. We did not use vSAN, but with the new setup this is definitely an option. We are looking at an HP ProLiant DL380 Gen9 server that is capable of holding several drives. If I loaded this up with large drives and set up vSAN, would this be a good option for a file storage server? This host will run several other VMs as well.
So, basically you want to do a hardware refresh and reconfigure your system architecture. Correct me if I'm wrong.
If so, then IMHO the best way to go is one of the hyper-converged solutions. I see three options here:
SimpliVity (https://www.simplivity.com/). It's really good, but it was too expensive for one of the projects I had. Also, its performance is mostly bottlenecked by a proprietary component (an FPGA), which in most cases means a lack of flexibility.
VMware vSAN (I'm sure you don't need a link for that :) ). According to a friend of mine who works at VMware, it is usually considered for big deployments, so if that is your case, go for it.
StarWind Hyper-Converged Appliance (https://www.starwindsoftware.com/starwind-hyperconverged-appliance). That one is SMB-oriented. It combines commodity Dell hardware with a bundle of software. Since everything is commodity, it is easy to handle.
I hope that helps.
P.S. I'm not sure this is the best place to ask this question; Server Fault would possibly be a better fit.
Fault-tolerant file storage is possible with VMware Virtual SAN, but it's kind of expensive. Either way, VMware solves storage redundancy for running VMs, but it does not solve exporting the SMB 3.0 or NFS v4.1 mount points you'll need; you have to use dedicated file-server VMs for that. FreeBSD / Linux for NFS (or SMB via Samba) and Windows / Hyper-V Server for SMB 3.0 will do the trick!
There was a similar discussion on Reddit some time ago with lots of good thoughts:
https://www.reddit.com/r/vmware/comments/4223ol/virtual_san_for_file_servers/
I want to learn Apache Nutch, and I have an account at Amazon Web Services (AWS). I have three machines at AWS: one is micro sized, another is small, and the third is medium. I want to start with the small one, and I will install Nutch, Hadoop, and HBase on it. My machines run CentOS 6.
There is a related question here, but it's not what I'm asking: Nutch 2.1 (HBase, SOLR) with Amazon Web Services
I want to learn which approach is better. I want to install everything on the small machine and then add the micro one. On the other hand, I don't have any experience with Nutch, so maybe I should work locally first. Or is it possible to use both my own machine and AWS (does that cost more, i.e. is copying data out of AWS charged)?
When I want to implement a wrapper for Nutch, should I develop it locally (to have the source code) and then run it on AWS?
Any ideas?
It sounds like you're facing a steep learning curve.
For one, you admit that you're just learning Nutch, so I would recommend you install CentOS on a physical box at home and play around there.
On the other hand, you are pondering the use of a micro AWS instance, which will not be useful for running a CPU/memory-intensive application like Nutch. Read about AWS micro instances here.
My suggestion is to stick to a single physical box solution at home and work on scripting your solution before moving on to an AWS instance.