Given a vSphere client, I am trying to find a way to determine the ESXi host on which a VM of provided specs can be spawned. Does anyone know of a formula that relates the available CPU, RAM, and disk on an ESXi host, so one can decide which host is the better choice for spawning a VM of a defined flavor - a flavor here being a specified set of CPUs, RAM, and disk?
Basically, I want to determine the number of VMs of a given specification (CPU, RAM, and disk) that can be spawned on a host.
You can use the Configuration Maximums page to determine how many instances your ESXi hardware supports.
The upper limit per VM is 128 virtual CPUs, 6 TB of RAM, and 120 devices.
There are two main ways to go about this:
If you happen to have access to vROps, that capability is built into the "Optimize Capacity" section of its UI.
Use your programming language of choice to perform those calculations manually. Referencing a single host: divide the host's available CPU MHz by the desired VM MHz, divide the host's available RAM by the desired VM RAM, and take the host's datastore with the lowest amount of free space and divide that by the desired disk size of the VM. The lowest of those three figures is the maximum number of VMs that can be spawned on that particular host.
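That calculation can be sketched in a few lines of Python. The function and the example figures below are hypothetical; in practice you would pull the host's free CPU, RAM, and datastore space from the vSphere API (e.g. via pyVmomi) before calling it.

```python
# Hypothetical capacity check: the maximum number of VMs of a given
# flavor is the smallest of the three per-resource ratios. All names
# and figures here are illustrative, not a real pyVmomi call.

def max_vms_on_host(host_free_mhz, host_free_ram_mb, smallest_ds_free_gb,
                    vm_mhz, vm_ram_mb, vm_disk_gb):
    """Return how many VMs of the given flavor fit on this host."""
    by_cpu = host_free_mhz // vm_mhz
    by_ram = host_free_ram_mb // vm_ram_mb
    by_disk = smallest_ds_free_gb // vm_disk_gb
    # The tightest resource is the limiting factor.
    return min(by_cpu, by_ram, by_disk)

# Example: a host with 20,000 MHz and 96 GB RAM free, 500 GB free on its
# smallest datastore; a flavor of 2,000 MHz, 8 GB RAM, 40 GB disk.
print(max_vms_on_host(20000, 96 * 1024, 500, 2000, 8 * 1024, 40))  # -> 10
```

Run the same function across every host in the cluster and pick the host with the largest result to decide where to spawn the VM.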
We are working with VMware, and HA and DRS are enabled on a cluster. We want to set a CPU threshold for every host, i.e. if CPU utilization goes above 80%, VMs move automatically.
Thanks in advance
Details
You receive an event message when the number of physical CPUs on the host exceeds the limit.
This occurs when clients register more CPUs on an ESXi host than the host can support.
Impact
If the limit is exceeded, the management agent is at risk of running out of system resources. Consequently, VMware vCenter Server might stop managing the corresponding ESXi host.
Solution
To ensure management functionality, restrict the number of pCPUs to the limit indicated by the VMkernel.Boot.maxPCPUS property of the ESXi host.
To lower the maximum number of allowed registered pCPUs:
1) Select the host in vCenter Server.
2) Open the Configuration tab and select Advanced Options in the Software box.
3) Expand the VMkernel option and click Boot.
4) Edit the value in the text box to the right of the VMkernel.Boot.maxPCPUS variable.
Hope this is useful ;)
I want to understand what happens under the hood during a live migration, for my final year project.
According to my understanding, with two hosts sharing common storage via a SAN:
1) When a VM is migrated from one host to another, the VM files are transferred from one ESXi to another. But the question is: they have shared storage, so how are they going to be transferred?
2) The VMDK and snapshot files are transferred during live migration.
Now I have questions:
1) Are only the VMDK and .vmx files transferred?
2) With vMotion the memory pages are transferred, so what are these memory pages? Are they files, or what are they physically?
3) Where does the code for the migration live: in the hypervisor or in vCenter?
4) Can we get a stack trace for the VM or hypervisor during a migration, and if yes, how would that be possible? (I tried strace to get a basic idea of how a VM (Ubuntu) would call the hypervisor, but that only shows me down to the Linux system calls and not beyond.)
Can anyone please guide me on this?
vMotion overview
Phase 1: Guest Trace Phase
The guest VM is staged for migration during this phase. Traces are
placed on the guest memory pages to track any modifications by the
guest during the migration. Tracing all of the memory can cause a
brief, noticeable drop in workload throughput. The impact is generally
proportional to the overall size of guest memory.
Phase 2: Precopy Phase
Because the virtual machine continues to run and actively modify its
memory state on the source host during this phase, the memory contents
of the virtual machine are copied from the source vSphere host to the
destination vSphere host in an iterative process. The first iteration
copies all of the memory. Subsequent iterations copy only the memory
pages that were modified during the previous iteration. The number of
precopy iterations and the number of memory pages copied during each
iteration depend on how actively the memory is changed on the source
vSphere host, due to the guest’s ongoing operations. The bulk of
vMotion network transfer is done during this phase—without taking any
significant number of CPU cycles directly from the guest. One would
still observe an impact on guest performance, because the write traces
firing during the precopy phase cause a slight slowdown in page
writes.
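The iterative precopy can be sketched as a simple loop: each pass copies the pages dirtied during the previous pass, and the migration switches over once the remaining dirty set is small enough to send during the brief pause. The dirty rate, thresholds, and figures below are invented for illustration; real vMotion heuristics are more involved.

```python
# Toy model of vMotion's iterative precopy (illustrative numbers only).
# Each iteration copies the pages dirtied during the previous pass; once
# the dirty set is small enough, the VM is quiesced and switchover happens.

def precopy_iterations(total_pages, dirty_fraction, switchover_threshold,
                       max_iters=30):
    """Return the number of precopy passes before switchover.

    dirty_fraction: share of the in-flight pages the guest re-dirties
    while a pass is being copied (assumed constant for simplicity).
    """
    to_copy = total_pages          # the first pass copies all of memory
    iterations = 0
    while to_copy > switchover_threshold and iterations < max_iters:
        iterations += 1
        # Pages the guest dirtied while this pass was being copied.
        to_copy = int(to_copy * dirty_fraction)
    return iterations

# 4 GB of 4 KB pages, guest re-dirties 10% of in-flight pages,
# switch over once fewer than 1,000 pages remain.
print(precopy_iterations(1_048_576, 0.10, 1_000))  # -> 4
```

Note how a high dirty rate (a write-heavy guest) makes the loop converge slowly or not at all, which is why real implementations cap the number of passes, as `max_iters` does here.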
Phase 3: Switchover Phase
During this final phase, the virtual machine is momentarily
quiesced on the source vSphere host, the last set of memory
changes are copied to the target vSphere host, and the virtual
machine is resumed on the target vSphere host. The guest briefly
pauses processing during this step. Although the duration of this
phase is generally less than a second, it is the most likely phase
where the largest impact on guest performance (an abrupt, temporary
increase of latency) is observed. The impact depends on a variety of
factors including, but not limited to, network infrastructure, shared
storage configuration, host hardware, vSphere version, and dynamic
guest workload.
From my experience, I would say I always lose at least one ping during Phase 3.
Regarding your questions:
1) All data is transferred over the TCP/IP network. No .vmdk is transferred unless it's Storage vMotion. You can find all the details in the documentation.
2) The .nvram file stores the VM's BIOS/EFI state; guest memory is written to a .vmem file when a VM is suspended or snapshotted with memory. The full list of VMware VM file types can be found in the VMware documentation.
3) All the logic is in the hypervisor; vSphere Client and vCenter are management products. VMware's code base is proprietary, so I don't think you can get the actual source code. At the same time, you are welcome to check the ESXi CLI documentation. Due to licensing restrictions, vMotion can be invoked only via a client.
4) The guest OS (in your case Ubuntu) is not aware that it is running on virtual hardware at all. There is no way for the guest OS to track migration or any other VMware kernel/VMFS activity in general.
I am using VirtualBox on Windows 8 to launch a guest OS, with NAT as the virtual network adapter.
I want to know whether the guest OS supports network bandwidth control (I should get a notification when data usage crosses a particular limit) or not.
As far as I know:
Nope. VMware, on the other hand, has bandwidth-limiting capabilities (like throttling and introducing lag/jitter/loss).
But a bandwidth meter? No.
Use dedicated software for that. The one I used is called "Bandwidth Meter" (it's not free, but it worked perfectly for me).
And with this I go totally into Super User scope: try updating your modem firmware. Most new firmware includes a bandwidth meter feature (in case that's why you are looking for such a feature in VirtualBox).
Currently we are running VMware Server on Windows Server 2008 R2. The hardware specs of the machine are very good. Nonetheless, performance in the virtual machines is not at all acceptable when two or more virtual machines are running at the same time (just running, not performing any CPU- or disk-intensive tasks).
Hence we are looking for alternatives. VMware's website is full of buzzwords only; I cannot figure out whether they provide a product fitting our requirements. Alternatives from other suppliers are also welcome.
There are some constraints:
The virtualization product must run on Windows 2008 R2 - the server will not be virtualized (hence ESX is excluded)
Many Virtual Machines already exist. They must be usable with the new system, or the conversion process must be simple
The virtualization engine must be able to run without an interactive user session (hence VMware Player and VirtualBox are excluded)
It must be possible to reset a machine to a snapshot and to start a machine via command line from a different (i.e. not the host) machine (something like the vmrun command)
Several machines must be able to run in parallel without causing an enormous drop in performance
Do you have some hints for that?
Have you considered Hyper-V (the native hypervisor in Windows)?
However, I would suggest troubleshooting the performance issues first (the most common cause is not enough RAM for the VMs or the host, which results in paging and poor performance).
Though I could not find a real alternative to VMware Server given the constraints, I could at least speed up performance:
changing the disk policies from "Optimize for safety" to "Optimize for performance" reduced the time of most build projects by a third
installing the IP version 6 protocol on the XP machines typically brought another 10%
The slowest integration-testing project (installation of Dragon NaturallySpeaking 12) is now done in 20 minutes instead of 2h20min.
Still, when copying larger files from the host to the virtual machine, performance is unacceptable - while copying them from a different VM on the same host works far better...
I would still consider ESXi with 2008 on top of it if I were in your place.
We used VMware Server, and its performance is simply not comparable to ESXi, especially if you are running I/O-intensive applications.
I have a home server that I'd like to use with ESXi. It's a fairly decent system with a 45W Dual Core AMD Athlon with 6GB DDR RAM running in Dual Channel mode.
Would ESXi 5 let me use local datastores instead of using NFS or iSCSI targets?
Thanks,
F.
Short answer: yes, it will. It will create a local VMFS partition.
You will be locked out of more advanced features that require shared storage: DRS, High Availability, Fault Tolerance, etc. But those require vCenter and multiple ESX servers in any case.