Alternative to VMware Server on Windows Server 2008 R2

Currently we are running VMware Server on Windows Server 2008 R2. The hardware specs of the machine are very good. Nonetheless, performance in the virtual machines is not at all acceptable when two or more of them are running at the same time (just running, not performing any CPU- or disk-intensive tasks).
Hence we are looking for alternatives. VMware's website is full of buzzwords, and I cannot figure out whether they offer a product that fits our requirements. Alternatives from other suppliers are also welcome.
There are some constraints:
The virtualization product must run on Windows Server 2008 R2; the server itself will not be virtualized (hence ESX is excluded).
Many virtual machines already exist. They must be usable with the new system, or the conversion process must be simple.
The virtualization engine must be able to run without an interactive user session (hence VMware Player and VirtualBox are excluded).
It must be possible to reset a machine to a snapshot and to start a machine via the command line from a different (i.e. not the host) machine, something like the vmrun command (see the sketch below).
Several machines must be able to run in parallel without causing an enormous drop in performance.
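For reference, this is the kind of remote control I mean, sketched with vmrun against VMware Server 2.0 (host name, credentials, and datastore path are placeholders): first revert the VM to a named snapshot, then power it on, all from another machine.
vmrun -T server -h https://buildhost:8333/sdk -u admin -p secret revertToSnapshot "[standard] winxp/winxp.vmx" clean
vmrun -T server -h https://buildhost:8333/sdk -u admin -p secret start "[standard] winxp/winxp.vmx"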
Do you have any suggestions?

Have you considered Hyper-V (the native hypervisor in Windows)?
However, I would first suggest troubleshooting the performance issues; the most common cause is not enough RAM for the VMs or the host, which results in paging and poor performance.
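If you do try Hyper-V, enabling the role on Server 2008 R2 should be a one-liner from an elevated prompt, followed by a reboot (a sketch; I'm assuming the DISM feature name Microsoft-Hyper-V applies to your edition):
dism /online /enable-feature /featurename:Microsoft-Hyper-V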

Though I could not find a real alternative to VMware Server given the constraints, I could at least speed things up:
Changing the disk policies from "Optimize for safety" to "Optimize for performance" reduced the time of most build projects by a third.
Installing the IPv6 protocol on the XP machines typically gained another 10%.
The slowest integration-testing project (installation of Dragon NaturallySpeaking 12) is now done in 20 minutes instead of 2 hours 20 minutes.
Still, when copying larger files from the host to the virtual machine, performance is unacceptable, while copying them from a different VM on the same host works far better...

I would still consider ESXi with Windows Server 2008 on top of it if I were in your place.
We used VMware Server, and its performance is simply not comparable to ESXi, especially if you are running I/O-intensive applications.


Does VMware Workstation/Player allow multiple kernel images?

I am planning to use VMware Workstation for installing Linux, but my use case requires multiple kernel versions as part of my development work.
Does VMware allow this?
I mean, will GRUB (or another boot loader) prompt me to load the kernel of my choice, the way it would on an actual system?
Thanks, kedar
Yes, it will allow this. Linux does not care whether it is running in a VM or on real hardware. As far as Linux knows (except for the VMware Tools, of course), it is running on real hardware.
The VM "disk" is just a file on the host file system, so it can be set up independently of that host file system, including boot loaders and such.
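For example, the guest's boot menu can offer several kernels exactly as on physical hardware; here is a sketch of a legacy /boot/grub/grub.conf with two entries (kernel versions and paths are illustrative):
default=0
timeout=10
title Linux (2.6.32-custom)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-custom ro root=/dev/sda1
        initrd /initrd-2.6.32-custom.img
title Linux (2.6.18-stock)
        root (hd0,0)
        kernel /vmlinuz-2.6.18 ro root=/dev/sda1
        initrd /initrd-2.6.18.img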
VMware Workstation mimics a true hardware installation very well; almost everything you can do on a physical box you can do in a virtual machine. It's not perfect, but it is pretty close. I use a two-physical-machine setup to mimic a ten-machine domain lab. The ability to save snapshots or to pause a machine makes it better than a physical machine in some respects.
It is a great tool and one that I recommend for anyone learning IT.

Any reason not to use ESXi?

We have 3 identical HP DL380 G5 servers here, one of them running VMware Server with a single VM on it.
I've begun the process of migrating these systems to ESXi (the $0, "embedded" edition); two of the physical machines will run exactly one VM 99.99% of the time, the other will run two.
For me, the major advantage is the disaster-recovery ability. Our tape backup system doesn't do "bare metal" restores, but I can manually copy VM images to a different server. Even if they are months old, they provide pretty-close-to-instant recovery; any further recovery would come from tape.
Being the free version, I don't get VMware Consolidated Backup or VMotion, and I have to manage each physical machine separately. But ESXi takes only 32 MB of disk, and it specifically supports this server model.
With that in mind, is there any reason not to always use ESXi if the hardware supports it, even if you only plan to run one VM on that hardware?
Well, in your case ESXi is the better choice. There are cases where you want to use VMware Server, but not really this one; this is what ESXi is for. For instance, I use VMware Server on top of my development OS so I can do testing, try different distros, etc. I wouldn't use VMware Server for a production server like you are describing; ESXi would be the best choice.
Is it an excellent idea to virtualize the whole OS just to get the ability to make backups? No, it's not... damn hype to virtualize without a real need for it.
There are free alternatives for making backups of pretty much any OS, as an image or an archive of your choice.
To be more precise, XSIBackup will let you hot-backup any ESXi edition from version 5.1 up; it backs up the guest OS while it is running and can even transfer it to a secondary ESXi box over IP and leave it ready to be switched on:
https://33hops.com/xsibackup-vmware-esxi-backup.html

How to keep a VMWare VM's clock in sync?

I have noticed that our VMware VMs often have the incorrect time on them. No matter how many times I reset the time, they keep drifting out of sync.
Has anyone else noticed this? What do other people do to keep their VM time in sync?
Edit: These are CLI Linux VMs, by the way.
If your host time is correct, you can set the following .vmx configuration file option to enable periodic synchronization:
tools.syncTime = true
By default, this synchronizes the time every minute. To change the rate, set the following option to the desired sync interval in seconds:
tools.syncTime.period = 60
For this to work you need to have VMware Tools installed in your guest OS.
See http://www.vmware.com/pdf/vmware_timekeeping.pdf for more information.
According to VMware's knowledge base, the actual solution depends on the Linux distro and release. On RHEL 5.3 I usually edit /etc/grub.conf and append these parameters to the kernel entry: divider=10 clocksource=acpi_pm
Then enable NTP, disable VMware time synchronization from vmware-toolbox, and finally reboot the VM.
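As a sketch, the resulting kernel entry in /etc/grub.conf would look something like this (kernel version and root device are illustrative):
title Red Hat Enterprise Linux Server (2.6.18-128.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 divider=10 clocksource=acpi_pm
        initrd /initrd-2.6.18-128.el5.img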
A complete table with guidelines for each Linux distro can be found here:
Timekeeping best practices for Linux guests
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427
I'll answer for Windows guests. If you have VMware Tools installed, the taskbar's notification area (near the clock) has an icon for VMware Tools. Double-click it and set your options.
If you don't have VMware Tools installed, you can still set the clock's Internet Time option to sync with some NTP server. If your physical machine serves NTP to your guest machines, you can get that done with host-only networking. Otherwise you'll have to let your guests sync with a genuine NTP server out on the internet, for example time.windows.com.
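As a sketch, pointing a Windows guest at an NTP server from an elevated command prompt (the server name is just an example):
w32tm /config /syncfromflags:manual /manualpeerlist:time.windows.com /update
w32tm /resync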
Something to note here. We had the same issue with Windows VMs running on an ESXi host. Time sync was turned on in VMware Tools on the guests, but the guest clocks were consistently off from the host clock (by about 30 seconds). The ESXi host was configured to get time updates from an internal time server.
It turned out we had the Internet Time setting turned on in the Windows VMs (Control Panel > Date and Time > Internet Time tab), so the guests were getting time updates from two places and the internet time was winning. We turned that off, and now the guest clocks are good, getting their time exclusively from the ESXi host.
In my case we are running VMware Server 2.0.2 on Windows Server 2003 R2 Standard; the host is also Windows Server 2003 R2 Standard. I had VMware Tools installed and set to sync the time. I tried everything imaginable that I found on various internet sites. We still had horrendous drift, although it had shrunk from 15 minutes or more down to the 3-4 minute range.
Finally, in vmware.log (which resides in the same folder as the .vmx file), I found this entry:
"Your host system does not guarantee synchronized TSCs across different CPUs, so please set the /usepmtimer option in your Windows Boot.ini file to ensure that timekeeping is reliable. See Microsoft KB http://support.microsoft.com/kb... for details and Microsoft KB http://support.microsoft.com/kb... for additional information."
Cause: This problem occurs when the computer has AMD Cool'n'Quiet technology (AMD dual cores) enabled in the BIOS, or with some Intel multi-core processors. Multi-core or multiprocessor systems may encounter Time Stamp Counter (TSC) drift when the time between different cores is not synchronized. Operating systems that use the TSC as a timekeeping resource may experience the issue. Newer operating systems typically do not use the TSC by default if other timers are available in the system; other available timers include the PM timer (PM_Timer) and the High Precision Event Timer (HPET).
Resolution: To resolve this problem, check with the hardware vendor to see whether a new driver/firmware update is available to fix the issue.
Note: The driver installation may add the /usepmtimer switch to the Boot.ini file.
Once this (the /usepmtimer switch) was done, the clock was dead on time.
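For illustration, a Boot.ini entry with the switch appended might look like this (the ARC path and OS description are examples):
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 R2 Standard" /fastdetect /usepmtimer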
This documentation solved this problem for me.
The CPU speed varies due to power saving. I originally noticed this because VMware gave me a helpful tip on my laptop, but this page mentions the same thing:
Quote from: VMware tips and tricks
Power saving (SpeedStep, C-states, P-States,...)
Your power-saving settings may interfere significantly with VMware's performance. There are several levels of power saving.
CPU frequency
This should not lead to performance degradation, beyond the obvious lower performance when running the CPU at a lower frequency (either manually or via governors like "ondemand" or "conservative"). The only problem with varying the CPU speed while VMware is running is that the Windows clock will gain or lose time. To prevent this, specify your full CPU speed in kHz in /etc/vmware/config:
host.cpukHz = 2167000
VMware experiences a lot of clock drift. A Google search for 'vmware clock drift' links to several articles.
The first hit may be the most useful for you: http://www.fjc.net/linux/linux-and-vmware-related-issues/linux-2-6-kernels-and-vmware-clock-drift-issues
When installing VMware Tools on a Windows guest, "Time Synchronisation" is not enabled by default.
However, best practice is to enable time sync on Windows guests.
There are several ways to do this from outside the VM, but I wanted to find a way to enable time sync from within the guest itself, either during or after the Tools install.
Surprisingly, this wasn't quite as straightforward as I expected.
(I assumed it would be possible to set this as a parameter/config option during the Tools install.)
After a bit of searching I found a way to do this in a VMware article called "Using the VMware Tools Command-Line Interface".
So, if time sync is disabled, you can enable it by running the following command line in the guest:
VMwareService.exe --cmd "vmx.set_option synctime 0 1"
Additional Notes
For some (IMHO stupid) reason, this utility requires you to specify the current value as well as the new one:
0 = disabled
1 = enabled
So, if you run this command on a machine which already has this set, you will get an error saying "Invalid old value".
Obviously you can ignore this error (so it's not a huge deal), but the current design seems a bit dumb.
IMHO it would be much more sensible if you could simply specify the value you want to set, without being required to specify the current value.
i.e.
VMwareService.exe --cmd "vmx.set_option synctime <0|1>"
In an Active Directory environment, it's important to know:
All member machines synchronize with any domain controller.
In a domain, all domain controllers synchronize from the PDC Emulator (PDCe) of that domain.
The PDC Emulator of a domain should synchronize with a local time source or an external NTP server.
It's important to consider this when setting the time in VMware or configuring time sync.
Extracted from: http://www.sysadmit.com/2016/12/vmware-esxi-configurar-hora.html
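As a sketch, pointing the PDC Emulator at external NTP sources and restarting the time service (the peer list is just an example):
w32tm /config /syncfromflags:manual /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /reliable:yes /update
net stop w32time && net start w32time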
I added the following job to crontab. It is hacky, but I think it should work.
*/5 * * * * service ntpd stop && ntpdate pool.ntp.org && service ntpd start
It stops the ntpd service, forces an immediate one-shot sync with ntpdate, and then starts ntpd again.

VMware ESX vs. VMware Workstation

I'm using VMware Workstation 6.0 to simulate tight clusters of "blades" in a "chassis". Both the host and target OSes are Linux. Each "chassis" uses a vmnet switch as a virtual backplane, to which the virtual blades connect. Other vmnet switches are used to mediate point-to-point connections between multiple virtual Ethernet adapters on each blade VM. The chassis, and thus the VMs, are brought up and shut down rather frequently. My scripts (Python) make heavy use of the VIX API, and also manipulate the .vmx config files.
What do I gain and/or lose by going from VMware Workstation to ESX? Do my scripts that use the VIX API still work? Do my rather complicated virtual network topologies, with lots of vmnet switches defined as "custom", still work the same way? Are the syntax and semantics of the .vmx config file the same between Workstation and ESX? Thanks in advance for your help.
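For concreteness, a sketch of the kind of per-adapter .vmx entries involved, as written on a Linux Workstation host (the adapter index and vmnet device are examples; I'm assuming the usual Workstation key names):
ethernet0.present = "TRUE"
ethernet0.connectionType = "custom"
ethernet0.vnet = "/dev/vmnet2"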
The first thing you'll gain by switching is a substantially more powerful platform that runs directly on the bare metal of your server.
From my experience, moving up the VMware application stack has never been problematic (Server to Workstation to ESX). However, I would verify this by exporting all of your VMs from the Workstation install to an ESX install, to make sure you don't hit any 'weird' issues related to running VMware's high-end tool.
From my [limited] experience, scripts also carry over cleanly: as you move up the product line, each offering avoids breaking the lower-level tools while adding substantial improvements.
You get scalability and performance.
ESX scales much better and runs much faster than any of VMware's desktop products, like Workstation or Player.
You should not lose anything. ESXi performs all the functions that Workstation does, plus a lot more. I use ESXi at home and Workstation on my laptop.
You will gain more fine-grained control over the virtual networks, over storage, snapshots, cloning, quiescing guest OSes, and many more advanced options in ESXi configuration.
One thing to note is the considerable expense of the ESX line compared to Workstation. If you're working for a successful company, though, the cost can easily be justified, as ESX is (IMHO) da bomb. Also, FYI, the old free VMware Server option definitely had a whole different interface.

vmware and performance for developing [closed]

Curious: how many of you develop in a VMware environment?
Is it popular for employers to set up VMware for everyone?
It seems like a great way to roll out new desktop computers, perform backups, etc.
I'm just worried about the performance (desktop VMware, that is).
Update
I was just looking at VMware's site: 1.3 BILLION in sales... wow!
I almost exclusively use virtual machines for development and am very happy doing so. The flexibility of multiple sandboxed environments is definitely worth a small trade-off in performance.
Clearly a VM will never give you the same results as running on a native system, but you should be able to get performance that's easily within 10-15% of the real thing. In my experience, many of the performance problems people encounter are due to under-specced or poorly configured systems and VMs.
I primarily develop with a Vista x64 virtual machine on a 2.4 GHz Core 2 Duo with 4 GB of RAM. Of this I assign 2 GB of RAM and two virtual cores to my main VM. If I'm running more than one VM, I usually change this to 1-1.5 GB and one core.
Here are some quick GeekBench test results (note that GeekBench results under OS X and Vista don't seem comparable; they're listed here to show the impact of configuration on both systems):
Fresh boot, no active applications:
Native OSX - 3115
Native OSX running Vista 64 VM - 3042
Native Vista 64 (2.4GHz x 2, 4GB) - 2596
Vista 64 VM (2 VCore, 3GB) - 2362
Vista 64 VM (1 VCore, 2GB) - 1892
These are the most common reasons for poor VM performance, in my experience:
Under-specced machines. Ideally you should be able to dedicate one core and 1 GB of memory to each VM you plan to work in. Contrary to what you might read, I've found that Vista runs within a few percent of XP with 1 GB of memory.
Running too many things in your VM. Keep your email, web browsing, and IMs to Mummy on your native OS.
On your VM, turn off items such as screensavers, background apps, and non-essential services. If your VMs are backed up, you may want to turn off System Restore.
If possible, put your VMs on a separate hard drive from your native OS, so their disk access is independent if one or the other starts paging.
Defrag your VM drive. It does make a difference.
VMware Workstation 6.5 runs like a champ on my older Athlon X2. I use Visual Studio on my host machine and have many VMs installed with various OS, framework, and browser combinations. VMware Workstation adds VM debugging to Visual Studio as well, so I can just hit F6 to start my app in any one of my VMs and debug it under any OS I want. The only catch is that you need at least 4 GB of RAM to make it practical to use more than one VM at a time.
My company uses VMware to test our webapp with different browser/OS combinations. Everyone has at least one VM on their machine for this purpose. We all develop on the native machine, however; even on a quad-core machine with 4 GB of RAM, it takes about 20 minutes to do a clean build of our app! Personally, I dislike using VM images because of how much paging they do. A few developers here have started using Linux as the host OS and running Windows VMs inside it, and they get much better performance due to reduced paging (Linux is much better at memory and disk-cache management, plus it has a better scheduler). The extra VMs for testing that would normally run inside our Windows instance thus get moved to run side by side on the Linux host, which improves performance.
I switched to developing exclusively in VMs around the time I started working with technologies like BizTalk Server, SharePoint, and betas/CTPs of various things... it just got to be impossible to have all that stuff coexist on the same box.
Since switching I have enjoyed many other benefits to developing in a VM - snapshots, portability, dynamically marshaling resources, etc.
The ultimate benefit is that VMware has a presence on many different host operating systems, so I am free to select the host OS of my choice: XP, Vista, Linux, OS X, etc.
Now I run OSX on a MacBook Pro, which allows me to do Mac and iPhone development as well as Windows development, all on the same box.
That is the long-winded backstory that brings me to answering the question: as long as your hardware is decently spec'd, you should not run into any performance problems, even doing crazy shit with BizTalk and SQL Server.
We use it where I work. We are even making a DVD with the appliance on it to reduce the time it takes new developers to get up to speed.
Regarding performance, I have seen a performance hit. It seems mostly limited by the hard drive if you have snapshots enabled. Of course, after I moved my VMs to a VelociRaptor, even that performance hit is no longer noticeable.
Oh, and I develop ASP websites and C/C++ applications using Visual Studio 2005 and 2008.
Sadly, it's not yet "popular" in the sense of "common," but it's definitely "popular" in the sense of "enjoyed" by those who try it. As a consultant, I love it, since it allows me to swap tool chains in a matter of minutes and, at the end of an engagement, burn a DVD, throw it in the project file, and be done with it.
Several responders seem to be emphasizing the use of VMs for testing, where I think it is beginning to gain some traction, at least within more sophisticated shops. It's clearly a huge win for deployment and compatibility testing.
Depends on the employer, I suppose. On a machine that is adequately equipped, VMware (or any virtualization software) performs perfectly fine. On the machines you are more likely to be forced to use at the majority of programming jobs, not so much.
I personally do not use VMWare at work. My work machine barely has enough power to natively handle the tools I need to use.
It's very popular unless the employer is cheap; I've used it at a few companies. It's great for .NET or any language where you have to check whether the thing works on different OS versions/platforms. The most common setup is not to run VMware on your own computer but to connect to it remotely.
I've started using VMware for almost everything on my personal PC.
I keep my native Windows install for games only and have separate VMs for everything else:
a general office workstation (MS Office, accounting software, general crapware). This one stays on almost all the time.
a WAMP stack dev environment
a MS stack dev environment
a throwaway environment for beta testing and toying around with things that might break the OS install.
Everything is pretty fast. I use a streamlined WinXP base install that takes up very little space/memory.
Disk I/O seems to be the bottleneck for me, but I feel we are only one generation (6 months?) away from quite affordable SSDs.
I couldn't go back to physical computing.
Once you start using VMs you'll never go back. I use VMware on a MacBook Pro for Windows and Linux development and I'm very happy with the result.
Observations:
get plenty of RAM. 4 GB is quite usable, but 8 is better. You're a developer; you have a lot of apps and web pages open, right?
allocate one core to the VM - it's faster than two.
follow VMware's recommendations for allocating RAM to the guests.
use a virtual hard drive for the guest OS. It's much faster than running the guest from a Boot Camp partition.
VMware doesn't have the WDDM driver needed to enable Aero.
when I did an eval, the VMware video drivers on a Linux host didn't seem nearly as fast as those on Windows or OS X hosts. Video for Windows guests is noticeably slower on a Linux host than on the other two OSes. This was the main reason I chose a Mac over a Linux machine.
In my development environment I use a couple of VMs, usually one (Linux) server per role (such as Subversion, MySQL databases, web server, Trac server, etc.). This way my primary machine remains clean and can't affect my work by running amok, and the data remains secure on the VM host.
VMware is quite high-level; for production I'd recommend a more low-level, bare-metal solution like Xen.
VMware as a Windows development environment runs terribly on my dual core with 2 GB of RAM (XP guest, XP host). Even with nothing running on the host except VMware, there is constant paging that takes about a minute to settle every time I switch applications. Heck, native VS2008 doesn't even run that great during IntelliSense-heavy use (occasional noticeable lag). While using a fixed VM image as my day-to-day working environment has a ton of benefits, the second-to-second performance lag is just too frustrating.
My employer is buying me a nice 64-bit system with a ton of RAM, so I'll revisit the subject in a month. For now I just re-image my machine every couple of months.
...console development obviously performs just fine. For server applications (deployment), where high-memory applications aren't constantly launching and closing, VMware is lovely and performs fine.
I am doing some SharePoint development and I really love the flexibility that comes from using VMware Player on my laptop. I have one image with WSS and the VS2005 tool chain, and another image with MOSS and VS2008/SQL Server 2008 for when I need to push things to the max.
When the 2008 image became corrupt (too many beta versions, I guess), I could just delete it and create a new one from a prior backup.
Being able to develop in a server environment while on the train speaks for itself.
PS: It only takes 4 GB to run VMware, and it performs really nicely, even with a slow 5600 rpm disk drive.
Personally, I would love to use a virtualization solution for my day-to-day development because of the ability to test and develop on multiple operating systems simultaneously. However, since my day-to-day development involves quite a bit of OpenGL, this currently isn't a workable solution: most of the time the OS in the VM will fall back to software rendering due to the lack of drivers and hardware acceleration.
I develop against a VMware version of my entire network, including the AD server, DB server, etc. Needless to say, the performance is terrible, even on our VMware server with 4 GB of RAM. But it does allow me to develop without fear of accidentally destroying my company's live databases or shutting down an important server in the real world. And if something crazy happens, no biggie, I can just roll it back to yesterday. If my entire network weren't housed inside the VMware environment the performance would be incredible, but running all those other systems really bogs it down.
We tried going all-in with VMs, but found that multiple instances of SQL Server running on the same physical box basically bogged it down to uselessness. However, I don't think we've seen any serious issues once the DBs were removed from the VM stacks.
Virtualization on the desktop/workstation: Sun VirtualBox or VPC. Easy, light. We share our favorite images, keep it casual, and sometimes even sysprep them.
The main QA environments get serious with Lab Manager. It's a beast to get working, but we can't live without it. There's no way we could afford our test matrix in real machines, or maintain it without the template management. Without such a resource, there are probably things you should do that you don't.
Long-lived servers or QA DBs: VMware ESX. (No short explanation.)
We don't have perf problems with DBs and virtualization. Well, I did in Lab Manager, which is part of why DBs live on ESX in our shop. For I/O, our IT guys do magic with SAN, iSCSI, and high-quality wire. It is certainly simpler to avoid perf problems on DB servers if they are bare metal, and probably possible to squeeze out more perf from a dedicated host.
Which brings up what virtualization is and isn't for: virtualization isn't for a scenario where you are already maxing out your hardware. For example, I don't use it to dev on, because I need everything my dev box can give me. It's for replacing dozens of underutilized, hard-to-provision physical servers with dozens of easy-to-provision virtual clones on many fewer hosts. It allows hot-swapping in more capacity, and it allows engineering flexibility.
I also have some late 90s computer games that I run in virtualized Windows 98.