In a research project involving virtualization and power management, I am testing various resource allocation scenarios and custom power management algorithms. I am interested in isolating a virtual machine so that it uses only a specific CPU core.
I was considering Windows Server 2008 R2 and Hyper-V, but Hyper-V does not allow setting CPU affinity for a virtual machine. Is there any way I can make sure that a virtual machine running a CPU-intensive task will use only one core of the CPU (the VM is configured to use a single CPU) and leave the rest of the cores available for other tasks?
VMware ESX Server is an interesting choice since it provides the settings I need (including hot memory add); however, it seems like a closed system. Does the OS of ESX Server, which from what I understand is based on Linux, allow installing custom applications through which to control power management of the physical server's components (e.g. performing CPU frequency scaling)? Does it provide any APIs? I am aware the product already has power management features, but I am looking for a way to achieve custom implementations.
Besides these two solutions, can you recommend other hypervisors that provide facilities such as setting CPU affinity, CPU limits and reservations, and hot memory add, and that allow custom applications to run on the host server (and provide APIs to program such applications)? Perhaps Citrix XenSource or KVM (I am not familiar with these solutions)?
I don't think VMware would support modifications to the server, but you can get a command line on the ESX host; you're essentially right, it's Linux underneath (a Red Hat derivative, I believe).
Xen and KVM are open source, so you can hack away. If you have budgetary constraints, you may be best advised to go down the KVM route, as the community will support you; involving Citrix may prove troublesome in an enterprise setting.
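If you do take a Linux host (KVM, or a Xen dom0), custom frequency scaling is exposed through the kernel's cpufreq interface in sysfs, so a userspace power manager is straightforward to prototype. A sketch, assuming your hardware exposes the userspace governor (available governors and frequency values vary per machine):

```shell
# List the governors and frequencies this core supports
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies

# Hand frequency control to userspace, then pin core 0 to a fixed
# frequency (1600000 kHz is only an example value; pick one from the
# list above)
echo userspace | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 1600000   | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
```

Your custom algorithm can then be an ordinary daemon that reads load statistics and writes to these files.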
is there any way I can make sure that a virtual machine running a CPU intensive task will use only one core of the CPU
OpenStack (with KVM as the hypervisor) provides a CPU pinning feature through which you can bind a vCPU to a physical CPU core. Let me know if you need more information on the subject.
Here is a link explaining the feature. This link also confirms that Hyper-V doesn't support CPU pinning.
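For reference, the pinning itself is a one-liner at both layers; a sketch, where the flavor and guest names are made up:

```shell
# OpenStack: instances launched from this flavor get dedicated physical
# CPUs (the compute nodes must be configured for pinning, e.g. via
# vcpu_pin_set in nova.conf)
openstack flavor set m1.pinned --property hw:cpu_policy=dedicated

# Plain libvirt/KVM underneath: pin vCPU 0 of a running guest to
# physical core 2
virsh vcpupin my-guest 0 2
```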
I want to know the features supported by the CPU of every x86-64/amd64 server I can rent on AWS, Azure, and GCP, so I can decide which compilation flags are safe to use to produce binaries that will work on the oldest currently available instance types.
I can create server instances myself, run cat /proc/cpuinfo, and destroy the instances. But I wonder: maybe someone has already compiled the answers and shared them?
I assume the smallest size of every instance type uses the same CPU as the largest, and that the same hardware is used in all regions. Is that true?
I also assume there won't be surprises like "usually you get a server with one CPU, but randomly sometimes you get a server with an older CPU with different supported features, for the same price!". Does anyone know of a counterexample?
Info about other big cloud vendors is welcome, but I'm mostly interested in those three.
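For context, if no one has compiled this already, my plan would be to collect the /proc/cpuinfo dumps myself and take the intersection of their flags lines as the safe compiler baseline; a minimal sketch:

```python
def common_cpu_flags(cpuinfo_dumps):
    """Return the set of CPU flags present in every /proc/cpuinfo dump.

    Each dump is the raw text of /proc/cpuinfo from one instance type;
    only the first 'flags' line of each dump is considered (all cores
    of one machine report the same flags).
    """
    flag_sets = []
    for dump in cpuinfo_dumps:
        for line in dump.splitlines():
            if line.startswith("flags"):
                flag_sets.append(set(line.split(":", 1)[1].split()))
                break
    return set.intersection(*flag_sets) if flag_sets else set()
```

Flags present in the intersection (e.g. avx2) would then be safe to assume when choosing -march/-m options.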
I was working on a project that aims to make VM assignment more efficient.
By VM assignment, I mean how the OpenStack platform handles a request for new virtual machine creation as soon as it arrives. In the OpenStack framework, nova-scheduler does this part. I was looking to add more features/filters to the Nova scheduler.
I wanted to implement some special kinds of filters in the Nova scheduler: filters with rules that would maintain an averaged load across the whole system and save energy (generally, a system under medium load consumes less energy than a system running at maximum load), and filters that would allocate virtual machines close together, i.e. on the same rack, when a request to create a cluster of VMs is received. I would like to know what you think of the feasibility of such filters, and how effective they could be.
Any help would be highly appreciated.
By default, OpenStack assigns VMs to the bare-metal host that has the most memory available.
scheduler_driver=nova.scheduler.multi.MultiScheduler
scheduler_driver_task_period=60
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
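If you go the custom-filter route, the logic itself is small. A real filter would subclass nova.scheduler.filters.filters.BaseHostFilter and be registered via scheduler_available_filters in nova.conf; the sketch below stubs out the base class so it runs standalone, and the cpu_load attribute is an assumption you would feed from your own monitoring agent:

```python
class AveragedLoadFilter:
    """Sketch of a Nova-style host filter that rejects hosts whose CPU
    load exceeds a threshold, spreading load across the system.

    In Nova this would subclass BaseHostFilter and implement
    host_passes(host_state, spec_obj); that interface is mimicked here.
    """

    def __init__(self, max_load=0.70):
        # Hosts busier than this fraction are filtered out
        self.max_load = max_load

    def host_passes(self, host_state, spec_obj):
        # cpu_load (a 0.0-1.0 fraction) is a hypothetical attribute
        # supplied by your monitoring; hosts not reporting it pass.
        load = getattr(host_state, "cpu_load", 0.0)
        return load <= self.max_load

    def filter_all(self, host_states, spec_obj):
        # Nova calls this with the candidate hosts for one request
        return [h for h in host_states if self.host_passes(h, spec_obj)]
```

Energy-aware consolidation (packing VMs onto fewer hosts so idle ones can be powered down) would invert the comparison, or more naturally be written as a weigher rather than a filter.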
I currently work in an organization that forces all software development to be done inside a VM. This is for a variety of risk/governance/security/compliance reasons.
The standard setup is something like:
VMWare image given to devs with tools installed
VM is customized to suit project/stream needs
VM sits in a network & domain that is isolated from the live/production network
SCM connectivity is only possible through dev/test network
Email and office tools need to be on live network so this means having two separate desktops going at once
Heavyweight dev tools in use on VMs so they are very resource hungry
Some problems that people complain about are:
Development environment runs slower than normal (host OS is windows XP so memory is limited)
Switching between DEV machine and Email/Office machine is a pain, simple things like cut and paste are made harder. This is less efficient from a usability perspective.
Mouse in particular doesn't seem to work properly using VMWare player or RDP.
Need a separate login to Dev/Test network/domain
Has anyone seen or worked in other (hopefully better) setups to this that have similar constraints (as mentioned at the top)?
In particular are there viable options that would remove the need for running stuff in a VM altogether?
In particular are there viable options that would remove the need for running stuff in a VM altogether?
Given that you said there are unspecified risk/governance/security/compliance reasons for your organization's use of VMs, I doubt any option we could provide could negate those. Ultimately it sounds like they just need their development team as sandboxed as possible.
(And even so, the question/answers would probably be better off at serverfault since it's more networking/security oriented.)
It sounds like a big problem is not having enough horsepower on the host OS. WinXP should be fine, but you need to have adequate hardware. i.e. at least 3 GB RAM, dual core CPU, and hardware that supports virtualization. Clipboard sync should be working with the VM.
I am not currently doing this, but I've thought about it; we're kicking the idea around as a way to standardize the dev environment and to avoid wasting a day when you get a new PC. I'm dismayed to hear that it's not the utopia I had dreamed of...
I've been using VMs as a development environment for a long time. There's nothing inherently wrong with it, and it presents lots of benefits.
Ensuring a consistent environment
Separating file systems for different backup scenarios
Added security
Potentially gives developers access to more raw computing power.
There is a lot of innovation in the VM world, as evidenced by the growing popularity of VM farms, hardware support for virtualization, and controlled "turnkey" solutions, like MS's VirtualPC images for testing browser compatibility and the TurnKey set of appliances.
As others have said, your issues are probably due to insufficient hardware or sub-optimal configurations.
Development environment runs slower than normal (host OS is windows XP so memory is limited)
This should not be noticeable. XP vs. Windows Vista or Win7 is a marginal comparison. I would check the amount of physical RAM allocated to the VM.
Switching between DEV machine and Email/Office machine is a pain, simple things like cut and paste are made harder. This is less efficient from a usability perspective.
There are VM-specific optimizations/configurations that can make these tasks seamless. I would consult your VM maintenance staff.
Mouse in particular doesn't seem to work properly using VMWare player or RDP.
Again, should be seamless, but consult VM staff.
Need a separate login to Dev/Test network/domain
I would see this as a business decision: your company could obviously set up virtual machines with the same domain policies as your own personal workstation, but may have other (big brother?) purposes for forcing you to log in separately.
As far as using VM's as an agent of control, I think there are better solutions, like well-designed authorization controls around the production machines. There's nothing like paper trails to make people behave themselves.
I'm running a few Blockchain related containers in a cloud environment (Google Compute Engine). I want to measure the power/energy consumption of the containers or the instance that I'm running.
There are tools like powerapi which is possible to do this in real infrastructure where it has access to real CPU counters. This should be possible by doing an estimation based on the CPU, Memory, and Network usage. There's one such model proposed in the literature.
Is it theoretically possible to do this? If so, are there any existing tools for the task?
A generalized answer is "No, it is impossible". A program running in a virtual machine deals with virtual hardware. The goal of virtualization is to abstract running programs from the physical hardware, while access to the physical equipment is required to measure energy consumption. For instance, without processor affinity enabled, it's unlikely that PowerAPI will be able to collect useful statistics, due to virtual CPU migrations. Time-slicing, which allows multiple vCPUs to run on one physical CPU, is another factor to keep in mind. To say nothing of the energy consumed by RAM, I/O controllers, storage devices, etc.
A substantive answer is also "No, it is impossible". The authors of the manuscript use the PowerAPI libraries to collect CPU statistics and a monitoring agent to count bytes transmitted through the network. HWPC-sensor, which PowerAPI relies on, has a strict requirement:
"monitored node(s) run a Linux distribution that is not hosted in a
virtual environment"
Also, the authors emphasize that they couldn't measure absolute values of power consumption; rather, they used percentages to compare certain Java classes and methods in order to infer the origin of energy leaks in CPU-intensive workloads.
As for network I/O, the number of bytes used to estimate power consumption in their model differs significantly between the network interface exposed to the virtual guest and the host's hardware port on the network, with the SDN stack in between.
Measuring in cloud instances is more complicated, yes. If you do have access to the underlying hardware, then it is possible. There is a fairly recent project that gets power consumption metrics from a QEMU/KVM hypervisor and shares the metrics relevant to each virtual machine through another agent in the VM: github.com/hubblo-org/scaphandre/
They are also working (at a very early stage) on a "degraded mode" to estimate* the power consumption of the instance/VM when such communication between the hypervisor and the VM is not possible, as on a public cloud instance.
*: Based on the resources (cpu/ram/gpu...) consumption in the VM and characteristics of the instance and the underlying hardware power consumption "profile".
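This kind of estimation usually reduces to the linear utilisation model from the literature, P = P_idle + (P_max - P_idle) * u, with each VM's share attributed proportionally to its CPU time. A minimal sketch; the idle/max wattages are placeholder assumptions you would take from the hardware's measured power profile:

```python
def estimate_power_watts(cpu_util, p_idle=100.0, p_max=250.0):
    """Estimate instantaneous host power from CPU utilisation.

    cpu_util: CPU utilisation as a fraction in [0, 1].
    p_idle/p_max: watts drawn at idle and at full load, taken from a
    power profile of the underlying hardware (values here are
    placeholders, not real measurements).
    """
    u = max(0.0, min(1.0, cpu_util))  # clamp to a valid utilisation
    return p_idle + (p_max - p_idle) * u


def vm_share_watts(total_watts, vm_cpu_seconds, host_cpu_seconds):
    """Attribute a share of host power to one VM, proportional to the
    CPU time it consumed over the sampling interval."""
    if host_cpu_seconds <= 0:
        return 0.0
    return total_watts * (vm_cpu_seconds / host_cpu_seconds)
```

Such figures are estimates, not measurements; as noted above, RAM, I/O, and the SDN stack all consume power that a CPU-only model does not see.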
Can we conclude that it is impossible to measure the power/energy consumption of a process running on a virtual machine?
I wondered if anyone uses virtualized desktop PCs (running WinXP Pro or older) to keep some old, seldom-used applications available for ongoing tasks.
Say you have a really old project that every once in a while needs a document update in a database system or something like that. The database application is running on a virtualized desktop that is only started when needed.
I think we could save energy, hardware and space if we would virtualize some of those old boxes. Any setups in your company?
edit: Licensing could be a concern, but I guess you have a valid license for the old desktop box. Maybe the license isn't valid in a VM environment; I'd definitely check that beforehand.
Sure enough, if the application is performance-critical, virtualization could hurt. But I'm thinking of some kind of outdated application that is still used to perform, say, a calculation every 12 weeks for a certain customer/service.
I use virtualized desktops for:
Support that requires VPN software I do not want on my own desktop. This also lets a whole team share the support computer for a specific customer.
A legacy system of which we use several different versions (depending on the customer's version); they're not really compatible, so it's good to have a virtualized desktop for each version.
We use virtualisation to test on a variety of Operating Systems - the server application runs under linux, and we have a production (real) server, and a couple of test servers, which are all VMs.
The client runs under Windows, which, being an OS X user I have to run in a VM, and the other developer I work with runs an XP VM on his 8-core Vista box.
(I also have a separate VM for running CAD software, but that's not really programming)
It depends on the requirements of the legacy systems. Very often, if a system relies on a certain clock frequency, it is better and more reliable to keep the older OS running on its original hardware, as virtualized OSes can do funny things to performance.
If the legacy systems aren't that critical, then go for it! One piece of advice I would give is to ensure that the system works FULLY before chucking out your old 3.11 systems, as I have been stung before! Performing the testing fully can cost more money than you might save, but it's up to whoever makes the decisions to ensure that is considered.
We use virtualisation for testing out applications on Vista. Or rather customers do the testing and we use virtualisation to reproduce the bugs they complain about.
I guess the thing that would stop me from using lots of virtual instances of my favourite proprietary OS would be licensing. I presume Microsoft would want me to have a licence for every installation, virtual or otherwise?
We use VMWare with a virtual windows XP here at work to run some old development tools with very expensive licenses that don't run at all on Vista. So VMWare saved us about $5000 in licenses.
Since my last machine upgrade I have been running virtualised OS's for a number of tasks. For example I use a different set of Visual Studio plugins for managed and c++ unmanaged development. Some things I found:
Run your VMware setup on a machine with plenty of resources. I'll repeat: plenty of resources! A fast quad-core and 8 GB of memory is what my current machine runs, and it runs sweet (warning: you need a 64-bit OS for the 8 GB!).
I wouldn't worry about app performance if your current physical hardware is old (2+ years). With a decent machine I find the virtualized apps run faster than on the legacy hardware!
When upgrading to a new workstation, P2V your old workstation. No need to worry about Synergy or a KVM switch during the transition period any more!
I've used virtualisation so I could take my development environment around with me while travelling. As long as I could install MS Virtual PC, (and the PC/laptop had generous enough RAM) then I could access all my tools, VPN, Remote desktop links, SQL databases etc...
Worked fairly well, just a little slower than I like. I could have carted a laptop around, but found a small portable harddrive to be lighter/easier and just as effective.
However, when consulting for several clients, all with different VPN requirements/passwords/databases/versions of frameworks and tools, I've found that having a virtualised support environment for each is well worth it. Multiple users then have access to what is needed when supporting each client; they just need to remote desktop into (or run directly) the virtualised instance.
I've used VMs to handle work-related tasks that I didn't want / couldn't do on the company-issued laptop. Specifically, I needed to have several editions of the JRE running at the same time, which Java doesn't really like.
To get around this, I built several VMs that each ran the one tool I needed in trimmed-down XP instances.
Another thing to consider: if you have a 5-year-old server running some app, it's probably going to run just fine in a VM on new hardware. So if you have a rack of old devices, buy one or two "real" servers, install something like ESX (I'm most familiar with that tool, though Xen and others exist), then use a physical-to-virtual conversion tool to switch those old devices to VMs. You can reduce your electricity consumption, management headaches, and worries about a critical device failing and not being able to find hardware for it.
We use VM for legacy apps, and have retired old machines that served up those apps. It eliminated the concern of matching drivers from NT to Win2k3. From a disaster recovery perspective this also helped as we couldn't find boxes to support the old apps at the DR data center.
The likes of VMWare are invaluable tools for browser testing of web applications. You can pretty easily test many combinations of OS and browser without having rank upon rank of physical machines running that software.