I'm building an application in Qt Creator on my local machine. The application will run locally and issue commands to a website hosted on a remote server. To streamline testing, I'd like the application to be built on that remote server.
What is the best way to go about this?
That first sentence is so long and so confusing... -_-
That said, Qt Creator supports remote deployment and debugging, but not remote building (as far as I know; someone please correct me if I'm wrong).
The only remote-related part of the build step is cross-compilation, which again is performed locally (using a cross-compiler, specifying the sysroot, etc.). Of course, if the target platform is the same as the one you use for development (architecture- and installation-wise), the cross-compilation chaos can be skipped entirely.
If you want to build a Qt-based application on the remote platform (and not only run it there), you will have to set up the development infrastructure (Qt dev libraries, qmake, etc.). However, I would suggest using your local system for development unless the server provides a very noticeable boost during the build step. It's easier that way, and it makes sense especially if the application you are building on the remote machine will be executed locally.
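If you do decide to build on the server, a minimal sketch of that setup, assuming an Ubuntu/Debian server and a plain qmake project (host name, paths and package names are illustrative):

    # One-time setup on the server: compiler toolchain plus the Qt dev stack
    ssh user@buildserver 'sudo apt-get install build-essential libqt4-dev'

    # Copy the project over and build it remotely
    scp -r ~/projects/myapp user@buildserver:~/myapp
    ssh user@buildserver 'cd ~/myapp && qmake && make'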
you really have three options:
run the IDE on the remote server and connect using VNC or X2Go. This requires a relatively high-bandwidth, low-latency connection or the GUI won't be responsive. This is what I personally do at my work, although our dev server is set up to mirror prod in our building, so the data connection is great.
sync your files using lsyncd and build via the command line (see the sketch after this list). Your code completion will be based on your local machine, so it won't be perfect, and you won't be able to double-click compile errors, etc. If you are brave you could maybe set up a Qt Creator build configuration to do this for you, but includes would still be broken.
use another IDE. NetBeans supports remote builds. I have never personally used this feature, but I've heard that it works ok.
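For option 2, a rough sketch, assuming lsyncd 2.x and passwordless SSH (host and paths are hypothetical):

    # Keep the local source tree mirrored on the server as you edit
    lsyncd -rsyncssh ~/projects/myapp buildserver /home/user/myapp

    # Then trigger builds from a local terminal
    ssh buildserver 'cd /home/user/myapp && qmake && make'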
We are currently running VMware Server on Windows Server 2008 R2. The hardware specs of the machine are very good. Nonetheless, performance in the virtual machines is not at all acceptable when two or more of them are running at the same time (just running, not performing any CPU- or disk-intensive tasks).
Hence we are looking for alternatives. VMware's website is full of buzzwords only; I cannot figure out whether they provide a product fitting our requirements. Alternatives from other suppliers are also welcome.
There are some constraints:
The virtualization product must run on Windows 2008 R2; the server itself will not be virtualized (hence ESX is excluded)
Many virtual machines already exist. They must be usable with the new system, or the conversion process must be simple
The virtualization engine must be able to run without an interactive user session (hence VMware Player and VirtualBox are excluded)
It must be possible to reset a machine to a snapshot and to start a machine via the command line from a different (i.e. not the host) machine (something like the vmrun commands sketched after this list)
Several machines must be able to run in parallel without causing an enormous drop in performance
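For reference, the kind of control we need looks roughly like this with VMware Server's vmrun (host, credentials, datastore path and snapshot name are placeholders):

    # Revert a VM to a snapshot and start it, from a machine other than the host
    vmrun -T server -h https://vmhost:8333/sdk -u admin -p secret \
        revertToSnapshot "[standard] testvm/testvm.vmx" CleanState
    vmrun -T server -h https://vmhost:8333/sdk -u admin -p secret \
        start "[standard] testvm/testvm.vmx"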
Do you have any hints?
Have you considered Hyper-V (the native hypervisor in Windows)?
However, I would first suggest troubleshooting the performance issues (the most common cause is not enough RAM for the VMs or the host, which results in paging and poor performance).
Though I could not find a real alternative to VMware Server within the given constraints, I could at least speed things up:
changing the disk policies from "Optimize for safety" to "Optimize for performance" reduced the time of most build projects by a third
installing the IPv6 protocol on the XP machines typically brought another 10%
The slowest integration-testing project (installation of Dragon NaturallySpeaking 12) is now done in 20 minutes instead of 2 h 20 min.
Still, when copying larger files from the host to a virtual machine, performance is unacceptable, while copying them from a different VM on the same host works far better...
I would still consider ESXi, with 2008 on top of it, if I were in your place.
We used VMware Server, and performance is simply not comparable to ESXi, especially if you are running I/O-intensive applications.
Does anyone have a good way to set up multiple CFML engines, and versions of them, together in a suitable environment for cross-testing a CFML-based application?
Ideally, I'd like this to be Ubuntu Server based, as I'm using it with VirtualBox (under Windows 7). Plus, it would be helpful if it were possible to switch between them, so my laptop can cope with one at a time rather than all of them running at once. I'm thinking of the following:
Adobe ColdFusion 9
Adobe ColdFusion 10
Railo 3.3.x
Railo 4.x
OpenBD 2.x
I'd also like to get them serving from the same shared directory, so I don't have to have a copy of the code for each engine. Cheers
You mentioned being able to "switch between them, so my laptop can cope with one at a time rather than all of them running at once". I'm guessing you are thinking that each one will run on a different VM, or that they might require a huge amount of memory. I don't think you need to worry about that. Unless you require that they be on different machines, I think you could do this all on one VM and with one instance of a servlet container (like Tomcat).
From a high-level view, here is how I would do it.
Install Tomcat
Create or download .wars for each of the engines.
Deploy said .wars to that one instance of Tomcat
Set up Tomcat to serve each of those webapps from a different host name (server.xml)
Create a code directory outside of Tomcat for your one copy of the code
Set up a symbolic link in each webapp to link the code folder into it
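A rough sketch of the host-name and symlink steps; engine names and paths are made up, and the Host entries belong inside the <Engine> element of conf/server.xml:

    # Fragment for conf/server.xml -- one Host per engine:
    #   <Host name="cf9.local"    appBase="webapps-cf9"    unpackWARs="true" autoDeploy="true"/>
    #   <Host name="railo4.local" appBase="webapps-railo4" unpackWARs="true" autoDeploy="true"/>

    # Point those host names at the machine running Tomcat
    echo "127.0.0.1 cf9.local railo4.local" | sudo tee -a /etc/hosts

    # Link the single shared code directory into each deployed webapp
    ln -s ~/code /opt/tomcat/webapps-cf9/ROOT/code
    ln -s ~/code /opt/tomcat/webapps-railo4/ROOT/code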
You should then be able to hit the same source from each engine by visiting the different host names in the browser.
I may be missing something; it has been a long time since I set something like this up. You'll likely need to make a bunch of tweaks (JVM settings, switching to the Sun/Oracle JVM vs. OpenJDK, etc.).
I don't think running this many engines will cause you great trouble. In my experience, for development, I have had three instances of CF9 running on Tomcat using only 189 MB of RAM, and each additional instance increased that number by far less than a third. It would not surprise me if you could run all of those handily with less than 512 MB of RAM, possibly even 256 MB if you are really hurting for memory.
I hope this helps.
For ColdFusion 10, Railo and OpenBD you would be looking at deploying with standalone installations of Tomcat, Jetty or JBoss.
For ColdFusion 9, probably the easiest solution is the "Enterprise Multiserver configuration" setup.
With these kinds of installations they are pretty much platform-agnostic.
The things to be aware of are the web server, proxy and JNDI ports used by each installation, but only if you want to run more than one server at a time.
After that, it's a question of whether you want to proxy from Apache or Nginx to the server instances, and which connector you want to use.
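If you do put Apache in front, a minimal sketch, assuming Ubuntu's Apache layout and a hypothetical Tomcat instance listening on port 8500:

    # Virtual host that forwards cf10.local to one server instance
    sudo tee /etc/apache2/sites-available/cf10 <<'EOF'
    <VirtualHost *:80>
        ServerName cf10.local
        ProxyPass / http://127.0.0.1:8500/
        ProxyPassReverse / http://127.0.0.1:8500/
    </VirtualHost>
    EOF
    sudo a2enmod proxy proxy_http
    sudo a2ensite cf10
    sudo /etc/init.d/apache2 reload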
No idea if this helps...
Since you've mentioned VirtualBox, I'll share my personal approach to this task. It consists of a few fairly simple steps:
Install Ubuntu Server as a VirtualBox guest (my host is also Ubuntu).
Set up only basic software like the JVM and updates. Set up the virtual machine's networking as a bridged adapter so it uses my Wi-Fi connection.
Configure my Wi-Fi router's DHCP to assign a static IP to the MAC address of the virtual machine.
Add an entry to my (host) system's hosts file: ip_assigned_to_vm virtual.ubuntu
Set up Guest Additions and mount my ~/www directory inside the machine to access web applications.
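The networking and shared-directory steps can be scripted with VBoxManage; a sketch with hypothetical VM and interface names:

    # On the host: bridge the VM's first NIC onto the Wi-Fi interface
    VBoxManage modifyvm "cfml-dev" --nic1 bridged --bridgeadapter1 wlan0

    # On the host: expose ~/www to the guest as a shared folder...
    VBoxManage sharedfolder add "cfml-dev" --name www --hostpath ~/www

    # ...and inside the guest (with Guest Additions installed):
    sudo mount -t vboxsf www /home/user/www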
Now, when I need another machine for experiments, or some other configuration of software (I've tested ACF 10 and Railo 4 this way) I do two things:
Clone the existing clean machine.
Make sure the clone uses the same MAC address on the bridged interface.
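Both steps can be done in one VBoxManage call (VM names are made up); keepallmacs preserves the MAC address, so the DHCP reservation, and therefore virtual.ubuntu, keeps working:

    VBoxManage clonevm "ubuntu-clean" --name "ubuntu-acf10" \
        --options keepallmacs --register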
That's it.
It doesn't matter which of the machines I run; they can all be accessed as http://virtual.ubuntu (of course, this requires proper web-server configuration on the guest). At the same time they are independent, and it is completely safe to do anything I wish and test anything that runs on Ubuntu.
The obvious downsides are that I can run just one machine at a time, and that much more disk space is used. Not a problem for me.
I've tried the approach with Tomcat and multiple WARs, but it has a couple of issues: I can't use different JVM and Tomcat settings per engine, and if I screw up the setup, all the Tomcat hosts are down.
Hope this helps.
I am planning to use VMware Workstation to install Linux, but my use case requires having multiple kernel versions as part of my development work.
Does VMware allow this?
I mean, will GRUB (or another boot loader) prompt me to load the kernel of my choice the way it would on an actual system?
Thanks, kedar
Yes, it will allow this. Linux does not care whether it is running in a VM or on real hardware. As far as Linux knows (except for the VMware Tools, of course), it is running on real hardware.
The VM "disk" is just a file on the host file system, so it can be set up independently of that host file system, including boot loaders and such.
VMware Workstation mimics a true hardware installation very well; almost everything you can do on a physical box you can do in a virtual machine. It's not perfect, but it is pretty close. I use a two-physical-machine setup to mimic a ten-machine domain lab. The ability to save snapshots or to pause a machine makes it better than a physical machine in some respects.
It is a great tool and one that I recommend for anyone learning IT.
Curious, how many of you develop under a VMware environment?
Is it popular for employers to set up VMware for everyone?
It seems like a great way to roll out new desktop computers, perform backups, etc.
I'm just worried about the performance (VMware on PCs).
Update
I was just looking at VMware's site: $1.3 billion in sales... wow!
I almost exclusively use virtual machines for development and am very happy doing so. The flexibility of multiple sandboxed environments is definitely worth a small trade-off in performance.
Clearly a VM will never give you the same results as running on a native system, but you should be able to get performance that's easily within 10-15% of the real thing. In my experience many of the performance problems people encounter are due to under-specced or poorly configured systems and VMs.
I primarily develop with a Vista x64 virtual machine on a 2.4 GHz Core 2 Duo with 4 GB of RAM. Of this I assign 2 GB of RAM and two virtual cores to my main VM. If I'm running more than one VM I usually change this to 1-1.5 GB and one core.
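For what it's worth, that allocation boils down to two entries in the VM's .vmx file (the path here is hypothetical; change the values only while the VM is powered off):

    # Check the current allocation: memsize is guest RAM in MB,
    # numvcpus is the number of virtual cores
    grep -E '^(memsize|numvcpus)' Vista64.vmwarevm/Vista64.vmx
    # expected output:
    #   memsize = "2048"
    #   numvcpus = "2"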
Here are some quick Geekbench test results. (Note that Geekbench results under OS X and Vista don't seem comparable; they're listed here to show the impact of the configurations on both systems.)
Fresh boot, no active applications:
Native OSX - 3115
Native OSX running Vista 64 VM - 3042
Native Vista 64 (2.4GHz x 2, 4GB) - 2596
Vista 64 VM (2 VCore, 3GB) - 2362
Vista 64 VM (1 VCore, 2GB) - 1892
These are the most common reasons for poor VM performance in my experience:
Under-specced machines. Ideally you should be able to dedicate one core and 1 GB of memory to each VM you plan to work in. Contrary to what you might read, I've found that Vista runs within a few percent of XP with 1 GB of memory.
Running too many things in your VM. Keep your email, web browsing and IMs to Mummy on your native OS.
In your VM, turn off items such as screensavers, background apps and non-essential services. If your VMs are backed up you may want to turn off System Restore.
If possible, keep your VMs on a separate hard drive from your native OS so their disk access is independent if one or the other starts paging.
Defrag your VM drive. It does make a difference.
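A sketch of that routine, with made-up paths: defragment inside the guest first, then the virtual disk itself (vmware-vdiskmanager ships with VMware Workstation and Server):

    # Inside the Windows guest: defragment the guest file system
    defrag C:

    # On the host, with the VM powered off: defragment the .vmdk
    vmware-vdiskmanager -d "C:\VMs\dev\windows-dev.vmdk"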
VMware Workstation 6.5 runs like a champ on my older Athlon X2. I use Visual Studio on my host machine and have many VMs installed with various OS, framework and browser combinations. VMware Workstation adds VM debugging to Visual Studio as well, so I can just hit F6 to start my app in any one of my VMs and debug it under any OS I want. The only catch is that you need at least 4 GB of RAM to make it practical to use more than one VM at a time.
My company uses VMware to test our webapp with different browser/OS combinations, and everyone has at least one VM on their machine for this purpose. We all develop on the native machine, however; even on a quad-core machine with 4 GB of RAM, it takes about 20 minutes to do a clean build of our app! For my part, I dislike using VM images because of how much paging they do. A few developers here have started using Linux as the host OS and running Windows VMs inside it, and they get much better performance due to reduced paging (Linux is way better at memory and disk-cache management, plus it has a better scheduler). The extra VMs for testing that would normally run inside our Windows instance thus get moved to run side by side on the Linux host, which improves performance.
I switched to developing exclusively in VMs around the time I started doing work with technologies like BizTalk Server, SharePoint, and betas/CTPs of various things... it just got to be impossible to have all the stuff co-exist on the same box.
Since switching I have enjoyed many other benefits to developing in a VM - snapshots, portability, dynamically marshaling resources, etc.
The ultimate benefit is that VMware has a presence on many different host operating systems, so I am free to select the host OS of my choice: XP, Vista, Linux, OS X, etc.
Now I run OSX on a MacBook Pro, which allows me to do Mac and iPhone development as well as Windows development, all on the same box.
That is the long-winded backstory that brings me to answering the question: as long as your hardware is decently spec'd you should not run into any performance problems, even doing crazy stuff with BizTalk and SQL Server.
We use it where I work. We are even making a DVD with the appliance on it to reduce the time it takes new developers to get up to speed.
Regarding performance, I have seen a performance hit. It seems mostly limited by the hard drive if you have snapshots enabled. Of course, after I moved my VMs to a VelociRaptor, even that performance hit is no longer noticeable.
Oh, I develop ASP websites and C/C++ applications using Visual Studio 2005 and 2008.
Sadly, it's not yet "popular" in the sense of "common," but it's definitely "popular" in the sense of "enjoyed" by those who try it. As a consultant, I love it, since it allows me to swap tool chains in a matter of minutes and, at the end of an engagement, burn a DVD, throw it in the project file, and be done with it.
Several responders seem to be emphasizing the use of VMs for testing, where I think it is beginning to gain some traction, at least within more sophisticated shops. It's clearly a huge win for deployment and compatibility testing.
Depends on the employer, I suppose. On a machine that is adequately-equipped, VMWare (or any virtualization software) performs perfectly fine. On machines that you are more likely to be forced to use at the majority of programming jobs, not so much.
I personally do not use VMWare at work. My work machine barely has enough power to natively handle the tools I need to use.
It's very popular unless the employer is cheap; I used it at a few companies. It's great for .NET or any language where you have to check whether the thing works on different OS versions/platforms. The most common way is not to run VMware on your own computer but to connect to it remotely.
I've started using VMware for almost everything on my personal PC.
I keep my native Windows install for games only and have separate VMs for everything else:
a general office workstation (MS Office, accounting software, general crapware). This one stays on almost all the time.
a WAMP stack dev environment
a MS stack dev environment
a throwaway environment for beta testing and toying around with things that might break the OS install.
Everything is pretty fast. I use a streamlined WinXP base install that takes up very little space/memory.
Disk I/O seems to be the bottleneck for me, but I feel we are only one generation (6 months?) away from quite affordable SSDs.
I couldn't go back to physical computing.
Once you start using VM's you'll never go back. I use VMware on a MacBook Pro for Windows and Linux development and I'm very happy with the result.
Observations:
get plenty of RAM. 4 GB is quite usable, but 8 GB is better. You're a developer, you have a lot of apps and web pages open, right?
allocate one core to the VM; it's faster than two.
follow VMware's recommendations for allocating RAM to the guests
use a virtual hard drive for the guest OS. It's much faster than running the guest from a Boot Camp partition.
VMware doesn't have the WDDM driver needed to enable Aero.
when I did an eval, the VMware video drivers on a Linux host didn't seem nearly as fast as those on Windows or OS X hosts. Video for Windows guests is noticeably slower on a Linux host vs. the other two OSes. This was the main reason I chose a Mac over a Linux machine.
In my development environment I use a couple of VMs, usually one (Linux) server per role (such as Subversion, MySQL databases, web server, Trac server, etc.). This way my primary machine remains clean, nothing can affect my work by running amok, and the data remains secure on the VM host.
VMware is quite high-level; for production I'd recommend a more low-level, bare-metal solution like Xen.
VMware as a Windows development environment runs terribly on my dual-core with 2 GB of RAM (XP guest, XP host). Even with nothing running on the host except VMware, there is constant paging that takes about a minute to settle every time I switch applications. Heck, native VS2008 doesn't even run that great during IntelliSense-heavy use (occasional noticeable lag). While using a fixed VM image as my day-to-day working environment has a ton of benefits, the second-to-second performance lag is just too frustrating.
My employer is buying me a nice 64-bit system with a ton of RAM, so I'll revisit the subject in a month. For now I just reimage my machine every couple of months.
Console development obviously performs just fine, and for server applications (deployment), where high-memory applications aren't constantly launching and closing, VMware is lovely and performs fine.
I am doing some SharePoint development and I really love the flexibility that comes from using VMware Player on my laptop. I have an image with WSS and the VS2005 tool chain, and another image with MOSS and VS2008/SQL Server 2008 for when I need to push it to the max.
When the 2008 image became corrupt (too many beta versions, I guess) I could just delete it and create a new one from a prior backup.
Being able to develop in a server environment while on the train speaks for itself.
PS: It only takes 4 GB to run the VMware image, and it performs really nicely, even with a slow 5600 rpm disk drive.
Personally, I would love to use a virtualization solution for my day-to-day development because of the ability to test and develop on multiple operating systems simultaneously. However, since my day-to-day development involves quite a bit of OpenGL, this currently isn't a workable solution: most of the time the OS in the VM will fall back to software rendering due to the lack of drivers and hardware acceleration.
I develop under a VMware version of my entire network, including the AD server, DB server, etc. Needless to say, the performance is terrible, even on our VMware server with 4 GB of RAM. But it does allow me to develop without fear of accidentally destroying my company's live databases or shutting down an important server in the real world. And if something crazy happens, no biggie, I can just roll it back to yesterday. If my entire network weren't housed inside the VMware environment the performance would be incredible, but running all those other systems really bogs it down.
We tried going all-in with VMs, but found that SQL Server running multiple times on the same physical box basically bogged it down to uselessness. However, I don't think we've seen any serious issues once the DBs were removed from the VM stacks.
Virtualization on the desktop/workstation: Sun VirtualBox or VPC. Easy, light. We share our favorite images, keep it casual, and sometimes even sysprep them.
Our main QA environments get serious with Lab Manager. It's a beast to get working, but we can't live without it. There's no way we could afford our test matrix in real machines, or maintain it without the template management. Without such a resource, there are probably things you should do but don't.
Long-lived servers or QA DBs: VMware ESX. (No short explanation.)
We don't have perf problems with DBs and virtualization. Well, I did in Lab Manager, which is part of why DBs live on ESX in our shop. For I/O, our IT guys do magic with SAN, iSCSI, and high-quality wire. It is certainly simpler to avoid perf problems on DB servers if they are bare metal, and probably possible to squeeze more perf out of a dedicated host.
Which brings up what virtualization is and isn't for: it isn't for a scenario where you are already maxing out your hardware. For example, I don't use it to dev on, because I need everything my dev box can give me. It is for replacing dozens of underutilized, hard-to-provision physical servers with dozens of easy-to-provision virtual clones on many fewer hosts. It allows hot-swapping in more capacity, and it gives you engineering flexibility.
I also have some late 90s computer games that I run in virtualized Windows 98.