C++ IDE that can build over SSH

We are moving our development from C to C++, but all build servers run Linux and development happens on Windows machines. Our C editor does not handle C++ very well, so we are looking at alternatives.
The code itself lives on the build server, accessed from Windows through a \\opt\code... type link.
We need SSH, as that is the normal connection to the build servers. We would like an integrated solution where errors/warnings can be opened in the editor. We do not care about running the code.
Are there any free editors that can execute builds over SSH?
Thanks.

NetBeans allows you to build over SSH. We are using this from Linux development computers to Linux build machines. I am not sure if this is possible from Windows to Linux. Here is a tutorial: Tutorial

This link suggests this should be possible using plink and any editor that can run plink as a compile command and capture the resulting stdout and stderr output.
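
To make that concrete, here is a minimal, hypothetical sketch of what such a "compile command" boils down to: spawn plink against the build server, run make remotely, and read back the merged output so an editor (or this little driver) can parse the diagnostics. The host name and the /opt/code path are placeholders, not details from the question.

```cpp
// Minimal sketch: run a remote make through plink and capture its output.
// "user@buildserver" and "/opt/code" are placeholder values.
#include <cstdio>
#include <iostream>
#include <string>

int main() {
    // 2>&1 folds stderr (where gcc writes its diagnostics) into stdout.
    const std::string cmd =
        "plink user@buildserver \"cd /opt/code && make\" 2>&1";

#ifdef _WIN32
    FILE* pipe = _popen(cmd.c_str(), "r");
#else
    FILE* pipe = popen(cmd.c_str(), "r");
#endif
    if (!pipe) return 1;

    char line[4096];
    while (std::fgets(line, sizeof line, pipe))
        std::cout << line;   // an editor would parse "file:line: error: ..." here

#ifdef _WIN32
    return _pclose(pipe);
#else
    return pclose(pipe);
#endif
}
```

Any editor whose "build command" can be pointed at an arbitrary command line is effectively doing this same step itself.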

You can use Visual SlickEdit; it has a scripting language which can be used to spawn commands. By the way, if your build is Linux-based I suggest you edit on Linux (there are various programmers' editors available there for free).
On Windows almost every editor worth calling an editor can manage builds (often via make), so even Vim or Emacs can be productive choices (yeah, I'm not a big fan of IDEs; my actual editor is Vim with a good ton of vimscripts, and yes, I've got code navigation, error browsing and the speed of light while typing - and the same is possible with Emacs).
UltraEdit is another editor able to edit files over FTP (and maybe over SSH), but I doubt it can spawn a remote make and fetch the results.
If you're not relying on full builds, but just compiling, maybe it's worth checking out the Cygwin project and seeing whether you can arrange compilation under Windows, then manually resort to the Linux box for the build. This would probably remove the compilation problem (every programmer would compile on his own box, and only builds (compile+link) would be left on the Linux box).

You might use X forwarding and run any Linux IDE on the Linux side while operating it from the Windows machines. See http://www.math.umn.edu/systems_guide/putty_xwin32.html for an example.
Or even set up a VNC remote desktop connection.

I never tried this, but it looks like you can do this using Code::Blocks and Xming.
Here is a tutorial: http://wiki.codeblocks.org/index.php?title=Using_Xming_for_remote_compilation

You could try using Dokan SSHFS to mount the code as a drive on each Windows machine. Then developers are free to use whatever IDE they choose.

Related

In what IDE / compiler can I develop C++ code that will be deployed on a FreeBSD server?

My code will be deployed on FreeBSD. Would I be able to code using VS2010 on Win7? Can this be done on Linux using gcc? Or do I need to have FreeBSD installed on my laptop?
I plan on buying a new ultrabook and am not sure FreeBSD will support the drivers or wireless. What's the best practice here? VMs?
Thanks mods.
You can just write code in Visual Studio; however, you will not be able to compile and run it on Windows unless you are only using standard libraries.
To compile and test your code you need a FreeBSD somewhere.
I have never used FreeBSD on a laptop, so I can't say if dualbooting it with Windows is a good idea.
The best thing would be if you had access to a test server with the same configuration as your production server (i.e. same FreeBSD version, same packages, etc.). Then you could write code in VS, check it into a repository, SSH to the server, update a local copy of the source there and build.
If that is not an option, I recommend setting up a virtual environment. Download VirtualBox, obtain FreeBSD and install it on the VM. Set up port forwarding for SSH and then the process is pretty much the same as with a separate server. You may have to make some additional tweaks depending on the nature of the work you'll be doing.
Here is a setup I have used in cases where I am not using any OS specific library.
On your Windows 7 machine, develop your application as usual without using MFC/ATL etc. If you are linking to external libraries/APIs you will have to use ones which are portable across Windows and Linux (e.g. the Boost libraries). Do not use pre-compiled headers (stdafx.h) or any other option which is Windows specific. (A short sketch of this kind of portable code follows after these steps.)
Get VirtualBox and install your favorite OS (FreeBSD in this case) and your favorite toolchain (gcc or clang).
Share the disk/folder you are developing your app in on the Windows file system so that it is visible in VirtualBox. In Ubuntu they end up under /media/sf_Folder.
Compile in FreeBSD in VirtualBox. You will need an alternative make system set up in parallel, in addition to the stuff Visual Studio creates. Make sure there are no conflicts between FreeBSD and Windows in the directories used by the build system (to store object files etc.). You can try out make-it-so (http://code.google.com/p/make-it-so/) to convert your VS solutions to gcc makefiles.
This way you can continue to use your favorite IDE and also rebuild on your target *ix OS on the same machine.
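
As a hedged illustration of the first step above (keeping the code free of MFC/ATL, <windows.h> and a pre-compiled stdafx.h), here is a small made-up example; standard-library-only code like this builds unchanged with Visual Studio on Windows and with gcc/clang inside the FreeBSD VM.

```cpp
// portable_lines.cpp -- illustrative only: no Windows-specific headers or
// build options, so the same source compiles on both sides of the VM share.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read a text file line by line; behaviour is identical on both platforms.
std::vector<std::string> readLines(const std::string& path) {
    std::vector<std::string> lines;
    std::ifstream in(path.c_str());
    std::string line;
    while (std::getline(in, line))
        lines.push_back(line);
    return lines;
}

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <file>\n";
        return 1;
    }
    const std::vector<std::string> lines = readLines(argv[1]);
    for (std::size_t i = 0; i != lines.size(); ++i)
        std::cout << lines[i] << '\n';
    return 0;
}
```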
gcc + gvim + ((ctags with omnicppcomplete) or (clang with clang_complete)) and STL + Boost.

What are some ways that I can develop C++ apps in Linux from a Windows workstation?

I'm developing C++ apps for Linux, but my workstation is Windows 7. I've read that Visual Studio is the strongest C++ IDE for Windows, but I actually want to execute the code on Ubuntu and be able to use a more graphically pleasing debugger than gdb, although the functionality of gdb is pretty good. I'm really happy with valgrind as well, but again, I'd like to be able to leverage that from an IDE on Windows.
I currently use QtCreator as my C++ IDE and I edit the files over a samba mount to the linux box. I use Putty to run the Linux commands. I use git as my source control system, gcc as my compiler and cmake as my build system. I like QtCreator, but as I have it configured, I'm not taking advantage of code-completion or debugging.
The closest thing I've seen is CodeWarrior. It allows for executing code on remote embedded systems and a full debugger. Has anyone ever used this for general app development on Ubuntu?
Is QtCreator the right IDE for me? Is there something else that I can do to configure it so that it'll give me those rich IDE features that I'm looking for? Or should I look to another IDE? Also, are there some tools that I've neglected to mention that would make C++ development easier on a Linux box from a Windows workstation?
Thanks in advance...
It is not clear - do you run QtCreator on Windows?
If so, you can run QtCreator on Linux instead: install an NX server on Linux and an NX client on Windows (http://www.nomachine.com/).
Then you run the NX client on Windows, log in to Linux, and work on Linux. Compared with virtual machines, you get better performance.
Use VirtualBox and linux virtual machines?
X Windows.
You could install Cygwin to run an X11 server on your Windows 7 desktop, then run an X11 graphical IDE like QtCreator on your Linux server that renders directly to your Cygwin Windows 7 desktop. I actually tried setting this up with Code::Blocks on openSUSE and Cygwin on Windows 7 just a few weeks ago because I'm in the same situation you're in. It works... kind of. There are weird intermittent errors.
Your scenario is exactly the scenario that the X Windows system was designed for, and it is awesome in concept, but the actual X11 protocol design and implementation is, I gather, old and pretty hairy. I have very little experience with X, but the people who do have lots of experience with it seem to complain about it a lot, and I suppose there are good reasons for that. Too bad, because it would be wonderful if there were a technology like X Windows that worked. AJAX is basically a cheap hack for solving the same kind of problem that X Windows tried to solve... running a remote application with local rendering of a rich GUI.
I gave up on X and I still do the same thing you do: I have putty and Samba-mounted files that I edit with Visual Studio. Visual Studio is the best text editor I've ever used. All the other Visual Studio IDE features are gravy.
There are some solutions:
VMware: not free but really good
VirtualBox: free but less powerful than VMware
KVM/QEMU: free but less powerful than VMware

Tools for Unix <-> Windows C++ development

I am doing some C++ cross development - been doing that for a while on Windows and recently started on Unix.
I suppose what I am after is to simplify the Unix development experience - I have a local Windows box I do development on, and a remote Solaris box which I use to compile and test code in a Unix environment.
What I do now: I develop, compile and test code on Windows (VC++) and, once it is done, I move the code to the Solaris box using FileZilla over SSH. I also use Putty to connect to the Solaris box and execute shell commands.
Since I am quite new to Unix development, I suppose what I do is by far not optimal, and the tools/techniques I use are not optimal either.
Can you recommend better tools - how to move code around more easily, and maybe a replacement for Putty (which looks quite outdated anyway)?
Thanks.
If by any chance you want to run the same C++ IDE on both Windows and Solaris, I recommend taking a look at Code::Blocks. Also, as I suggested to Charles, running an X server on the Windows box gives you a lot more flexibility than running Putty or similar.
Is there any reason that you can't test the software on Solaris using a virtual machine? VMs can share folders, so there is no need to upload code to a remote machine.
Second: use svn, git or mercurial. On one machine you check in your code, on the other you check it out, plus you have a history of changes. No need to use FileZilla over SSH.
edit:
Also, I think it would be good to use CMake (or SCons - but I haven't used it) to generate build files. For example, CMake generates Makefiles or project files for your IDE, so you don't need to maintain several different files that build your code on different platforms.
You might want to look into Samba, so you can work directly with the Windows file explorer to move files to and from Windows/Unix environments, rather than using FTP.
But for UNIX shell access via Windows, you really can't beat Putty.
I recommend mercurial.
Just use a version control system such as Subversion or Mercurial. I strongly recommend the latter because it's distributed, so you don't need to have a server per se and you can work offline. Every time you want to shove your Windows code to the Unix machine you just need to do 'hg push' and off you go. To sort out the build you can go with good old Make or just use SCons (again, I prefer the latter because it comes with the power of Python).
I actually, very recently, developed a cross-platform project in C++ using wxWidgets and GraphicsMagick. I wrote it all in Mac OS X and then compiled it on both Windows and Linux. One thing I'd like to point out is that GCC seems to be more pedantic about compile warnings and errors than Microsoft's compiler, so if you grow to like the Unix environment I'd recommend developing there and then compiling on Windows (maybe even using a VMware image).
Instead of moving your source code around manually, consider using a version control system. Not necessarily a distributed VCS such as git or mercurial, but you should use version control nonetheless.
Sooner or later, you'll need to use a debugger on the Unix machine, and if you prefer using a graphic debugger, you should install a local X server on your Windows machine.
IMHO vim is quite a good editor ;)
gcc, nm, ld for compilation/build/diagnosis
makefiles for builds
gdb as the debugger; if you prefer a GUI, check ddd (if you want to stick with Visual Studio for debugging, check www.vsbridge.com or www.wingdb.com - they both depend on gdb as the back-end)
another commercial debugger for Unixes is TotalView (http://www.roguewave.com/products/totalview.aspx - the price is high, although they have their own engine instead of gdb)
CVS, SVN as source control
If you want to edit files in Visual Studio you can use e.g. Samba as a "transparent file system" ;)
By the way, VirtualBox may be very helpful (I debug (Open)Solaris or Linux as VBox machines very frequently).
PS
Yet another environment you may be interested in is Magic C++: www.magicunix.com/

C++ development for Linux on Windows

I am trying to set up a development environment for a Linux C++ application. Because I'm limited to my laptop (Vista), which provides essential office applications, I want to program and access email and Word at the same time.
I'd prefer a local Windows IDE. SSHing to a company Linux server and using vi doesn't seem productive to me. Even using an IDE installed on the Linux server doesn't seem good, because then I can't do the work at home.
So does Eclipse CDT + MinGW work for me, or is there any other choice?
Thanks.
ZXH
Why not install a Linux virtual machine on your laptop, in VMware or similar? That way you can test while you're developing too.
You can also try http://cygwin.com/
Is it a GUI app? And do you have to target Linux specifically? If not, Qt (http://trolltech.com/) may be something that you can use. It would allow you to more or less develop your whole application on Windows, and then spend a few hours on a linux machine getting the whole thing ported...
Qt is the best choice. I have been developing with this tool for a long time. And you can develop with the same IDE (QtCreator) and the same framework (Qt) on Mac OS, Linux-based or Windows platforms...
Moreover, specifically on Linux, Qt is well integrated with KDevelop!
If you have Visual Studio, which I feel is an excellent IDE, you can try to set it up to use GCC/G++. I've done this before, back in the Visual Studio 6 days. As long as you aren't using any Windows-specific libraries and write portable C++, you can compile and test on Windows, then periodically ensure that the code also compiles properly for Linux.
Another approach, one that I actually prefer, is to host your source and make files on the Linux box, share the files through Samba, then use your Windows IDE/text editor to edit those files. Then, you can do the compiling through an SSH terminal. Sure, you'd lose the convenience of being able to compile through your IDE, but at least you wouldn't have to muck around getting the compiler set up on Windows.
If you have a linux server available to you, you could also use NX to log in graphically, and use a Linux IDE there like Code::Blocks, or shudder Eclipse. Of course, there's nothing unproductive about shelling in and using VIM. I find it's a good way to shake out the IDE-induced cobwebs every now and again. Happy coding however you end up doing so!
I use (and recommend) Netbeans for C/C++ Development together with Cygwin to develop POSIX applications on Windows that will run on Linux/Solaris later on.
It is pretty easy to setup as long as you stick to the stable version of Cygwin.
I was in a similar position 2-3 years ago and tried several approaches, but the only one that really worked for me was vim+ssh (+gdb, make, svn, etc.). But again, I use vim even for Windows development.
This slideshow (PDF) walks through how to set up a cross compiler from Windows to Linux.

Using Visual Studio to develop C++ for Unix

Does anyone have battle stories to share trying to use Visual Studio to develop applications for Unix? And I'm not talking using .NET with a Mono or Wine virtual platform running beneath.
Our company has about 20 developers all running Windows XP/Vista and developing primarily for Linux & Solaris. Until recently we all logged into a main Linux server and modified/built code the good old fashioned way: Emacs, Vi, dtpad - take your pick. Then someone said, "hey - we're living in the Dark Ages, we should be using an IDE".
So we tried out a few and decided that Visual Studio was the only one that would meet our performance needs (yes, I'm sure that IDE X is a very nice IDE, but we chose VS).
The problem is, how do you set up your environment to have the files available locally to VS, but also available to a build server? We settled on writing a Visual Studio plugin - it writes our files locally and to the build server whenever we hit "Save", and we have a big fat "sync" button that we can push when our files change on the server side (for when we update to the latest files from our source control server).
The plugin also uses Visual Studio's external build system feature that ultimately just ssh's into the build server and calls our local "make" utility (which is Boost Build v2 - has great dependency checking, but is really slow to start as a result i.e. 30-60 seconds to begin). The results are piped back into Visual Studio so the developer can click on the error and be taken to the appropriate line of code (quite slick actually). The build server uses GCC and cross-compiles all of our Solaris builds.
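
For what it's worth, the "piped back into Visual Studio so the developer can click on the error" part usually comes down to reformatting gcc-style diagnostics into the file(line) form that the VS output window knows how to jump to. The answer doesn't show its plugin's code, so the filter below is only a hypothetical sketch of that one step; the paths in the comments are made up.

```cpp
// Hypothetical sketch: rewrite gcc-style "src/foo.cpp:42: error: ..." lines
// coming back over ssh as "src/foo.cpp(42): error: ...", which Visual
// Studio's output window can jump to when double-clicked.
#include <iostream>
#include <string>

std::string toMsvcStyle(const std::string& line) {
    // Find "path:123:" near the start and rewrite it as "path(123):".
    std::string::size_type firstColon = line.find(':');
    if (firstColon == std::string::npos) return line;
    std::string::size_type secondColon = line.find(':', firstColon + 1);
    if (secondColon == std::string::npos) return line;

    const std::string lineNo =
        line.substr(firstColon + 1, secondColon - firstColon - 1);
    if (lineNo.empty() ||
        lineNo.find_first_not_of("0123456789") != std::string::npos)
        return line;  // not a "file:line:" diagnostic, pass it through

    return line.substr(0, firstColon) + "(" + lineNo + ")"
         + line.substr(secondColon);
}

int main() {
    // e.g.  plink user@buildserver "cd /src && make" 2>&1 | this_filter
    std::string line;
    while (std::getline(std::cin, line))
        std::cout << toMsvcStyle(line) << '\n';
    return 0;
}
```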
But even after we've done all this, I can't help but sigh whenever I start to write code in Visual Studio. I click a file, start typing, and VS chugs to catch up with me.
Is there anything more annoying than having to stop and wait for your tools? Are the benefits worth the frustration?
Thoughts, stories, help?
VS chugs to catch up with me.
Hmmm ... your machine needs more memory & grunt. Never had performance problems with mine.
I've about a decade's experience doing exactly what you're proposing, most of it in the finance industry, developing real-time systems for customers in the banking, stock exchanges, stock brokerage niches.
Before proceeding further, I need to confess that all this was done in VS6 + CVS, and of late, SVN.
Source Code Versioning
Developers have separate SourceSafe repositories so that they can store their work and check in packages of work at logical milestones. When they feel they want to do an integration test, we run a script that checks it into SVN.
Once checked into SVN, we've a process that kicks off that will automatically generate relevant makefiles to compile them on the target machines for continuous integration.
We've another set of scripts that sync new stuff from SVN to the folders that VS looks after. There's a bit of a gap because VS can't automatically pick up new files; we usually handle that manually. This only happens regularly in the first few days of the project.
That's an overview of how we maintain code. I have to say, I've probably glossed over some details (let me know if you're interested).
Coding
From the coding aspect, we rely heavily on the pre-processor (i.e. #define, etc.) and flags in the makefile to shape the compilation process. For cross-platform portability, we use GCC. A few times we were forced to use aCC on HP-UX and some other compilers, but we did not have much grief. The only thing that is a constant pain is that we had to watch out for thread heap spaces across platforms. The compiler does not spare us from that.
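
A rough, generic sketch of that pre-processor style (the PLATFORM_* macro names and the example function are mine, not the poster's): the makefile passes something like -DPLATFORM_WIN32 or leaves it off for the Unix targets, and the code selects an implementation at compile time.

```cpp
// One portable call site, two platform-specific bodies, chosen by a flag
// the makefile passes on the compiler command line (illustrative names).
#ifdef PLATFORM_WIN32
    #include <windows.h>
#else
    #include <unistd.h>
#endif

void sleepMilliseconds(unsigned ms) {
#ifdef PLATFORM_WIN32
    Sleep(ms);          // Win32 API, takes milliseconds
#else
    usleep(ms * 1000);  // POSIX, takes microseconds
#endif
}
```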
Why?
The question is usually, "Why the h*ll would you even want to have such a complicated way of developing?". Our answer is usually another question: "Have you any clue how insane it is to debug a multi-threaded application by examining the core dump or using gdb?". Basically, the fact that we can trace/step through each line of code when debugging an obscure bug makes it all worth the effort!
Plus!... VS's IntelliSense feature makes it so easy to find the method/attribute belonging to a class. I also heard that VS2008 has refactoring capabilities. I've shifted my focus to Java on Eclipse, which has both features. You'd be more productive focusing on coding business logic rather than devoting energy to making your mind do stuff like remember!
Also! ... We'd end up with a product that can run on both Windows and Linux!
Good luck!
I feel your pain. We have an application which is 'cross-platform': a typical client/server application where the client needs to be able to run on Windows and Linux. Since our client base mostly uses Windows we work using VS2008 (the debugger makes life a lot easier) - however, we still need to perform Linux builds.
The major problem with this was that we were checking in code that we didn't know would build under gcc, which would more than likely break the CI stuff we had set up. So we installed MinGW on all our developers' machines, which allows us to test that the working copy will build under gcc before we commit it back to the repository.
We develop for Mac and PC. We just work locally in whatever ide we prefer, mostly VS but also xcode. When we feel our changes are stable enough for the build servers we check them in. The two build servers (Mac and PC) look for source control checkins, and each does a build. Build errors are emailed back to the team.
Editing files live on the build server sounds precarious to me. What happens if you request a build while another developer has edits that won't build?
I know this doesn't really answer your question, but you might want to consider setting up remote X sessions, and just run something like KDevelop, which, by the way, is a very nice IDE--or even Eclipse, which is more mainstream, and has a broader developer base. You could probably just use something like Xming as the X server on your Windows machines.
Wow, that sounds like a really strange use for Visual Studio. I'm very happy chugging away in vim. However, the one thing I love about Visual Studio is the debugger. It sounds like you are not even using it.
When I opened the question I thought you must be referring to developing portable applications in Visual Studio and then migrating them to Solaris. I have done this and had pleasant experiences with it.
Network shares.
Of course, then you have killer lag on the network, but at least there's only one copy of your files.
You don't want to hear what I did when I was developing on both platforms. But you're going to: drag-n-drop copy a few times a day. Local build and run, and periodically checking it out on Unix to make sure gcc was happy and that the unit tests were happy on that platform too. Not really a rapid turnaround cycle there.
#monjardin
The main reason we use it is because of the refactoring/search tools provided through Visual Assist X (by Whole Tomato). Although there are a number of other nice-to-haves like IntelliSense. We are also investigating integrations with our other tools AccuRev, Bugzilla and TotalView to complete the environment.
#roo
Using multiple compilers sounds like a pain. We have the luxury of just sticking with gcc for all our platforms.
#josh
Yikes! That sounds like a great way to introduce errors! :-)
I've had good experience developing Playstation2 code in Visual Studio using gcc in cygwin. If you've got cygwin with gcc and glibc, it should be nearly identical to your target environments. The fact that you have to be portable across Solaris and Linux hints that cygwin should work just fine.
Most of my programming experience is in Windows and I'm a big fan of Visual Studio (especially with ReSharper, if you happen to be doing C# coding). These days I've been writing an application for Linux in C++. After trying all the IDEs (NetBeans, KDevelop, Eclipse CDT, etc.), I found NetBeans to be the least crappy. For me, the absolute minimum requirements are that I be able to single-step through code and that I have IntelliSense, with ideally some refactoring functions as well. It's amazing to me how today's Linux IDEs are not even close to what Visual Studio 6 was over ten years ago. The biggest pain point right now is how slow and poorly implemented the IntelliSense in NetBeans is. It takes 2-3 seconds to populate on a fast machine with 8GB of RAM. Eclipse CDT's IntelliSense was even more laggy. I'm sorry, but a 2 second wait for IntelliSense doesn't cut it.
So now I'm looking into using VS from Windows, even though my only build target is linux...
Chris, you might want to look at the free build automation server 'CruiseControl', which integrates with all the main source control systems (svn, tfs, sourcesafe, etc.). Its whole purpose is to react to check-ins in a source control system. In general, you configure it so that any time anyone checks code in, a build is initiated and (ideally) unit tests are run. For some languages there are some great plugins that do code analysis, measure unit test code coverage, etc. Notifications are sent back to the team about successful / broken builds.
Here's a post describing how it can be set up for C++: link (thoughtworks.org).
I'm just getting started with converting from a linux-only simple config (Netbeans + SVN, with no build automation) to using Windows VS 2008 with build automation back-end that runs unit tests in addition to doing builds in linux. I shudder at the amount of time it's going to take me to get that all configured, but the sooner the better, I guess.
In my ideal end-state I'll be able to auto-generate the Netbeans project file from the VS project, so that when I need to debug something in linux I can do so from that IDE. VS project files are XML-based, so that shouldn't be too hard.
If anyone has any pointers for any of this, I'd really appreciate it.
Thanks,
Christophe
You could have developers work in private branches (easier if you're using a DVCS). Then, when you want to checkin some code, you check it into your private branch on [windows|unix], update your sandbox on [unix|windows] and build/test before committing back to the main branch.
We are using a similar solution to what you described.
We have our code stored on the Windows side of the world, and UNIX (QNX 4.25 to be exact) has access through an NFS mount (thanks to UNIX Services for Windows). We ssh into UNIX to run make and pipe the output into VS. Accessing the code is fast, builds are a little slower than before, but our longest compile is currently less than two minutes, not a big deal.
Using VS for UNIX development has been worth the effort to set it up, because we now have IntelliSense. Less typing = happy developer.
Check out "Final Builder" (http://www.finalbuilder.com/). Select a version control system (e.g. cvs or svn, to be honest, cvs would suit this particular use case better by the sounds of it) and then set up build triggers on FinalBuilder so that checkins cause a compile and send the results back to you.
You can set up rulesets in FinalBuilder that prevent you checking in / merging broken code into the baseline or certain branch folders but allow it into others (we don't allow broken commits to /baseline or /branches/*, but we have a /wip/ branching folder for devs who need to share potentially broken code or just want to be able to commit at the end of the day).
You can distribute FB over multiple "build servers" so that you don't wind up with 20 people trying to build on one box, or waiting for one box to process all the little bitty commits.
Our project has a Linux-based server with Mac and Win clients, all sharing a common codebase. This set up works ridiculously well for us.
I'm doing the exact same thing at work. The setup I use is VS for Windows development, with a Linux VM running under VirtualBox for local build / execute verification. VirtualBox has a facility where you can make a folder on the host OS (Windows 7 in my case) available as a mountable filesystem in the guest (Ubuntu LTS 12.04 in my case). That way, after I start a VS build, and it's saved the files, I can immediately start a make under Linux to verify it builds and runs OK there.
We use SVN for source control, and the final target is a Linux machine (it's a game server), so that uses the same makefile as my local Linux build. That way, if I add a file to the project or change a compiler option, usually adding/changing a -D, I do the modifications initially under VS, and then immediately change the Linux makefile to reflect the same changes. Then when I commit, the build system (Bamboo) picks up the change and does its own verification build.
Hard earned experience says this is an order of magnitude easier if you build like this from day one.
The first project I worked on started as Windows only, I was hired to port it to Linux, since they wanted to switch to a homogenous server environment, and everything else was Linux. Retrofitting a Windows project into this sort of a setup was a fairly major expenditure of effort.
Project number two was done "two system build" right from day one. We wanted to maintain the ability to use VS for development/debug since it is a very polished IDE, but we also had the requirement of final deployment to Linux servers. As I alluded to above, when the project was built with this in mind right from the start, it was quite painless. The worst part was a single file: system_os.cpp, which contained OS-specific routines, things like "get current time since linux epoch start in milliseconds", etc. etc. etc.
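
The actual contents of that system_os.cpp aren't shown, so purely as a hypothetical sketch of its shape, the "milliseconds since the epoch" routine might look something like this - one portable function, two platform-specific bodies:

```cpp
// system_os.cpp (sketch) -- all OS-specific code funneled into one file.
#include <stdint.h>

#ifdef _WIN32
    #include <windows.h>
#else
    #include <sys/time.h>
#endif

// Milliseconds since the Unix epoch, with the same meaning on both platforms.
int64_t millisecondsSinceEpoch() {
#ifdef _WIN32
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);   // 100-ns ticks since 1601-01-01
    int64_t ticks = (static_cast<int64_t>(ft.dwHighDateTime) << 32)
                  | ft.dwLowDateTime;
    return (ticks - 116444736000000000LL) / 10000;  // shift to 1970, scale to ms
#else
    timeval tv;
    gettimeofday(&tv, 0);
    return static_cast<int64_t>(tv.tv_sec) * 1000 + tv.tv_usec / 1000;
#endif
}
```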
At the risk of getting a little off topic, we also created unit tests for this, and having the unit tests for the OS specific pieces provided a great deal of confidence that they worked the same on both platforms.