SVN and SFTP synchronisation with Eclipse - C++

I have to set up and configure an Eclipse (Mars 2) environment for a C project. The project is in an SVN repository, and can only be compiled on a specific Red Hat Linux server that has the appropriate toolchain.
What I need is an IDE that would allow me to commit my changes to the repository and that would automagically synchronise them on the Linux server. I tried a few things, but none of them worked. I must (to my great regret) avoid needing a terminal while using that IDE, though of course not while configuring it.
Firstly, I used the Remote System Explorer feature in Eclipse. I connected successfully to the server and created a "Remote Project" that I could open in the C/C++ perspective. However, the whole thing is impossible to use: it has no indexing, I had to create "User Actions" in order to compile (which is, from my point of view, pretty anti-ergonomic), and the SVN plugin does not detect the project as an SVN working copy. Furthermore, in the C/C++ perspective, there is a two-second gap between the moment I type something and the moment it appears on my screen.
I also tried to mount a network filesystem on my local machine with sshfs, and while it works far better, I still experience lag. Also, I had to write a Makefile that calls the compiler via "ssh $(USER)@$(HOST) build.ksh" (one of the points of the project is to write a real Makefile...). But SVN works.
I also tried to run Eclipse on the host machine with X forwarding, and while it otherwise works perfectly, there is still lag...
Finally, I tried an SFTP synchronisation, but it seems I can't use my SVN plugin's features and the SFTP sync together.
I am out of solutions, and pretty frustrated, as I feel that this kind of thing should be pretty easy. I mean, all I want is for Eclipse to automatically copy my files to my remote home directory... Thanks for your help...

To me this sounds like a perfect use case for a continuous integration (CI) system. Generally speaking, a CI system pulls the code from your repository (for example at regular intervals) and then executes the build chain, collects artifacts, informs you about the state of your build, etc.
Although it originated in the Java world, I have successfully used Jenkins for continuous integration of C projects on a Linux server, but there are others, like TeamCity or GitLab CI (the latter would require you to switch to Git, but it's a really neat system with a YAML configuration for CI).
Of course CI systems have a learning curve - there is no such thing as a free lunch - but it may really be worth the effort.
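For a flavor of that YAML configuration, here is a minimal sketch of a .gitlab-ci.yml for a Makefile-based C project (the job names, targets, and artifact path are hypothetical):

stages:
  - build
  - test

build:
  stage: build
  script:
    - make          # assumes a Makefile at the repository root
  artifacts:
    paths:
      - build/      # hypothetical output directory

test:
  stage: test
  script:
    - make test     # assumes the Makefile provides a "test" target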

Related

Running Visual Studio projects from a network drive

We just moved from storing all files locally to a network drive. The problem is that that is now also where my VS projects are stored. (No versioning system yet; working on that.) I know I've heard of problems with doing this in the past, but never heard of a workaround. Is there a workaround?
So my VS is installed locally. The files are on a network drive. How can I get this to work?
EDIT: I know what SHOULD be done, but is there a band-aid I can put on right now to fix this and maintain the network drive?
EDIT 2: I am sure I am not understanding something, but Bob King has the right idea. I'll work with the lead web developer when he gets back into the office to figure out a temporary solution until we get some sort of version control setup. Thanks for the ideas.
While we do use source control, we also run all our projects from network drives (not shared directories; private directories on network drives). The network drives are backed up nightly and also use Volume Shadow Copy, so if you need to revert to something before it made its way to SC, you can.
To get projects to run correctly with the right permissions, follow these steps.
Basically, you've just got to map the shared directory to a drive, and then grant permission, based on that URL, to all code. Say you map to "N:\", then use "N:\*" as your URL pattern. It isn't obvious that you need the wildcard, but you do.
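For example, the grant can be made with the same CasPol tool shown further down (a sketch, assuming the share is mapped to N: and a .NET 2.0 framework path; adjust to your version):

C:\Windows\Microsoft.NET\Framework\v2.0.50727\CasPol -m -ag 1 -url file://N:/* FullTrust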
The question is rather generic so I'll give an answer to one issue I was facing.
I run Visual Studio 2010 using a Parallels virtual machine on my Mac while keeping all my projects on the Mac side via a network share. Visual Studio, however, wouldn't load the projects' assembly files from there. Trying to set the rights using "caspol" alone didn't help in my case.
What finally worked for me to allow Visual Studio to load assemblies from a network share was to edit the file
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe.config" (assuming a default installation).
In the XML "<runtime>" section you have to add
<loadFromRemoteSources enabled="true"/>
You may have to change the permissions on that file to allow write access. Save the file. Restart Visual Studio.
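So the relevant part of devenv.exe.config ends up looking roughly like this (only the loadFromRemoteSources line is new; everything else in the section stays as shipped):

<configuration>
  <runtime>
    <!-- allow assemblies to be loaded from network locations -->
    <loadFromRemoteSources enabled="true"/>
  </runtime>
</configuration>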
In the interests of actually answering the question, I copied this comment from jcarle.com:
Trusting Network Shares with Visual Studio 2010 / .NET Framework v4.0
January 20, 2011, 4:10 pm
If you are like me and you store all your code on a server, you will have likely learned about trusting a network share using CasPol.exe. However, when moving from Visual Studio 2008 (.NET Framework 2.0/3.0/3.5) over to Visual Studio 2010 (.NET Framework 4.0), you may find yourself scratching your head.
If you are used to using the Visual Studio Command Prompt to quickly get to CasPol, you may find that some of your projects will not seem to respect your new FullTrust settings. The reason is that, unless you are carefully paying attention, the Visual Studio Command Prompt defaults to adding the .NET Framework 4.0 folder to its path. If your project is still running under .NET Framework 2.0/3.0/3.5, it will require setting CasPol for those versions as well. Just a note, I have also personally had more success with using 1 as a code group instead of 1.2.
To trust a network share for all versions of the .NET Framework, simply call CasPol for each version using the full path as below:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\CasPol -m -ag 1 -url file://YourSharePath* FullTrust
C:\Windows\Microsoft.NET\Framework\v4.0.30319\CasPol -m -ag 1 -url file://YourSharePath* FullTrust
I would not recommend doing that if you have (or even if you don't have) multiple people who are working on the projects. You're just asking for trouble.
If you're the only one working on it, on the other hand, you'll avoid much of the trouble. Performance is going to go out the window, though. As far as how to get it to work, you just open the solution file from VS. You'll likely run into security issues, but you can correct those using CasPol. As I said, though, performance is going to be terrible. Again, not recommended at all.
Do yourself and your team a favor and install SVN or some other form of source control and put the code in there ASAP.
EDIT: I'll partially retract my comments. Bob King explains below the reason they run VS projects from a network drive and it makes sense. I would say unless you're doing it for a specific reason like Bob, stay away from it. Otherwise, get your ducks in a row before setting up such a development environment.
So I was having a similar issue. Visual Studio wouldn't recognize a network location I had mapped to a drive letter for anything. The funny thing is, it worked for a day. I set up my project and began working on it and had no issues. Then I shut down, and the next day nothing worked. I couldn't read/write files in code, output my executables, or anything. My project is local, but my output was intended to go onto the network.
Anyway, the problem is probably the administrator context, but one way to fix it, which I found while digging around online, is to get Visual Studio to browse to the drive in question somehow. There are plenty of ways to do this, but afterwards VS will magically be able to recognize mapped drive letters. My solution was to go to the Debug output location in Project Properties, click Browse, navigate to my previously made output location on my network drive, and voila!!!
I wanted to put this up because I spent half a day trying to figure this out and figured it might save someone else some time. Thanks much and good luck!!!
Erik
I understand this is an older thread, but it was the best one I found when looking to solve a similar issue: I had Visual Studio 2013 in a VirtualBox VM (running Win 8.1) and the code on the host machine (Win 7). Although I could open the solution, I could not compile. All of the other answers here relate to older software, so I am adding this answer to update this frequently found question with the solution that worked for me.
Here's what I did: made a registry entry to be able to use a UNC path as the current directory.
WARNING: Using Registry Editor incorrectly can cause serious, system-wide problems that may require you to reinstall Windows NT to correct them. Microsoft cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at your own risk.
Under the registry path HKEY_CURRENT_USER\Software\Microsoft\Command Processor, add the value DisableUNCCheck as a REG_DWORD and set it to 0x1 (hex).
WARNING: If you enable this feature and start a console that has a current directory of a UNC name, start applications from that console, and then close the console, it could cause problems in the applications started from that console.
I found this information at: http://support.microsoft.com/kb/156276
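If you prefer the command line over Registry Editor, the equivalent one-liner (same key and value as above) is:

reg add "HKCU\Software\Microsoft\Command Processor" /v DisableUNCCheck /t REG_DWORD /d 1 /f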
How about we rephrase this into a question that everyone can answer? I have the exact same problem as the initial poster.
I have a copy of VB 2008 (recently upgraded from VB6). If I store my solutions on the backed-up network drive, then it won't run a single thing, ever. It gives "partially trusted caller" errors for accessing a module, even when "AllowPartiallyTrustedCallers" is set in the assembly. If I store the files on my (not backed up) C: drive, then it runs wonderfully, until I put it on the share drive for everyone to use, and I'm back to the same problem.
This isn't a big request. I just want to be able to put a solution and executable on the share drive and run it without an absurd amount of nonsense about security. I shouldn't have to cram all my work into form files.
-Edit: I found out why it was ignoring the AllowPartiallyTrustedCallers attribute: I'm referencing ADODB, which doesn't allow partially trusted callers. So no network executable can access a database? What does Microsoft have against intranets, anyway?
I was facing the same issue just recently, so this answer is more for the sake of keeping track of my own knowledge. Anyway, should someone find it useful, below are the issue and the solution.
Issue:
.NET 4.0 projects, an SVN repo, checkout folders on local drives, referenced assemblies built by the build server and available on a network drive. Visual Studio on Windows 7 is able to add the references but unable to build the projects.
Solution:
Since .NET 4.0 no longer automatically sandboxes network assemblies, you have to make them fully trusted via a machine.config update: http://msdn.microsoft.com/en-us/library/dd409252.aspx
I had a similar problem with opening Visual Studio projects on a network drive, and I fixed it by creating a symbolic link on my local C: drive that points to the UNC directory, e.g.
mklink /D "C:\Users\Self\Documents" "\\domain.net\users\self\My Documents"
then you can just open the project using the C:\Users\Self\Documents\ path, instead of the UNC path
(You have to be careful, because Visual Studio will automatically redirect you to the '\\domain.net..' path if you double-click the symlink when you're browsing for the project. I had to copy-paste the 'C:\Users\' path to get it to open with the drive-letter path.)
Don't do it. If you have source control (versioning), you do not want your files on a network drive. It totally bypasses all you want to achieve by using source control, because once your files are on a network drive, anyone can modify them... even while you're currently building your project. Ka-boooom!
PS: this sounds like a typical case of over-engineering to me.
Are you having any specific problems?
If you allow more than one person to open the solution, your first problem will be that the .NCB file (Intellisense) will be locked exclusively and only one user will be able to browse the class tree. And of course you have the potential for one user's changes to overwrite the other user's changes.
You should be warned that some features in Visual Studio will refuse to work with network drives.
For example, the .mdf file of a SQL Express user instance must be located on a local drive.
For another example, if you use UNC paths, you have to make sure they are short enough.
I found this helpful while trying to use VC11 with Parallels running on a Mac:
http://social.msdn.microsoft.com/Forums/en-US/toolsforwinapps/thread/2ffdcb01-c511-4961-834b-afd5f2fbb8e1, and specifically:
1) You can switch from local debugging to remote debugging and set the machine name as 'localhost'. This will do a remote deployment on your local machine (thus not using the project's directory). You don't need to install the Remote Debugger tools, nor start msvsmon for this to work on localhost.
In case this helps anyone else, I had to do the steps outlined here to add the network share location to Windows intranet zone. In particular, I was having trouble with Visual Studio hanging on load when opening a solution on a network share (i.e. using VMware Fusion and opening a solution from my Mac's hard drive). I also had problems with PostSharp running in this scenario.
If I understand you correctly, your Visual Studio project files are stored on the network drive and you are running them from there. This is what I do, and I don't have any problems. You will need to make sure that you have set the security policy. You can use CasPol to do this, or go via the Control Panel > Administrative Tools menu.
"How can I get this to work?"
You have a couple choices:
Choice A:
1. Move all files back to your local hard drive
2. Implement some type of backup software on your machine
3. Test said backup solution
4. Keep on coding
Choice B:
1. Get a copy of one of the FREE source control products and implement it.
2. Make sure it's being backed up
3. Test it
Choice C:
Use one of the many ONLINE source control repositories available. Google, SourceForge, CodePlex, something.
Well, my question would be: why are you asking this? Is it not working when you store it on a network drive? I haven't tried this myself, and one problem I could envision would be that .NET code running from a network drive (i.e. from the bin\Debug directory, also located on the network drive) would run in sandbox mode, unless you mess around with CasPol (or use 3.5 SP1, which I hear has removed that obstacle).
If you have specific problems, ask about them. Never just ask "why is X not working?".
You're not saying whether it's just one person or multiple people accessing the same remote drive, but I'm assuming just one for each network directory. Is this correct? If not: no, there is no band-aid. Get version control and move the files back to a local disk.

What is the recommended way for packaging a C++ daemon on Mac OS X?

I'm working on a multi-platform project that is composed of a service/daemon running on Windows, Linux, and Mac OS X.
The code I have is portable, and the application runs fine (from the command line) on all these systems. As this application is designed to run in the background, I made it a Windows service on Windows and a daemon (with the appropriate scripts in init.d) for Linux.
Now my problem is Mac OS X: I have little experience with this operating system, and I am having a hard time figuring out the best practices for my situation:
I'd like to have an installer for my project (I believe a .dmg file, which would likely install an .app; please correct me if there is a better alternative).
Here some information about this project of mine:
It is built entirely in C++ (it uses Boost, curl, iconv).
The current build system is not Xcode (however, if there is a way of keeping my current code layout while integrating and building everything in Xcode, I don't mind. I've done something similar for Windows anyway).
There is no graphical user interface
The daemon should start on startup automatically (or even better: make that a user's choice).
The daemon requires root access during its execution.
That's probably a lot of context to consider for a single question, so I will try to make it easier to read:
How would you package/create an installer for a pure C++ daemon on Mac OS X?
Since this doesn't have a UI, I wouldn't package it as a .app -- that's the preferred format for double-clickable GUI apps, not for daemons. If it's just a single binary (no support files except maybe things like config files, etc), I'd follow unix conventions and put the binary someplace like /usr/local/libexec (or wherever you put it on Linux). Note that /usr/local doesn't exist by default on OS X, so your installer will need to create it if it doesn't exist.
For getting it to execute: I'll agree with James Bedford's suggestion of using launchd. The launchd .plist file should be installed in /Library/LaunchDaemons (LaunchDaemons run as root at startup, while LaunchAgents run as normal users when that user logs in). Make sure the daemon does not drop itself into the background -- launchd keeps watch over the programs it launches, and if they background themselves it thinks they've crashed, and generally tries to relaunch them, which doesn't work very well. You can adjust the settings to work with background programs, but it's best to have it run in the foreground.
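As an illustration, a minimal launchd property list along the lines described above might look like this (the label com.example.mydaemon and the binary path are hypothetical placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- unique reverse-DNS identifier for the job (placeholder) -->
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <!-- path to the daemon binary; it must stay in the foreground, as noted above -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/libexec/mydaemon</string>
    </array>
    <!-- start at boot -->
    <key>RunAtLoad</key>
    <true/>
    <!-- relaunch if it exits -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>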
For packaging: Here, I agree with mah -- use an installer package. I actually still like the old GUI PackageMaker tool (deprecated, but it still works), but the new CLI tools are probably better to learn at this point. If you follow my recommendation about /usr/local/libexec, your package should actually contain the "local" directory (with libexec subdir and your binary in that), and install that into /usr -- if /usr/local already exists, it'll just merge with what's already there, but if not it'll create the entire thing. On the other hand, /Library/LaunchDaemons is guaranteed to exist, so your package only needs to contain the actual .plist file to put in it.
Packaging as a .app makes some sense if what you're distributing is more than just a command line (for example, if it has resources such as static configuration data, images, frameworks/dylibs) that need to come along with it).
Regardless of what exactly is getting distributed, you can create an installer using tools that you already have -- pkgbuild and productbuild, both in /usr/bin. Making OS X Installer Packages like a Pro - Xcode Developer ID ready pkg can get you started using these tools.
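As a sketch of the pkgbuild invocation (hypothetical identifier and payload directory; the payload mirrors the filesystem layout to be installed, e.g. usr/local/libexec/mydaemon and Library/LaunchDaemons/com.example.mydaemon.plist):

pkgbuild --root payload \
         --identifier com.example.mydaemon \
         --version 1.0 \
         --install-location / \
         mydaemon.pkg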
Have you checked out the Daemons and Services Programming Guide provided by Apple? I think that would be very helpful as an introduction to the platform and should point you in the right direction (if not show you how to do exactly what you want).
You should also check out launchd (which is discussed in that programming guide). launchd is the official daemon launcher/manager for OS X and is heavily integrated with the operating system. It should be easy enough to wrap your existing cross-platform daemon into a launchd daemon, and you can integrate with OS X so that the daemon will start up automatically.

Local continuous integration system for C++?

"local continuous integration system" may not be the correct term, but what I'm hoping to find is an continuous integration system that can be configured to monitor changes to local files (C++ files in particular) and 1) try to compile the affected object files (stopping on first failure), and if successful and no new source file changes 2) link the affected binaries, and if successful and no new source file changes 3) run affected tests.
By monitoring changes to local files, I do not mean monitoring commits to a revision control system, but the state of local files as they are saved. Ideally the system would provide integrations with source editors so it could monitor changes in the editor that haven't even been saved to disk yet.
Ideally it would also provide a graphical indication (preferably on Windows 7) of current and recent status that quickly allows drilling into failures when desired.
The closest thing I found was nose as described here but that only covers running Python tests not building C++ files.
The closest thing to what you are looking for is CDash and the Boost test bench; I think that a tool like the one you are looking for will never exist for C++, because compiling a project after each edit of a single file is just a waste of time in a productive C++ workflow.
Continuous integration is a rising concept today, so you are not alone here.
Assuming you are developing on Windows with Microsoft Visual Studio, you may consider Microsoft's Visual Studio Team Foundation Server (TFS) (formerly Visual Studio Team System). That will give you source control AND build automation in one package, with great integration with Microsoft products, of course (I think there is a free version for MSDN users).
If you are not keen on Microsoft products, or just looking for build automation, I would recommend a great open-source continuous integration tool: Jenkins CI.
Good luck!
I would look at Jenkins CI - it is a good tool, works on any platform, and can be configured to do almost anything. I used it to run Python code that talked to a mobile phone, made calls and recorded those calls (and tested the "quality" of the call, although my project never got the £xxxx real quality software, as we were just showing a concept), and then Jenkins would produce graphs of "how well it worked".
You can also do the "chaining" you describe - it would discover that your source has changed and try to build it (generally this is done using make, so it would automatically stop at the first file with errors (although that could be hundreds of errors in one file!)). Compile and build success then chains to running tests. I'm not entirely sure how you determine what is "relevant"; if your test cycle isn't enormous, I'd run them all! A crude local approximation of the whole watch-compile-test loop is sketched below.
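A minimal, hypothetical poll-and-rebuild loop in C++17 (a real tool would use filesystem notifications rather than polling, and would track per-file dependencies; the "make test" target is an assumption):

// watch_and_build.cpp - sketch: poll source timestamps, rerun make on change.
// Build with: g++ -std=c++17 watch_and_build.cpp -o watch_and_build
#include <algorithm>
#include <chrono>
#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <thread>

namespace fs = std::filesystem;

// Newest modification time of any .cpp/.h file under `root`.
static fs::file_time_type newest_mtime(const fs::path& root) {
    fs::file_time_type newest = fs::file_time_type::min();
    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        const auto ext = entry.path().extension();
        if (entry.is_regular_file() && (ext == ".cpp" || ext == ".h"))
            newest = std::max(newest, entry.last_write_time());
    }
    return newest;
}

int main() {
    auto last = newest_mtime(".");
    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        const auto now = newest_mtime(".");
        if (now > last) {
            last = now;
            // make stops at the first failing file; tests only run on success.
            if (std::system("make && make test") != 0)
                std::cerr << "build or tests failed\n";
        }
    }
}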

How can a C++ binary replace itself?

I asked this question in a more general design context before. Now, I'd like to talk about the specifics.
Imagine that I have app.exe running. It downloads update.exe into the same folder. How would app.exe copy update.exe over the contents of app.exe? I am asking specifically in a C++ context. Do I need some kind of 3rd mediator app? Do I need to worry about file-locking? What is the most robust approach to a binary updating itself (barring obnoxious IT staff having extreme file permissions)? Ideally, I'd like to see portable solutions (Linux + OSX), but Windows is the primary target.
Move/Rename your running app.exe to app_old.exe
Move/Rename your downloaded update.exe to app.exe
With the next start of your application the update will be used
Renaming a running (i.e. locked) DLL/EXE is not a problem under Windows.
On Linux it is possible to remove the executable of a running program, hence:
download app.exe~
delete running app.exe
rename app.exe~ to app.exe
On Windows it is not possible to remove the executable of a running program, but possible to rename it:
download app.exe~
rename running app.exe to app.exe.old
rename app.exe~ to app.exe
when restarting remove app.exe.old
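In portable C++17, that rename dance is just a couple of calls (hypothetical file names; error handling reduced to the exceptions std::filesystem already throws):

// Sketch: stage a downloaded update so it is used on the next start.
#include <filesystem>
namespace fs = std::filesystem;

void stage_update() {
    // On Windows a running executable can be renamed but not deleted;
    // on Linux it could even be removed outright.
    fs::rename("app.exe", "app.exe.old"); // move the running binary aside
    fs::rename("app.exe~", "app.exe");    // put the downloaded update in place
    // On the next start, the new app.exe can delete app.exe.old.
}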
It's an operating system feature - not a C++ one.
What OS are you on?
On Windows, see the MoveFileEx() function; on Linux, simply overwrite the running app (Replacing a running executable in linux).
On Windows at least, a running application locks its own .exe file and all statically linked .dll files. This prevents an application from updating itself directly, at least if it wants to avoid a reboot (if a reboot is OK, the app can pass the MOVEFILE_DELAY_UNTIL_REBOOT flag to MoveFileEx and is free to 'overwrite' its own .exe, as the move is delayed anyway). This is why applications typically don't check for updates on their own .exe, but instead start up a shim that checks for updates and then launches the 'real' application. In fact, the 'shim' can even be provided by the OS itself, by virtue of a properly configured manifest file. Visual Studio-built applications get this as a prefab, wizard-packaged tool; see ClickOnce Deployment for Visual C++ Applications.
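For the reboot-based variant, the call is roughly this (hypothetical file names; this flag combination requires administrator rights and cannot be combined with MOVEFILE_COPY_ALLOWED):

#include <windows.h>

// Sketch: ask Windows to replace app.exe with update.exe at the next reboot.
// The pending rename is processed at boot, before any file locks exist.
bool schedule_replace_on_reboot() {
    return MoveFileExW(L"update.exe", L"app.exe",
                       MOVEFILE_REPLACE_EXISTING | MOVEFILE_DELAY_UNTIL_REBOOT) != 0;
}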
The typical Linux app doesn't update itself because of the many, many, many flavors of the OS. Most apps are distributed as source, run through some version of auto-hell to configure and build themselves, and then install themselves via make install (all of this can be automated behind a package). Even apps that are distributed as binaries for a specific flavor of Linux don't copy themselves over; instead they install the new version side by side and then update a symbolic link to 'activate' the new version (again, package-management software may hide this).
OS X apps fall either into the Linux bucket if they are of the Posix flavor, or nowadays fall into the Mac AppStore app bucket which handles updates for you.
I would say that rolling your own self-update will never reach the sophistication of any of these technologies (ClickOnce, RPMs, the App Store) nor offer the user the expected behavior vis-a-vis discovery, upgrade, and uninstall. I would go with the flow and use these technologies on their respective platforms.
Just an idea to overcome the "restart" problem: how about making a program that never needs to be updated itself? Implement it with a plugin structure, so it is only an update host which loads a .dll file containing all the functionality your program needs and calls the main function there. When it detects an update (possibly in a separate thread), it tells the DLL handle to close, replaces the file, and loads the new one.
This way your application keeps running while it updates itself (only the DLL file is reloaded, but the application keeps running).
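A sketch of such a host on Windows (the DLL name, the exported "run" entry point, and the update file name are all hypothetical; error handling omitted):

// host.cpp - sketch: a thin host that hot-swaps its logic DLL on update.
#include <windows.h>
#include <filesystem>
namespace fs = std::filesystem;

typedef void (*RunFn)(); // hypothetical entry point exported by the DLL

int main() {
    for (;;) {
        HMODULE dll = LoadLibraryW(L"logic.dll");
        if (!dll) return 1;
        RunFn run = (RunFn)GetProcAddress(dll, "run");
        if (run) run();           // returns once an update has been downloaded
        FreeLibrary(dll);         // release the lock on logic.dll
        if (!fs::exists("logic.dll.new")) break;  // no update pending: quit
        fs::rename("logic.dll.new", "logic.dll"); // swap in the new version
    }
}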
Use a third "updater" executable, like many other apps do:
Download new version.
Schedule your updater to replace the app with the new version.
Close main app.
Updater runs and does the work.
Updater runs new version of your app.
Updater quits.

How to set up a local test/build machine?

I am about to start a new personal project. It aims to be a pretty big one, so I thought it would be a good idea to use some sort of VCS. I have also read a lot of interesting stuff about unit testing, and I would like to include some system that automatically builds the project and runs a series of tests after each check-in.
The characteristics are:
Only one developer and one machine (just me and my computer!).
Include a VCS.
Include automated testing.
The software should be free (as in no-cost) and run under Linux.
It is going to be C++ and ANTLR based.
So far, I have set up SVN and Eclipse+CDT+ANTLR for development, but I am pretty lost about the automated build+test setup. To write the tests, I have been thinking of Boost.Test or UnitTest++.
So that's the source of my question. How should I set up my local test/build machine?
Links to valuable tutorials are more than welcome.
Thanks.
It seems that most open-source continuous integration servers are built on Java and do not support C++ "out of the box". However, there are some links you can start with (note that for running most open-source continuous integration servers you need a Java environment):
What continuous integration tool is best for a C++ project - some alternatives for continuous integration software
Continuous integration for C++ - some ideas for Hudson configuration
Using CruiseControl with C++ - some ideas and configurations for CruiseControl
Compiling C/C++ code with Ant - if you do use the "Makefile project" in CDT and do not want to use make as a build tool
I personally prefer Hudson because of its simple install (no need for an application server; just start it with java -jar hudson.war) and its easy-to-use and quite "clever" GUI. Hudson can check out your code from SVN (or CVS) and can run a shell script or Ant file as a build script. Maybe you have to spend a few days setting up a configuration with a proper build script, but I think it's worth the time.
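The build script itself can stay trivial; something along these lines (a sketch, assuming a Makefile-based project and a hypothetical run_tests binary produced with Boost.Test or UnitTest++):

#!/bin/sh
# build.sh - run by Hudson after it has updated the workspace from SVN
set -e          # abort on the first failing step
make clean
make            # a non-zero exit marks the Hudson build as failed
./run_tests     # hypothetical unit-test runner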
The sort of automatic process you are looking at is called continuous integration. There is software to help you with this - a good example is JetBrains TeamCity. You will also hear of people using CruiseControl, Atlassian Bamboo and so on for this.
To take full advantage of this, you may also want to look at an automated build tool like Ant or Maven; your continuous integration build will then use this as its build runner.
A good starting point would be the Martin Fowler page on CI or the Wikipedia one.