LNK2005 error because I have two C++ programs running in parallel - c++

I am working on a uni project with multiple tasks, and I am trying to debug a simple program; however, I am getting the error "LNK2005 main already defined in task 1".
I realise this is because I have used "int main()" for both tasks (I have code for task 1 and code for task 2). I do not want to have to create a new project folder for every task. Is there a way around this?

While it is generally advisable to have a project for each executable you build, you can get away with having a single project for multiple executables if you manage to somehow get rid of the undesired duplicate mains. You have quite a few options available to you:
Have only one main. Have it test its own executable name, and take specific action depending on the name it finds (see the first sketch below). In the post-build rules, set up rules for creating each (specifically named) executable from your base executable. This allows you to build all your executables at the same time in a fairly efficient manner.
Have multiple mains, but hide them using #ifdefs (see the second sketch below). Add a #define to the project settings or just somewhere above main(), and compile as needed. This is OK if you don't want to build all your executables all the time.
Just bite the bullet and set up multiple projects.
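For the first option, here is a minimal sketch of the dispatching main. The run_task1/run_task2 functions are hypothetical stand-ins for your actual task code, and the name matching is deliberately crude:

    // main.cpp -- one entry point that dispatches on the executable's own name.
    #include <cstdio>
    #include <cstring>

    int run_task1() { std::puts("task 1"); return 0; }
    int run_task2() { std::puts("task 2"); return 0; }

    int main(int argc, char* argv[])
    {
        // argv[0] usually holds the path the program was invoked with; a
        // post-build step copies the base executable to task1.exe, task2.exe, ...
        if (argc > 0 && std::strstr(argv[0], "task1"))
            return run_task1();
        if (argc > 0 && std::strstr(argv[0], "task2"))
            return run_task2();
        std::fprintf(stderr, "unrecognised executable name\n");
        return 1;
    }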
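For the second option, a sketch of the #ifdef approach. BUILD_TASK1/BUILD_TASK2 are macro names I made up; define exactly one of them in the project settings (C/C++ -> Preprocessor) or above main():

    // tasks.cpp -- both entry points exist in the source, but only one survives
    // preprocessing, selected by the macro defined for this build.
    #include <iostream>

    #if defined(BUILD_TASK1)
    int main()
    {
        std::cout << "Running task 1\n";
        return 0;
    }
    #elif defined(BUILD_TASK2)
    int main()
    {
        std::cout << "Running task 2\n";
        return 0;
    }
    #else
    #error Define BUILD_TASK1 or BUILD_TASK2 to select which task to build
    #endif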
Whatever you do, consider that being able to build everything you have in a single step is considered a highly desirable trait of build systems, and is usually high on the list of features a properly engineered development process should have.

What could be the simplest way to incorporate Windows WPP Software Tracing into SCons builds?

I ask my question in such a specific way because I am afraid that a more generic form could lead to excessively theoretical discussions of how things should best be done (like a question about pre- and post-process actions in SCons).
Incorporating WPP actually requires executing an additional command (or commands) before a file is compiled, and only when the build process finds it necessary to compile that file anyway, without any regard to WPP.
I would remark that in Visual Studio this is easily achieved with a few lines of definitions in a shared property page file, which makes it work for multiple files in multiple projects, folders, etc., in a way that is absolutely transparent to developers.
Thus I am wondering whether this can be done in a similarly simple way with SCons? I do not have deep knowledge of either the SCons or MSBuild frameworks; I work with them for simple practical use, so I would truly appreciate practical and useful advice.
Here's what I'd suggest.
SCons builds command lines from Environment() variables.
For example, the compile command line for building a C++ shared object is stored in SHCXXCOM (the string displayed to the user when the command is run defaults to the contents of SHCXXCOM, but can be changed by setting SHCXXCOMSTR).
Back to the problem at hand.
Assuming you have a limited number of build steps you want to wrap, you can do something like:
env['SHCXXCOM'] = [ 'WPP PRE COMMAND LINE', env['SHCXXCOM'], 'WPP POST COMMAND LINE']
You'll have to figure out which variables you need to do this with; take a look at the manpage:
https://scons.org/doc/production/HTML/scons-man.html
p.s. I've not tried this, but in theory it should work. Let us know if not.

Visual Studio: Can I add a custom build step without adding a file? (C/C++)

Not sure what the best way to explain what I want is, but here goes...
I have a header file ('generated.h') that is generated from another header file ('interface.h') using some python script.
If I add a custom build step to generated.h, that's a circular dependency. Also, 'generated.h' doesn't even exist in a new workspace, so it gets a bit more confusing there.
Should I instead change interface.h to a custom build tool?
'generated.h' is only for use in testing (the generated.h files are mock headers), and there may be several.
Therefore I don't really want to add a custom build step to interface.h, since that's used in "real" code. It's not really interface.h's responsibility to generate 'generated.h' (or is it?).
I could add the script as an item next to 'generated.h', but if there are many such generated.h-like files, I would need to modify the script to accept multiple sets of arguments, or otherwise find a way to add the generation script several times.
What would you recommend?

How do we do releases involving multiple platforms using Jenkins?

We have a bit of a mess of a situation on our build machine...
I finally managed to get one matrix build working, a "first check that everything compiles" type of task which just compiles everything on the platform it's currently running on. It runs fine on multiple platforms (about the only problem it might have is that it compiles the Java code multiple times when it could probably be optimised to do that once.)
I imagine that setting up a matrix build for "build installers" would be not too hard, either.
But there are two problems which will definitely hit.
There's one zip file we redistribute which ideally would contain all platform-dependent binaries in a single zip file to reduce duplication (essentially it's a library we hand out to others.)
The process we have for copying the actual releases up to the server relies on every single generated file for the same version number of the same product being ready before the build starts. No single-OS builds would have a complete enough view of the produced files to be able to do the release and it doesn't seem to be possible to add build steps which run in the parent job.
We're using Archive for Clone Workspace SCM as a post-build step for this initial matrix build, but it looks like that runs independently on each OS and no attempt is made to merge the results together.
How do other people get around all these issues?
I know I can just ditch matrix builds entirely and do everything by configuring multiple copies of each job, but we have three platforms now and the number of jobs would skyrocket.
Options which involve alternatives to Jenkins will be looked at seriously as well, as lately... the number of problems we have been having with it is enormous.
Here's what we eventually set up which is working:
Project "platform-releases" is still a "Matrix Build".
Slaves are each labeled with their relevant platform and we use the slave label as the parameters of the matrix.
The "Archive Artifacts" is used to archive the useful files. ("Clone Workspace SCM" seems to be a dead end when working with matrix builds.)
Project "unified-releases" is a normal build which copies in the artifacts from platform-releases. Since we copy the artifacts without specifying a specific platform, we get the artifacts for all platforms, which appear in directories named /os/windows, /os/macosx, etc.
The Ant build knows about the location of the artifacts (they are outside the working copy) and pulls all the files into a single directory structure before uploading.
"Parameterized Trigger" plugin is set up for platform-releases to trigger unified-releases so that the svn version it's working with matches the actual files produced. Unfortunately, we have to check out the entire repository despite only using a tiny portion of it, because there is a bug (suspected to be in the subversion plugin) which prevents subdirectories being checked out with the correct revision otherwise.

C++ vim IDE. Things you'd need from it

I am going to create an extensible C++ IDE plugin for Vim. It is not a problem to make one which satisfies my own needs.
This plugin is going to work with workspaces, projects and their dependencies.
This is for Unix-like systems with GCC as the C++ compiler.
So my question is: what are the most important things you'd need from an IDE? Please take into account that this is Vim, where almost everything (almost) is possible.
Several questions:
How often do you manage different workspaces with projects inside them, and the relationships between those projects? What are the most annoying things in this process?
Is it necessary to recreate the "project" from the Makefile?
Thanks.
Reason to create this plugin:
With a bunch of existing plugins, plus self-written ones, we can simulate most of these things. That is OK when we work on one big "never-ending" project.
It is good when we already have a makefile or jam file, and bad when we have to create our own, mostly by copying and pasting existing ones.
All ctags- and cscope-related things have to know the list of the real project files, and we create such lists ourselves. Something like <project#get_list_of_files()> and many similar functions could make a good project API for cooperating with existing and future plugins.
Cooperating with existing makefiles can help to find out the list of the real project files and the executable name.
With a plugin system inside the plugin, there can be different project templates.
Above are some of the reasons why I will start the job. I'd like to hear yours.
There are multiple problems. Most of them are already solved by independent and generic plugins.
Regarding the definition of what a project is.
Given a set of files in the same directory, each file can be the single file of its own project -- I always have a tests/ directory where I host pet projects, or where I test the behaviour of the compiler. Conversely, the files from a set of directories can be part of one and the same very big project.
In the end, what really defines a project is a (leaf) "makefile" -- and why restrict ourselves to makefiles: what about scons, autotools, ant, (b)jam, aap? And BTW, Sun Makefiles or GNU Makefiles?
Moreover, I don't see any point in having vim know the exact files in the current project. And even so, the well-known project.vim plugin already does the job. Personally I use a local_vimrc plugin (I'm maintaining one, and I've seen two others on SF). With this plugin, I just have to drop a _vimrc_local.vim file in a directory, and what is defined in it (:mappings, :functions, variables, :commands, :settings, ...) will apply to each file under that directory -- I work on a big project having a dozen subcomponents; each component lives in its own directory and has its own makefile (not even named Makefile, nor named after the directory).
Regarding C++ code understanding
Every time we want to do something complex (refactorings like rename-function, rename-variable, generate-switch-from-current-variable-which-is-an-enum, ...), we need vim to have an understanding of C++. Most of the existing plugins rely on ctags. Unfortunately, ctags comprehension of C++ is quite limited -- I have already written a few advanced things, but I'm often stopped by the poor information provided by ctags. cscope is no better. Eventually, I think we will have to integrate an advanced tool like elsa/pork/ionk/deshydrata/....
NB: That's where, now, I concentrate most of my efforts.
Regarding Doxygen
I don't know how difficult it is to jump to the doxygen definition associated with the current token. The first difficulty is to understand what the cursor is on (I guess omnicppcomplete has already done a lot of work in this direction). The second difficulty will be to understand how doxygen generates the page name for each symbol in the code.
Opening vim at the right line of code from a doxygen page should be simple with a greasemonkey plugin.
Regarding the debugger
There is the pyclewn project for those that run vim under linux, and with gdb as debugger. Unfortunately, it does not support other debuggers like dbx.
Responses to other requirements:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
My BuildToolsWrapper plugin has a g:BTW_run_parameters option (easily overridden with project/local_vimrc solutions). Adding a mapping that asks for the arguments to use is really simple (see :h inputdialog()).
work with source control system
There already exist several plugins addressing this issue. This has nothing to do with C++, and it must not be addressed by a C++ suite.
debugger
source code navigation tools (now I am using http://www.vim.org/scripts/script.php?script_id=1638 plugin and ctags)
compile lib/project/one source file from ide
navigation by files in project
work with source control system
easy access to file changes history
functions to rename files/variables/methods
easy access to C++ help
easy changing of project settings (Makefiles, jam, etc.)
fast autocompletion for paths/variables/methods/parameters
smart indentation for new scopes (it would also be good if the developer could set up the indentation rules)
highlighting indentation that violates the code convention (tabs instead of spaces, spaces after ";", spaces near "(" or ")", etc.)
reformatting a selected block according to the convention
Things I'd like in an IDE that the ones I use don't provide:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
A "Tools" menu that is configurable on a per-project basis
Ability to rejig the keyboard mappings for every possible command.
Ability to produce lists of project configurations in text form
Intelligent floating (not docked) windows for debugger etc. that pop up only when I need them, stay on top and then disappear when no longer needed.
Built-in code metrics analysis so I get a list of the most complex functions in the project and can click on them to jump to the code
Built-in support for Doxygen or similar so I can click in a Doxygen document and go directly to code. Should also navigate in reverse, from code to Doxygen.
No doubt someone will now say Eclipse can do this or that, but it's too slow and bloated for me.
Adding to Neil's answer:
integration with gdb as in emacs. I know of clewn, but I don't like that I have to restart vim to restart the debugger. With clewn, vim is integrated into the debugger, but not the other way around.
Not sure if you are developing on Windows, but if you are I suggest you check out Viemu. It is a pretty good VIM extension for Visual Studio. I really like Visual Studio as an IDE (although I still think VC6 is hard to beat), so a Vim extension for VS was perfect for me. Features that I would prefer worked better in a Vim IDE are:
The macro recording is a bit error-prone, especially with indentation. I find I can easily and often record macros in Vim while I am editing code (e.g. taking an enum definition from a header and cranking out a corresponding switch statement), but found that Viemu is a bit flaky in that department.
The Vim code completion picks up words in the current buffer, whereas Viemu hooks into the VS code completion stuff. This means that if I have just created a method name and I want to hit Ctrl-] to auto-complete, Vim will pick it up, but Viemu won't.
For me, it's just down to the necessities:
nice integration with ctags, so you can do jump to definition
intelligent completion, that also give you the function prototype
easy way to switch between code and headers
interactive debugging with breakpoints
maybe folding
extra bonus points for refactoring tools like rename or extract method
I'd say stay away from defining projects - just treat the entire file branch as part of the "project" and let users have a settings file to override that default
99% of the difference in speed I see between IDE and vim users is code lookup and navigation. You need to be able to grep your source tree for a phrase (or intelligently look for the right symbol using ctags), show all the hits, and switch to that file in like two or three keystrokes.
All the other crap like repository navigation or interactive debugging is nice, but there are other ways to solve those problems. I'd say drop the interactive debugging even. Just focus on what makes IDEs good editors - have a "big picture" view of your project, instead of single file.
In fact, are there any plugins for vim that already achieve this?

Avoiding unnecessary recompilations using a "branchy" development model

I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to build from scratch (while incremental builds are very quick).
I usually try to implement each new feature in a new branch (using "hg clone"), and I may have several new features in development during the day, so it quickly gets very boring to wait for each new feature branch to build.
Are there any recipes to somehow re-use object files from other already built branches?
P.S. In git there are named branches within the same repository, which make reuse of the existing object files possible for the build system; however, I prefer Mercurial's simpler model of separate clones per branch...
I suggest using ccache as a way to speed up compilation of (mostly) the same code tree. It works as follows:
You define a place to be used as the cache (and the maximum cache size) by using the CCACHE_DIR environment variable
Your compiler should be set to ccache ${CC} or ccache ${CXX}
ccache takes the output of ${CC} -E and the compilation flags and uses that as a base for its hash. As long as the compiler flags, source file and the headers are all unchanged, the object file will be taken from cache, saving valuable compilation time.
Note that this method speeds up compilation of any source file that eventually produces the same hash. If you share source files across projects, ccache will handle them as well.
If you already use distcc and wish to use it with ccache, set the CCACHE_PREFIX environment variable to distcc.
Using ccache sped up our source tree compilation around tenfold.
A simple way to speed up your builds could be to use a local "build directory" on your disk. This way you can check out into this directory and start the build. The first time it will take the full time, but after that it will (hopefully) only rebuild the files whose source code changed.
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
Whoops, I missed your P.S., where you say you don't like having multiple named branches in the same repo and prefer separate clones... sorry about that.
I too have somewhat large C++ projects, and the clone-per-feature workflow didn't work very well for me. Firstly, I had to close down my Vim session and then reopen (many of the same) files once I'd created the clone. Secondly, like you said, a lot of code must be recompiled unnecessarily. Thirdly, I have to keep track of where I've pushed to and pulled from - it gets confusing when you start a new feature and then get sidetracked onto another one. Before you know it you have many clones and you're not sure which ones need to be pushed back to your main.
You definitely don't want to use named branches (as I'm sure you know) to handle this as they are quite permanent.
What you need are bookmarks: https://www.mercurial-scm.org/wiki/BookmarksExtension
Bookmarks allow you to create lightweight (and otherwise anonymous) branches per feature by facilitating the naming of heads in your repo. These heads would normally be unnamed, and you would have to look at the output of 'hg log' or use some graphical tool to find the revision numbers for the tip of your feature branch. With bookmarks you can give them descriptive names like 'my-cool-feature' or 'bugfix-392'.
If you like the idea of bookmarks, I'd also recommend my own extension called 'tasks': http://bitbucket.org/alu/hgtasks. This extension works like bookmarks but adds some more functionality. It allows you to create feature branches (now called tasks) and suppress the pushing of incomplete tasks. This is handy when you have a few feature branches going at once. You may not be ready to push your 'my-cool-feature' task, but 'bugfix-392' is ready to go. Because tasks track a set of changesets (and not just one 'tip' changeset), there are some things you can do with tasks that you can't with bookmarks. See an example workflow here: http://x.zpuppet.org/2009/03/09/mercurial-tasks-extension/.
Mercurial also has local named branches, see the hg branch command.
If you insist on using hg clone to do branchy development, I guess you could try creating a folder link (a shortcut under Windows) in your repo to a shared obj folder. This will work with hg clone, but I'm not sure your build tool will pick it up.
Otherwise, you probably keep all your repos in one folder - just put your obj folder there (it shouldn't be under source control anyway, IMO). Use relative paths to refer to it.
A word of warning: many .o symbol tables (or equivalent) contain the full path name of the source file. If that other file changes (or if the path is not visible from the new directory) you may encounter weirdness when debugging.