AssemblyInfo.cs needs to change in source control after the build completes

I'm using TFSBuildExtensions to auto version my libraries.
I've got the build template modified so that it does that nicely, although it doesn't actually update the AssemblyInfo.cs file itself.
Now I would like the AssemblyFileVersion attribute in AssemblyInfo.cs to be updated in source control. How can that be done, and why isn't it being done by any of the projects I've seen out there?
Is there a reason not to have this updated in source control (essentially check out the file and update it)?

The way I see it, if you want to update the Version number of your assemblies you have 2 choices:
Update the one(s)* in source control and then read them on build to get the version. You can update the existing version with regex replaces, custom scripts, etc. (see the sketch below).
Get the build to set the version numbers as it goes.
If you are doing #1, you don't need #2.
If you are doing #2, it's normally because you don't want a check-in for each version - otherwise you would just do #1 :). What you are suggesting is automating #1's check-in from the build, albeit after you edit the version number in the build definition, so it isn't without user interaction.
If you really want this, you'll have to mess with the template a bit more; people who use the extension you linked normally just do #2.
* You can link many projects to a single file if you want.
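As a rough illustration of #1, the version bump is just a regex replace over AssemblyInfo.cs run before compilation. The sketch below is a minimal, hypothetical example - in a TFS build you would more typically do this in a PowerShell script or a build activity, and the file path and new version number here are placeholders:

#include <fstream>
#include <regex>
#include <sstream>
#include <string>

int main()
{
    // Placeholder path and version; a real script would take these
    // from the build definition rather than hard-coding them.
    const std::string path = "Properties/AssemblyInfo.cs";
    const std::string newVersion = "1.2.3.4";

    // Read the whole file into memory.
    std::ifstream in(path);
    std::stringstream buffer;
    buffer << in.rdbuf();
    std::string content = buffer.str();
    in.close();

    // Replace whatever version is currently in the attribute.
    std::regex attr(R"(AssemblyFileVersion\("[\d\.]+"\))");
    content = std::regex_replace(content, attr,
                                 "AssemblyFileVersion(\"" + newVersion + "\")");

    // Write it back; checking the file out of and into TFS is a separate step.
    std::ofstream out(path);
    out << content;
    return 0;
}

The same replace works for AssemblyVersion if you version both attributes together.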

Related

Get scons to generate a new build number

I'd like to get scons to read a previous version number from a file, update a source file with a new version number and current date and then write the number back to the original file ready for the next build.
This needs to happen only when the target is out of date. IOW, the version number doesn't change if no build takes place. The original file is source controlled and isn't a source file, else it could trigger another build on check-in (due to CI). CLARIFICATION: From SCons' point of view the code will always be out of date due to the auto-generated source file, but SCons will only be run from a Continuous Integration job (Jenkins) when an SCM change is detected.
I've looked into AddPostMethod, but this seems to fire for all files within the list of source files.
Command and Builder methods use the VARIANT_DIR so I can't edit these files and then check them back in as they no longer map to the repo.
I'm hoping I'm just misunderstanding some of the finer details of scons else I'm running out of ideas!
Update
Thinking this through some more, Tom's comment is correct. Although I have two files, one version-controlled text file (non-source code) and one non-version-controlled source file, there is no way to check one file in and prevent a continuous build/check-in cycle. Jenkins will see the new text file and spin off a build, and SCons will see the new generated file. So the cycle continues unless I delete the generated file at some point, although that seems to go against the workflow of both tools.
Does anyone have any method for achieving this? It seems pretty straightforward. Ultimately I just want to generate build numbers each time a build is started.
From SCons User Guide section 8, Order-Only Dependencies, you can use the Requires method:
import time
# put whatever text you want in your version.c; this is just regular python
version_c_text = """
char *date = "%s";
""" % time.ctime(time.time())
open('version.c', 'w').write(version_c_text)
version_obj = Object('version.c')

# Pass the object file through LINKFLAGS rather than as a source, so the
# program does not pick up a regular (rebuild-triggering) dependency on it.
hello = Program('hello.c',
                LINKFLAGS=str(version_obj[0]))

# Order-only dependency: version_obj is built before hello, but changing
# it alone does not force hello to be relinked.
Requires(hello, version_obj)
Two things to note: first, you have to add the explicit Requires dependency. Second, you can't make version_obj a source of the Program builder; you have to cheat (here we pass it as a link flag), otherwise you'll get an automatic full dependency on it.
This will always update version.c, but won't rebuild hello just because version.c changed.

Should these auxiliary files be under Git version control?

I decided to start using the Git version control system for my C++ project. I'm new to version control. For the trunk things are simple, I just commit all the project versions I have. I kept each version as a separate folder because I knew I'd very soon use Git. But I encountered a problem with my branches.
At some stage of the development, I decided there's one class I want to develop in a branch. Without version control, I had to make a "manual" branch: I copied the most recent header file and source file of that class to a separate folder and started working there. I made several versions there to work with simultaneously. One version was the first prototype of the class according to the plan (for which I made the "branch"). Then I added another file, into which I copied the first one but removed things that seemed unnecessary. This way I have 2 versions: one with all my ideas and features, and the other one with just what I really use in my code, without what's not in use at the moment.
But then I added more. As development went on, I decided it might be a good idea to make that class a template. So I added a third version, which is just like the second one, but now some functionality that was implemented using polymorphism is implemented using a template. I can't tell yet which version is the best, as it's too early to say, so I want to keep all 3 together.
Then I made another special file: a copy of the third version's header file, in which each line can be marked or not marked. Marked means I use that specific method or I'm sure it's going to be in use very soon; otherwise the line isn't marked.
Then, some time later, I started a new branch. And for that branch I needed a new version of that class developed in the first branch. So I just copied one of the versions to the new branch's folder and started working there. Now again I had some kind of auxiliary file: I had 2 files, one from which I delete class methods I use, and one into which I write new methods I need to have.
Now I want to start using Git and I wonder: for all the project's text files, plans, diagrams, etc., it's obvious - I keep them outside the Git repo. Whenever collaborative editing is needed I can set up a wiki or something like that. But what do I do with all those copies of the same header file, and with those auxiliary "marked" files? I mean, it's fine by me to have them all in a branch, but what happens when I merge a branch into the trunk? I don't want to have all these copies and versions and lists, just the one final class file I've made.
On one hand, these are C++ source files used while coding. On the other hand, they're not part of the pure source code of the software package; they just help me while I work but will never get compiled, because in the end there's just the final version of the class which I chose to merge, and all the other aux files, lists, etc. are kept just for reference.
What would be the best thing to do?
Thanks for reading my long story :)
EDIT: It's a local repo on my personal computer
Always keep documentation in the same repository as the source code. If you do not, your documentation will rot. That is because documentation is written against some version of your software, so it has to evolve the same way the software evolves.
If your documentation is automatically generated or compiled into another format, commit only the source data, the makefile and the generator configuration, just like you do with source code.
What you describe is the normal use of branches: you have your master branch ("official", as it were) and a branch to develop a new feature (it doesn't really have to live in a separate directory, if I understand you correctly). Periodically you synchronize the feature branch with the master, either by rebasing it on the master or by merging its changes in. In turn, you can well have subordinate branches in which you try out approaches to developing the feature, handled with respect to the feature branch just like that one is with respect to the master. But in that case you have to be careful whenever you rebase.
You should keep any data that isn't easy to recreate in the repository, be it source code, documentation or even design sketches. Stuff that can be recreated (object code, automatically formatted documentation, ...) should be kept out (any change there will create a difference to be checked in). Your repository (particularly unpublished branches) is your own workspace; it can be as messy as you like.
Take a look at the book mentioned at the git homepage.
Well, that’s clearly documentation and not source code, so you should separate it from your source code. As your documentation seems to be branch dependent, you should still check it into the repo, but in a separate doc directory.
About the merging: How a merge works is up to you in the end. Git just has a default merge strategy which is what most people want most of the time. But if you say that a merge into the main branch should just bring the code and not the docu, then that’s fine. Just merge that way:
git merge mybranch --no-commit
rm -rf <docu-dir>
git add -A
git commit

Wix: How to add files to the RemoveFiles table from c++

I've been following the advice in this question.
How to add a WiX custom action that happens only on uninstall (via MSI)?
I have an executable running as a custom action after InstallFinalize, with which I intend to purge all my files and folders. I was just going to write some standard deletion logic, but I'm stuck on the point that Rob Mensching made that the Windows Installer should handle this in case someone bails midway through an uninstallation.
"create a CustomAction that adds temporary rows to the RemoveFiles table"
I'm looking for some more information on this. I'm not really sure how to achieve this in c++ and my searching hasn't turned up a whole lot.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa371201(v=vs.85).aspx
Thanks
Neil
EDIT: I've marked the answer because the question was specifically about how to add files to the RemoveFiles table in C++; however, I'm inclined to agree that the better solution is to use the RemoveFolderEx functionality in WiX, even though it is currently in beta (3.6 I think).
Roughly you will have to use the following functions in this order:
MsiDatabaseOpenView - the (input) database handle comes from MsiGetActiveDatabase, called on the install handle you get inside your custom action function
MsiCreateRecord - to create a record with the SQL stuff inside
MsiRecord* - set of functions to prepare the record
MsiViewExecute - to insert the new record into whatever table you please ...
MsiCloseHandle - with the handle from the very first step and the record handle (from MsiCreateRecord)
Everything is explained in detail over at MSDN. However, pay special attention to the section "Functions Not for Use in Custom Actions".
The documentation of MsiViewExecute also explains how the SQL queries should look. To get a feel for them you may want to use one of the .vbs scripts that are part of the Windows Installer SDK.
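To make that concrete, here is a rough C++ sketch of the sequence above, assuming an immediate custom action sequenced before the standard RemoveFiles action (the table is actually named RemoveFile; the entry-point name, component name, directory property and file spec below are placeholders you would replace with values from your own package):

#include <windows.h>
#include <msi.h>
#include <msiquery.h>

#pragma comment(lib, "msi.lib")

// Hypothetical custom action entry point.
extern "C" UINT __stdcall ScheduleFileRemoval(MSIHANDLE hInstall)
{
    UINT er = ERROR_INSTALL_FAILURE;

    // The database handle for the running session.
    MSIHANDLE hDb = MsiGetActiveDatabase(hInstall);
    MSIHANDLE hView = NULL;
    MSIHANDLE hRec = NULL;
    if (!hDb)
        return er;

    // TEMPORARY keeps the row out of the stored package; the ? markers
    // are filled from the record passed to MsiViewExecute.
    if (MsiDatabaseOpenViewW(hDb,
            L"INSERT INTO `RemoveFile` (`FileKey`, `Component_`, `FileName`, "
            L"`DirProperty`, `InstallMode`) VALUES (?, ?, ?, ?, ?) TEMPORARY",
            &hView) != ERROR_SUCCESS)
        goto cleanup;

    // Build the record with the five column values.
    hRec = MsiCreateRecord(5);
    MsiRecordSetStringW(hRec, 1, L"PurgeMyFiles");     // arbitrary unique key
    MsiRecordSetStringW(hRec, 2, L"MainComponent");    // an existing component (placeholder)
    MsiRecordSetStringW(hRec, 3, L"*");                // wildcard: every file in the folder
    MsiRecordSetStringW(hRec, 4, L"INSTALLLOCATION");  // property holding the folder (placeholder)
    MsiRecordSetInteger(hRec, 5, 2);                   // 2 = remove on uninstall

    // Executing the view performs the insert.
    if (MsiViewExecute(hView, hRec) == ERROR_SUCCESS)
        er = ERROR_SUCCESS;

cleanup:
    if (hRec)  MsiCloseHandle(hRec);
    if (hView) MsiCloseHandle(hView);
    MsiCloseHandle(hDb);
    return er;
}

Using MsiViewModify with MSIMODIFY_INSERT_TEMPORARY on a SELECT view is an alternative to the TEMPORARY keyword in the SQL.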
If you use WiX to create your installation package, consider using the RemoveFolderEx element. It does what you want and you don't have to write the code yourself.
Read Tactical directory nukes for an example of how to use it.
If you still want to implement it yourself, you can get your inspiration from this blog post, there's the code for doing this in VBScript.

C++ vim IDE. Things you'd need from it

I am going to create an extensible C++ IDE plugin for Vim. It is not a problem to make one that satisfies my own needs.
This plugin is going to work with workspaces, projects and their dependencies.
This is for Unix-like systems with GCC as the C++ compiler.
So my question is: what are the most important things you'd need from an IDE? Please take into account that this is Vim, where almost everything (almost) is possible.
Several questions:
How often do you manage different workspaces with projects inside them, and the relationships between them? What are the most annoying things in this process?
Is it necessary to recreate a "project" from the Makefile?
Thanks.
Reason to create this plugin:
With a bunch of plugins, plus self-written ones, we can simulate most of these things. That is fine when we work on one big, never-ending project.
It is good when we already have a makefile or jam file; bad when we have to create our own, mostly by copying and pasting existing ones.
All ctags- and cscope-related things have to know the list of the real project files, and we create such lists ourselves. Something like <project#get_list_of_files()> and many similar calls could be good project API functions for cooperating with existing and future plugins.
Cooperation with existing makefiles can help find out the list of the real project files and the executable name.
With a plugin system inside the plugin, there can be different project templates.
Those are some of my reasons for starting the job. I'd like to hear yours.
There are multiple problems. Most of them are already solved by independent and generic plugins.
Regarding the definition of what a project is
Given a set of files in the same directory, each file can be a project of its own -- I always have a tests/ directory where I host pet projects, or where I test the behaviour of the compiler. Conversely, the files from a set of directories can be part of one very big project.
In the end, what really defines a project is a (leaf) "makefile" -- and why restrict ourselves to makefiles: what about scons, autotools, ant, (b)jam, aap? And BTW, Sun Makefiles or GNU Makefiles?
Moreover, I don't see any point in having Vim know the exact files in the current project. And even so, the well-known project.vim plugin already does the job. Personally I use a local_vimrc plugin (I'm maintaining one, and I've seen two others on SF). With this plugin, I just have to drop a _vimrc_local.vim file in a directory, and what is defined in it (:mappings, :functions, variables, :commands, :settings, ...) will apply to each file under that directory -- I work on a big project with a dozen subcomponents; each component lives in its own directory and has its own makefile (not even named Makefile, nor named after the directory).
Regarding C++ code understanding
Every time we want to do something complex (refactorings like rename-function, rename-variable, generate-switch-from-current-variable-which-is-an-enum, ...), we need vim to have an understanding of C++. Most of the existing plugins rely on ctags. Unfortunately, ctags comprehension of C++ is quite limited -- I have already written a few advanced things, but I'm often stopped by the poor information provided by ctags. cscope is no better. Eventually, I think we will have to integrate an advanced tool like elsa/pork/ionk/deshydrata/....
NB: That's where, now, I concentrate most of my efforts.
Regarding Doxygen
I don't know how difficult it is to jump to the Doxygen documentation associated with the current token. The first difficulty is to understand what the cursor is on (I guess omnicppcomplete has already done a lot of work in this direction). The second difficulty will be to understand how Doxygen generates the page name for each symbol in the code.
Opening vim at the right line of code from a doxygen page should be simple with a greasemonkey plugin.
Regarding the debugger
There is the pyclewn project for those that run vim under linux, and with gdb as debugger. Unfortunately, it does not support other debuggers like dbx.
Responses to other requirements:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
My BuildToolsWrapper plugin has a g:BTW_run_parameters option (easily overridden with project/local_vimrc solutions). Adding a mapping that asks for the arguments to use is really simple (see :h inputdialog()).
work with source control system
There already exist several plugins addressing this issue. This has nothing to do with C++, and it must not be addressed by a C++ suite.
debugger
source code navigation tools (now I am using http://www.vim.org/scripts/script.php?script_id=1638 plugin and ctags)
compile lib/project/one source file from ide
navigation by files in project
work with source control system
easy access to file change history
rename file/variable/method functions
easy access to c++ help
easy change project settings (Makefiles, jam, etc)
fast autocompletion for paths/variables/methods/parameters
smart indentation for new scopes (it would also be good if the developer could set up the indentation rules)
highlighting indentation that breaks the code convention (tabs instead of spaces, spaces after ";", spaces near "(" or ")", etc.)
reformatting a selected block according to the convention
Things I'd like in an IDE that the ones I use don't provide:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
A "Tools" menu that is configurable on a per-project basis
Ability to rejig the keyboard mappings for every possible command.
Ability to produce lists of project configurations in text form
Intelligent floating (not docked) windows for debugger etc. that pop up only when I need them, stay on top and then disappear when no longer needed.
Built-in code metrics analysis so I get a list of the most complex functions in the project and can click on them to jump to the code
Built-in support for Doxygen or similar so I can click in a Doxygen document and go directly to code. Should also navigate in reverse, from code to Doxygen.
No doubt someone will now say Eclipse can do this or that, but it's too slow and bloated for me.
Adding to Neil's answer:
integration with gdb as in emacs. I know of clewn, but I don't like that I have to restart vim to restart the debugger. With clewn, vim is integrated into the debugger, but not the other way around.
Not sure if you are developing on Windows, but if you are I suggest you check out Viemu. It is a pretty good VIM extension for Visual Studio. I really like Visual Studio as an IDE (although I still think VC6 is hard to beat), so a Vim extension for VS was perfect for me. Features that I would prefer worked better in a Vim IDE are:
The macro recording is a bit error-prone, especially with indentation. I find I can easily and often record macros in Vim while I am editing code (e.g. taking an enum definition from a header and cranking out a corresponding switch statement), but I found that Viemu is a bit flaky in that department.
The Vim code completion picks up words in the current buffer, whereas Viemu hooks into the VS code completion stuff. This means that if I have just created a method name and I want to Ctrl-] to auto-complete, Vim will pick it up, but Viemu won't.
For me, it's just down to the necessities
nice integration with ctags, so you can do jump to definition
intelligent completion, that also give you the function prototype
easy way to switch between code and headers
interactive debugging with breakpoints
maybe folding
extra bonus points for refactoring tools like rename or extract method
I'd say stay away from defining projects - just treat the entire directory tree as part of the "project" and let users have a settings file to override that default.
99% of the difference in speed I see between IDE and vim users is code lookup and navigation. You need to be able to grep your source tree for a phrase (or intelligently look for the right symbol using ctags), show all the hits, and switch to that file in like two or three keystrokes.
All the other crap like repository navigation or interactive debugging is nice, but there are other ways to solve those problems. I'd say drop the interactive debugging even. Just focus on what makes IDEs good editors - have a "big picture" view of your project, instead of single file.
In fact, are there any plugins for vim that already achieve this?

Avoiding unnecessary recompilations using a "branchy" development model

I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to build from scratch (while incremental builds are very quick).
I usually try to implement each new feature in a new branch (using "hg clone") and I may have several new features in development during the day, so it quickly gets very tedious to wait for each new feature branch to build.
Are there any recipes to somehow re-use object files from other already built branches?
P.S. In Git there are named branches within the same repository, which make reuse of the existing object files possible for the build system; however, I prefer the simpler Mercurial model of separate clones per branch...
I suggest using ccache as a way to speed up compilation of (mostly) the same code tree. The way it works is as follows:
You define a place to be used as the cache (and the maximum cache size) via the CCACHE_DIR environment variable.
Your compiler should be invoked as ccache ${CC} or ccache ${CXX} (see the example after this list).
ccache takes the output of ${CC} -E plus the compilation flags and uses that as the basis for its hash. As long as the compiler flags, the source file and the headers are all unchanged, the object file will be taken from the cache, saving valuable compilation time.
Note that this method speeds up compilation of any source file that eventually produces the same hash. If you share source files across projects, ccache will handle them as well.
If you already use distcc and wish to use it with ccache, set the CCACHE_PREFIX environment variable to distcc.
Using ccache sped up our source tree compilation around tenfold.
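As a concrete illustration (the cache path below is just a placeholder), a minimal shell setup before building might look like this:

export CCACHE_DIR=/home/me/.ccache   # where the cache lives
export CC="ccache gcc"               # wrap the C compiler
export CXX="ccache g++"              # wrap the C++ compiler
export CCACHE_PREFIX=distcc          # only if you also use distcc
make                                 # build as usual

The first build after switching branches still pays the preprocessing cost, but identical translation units come straight out of the cache.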
A simple way to speed up your builds could be to use a local "build directory" on your disk. This way you can check out into this directory and start the build. The first time it will take the full time, but after that it will (hopefully) only rebuild the files whose source code changed.
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
Whoops, I missed your P.S. where you say you don't like having multiple named branches in the same repo and prefer separate clones... sorry about that.
I too have somewhat large C++ projects and the clone-per-feature workflow didn't work for me very well. Firstly, I had to close down my Vim session and then reopen (many of the same) files once I'd created the clone. Secondly, like you said, a lot of code must be recompiled unnecessarily. Thirdly, I have to keep track of where I've pushed to and pulled from - it gets confusing when you start a new feature and then get sidetracked onto another one. Before you know it you have many clones and aren't sure which ones need to be pushed back to your main.
You definitely don't want to use named branches (as I'm sure you know) to handle this as they are quite permanent.
What you need are bookmarks: https://www.mercurial-scm.org/wiki/BookmarksExtension
Bookmarks allow you to create lightweight (and otherwise anonymous) branches per feature by facilitating the naming of heads in your repo. These heads would normally be unnamed and you would have to look at the output of 'hg log' or use some graphical tool to find the revision numbers for the tip of your feature-branch. With bookmarks you can name them descriptive names like 'my-cool-feature' or 'bugfix-392'.
If you like the idea of bookmarks, I'd also recommend my own extension called 'tasks': http://bitbucket.org/alu/hgtasks. This extension works like bookmarks but adds some more functionality. It allows you to create feature branches (now called tasks) and suppresses the pushing of incomplete tasks. This is handy when you have a few feature branches at once. You may not be ready to push your 'my-cool-feature' task, but 'bugfix-392' is ready to go. Because tasks track a set of changesets (and not just one 'tip' changeset), there are some things you can do with tasks that you can't with bookmarks. See an example workflow here: http://x.zpuppet.org/2009/03/09/mercurial-tasks-extension/.
Mercurial also has local named branches, see the hg branch command.
If you insist on using hg clone to do branchy development, I guess you could try creating a folder link (a shortcut under Windows) in your repo to a shared obj folder. This will work with hg clone, but I'm not sure your build tool will pick it up.
Otherwise, you probably keep all your repos in one folder - just put your obj folder there (it shouldn't be under source control anyway, imo). Use relative paths to refer to it.
A word of warning: many .o symbol tables (or equivalent) contain the full path name of the source file. If that other file changes (or if the path is not visible from the new directory) you may encounter weirdness when debugging.