How do we do releases involving multiple platforms using Jenkins?

We have a bit of a mess of a situation on our build machine...
I finally managed to get one matrix build working: a "first check that everything compiles" type of task which just compiles everything on whichever platform it's running on. It runs fine on multiple platforms (about the only problem it might have is that it compiles the Java code once per platform when it could probably be optimised to do that once.)
I imagine that setting up a matrix build for "build installers" wouldn't be too hard, either.
But there are two problems which will definitely hit us.
There's one zip file we redistribute which ideally would contain the binaries for every platform in a single archive, to reduce duplication (essentially it's a library we hand out to others.)
The process we have for copying the actual releases up to the server relies on every generated file for the same version of the same product being ready before it starts. No single-OS build would have a complete enough view of the produced files to be able to do the release, and it doesn't seem to be possible to add build steps which run in the parent job.
We're using "Archive for Clone Workspace SCM" as a post-build step for this initial matrix build, but it looks like that runs independently on each OS, and no attempt is made to merge the results together.
How do other people get around all these issues?
I know I could just ditch matrix builds entirely and configure several copies of each job instead, but we have three platforms now and the number of jobs would skyrocket.
Options which involve alternatives to Jenkins will be looked at seriously as well, because lately the number of problems we have been having with it has been enormous.

Here's what we eventually set up, which is working:
Project "platform-releases" is still a "Matrix Build".
Slaves are each labeled with their relevant platform, and we use the slave label as the axis of the matrix.
The "Archive Artifacts" post-build step is used to archive the useful files. ("Clone Workspace SCM" seems to be a dead end when working with matrix builds.)
Project "unified-releases" is a normal build which copies in the artifacts from platform-releases. Since we copy the artifacts without specifying a specific platform, we get the artifacts for all platforms, which appear in directories named /os/windows, /os/macosx, etc.
The Ant build knows about the location of the artifacts (they are outside the working copy) and pulls all the files into a single directory structure before uploading.
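The pull-together step itself is just file shuffling. Here's a rough sketch of what our Ant target effectively does (in Python for brevity; the directory names are illustrative, mirroring the os/windows, os/macosx layout the copied artifacts arrive in):

import shutil
from pathlib import Path

# Illustrative paths: copied-artifacts/os/<platform> is where the
# per-platform artifacts land; release/ is the unified tree we upload.
artifacts = Path("copied-artifacts/os")
release = Path("release")

for platform_dir in artifacts.iterdir():  # os/windows, os/macosx, ...
    for f in platform_dir.rglob("*"):
        if f.is_file():
            dest = release / platform_dir.name / f.relative_to(platform_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)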
"Parameterized Trigger" plugin is set up for platform-releases to trigger unified-releases so that the svn version it's working with matches the actual files produced. Unfortunately, we have to check out the entire repository despite only using a tiny portion of it, because there is a bug (suspected to be in the subversion plugin) which prevents subdirectories being checked out with the correct revision otherwise.

Related

LNK2005 error because I have two C++ mains in one project

I am working on a uni project with multiple tasks, and I am trying to debug a simple program; however, I am getting the error "LNK2005 main already defined in task 1".
I realise this is because I have used int main() for both tasks (I have code for task 1 and code for task 2). I do not want to have to create a new project folder for every task. Is there a way around this?
While it is generally advisable to have a project for each executable you build, you can get away with having a single project for multiple executables if you manage to somehow get rid of the undesired duplicate mains. You have quite a few options available to you:
Have only one main. Have it test its own executable name, and take specific action depending on the name it finds. In the post-build rules, set up rules for creating each (specifically named) executable from your base executable (see the sketch after these options). This allows you to build all your executables at the same time in a fairly efficient manner.
Have multiple mains, but hide them using #ifdefs. Add a #define to the project settings or just somewhere above main(), and compile as needed. This is ok if you don't want to build all your executables all the time.
Just bite the bullet and set up multiple projects.
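For the first option, the post-build rule can be as small as a script that copies the one built executable to each task-specific name; the single main() then dispatches on argv[0] at runtime. A sketch, with all the names being hypothetical:

import shutil

# Hypothetical layout: one base executable, copied once per task name.
base = "Debug/tasks.exe"
for name in ("task1.exe", "task2.exe"):
    shutil.copy2(base, "Debug/" + name)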
Whatever you do, consider that being able to build everything you have in a single step is considered a highly desirable trait of build systems, and is usually high on the list of features a properly engineered development process should have.

How consistent are two shared object (.so) files which have the same code?

Is there any consistency between two builds when generating a .so file, e.g. when we perform a clean build each time?
Basically, I want the .so file for an app at a previous state of the code (C++). The files that changed were very few, and I have reverted them. If I build now, will the .so file be the same as the one I got before?
I can replicate the code state to exactly what it was before. I need that in order to map a stack trace to the code, as this file lets us map hex values to function names.
Thanks
Assuming you are building from the same source code with the same build options, your output product should be the same (with the possible exception of some timestamps embedded into the code). Any compiler/kernel/library upgrades might break this guarantee.
This is exactly what version control (especially tagged snapshots) is for.
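If you want to verify that the rebuilt .so really matches the original byte for byte, hashing both files is a quick check; a minimal sketch, with the paths assumed for illustration:

import hashlib

def digest(path):
    # Hash the whole file; any embedded timestamp shows up as a difference.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(digest("old-build/libapp.so") == digest("new-build/libapp.so"))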

Get scons to generate a new build number

I'd like to get scons to read a previous version number from a file, update a source file with a new version number and current date and then write the number back to the original file ready for the next build.
This needs to happen only when the target is out of date; in other words, the version number doesn't change if no build takes place. The original file is source-controlled and isn't a source file, or else it could trigger another build on check-in (due to CI). Clarification: from scons' point of view the code will always be out of date due to the auto-generated source file, but scons will only be run from a Continuous Integration job (Jenkins) when an SCM change is detected.
I've looked into AddPostAction, but this seems to fire for all files within the list of source files.
Command and Builder methods use the variant dir, so I can't edit those files and then check them back in, as they no longer map to the repo.
I'm hoping I'm just misunderstanding some of the finer details of scons else I'm running out of ideas!
Update
Thinking this through some more, Tom's comment is correct: although I have two files, one version-controlled text file (non-source code) and one non-version-controlled source file, there is no way to check one in without triggering a continuous build/check-in cycle. Jenkins will see the new text file and spin off a build, and scons will see the newly generated file, unless I delete the generated file at some point, which seems to go against the workflow of both tools.
Does anyone have a method for achieving this? It seems like it should be straightforward. Ultimately I just want to generate a new build number each time a build is started.
From SCons User Guide section 8, Order-Only Dependencies, you can use the Requires method:
import time

# Put whatever text you want in your version.c; this is just regular Python,
# run every time SCons reads this file.
version_c_text = """
char *date = "%s";
""" % time.ctime(time.time())
open('version.c', 'w').write(version_c_text)

# Compile version.c, but keep it out of the Program's sources: passing the
# object path via LINKFLAGS links it in without creating a full dependency.
version_obj = Object('version.c')
hello = Program('hello.c',
                LINKFLAGS=str(version_obj[0]))

# Order-only dependency: version.o must be built before hello links, but a
# change to it alone does not force a relink.
Requires(hello, version_obj)
Two things to note: first, you have to add the explicit Requires dependency. Second, you can't make version_obj a source of the Program builder; you have to cheat (here we pass it as a link flag), otherwise you'll get an automatic full dependency on it.
This will always update version.c, but won't rebuild just because version.c changed.
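For the build-number half of the question, the same pattern extends naturally: read the previous number from the tracked text file, bump it, and emit it into the generated source alongside the date. A minimal sketch under the question's setup (the file names are illustrative, and it does nothing about the check-in/CI loop discussed in the update above):

import time
from pathlib import Path

# Illustrative names: build_number.txt is the tracked counter file,
# version.c is the generated (untracked) source file.
counter = Path("build_number.txt")
build = int(counter.read_text().strip()) + 1
counter.write_text("%d\n" % build)  # ready for the next build

Path("version.c").write_text(
    'char *date = "%s";\nint build_number = %d;\n'
    % (time.ctime(), build)
)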

Should these auxiliary files be under Git version control?

I decided to start using the Git version control system for my C++ project. I'm new to version control. For the trunk things are simple, I just commit all the project versions I have. I kept each version as a separate folder because I knew I'd very soon use Git. But I encountered a problem with my branches.
At some stage of the development, I decided there was one class I wanted to develop in a branch. Without version control, I had to make a "manual" branch. I copied the most recent header file and source file of that class to a separate folder and started working there. I made several versions there to work with simultaneously. One version was the first prototype of the class according to the plan (for which I made the "branch"). Then I added another file, in which I copied the first one but removed things that seemed unnecessary. This way I have 2 versions: one with all my ideas and features, and the other just with what I really use in my code, without what's not in use at the moment.
But then I added more. As development went on, I decided it might be a good idea to make that class a template. So I added a third version, which is just like the second one, except that some functionality previously implemented using polymorphism is now implemented using a template. I can't tell yet which version is the best, as it's too early to tell, so I want to keep all 3 together.
Then I made another special file: a copy of the third version's header file, in which each line can be marked or not marked. Marked means I use that specific method or I'm sure it's going to be in use very soon; otherwise the line isn't marked.
Then, some time later, I started a new branch. And for that branch I needed a new version of that class developed in the first branch. So I just copied one of the versions to the new branch's folder and started working there. Now again I had some kind of auxiliary file: I had 2 files, one from which I delete class methods I use, and one into which I write new methods I need to have.
Now I want to start using Git and I wonder: for all the project's text files, plans, diagrams, etc., it's obvious - I keep them outside the Git repo. Whenever collaborative editing is needed I can set up a wiki or something like that. But for all those copies of the same header file, and for those auxiliary "marked" files, what do I do with them? I mean, it's fine by me to have them all in a branch, but what happens when I merge a branch into the trunk? I don't want to end up with all these copies and versions and lists, just the one final class file I've made.
On one hand, these are C++ source files used while coding. On the other hand, they're not part of the pure source code of the software package; they just help me while I work but will never get compiled, because in the end there's just the final version of the class which I chose to merge, and all the other aux files, lists, etc. are kept just for reference.
What would be the best thing to do?
Thanks for reading my long story :)
EDIT: It's a local repo on my personal computer
Always keep documentation in the same repository as the source code. If you do not, your documentation will rot. That is because documentation is written against some version of your software, so it has to evolve the same way the software does.
If your documentation is automatically generated or compiled into another format, commit only the source data, the makefile, and the generator configuration, just like you do with source code.
What you describe is the normal use of branches: you have your master branch (the "official" one, as it were) and a branch to develop a new feature (it doesn't really have to live in a separate directory, if I understand you correctly). Periodically you synchronize the feature branch with the master, either by rebasing it on the master or by merging its changes in. In turn, you can well have subordinate branches in which you try out approaches to developing the feature, handled with respect to the feature branch just as it is handled with respect to the master. But in that case you have to be careful whenever you rebase.
You should keep any data that isn't easy to recreate in the repository, be it source code, documentation or even design sketches. Stuff that can be recreated (object code, automatically formatted documentation, ...) should be kept out, as any change there will create a difference to be checked in. Your repository (particularly unpublished branches) is your own workspace; it can be as messy as you like.
Take a look at the book mentioned on the git homepage.
Well, that’s clearly documentation and not source code, so you should separate it from your source code. As your documentation seems to be branch-dependent, you should still check it into the repo, but in a separate doc directory.
About the merging: how a merge works is up to you in the end. Git just has a default merge strategy, which is what most people want most of the time. But if you say that a merge into the main branch should bring in just the code and not the docs, that’s fine. Just merge that way:
git merge mybranch --no-commit
rm -rf docu-dir    # docu-dir = the branch's separate doc directory
git add -A
git commit

Avoiding unnecessary recompilations using a "branchy" development model

I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to build from scratch (while incremental builds are very quick).
I usually try to implement each new feature in a new branch (using "hg clone"), and I may have several new features in development during the day, so it quickly gets very boring to wait for each new feature branch to build.
Are there any recipes for somehow re-using object files from other, already-built branches?
P.S. In git there are named branches within the same repository, which makes re-use of the existing object files possible for the build system; however, I prefer the simpler Mercurial model of separate clones per branch...
I suggest using ccache as a way to speed up compilation of (mostly) the same code tree. The way it works is as follows:
You define a place to be used as the cache (and the maximum cache size) by using the CCACHE_DIR environment variable
Your compiler should be set to ccache ${CC} or ccache ${CXX}
ccache takes the output of ${CC} -E and the compilation flags and uses that as the basis for its hash. As long as the compiler flags, the source file and the headers are all unchanged, the object file will be taken from the cache, saving valuable compilation time.
Note that this method speeds up compilation of any source file that eventually produces the same hash. If you share source files across projects, ccache will handle them as well.
If you already use distcc and wish to use it with ccache, set the CCACHE_PREFIX environment variable to distcc.
Using ccache sped up our source tree compilation around tenfold.
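If your build happens to be SCons-based (like the scons question elsewhere on this page), wiring ccache in takes a couple of lines in the SConstruct; a sketch, assuming gcc/g++ and ccache are on the PATH:

# SConstruct sketch: Environment() and Program() are provided by SCons.
import os

env = Environment(CC="ccache gcc", CXX="ccache g++")
env["ENV"]["CCACHE_DIR"] = os.path.expanduser("~/.ccache")  # cache location
env.Program("hello", ["hello.c"])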
A simple way to speed up your builds could be to use a local "build directory" on your disk. This way you can check out into this directory and start the build. The first time it will take the full time, but after that it will (hopefully) only rebuild the files whose source code changed.
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
Whoops, I missed your P.S. where you say you don't like having multiple named branches in the same repo and prefer separate clones... sorry about that.
I too have somewhat large C++ projects, and the clone-per-feature workflow didn't work very well for me. Firstly, I had to close down my Vim session and then reopen (many of the same) files once I'd created the clone. Secondly, like you said, a lot of code gets recompiled unnecessarily. Thirdly, I had to keep track of where I'd pushed to and pulled from, which gets confusing when you start a new feature and then get sidetracked onto another. Before you know it you have many clones and you're not sure which ones need to be pushed back to your main.
You definitely don't want to use named branches (as I'm sure you know) to handle this, as they are quite permanent.
What you need are bookmarks: https://www.mercurial-scm.org/wiki/BookmarksExtension
Bookmarks allow you to create lightweight (and otherwise anonymous) branches per feature by letting you name heads in your repo. These heads would normally be unnamed, and you would have to look at the output of 'hg log' or use some graphical tool to find the revision numbers for the tip of your feature branch. With bookmarks you can give them descriptive names like 'my-cool-feature' or 'bugfix-392'.
If you like the idea of bookmarks, I'd also recommend my own extension called 'tasks': http://bitbucket.org/alu/hgtasks. This extension works like bookmarks but adds some more functionality. It lets you create feature branches (now called tasks) and suppresses the pushing of incomplete tasks. This is handy when you have a few feature branches going at once. You may not be ready to push your 'my-cool-feature' task, but 'bugfix-392' is ready to go. Because tasks track a set of changesets (not just one 'tip' changeset), there are some things you can do with tasks that you can't with bookmarks. See an example workflow here: http://x.zpuppet.org/2009/03/09/mercurial-tasks-extension/.
Mercurial also has local named branches, see the hg branch command.
If you insist on using hg clone for branchy development, I guess you could try creating a folder link (a shortcut under Windows) in your repo to a shared obj folder. This will work with hg clone, but I'm not sure your build tool will pick it up.
Otherwise, you probably keep all your repos in one folder anyway - just put your obj folder there (it shouldn't be under source control anyway, IMO). Use relative paths to refer to it.
A word of warning: many .o symbol tables (or their equivalents) contain the full path of the source file. If that other file changes (or the path is not visible from the new directory), you may encounter weirdness when debugging.