Is there a way to make Jenkins share a common source code directory / checkout location for a full and an incremental build?

I am using Jenkins CI as the build server on a project that I am working on, and I am also using Klocwork as a static analysis tool to identify deviations from our coding standards.
At present Jenkins has two builds (performed in separate directories): a full build on a nightly basis that wipes out the workspace and performs a fresh checkout and a full rebuild of everything.
In addition to the overnight build I also have an incremental build happening within 15 minutes of a check-in. Both builds use the Klocwork analysis tool.
Klocwork works by displaying a list of potential issues, which can then be fixed or marked as ignored if they are not applicable to the project. When issues are ignored, Klocwork uses the build file paths to remember where the ignored issues reside. This means that once I have ignored a warning in the full build and an incremental build is triggered, the warning returns, because the build path is different.
The most sensible solution I can see is for Jenkins to perform its full build on a nightly basis, but for the incremental build to do an update in the full build's location and then build incrementally - in the same way that an IDE on a PC functions.
The problem is that I have Jenkins running the full build and the incremental build as two separate jobs, which causes them to check out into different locations, and I cannot find a way of having the two jobs share a common directory.
Also, I cannot find a way of having a single job that performs both a nightly full checkout and rebuild and, on each check-in, an update followed by an incremental build.
Is anyone familiar with a way of making Jenkins use a common source directory across multiple jobs?
Many thanks,
Pete.

Here's what I did.
Used one job to only check out source code.
In the other jobs' configuration settings, I set an environment variable that points to the workspace directory tree containing the first job's source code (command-line access to the Jenkins server is helpful here for figuring out where it is, but not necessary). Then, in the build scripting of the regular jobs, I 'cd' to that location and use the environment variable as the path to all files, so these other jobs use the first job's checked-out code (see the sketch after these steps).
Used locks, so the regular jobs would not run at the same time as the check-out job.
Since some of these jobs generated result files in the source code tree (because of the tests being run), in the post-build script I copied/moved the desired results back to the workspace of the job that should have them, so I could process those results in the right job.
Worked for me.
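To give an idea of what that looks like in practice, here is a minimal sketch of an incremental job's build step, assuming a Windows agent with a PowerShell build step, a hypothetical checkout job called source-checkout and a placeholder solution name; the exact workspace path depends on your Jenkins installation:
# Hypothetical path to the checkout job's workspace - adjust to your Jenkins layout.
$sharedSrc = "C:\Jenkins\workspace\source-checkout"
# Build inside the shared source tree instead of this job's own workspace.
Set-Location $sharedSrc
& msbuild MySolution.sln /p:Configuration=Release    # MySolution.sln is a placeholder
# Copy any generated results back into this job's own workspace so Jenkins can archive them.
Copy-Item -Path "$sharedSrc\results\*" -Destination $env:WORKSPACE -Recurse -Force
Because every job then analyses the same tree, the file paths Klocwork records stay identical between the full and the incremental analysis.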

You can easily make the two builds share the same build area, simply by checking out the files in both build jobs to a shared location. However, I strongly advise NOT doing this, as you can quickly get into a situation where the nightly build is cleaning the build area while the incremental build is still running (or the incremental build is checking out sources while the nightly is still running).
I suggest you connect Klocwork to only one of the build jobs (probably the nightly) to avoid duplicate warnings.

Related

Visual Studio 2015 - Pre build event to determine which projects to compile

Motivation
A PreBuild step to disable compilation of redundant projects for a faster compilation cycle.
Background
I have a VS15 ALL solution that contains many projects.
I have a single project, PreBuild, that all the other projects depend on, meaning this PreBuild compiles first.
In addition, we also have a PostBuild project that does some more work once the binaries are ready.
All projects are configured to build in Release mode (which is desired).
When a team member wants to release some binaries, he hits F7, Build Solution.
Now, the PreBuild, activates a separate dedicated process that calculates which projects should be released. The nature of the calculation is irrelevant to this discussion.
Problem
Out of the many, many projects, it is often the case that only a few projects need to be released. However, once the PreBuild process is done, ALL the projects will compile, which is very time-consuming.
Question
Is it possible, after a solution build has started, to change which projects will be built and released?
Suggested unwanted approaches
A developer handpicks only the relevant projects and builds only those.
PreBuild kill & revive: once the desired projects are calculated, PreBuild kills the VS15 process and launches a command line that compiles only the relevant projects.
Suggested approach
Change the ALL.sln file and remove the unwanted projects.
This would work had I changed that file prior to the build starting, but I'm not sure it would work if the change occurs during the build.
The simplest way I can think of, while still keeping most of the current infrastructure in place: have a dedicated project which invokes the release build (by calculating dependencies and invoking msbuild) and configure VS so that just that project can be selected for a build. All from within your ALL.sln, so the rest of the features remain. Steps:
Get rid of the PreBuild/PostBuild projects. I assume the PostBuild you mention is also meant for the actual release builds; if not, just leave it there. Note that by not requiring all projects to depend on the PreBuild project, you have already got rid of one maintenance burden.
Add one single project which will do the release building, say ReleaseBuild. Such a name is also better than PreBuild/PostBuild since it clearly states the intent of the project. A Makefile project is suitable, though technically it could be as simple as an msbuild file with just one Build target. Configure the build command line to do whatever is needed, i.e. figure out what to build, then build it. For the sake of an example, say you use PowerShell to do this; you would configure the build command line to be
Powershell -NoProfile -File BuildRelease.ps1 $(Platform)
and BuildRelease.ps1 contains something like
# Hypothetical helper that works out which project files need to be released.
$projectsToRelease = CalculateMyProjectsForRelease
# The platform is passed in from the project's build command line.
$platform = $Args[0]
# Build each selected project in Release mode for that platform.
$projectsToRelease | ForEach-Object { & msbuild $_ "/p:Configuration=Release;Platform=$platform" }
In Configuration Manager, add an extra configuration called Deploy or similar. This will be used to select what to build: you probably already have Debug and Release configurations. Those stay in place and are simply used to build everything; the idea is that this extra configuration will take care of building the actual release. This is fairly consistent with the standard way of working in VS and easy to discover and understand for newcomers. Using the checkboxes, make it so that when the Deploy configuration is selected, only the ReleaseBuild project is built and none of the others, whereas when Debug or Release is selected, the ReleaseBuild project is not built.
To build a release, select Deploy from the configuration drop down menu in the VS toolbar and hit F7 (or whatever way you use to invoke Build Solution). Any build errors/warnings will be parsed and shown as usual in the Error List.
This is also easy to extend: suppose you have a couple of release build variants, just add more configurations like DeployA, DeployB, DeployC and adjust the build command line for them.
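For instance (purely illustrative), each Deploy configuration could pass its own name to the script via the standard $(Configuration) macro, so a single BuildRelease.ps1 serves them all:
Powershell -NoProfile -File BuildRelease.ps1 $(Platform) $(Configuration)
with the script picking up both arguments:
$platform      = $Args[0]
$deployFlavour = $Args[1]   # DeployA, DeployB, ...
CalculateMyProjectsForRelease $deployFlavour | ForEach-Object {
    & msbuild $_ "/p:Configuration=Release;Platform=$platform"
}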

Visual Studio Team Services No Build Artifacts Created

I use an onsite build agent to perform my VSTS builds. This is working fine, sort of. I have 2 build definitions, one of which is a clone of the other, and the only difference between the 2 is the solution that is built; all other parameters are exactly the same.
One of my builds completes without error and creates build artifacts and compiled code zip files in the 'build/1/a' artifacts folder. My other build completes without error BUT no build artifacts or compiled zip files are created; my 'build/3/a' directory for this build is empty, and I cannot see anywhere in the logs where the tasks to create them were executed, if at all. This did work before I cloned the build definition, though. These are the MSBuild arguments that I have defined for both build definitions:
/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\" /p:PackageTempRootDir="D:\Build\SiteManagerDev"
The only difference between them is the last parameter '/p:PackageTempRootDir'.
I have tried switching between the 2 directories for both build definitions to make sure it is not a permissions error, and the definition that finishes correctly works against either directory. I am starting to tear my hair out now; I have even tried creating a completely new build definition for the solution that produces an incomplete build, and the result is the same. It is almost as if the solution itself is causing the issue.
Any help would be greatly appreciated.
UPDATE: 05/02/2017
I think I finally understand what is going on! Question: if a build is manually triggered (not by a check-in trigger) and there have been no changes to the code, does the build skip creating the build artifacts again because nothing has changed, even though the build is executed? The reason I ask is that I have found a strange in-house housekeeping routine that deletes the contents of the 'D:\Build\1\a' directory on our build machine on a regular basis (I have no idea why!), and this results in there being nothing to publish UNTIL a code change is checked in, at which point the artifacts are generated again! What a waste of everyone's time this has been; my apologies, and thank you for your help.

TFS Build 2015 - Using Globally Referred Files in Every Build

So, we are in this process of migrating XAML Builds to vNext (2015) Builds on TFS, and we are trying to "do things as clean as possible", since we had many, many customizations on the XAML builds that could be avoided and actually gave us problems along the way.
One major issue we are facing is with paths and "global files". Let me explain:
There are some files that, for convenience, we keep in a single place, and every SLN file in that collection refers to them. These are files such as Code Analysis rulesets, signing files (SNK), etc. So a change is made in one place only and it affects every build.
Well, in the XAML builds we have a build that runs with CI and downloads (gets) those files, and since we hammered in the same exact paths for TFS and the machines (with an environment variable for the beginning of the path), the path is the same on the developers' machines and the build machines. However, this creates dependencies between builds and workspace issues.
My question here is: is there a configuration that I am missing that allows referring to files in branches other than the one being built? Since I'm trying to keep the build machines as "disposable" as possible, the agent runs with an out-of-the-box configuration: no custom paths, no hard-wiring.
I have already tried referring to the files directly by their source control path, for example. The only options I'm seeing are either creating a PowerShell/CMD script that downloads those files right into the same folder as the SLN, or keeping it "as it is" and using relative paths, putting a "Build" build step before the actual build step so it downloads the files to the server.
Isn't there a more "elegant" way of doing this? Or is our methodology wrong from the get-go?
You can add a Copy Files step to copy the files that the build needs.
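The Copy Files task essentially takes a source folder, a contents pattern and a target folder; a rough PowerShell sketch of the same operation, with purely hypothetical paths, looks like this:
# Hypothetical layout: the shared rulesets/SNK files are mapped into a Globals folder
# and copied next to the solution that references them before compilation starts.
$globals = Join-Path $env:BUILD_SOURCESDIRECTORY "Globals"
$target  = Join-Path $env:BUILD_SOURCESDIRECTORY "MySolutionDir"
Copy-Item -Path "$globals\*" -Destination $target -Recurse -Force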

Jenkins - conditions between build steps

I want to build a Maven project using Jenkins. However, the project must only be built if a certain file in the SVN repository has changed (and contains a special key).
So my plan is to create a job with two build steps:
the first step executes a shell or python script that checks that "condition".
the second step is the actual Maven build
The second step must only be invoked if the condition check in step 1 returned "true".
Is there a way to do this? Well, I guess I could return exit code 1 from the first script if the condition is not met. That would stop the build at once, but the job would be marked as "failed", so it is not a good idea, since the red icon makes my users panic ;-)
Any other ideas around this?
Cheers,
Frank
We do something similar with our own Jenkins setup.
We have a "trigger" job that monitors SVN on a periodic basis. When a change occurs in SVN, the trigger job executes its build steps. One of the build steps examines some aspects of the code and decides whether a build is necessary or not. If it is necessary, it uses CURL to initiate the "build" project. The "build" project gets the source code and does a build - it doesn't bother to figure out whether it needs to build or not - it always does.
Having the two tasks separate also makes it easy to trigger a manual build without worrying about the should-I-build logic kicking in and stopping the build.
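As a rough illustration of the decision step (the answer mentions CURL; Invoke-RestMethod is simply the PowerShell equivalent, and the file name, job name and trigger token below are hypothetical):
# Look for the special key in the file that came down from SVN.
if ((Get-Content "build-control.txt" -Raw) -match "BUILD_ME") {
    # Kick off the downstream "build" job through Jenkins' remote build-trigger URL.
    Invoke-RestMethod -Method Post -Uri "http://jenkins.example.com/job/build-job/build?token=SECRET"
}
# If the key is absent, nothing is triggered and the trigger job still finishes green.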

Versioning with an automatic build system

We recently moved to an automatic build system (something internal, not Hudson or TeamCity, yet).
Our version is stored in a header file and is included by some cpp and resource files. It is also used by the installer.
Its format is A.B.C.D where:
A didn't change in years.
B changes rarely (major version).
C changes with minor versions.
D changes when a new minor version (bug fix) is delivered to QA.
Up until now, the one in charge of building a new version incremented C/D by hand (D being the more common) before starting the build, checked in the change, and then started the build. The version stayed the same until that person built the app successfully.
Naturally with the move to an automatic build system I'd like to get rid of the manual step of changing the version number.
How should this be approached?
Do I increment D whenever a new build is made, whether it's a QA build or an internal test build (i.e. I'm working on some feature and I'd like to check that I haven't broken anything)?
Is the increment step a task in the automatic build system?
After incrementing, should I commit the version file?
How do I avoid having a lot of noise in my version control? I don't want tons of "version incremented" commits.
What do I do if the build failed? Still increment the version and commit?
Do I increment D whenever a new build is made, whether it's a QA build or an internal test build (i.e. I'm working on some feature and I'd like to check that I haven't broken anything)?
The Eclipse Foundation adds an E element, the date and time of the build. I think that's a good idea for the internal-test builds. It's up to you if you want to use E for the QA builds.
Is the increment step a task in the automatic build system?
Seems logical, but you have to have some way of telling that task what kind of build you're doing.
How do I avoid having a lot of noise in my version control? I don't want tons of "version incremented" commits.
Commit the version file together with the source code.
Basically, your development build process should proceed in the following order:
1. Build the product from the development source code.
2. If the build succeeds, increment the version number (sketched below).
3. Commit the source code and the version file.
4. Build the product again from your version control system.
5. If the build fails, back out the source code and version file commit.
This tests your build and your build process. The second build should never fail, but if it does, there's a problem in the process.
Your production build process would start at the 2nd step, skipping the 3rd step.
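A minimal sketch of the increment step, assuming the version lives in a header file with a hypothetical VERSION_D define; the last line shows the kind of timestamp an Eclipse-style E element would add:
# Hypothetical header layout: the file contains a line such as "#define VERSION_D 42".
$header = "version.h"
$text = Get-Content $header -Raw
if ($text -match '#define\s+VERSION_D\s+(\d+)') {
    $newD = [int]$Matches[1] + 1
    $text = $text -replace '(#define\s+VERSION_D\s+)\d+', ('${1}' + $newD)
    Set-Content $header $text
}
# An Eclipse-style E element is simply a build timestamp:
$buildStamp = Get-Date -Format "yyyyMMddHHmm"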
What do I do if the build failed? Still increment the version and commit?
My E is auto-incrementing. :-) I'd say no for the other elements A, B, C, or D.
Do I increment D whenever a new build is made, whether it's a QA build or an internal test build (i.e. I'm working on some feature and I'd like to check that I haven't broken anything)?
Yes, change your process so that D increments with every build (successful or not) rather than with every delivery to QA.
It can be quite frustrating to have several builds, some working and some not, and not be able to tell them apart because a failed build ends up with the same ID as the good one.
Then you don't even have to consider whether it was on the same day or in the same hour.
Is the increment step a task in the automatic build system?
I'd have the build system auto increment the build number (D) only.
After incrementing, should I commit the version file?
How do I avoid having a lot of noise in my version control? I don't want tons of "version incremented" commits.
The version control storage is all about recording the detailed noise.
I'd have the version update checked in; this makes a reasonable tag visible in SVN showing which build the previous changes were included in. Have the build system ignore check-ins made by the build system itself, or those identified as the version-update check-in.
Then to view the version history you should have an appropriate tool that allows you to filter the history to show you the view you need, in some cases excluding the version commit tags.
If you choose not to commit the version number for each build, then it might be a good idea to maintain the version number in a separate file to avoid accidental updates.
What do I do if the build failed? Still increment the version and commit?
Still increment the version number, but I wouldn't commit it unless the build was successful. You can have a variety of failures unrelated to source changes that don't need to be recorded in version control - the build server running out of disk space, a server crash, the compiler getting all wobbly in the knees while building 32- and 64-bit, debug and release, AIX, Linux and Windows builds at the same time...
You could consider using the convention for .NET assemblies, as described in the documentation for the System.Version class. Quote:
Build [your C]: A difference in build number represents a recompilation of the same source. Different build numbers might be used when the processor, platform, or compiler changes.
Revision [your D]: Assemblies with the same name, major, and minor version numbers but different revisions are intended to be fully interchangeable. A higher revision number might be used in a build that fixes a security hole in a previously released assembly.
How are you going to automate this? I mean, what system would know that "this build is the release build!"? It seems to me that all the digits in your version are relevant. If the next release (D + 1) requires two builds, would A.B.C.D+2 then be the next version? Sounds fishy to me. I would rather add the build number on top of the version instead, if it's really necessary to have this information on your DLLs and EXEs.
I don't think the build number is a relevant piece of information to have attached to the binary, unless you distribute files of version A.B.C.D from different builds (which you shouldn't do anyway!)
I would set up the build server to store the artifacts (DLLs, EXEs, MSIs, PDBs, etc.) in a directory whose name includes the build number and version, and then burn the DVD/whatever from there. If you ever need to track back from a version to a specific build, you can use this information, provided that you keep an archive of your releases (recommended!).
I would recommend the use of autorevision.
You could still keep the A.B.C.D format for your tags and use the script to create header files, generated at build time, that contain the needed information.
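If autorevision itself does not fit your setup, the same idea can be sketched with a small script that the build runs before compiling; everything below (file names, defines, the BUILD_NUMBER variable) is illustrative only:
# A.B.C comes from the tag being built; D comes from the build system.
$tagVersion  = "2.5.1"                   # e.g. parsed from the SVN tag name
$buildNumber = $env:BUILD_NUMBER         # provided by most build servers
@"
#pragma once
#define PRODUCT_VERSION_STRING "$tagVersion.$buildNumber"
#define PRODUCT_BUILD_NUMBER   $buildNumber
"@ | Set-Content "generated_version.h"
Because the header is produced by the build rather than committed, no "version incremented" commits ever appear in source control.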