Is it possible to have a TFS build activity run on a specific agent?

We are using TFS 2010 and we've got a requirement to run a separate EXE to execute our old VB6 build script. We've got 3 build agents spread across 2 machines but only one machine has the VB6 development environment on it (as well as the old build script) so only one agent can do this work. I know how to do this with a build activity in our build template file but because we've got 3 build agents on 2 different machines, my question is how can I make sure this activity only runs on the one build agent which contains the VB6 development environment?
I realize this creates a bottleneck but it's only temporary for now. It would be ideal if I could do this without creating a custom DLL for a custom activity. Also, I understand with tagging I could create a different build definition for this work but again, this is not ideal... I want one build workflow that runs this on the one agent but the rest of the activities afterward continue on their respective build agent(s).
EDIT: All that said above, if it is possible to chain 2 build definitions together (the VB6 build definition runs and then the "regular" build definition), that would be an acceptable solution if possible.

You could certainly chain together 2 build definitions using the RunWorkflow and WaitForWorkflow Activities that are included out of the box (if you look at the LabDefaultTemplate it does this).
A more elegant solution would be to do it in one workflow, using the AgentScope activity to separate out the sections of the workflow that need to run on specific agents. If you look at the DefaultTemplate, most of it is wrapped in a single AgentScope activity (labeled Run On Agent), but you could split it into a couple of AgentScope activities that are called in sequence (or in parallel if possible).
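For illustration, a rough sketch of how that split could look in the build template XAML. This assumes the standard mtbwa activity namespace from the DefaultTemplate, a "VB6" tag applied to the one agent that has the VB6 environment, and a placeholder path for the legacy build script; the exact property bindings are easiest to set in the workflow designer rather than by hand.

<mtbwa:AgentScope DisplayName="Run VB6 Build">
  <!-- In this scope's ReservationSpec, set Tags to contain "VB6" so the reservation
       only matches the tagged agent (apply the tag to that agent in its
       build agent properties). The tag name is an assumption. -->
  <mtbwa:InvokeProcess DisplayName="Invoke VB6 build script"
                       FileName="C:\LegacyBuild\BuildVB6.cmd" />
</mtbwa:AgentScope>
<mtbwa:AgentScope DisplayName="Run Remaining Build">
  <!-- No tag filter here, so any of the three agents can pick this part up;
       the existing compile/test/drop activities from the template go here. -->
</mtbwa:AgentScope>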

Related

Advice for Web-based Remote Build System

I'm interested in setting up a remote build system at work, initially for internal use, potentially for some customers going forward. We need to compile library code on several different machines (PC, Mac) and with multiple compilers, and it can be a real pain trying to get access to a full set. This is not our main build system, which is Jenkins-based and uses an approach that is not easily modified for the purpose envisaged here.
The idea would be that you could post your source to a website with some basic build parameters, it would compile the code and you could then download the generated code. Ideally users could pick which version of the underlying software they compiled their libraries against. I envisage it being supported by a virtual machine.
The reason I'm posting is that I want to avoid rolling my own as much as possible - longer term it has maintenance implications - and would prefer something as pre-existing as possible. Obviously one would expect some adaptation in terms of scripting.
Any suggestions? It would have to be supported on Mac and PC at absolute minimum.
This sounds like something you could do by creating a parameterized Jenkins job (the build params given as input to your web frontend could be passed on to the job, perhaps via the Jenkins API). Personally, I would see if you could skip the step of creating a new web frontend, and have users pass their build params directly to Jenkins.
To support downloading the resulting compiled code, you could have the Jenkins job archive the build as an artifact. Users could then download the files from the result page for that individual build.
As for how to make a Jenkins job accept source code to compile as input, perhaps you could use branches in your CM system? Your users could push their code to a branch, and then pass the branch as a build param. Otherwise, you might be able to use the file parameter feature of Jenkins.

Implementing single script build - non-portable dependencies

It seems that a build system best-practice is to have a single script that can build all source and package the releases. See Joel Test #2
How do you account for non-portable dependencies? For example, if you code for .NET 4, then you need .NET 4 installed on the box. The standard MS release of .NET 4 is not xcopy-deployable (unless I'm mistaken?). I can see a few avenues:
1. The dependencies are clearly stated in some resource file (wiki, txt, whatever). When you call the build script, the build fails if you don't have the dependency installed. This is an acceptable outcome.
2. The build script is responsible for setting up the environment. So if you require .NET 4 and it's not on the box, then it installs it for you.
3. A flavor of #2 - instead of installing dependencies, the script spawns a pre-packaged image (a virtual machine, an Amazon EC2 AMI) that is set up with all dependencies.
4. ???
When implementing a build script you have to ask yourself how much work you want to (or can) spend on it. That leads to the question of how often you have to set up the build environment. I can see that #2 would be the perfect solution, but it would need a lot of work, since you usually have more than one non-portable dependency.
So we use #1, and it works quite well. The most important thing is that the build script starts with some sort of self-test: it looks for everything needed to build the whole software and gives a clear error message if anything is not found, so that any new person knows what to do to get it running. Of course, as with a lot of software, it is never quite finished and gets extended as needs arise. The drawback that this test takes a few seconds is insignificant when the whole build process takes minutes.
A wiki (or anything similar) describing the setup was not a good solution for us, since after three months nobody remembered where it was, whereas the build script is used every day.
The build script itself is a collection of different pieces, chosen as needs arose. It starts with a batch file (we are using Windows) which invokes a lot of other things: other batch files, MSBuild, home-grown tools. Each step checks for its own dependencies, so the problem stays local and you can see a few lines later why that particular thing is needed.
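As a rough sketch of what such a self-test can look like, here is a small MSBuild target that fails fast with clear messages. The .NET 4 detection path and the example tool path are assumptions; substitute your own dependencies.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="CheckEnvironment">
  <Target Name="CheckEnvironment">
    <!-- Fail fast with a clear message if a required dependency is missing -->
    <Error Condition="!Exists('$(WINDIR)\Microsoft.NET\Framework\v4.0.30319')"
           Text="Dependency missing: the .NET Framework 4 is not installed on this machine." />
    <!-- SomeTool is a hypothetical example of a third-party dependency -->
    <Error Condition="!Exists('C:\Program Files\SomeVendor\SomeTool\SomeTool.exe')"
           Text="Dependency missing: SomeTool is required; see the setup notes for how to install it." />
    <Message Importance="high" Text="Environment self-test passed." />
  </Target>
</Project>

The first batch file in the chain can simply run msbuild.exe against this file before anything else happens.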
Number 2 states "Can you make a build in one step?" As described, this means that for a development team to be effective the build process must be as simple as possible, to reduce errors and ensure consistency. This is especially important as a team gets larger. You want to make sure everyone is building the same thing. (What is done with that package should also be simple, but it is not as important IMHO.) MSBuild is great at this; it provides the facilities to set up a build server that accesses the source control system independently, so the developers' actions can't corrupt the build environment. I highly recommend setting up a build server using TFS -- many build issues will go away and you will have the one-click build Joel describes.
As for your points about what that package does for deployment -- you have many options with MS, but the more "one click" you can make it the better. I believe this is slightly different from Joel's #2. In his example he describes changing which software he will use for the install, not because one performs with fewer steps, but because one can be incorporated into a one-step build.

How to start identical jobs with different parameters in parallel execution?

I have a build job and a parameterized test job.
After the build job finishes, I want to run the test job with one set of parameters and, in parallel, the same test job with a different set of parameters.
          build job
           /     \
  test job         test job
  (one set of      (another set of
   parameters)      parameters)
How can I accomplish this, and is it possible without having to write my own plugin?
Thanks!
When you create your test job, create it as a "Build multi-configuration project"
While configuring the job, select "Configuration Matrix" and then "User-defined Axis".
You can use the name of this axis as a parameter in your job. The given parameters will be started simultaneously in different jobs (if enough executors are available).
Playing off @Soo Wei Tan's answer, I found the following works well.
Parameterized Trigger Plugin
Choose "Parameter Factory"
Choose "For every property file, invoke one build"
Then, in a shell, write a series of property files, and the Trigger Plugin will take care of the rest.
You can even combine this with a matrix style job at the top level in interesting ways. For example, triggering on the user-defined axis, keeping track of it all with a grid. Really quite a flexible approach, if a bit hidden.
I had the same requirement, and found that Parameterized Trigger Plugin was not flexible enough for passing different parameters to different (or the same) jobs in parallel. Yes you can use a Parameter Factory with property files, but that would mean adding new property files to my version control solely for the purpose of configuring Jenkins. A Multi-Configuration project with a configuration matrix also seemed overcomplicated.
The better and more straightforward solution for me was the Multijob Plugin, which has the concept of Phases. A MultiJob can have multiple phases. Phases run sequentially and jobs within a phase will run concurrently (in parallel).
After installing the MultiJob plugin, when creating a new Jenkins item, select MultiJob Project. You can then create one or more phases.
Each job within a phase has its own parameters: click Advanced... -> Add Parameters.
It is also very easy to configure what should happen if a particular job fails (whether the entire MultiJob should continue or fail, etc.); see the "Kill the phase on:" and "Continuation condition to next phase when jobs' statuses are:" settings.
For me this was much more intuitive to use than the Parameterized Trigger Plugin or a Multi-Configuration project, and it did not require any extra configuration outside of Jenkins.
Assuming you know the parameters when you are finishing your build job, you can use the Parameterized Trigger Build plugin to fire both downstream jobs with different parameters.
One option would be to use Build Flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) potentially together with Job DSL plugin (https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin). You can use Job DSL to define job steps that invoke your build with different command line arguments and orchestrate the build with Build Flow.
I have a slightly different use case. We have test jobs that run against our main build during the development cycle. Toward the end of the cycle, we create a release candidate build and run the same tests against that. We want to also continue testing the main build.
  Main Build     Release Build
        \           /
       same set of tests
I can create duplicate jobs with just different names to handle this, but there has to be a more elegant/simpler way.
Could you please say a bit more about why you need your test jobs to run concurrently?
I do use tests that need to be split up and run simultaneously, but I use a single Jenkins/Hudson job that has a weight > 1 (see the Heavy Job Plugin).

Hudson - different build targets for different triggers

I would like to have different build targets for periodic builds and for those that are triggered by polling SCM.
More specific: The idea is that nightly builds should call 'mvn verify' which includes integration tests, while a normal build calls 'mvn test' that just executes Unit tests.
Any ideas how this can be achieved using Hudson?
Cheers
Chris
You could create two jobs - one scheduled and the other polled.
In the scheduled you can specify a different maven goal from the polled.
The answer by Raghuram is straightforward and correct. But you can also have three jobs: the first two do the triggering and pass the Maven goal as a parameter into the third job. Sounds like a lot of clutter, and to a certain point it is. But it helps if you have a lot of configuration to do (especially if the configuration needs to be changed regularly), because it keeps the configuration correct for both jobs. Configuration does not only include the build steps but also the harvesting of all reports, post-build cleanup, notifications, triggering of downstream jobs, and so on. Another advantage is that you don't need to synchronize the two jobs so that they don't run in parallel (if that causes problems).
Don't get me wrong, my first impulse would be to go for two jobs, which has its own advantages. The history for the nightly build will contain the whole day (actually, everything since the last nightly build) and not only the time since the last build (which could be a triggered one). Integration tests usually need a more extensive setup or access to scarce resources. With two jobs you don't block these resources when you run the test goal. In addition, I expect that more test results need to be harvested to be displayed and tracked over time by Hudson. You also might want to run more metrics against your code whose results should be displayed by Hudson. The disadvantage is that you of course need to keep the build steps basically the same all the time.
But in the end it is a case-by-case decision whether you go with 2 or 3 jobs.

How do you get up and running with a build server?

I think everyone here would agree that in order to be considered a professional software house there are a number of fundamental things you must have in place.
There is no doubt that one of these things is a build server, the question is, how far do you need to go.
What are the minimum requirements for the build server? (Somewhere to just compile?)
What is the ultimate goal for your build server? (Scheduled, source control integration, auto deployment to test / live servers)
Where is a good place to start assuming you have nothing at the moment?
It would be great if we could list out a few simple tasks that an amateur developer could take on board in order to set them on the right track to a fully functional build server.
It would also be good to hear about people that feel they have a "complete" system setup that performs all the functionality they require and how they went about setting it all up from scratch.
You can start by looking into Cruise Control.
There's also CruiseControl.net if that's your poison.
Essentially though, you need the following ingredients:
A dedicated environment (Virtual Machine/server. Don't use a developer's machine, unless it's just you. Even then, run a VM if you can. Much easier to move it to a server when/if one becomes available in your organisation)
A source control system that supports labelled/tagged revisions (for example, Subversion+TortoiseSVN)
Build scripts. These can be batch files that start the devenv.exe or msbuild.exe applications with a command line, or you can use something like Ant or NAnt (a minimal sketch follows below).
In this scenario, CruiseControl acts as the Continuous Integration server, and can make sure that builds are done as you check in your code. This means you know whether the build is broken more quickly than if you just had nightly builds. You should probably also have nightly builds, though.
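To make the "build scripts" ingredient concrete, here is a minimal NAnt sketch; the solution name and the MSBuild path are placeholders for your own environment, and a plain batch file that calls msbuild.exe directly works just as well.

<project name="MyProduct" default="build">
  <!-- Placeholder path; point this at whichever framework version you build with -->
  <property name="msbuild.exe" value="C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" />
  <property name="configuration" value="Release" />
  <target name="build">
    <!-- Shell out to MSBuild so developers and the CI server run the exact same step -->
    <exec program="${msbuild.exe}">
      <arg value="src\MyProduct.sln" />
      <arg value="/p:Configuration=${configuration}" />
    </exec>
  </target>
</project>

CruiseControl then just invokes this same script whenever it detects a check-in.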
Hudson is a great CI.
We run a farm locally, but we started by downloading hudson.war and doing
java -jar hudson.war
It integrates with SCM and bug tracking systems; it is really awesome.
You'll need some disk space if you want to keep old builds.
Enjoy - it is the most straightforward CI solution so far.
HTH,
Hubert.
If you're using Cruise Control, the place to start is an Ant build.xml that does the job manually.
You need a version control system that can do labeled check-outs.
You need JUnit tests to run using the Ant task and generate HTML reports.
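A rough sketch of what the test half of such a build.xml could look like; the directory names, the test.classpath reference and the compile target are assumptions you would adapt to your project layout.

<target name="test" depends="compile">
  <mkdir dir="reports/raw" />
  <junit printsummary="yes" haltonfailure="no" failureproperty="tests.failed">
    <classpath refid="test.classpath" />
    <formatter type="xml" />
    <batchtest todir="reports/raw">
      <fileset dir="build/test-classes" includes="**/*Test.class" />
    </batchtest>
  </junit>
  <!-- Turn the raw XML results into the HTML report mentioned above -->
  <junitreport todir="reports">
    <fileset dir="reports/raw" includes="TEST-*.xml" />
    <report format="frames" todir="reports/html" />
  </junitreport>
  <fail if="tests.failed" message="One or more unit tests failed." />
</target>

Cruise Control can then simply call this target and publish the generated HTML report.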
I'd say you'd have to start by implementing a build strategy so you can build your code in a structured way - I use NAnt.
For a basic build server - use one of the CI offerings out there that monitors your source control and triggers a build whenever a change is detected, e.g. CruiseControl.
Once you get the basic build together - add the running of your unit tests after a successful build.
The most successful system I've had in place had 3 different builds:
- one that fired on a check-in - all this did was build the code.
- an on-demand one that would build the application, generate the installer and then put the installer onto a shared drive for the testers to pick up.
- a daily build that fired at 10pm. This:
  - ran some code generation to build DB and C# code from a UML model
  - built the code
  - created a new build verification test user on a test Oracle instance
  - ran the application schema into the DB
  - fired off a bunch of unit tests
  - cleaned up the DB user (if the tests were successful)
  - ran coverage analysis to build a report of the unit code coverage
Software we used for this was NAnt, CruiseControl.NET, a custom code generation system, a custom app to build an Oracle schema, and NCover for the code analysis.
Start by having a read of Martin Fowler's excellent paper on Continuous Integration.
We built such a system for a major project >2,000 kSLOC and it proved itself to be invaluable.
HTH
cheers,
Rob
Cruise, Maven, Hudson etc. are all great, but it's always worth having a stopgap solution.
You should have a batch file, shell script or simply written instructions that will allow you to run a build from any machine. We have had build servers unavailable in the past and the ability to switch quickly to another machine was invaluable!
The spec of the build machine isn't all that important unless you have a monster project. We try to keep our build times down to 10 minutes (including unit tests) and we have a pretty big project.
Don't be tempted to create or write your own build system because "none of the tools out there are good enough". All modern build systems allow you to write plugins to do custom stuff.
I'm using CruiseControl.NET and an MSBuild build script.
I can use the build script manually to get the latest version of the codebase and build it very easily from the command line. (This is very useful if you are working on an application that consists of multiple solutions.)
Next to that, my CruiseControl.NET build server uses this build script as well. It checks at a regular interval whether changes have been committed to source control.
If that happens, CC.NET performs the 'get-latest' task that I've defined in the build script, builds everything, executes unit tests and performs static code analysis (FxCop).
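A sketch of what the corresponding project block in ccnet.config might look like; the Subversion URL, paths, polling interval and the MSBuild target names (standing in for the get-latest, build, test and FxCop steps described above) are all placeholders.

<project name="MyProduct">
  <triggers>
    <!-- Poll source control every 60 seconds and build when a change is found -->
    <intervalTrigger seconds="60" />
  </triggers>
  <sourcecontrol type="svn">
    <trunkUrl>http://svnserver/repos/myproduct/trunk</trunkUrl>
    <workingDirectory>C:\Builds\MyProduct</workingDirectory>
  </sourcecontrol>
  <tasks>
    <!-- Hand everything over to the same MSBuild script the developers run by hand -->
    <msbuild>
      <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
      <projectFile>C:\Builds\MyProduct\build.proj</projectFile>
      <targets>GetLatest;Build;Test;FxCop</targets>
    </msbuild>
  </tasks>
</project>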
My 'build server' is just an old workstation. It's a Pentium 4, 3 GHz with 1 GB of RAM, and it does its job perfectly.
One additional thing that I would find interesting is the ability to automatically deploy a new version, or build a setup.
I haven't done that yet, since I'm not sure whether it is a good idea, nor have I found a good strategy yet to do so ...
I mean: is deploying a new version of some components into production for a mission-critical application a good idea? I don't think so ...
I think this is a good place to start: http://confluence.public.thoughtworks.org/display/CC/Home (the home of CruiseControl).
At least, that's where I started looking when setting up my build server. :)
Roughly in order - minimal/least sophisticated through more sophisticated
able to get a specific set of source onto any machine
able to build that source (with no problems)
able to schedule a build each night (or at some other defined interval) with no user intervention
One (or more) dedicated build servers (not shared as a QA or dev machine)
able to do a build after each check-in/commit
Notify interested parties of the build status after a build
Provide build status at any time
Create installers as part of the build
ability to deploy to live if the build is good
Run unit tests
Run tests on the product
Report the results of those tests
Static code analysis and reporting
...
And the list goes on and on
Don't be afraid to just start with batch files or shell scripts or other ad-hoc means. People made perfectly good software before the CI craze. There were plenty of good processes before Hudson and Cruise Control (I am not knocking those or others - I use Hudson among others) - but don't miss the point: these things are here to help you, not to become an overbearing process.
I couldn't give you all the details about how we set our build server up (I was only involved at the start), but:
We started with an in-house system, implemented in ASP.NET and a .NET Windows Service, using NAnt to do the actual builds. Actually, most of the workflow was implemented in NAnt (e.g. emailing people, copying stuff around, etc.).
We moved to JetBrains TeamCity (there's a free cut-down version available), which is still serving us well.
We use it for builds triggered by a commit: these just build the binaries and run the unit tests. From here, we can do a complete build, which does the MSI as well. From there, we have system test builds that run more in-depth tests, across an environment built with virtual machines (with a separate domain controller, SQL Server box, etc.). When the system tests pass, the build is made available to our QA department for manual testing and some regression tests that we've not automated yet.
In the Java space I've tested most of the available build environments. The issue with automated builds is that you quite often end up spending a fair amount of time following them up. After we switched to the commercial Bamboo from Atlassian, we found that we have to spend a lot less time pampering the build box, which in our case turns out to be very good economy. Bamboo also supports clustering, so you can add inexpensive boxes as needs evolve.
Try to find something that fits in with your existing practices in terms of building - e.g. it's not going to be a good fit to use an Ant-based build server if you're using Maven.
Ideally, it should just be able to monitor your source-control system, check out the code, build, run some tests and publish the results without you being aware of it, or at least not until it reports a failure. Personally, I'd suggest Hudson (https://hudson.dev.java.net/) as a good starting point, as it's easy to get installed and running and has a decent UI.
We start by writing batch scripts that will run on the developers machine. Once we have all the processes automated, we move them to the build server.
On the tools side we are currently moving from Cruise Control to TFS.