How to start identical jobs with different parameters in parallel? - concurrency

I have a build job and a parameterized test job.
After the build job finishes, I want to run the test job with one set of parameters and, simultaneously, the same test job with a different set of parameters, in parallel:
          build job
              |
             / \
            /   \
   test job      test job
 (one params)   (other params)
       |             |
How can I accomplish this, and is it possible without having to write my own plugin?
Thanks!

When you create your test job, create it as a "Build multi-configuration project".
While configuring the job, select "Configuration Matrix", then "User-defined axis".
You can use the name of this axis as a parameter in your job. The given axis values will be started simultaneously as separate configuration builds (if enough executors are available).
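For illustration, here is a minimal sketch of such a multi-configuration job in Job DSL syntax; the axis name, values, and shell command are all hypothetical:

matrixJob('test-job') {
    axes {
        // the user-defined axis: each value becomes one parallel configuration
        text('TEST_PARAM', 'one', 'other')
    }
    steps {
        // the axis value is available to build steps like any parameter
        shell('./run-tests.sh --params=$TEST_PARAM')
    }
}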

Playing off @Soo Wei Tan's answer, I found the following works well.
Parameterized Trigger Plugin
Choose "Parameter Factory"
Choose "For every property file, invoke one build"
Then, in a shell, write a series of property files, and the Trigger Plugin will take care of the rest.
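For example, the shell step could write one property file per desired test run, and the parameter factory then fires one build per file; the file and parameter names here are hypothetical:

# one.properties
TEST_PARAM=one

# other.properties
TEST_PARAM=other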
You can even combine this with a matrix style job at the top level in interesting ways. For example, triggering on the user-defined axis, keeping track of it all with a grid. Really quite a flexible approach, if a bit hidden.

I had the same requirement, and found that the Parameterized Trigger Plugin was not flexible enough for passing different parameters to different (or the same) jobs in parallel. Yes, you can use a Parameter Factory with property files, but that would mean adding new property files to my version control solely for the purpose of configuring Jenkins. A Multi-Configuration project with a configuration matrix also seemed overcomplicated.
The better and more straightforward solution for me was the MultiJob Plugin, which has the concept of phases. A MultiJob can have multiple phases. Phases run sequentially, while jobs within a phase run concurrently (in parallel).
After installing the MultiJob plugin, when creating a new Jenkins item, select MultiJob Project. You can then create one or more phases.
Each job within a phase has its own parameters: click Advanced... -> Add Parameters.
It is also very easy to configure what should happen if a particular job fails (whether the entire MultiJob should continue or fail, etc.); see the Kill the phase on: and Continuation condition to next phase when jobs' statuses are: settings.
For me this was much more intuitive to use than the Parameterized Trigger Plugin or a Multi-Configuration project, and it did not require any extra configuration outside of Jenkins.
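For comparison, the same phase shape (build alone, then parameterized tests in parallel) can be sketched with the built-in build and parallel steps of a scripted pipeline; the job and parameter names below are hypothetical:

// phase 1: the build job runs alone
build job: 'build-job'

// phase 2: both parameterized test runs execute concurrently
parallel(
    'tests-one': {
        build job: 'test-job', parameters: [string(name: 'TEST_PARAM', value: 'one')]
    },
    'tests-other': {
        build job: 'test-job', parameters: [string(name: 'TEST_PARAM', value: 'other')]
    }
)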

Assuming you know the parameters when you are finishing your build job, you can use the Parameterized Trigger Build plugin to fire both downstream jobs with different parameters.

One option would be to use Build Flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) potentially together with Job DSL plugin (https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin). You can use Job DSL to define job steps that invoke your build with different command line arguments and orchestrate the build with Build Flow.
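A minimal Build Flow DSL sketch of this fan-out, with hypothetical job and parameter names:

// run the build job first, then both parameterized test builds concurrently
build("build-job")
parallel (
    { build("test-job", TEST_PARAM: "one") },
    { build("test-job", TEST_PARAM: "other") }
)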

I have a slightly different use case. We have test jobs that run against our main build during the development cycle. Toward the end of the cycle, we create a release candidate build and run the same tests against that. We also want to continue testing the main build.
 Main Build     Release Build
        \           /
         \         /
      same set of tests
I can create duplicate jobs with just different names to handle this, but there has to be a more elegant/simpler way.

Could you please say a bit more about why you need your test jobs to run concurrently?
I do use tests that need to be split and run simultaneously, but I use a single Jenkins/Hudson job that has a weight > 1 (see the Heavy Job Plugin).

Related

How to see the full build queue in Jenkins

Our Jenkins instance has a job for our main application. It builds all git branches in the one job, and so can sometimes get pretty far behind. However, the Build Queue on the lefthand side only ever shows the next job, not all the others. Is there a way to see all the queued executions of a single job? Ideally it'd even show the branch as well.
I'm aware of solutions like creating a new job for each branch, but this really clutters up the already horrible interface, and I'd rather avoid that.
For a single job with the same parameters, Jenkins doesn't place a build in the queue if it is already contained in the queue. You can use a simple trick: add an unused parameter and set some random value for this parameter every time you run the job. Now you can have multiple builds in the queue for the same job.
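For instance, if the builds are triggered from another job's pipeline, the throwaway value could be a random UUID; the job and parameter names here are hypothetical:

// the unique value makes each queued request distinct, so none get coalesced
build job: 'main-app-build',
      wait: false,
      parameters: [string(name: 'QUEUE_SALT', value: UUID.randomUUID().toString())]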

Is it possible to have a TFS build activity run on a specific agent?

We are using TFS 2010 and we've got a requirement to run a separate EXE to execute our old VB6 build script. We've got 3 build agents spread across 2 machines but only one machine has the VB6 development environment on it (as well as the old build script) so only one agent can do this work. I know how to do this with a build activity in our build template file but because we've got 3 build agents on 2 different machines, my question is how can I make sure this activity only runs on the one build agent which contains the VB6 development environment?
I realize this creates a bottleneck but it's only temporary for now. It would be ideal if I could do this without creating a custom DLL for a custom activity. Also, I understand with tagging I could create a different build definition for this work but again, this is not ideal... I want one build workflow that runs this on the one agent but the rest of the activities afterward continue on their respective build agent(s).
EDIT: All that said above, if it is possible to chain 2 build definitions together (the VB6 build definition runs and then the "regular" build definition), that would be an acceptable solution if possible.
You could certainly chain together 2 build definitions using the RunWorkflow and WaitForWorkflow Activities that are included out of the box (if you look at the LabDefaultTemplate it does this).
A more elegant solution would be to do it in one workflow using the AgentScope activity to separate out the sections of the workflow that need to run on different specific agents. If you look at the DefaultTemplate most of it is wrapped in a single AgentScope activity (labeled Run On Agent), but you could split it out into a couple AgentScope activities that are called in sequence (or in parallel if possible).

Job inheritance in Jenkins jobs

How do you handle mapping Jenkins jobs to your build process, and have you been able to build cascading configurations based on inheritance?
For any given build I'll have at least three jobs (standard continuous integration/nightly, security scan, coverage) and then some downstream integration testing jobs. The Configuration Slicing plugin handles some aspects across jobs, but each job is still very much its own individual entity with no relationship to the other jobs in its group.
I recently saw QuickBuild, and it has job inheritance where a parent job can define a standard group of steps and its children can override and specialize them. With Jenkins, I have copies of jobs, which is fine until I need to change something. With QuickBuild the relationship between jobs allows me to spread my changes with little effort.
I've been trying to figure out how to handle this in Jenkins. I could use the Parameterized Trigger plugin to allow jobs to call others and override aspects. I'd then harvest the data from the called jobs back to the caller. I suspect I'll run into a series of problems: there will be aspects I can't override, which will force me to reimplement Jenkins functionality in my own scripts, making Jenkins less useful.
How do you handle complexity in your build jobs in Jenkins? Have you heard of any serious problems with QuickBuild?
I would like to point you to a plugin that my team has developed and only recently published as open source.
It implements full "inheritance between jobs".
Here are some further links that might help you:
Presentation: https://www.youtube.com/watch?v=wYi3JgyN7Xg
Wiki: https://wiki.jenkins-ci.org/display/JENKINS/inheritance-plugin
Releases: http://repo.jenkins-ci.org/releases/hudson/plugins/project-inheritance/
I had pretty much the same problem. We have a set of jobs that needs to run for our trunk as well as at least two branches. The branches represent our versions, and a new branch is created every few months. Creating new jobs by hand for this is no solution, so I checked out some possibilities.
One possibility is to use the Template plugin. This lets you create a hierarchy of jobs of sorts. It provides inheritance for builders, publishers, and SCM settings. It might work for some; for me it was not enough.
The second thing I checked out was the Ant script for job cloning, and its sibling, the Bash script. These are truly great. The idea is to have the script create a new job, copy all settings from a template job, and make the changes you need. As this is a script, it is very flexible, and you can do a lot with it. The only drawback is that this will not result in a real hierarchy, so changes in the template job will not be reflected in jobs already cloned, only in jobs created going forward.
Looking at the drawbacks and virtues of those two solutions, a combination of both might work best. You create a template project with some basic settings that will be true for all jobs, and then use a bash or ant script to create jobs depending on that template.
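The Job DSL plugin offers a scripted take on the same combination; here is a minimal hypothetical sketch that stamps out one job per branch from shared settings (the URL, branch names, and goals are made up):

// one generated job per branch; the shared settings live in this one script
['trunk', 'branch-1.0', 'branch-2.0'].each { branch ->
    job("myapp-${branch}-build") {
        scm {
            git('https://example.com/myapp.git', branch)
        }
        steps {
            maven('clean verify')
        }
    }
}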
Hope that helps.
I was asked what our eventual solution to the problem was... After many months of fighting with our purchasing system we spent around $4000 US on QuickBuild. In about 2-3 months we had a templated build system in place and were very happy with it. Before I left the company we had several product groups in the system and were automating the release process as well.
QuickBuild was a great product. It should be in the $40k class, but it's priced at much less. While I'm sure Jenkins could do this, it would be a bit of a kludge, whereas QuickBuild has this functionality baked in. I've implemented complex behaviors on top of products before (e.g. merge tracking in SVN 1.0) and regretted it. QuickBuild was reasonably priced and provided a solid base for our build and test systems.
At present, I'm at a firm using Bamboo and hope its new feature-branch feature will provide much of what QuickBuild can do.
The EZ Templates plugin allows you to use any job as a template for other jobs. It is really awesome. All you need is to set the base job as a template:
* Usually you would also disable the base job (like an "abstract class").
Then create a new job, set it to use the base job as its template, and save.
Now edit the new job - it will include everything! (and you can override existing configurations).
Note: There's another plugin, Template Project, for configuration templates, but it has not been updated recently (last commit in 2016).
We use QuickBuild and it seems to work great for most things. I have even been able to use its APIs to write custom plugins. One area where QuickBuild is lacking is Sonar integration. The Sonar team has a Jenkins plugin but not one for QuickBuild.
Given that the goal is DRY (don't repeat yourself), I presently favor this approach:
Use a Jenkins shared library with Jenkins Pipeline Unit to support TDD
Use Docker images (with Groovy, Python, or whatever language you like) to execute complex actions requiring APIs, etc.
Keep the actual job pipeline very spartan (basically just pulling build params and passing them to functions in the shared library, which may use Docker images to do the work)
This works really well and eliminates the DRY issues around complex build jobs.
Shared Pipeline Docker Code Example - vars/releasePipeline.groovy
/**
 * Run image
 * @param closure to run within image
 * @return result from execution
 */
def runRelengPipelineEphemeralDocker(closure) {
    def result
    artifactory.withArtifactoryEnvAuth {
        docker.withRegistry("https://${getDockerRegistry()}", 'docker-creds-id') {
            docker.image(getReleasePipelineImage()).inside {
                result = closure()
            }
        }
    }
    return result
}
Usage example
library 'my-shared-jenkins-library'

releasePipeline.runRelengPipelineEphemeralDocker {
    println "Running ${pythonScript}"
    def command = "${pythonInterpreter} -u ${pythonScript} --cluster=${options.clusterName}"
    sh command
}

Hudson - different build targets for different triggers

I would like to have different build targets for periodic builds and for those that are triggered by polling SCM.
More specifically: the idea is that nightly builds should call 'mvn verify', which includes integration tests, while a normal build calls 'mvn test', which just executes unit tests.
Any ideas how this can be achieved using Hudson?
Cheers
Chris
You could create two jobs - one scheduled and the other polled.
In the scheduled job you can specify a different Maven goal from the polled one.
The answer by Raghuram is straightforward and correct. But you can also have three jobs. The first two do the triggering and pass the Maven goal as a parameter into the third job. Sounds like a lot of clutter, and to a certain point it is. But it will help if you have a lot of configuration to do (especially if the configuration needs to be changed regularly), since the configuration only needs to be kept correct in one place for both trigger paths. Configuration does not only include the build steps but also the harvesting of all reports, post-build cleanup, notifications, triggering of downstream jobs, ... Another advantage is that you don't need to synchronize the two jobs so that they don't run in parallel (if that causes problems).
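In today's pipeline terms, the two thin trigger jobs would do little more than the following; the job and parameter names are hypothetical, and the third job simply runs mvn with whatever goal it receives:

// nightly trigger job (cron schedule): pass the heavier goal
build job: 'maven-build', parameters: [string(name: 'GOALS', value: 'verify')]

// SCM-polling trigger job: pass the fast goal
build job: 'maven-build', parameters: [string(name: 'GOALS', value: 'test')]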
Don't get me wrong, my first impulse would be to go for two jobs, which has its own advantages. The history of the nightly build will contain the whole day (actually, everything since the last nightly build) and not only the time since the last build (which could be a triggered one). Integration tests usually need a more extensive setup or access to scarce resources. With two jobs you don't block these resources when you only run the test goal. In addition, I expect that more test results need to be harvested, displayed, and tracked over time by Hudson. You also might want to run more metrics against your code whose results should be displayed by Hudson. The disadvantage is that you of course need to keep the build steps basically the same in both jobs all the time.
But in the end it is a case-by-case decision whether you go with 2 or 3 jobs.

Slow Complex Builds & Hudson vs. Electric Cloud

Is hudson the right tool for complex C++ builds?
I have a C++ build that takes about 4 hours. Compile and packaging take about 1/2 the time and testing consumes the other half. Presently, we are using a home grown system but there's some move to go to hudson since we use it for all of our java builds.
My problem is that continuous integration isn't very...continuous at 4 hour intervals. I want a tool that's going to let me parallelize the build in an understandable way.
Hudson's been great for small builds or java builds where I'm sitting at the top of a large maven project, but I don't think it will scale well for complex c++ builds.
What have your experiences been?
Seems like you have a few questions here:
Should I use a CI server to manage my C++ build? The answer to this is unequivocally YES. Your homegrown system may be adequate, but it's not standard, extending it is probably difficult, and maintaining it is a distraction from the work you're actually paid to do.
Is Hudson the right choice for my project? It will probably get the job done, and it has the advantage of being in deployment at your site already. However, you specifically mention that you want a tool that supports parallelization well, and I don't think that Hudson really fits the bill. The problem is that Hudson was not designed with parallelism in mind. See, the representation of a build process in Hudson is a "job", which is just a series of steps executed in sequence -- checkout, compile, test, package, etc. There's no way to get those steps to run in parallel. Now, you can get around this by modeling your process with multiple jobs. Each job is completely independent, so of course they could be run in parallel; you can use something like the Locks and Latches plugin to coordinate the jobs, but the whole process is more complicated than it ought to be, and kind of clumsy -- instead of a single job representing a single run of the build process, you have several unconnected jobs, at best tied together via naming convention.
Can Electric Cloud help? Again, an unequivocal YES. Electric Cloud offers ElectricCommander, a CI server with parallel support built in from inception. As with Hudson, a job is used to represent a build process, but the steps within a job can easily be run in parallel (just check the "parallel" box on those steps), so you don't have to resort to add-ons and kludges: one run of the build process is one job, with as many parallel steps as you like.
Will the right CI server put "continuous" back into my integration? A CI server will only get you so far. The thing is, a CI server can provide you coarse-grained parallelism -- so with a little work, you can set it up to run packaging in parallel with tests, for example. With a little more work, you can probably split your test phase into a few independent pieces that can be run in parallel.
You didn't give many details, but let's assume that your build is 90 minutes of compile, 30 minutes of packaging, and 2 hours of tests that can be broken down into four 30 minute pieces. Suppose further that you can do packaging and testing simultaneously. That would bring your 4 hour process down to 2 hours total. At this point the "long pole" in your process is the compile phase, and although you might be able to break that up by hand into pieces that can be run in parallel by your CI server, the truth is that the CI server is just not the right tool for that job.
A better option is to use a build tool that can give you automatic fine-grained parallelism within the compile phase. For example, if you're using gmake already, you can try gmake -j 8 to run 8 compiles at once. If your makefiles are clean and your dependencies are all correct, and you have a beefy build server, this could give you a pretty good performance boost. You could also use ElectricAccelerator, another product from Electric Cloud, that was specifically designed to accelerate this portion of the build process, even for builds that can't safely use gmake -j due to incorrect or incomplete dependencies.
Hope that helps.
Can you not split the build into multiple parts whatsoever?
You do mention that the job has several distinct parts. The general guidance with Hudson is to do the build part in one job, testing in another, packaging in another, and so on.
You can compile the code in Job A and archive the output, then tell Job B to copy these artifacts from Job A and run the tests on them. Meanwhile, another Job A build can be kicked off due to further commits to the source repository.
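With the Copy Artifact plugin, Job B's fetch-and-test step might look roughly like this (the job name and test script are hypothetical):

// grab the compiled output from the latest successful Job A run, then test it
copyArtifacts projectName: 'job-a-compile', selector: lastSuccessful()
sh './run-tests.sh'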
Sounds to me like the problem is with your build process (makefiles? msbuild?) and not Hudson. Hudson will simply execute the build process the same way a user would from the command line. Is it possible to optimize your build process?
Even if a 4 hour build process is unavoidable, Hudson can help because you can attach an unlimited number of slave machines which can all be running multiple builds in parallel, given adequate hardware horsepower.