Hudson - different build targets for different triggers

I would like to have different build targets for periodic builds and for those that are triggered by polling SCM.
More specifically: the idea is that nightly builds should call 'mvn verify', which includes integration tests, while a normal build calls 'mvn test', which just executes unit tests.
Any ideas how this can be achieved using Hudson?
Cheers
Chris

You could create two jobs - one scheduled and the other polled.
In the scheduled job you can specify a different Maven goal than in the polled one.
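To make that concrete, here is a rough sketch of the two jobs written in today's Jenkins Pipeline syntax rather than Hudson's freestyle configuration; each pipeline block below would be its own job, and the schedules are just illustrative assumptions:

// Nightly job: scheduled, runs the full verification including integration tests
pipeline {
    agent any
    triggers { cron('H 2 * * *') }            // once per night
    stages {
        stage('Verify') {
            steps { sh 'mvn verify' }
        }
    }
}

// SCM-triggered job: polls the repository and runs unit tests only
pipeline {
    agent any
    triggers { pollSCM('H/15 * * * *') }      // poll roughly every 15 minutes
    stages {
        stage('Test') {
            steps { sh 'mvn test' }
        }
    }
}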

The answer by Raghuram is straightforward and correct. But you can also have three jobs. The first two do the triggering and pass the Maven goal as a parameter into the third job. Sounds like a lot of clutter, and to a certain point it is. But it helps if you have a lot of configuration to do (especially if the configuration needs to be changed regularly), because the configuration only has to be kept correct in one place for both triggers. Configuration does not only include the build steps but also the harvesting of all reports, post-build cleanup, notifications, triggering of downstream jobs, and so on. Another advantage is that you don't need to synchronize the two jobs so that they don't run in parallel (if that would cause problems).
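A rough sketch of that hand-off, again in modern Pipeline terms (the MAVEN_GOAL parameter and the shared-build job name are made up for illustration): the two trigger jobs only decide which goal to pass, and the shared third job carries all the real configuration.

// In each of the two trigger jobs: pass only the Maven goal downstream
build job: 'shared-build', parameters: [string(name: 'MAVEN_GOAL', value: 'verify')]   // nightly trigger
build job: 'shared-build', parameters: [string(name: 'MAVEN_GOAL', value: 'test')]     // SCM-polling trigger

// The shared, parameterized third job: build steps, reports, notifications all live here
pipeline {
    agent any
    parameters { string(name: 'MAVEN_GOAL', defaultValue: 'test', description: 'Maven goal to run') }
    stages {
        stage('Build') {
            steps { sh "mvn ${params.MAVEN_GOAL}" }
        }
    }
}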
Don't get me wrong, my first impulse would be to go for two jobs, which has its own advantages. The history of the nightly build will then cover the whole day (actually, everything since the last nightly build) and not only the time since the last build (which could be a triggered one). Integration tests usually need a more extensive setup or access to scarce resources. With two jobs you don't block these resources when you run the test goal. In addition, I expect that more test results need to be harvested to be displayed and tracked over time by Hudson. You also might want to run more metrics against your code whose results should be displayed by Hudson. The disadvantage is that you of course need to keep the build steps basically the same in both jobs all the time.
But in the end it is a case-by-case decision whether you go with two or three jobs.


At what stage should coverity static analysis be done?

When should we do Coverity static analysis (no build - buildless capture, since we don't use a compiled language) in our CI lifecycle? We have stages like test, build, deploy. What are the pros and cons of the different approaches?
This is for a django application which is deployed onto kubernetes.
test stage involves testing django end-points.
build stage involves building a docker container.
deploy stage involves rolling out the recently built docker image.
If I were to create a new stage, where should it go? Is there any convention for this?
Deciding where to put certain checks in your build pipeline is a matter of what you want to get out of those checks.
A build pipeline should give you fast feedback first and foremost. You want to know as quickly as possible if there's anything significant that should stop your build from going out to production. That's why you tend to move checks that run fast to the earlier stages of your pipeline. This way you quickly check whether it's worth it to move on to the slower, more cumbersome steps of your pipeline.
If your static code analysis detects issues, do you want to fail the build? If so, this might be an indicator to put this step early into your pipeline.
How long does your static code analysis take to analyse your codebase? If it's a matter of a few seconds, you can put it into an early stage of your pipeline without thinking too much about it. If it takes significant time to run (maybe tens of seconds or even minutes), that's an indicator that you should move it to a later stage so that other, faster checks can run first.
You can, but don't have to, put static code analysis into one of your existing (test, build and deploy) stages; nothing stops you from creating a dedicated stage in your pipeline for it (verification, maybe?).
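For example, in a Jenkinsfile (or the equivalent in whichever CI tool you use) a dedicated stage could look roughly like this; the stage ordering and the run-coverity.sh / deploy.sh wrapper scripts are placeholders, not Coverity's actual CLI:

pipeline {
    agent any
    stages {
        stage('Test')   { steps { sh './manage.py test' } }        // fast feedback first
        stage('Verify') { steps { sh './ci/run-coverity.sh' } }    // hypothetical wrapper around buildless capture + analysis
        stage('Build')  { steps { sh 'docker build -t myapp .' } }
        stage('Deploy') { steps { sh './ci/deploy.sh' } }          // hypothetical rollout to Kubernetes
    }
}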
There's no reason to be dogmatic about this. It's valuable to experiment and see what works for you. Putting emphasis on fast feedback is a good rule of thumb to come up with a build pipeline that doesn't require you to watch the build for 20 minutes only to see that you made an indentation error on line 24.

Job inheritance in Jenkins jobs

How do you handle mapping Jenkins jobs to your build process, and have you been able to build cascading configurations through inheritance?
For any given build I'll have at least three jobs (standard continuous integration/nightly, security scan, coverage) and then some downstream integration testing jobs. The Configuration Slicer plugin handles some aspects across jobs, but each job is still very much its own individual entity with no relationship to the other jobs in its group.
I recently saw QuickBuild and it has job inheritance, where a parent job can define a standard group of steps and its children can override and specialize them. With Jenkins, I have copies of jobs, which is fine until I need to change something. With QuickBuild, the relationship between jobs allows me to spread my changes with little effort.
I've been trying to figure out how to handle this in Jenkins. I could use the parameterized build trigger plugin to allow jobs to call others and override aspects. I'd then harvest the data from the called jobs to its caller. I suspect I'll run into a series of problems where there are aspects which I can't override which will force me to implement Jenkins functionality in my own script thus making Jenkins less useful.
How do you handle complexity in your build jobs in Jenkins? Have you heard of any serious problems with QuickBuild?
I would like to point you to a plugin that my team has developed and only recently published as open source.
It implements full "Inheritance between jobs".
Here are some further links that might help you:
Presentation: https://www.youtube.com/watch?v=wYi3JgyN7Xg
Wiki: https://wiki.jenkins-ci.org/display/JENKINS/inheritance-plugin
Releases: http://repo.jenkins-ci.org/releases/hudson/plugins/project-inheritance/
I had pretty much the same problem. We have a set of jobs that needs to run for our trunk as well as at least two branches. The branches represent our versions, and a new branch is created every few months. Creating new jobs by hand for this is no solution, so I checked out some possibilities.
One possibility is to use the Template plugin. This lets you create a sort of hierarchy of jobs: it provides inheritance for builders, publishers and SCM settings. It might work for some; for me it was not enough.
The second thing I checked out was the Ant script for job cloning, and its sibling, the Bash script. These are truly great. The idea is to have the script create a new job, copy all settings from a template job, and make the changes you need. As this is a script it is very flexible, and you can do a lot with it. The only drawback is that this will not result in a real hierarchy, so changes in the template job will not be reflected in jobs already cloned, only in jobs created going forward.
Looking at the drawbacks and virtues of those two solutions, a combination of both might work best. You create a template project with some basic settings that will be true for all jobs, and then use a bash or ant script to create jobs depending on that template.
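As a rough illustration of the scripted half, a Groovy snippet run from the Jenkins script console can clone a template job and adjust it per branch (job names here are made up, and the exact tweaks depend on your setup):

import jenkins.model.Jenkins

def jenkins = Jenkins.instance
def template = jenkins.getItem('template-job')         // job holding the common settings

['branch-1.0', 'branch-2.0'].each { branch ->
    def name = "build-${branch}"
    if (jenkins.getItem(name) == null) {
        def job = jenkins.copy(template, name)          // copies all configuration from the template
        // adjust branch-specific settings here (SCM branch, labels, ...)
        // note: copied jobs may start out disabled; enable them if needed
        job.save()
    }
}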
Hope that helps.
I was asked what our eventual solution to the problem was... After many months of fighting with our purchasing system, we spent around $4,000 US on QuickBuild. In about 2-3 months we had a templated build system in place and were very happy with it. Before I left the company we had several product groups in the system and were automating the release process as well.
QuickBuild was a great product. It should be in the $40k class but it's priced at much less. While I'm sure Jenkins could do this, it would be a bit of a kludge, whereas QuickBuild has this functionality baked in. I've implemented complex behaviors on top of products before (e.g. merge tracking in SVN 1.0) and regretted it. QuickBuild was reasonably priced and provided a solid base for our build and test systems.
At present, I'm at a firm using Bamboo and hope its new feature-branch support will provide much of what QuickBuild can do.
The EZ Templates plugin allows you to use any job as a template for other jobs. It is really awesome. All you need is to set the base job as a template.
* Usually you would also disable the base job (like an "abstract class").
Then create a new job, set it to use the base job as its template, and save.
Now edit the new job - it will include everything! (and you can override existing configurations).
Note: there's another plugin, Template Project, for configuration templates, but it has not been updated recently (last commit in 2016).
We use QuickBuild and it seems to work great for most things. I have even been able to use their APIs to write custom plugins. One area where QuickBuild is lacking is Sonar integration. The Sonar team has a Jenkins plugin but not one for QuickBuild.
Given that the goal is DRY (don't repeat yourself) I presently favor this approach:
Use a Jenkins shared library with Jenkins Pipeline Unit to support TDD
Use Docker images (driven by Groovy/Python or whatever language you like) to execute complex actions requiring APIs etc.
Keep the actual job pipeline very spartan (basically just pulling build params and passing them to functions in the shared library, which may use Docker images to do the work).
This works really well and eliminates the DRY issues around complex build jobs.
Shared Pipeline Docker Code Example - vars/releasePipeline.groovy
/**
 * Run image
 * @param closure to run within image
 * @return result from execution
 */
def runRelengPipelineEphemeralDocker(closure) {
    def result
    artifactory.withArtifactoryEnvAuth {
        docker.withRegistry("https://${getDockerRegistry()}", 'docker-creds-id') {
            docker.image(getReleasePipelineImage()).inside {
                result = closure()
            }
        }
    }
    return result
}
Usage example
library 'my-shared-jenkins-library'

releasePipeline.runRelengPipelineEphemeralDocker {
    println "Running ${pythonScript}"
    def command = "${pythonInterpreter} -u ${pythonScript} --cluster=${options.clusterName}"
    sh command
}

buildbot vs hudson/jenkins for C++ continuous integration

I'm currently using Jenkins/Hudson for continuous integration of a large, mostly C++ project. We have separate projects for trunk and every branch. Also, there are some related projects for the Java code, but the setup for those is fairly basic right now (we may do more later though). The C++ projects do the following:
Builds everything with options for whether to reconfigure, do a clean build, or use a fresh checkout
Optionally builds and runs all tests
Optionally runs all tests using Valgrind's memcheck
Runs cppcheck
Generates doxygen documentation
Publishes reports: unit tests, valgrind, cppcheck, compiler warnings, SLOC, open tasks, and code coverage (using gcov, gcovr, and the cobertura plugin)
Deploys code nightly or on demand to a test environment and a package repository
Everything is configurable for automatic builds and optional for on-demand builds. Underneath, there's a bash script that controls much of this, which further depends on our build system, which uses automake and autoconf along with custom bash scripts.
We started using Hudson (at the time) because that's what the Java guys were using and we just wanted nightly builds. Since then, we've added a lot more and continue to add more. In some ways Hudson is great, but certainly isn't ideal.
I've looked at other solutions and the only one that looks like it could be a replacement is buildbot. Would buildbot be better for this situation? Is the investment worth it since we're already using Hudson? Why?
EDIT: Someone asked why I haven't found Hudson/Jenkins to be ideal. The short answer is that everything can be improved. I'm simply wondering if Jenkins is the best current solution for my use case or whether there is something better (buildbot?) that would be easier to maintain in the long run even as new requirements come up.
Both are open source projects, but you do not need to change Buildbot's code to "extend" it; it is actually quite easy to import your own packages in its configuration, in which you can subclass most of the features with your own additions. Examples: your own compilation or test code, some parsing of outputs/errors to be given to the next steps, your own formatting of alert emails, etc. There are lots of possibilities.
Generally I would say that Buildbot is the most "general purpose" automatic build tool. Jenkins, however, might be the best for running tests, especially for parsing and presenting results in nice ways (results, details, charts... some clicks away), things that Buildbot does not do "out-of-the-box". I'm actually thinking of using both to have sexier test result pages. :-)
Also, as a rule of thumb, it should not be difficult to create a new tool's config: if the specification of what to do (configs, builds, tests) is too hard to switch from one tool to another, it is a (bad) sign that not enough configuration scripts have been moved into the sources. Buildbot (or Jenkins) should only call simple commands. If it is simple to run tests, then developers will do it as well, and this will improve the success rate; whereas if only the continuous integration system runs the tests, you will be running after it to fix the new code failures, and will lose its non-regression value. Just my 0.02€ :-)
Hope it'll help.
The 'result integration' is also in Jenkins/Hudson, and you can relatively easily capture build products without having to 'copy them elsewhere'.
For our instance, the coverage reports, unit test metrics and javadoc for the Java code are all integrated. For our C++ code, the plugins are a little lacking, but you can still get most of it.
We have run Buildbot since pre-0.7 and are now running 0.8, and we are only now seeing any real reason to switch, as Buildbot 0.8 forgot about Windows slaves for an extended period of time and the support was pretty poor.
There are many other solutions out there, besides Jenkins/Hudson/BuildBot:
TeamCity by Jetbrains
Bamboo by Atlassian
Go by Thoughtworks
Cruise Control
OpenMake Meister
The specifics about what you are doing are not so important, in fact, as long as the agents (aka nodes) that you are doing them on support those tasks.
The beauty of a CI server is noticing when the source changes in order to trigger a new build (and test), publish the artifacts, and publish test results.
When you compare CI tools like those we mentioned, consider features like the usability of its interface, how easy is branching (and features it might offer like automatic merging), notifications (like XMPP/jabber), or an information-radiator (like hooking up a monitor to always show status). Product support is another thing to consider - Jenkins' support is only as good as who is responding to community questions at the time you have questions.
My personal favorite is Bamboo, but it comes with a license fee.
I'm a long-time Jenkins user in the middle of evaluating Buildbot and would like to offer a few items for folks considering using Buildbot for multi-module solutions:
*) Buildbot doesn't have any out-of-the-box concept of file artifacts related to each build. It's not in the UI and it's not in any of the builtin "steps" modules as far as I can see:
http://docs.buildbot.net/current/manual/configuration/buildsteps.html
...and I see no third party plugin:
https://github.com/buildbot/buildbot/wiki/PluginList#steps
Buildbot does collect all the console output from a given build, but critically, you can't collect files related to it.
*) Given that artifacts are not supported, it's not easy to create "collector" projects that bring multiple modules into say, a single installer. Jenkins has a great feature that lets you parameterize a build with builds from other modules (the parameter type is a run).
*) Establishing dependencies between modules is trickier in Buildbot. Say you have a library that three binaries depend on, and you want those binaries to rebuild each time the library changes. Jenkins has triggers built into the UI. If you want to do triggers in Buildbot you have to script them using schedulers.Dependent, and it causes a lot of item congestion in the Schedulers UI.
*) When you're working in Buildbot, it seems that pretty much all of the configuration is done in master.cfg in code. This is awesome and frustrating.
*) Buildbot forces you to create a worker in addition to a master server. This is annoying for beginners and systems for which a single build server is sufficient.
My impression after two days of Buildbot evaluation is that we'll stick with Jenkins, primarily due to it having artifacts. Buildbot is a tool we'd only use if we had more extensive customization needs, and the time to do it.
On the subject of Buildbot and artifacts -- I don't have enough reputation to make a comment -- you can get artifacts from the Buildbot 2.x series pretty easily with the built-in file/directory upload actions. However, you rarely want to just move files. Typically you make a triggered build step that does deployment directly off the worker for best results, e.g. push to cloud storage, containers, third parties (Steam uploads), etc.
This way you can get metrics on the uploads and conditionally control them better (or even mix and match artifacts across worker machines).

How to start identical jobs with different parameters in parallel?

I have a build job and a parameterized test job.
After the build job, I want to simultaneously run the test job with one set of parameters and the same test job with other parameters, in parallel:
            build job
                |
               / \
     test job       test job
 with one params   with other params
       |                 |
How can this be accomplished, and is it possible without having to write my own plugin?
Thanks!
When you create your test job, create it as a "Build multi-configuration project".
While configuring the job, select "Configuration Matrix", then "User-defined Axis".
You can use the name of this axis as a parameter in your job. The given parameter values will be run simultaneously in separate job instances (if enough executors are available).
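On newer Jenkins versions the same idea can also be expressed as a declarative Pipeline matrix instead of a freestyle multi-configuration job; a minimal sketch (the axis name, its values and the run-tests.sh script are placeholders):

pipeline {
    agent any
    stages {
        stage('Tests') {
            matrix {
                axes {
                    axis {
                        name 'TEST_PARAMS'
                        values 'paramsA', 'paramsB'
                    }
                }
                stages {
                    stage('Run') {
                        // both matrix cells run concurrently if executors are available
                        steps { sh './run-tests.sh "$TEST_PARAMS"' }
                    }
                }
            }
        }
    }
}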
Playing off @Soo Wei Tan's answer, I found the following works well.
Parameterized Trigger Plugin
Choose "Parameter Factory"
Choose "For every property file, invoke one build"
Then, in a shell, write a series of property files, and the Trigger Plugin will take care of the rest.
You can even combine this with a matrix style job at the top level in interesting ways. For example, triggering on the user-defined axis, keeping track of it all with a grid. Really quite a flexible approach, if a bit hidden.
I had the same requirement, and found that Parameterized Trigger Plugin was not flexible enough for passing different parameters to different (or the same) jobs in parallel. Yes you can use a Parameter Factory with property files, but that would mean adding new property files to my version control solely for the purpose of configuring Jenkins. A Multi-Configuration project with a configuration matrix also seemed overcomplicated.
The better and more straightforward solution for me was the Multijob Plugin, which has the concept of Phases. A MultiJob can have multiple phases. Phases run sequentially and jobs within a phase will run concurrently (in parallel).
After installing the MultiJob plugin, when creating a new Jenkins item, select MultiJob Project. You can then create one or more phases.
Each job within a phase has it own parameters, click Advanced... -> Add Parameters
Also, it is very easy to configure what should happen if a particular job fails - whether the entire MultiJob should continue or fail, etc.; see the "Kill the phase on:" and "Continuation condition to next phase when jobs' statuses are:" settings.
For me this was much more intuitive to use than the Parameterized Trigger Plugin or a Multi-Configuration project, and it did not require any extra configuration outside of Jenkins.
Assuming you know the parameters when you are finishing your build job, you can use the Parameterized Trigger Build plugin to fire both downstream jobs with different parameters.
One option would be to use Build Flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) potentially together with Job DSL plugin (https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin). You can use Job DSL to define job steps that invoke your build with different command line arguments and orchestrate the build with Build Flow.
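For reference, a minimal Build Flow DSL sketch of that fan-out might look like this (job and parameter names are made up); note that the Build Flow plugin has since been deprecated in favour of Jenkins Pipeline, where the parallel step plays the same role:

// Build Flow job definition
build("build-job")
parallel(
    { build("test-job", TEST_PARAMS: "one") },
    { build("test-job", TEST_PARAMS: "other") }
)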
I have a slightly different use case. We have test jobs that run against our main build during the development cycle. Toward the end of the cycle, we create a release candidate build and run the same tests against that. We want to also continue testing the main build.
  Main Build     Release Build
        \             /
              |
      same set of tests
I can create duplicate jobs with just different names to handle this, but there has to be a more elegant/simpler way.
Could you please say a bit more about why you need your test jobs to run concurrently?
I do have tests that need to be split and run simultaneously, but I use a single Jenkins/Hudson job that has a weight > 1 (see the Heavy Job Plugin).

Slow Complex Builds & Hudson vs. Electric Cloud

Is hudson the right tool for complex C++ builds?
I have a C++ build that takes about 4 hours. Compile and packaging take about half the time and testing consumes the other half. Presently, we are using a home-grown system, but there's some move to go to Hudson since we use it for all of our Java builds.
My problem is that continuous integration isn't very...continuous at 4 hour intervals. I want a tool that's going to let me parallelize the build in an understandable way.
Hudson's been great for small builds or Java builds where I'm sitting at the top of a large Maven project, but I don't think it will scale well for complex C++ builds.
What have your experiences been?
Seems like you have a few questions here:
Should I use a CI server to manage my C++ build? The answer to this is unequivocally YES. Your homegrown system may be adequate, but it's not standard, extending it is probably difficult, and maintaining it is a distraction from the work you're actually paid to do.
Is Hudson the right choice for my project? It will probably get the job done, and it has the advantage of being in deployment at your site already. However, you specifically mention that you want a tool that supports parallelization well, and I don't think that Hudson really fits the bill. The problem is that Hudson was not designed with parallelism in mind. See, the representation of a build process in Hudson is a "job", which is just a series of steps executed in sequence -- checkout, compile, test, package, etc. There's no way to get those steps to run in parallel. Now, you can get around this by modeling your process with multiple jobs. Each job is completely independent, so of course they could be run in parallel; you can use something like the Locks and Latches plugin to coordinate the jobs, but the whole process is more complicated than it ought to be, and kind of clumsy -- instead of a single job representing a single run of the build process, you have several unconnected jobs, at best tied together via naming convention.
Can Electric Cloud help? Again, an unequivocal YES. Electric Cloud offers ElectricCommander, a CI server with parallel support built in from inception. As with Hudson, a job is used to represent a build process, but the steps within a job can easily be run in parallel (just check the "parallel" box on those steps), so you don't have to resort to add-ons and kludges: one run of the build process is one job, with as many parallel steps as you like.
Will the right CI server put "continuous" back into my integration? A CI server will only get you so far. The thing is, a CI server can provide you coarse-grained parallelism -- so with a little work, you can set it up to run packaging in parallel with tests, for example. With a little more work, you can probably split your test phase into a few independent pieces that can be run in parallel.
You didn't give many details, but let's assume that your build is 90 minutes of compile, 30 minutes of packaging, and 2 hours of tests that can be broken down into four 30 minute pieces. Suppose further that you can do packaging and testing simultaneously. That would bring your 4 hour process down to 2 hours total. At this point the "long pole" in your process is the compile phase, and although you might be able to break that up by hand into pieces that can be run in parallel by your CI server, the truth is that the CI server is just not the right tool for that job.
A better option is to use a build tool that can give you automatic fine-grained parallelism within the compile phase. For example, if you're using gmake already, you can try gmake -j 8 to run 8 compiles at once. If your makefiles are clean and your dependencies are all correct, and you have a beefy build server, this could give you a pretty good performance boost. You could also use ElectricAccelerator, another product from Electric Cloud, that was specifically designed to accelerate this portion of the build process, even for builds that can't safely use gmake -j due to incorrect or incomplete dependencies.
Hope that helps.
Can you not split the build into multiple parts whatsoever?
You do mention that the job has several distinct parts. The general guidance with Hudson is to do the build part in one job, testing in another, packaging in another, and so on.
You can compile the code in Job A and archive the output, then tell Job B to copy these artifacts from Job A and run the tests on them. Meanwhile, another Job A build can be kicked off due to further commits to the source repository.
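As a rough sketch of that split - shown in today's Pipeline syntax for compactness, though with freestyle jobs you would use the "Archive the artifacts" post-build action and the Copy Artifact plugin; job, path and script names are only examples:

// Job A (build): archive the build output so other jobs can pick it up
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './configure && make -j8'
                archiveArtifacts artifacts: 'build/output/**'
            }
        }
    }
}

// Job B (test): fetch Job A's artifacts and run the tests against them
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                copyArtifacts projectName: 'job-a', selector: lastSuccessful()
                sh './run-tests.sh build/output'
            }
        }
    }
}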
Sounds to me like the problem is with your build process (makefiles? MSBuild?) and not Hudson. Hudson will simply execute the build process the same way a user would from the command line. Is it possible to optimize your build process?
Even if a 4 hour build process is unavoidable, Hudson can help because you can attach an unlimited number of slave machines which can all be running multiple builds in parallel, given adequate hardware horsepower.