GoCD - Fetch materials on failure

The way "fetch" materials works is that the latest "passed" build is transferred to the downstream pipelines.
Is it possible to do even if the upstream stage fails ?

I don't think that a stage failure even triggers the next stage or next pipeline, so nothing runs that could fetch the failed material.

Is it possible to fetch them even if the upstream stage fails?
No, it's not possible.
"Stages are meant to run sequentially". Why?
Generally, you should design your problem so that stages are dependent and sequential.
For example: "build > unit test > integration test > deploy".
If you look at that sequence, it doesn't make sense to continue to the next step if the previous one fails, so GoCD stages are implemented to enforce this dependency pattern.
Your requirement may well be valid, but stages might not be the right solution for it. I would suggest re-thinking why you want to do that and picking the correct abstraction in GoCD for the problem.
GoCD has pipelines, stages, jobs, and tasks. Check what best fits your situation and apply it.
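To make the "sequential stages" idea concrete, here is a conceptual sketch in plain shell (this is not GoCD configuration; the commands are placeholders): a pipeline behaves like a script that stops at the first failure, so a failed stage never triggers the next one.

    set -e           # abort at the first failing command
    mvn compile      # build stage
    mvn test         # unit test stage
    mvn verify       # integration test stage
    ./deploy.sh      # deploy stage: only reached if everything above passed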

Related

At what stage should coverity static analysis be done?

When should we do Coverity static analysis (no build: buildless capture, since we don't use a compiled language) in our CI lifecycle? We have stages like test, build, deploy. What are the pros and cons of different approaches?
This is for a django application which is deployed onto kubernetes.
test stage involves testing django end-points.
build stage involves building a docker container.
deploy stage involves rolling out the recently built docker image.
If I were to create a new stage, where should it go? Is there any convention for this?
Deciding where to put certain checks in your build pipeline is a matter of what you want to get out of those checks.
A build pipeline should give you fast feedback first and foremost. You want to know as quickly as possible if there's anything significant that should stop your build from going out to production. That's why you tend to move checks that run fast to the earlier stages of your pipeline. This way you quickly check whether it's worth it to move on to the slower, more cumbersome steps of your pipeline.
If your static code analysis detects issues, do you want to fail the build? If so, this might be an indicator to put this step early into your pipeline.
How long does your static code analysis take to analyse your codebase? If it's a matter of a few seconds, you can put it into an early stage of your pipeline without thinking too much about it. If it takes significant time to run (maybe tens of seconds or even minutes), that's an indicator that you should move it to a later stage so that other, faster checks can run first.
You don't have to put static code analysis into one of your existing stages (test, build, deploy); nothing stops you from creating a dedicated stage in your pipeline for it (verification, maybe?).
There's no reason to be dogmatic about this. It's valuable to experiment and see what works for you. Putting emphasis on fast feedback is a good rule of thumb to come up with a build pipeline that doesn't require you to watch the build for 20 minutes only to see that you made an indentation error on line 24.
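As a sketch of that rule of thumb, a top-level pipeline script ordered for fast feedback might look like this (the stage scripts and the run_coverity_scan helper are placeholders, not real commands from your setup):

    set -e                     # any failing stage stops the pipeline
    ./lint.sh                  # seconds: cheapest checks first
    run_coverity_scan          # if the buildless capture is fast, it can sit this early
    ./run_django_tests.sh      # test stage: exercise the django endpoints
    docker build -t app:ci .   # build stage: build the container image
    ./deploy.sh                # deploy stage: roll out the freshly built image
    # if the coverity scan takes minutes, move it below the unit tests instead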

TeamCity: re-run tests until the last working revision / the revision that broke them is found

As we only run our unit tests once a day it can happen that multiple changes led to a failing test. We then go into the changes list and trigger the tests for each change until we find the one responsible for breaking the test.
How can we automate this? We want TeamCity to run the unit tests again for the different changes (some binary search logic would be a bonus) until it finds the culprit.
What is this feature called? I'm looking for an option to enable it but haven't had any luck so far.
Thanks for any input and pointers.
I've developed a TC plugin to deal with this. See https://github.com/sferencik/SinCity. Read the docs and see if it suits you. I'm happy to help you further if you need.
The docs also mention the only other alternative I'm aware of: https://github.com/tkirill/tc-bisect. That has the bisect functionality ("binary search logic") but I'm not sure what state it's in.
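For reference, the "binary search logic" itself is straightforward. Here is a rough shell sketch of the idea, where run_tests_for is a placeholder for "trigger a test build on that revision and wait for the result":

    # revisions[0] is known good (yesterday's green run); the last one is known bad
    revisions=(r100 r101 r102 r103 r104 r105 r106 r107)
    lo=0
    hi=$(( ${#revisions[@]} - 1 ))
    while [ $(( hi - lo )) -gt 1 ]; do
      mid=$(( (lo + hi) / 2 ))
      if run_tests_for "${revisions[$mid]}"; then
        lo=$mid    # still passing: the culprit came later
      else
        hi=$mid    # failing: the culprit is this revision or earlier
      fi
    done
    echo "first bad revision: ${revisions[$hi]}"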

Skipping steps in Jenkins job

When you have a long-running Jenkins job composed of many steps, and you are actively developing or debugging that job, you need to be able to disable some of the steps in order to skip straight to the step being debugged.
How do you do that?
Obviously you can try deleting the steps you're not interested in, but that is a pain because restoring them is error-prone. The same goes for editing them to skip their work via some parameter like -DskipTests.
Another alternative would be to copy the job, but that is a pain too, because a checkout of our relatively large project takes ages. We could manually copy the workspace, but that is hard work as well.
What better solutions are there to this problem?
Try the Conditional BuildStep Plugin, which requires the Run Condition Plugin. With these two plugins, you can conditionalize any build step and skip any one that you like.
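If you'd rather not add plugins, a low-tech sketch of the same idea: make the job parameterized and guard each shell step with a boolean parameter (Jenkins exposes build parameters as environment variables; the step scripts here are placeholders):

    if [ "${RUN_LONG_SETUP:-true}" = "true" ]; then
      ./long_setup.sh          # skipped when the parameter is set to false
    fi
    if [ "${RUN_TESTS:-true}" = "true" ]; then
      mvn test
    fi
    ./step_under_debug.sh      # the step you are actually iterating on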
OK, I suppose that just commenting out the steps you want to skip in .jenkins/jobs/JobName/config.xml should work?

How can I guarantee all unit tests pass before committing?

We've had problems recently where developers commit code to SVN that doesn't pass unit tests, fails to compile on all platforms, or even fails to compile on their own platform. While this is all picked up by our CI server (Cruise Control), and we've instituted processes to try to stop it from happening, we'd really like to be able to stop the rogue commits from happening in the first place.
Based on a few other questions around here, it seems to be a Bad Idea™ to force this as a pre-commit hook on the server side mostly due to the length of time required to build + run the tests. I did some Googling and found this (all devs use TortoiseSVN):
http://cf-bill.blogspot.com/2010/03/pre-commit-force-unit-tests-without.html
That would solve at least two of the problems (though it wouldn't build on Unix), but it doesn't reject the commit if the build fails. So my questions:
Is there a way to make a pre-commit hook in TortoiseSVN cause the commit to fail?
Is there a better way to do what I'm trying to do in general?
There is absolutely no reason why your pre-commit hook can't run the Unit tests! All your pre-commit hook has to do is:
Checkout the code to a working directory
Compile everything
Run all the unit tests
Then fail the hook if the unit tests fail.
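A minimal sketch of such a hook, assuming a Maven build: hooks/pre-commit receives the repository path and transaction id, and a non-zero exit rejects the commit. (Note this naive version builds trunk as it currently is, without applying the incoming transaction.)

    #!/bin/sh
    REPOS="$1"
    TXN="$2"
    WORKDIR=$(mktemp -d)
    trap 'rm -rf "$WORKDIR"' EXIT

    # a stricter hook would also overlay the incoming changes,
    # e.g. inspected via: svnlook diff "$REPOS" -t "$TXN"
    svn checkout -q "file://$REPOS/trunk" "$WORKDIR" || exit 1
    cd "$WORKDIR" || exit 1
    mvn -q compile || { echo "Build failed; commit rejected." >&2; exit 1; }
    mvn -q test    || { echo "Unit tests failed; commit rejected." >&2; exit 1; }
    exit 0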
It's completely possible to do. And, afterwards, everyone in your development shop will hate your guts.
Remember that in a pre-commit hook, the entire hook has to complete before it can allow the commit to take place and control can be returned to the user.
How long does it take to do a build and run through the unit tests? 10 minutes? Imagine doing a commit and sitting there for 10 minutes waiting for your commit to take place. That's the reason why you're told not to do it.
Your continuous integration server is a great place to do your unit testing. I prefer Hudson or Jenkins over CruiseControl. They're easier to set up, and their web pages are more user-friendly. Even better, they have a variety of plugins that can help.
Developers don't like it to be known that they broke the build. Imagine if everyone in your group got an email stating you committed bad code. Wouldn't you make sure your code was good before you committed it?
Hudson/Jenkins have some nice graphs that show you the results of the unit testing, so you can see from the webpage what tests passed and failed, so it's very clear exactly what happened. (CruiseControl's webpage is harder for the average eye to parse, so these things aren't as obvious).
One of my favorite Hudson/Jenkins plugins is the Continuous Integration Game. In this plugin, users are given points for good builds, fixing unit tests, and adding passing unit tests. They lose points for bad builds and breaking unit tests. There's a scoreboard that shows all the developers' points.
I was surprised how seriously developers took it. Once they realized that their CI game scores were public, they became very competitive. They would complain when the build server itself failed for some odd reason and they lost 10 points for a bad build. However, the number of failed unit tests dropped way, way down, and the number of unit tests written soared.
There are two approaches:
Discipline
Tools
In my experience, #1 can only get you so far.
So the solution is probably tools. In your case, the obstacle is Subversion. Replace it with a DVCS like Mercurial or Git. That will allow every developer to work on their own branch without the merge nightmares of Subversion.
Every once in a while, a developer will mark a feature or branch as "complete". That is the time to merge the feature branch into the main branch. Push that into a "staging" repository which your CI server watches. The CI server can then pull the last commit(s), compile and test them and only if this passes, push them to the main repository.
So the loop is: main repo -> developer -> staging -> main.
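The CI server's gate step can then be a few lines of shell (the repository URLs are placeholders, and the job's workspace is assumed to be a clone of the main repo):

    hg pull -u https://hg.example.com/staging    # fetch the candidate commits
    if mvn clean test; then
      hg push https://hg.example.com/main        # promote only green changesets
    else
      echo "staging commits failed the tests; not promoted" >&2
      exit 1
    fi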
There are many answers here which give you the details. Start here: Mercurial workflow for ~15 developers - Should we use named branches?
[EDIT] So you say you don't have the time to solve the major problems in your development process ... I'll let you guess how that sounds to anyone... ;-)
Anyway ... Use hg convert to get a Mercurial repo out of your Subversion tree. If you have a standard setup, that shouldn't take much of your time (it will just need a lot of time on your computer but it's automatic).
Clone that repo to get a work repo. The process works like this:
Develop in your second clone. Create feature branches for that.
If you need changes from someone, convert them into the first clone, then pull from that into your second clone (that way, you always have a "clean" copy from Subversion in case you mess up).
Now merge the Subversion branch (default) and your feature branch. That should work much better than with Subversion.
When the merge is OK (all the tests run for you), create a patch from a diff between the two branches.
Apply the patch to a local checkout from Subversion. It should apply without problems. If it doesn't, you can clean your local checkout and repeat. No chance to lose work here.
Commit the changes in Subversion, convert them back into repo #1, and pull into repo #2.
This sounds like a lot of work but within a week, you'll come up with a script or two to do most of the work.
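Such a script could be a rough sequence like this (the convert extension must be enabled; all paths, URLs, and branch names are placeholders):

    hg convert http://svn.example.com/repo mirror   # refresh repo #1 (incremental)
    cd work                                         # repo #2, on the feature branch
    hg pull ../mirror                               # bring in upstream changes
    hg merge default                                # merge svn's default branch
    mvn test                                        # run the tests before going on
    hg commit -m "merge default into feature"
    hg diff -r default -r feature > feature.patch   # changes to hand back to svn
    cd ../svn-checkout
    patch -p1 < ../work/feature.patch               # apply to a clean svn checkout
    svn commit -m "feature X"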
When you notice someone broke the build (the tests no longer pass for you), undo the merge (hg update -C) and continue to work on your working feature branch.
When your colleagues complain that someone broke the build, tell them that you don't have that problem. When people start to notice that your productivity is much better despite all the hoops you have to jump through, mention that "it would be much simpler if we scrapped SVN".
The best thing to do is to work to improve the culture of your team, so that each developer feels enough of a commitment to the process that they'd be ashamed to check in without making sure it works properly, in whatever ways you've all agreed.

Hudson - different build targets for different triggers

I would like to have different build targets for periodic builds and for those that are triggered by polling SCM.
More specifically: nightly builds should call 'mvn verify', which includes the integration tests, while a normal build calls 'mvn test', which executes just the unit tests.
Any ideas how this can be achieved using Hudson?
Cheers
Chris
You could create two jobs - one scheduled and the other polled.
In the scheduled job you can specify a different Maven goal from the polled one.
The answer by Raghuram is straightforward and correct. But you can also have three jobs: the first two do the triggering and pass the Maven goal as a parameter into the third job. That sounds like a lot of clutter, and to a certain point it is, but it helps if you have a lot of configuration to do (especially if the configuration needs to be changed regularly), because it keeps the configuration correct for both trigger paths. Configuration includes not only the build steps but also the harvesting of all reports, post-build cleanup, notifications, triggering of downstream jobs, and so on. Another advantage is that you don't need to synchronize the two jobs to keep them from running in parallel (if that causes problems).
Don't get me wrong, my first impulse would be to go with two jobs, which has its own advantages. The history of the nightly build will contain the whole day's changes (actually, everything since the last nightly build) and not only the changes since the last build (which could be a triggered one). Also, integration tests usually need a more extensive setup or access to scarce resources; with two jobs you don't block those resources when you only run the test goal. In addition, I expect that more test results need to be harvested, displayed, and tracked over time by Hudson, and you might want to run more metrics against your code whose results Hudson should display. The disadvantage is that you need to keep the build steps of the two jobs essentially identical all the time.
But in the end it is a case-by-case decision whether you go with two or three jobs.
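For the three-job variant, the shared job's build step can be as small as this, with MAVEN_GOAL supplied as a build parameter by the two trigger jobs (the parameter name is just an example):

    # "test" is passed by the SCM-polled trigger, "verify" by the nightly schedule
    mvn "${MAVEN_GOAL:-test}"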