When you have a long-running Jenkins job that is composed of many steps,
and you are actively developing or debugging this job, you need to be able to disable some of the steps in order to skip ahead to the step that is being debugged.
How do you do that?
Obviously you can try to delete the steps you are not interested in, but that is a pain because restoring those steps is error-prone. The same goes for editing them so they are skipped, e.g. by giving them some parameter like -DskipTests.
Another alternative would be to copy the job, but that is a pain as well, because a checkout of our relatively large project takes ages. We could copy the workspace manually, but that is hard work too.
What better solutions are there to this problem?
Try the Conditional BuildStep Plugin, which requires the Run Condition Plugin. With these two plugins, you can make any build step conditional and skip any one that you like.
OK, I suppose that just commenting out the steps you want to skip in .jenkins/jobs/JobName/config.xml should work?
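If the plugins aren't an option, the same idea can be approximated without them: drive the build through a small step-runner script and tell it which steps to skip via a build parameter. The sketch below is a hypothetical plugin-free alternative (the step names and a `SKIP_STEPS` environment variable are my own invention, not a Jenkins feature):

```python
import os
import subprocess

def run_steps(steps, skip=None):
    """Run named build steps in order, skipping any whose name is in `skip`.

    `steps` is a list of (name, argv) pairs; returns the names of the
    steps that were actually executed.
    """
    skip = skip or set()
    executed = []
    for name, argv in steps:
        if name in skip:
            print(f"SKIPPING step: {name}")
            continue
        print(f"RUNNING step: {name}")
        subprocess.run(argv, check=True)  # abort the build if a step fails
        executed.append(name)
    return executed

if __name__ == "__main__":
    # Jenkins can expose a string parameter as an env var,
    # e.g. SKIP_STEPS="checkout,tests" on a debugging run.
    skip = set(filter(None, os.environ.get("SKIP_STEPS", "").split(",")))
    steps = [
        ("checkout", ["echo", "checking out"]),  # placeholder commands
        ("tests",    ["echo", "running tests"]),
        ("package",  ["echo", "packaging"]),
    ]
    run_steps(steps, skip)
```

Restoring a skipped step is then just a matter of clearing the parameter, so nothing gets deleted or hand-edited.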
When should we do Coverity static analysis (no build / buildless capture, since we don't use a compiled language) in our CI lifecycle? We have stages like test, build, and deploy. What are the pros and cons of the different approaches?
This is for a Django application which is deployed onto Kubernetes.
The test stage involves testing the Django endpoints.
The build stage involves building a Docker container.
The deploy stage involves rolling out the recently built Docker image.
If I were to create a new stage, where should it go? Is there any convention followed while doing this?
Deciding where to put certain checks in your build pipeline is a matter of what you want to get out of those checks.
A build pipeline should give you fast feedback first and foremost. You want to know as quickly as possible if there's anything significant that should stop your build from going out to production. That's why you tend to move checks that run fast to the earlier stages of your pipeline. This way you quickly check whether it's worth it to move on to the slower, more cumbersome steps of your pipeline.
If your static code analysis detects issues, do you want to fail the build? If so, this might be an indicator to put this step early into your pipeline.
How long does your static code analysis take to analyse your codebase? If it's a matter of a few seconds, you can put it into an early stage of your pipeline without thinking too much about it. If it takes significant time to run (maybe tens of seconds or even minutes), this is an indicator that you should move it to a later stage so that other, faster checks can run first.
You can, but don't have to, put static code analysis into one of your existing stages (test, build, or deploy); there's no one stopping you from creating a dedicated stage in your pipeline for it (verification, maybe?).
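As a rough sketch of the dedicated-stage idea, assuming a GitLab-CI-style pipeline definition (the stage name, job name, and wrapper script below are all hypothetical, not real Coverity CLI syntax):

```yaml
stages:
  - test
  - verify      # dedicated stage for static analysis
  - build
  - deploy

static-analysis:
  stage: verify
  script:
    - ./run_static_analysis.sh   # hypothetical wrapper around the buildless capture
```

Where exactly `verify` sits relative to `test` should follow the fast-feedback rule above: whichever of the two is faster goes first.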
There's no reason to be dogmatic about this. It's valuable to experiment and see what works for you. Putting emphasis on fast feedback is a good rule of thumb to come up with a build pipeline that doesn't require you to watch the build for 20 minutes only to see that you made an indentation error on line 24.
I am using XCode for test driven development in C++.
It occurred to me that I would save a lot of time if XCode could automatically build and run my tests every time I save.
Is there any way to do this (by scripting XCode or otherwise)? Google doesn't seem to have a clue.
I have seen this workflow when using interpreted languages and it really does increase productivity.
Let's assume that my machine is fast enough to build and run tests in a few seconds.
If you're targeting C++, then you're probably out of luck.
With Objective-C, there's a project called «Injection»:
http://injectionforxcode.com/
It tracks changes to your project files, and when a change occurs, it re-builds the changed files as categories, placed inside a bundle.
The bundle is then loaded dynamically into the running app, and the contents from the categories replace the running code.
But it's Objective-C. C++ has no such runtime or capabilities.
Anyway, you may want to take a look at it... : )
Automatically? No. But you could write your own FSEvents monitor agent: when a change occurs that requires a rebuild, do something appropriate.
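A proper FSEvents agent needs Apple's APIs, but a portable poor-man's version just polls file modification times. This is a minimal sketch (the directory, glob patterns, and the commented `xcodebuild` invocation are illustrative assumptions, so check them against your own scheme):

```python
import subprocess
import time
from pathlib import Path

def snapshot(root, patterns=("*.cpp", "*.h")):
    """Map each matching source file to its last-modified time."""
    files = {}
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            files[path] = path.stat().st_mtime
    return files

def watch(root, command, interval=1.0):
    """Poll for source changes and run `command` after each change."""
    last = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        if current != last:
            last = current
            subprocess.run(command)  # rebuild and run the tests

# Usage sketch (arguments are illustrative, adjust to your project):
# watch("Sources", ["xcodebuild", "-scheme", "MyAppTests", "build"])
```

Polling once a second is plenty for a save-triggered workflow and avoids any platform-specific file-watching machinery.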
The easy way around this: you can configure Xcode to save when building. You don't need to save explicitly; just hit Run with this preference enabled. In that sense, hitting Run is as simple as hitting Save, and it performs a save, build, and run in the correct order. You may want an intermediate target or scheme for this.
Another option would be to use a version-control commit as a trigger for a build and run of your tests (saw your comment: use branches).
No, I don't think this can be done.
Most projects don't build and test in the fraction of a second it would require for it to be practical to do on every save anyway (i.e., whenever Xcode autosaves).
A lot of work has gone into the infrastructure for just getting Xcode's live errors and warnings. As long as your project isn't too weird, those live errors ought to be a pretty good proxy for actually building it anyway.
For testing you might want to look into continuous integration if you don't already use it.
Grey-beards that grew up before autosave may have developed the habit of occasionally using a key command to save manually. Such users may be able to change that habit by substituting the key command that runs the tests for the key command they use to manually save.
It seems that a build-system best practice is to have a single script that can build all source and package the releases. See Joel Test #2.
How do you account for non-portable dependencies? For example, if you code for .NET 4, then you need .NET 4 installed on the box. The standard MS release of .NET 4 is not xcopy-deployable (unless I'm mistaken?). I can see a few avenues:
1. The dependencies are clearly stated in some resource file (wiki, txt, whatever). When you call the build script, the build will fail if you don't have the dependency installed. This is an acceptable outcome.
2. The build script is responsible for setting up the environment. So if you require .NET 4 and it's not on the box, then it installs it for you.
3. A flavor of #2: instead of installing dependencies, the script spawns a pre-packaged image (virtual machine, Amazon EC2 AMI) that is set up with all dependencies.
4. ???
When implementing a build script you have to ask yourself how much work you want to (or can) spend on it. This leads to the question of how often you have to set up the build environment. I can see that #2 would be the perfect solution, but it would need a lot of work, since usually you have more than one non-portable dependency.
So we use #1. And it works quite well. The most important thing is that the build script starts with some sort of self-test. It looks for everything that is needed to build the whole software and gives an error if something is not found. And it gives a clear error message, so that any new guy knows what to do to get it running. Of course, as with a lot of software, it is nearly never finished and gets extended as needed. The drawback that this test can take some seconds is insignificant when the whole build process takes several minutes.
A wiki (or anything else) with the setup instructions was not a good solution for us, since after three months nobody knew where it was, but the build script is used every day.
The build script itself is a collection of many different things, chosen as needed. It starts with a batch file (we are using Windows) which invokes a lot of other things: other batch files, MSBuild, home-grown tools. Each step checks for its own dependencies, so the problem stays local and you can see three lines later why this particular thing is needed.
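The self-test described above can be sketched in a few lines. This version only checks that required executables are on the PATH (the tool names in the commented example are hypothetical placeholders for whatever your build actually needs):

```python
import shutil
import sys

def check_tools(required):
    """Return the subset of `required` executables not found on the PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

def self_test(required):
    """Abort the build with a clear message if any dependency is missing."""
    missing = check_tools(required)
    if missing:
        for tool in missing:
            print(f"ERROR: required tool '{tool}' not found on PATH - "
                  f"install it before building (see the team setup notes).")
        sys.exit(1)
    print("self-test OK: all build dependencies found")

# Hypothetical dependency list for a Windows/MSBuild setup like the one described:
# self_test(["msbuild", "nuget", "git"])
```

The same pattern extends to checking versions, environment variables, or installed SDKs; the point is that the error message tells the new guy exactly what is missing.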
Number 2 states "Can you make a build in one step?" As described, this means that for a development team to be effective, the build process must be as simple as possible, to reduce errors and ensure consistency. This is especially important as a team gets larger. You want to make sure everyone is building the same thing. (What is done with that package should also be simple, but it is not as important, IMHO.) MSBuild is great at this; it provides the facilities to set up a build server that accesses the source control system independently, so the developers' actions can't corrupt the build environment. I highly recommend setting up a build server using TFS -- many build issues will go away and you will have the one-click build Joel describes.
As for your points about what that package does for deployment -- you have many options with MS, but the more "one click" you can make it, the better. I believe this is slightly different from Joel's #2. In his example he describes changing the software he uses for the install not because one performs with fewer steps, but because one can be incorporated into a one-step build.
We've had problems recently where developers commit code to SVN that doesn't pass unit tests, fails to compile on all platforms, or even fails to compile on their own platform. While this is all picked up by our CI server (Cruise Control), and we've instituted processes to try to stop it from happening, we'd really like to be able to stop the rogue commits from happening in the first place.
Based on a few other questions around here, it seems to be a Bad Idea™ to force this as a pre-commit hook on the server side mostly due to the length of time required to build + run the tests. I did some Googling and found this (all devs use TortoiseSVN):
http://cf-bill.blogspot.com/2010/03/pre-commit-force-unit-tests-without.html
Which would solve at least two of the problems (it wouldn't build on Unix), but it doesn't reject the commit if it fails. So my questions:
Is there a way to make a pre-commit hook in TortoiseSVN cause the commit to fail?
Is there a better way to do what I'm trying to do in general?
There is absolutely no reason why your pre-commit hook can't run the Unit tests! All your pre-commit hook has to do is:
Checkout the code to a working directory
Compile everything
Run all the unit tests
Then fail the hook if the unit tests fail.
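Those steps boil down to running a fixed sequence of commands and rejecting the commit on the first failure. A hedged sketch (the commented commands are placeholders, not your actual repository layout):

```python
import subprocess
import sys

def run_hook_steps(steps):
    """Run each (name, argv) step in order; stop and return False on failure."""
    for name, argv in steps:
        try:
            result = subprocess.run(argv)
        except FileNotFoundError:
            print(f"pre-commit rejected: '{argv[0]}' not found", file=sys.stderr)
            return False
        if result.returncode != 0:
            print(f"pre-commit rejected: step '{name}' failed", file=sys.stderr)
            return False
    return True

# Hypothetical usage - substitute your real checkout/build/test commands:
# steps = [
#     ("checkout", ["svn", "checkout", "http://svn.example.com/repo", "wc"]),
#     ("compile",  ["make", "-C", "wc"]),
#     ("tests",    ["make", "-C", "wc", "test"]),
# ]
# sys.exit(0 if run_hook_steps(steps) else 1)  # non-zero exit rejects the commit
```

Subversion rejects the commit whenever the pre-commit hook exits non-zero, which is all the mechanism you need.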
It's completely possible to do. And, afterwards, everyone in your development shop will hate your guts.
Remember that in a pre-commit hook, the entire hook has to complete before it can allow the commit to take place and control can be returned to the user.
How long does it take to do a build and run through the unit tests? 10 minutes? Imagine doing a commit and sitting there for 10 minutes waiting for your commit to take place. That's the reason why you're told not to do it.
Your continuous integration server is a great place to do your unit testing. I prefer Hudson or Jenkins over CruiseControl. They're easier to set up, and their web pages are more user-friendly. Even better, they have a variety of plugins that can help.
Developers don't like it to be known that they broke the build. Imagine if everyone in your group got an email stating you committed bad code. Wouldn't you make sure your code was good before you committed it?
Hudson/Jenkins have some nice graphs that show you the results of the unit testing, so you can see from the webpage what tests passed and failed, so it's very clear exactly what happened. (CruiseControl's webpage is harder for the average eye to parse, so these things aren't as obvious).
One of my favorite Hudson/Jenkins plugins is the Continuous Integration Game. In this plugin, users are given points for good builds, fixing unit tests, and creating more passing unit tests. They lose points for bad builds and breaking unit tests. There's a scoreboard that shows all the developers' points.
I was surprised how seriously developers took it. Once they realized that their CI game scores were public, they became very competitive. They would complain when the build server itself failed for some odd reason and they lost 10 points for a bad build. However, the number of failed unit tests dropped way, way down, and the number of unit tests that were written soared.
There are two approaches:
Discipline
Tools
In my experience, #1 can only get you so far.
So the solution is probably tools. In your case, the obstacle is Subversion. Replace it with a DVCS like Mercurial or Git. That will allow every developer to work on their own branch without the merge nightmares of Subversion.
Every once in a while, a developer will mark a feature or branch as "complete". That is the time to merge the feature branch into the main branch. Push that into a "staging" repository which your CI server watches. The CI server can then pull the last commit(s), compile and test them and only if this passes, push them to the main repository.
So the loop is: main repo -> developer -> staging -> main.
There are many answers here which give you the details. Start here: Mercurial workflow for ~15 developers - Should we use named branches?
[EDIT] So you say you don't have the time to solve the major problems in your development process ... I'll let you guess how that sounds to anyone... ;-)
Anyway ... Use hg convert to get a Mercurial repo out of your Subversion tree. If you have a standard setup, that shouldn't take much of your time (it will just need a lot of time on your computer but it's automatic).
Clone that repo to get a work repo. The process works like this:
Develop in your second clone. Create feature branches for that.
If you need changes from someone, convert them into the first clone, then pull from that into your second clone (that way, you always have a "clean" copy from Subversion just in case you mess up).
Now merge the Subversion branch (default) and your feature branch. That should work much better than with Subversion.
When the merge is OK (all the tests run for you), create a patch from a diff between the two branches.
Apply the patch to a local checkout from Subversion. It should apply without problems. If it doesn't, you can clean your local checkout and repeat. No chance to lose work here.
Commit the changes in subversion, convert them back into repo #1 and pull into repo #2.
This sounds like a lot of work but within a week, you'll come up with a script or two to do most of the work.
When you notice that someone broke the build (the tests aren't running for you anymore), undo the merge (hg update -C) and continue to work on your working feature branch.
When your colleagues complain that someone broke the build, tell them that you don't have a problem. When people start to notice that your productivity is much better despite all the hoops you've got to jump through, mention that "it would be much simpler if we scrapped SVN".
The best thing to do is to work to improve the culture of your team, so that each developer feels enough of a commitment to the process that they'd be ashamed to check in without making sure it works properly, in whatever ways you've all agreed.
I would like to have different build targets for periodic builds and for those that are triggered by polling SCM.
More specifically: the idea is that nightly builds should call 'mvn verify', which includes integration tests, while a normal build calls 'mvn test', which just executes unit tests.
Any ideas how this can be achieved using Hudson?
Cheers
Chris
You could create two jobs - one scheduled and the other polled.
In the scheduled job you can specify a different Maven goal from the polled one.
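Concretely, the two-job setup might look like this (job names and schedules are made-up examples; the triggers are the standard Hudson/Jenkins build-trigger options):

```
Job "myapp-ci"        trigger: Poll SCM,          e.g. schedule "*/5 * * * *"
                      Maven goal: test

Job "myapp-nightly"   trigger: Build periodically, e.g. schedule "0 2 * * *"
                      Maven goal: verify
```

Both jobs point at the same SCM URL; only the trigger and the goal differ.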
The answer by Raghuram is straightforward and correct. But you can also have three jobs. The first two do the triggering and pass the Maven goal as a parameter into the third job. That sounds like a lot of clutter, and to a certain point it is. But it will help if you have a lot of configuration to do (especially if the configuration needs to be changed regularly), because it keeps the configuration correct for both jobs. Configuration does not only include the build steps but also the harvesting of all reports, post-build cleanup, notifications, triggering of downstream jobs, and so on. Another advantage is that you don't need to synchronize the two jobs so that they don't run in parallel (if that causes problems).
Don't get me wrong, my first impulse would be to go for two jobs, which has its own advantages. The history for the nightly build will contain the whole day (actually, everything since the last nightly build) and not only the time since the last build (which could be a triggered one). Integration tests usually need a more extensive setup or access to scarce resources; with two jobs you don't block those resources when you run the test goal. In addition, I expect that more test results need to be harvested, displayed, and tracked over time by Hudson. You also might want to run more metrics against your code whose results should be displayed by Hudson. The disadvantage is that you of course need to keep the build steps basically the same in both jobs all the time.
But in the end it is a case-by-case decision whether you go with two or three jobs.