I think everyone here would agree that in order to be considered a professional software house there are a number of fundamental things you must have in place.
There is no doubt that one of these things is a build server; the question is how far you need to go.
What are the minimum requirements for the build server? (Somewhere to just compile?)
What is the ultimate goal for your build server? (Scheduled, source control integration, auto deployment to test / live servers)
Where is a good place to start assuming you have nothing at the moment?
It would be great if we could list out a few simple tasks that an amateur developer could take on board in order to set them on the right track to a fully functional build server.
It would also be good to hear about people that feel they have a "complete" system setup that performs all the functionality they require and how they went about setting it all up from scratch.
You can start by looking into Cruise Control.
There's also CruiseControl.net if that's your poison.
Essentially though, you need the following ingredients:
A dedicated environment (a virtual machine or server; don't use a developer's machine unless it's just you, and even then run a VM if you can, since it's much easier to move it to a server when/if one becomes available in your organisation)
A source control system that supports labelled/tagged revisions (for example, Subversion+TortoiseSVN)
Build scripts. These can be batch files that start the devenv.exe or msbuild.exe applications with a command line, or you can use something like Ant or NAnt (a minimal sketch follows).
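For instance, a first build script can be as small as a wrapper around msbuild.exe. Sketched in Python here for readability (the solution name is hypothetical, and msbuild is assumed to be on the PATH); a two-line batch file does the same job:

    import subprocess
    import sys

    # Hypothetical solution name; adjust path and configuration to taste.
    SOLUTION = "MySolution.sln"

    def build():
        # msbuild exits non-zero when the build fails, so check=True
        # makes this wrapper fail loudly too.
        subprocess.run(["msbuild", SOLUTION, "/p:Configuration=Release"],
                       check=True)

    if __name__ == "__main__":
        try:
            build()
        except subprocess.CalledProcessError as exc:
            sys.exit(f"Build failed with exit code {exc.returncode}")

The point is only that the whole build is runnable from a single command, which is exactly what CruiseControl needs to call.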
In this scenario, CruiseControl acts as the Continuous Integration server and can make sure builds happen as you check in your code. This means you find out that the build is broken sooner than you would with nightly builds alone. You should probably also have nightly builds, though.
Hudson is a great CI server.
We run a build farm locally, but we started by just downloading hudson.war and running:

    java -jar hudson.war

It integrates with SCM and bug tracking systems; it is really awesome.
You'll need some disk space if you want to keep old builds.
Enjoy; it is the most straightforward CI solution I've come across so far.
HTH,
Hubert.
If you're using Cruise Control, the place to start is an Ant build.xml that does the job manually.
You need a version control system that can do labeled check-outs.
You need JUnit tests that run via Ant's <junit> task and generate HTML reports (via <junitreport>).
I'd say you'd have to start by implementing a build strategy so you can build your code in a structured way; I use NAnt.
For a basic build server, use one of the CI offerings out there that monitors your source control and triggers a build whenever a change is detected, e.g. CruiseControl.
Once you get the basic build together, add the running of your unit tests after a successful build.
The most successful system I've had in place used three different builds:
- one that fired on a check-in; all this did was build the code
- an on-demand one that would build the application, generate the installer, and then put the installer on a shared drive for the testers to pick up
- a daily build that fired at 10pm. This:
- ran some code generation to build DB and C# code from a UML model
- built the code
- created a new build verification test user on a test Oracle instance
- ran the application schema into the db
- fired off a bunch of unit tests
- cleaned up the db user (if the tests were successful)
- ran coverage analysis to build a report of the unit code coverage
Software we used for this: NAnt, CruiseControl.NET, a custom code generation system, a custom app to build an Oracle schema, and NCover for the coverage analysis.
Start by having a read of Martin Fowler's excellent paper on Continuous Integration.
We built such a system for a major project >2,000 kSLOC and it proved itself to be invaluable.
HTH
cheers,
Rob
Cruise, Maven, Hudson etc. are all great, but it's always worth having a stopgap solution.
You should have a batch file, shell script or simply written instructions that will allow you to run a build from any machine. We have had build servers unavailable in the past and the ability to switch quickly to another machine was invaluable!
The spec of the build machine isn't that important unless you have a monster project. We try to keep our build times down to 10 minutes (including unit tests), and we have a pretty big project.
Don't be tempted to create or write your own build system because "none of the tools out there are good enough". All modern build systems allow you to write plugins to do custom stuff.
I'm using Cruisecontrol.NET and an msbuild buildscript.
I can use the build script manually to get the latest version of the codebase and build it very easily from the command line. (This is very useful if you are working on an application that consists of multiple solutions.)
Next to that, my CruiseControl.NET build server uses this build script as well. It checks at a regular interval whether changes have been committed to source control.
If that happens, CC.NET performs the 'get-latest' task that I've defined in the build script, builds everything, executes unit tests, and performs static code analysis (FxCop).
My 'build server' is just an old workstation. It's a Pentium 4, 3 GHz with 1 GB of RAM, and it does its job perfectly.
One additional thing that I would find interesting is the ability to automatically deploy a new version, or build a setup.
I haven't done that yet, since I'm not sure whether it is a good idea, nor have I found a good strategy yet for doing so...
I mean: is deploying a new version of some components into production for a mission-critical application a good idea? I don't think so...
I think this is a good place to start:
http://confluence.public.thoughtworks.org/display/CC/Home (the home of CruiseControl)
At least, that's where I started looking when setting up my build server. :)
Roughly in order, from minimal/least sophisticated to more sophisticated:
Able to get a specific set of source onto any machine
Able to build that source (with no problems)
Able to build each night, or on some other defined schedule, with no user intervention
One (or more) dedicated build servers (not shared as QA or dev machines)
Able to do a build after each check-in/commit
Notify interested parties of the build status after a build
Provide build status at any time
Create installers as part of the build
Ability to deploy to live if the build is good
Run unit tests
Run tests on the product
Report the results of those tests
Static code analysis and reporting
...
And the list goes on and on
Don't be afraid to just start with batch files or shell scripts or other ad hoc means. People made perfectly good software before the CI craze; there were plenty of good processes before Hudson and CruiseControl. (I am not knocking those tools or others; I use Hudson among others.) But don't miss the point: these things are here to help you, not to become an overbearing process.
I couldn't give you all the details about how we set our build server up (I was only involved at the start), but:
We started with an in-house system, implemented in ASP.NET and a .NET Windows Service, using NAnt to do the actual builds. Actually, most of the workflow was implemented in NAnt (e.g. emailing people, copying stuff around, etc.).
We moved to JetBrains TeamCity (there's a free cut-down version available), which is still serving us well.
We use it for builds triggered by a commit: these just build the binaries and run the unit tests. From here, we can do a complete build, which does the MSI as well. From there, we have system test builds that run more in-depth tests, across an environment built with virtual machines (with a separate domain controller, SQL Server box, etc.). When the system tests pass, the build is made available to our QA department for manual testing and some regression tests that we've not automated yet.
In the Java space I've tested most of the available build environments. The issue with automatic builds is that you quite often end up spending a fair amount of time following them up. After we switched to the commercial Bamboo from Atlassian, we found that we have to spend a lot less time pampering the build box, which in our case turns out to be very good economy. Bamboo also supports clustering, so you can add inexpensive boxes as needs evolve.
Try to find something that fits in with your existing build practices; for example, an Ant-based build server is not going to be a good fit if you're using Maven!
Ideally, it should just be able to monitor your source control system, check out the code, build, run some tests and publish the results without you being aware of it, or at least not till it reports a failure. Personally, I'd suggest Hudson (https://hudson.dev.java.net/) as a good starting point, as it's easy to get installed and running, and it has a decent UI.
We start by writing batch scripts that will run on the developers' machines. Once we have all the processes automated, we move them to the build server.
On the tools side we are currently moving from Cruise Control to TFS.
Related
It seems that a build system best-practice is to have a single script that can build all source and package the releases. See Joel Test #2
How do you account for non-portable dependencies? For example, if you code for .NET 4, then you need .NET 4 installed on the box. The standard MS release of .NET 4 is not xcopy-deployable (unless I'm mistaken?). I can see a few avenues:
1. The dependencies are clearly stated in some resource file (wiki, txt, whatever). When you call the build script, the build fails if you don't have the dependency installed. This is an acceptable outcome.
2. The build script is responsible for setting up the environment. So if you require .NET 4 and it's not on the box, then it installs it for you.
3. A flavor of #2: instead of installing dependencies, the script spawns a pre-packaged image (virtual machine, Amazon EC2 AMI) that is set up with all dependencies.
4. ???
For implementing a build script you have to ask yourself how much work you want to (and can) spend on it. This leads to the question of how often you have to set up the build environment. I can see that #2 would be the perfect solution, but it would need a lot of work, since usually you have more than one non-portable dependency.
So we use #1. And it works quite well. The most important thing is that the build script starts with some sort of self-test. It looks for everything that is needed to build the whole software and gives an error if something is not found. And it gives a clear error message, so that any new guy knows what to do to get it running (a sketch of the idea follows). Of course, as with a lot of software, it is nearly never finished and gets extended as needs arise. The drawback that this test can take some seconds is insignificant when the whole build process takes more than a few minutes.
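A minimal sketch of such a self-test, in Python for illustration (the tool list and hints here are hypothetical; the same idea works in a batch file):

    import shutil
    import sys

    # Hypothetical tool list; extend it as the build grows.
    REQUIRED_TOOLS = {
        "msbuild": "Install Visual Studio or the .NET SDK and put msbuild on PATH.",
        "svn": "Install a Subversion command-line client.",
    }

    def self_test():
        missing = [f"  {tool}: not found. {hint}"
                   for tool, hint in REQUIRED_TOOLS.items()
                   if shutil.which(tool) is None]
        if missing:
            # A clear message, so any new guy knows what to do.
            sys.exit("Build self-test failed:\n" + "\n".join(missing))

    if __name__ == "__main__":
        self_test()
        print("Self-test passed; starting build...")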
A wiki (or anything else) describing the setup was not a good solution for us, since after three months nobody knew where it was, but the build script is used every day.
The build script itself is a collection of a lot of different things, chosen as needs arose. It starts with a batch file (we are using Windows) which invokes a lot of other things: other batch files, MSBuild, home-grown tools. Each step checks for its own dependencies, so that problems stay local and you can see three lines later why this particular thing is needed.
Number 2 states "Can you make a build in one step?" As described, this means that for a development team to be effective, the build process must be as simple as possible, to reduce errors in the build process and ensure consistency. This is especially important as a team gets larger. You want to make sure everyone is building the same thing. (What is done with that package should also be simple, but it is not as important, IMHO.) MSBuild is great at this; it provides the facilities to set up a build server that accesses the source control system independently, so the developers' actions can't corrupt the build environment. I highly recommend setting up a build server using TFS; many build issues will go away and you will have the one-click build Joel describes.
As for your points about what that package does for deployment: you have many options with MS, but the more "one click" you can make it, the better. I believe this is slightly different from Joel's #2. In his example he describes changing what software he will use for the install, not because one performs with fewer steps, but because one can be incorporated into a one-step build.
I'm currently using Jenkins/Hudson for continuous integration of a large, mostly C++ project. We have separate projects for trunk and every branch. Also, there are some related projects for the Java code, but the setup for those is fairly basic right now (we may do more later, though). The C++ projects do the following:
Builds everything with options for whether to reconfigure, do a clean build, or use a fresh checkout
Optionally builds and runs all tests
Optionally runs all tests using Valgrind's memcheck
Runs cppcheck
Generates doxygen documentation
Publishes reports: unit tests, valgrind, cppcheck, compiler warnings, SLOC, open tasks, and code coverage (using gcov, gcovr, and the cobertura plugin)
Deploys code nightly or on demand to a test environment and a package repository
Everything is configurable for automatic builds and optional for on-demand builds. Underneath, there's a bash script that controls much of this, which further depends on our build system, which uses automake and autoconf along with custom bash scripts.
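To give a feel for the shape of such a controlling script, here is a rough Python rendering with hypothetical flags (the original is a bash script driving automake/autoconf):

    import os
    import subprocess
    import sys

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Hypothetical options mirroring the choices described above.
        if "--clean" in sys.argv and os.path.exists("Makefile"):
            run("make", "distclean")
        if "--reconfigure" in sys.argv or not os.path.exists("Makefile"):
            run("./configure")        # generated by autoconf
        run("make", "-j4")            # build everything
        if "--with-tests" in sys.argv:
            run("make", "check")      # automake's standard test target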
We started using Hudson (at the time) because that's what the Java guys were using and we just wanted nightly builds. Since then, we've added a lot more and continue to add more. In some ways Hudson is great, but certainly isn't ideal.
I've looked at other solutions and the only one that looks like it could be a replacement is buildbot. Would buildbot be better for this situation? Is the investment worth it since we're already using Hudson? Why?
EDIT: Someone asked why I haven't found Hudson/Jenkins to be ideal. The short answer is that everything can be improved. I'm simply wondering if Jenkins is the best current solution for my use case or whether there is something better (buildbot?) that would be easier to maintain in the long run even as new requirements come up.
Both are open source projects, but you do not need to change buildbot code to "extend" it; it is actually quite easy to import your own packages in its configuration, in which you can subclass most of the features with your own additions. Examples: your own compilation or test code, some parsing of outputs/errors to be fed to the next steps, your own formatting of alert emails, etc. There are lots of possibilities.
Generally I would say that buildbot is the most "general purpose" automatic build tool. Jenkins, however, might be the best at running tests, especially for parsing and presenting results in nice ways (results, details, charts... some clicks away), things that buildbot does not do "out of the box". I'm actually thinking of using both to have sexier test result pages... :-)
Also, as a rule of thumb, it should not be difficult to create a new tool's config: if the specification of what to do (configs, builds, tests) is too hard to switch from one tool to another, it is a (bad) sign that not enough configuration scripts have been moved into the sources. Buildbot (or Jenkins) should only call simple commands. If it is simple to run the tests, then developers will do it as well, and this will improve the success rate; whereas if only the continuous integration system runs the tests, you will be running after it to fix the new code failures, and it will lose its non-regression value. Just my 0.02€ :-)
Hope it'll help.
The 'result integration' is also in Jenkins/Hudson, and you can relatively easily capture build products without having to 'copy them elsewhere'.
For our instance, the coverage reports, unit test metrics, and javadoc for the Java code are all integrated. For our C++ code, the plugins are a little lacking, but you can still get most of it.
We have run buildbot since before 0.7, and are now running 0.8. We are only now seeing any real reason to switch, as buildbot 0.8 forgot about Windows slaves for an extended period of time and the support was pretty poor.
There are many other solutions out there, besides Jenkins/Hudson/BuildBot:
TeamCity by Jetbrains
Bamboo by Atlassian
Go by Thoughtworks
Cruise Control
OpenMake Meister
The specifics about what you are doing are not so important, in fact, as long as the agents (aka nodes) that you are doing them on support those tasks.
The beauty of a CI server is noticing when the source changes, triggering a new build (and tests), publishing the artifacts, and publishing the test results.
When you compare CI tools like those mentioned above, consider features like the usability of the interface, how easy branching is (and features it might offer, like automatic merging), notifications (like XMPP/Jabber), or an information radiator (like hooking up a monitor to always show status). Product support is another thing to consider: Jenkins' support is only as good as whoever is responding to community questions at the time you have questions.
My personal favorite is Bamboo, but it comes with a license fee.
I'm a long-time Jenkins user in the middle of evaluating Buildbot and would like to offer a few items for folks considering using Buildbot for multi-module solutions:
*) Buildbot doesn't have any out-of-the-box concept of file artifacts related to each build. It's not in the UI, and it's not in any of the built-in "steps" modules as far as I can see:
http://docs.buildbot.net/current/manual/configuration/buildsteps.html
...and I see no third party plugin:
https://github.com/buildbot/buildbot/wiki/PluginList#steps
Buildbot does collect all the console output from a given build, but critically, you can't collect files related to it.
*) Given that artifacts are not supported, it's not easy to create "collector" projects that bring multiple modules into, say, a single installer. Jenkins has a great feature that lets you parameterize a build with builds from other modules (the parameter type is a run).
*) Establishing dependencies between modules is trickier in Buildbot. Say you have a library that three binaries depend on, and you want those binaries to rebuild each time the library changes. Jenkins has triggers built into the UI. If you want to do triggers in Buildbot, you have to script them using schedulers.Dependent (see the sketch after this list), and it causes a lot of item congestion in the Schedulers UI.
*) When you're working in Buildbot, it seems that pretty much all of the configuration is done in master.cfg in code. This is awesome and frustrating.
*) Buildbot forces you to create a worker in addition to a master server. This is annoying for beginners and systems for which a single build server is sufficient.
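For the dependency point above, a minimal master.cfg fragment using schedulers.Dependent might look like this (builder names are hypothetical; c is the usual BuildmasterConfig dictionary):

    from buildbot.plugins import schedulers, util

    # Build the library on every commit to master...
    library = schedulers.SingleBranchScheduler(
        name="library",
        change_filter=util.ChangeFilter(branch="master"),
        builderNames=["build-library"],
    )

    # ...and rebuild the three dependent binaries whenever it succeeds.
    binaries = schedulers.Dependent(
        name="binaries-after-library",
        upstream=library,
        builderNames=["build-app1", "build-app2", "build-app3"],
    )

    c["schedulers"] = [library, binaries]

Every scheduler pair like this becomes another entry in the Schedulers UI, which is the congestion mentioned above.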
My impression after two days of Buildbot evaluation is that we'll stick with Jenkins, primarily due to it having artifacts. Buildbot is a tool we'd only use if we had more extensive customization needs, and the time to do it.
On the subject of buildbot and artifacts (I don't have enough user score to make a comment): you can get artifacts from the buildbot 2.x series pretty easily with the built-in file/directory upload actions. However, you rarely want to just move files. Typically you make a triggered build step that does deployment directly off the worker for best results, e.g. pushing to cloud storage, containers, third parties (Steam uploads), etc.
This way you can get metrics on the uploads and conditionally control them better (or even mix and match artifacts across worker machines).
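For reference, the built-in upload step mentioned there looks roughly like this in a buildbot 2.x master.cfg (the artifact path and destination are hypothetical):

    from buildbot.plugins import steps, util

    factory = util.BuildFactory()
    # ... compile steps elided ...
    # Copy a build product from the worker to the master's filesystem.
    factory.addStep(steps.FileUpload(
        workersrc="dist/myapp-installer.exe",
        masterdest=util.Interpolate(
            "/srv/artifacts/%(prop:buildnumber)s/myapp-installer.exe"),
    ))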
I would love to hear ideas on how to best move code from development server to production server.
A list of gotchas and "don't do this" items would be helpful.
Also, any tools to help automate steps such as:
Make backups of existing code, given these list of files
Record the Deployment of these files from dev to production
Allow easier rollback if deployment or app fails in any way...
I have never worked at a company that had a deployment process, other than a very manual one: FTP the files from dev to production.
What have you done in your companies, departments, etc?
Thank you...
Yes, I am a ColdFusion programmer, but files are files, and this should be a language-agnostic question.
OK, I'll bite. There's the technology aspect of this problem, which other answers have already covered. But the real issue is a process problem. The real focus should be on ensuring a meaningful software development life cycle (SDLC): planning, development, validation, and deployment. I'll cover each in turn. What you want is a repeatable activity at each phase.
Planning
Articulate and record what's to be delivered. Often tickets or user stories are enough. Sometimes you do more, like a written requirements document that a customer signs off on, which is translated into various artifacts such as written use cases. Ultimately what you want is something recorded in an electronic system where you can associate code changes with it. Which leads me to...
Development
Remember that electronic system? Good. Now when you make changes to code (you're committing to source control, right?) you associate those changes with something in this electronic system, typically tickets. I like Trac, but have also heard good things about Atlassian's suite. This gives you traceability, so you can assert what's been done and how. Then you can use this system and source control to create a build (all the bits needed for whatever's changed) and tag that build in source control; that's your list of what's changed. Even better, have a build contain everything, so that it's a standalone entity that can easily be deployed on its own. The build is then delivered for...
Validation
Perhaps the most important step, and one that many shops ignore at their own peril. Defects found in production are exponentially more expensive to fix than those discovered earlier in the process. And validation is often the only step where this discovery happens, so make sure your shop does it.
This should not be done by the programmer! That's like the fox watching the hen house. And whoever is doing it should be following some sort of plan. We use TestLink. This means each build is validated the same way, so you can identify regression bugs. And this build should be deployed in the same way as you would deploy into production.
If all goes well (we usually need a minimum of three builds), the build is validated. And this goes to...
Deployment
This should be a non-event, because you're taking a validated build and following the same steps you followed in testing. It could first hit a staging server with an automated copying process, but the point is that it shouldn't be an issue by now, because you validated with the same process.
Conclusion
In terms of knowing what's where, what you really want is a logical way to group changes together. This is where the idea of a build comes in. It's really the unit that should segue between steps in the SDLC. If you already have that, then the ability to understand the state of a given system becomes trivial.
Check out Ant or Maven. These are build and deployment tools used in the Java world which can help you copy/FTP files, make backups, and even check out code from SVN.
You can automate your deployment steps using these tools. For example, Ant will allow you to declare a set of tasks as part of your deployment. So you could, for example (a rough sketch follows the list):
Check out a revision using SVNAnt or similar to a directory
Copy (and perhaps zip first) these files to a backup directory
FTP all the files to your web server(s)
Create a report to email to the team illustrating the deployment
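If it helps to see the shape of that flow before learning Ant's task syntax, here is a rough Python sketch of the same steps (host, credentials, and paths are all hypothetical, and the report email is left out):

    import shutil
    from ftplib import FTP
    from pathlib import Path

    BUILD_DIR = Path("build/site")      # hypothetical checked-out build output

    # Zip up the current build as a backup before shipping it.
    shutil.make_archive("backups/site-backup", "zip", BUILD_DIR)

    # FTP every file to the web server (assumes the remote directory
    # layout already exists).
    ftp = FTP("ftp.example.com")        # hypothetical host
    ftp.login("deploy", "secret")       # hypothetical credentials
    for path in BUILD_DIR.rglob("*"):
        if path.is_file():
            remote = path.relative_to(BUILD_DIR).as_posix()
            with open(path, "rb") as f:
                ftp.storbinary(f"STOR {remote}", f)
    ftp.quit()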
Really, you can do almost anything you're willing to put time into using Ant. Maven is a little more structured (and newer), and you can see a discussion of the differences here.
Hope that helps!
In a nutshell...
You should start with some source control solution - probably Subversion or Git. Once that's in place you can create a script that generates a clean build of your source code and deploys it to your production server(s).
You could do this with a simple batch script or use something like Ant for more control. Here is a simple example of a batch file using Subversion:
svn copy svn://path/to/your/project/trunk -r HEAD svn://path/to/your/project/tags/%version%
svn checkout svn://path/to/your/project/trunk -r HEAD //path/to/target/directory
Ant makes it easy to do things like automatically run unit tests and sync directories. For example:
    <sync todir="//path/to/target/directory" includeEmptyDirs="true" overwrite="true">
        <fileset dir="${basedir}">
            <exclude name="**/.svn/**"/>
            <exclude name="**/test/"/>
        </fileset>
    </sync>
This is really just a starting point. A next step might be a continuous integration solution like Hudson. I would also recommend reading "Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications".
One ColdFusion specific gotcha is to make sure you clear the Application scope when required (to update any singleton components). A common approach here is to use a URL parameter that causes onRequestStart() to call onApplicationStart(). You may also have to clear the trusted cache.
We use a system called AnthillPro: http://www.anthillpro.com
It's commercial software, but it allows us to completely automate our deployment process across multiple servers and operating systems (we currently use it for both ColdFusion and Java, but it can be used for most languages). It has a ton of third-party integrations:
http://www.anthillpro.com/html/products/anthillpro/tool-integrations.html
In one of my Django projects I have a suite of unit tests based on the TransactionTestCase class (which takes much longer than TestCase). It is impossible to run the tests after each change in the code because running them all takes more than half an hour. We looked some time ago for an easy continuous integration tool that would (at least) let us run the tests on a test server and send emails with errors to the team members (we have a code repository, of course, and we don't need auto-deployment at the moment). Do you have working solutions or ideas for how to accomplish this?
We wrote a 'super extra simple CI server' which does nothing more than run the tests and send email reports (it is integrated with our code repository). But since we've had some problems with our not-ideal simple tool recently, I'm wondering whether you have successfully handled similar scenarios in your working environments.
I'm looking for something lightweight, easy to install and use.
Disclaimer: I don't know Django. But I do know that I use Hudson as my continuous integration tool for a number of languages and platforms. I found it easy to install and configure on both Windows and Linux (set & forget) and was impressed with the number of plugins available.
Basically, if what you want to do can be automated by a script file, then you can use Hudson. It really is worth checking out.
It took me only a few minutes to set it up so that I get an email if, and only if, something goes wrong, although you might want to do something else (for which there probably exists a plugin). Hudson also plays well with other tools like Bugzilla, all major version control tools, etc.
Have you considered having two kinds of tests, basic and advanced, and adding an extra Django management command that runs only the basic, fast tests? That way you can do basic testing on small changes and run the full test suite only when you are about to commit/push changes.
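A minimal sketch of such a command (the app and module names are hypothetical) could simply delegate to Django's built-in test command:

    # myapp/management/commands/quicktest.py  (hypothetical location)
    from django.core.management import call_command
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Run only the fast, plain-TestCase tests."

        def handle(self, *args, **options):
            # Assumes fast tests live in myapp.tests.basic and the slow
            # TransactionTestCase-based ones in myapp.tests.advanced.
            call_command("test", "myapp.tests.basic")

Day to day you'd run python manage.py quicktest, and let the CI server run the full python manage.py test.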
I've spent 4 years developing C++ using Visual Studio 2008 for a commercial company; it's now time for me to upgrade my development process.
Here's the problem: I don't have one-button build automation. I also don't have a CI server that automatically builds when a commit happens and emails me whether the build is broken or not. Worse, we don't even have a single unit test!
Can someone please point to me how I can get started?
I have looked at many many tools and I think I might go with:
Visual Build (for build automation) (Note: I also considered Final Builder)
Cruise (for CI server)
I am also now just starting to practice TDD, so I will want to automate my unit tests as well. I chose Google Test/Mock for their extensive documentation. (Can't go wrong with the Google brand, can I? =p)
Price is not the issue, I want what's best and easiest to get started.
Can people that use real CI/automation tool for unmanaged MSVC++ tell me their tools and how I can go about starting?
Our source control is Subversion.
Last point: I'm also considering a project management/tracking tool that integrates right into Visual Studio, and I'm thinking about using OnTime. VSTS costs too much. I tried FogBugz, but I think it's too simple. Any others?
I would take some time to seriously consider TeamCity. We used CruiseControl.NET for a while and TeamCity completely demolishes it. Plus it has built-in plugins for Boost and CppUnit, so your unit testing will come for free.
Best of all, the tool is free for < 20 users and gives you three build agents.
I just finished setting this up for our C++ product at work, and it was fairly simple. We did it with MSBuild, basically using the MSBuild task to compile the solution. Other targets can be used to copy files, run unit tests, etc.
The last time I worked on an unmanaged MSVC++ project (which was moderately sized I might add), we used FinalBuilder to do the automated build & versioning (and even executing PCLint and other profiling tools as well).
Having said that, if you're willing to invest the time, MSBuild (or nAnt perhaps?) can do everything you need - even for unmanaged solutions.
Which brings us to the trade-off: Tools like Visual Build Pro and Final Builder get you up and running quickly. If you want something which offers a greater range of customization, you'll probably be spending a decent amount of time learning and understanding it - i.e. MSBuild, CIFactory, nAnt etc are no cake walk.
So if price isn't an issue - is time an issue? If time is at a premium, I'd investigate the GUI driven tools, they'll get you to where you want to go quickly. If you know you're going to need to extend on the simple one button build + unit tests + deploy scenario (which happens a lot!) then decide if you can invest the time into the more complex tools like MSBuild?
We use a combination of Boost.Build, NAnt, CppUnit, and either CruiseControl.NET or Hudson (we've used them both for various projects but are starting to prefer Hudson).
They are all good tools, though we're considering replacing CppUnit; the Google unit test framework is pretty good from what I've seen.
If you're happy running on just Windows you can lose Boost.Build and just call out to Visual Studio from NAnt.
As for issue tracking/project management, we settled on Vision Project after a long investigation. It's not well known (yet), but we've found it a very good fit in our environment. FogBugz is great, with a nice, clear interface, but we came to the same conclusion you did: way too simple for our needs.
Although the .NET world is spoilt for these kinds of tools, continuous integration is still pretty easy to set up for C++! I wouldn't think of starting a non-trivial project without putting these systems in place.
We use Subversion + CruiseControl + WiX to accomplish CI automated builds that output one-click installers. This combo has worked very well for us. We've created our own site for administering SVN user groups and permissions, and added the web interface of CC to it. We have a SQL Server instance storing all the collected stats from SVN and CC, and we use them for custom reports available on our site. We are looking to add other tools to the mix for checking various attributes of the code stored in SVN.
At my company we use CruiseControl (http://cruisecontrol.sourceforge.net/), the Java version, not .NET, to build our wxWidgets application on Windows and OS X. It's been working great for us so far.