Go cross-platform tests - unit-testing

Problem description
I've developed a CLI in Go. My dev environment is Linux. I wrote unit-tests and only produce releases of executable files when tests pass. When I test or use my tool in a Linux environment, everything works fine.
My CI/CD pipeline is built around goreleaser to produce multi-platform executables. Since my app doesn't use exotic cross-platform functionality, I was quite confident that the Windows executable would work as expected. But it didn't.
Long story short, always normalize paths with filepath.ToSlash(). But this is not my question.
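To illustrate the class of bug for readers who haven't hit it: a test that compares a filepath.Join result against a hard-coded slash-separated string passes on Linux but fails on Windows, where the separator is a backslash. A minimal sketch (the relConfigPath helper and the paths are made up for illustration):

    package paths

    import (
        "path/filepath"
        "testing"
    )

    // relConfigPath builds a path to a config file below a base directory.
    func relConfigPath(base string) string {
        return filepath.Join(base, "conf", "app.yaml")
    }

    func TestRelConfigPath(t *testing.T) {
        got := relConfigPath("home")

        // Fragile: comparing against a hard-coded slash-separated string
        // passes on Linux but fails on Windows, where filepath.Join uses
        // backslashes:
        //   if got != "home/conf/app.yaml" { ... }

        // Portable: normalize the separators before comparing.
        if filepath.ToSlash(got) != "home/conf/app.yaml" {
            t.Fatalf("unexpected path: %q", got)
        }
    }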
Question
Hence my question is: "since behavior might change on different platforms because of such little mistakes, is it possible to run go test for a list of OS/architecture targets?" I can't imagine rebooting into Windows to test every commit manually, and I don't think discipline is the answer. If it were, we wouldn't test things at all.
Search attempts
A quick search on Google and Stack Overflow for "golang cross-platform tests" didn't return any useful results. Am I missing something, or is my approach to this problem wrong?
First edit
Most comments pointed out that the only way to test the behavior of an executable on a given platform is... to test it on this platform (in a multi-stage CI/CD for example). This is so obvious that there might not be a way to achieve it otherwise, I know.
But triggering a parallel CI/CD job on every platform for every commit (of partially untested code) doesn't sound satisfying to me. It IS the only way to know for sure that the code behaves as expected on every targeted platform, but I'm wondering if anyone has stumbled on this issue and found a pre-CI/CD solution to this problem.

Though it might be the only way to get conclusive test results, it implies triggering CI/CD with parallel tests on each platform. I was looking for a solution on the developer machine, before committing untested code.
You can install a local CI/CD tool which would, on (local) commit, trigger those tests.
A local GitLab, for instance, can run tests on multiple platforms simultaneously (since GitLab 11.5).
But that implies at least a Docker image in order to test on Windows from your Linux dev environment.
With Go alone, however, as mentioned in "Design and unit-test cross-platform application":
It's not possible to run go test for a target system that's different from the current system.

If you are trying to test a different CPU architecture, you should be able to do it with QEMU. This blog post explains how.
If you are trying to test a different OS, you can probably do everything you need to do in Docker containers. You could use a tool like VSCode’s remote development toolkit to easily dev/build/test a project in a specific container, or you could write a custom Makefile that calls the appropriate Docker commands when you run make test (allowing you to run tests in multiple OSes with one command).
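A possible pre-CI/CD compromise on the developer machine is to at least cross-compile the test binaries for each target before committing: go test -c with GOOS/GOARCH set builds the test executable without running it, which catches compile-level portability problems locally. The binaries still have to be executed on the real platform (VM, Windows container, or CI job) to catch runtime differences such as path handling. A rough sketch of such a helper; the target list and output names are placeholders:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder target list; adjust to whatever your releases cover.
        targets := []struct{ goos, goarch string }{
            {"linux", "amd64"},
            {"windows", "amd64"},
            {"darwin", "arm64"},
        }
        for _, tgt := range targets {
            out := fmt.Sprintf("cli_%s_%s.test", tgt.goos, tgt.goarch)
            if tgt.goos == "windows" {
                out += ".exe"
            }
            // "go test -c" compiles the test binary for the package in the
            // current directory without running it.
            cmd := exec.Command("go", "test", "-c", "-o", out, ".")
            cmd.Env = append(os.Environ(), "GOOS="+tgt.goos, "GOARCH="+tgt.goarch)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("%s/%s: %v", tgt.goos, tgt.goarch, err)
            }
            fmt.Println("built", out)
        }
    }

Running the produced test binaries on each target OS (for example inside the containers mentioned above) then gives you the actual cross-platform test results.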

Related

Implementing single script build - non-portable dependencies

It seems that a build system best-practice is to have a single script that can build all source and package the releases. See Joel Test #2
How do you account for non-portable dependencies? For example, if you code for .net 4, then you need .net 4 installed on the box. Standard MS release .net 4 is not xcopy deployable (unless I'm mistaken?). I can see a few avenues:
1. The dependencies are clearly stated in some resource file (wiki, txt, whatever). When you call the build script, the build fails if you don't have the dependency installed. This is an acceptable outcome.
2. The build script is responsible for setting up the environment. So if you require .net 4 and it's not on the box, then it installs it for you.
3. A flavor of #2: instead of installing dependencies, the script spawns a pre-packaged image (virtual machine, Amazon EC2 AMI) that is set up with all dependencies.
4. ???
For implementing a build script you have to ask yourself how much work you want to (or can) spend on it. This leads to the question of how often you have to set up the build environment. I can see that #2 would be the perfect solution, but it would need a lot of work, since usually you have more than one non-portable dependency.
So we use #1, and it works quite well. The most important thing is that the build script starts with some sort of self-test. It looks for everything which is needed to build the whole software and gives an error if something is not found. And it gives a clear error message, so that any new guy knows what to do to get it running. Of course, as with a lot of software, it is nearly never finished and gets extended as needs arise. The drawback that this test can take some seconds is insignificant when the whole build process takes more than a few minutes.
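The answer above describes a Windows batch setup; purely as an illustration of the self-test idea, here is what such a check could look like as a small Go program (the tool names are placeholders for whatever your build actually needs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder list of tools the build expects to find on PATH.
        required := []string{"git", "msbuild", "docker"}

        missing := 0
        for _, tool := range required {
            if _, err := exec.LookPath(tool); err != nil {
                fmt.Fprintf(os.Stderr, "missing build dependency: %s - install it before running the build\n", tool)
                missing++
            }
        }
        if missing > 0 {
            os.Exit(1)
        }
        fmt.Println("all build dependencies found")
    }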
A wiki (or anything else) documenting the setup was not a good solution for us, since after three months nobody knew where it was, but the build script is used every day.
The build script itself is a set of a lot of different things, which were chosen as needed. It starts with a batch file (we are using Windows) which invokes a lot of other things: other batch files, MSBuild, home-grown tools. Each step checks for its own dependencies, to keep the problem local, so you can see three lines later why this particular thing is needed.
Number 2 states "Can you make a build in one step?" As described, this means that for a development team to be effective, the build process must be as simple as possible, to reduce errors in the build process and ensure consistency. This is especially important as a team gets larger. You want to make sure everyone is building the same thing. (What is done with that package should also be simple, but it is not as important IMHO.) MSBuild is great at this; it provides the facilities to set up a build server that accesses the source control system independently, so the developers' actions can't corrupt the build environment. I highly recommend setting up a build server using TFS -- many build issues will go away and you will have the 1-click build Joel describes.
As for your points about what that package does for deployment -- you have many options with MS, but the more "one click" you can make it, the better. I believe this is slightly different from Joel's #2. In his example he describes changing what software he will use for the install, not because one performs with fewer steps, but because one can be incorporated into a one-step build.

buildbot vs hudson/jenkins for C++ continuous integration

I'm currently using Jenkins/Hudson for continuous integration of a large, mostly C++ project. We have separate projects for trunk and every branch. Also, there are some related projects for the Java code, but the setup for those is fairly basic right now (we may do more later though). The C++ projects do the following:
Builds everything with options for whether to reconfigure, do a clean build, or use a fresh checkout
Optionally builds and runs all tests
Optionally runs all tests using Valgrind's memcheck
Runs cppcheck
Generates doxygen documentation
Publishes reports: unit tests, valgrind, cppcheck, compiler warnings, SLOC, open tasks, and code coverage (using gcov, gcovr, and the cobertura plugin)
Deploys code nightly or on demand to a test environment and a package repository
Everything is configurable for automatic builds and optional for on-demand builds. Underneath, there's a bash script that controls much of this, which further depends on our build system, which uses automake and autoconf along with custom bash scripts.
We started using Hudson (at the time) because that's what the Java guys were using and we just wanted nightly builds. Since then, we've added a lot more and continue to add more. In some ways Hudson is great, but certainly isn't ideal.
I've looked at other solutions and the only one that looks like it could be a replacement is buildbot. Would buildbot be better for this situation? Is the investment worth it since we're already using Hudson? Why?
EDIT: Someone asked why I haven't found Hudson/Jenkins to be ideal. The short answer is that everything can be improved. I'm simply wondering if Jenkins is the best current solution for my use case or whether there is something better (buildbot?) that would be easier to maintain in the long run even as new requirements come up.
Both are open source projects, but you do not need to change buildbot code to "extend" it; it is actually quite easy to import your own packages in its configuration, in which you can sub-class most of the features with your own additions. Examples: your own compilation or test code, some parsing of outputs/errors to be given to the next steps, your own formatting of alert emails, etc. There are lots of possibilities.
Generally I would say that buildbot is the most "general purpose" automatic build tool. Jenkins, however, might be the best for running tests, especially for parsing and presenting results in nice ways (results, details, charts... a few clicks away), things that buildbot does not do "out-of-the-box". I'm actually thinking of using both to have sexier test result pages. :-)
Also, as a rule of thumb, it should not be difficult to create a new tool's config: if the specification of what to do (configs, builds, tests) is too hard to switch from one tool to another, it is a (bad) sign that not enough of the configuration scripts have been moved into the sources. Buildbot (or Jenkins) should only call simple commands. If it is simple to run tests, then developers will do it as well, and this will improve the success rate; whereas if only the continuous integration system runs the tests, you will be running after it to fix the new code failures, and it will lose its non-regression value. Just my 0.02€ :-)
Hope it'll help.
The 'result integration' is also in jenkins/hudson, and you can relatively easily capture build products without having to 'copy them elsewhere'.
For our instance, the coverage reports, unit test metrics, and javadoc for the Java code are all integrated. For our C++ code, the plugins are a little lacking, but you can still get most of it.
We have run buildbot since before 0.7, and are now running 0.8; we are only now seeing any real reason to switch, as buildbot 0.8 neglected Windows slaves for an extended period of time and the support was pretty poor.
There are many other solutions out there, besides Jenkins/Hudson/BuildBot:
TeamCity by Jetbrains
Bamboo by Atlassian
Go by Thoughtworks
Cruise Control
OpenMake Meister
The specifics about what you are doing are not so important, in fact, as long as the agents (aka nodes) that you are doing them on support those tasks.
The beauty of a CI server is noticing when the build changes to trigger a new build (and test), publish the artifacts, and publish test results.
When you compare CI tools like those we mentioned, consider features like the usability of the interface, how easy branching is (and features it might offer like automatic merging), notifications (like XMPP/Jabber), or an information radiator (like hooking up a monitor to always show status). Product support is another thing to consider - Jenkins' support is only as good as whoever is responding to community questions at the time you have questions.
My personal favorite is Bamboo, but it comes with a license fee.
I'm a long-time Jenkins user in the middle of evaluating Buildbot and would like to offer a few items for folks considering using Buildbot for multi-module solutions:
*) Buildbot doesn't have any out-of-the-box concept of file artifacts related to each build. It's not in the UI and it's not in any of the builtin "steps" modules as far as I can see:
http://docs.buildbot.net/current/manual/configuration/buildsteps.html
...and I see no third party plugin:
https://github.com/buildbot/buildbot/wiki/PluginList#steps
Buildbot does collect all the console output from a given build, but critically, you can't collect files related to it.
*) Given that artifacts are not supported, it's not easy to create "collector" projects that bring multiple modules into say, a single installer. Jenkins has a great feature that lets you parameterize a build with builds from other modules (the parameter type is a run).
*) Establishing dependencies between modules is trickier in Buildbot. Say you have a library that three binaries depend on, and you want those binaries to rebuild each time the library changes. Jenkins has triggers built into the UI. If you want to do triggers in Buildbot you have to script them using schedulers.Dependent, and it causes a lot of item congestion in the Schedulers UI.
*) When you're working in Buildbot, it seems that pretty much all of the configuration is done in master.cfg in code. This is awesome and frustrating.
*) Buildbot forces you to create a worker in addition to a master server. This is annoying for beginners and systems for which a single build server is sufficient.
My impression after two days of Buildbot evaluation is that we'll stick with Jenkins, primarily due to it having artifacts. Buildbot is a tool we'd only use if we had more extensive customization needs, and the time to do it.
On the subject of buildbot and artifacts -- I don't have enough reputation to comment -- you can get artifacts from the buildbot 2.x series pretty easily with the built-in file/directory upload actions. However, you rarely want to just move files. Typically you make a triggered buildstep that does deployment directly off the worker for best results, e.g. pushing to cloud storage, containers, third parties (Steam uploads), etc.
This way you can get metrics on the uploads and conditionally control them better (or even mix and match artifacts across worker machines).

Why should one use a build system over that which is included as part of an IDE?

I've heard more than one person say that if your build process is clicking the build button, then your build process is broken. Frequently this is accompanied by advice to use things like make, CMake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?
EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.
EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about Continuous Integration, yes, I understand completely that when you have many developers on a project, having CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. They are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
Aside from continuous integration needs which everyone else has already addressed, you may also simply want to automate some other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, or running your unit tests, or resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.
One reason for using a build system that I'm surprised nobody else has mentioned is flexibility. In the past, I also used my IDE's built-in build system to compile my code. I ran into a big problem, however, when the IDE I was using was discontinued. My ability to compile my code was tied to my IDE, so I was forced to re-do my entire build system. The second time around, though, I didn't make the same mistake. I implemented my build system via makefiles so that I could switch compilers and IDEs at will without needing to re-implement the build system yet again.
I encountered a similar problem at work. We had an in-house utility that was built as a Visual Studio project. It's a fairly simple utility and hasn't needed updating for years, but we recently found a rare bug that needed fixing. To our dismay, we found out that the utility was built using a version of Visual Studio that was 5-6 versions older than what we currently have. The new VS wouldn't read the old-version project file correctly, and we had to re-create the project from scratch. Even though we were still using the same IDE, version differences broke our build system.
When you use a separate build system, you are completely in control of it. Changing IDEs or versions of IDEs won't break anything. If your build system is based on an open-source tool like make, you also don't have to worry about your build tools being discontinued or abandoned because you can always re-build them from source (plus fix bugs) if needed. Relying on your IDE's build system introduces a single point of failure (especially on platforms like Visual Studio that also integrate the compiler), and in my mind that's been enough of a reason for me to separate my build system and IDE.
On a more philosophical level, I'm a firm believer that it's not a good thing to automate away something that you don't understand. It's good to use automation to make yourself more productive, but only if you have a firm understanding of what's going on under the hood (so that you're not stuck when the automation breaks, if for no other reason). I used my IDE's built-in build system when I first started programming because it was easy and automatic. I later started to become more aware that I didn't really understand what was happening when I clicked the "compile" button. I did a little reading and started to put together a simple build script from scratch, comparing my output to that of the IDE's build system. After a while I realized that I now had the power to do all sorts of things that were difficult or impossible through the IDE. Customizing the compiler's command-line options beyond what the IDE provided, I was able to produce a smaller, slightly faster output. More importantly, I became a better programmer by having real knowledge of the entire development process from writing code all the way down through the generation of machine language. Understanding and controlling the entire end-to-end process allows me to optimize and customize all of it to the needs of whatever project I'm currently working on.
If you have a hands-off, continuous integration build process, it's going to be driven by an Ant or make-style script. Your CI process will check the code out of version control onto a separate build machine when changes are detected, then compile, test, package, deploy, and create a summary report.
Let's say you have 5 people working on the same set of code. Each of those 5 people is making updates to the same set of files. Now you may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's changes and try. This is easy every once in a while, but it quickly becomes tiresome to do over and over again.
With a build server that does it automatically, it checks whether the code compiles for everyone all the time. Everyone always knows if something is wrong with the build, and what the problem is, and no one has to do any work to figure it out. Small things add up: it may take a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially if you have multiple people doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.
Here's another cool thing too. Our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. It reloads snapshots of all the databases it needs to apply patches to and runs them to make sure that they all work and run in the order they are supposed to. The build server is also smart enough to run all the unit tests/automation tests and return the results. Making sure it can compile is fine, but with an automation server, it can handle many, many steps automatically that would take a person maybe an hour to do.
Taking this a step further, if you have an automated deployment process along with the build server, the deployment is automatic. Anyone who can press a button to run the process and deploy can move code to qa or production. This means that a programmer doesn't have to spend time doing it manually, which is error prone. When we didn't have the process, it was always a crap shoot as to whether or not everything would be installed correctly, and generally it was a network admin or a programmer who had to do it, because they had to know how to configure IIS and move the files. Now even our most junior qa person can refresh the server, because all they need to know is what button to push.
The IDE build systems I've used are all usable from things like automated build / CI tools, so there is no need to have a separate build script as such.
However on top of that build system you need to automate testing, versioning, source control tagging, and deployment (and anything else you need to release your product).
So you create scripts that extend your IDE build and do the extras.
One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate with changes made by other developers (i.e. merging).
If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. It may be using a text-based format like XML, but XML is notoriously hard to handle with standard diff/merge tools. Just the fact that people are using a GUI to make edits makes it more likely that you end up with unnecessary changes in the project files.
With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.

What is the purpose of a dedicated "Build Server"? [closed]

I haven't worked for very large organizations and I've never worked for a company that had a "Build Server".
What is their purpose?
Why aren't the developers building the project on their local machines, or are they?
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
The only place I see a build server being useful is for continuous integration, with the build server constantly building what is committed to the repository. Is it that I have just not worked on projects large enough?
Someone, please enlighten me: What is the purpose of a build server?
The reason given is actually a huge benefit. Builds that go to QA should only ever come from a system that builds only from the repository. This way build packages are reproducible and traceable. Developers manually building code for anything except their own testing is dangerous. Too much risk of stuff not getting checked in, being out of date with other people's changes, etc. etc.
Joel Spolsky on this matter.
Build servers are important for several reasons.
They isolate the environment. The local Code Monkey developer says "It compiles on my machine" when it won't compile on yours. This can mean out-of-sync check-ins or it could mean a dependent library is missing. Jar hell isn't nearly as bad as .dll hell; either way, using a build server is cheap insurance that your builds won't mysteriously fail or package the wrong libraries by mistake.
They focus the tasks associated with builds. This includes updating the build tag, creating any distribution packaging, running automated tests, creating and distributing build reports. Automation is the key.
They coordinate (distributed) development. The standard case is where multiple developers are working on the same code base. The version control system is the heart of this sort of distributed development, but depending on the tool, the developers may not interact with each other's code much. Instead of forcing developers to risk bad builds or worry about merging code overly aggressively, design the build process so that the automated build can see the appropriate code and processes the build artifacts in a predictable way. That way, when a developer commits something with a problem, like not checking in a new file dependency, they can be notified quickly. Doing this in a staging area lets you flag the code that has built so that developers don't pull code that would break their local build. PVCS did this quite well using the idea of promotion groups. ClearCase could do it too using labels, but would require more process administration than a lot of shops care to provide.
What is their purpose?
Take load off developer machines, and provide a stable, reproducible environment for builds.
Why aren't the developers building the project on their local machines, or are they?
Because with complex software, amazingly many things can go wrong when just "compiling through". Problems I have actually encountered:
Incomplete dependency checks of different kinds, resulting in binaries not being updated.
Publish commands failing silently, with the error message in the log ignored.
Builds including local sources not yet committed to source control
(fortunately, no "damn customers" message boxes yet..).
When trying to avoid the above problem by building from another folder, some files picked up from the wrong folder.
The target folder where binaries are aggregated containing additional stale developer files that should not be included in the release.
We've got an amazing stability increase since all public releases start with a get from source control onto an empty folder. Before, there were lots of "funny problems" that "went away when Joe gave me a new DLL".
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
What's "reasonable"? If I run a batch build on my local machine, there are many things I can't do. Rather than pay developers for builds to complete, pay IT to buy a real build machine already.
Is it that I have just not worked on projects large enough?
Size is certainly one factor, but not the only one.
A build server is a distinct concept to a Continuous Integration server. The CI server exists to build your projects when changes are made. By contrast a Build server exists to build the project (typically a release, against a tagged revision) on a clean environment. It ensures that no developer hacks, tweaks, unapproved config/artifact versions or uncommitted code makes it into the released code.
The build server is used to build everyone's code when it is checked in. Your code may compile locally, but you most likely won't have all the changes made by everyone else all the time.
To add on what has already been said :
An ex-colleague worked on the Microsoft Office team and told me a complete build sometimes took 9 hours. That would suck to do it on YOUR machine, wouldn't it?
It's necessary to have a "clean" environment free of artifacts of previous versions (and configuration changes) in order to ensure that builds and tests work and don't depend on the artifacts. An effective way to isolate is to create a separate build server.
I agree with the answers so far in regards to stability, traceability, and reproducibility. (Lots of 'ity's, right?) Having ONLY ever worked for large companies (health care, finance) with MANY build servers, I would add that it's also about security. Ever seen the movie Office Space? If a disgruntled developer builds a banking application on his local machine and no one else looks at it or tests it... BOOM. Superman III.
These machines are used for several reasons, all trying to help you provide a superior product.
One use is to simulate a typical end user configuration. The product might work on your computer, with all your development tools and libraries set up, but the end user most likely won't have the same configuration as you. For that matter, other developers won't have the exact same setup as you either. If you have a hardcoded path somewhere in your code, it will probably work on your machine, but when Dev El O'per tries to build the same code, it won't work.
Also, they can be used to monitor who broke the product last, with what update, and where the product regressed. Whenever new code is checked in, the build server builds it, and if it fails, it's clear that something is wrong and the user who committed last is at fault.
For consistent quality and to get the build 'off your machine' to spot environment errors and so that any files you forget to check in to source control also show up as build errors.
I also use it to create installers as these take a lot of time to do on the desktop with code signing etc.
We use one so that we know that the production/test boxes have the same libraries and versions of those libraries installed as what is available on the build server.
It's about management and testing for us. With a build server we always know that we can build our main "trunk" line from version control. We can create a master install with one click and publish it to the web. We can run all of our unit tests each time code is checked in to make sure it works. Collecting all these tasks onto a single machine makes it easier to get it right repeatedly.
You are right that developers could build on their own machines.
But these are some of the things our build server buys us, and we're hardly sophisticated build makers:
Version control issues (some have been mentioned in earlier responses)
Efficiency. Devs don't have to stop to make builds locally. They can kick it off on the server and get on to the next task. If builds are large, then that is even more time the dev's machine is not occupied. For those doing continuous integration and automated testing, even better.
Centralization. Our build machine has scripts that make the build, distribute it to UAT environments, and even to production staging. Keeping them in one place reduces the hassle of keeping them in sync.
Security. We don't do much special here, but I'm sure a sysadmin can make it such that production migration tools can only be accessed on a build server by certain authorized entities.
Maybe I'm the only one...
I think everyone agrees that one should
use a file repository
do builds from the repository (and in a clean environment)
use a continuous testing server (e.g. CruiseControl) to see if anything is broken after your "fixes"
But no one cares about automatically built versions.
When something was broken in an automatic build, but it's not anymore - who cares? It's a work in progress. Someone fixed it.
When you want to do a release version, you run a build from the repository. And I'm pretty sure you want to tag the version in the repository at that time, and not every six hours when the server does its work.
So, maybe a "build server" is just a misnomer and it's actually a "continuous test server". Otherwise it sounds pretty much useless.
A build server gets you a sort of second opinion of your code. When you check it in, the code is checked. If it works, the code has a minimum quality.
Additionally, remember that low-level languages take much longer to compile than high-level languages. It's easy to think "Well look, my .Net project compiles in a couple of seconds! What's the big deal?" A while back I had to mess with some C code and I had forgotten how much longer it takes to compile.
A build server is used to schedule compile tasks (e.g. nightly builds) of usually large projects located in a repository that can sometimes take more than a couple of hours.
A build server also gives you a basis for escrow, being able to capture all the parts necessary to reproduce a build in the case that others may have rights to take ownership.

How do you get up and running with a build server?

I think everyone here would agree that in order to be considered a professional software house, there are a number of fundamental things you must have in place.
There is no doubt that one of these things is a build server; the question is, how far do you need to go?
What are the minimum requirements for the build server? (Somewhere to just compile?)
What is the ultimate goal for your build server? (Scheduled, source control integration, auto deployment to test / live servers)
Where is a good place to start assuming you have nothing at the moment?
It would be great if we could list out a few simple tasks that an amateur developer could take on board in order to set them on the right track to a fully functional build server.
It would also be good to hear about people that feel they have a "complete" system setup that performs all the functionality they require and how they went about setting it all up from scratch.
You can start by looking into Cruise Control.
There's also CruiseControl.net if that's your poison.
Essentially though, you need the following ingredients:
A dedicated environment (Virtual Machine/server. Don't use a developer's machine, unless it's just you. Even then, run a VM if you can. Much easier to move it to a server when/if one becomes available in your organisation)
A source control system that supports labelled/tagged revisions (for example, Subversion+TortoiseSVN)
Build scripts. These can be batch files that start devenv.exe or msbuild.exe with a command line, or you can use something like Ant or NAnt.
In this scenario, CruiseControl acts as the Continuous Integration server, and can make sure that you have builds done as you check in your code. This means you know whether the build is broken more quickly than if you just had nightly builds. You should probably also have nightly builds, though.
Hudson is a great CI.
We run a farm locally, but we started by downloading hudson.war and doing
java -jar hudson.war
It integrates with SCM and bug tracking systems; it is really awesome.
You'll need some disk space if you want to keep old builds.
Enjoy; it is the most straightforward CI solution so far.
HTH,
Hubert.
If you're using Cruise Control, the place to start is an Ant build.xml that does the job manually.
You need a version control system that can do labeled check-outs.
You need JUnit tests to run using the Ant task and generate HTML reports.
I'd say you'd have to start by implementing a build strategy so you can build your code in a structured way - I use NAnt.
For a basic build server - use one of the CI offerings out there that monitors your source control and triggers a build whenever a change is detected, e.g. CruiseControl.
Once you get the basic build together - add the running of your unit tests after a successful build.
The most successful system I've had in place had 3 different builds:
- one that fired on a check-in - all this did was build the code
- an on-demand one that would build the application, generate the installer and then put the installer onto a shared drive for the testers to pick up
- a daily build that fired at 10pm. This:
  - ran some code generation to build DB and C# code from a UML model
  - built the code
  - created a new build verification test user on a test Oracle instance
  - ran the application schema into the DB
  - fired off a bunch of unit tests
  - cleaned up the DB user (if the tests were successful)
  - ran coverage analysis to build a report of the unit code coverage
Software we used for this was NAnt, CruiseControl.NET, a custom code generation system, a custom app to build an Oracle schema, and NCover for the code analysis.
Start by having a read of Martin Fowler's excellent paper on Continuous Integration.
We built such a system for a major project >2,000 kSLOC and it proved itself to be invaluable.
HTH
cheers,
Rob
Cruise, Maven, Hudson, etc. are all great, but it's always worth having a stopgap solution.
You should have a batch file, shell script or simply written instructions that will allow you to run a build from any machine. We have had build servers unavailable in the past and the ability to switch quickly to another machine was invaluable!
The spec of the build machine need not be important unless you have a monster project. We try and keep our build times down to 10 minutes (including unit tests) and we have a pretty big project.
Don't be tempted to create or write your own build system because "none of the tools out there are good enough". All modern build systems allow you to write plugins to do custom stuff.
I'm using CruiseControl.NET and an MSBuild build script.
I can use the build script manually to get the latest version of the codebase and build the codebase very easily using the command line. (This is very interesting if you are working on an application that consists of multiple solutions.)
Next to that, my CruiseControl.NET build server uses this build script as well. It checks at a regular interval whether changes have been committed to source control.
If that happens, CC.NET performs the 'get latest' task that I've defined in the build script, builds everything, executes unit tests and performs static code analysis (FxCop).
My 'build server' is just an old workstation. It's a Pentium IV, 3 GHz with 1 GB of RAM, and it does its job perfectly.
One additional thing that I would find interesting, is to have the ability to automatically deploy a new version, or build a setup.
I haven't done that yet, since I'm not sure whether it is a good idea, nor have I found a good strategy yet to do so ...
I mean: is deploying a new version of some components into production for a mission-critical application a good idea? I don't think so...
I think this is a good place to start: http://confluence.public.thoughtworks.org/display/CC/Home (the home of CruiseControl).
At least, that's where I started looking when setting up my build server. :)
Roughly in order - minimal/least sophisticated through more sophisticated
able to get a specific set of source onto any machine
able to build that source (with no problems)
able to build on a schedule (each night, or some other defined period) with no user intervention
One (or more) dedicated build servers (not shared as a QA or dev machine)
able to do a build after each check-in/commit
Notify interested parties of the build status after a build
Provide build status at any time
Create installers as part of the build
ability to deploy/live if build is good
Run unit tests
Run tests on the product
Report the results of those tests
Static code analysis and reporting
...
And the list goes on and on
Don't be afraid to just start with batch files or shell scripts or other ad-hoc means. People made perfectly good software before the CI craze. There were plenty of good processes before Hudson and CruiseControl (I am not knocking those or others - I use Hudson, among others), but don't miss the point: these things are here to help you, not to become an overbearing process.
I couldn't give you all the details about how we set our build server up (I was only involved at the start), but:
We started with an in-house system, implemented in ASP.NET and a .NET Windows Service, using NAnt to do the actual builds. Actually, most of the workflow was implemented in NAnt (e.g. emailing people, copying stuff around, etc.).
We moved to JetBrains TeamCity (there's a free cut-down version available), which is still serving us well.
We use it for builds triggered by a commit: these just build the binaries and run the unit tests. From here, we can do a complete build, which does the MSI as well. From there, we have system test builds that run more in-depth tests, across an environment built with virtual machines (with a separate domain controller, SQL Server box, etc.). When the system tests pass, the build is made available to our QA department for manual testing and some regression tests that we've not automated yet.
In the Java space I've tested most of the available build environments. The issue with automatic builds is that you quite often end up spending a fair amount of time following them up. After we switched to the commercial Bamboo from Atlassian, we found that we had to spend a lot less time pampering the build box, which in our case turns out to be very good economy. Bamboo also supports clustering, so you can add inexpensive boxes as needs evolve.
Try to find something that fits in with your existing practices in terms of building - e.g. it's not going to be a good fit to use an Ant-based build server if you're using Maven, for instance!
Ideally, it should just be able to monitor your source control system, check out the code, build, run some tests, and publish the results without you being aware of it, or at least not until it's reporting a failure. Personally, I'd suggest Hudson (https://hudson.dev.java.net/) as a good starting point, as it's easy to get installed and running and has a decent UI.
We start by writing batch scripts that will run on the developers' machines. Once we have all the processes automated, we move them to the build server.
On the tools side we are currently moving from Cruise Control to TFS.