Preventing build breaks - using a pre-commit build

What methods are available for preventing sloppy developers from breaking builds?
Are there any version control systems that can prevent check-ins of code that breaks the build?
Thanks

Microsoft TFS Build has something called "gated check-ins" which provides this, by performing a private check-in (called Shelving) which is promoted to a normal check-in if the build succeeds.
http://blogs.msdn.com/b/patcarna/archive/2009/06/29/an-introduction-to-gated-check-in.aspx
TeamCity has a similar concept called "delayed commit":
http://www.jetbrains.com/teamcity/features/delayed_commit.html
I can wholeheartedly recommend TeamCity!

Get a LART and beat developers who break the build.
Have the build status on a great big screen where everyone can see it, with a red/green background and the name of the last person to commit.
Have the build server send an e-mail to the whole dev team fingering the developer who broke the build.
Honestly... why do people get so hung up on "making developers do X"? Tell them it's the process, and fire them if they don't follow it.
EDIT: because the following was too long for a comment.
I work on a team with 12 or so developers. Some people consider that large, some consider it small.
We have a large screen (32" flat screen TV on a 6 foot stand) that everyone can see that tells us all sorts of information - including (in the biggest box on the screen) the state of the "commit build".
Our process is to update from SVN and run the commit build locally (about 2-3 mins) before committing. If it passes, ship it. If not, fix it locally and repeat. Since we do TDD, this generally only happens if something you pulled out of SVN broke something you were working on.
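That loop is easy to wrap in a small script so nobody skips a step. A minimal sketch, assuming a POSIX shell and a hypothetical commit-build.sh that runs the same build and tests as the CI commit build:

    #!/bin/sh
    # Update, run the local commit build, and only commit if it passed.
    # ./commit-build.sh is a placeholder for your real build-and-test entry point.
    set -e

    svn update              # pick up everyone else's changes first
    ./commit-build.sh       # the ~2-3 minute local commit build

    # We only get here if the build and tests passed.
    svn commit "$@"         # pass the commit message through, e.g. -m "..."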
If it fails in CI you either ignored the process or your commit collided with someone else's in a bad way. The screen goes red, someone shouts at you, you fix it and move on. This generally only happens to us about once a week or so; mostly it goes red because people try and cut corners ;-)
Nobody needs to "force developers" to do anything. We're creative, artistic individuals who are generally adult and professional enough to follow a process if that process makes sense. In this case, build locally and only commit if that passes.
Who cares if the build breaks in CI, as long as it's fixed quickly so it doesn't prevent the team from working?

Related

When should I make a repository for my project?

I've been looking into using Mercurial for version management and wanted to try it out. I'm planning on doing a project on my own to help me learn C++ (coming from Java) and thought it would be a good idea to try version management with it. It would probably be a game or some sort of simple application; I'm not sure whether it will have a GUI or not.
The problem is I don't know much about version management. When in a project is a good time to make the repository? Should I just do it right away? How often should I make commits?
I'll be using Netbeans for my development since it seems to be pretty good, also has Mercurial controls built in, but what should I set in the .hgignore for my project? I assume the repository would be the whole folder Netbeans creates but what should I make it ignore?
Bonus questions: I'll be using Bitbucket for hosting/backup, should I make the project public? Why/why not? Also would that make it open source? Should I bother with attaching a license? Because I have no idea how the licenses work and such. Then again, I sincerely doubt I'll be making anything anyone will want to copy too badly.
On repos:
You should use a VCS repository as soon as possible. Not only does it allow for off-site backups, it also lets you track changes and find out when a bug was introduced.
Having a VCS, particularly a distributed VCS like Mercurial, will change the way you code. You won't have to be careful not to break things, because you always have the old version that you can revert to. If, after a few weeks or months, you decide that a particular course of development was a bad idea, you can rewind all the way back to some prior point.
You should generally commit either every day, or every time you finish a task. Like, you might have a 4-hour task of "write this class and get it working". You commit after that. When you're done for the day, you commit. I wouldn't suggest committing in a non-compiling state, though, so you should try to stop for the day only when everything still builds.
As for ignoring, you should ignore the things you don't intend to commit: generated files, temporary files, anything you wouldn't want or need in the repo.
What should be in the repo is 100% of the information necessary to build the project from scratch (minus external dependencies).
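For a NetBeans C++ project tracked with Mercurial, as described in the question, a starting point might look like the sketch below; the ignored directory names are assumptions (check what your build actually generates), not anything Mercurial or NetBeans mandates:

    # One-time setup at the repository root.
    cat > .hgignore <<'EOF'
    syntax: glob

    # Build output
    build/
    dist/
    *.o
    *.obj
    *.exe

    # Per-user IDE settings and editor noise
    nbproject/private/
    *~
    *.swp
    EOF

    hg add .hgignore        # the ignore file itself belongs in the repo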
On licenses:
I would say that it is very rude to put up a public repository without some kind of license attached. Without a license, anyone even pulling from the repo (which is what you're inviting by making it public) is violating your copyright unless they have direct, explicit permission from you.
So either look at how copyright works and pick a license to release your stuff under, or keep your repos private.
The instant you conceive of it, usually. There is no penalty for putting everything you have into a VCS right away, and who knows, you may want that super-duper prototype version later on down the road. Unless you're going to be checking in large media files (don't), my answer is: right now.
As for how often to commit, that's a personal preference. I use git, and as far as I'm concerned there is no problem with checking in every time you make some unit of significant change. The more snapshots along the way you have, the easier it becomes to track down bugs later on.
Your .hgignore is up to you. You need to decide what to version and what not to. As a rule of thumb, any files generated during compilation probably shouldn't be checked in.
Also, if you release it to the public, please attach a license. It doesn't matter what it is, but it makes all the bosses and lawyers in the world happy when they know what they can and can't do with your code. Finally, please don't release something to the public unless you think someone else will benefit from it. The internet already suffers from information overload.
Commit early, and commit often.
With a distributed VCS like hg or git there's really no reason not to start a repo before you even write any code. If you decide to scrap your project, you can always delete your repo before you post it to whatever hosting site you're using. The more often you commit, the easier it will be to home in on where you messed up later on (you will).
Stuff to put in your ignore file would be build files and anything that your IDE generates periodically, like user preferences and tag caches. You don't want to foist your preferences on someone else and you don't want to have to create a commit every time you change the position of a window in the IDE. What's in the repo should be the minimum that someone would need to be able to work on your project.
If you're just doing a small project to learn, then I wouldn't worry about licenses or having a private repo. You can always add a license later. Alternatively, just pick a permissive license and go with it.
Based on my experience with git, I would recommend the following:
Create the repository along with the project.
Commit as often as you can, with detailed commit messages.
When you want to test a new idea, create a branch. If the idea is successful, merge your code into the master branch. If it is not, you can just discard the branch. This is probably the most important idea (see the sketch after this list).
Put the names of object files, executables and editor backup files in the ignore list.
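A minimal sketch of that branch-per-idea workflow in git (the branch name is just a placeholder):

    git checkout -b try-new-renderer     # start a branch for the new idea
    # ...hack away, committing as often as you like...
    git commit -am "Experiment with the new renderer"

    # If the idea worked out, fold it back into the main line:
    git checkout master
    git merge try-new-renderer

    # If it didn't, just throw the branch away:
    git branch -D try-new-renderer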

How can I guarantee all unit tests pass before committing?

We've had problems recently where developers commit code to SVN that doesn't pass unit tests, fails to compile on all platforms, or even fails to compile on their own platform. While this is all picked up by our CI server (Cruise Control), and we've instituted processes to try to stop it from happening, we'd really like to be able to stop the rogue commits from happening in the first place.
Based on a few other questions around here, it seems to be a Bad Idea™ to force this as a pre-commit hook on the server side mostly due to the length of time required to build + run the tests. I did some Googling and found this (all devs use TortoiseSVN):
http://cf-bill.blogspot.com/2010/03/pre-commit-force-unit-tests-without.html
That would solve at least two of the problems (though it wouldn't build on Unix), but it doesn't reject the commit if the build fails. So my questions:
Is there a way to make a pre-commit hook in TortoiseSVN cause the commit to fail?
Is there a better way to do what I'm trying to do in general?
There is absolutely no reason why your pre-commit hook can't run the Unit tests! All your pre-commit hook has to do is:
Checkout the code to a working directory
Compile everything
Run all the unit tests
Then fail the hook if the unit tests fail.
It's completely possible to do. And afterwards, everyone in your development shop will hate your guts.
Remember that in a pre-commit hook, the entire hook has to complete before it can allow the commit to take place and control can be returned to the user.
How long does it take to do a build and run through the unit tests? 10 minutes? Imagine doing a commit and sitting there for 10 minutes waiting for your commit to take place. That's the reason why you're told not to do it.
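If you decide to try it anyway, the shape of such a hook is roughly the following. This is only a sketch: build.sh and run-unit-tests.sh are placeholders, and a real hook would have to materialize the in-flight transaction (e.g. via svnlook) rather than just exporting the current head as shown here.

    #!/bin/sh
    # Subversion server-side pre-commit hook sketch.
    # Subversion invokes it as: pre-commit REPOS TXN
    # A non-zero exit (with a message on stderr) rejects the commit.
    REPOS="$1"
    TXN="$2"

    WORKDIR=$(mktemp -d) || exit 1
    trap 'rm -rf "$WORKDIR"' EXIT

    # Approximation: export the current head of trunk to build against.
    svn export -q "file://$REPOS/trunk" "$WORKDIR/src" || exit 1

    cd "$WORKDIR/src" || exit 1
    if ! ./build.sh || ! ./run-unit-tests.sh; then
        echo "Commit rejected: build or unit tests failed." 1>&2
        exit 1
    fi

    exit 0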
Your continuous integration server is a great place to do your unit testing. I prefer Hudson or Jenkins over CruiseControl. They're easier to set up, and their web pages are more user-friendly. Even better, they have a variety of plugins that can help.
Developers don't like it to be known that they broke the build. Imagine if everyone in your group got an email stating you committed bad code. Wouldn't you make sure your code was good before you committed it?
Hudson/Jenkins have some nice graphs that show you the results of the unit testing, so you can see from the webpage what tests passed and failed, so it's very clear exactly what happened. (CruiseControl's webpage is harder for the average eye to parse, so these things aren't as obvious).
One of my favorite Hudson/Jenkins plugins is the Continuous Integration Game. In this plugin, users are given points for good builds, fixing unit tests, and creating more passing unit tests. They lose points for bad builds and breaking unit tests. There's a scoreboard that shows all the developers' points.
I was surprised how seriously developers took to it. Once they realized that their CI game scores were public, they became very competitive. They would complain when the build server itself failed for some odd reason, and they lost 10 points for a bad build. However, the number of failed unit tests dropped way, way down, and the number of unit tests that were written soared.
There are two approaches:
Discipline
Tools
In my experience, #1 can only get you so far.
So the solution is probably tools. In your case, the obstacle is Subversion. Replace it with a DVCS like Mercurial or Git. That will allow every developer to work on their own branch without the merge nightmares of Subversion.
Every once in a while, a developer will mark a feature or branch as "complete". That is the time to merge the feature branch into the main branch. Push that into a "staging" repository which your CI server watches. The CI server can then pull the last commit(s), compile and test them and only if this passes, push them to the main repository.
So the loop is: main repo -> developer -> staging -> main.
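The developer side of that loop might look something like this with Mercurial (the repository URLs are made up):

    hg pull http://hg.example.com/main && hg update    # start from the blessed main repo
    # ...work on a feature branch, committing locally...
    hg push http://hg.example.com/staging              # hand the changesets to CI

    # The CI server pulls from staging, builds and tests, and only on
    # success pushes the changesets on to http://hg.example.com/main.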
There are many answers here which give you the details. Start here: Mercurial workflow for ~15 developers - Should we use named branches?
[EDIT] So you say you don't have the time to solve the major problems in your development process ... I'll let you guess how that sounds to anyone... ;-)
Anyway ... Use hg convert to get a Mercurial repo out of your Subversion tree. If you have a standard setup, that shouldn't take much of your time (it will just need a lot of time on your computer but it's automatic).
Clone that repo to get a work repo. The process works like this:
Develop in your second clone. Create feature branches for that.
If you need changes from someone, run the conversion again into the first clone, then pull from that into your second clone (that way, you always have a "clean" copy from Subversion just in case you mess up).
Now merge the Subversion branch (default) and your feature branch. That should work much better than with Subversion.
When the merge is OK (all the tests run for you), create a patch from a diff between the two branches.
Apply the patch to a local checkout from Subversion. It should apply without problems. If it doesn't, you can clean your local checkout and repeat. No chance to lose work here.
Commit the changes in Subversion, convert them back into repo #1 and pull into repo #2.
This sounds like a lot of work but within a week, you'll come up with a script or two to do most of the work.
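A rough sketch of what such a helper script could look like; the paths, branch name and Subversion URL are all assumptions:

    #!/bin/sh
    # Refresh the converted mirror, merge into the feature branch, and
    # ship the result back to Subversion as a patch. Names are hypothetical.
    set -e

    SVN_URL=http://svn.example.com/project/trunk
    SVN_WC=~/work/project-svn            # plain Subversion checkout of trunk
    HG_MIRROR=~/work/project-hg-mirror   # repo #1: clean conversion of SVN
    HG_WORK=~/work/project-hg            # repo #2: where feature branches live

    # 1. Refresh the mirror from Subversion and pull into the work clone.
    hg convert "$SVN_URL" "$HG_MIRROR"
    hg -R "$HG_WORK" pull "$HG_MIRROR"

    # 2. Merge the converted default branch into the feature branch, then
    #    turn the result into a patch against default.
    cd "$HG_WORK"
    hg update my-feature
    hg merge default
    hg commit -m "Merge latest Subversion changes"
    hg diff -r default -r my-feature > /tmp/feature.patch

    # 3. Apply the patch to the Subversion working copy and commit it there.
    cd "$SVN_WC"
    svn update
    patch -p1 < /tmp/feature.patch
    svn commit -m "Feature: ..."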
When you notice someone broke the build (the tests aren't running for you anymore), undo the merge (hg update -C) and continue to work on your working feature branch.
When your colleagues complain that someone broke the build, tell them that you don't have a problem. When people start to notice that your productivity is much better despite all the hoops you've got to jump through, mention "it would be much simpler if we scrapped SVN".
The best thing to do is to work to improve the culture of your team, so that each developer feels enough of a commitment to the process that they'd be ashamed to check in without making sure it works properly, in whatever ways you've all agreed.

Why should one use a build system over that which is included as part of an IDE?

I've heard more than one person say that if your build process is clicking the build button, then your build process is broken. Frequently this is accompanied by advice to use things like make, cmake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?
EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.
EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about Continuous Integration: yes, I understand completely that when you have many developers on a project, having CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. They are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
Aside from continuous integration needs which everyone else has already addressed, you may also simply want to automate some other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, or running your unit tests, or resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
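To make "automate something in addition to your simple code compile" concrete, here is a toy sketch; every command and file name is a placeholder for whatever your project actually uses:

    #!/bin/sh
    # A build script that does more than "click Build": stamp a build number,
    # compile, run the unit tests, then run static analysis.
    set -e

    BUILD_NUMBER=$(($(cat build-number.txt) + 1))
    echo "$BUILD_NUMBER" > build-number.txt            # bump the build number

    make release BUILD_NUMBER="$BUILD_NUMBER"          # the actual compile
    make test                                          # unit tests
    ./run-static-analysis.sh                           # FxCop, lint, etc.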
Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.
One reason for using a build system that I'm surprised nobody else has mentioned is flexibility. In the past, I also used my IDE's built-in build system to compile my code. I ran into a big problem, however, when the IDE I was using was discontinued. My ability to compile my code was tied to my IDE, so I was forced to re-do my entire build system. The second time around, though, I didn't make the same mistake. I implemented my build system via makefiles so that I could switch compilers and IDEs at will without needing to re-implement the build system yet again.
I encountered a similar problem at work. We had an in-house utility that was built as a Visual Studio project. It's a fairly simple utility and hasn't needed updating for years, but we recently found a rare bug that needed fixing. To our dismay, we found out that the utility was built using a version of Visual Studio that was 5-6 versions older than what we currently have. The new VS wouldn't read the old-version project file correctly, and we had to re-create the project from scratch. Even though we were still using the same IDE, version differences broke our build system.
When you use a separate build system, you are completely in control of it. Changing IDEs or versions of IDEs won't break anything. If your build system is based on an open-source tool like make, you also don't have to worry about your build tools being discontinued or abandoned because you can always re-build them from source (plus fix bugs) if needed. Relying on your IDE's build system introduces a single point of failure (especially on platforms like Visual Studio that also integrate the compiler), and in my mind that's been enough of a reason for me to separate my build system and IDE.
On a more philosophical level, I'm a firm believer that it's not a good thing to automate away something that you don't understand. It's good to use automation to make yourself more productive, but only if you have a firm understanding of what's going on under the hood (so that you're not stuck when the automation breaks, if for no other reason). I used my IDE's built-in build system when I first started programming because it was easy and automatic. I later started to become more aware that I didn't really understand what was happening when I clicked the "compile" button. I did a little reading and started to put together a simple build script from scratch, comparing my output to that of the IDE's build system. After a while I realized that I now had the power to do all sorts of things that were difficult or impossible through the IDE. Customizing the compiler's command-line options beyond what the IDE provided, I was able to produce a smaller, slightly faster output. More importantly, I became a better programmer by having real knowledge of the entire development process from writing code all the way down through the generation of machine language. Understanding and controlling the entire end-to-end process allows me to optimize and customize all of it to the needs of whatever project I'm currently working on.
If you have a hands-off, continuous integration build process, it's going to be driven by an Ant or make-style script. Your CI process will check the code out of version control onto a separate build machine when changes are detected, then compile, test, package, deploy, and create a summary report.
Let's say you have 5 people working on the same set of code. Each of those 5 people is making updates to the same set of files. Now you may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's changes and try it. This is easy every once in a while, but it quickly becomes tiresome to do over and over again.
With a build server that does it automatically, it checks whether the code compiles for everyone, all the time. Everyone always knows if something is wrong with the build, and what the problem is, and no one has to do any work to figure it out. Small things add up: it may only take a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially if you have multiple people doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.
Here's another cool thing: our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. It reloads snapshots of all the databases it needs to apply patches to and runs the patches to make sure that they all work, and run in the order they are supposed to. The build server is also smart enough to run all the unit tests/automation tests and return the results. Making sure the code compiles is fine, but with an automation server, it can handle many, many steps automatically that would take a person maybe an hour to do.
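As a sketch of the database leg of such a job (assuming SQL Server and sqlcmd; the server name, database name and script locations are made up):

    #!/bin/sh
    # Restore a known snapshot, then apply the checked-in patch scripts in
    # order, failing fast on any error.
    set -e

    sqlcmd -S buildsql01 -Q "RESTORE DATABASE AppDb FROM DISK = N'D:\\snapshots\\AppDb.bak' WITH REPLACE"

    for script in sql/patches/*.sql; do
        echo "Applying $script"
        sqlcmd -S buildsql01 -d AppDb -b -i "$script"   # -b: abort on SQL errors
    done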
Taking this a step further, if you have an automated deployment process along with the build server, deployment is automatic. Anyone who can press a button to run the process can deploy and move code to QA or production. This means that a programmer doesn't have to spend time doing it manually, which is error-prone. When we didn't have the process, it was always a crap shoot as to whether or not everything would be installed correctly, and generally it was a network admin or a programmer who had to do it, because they had to know how to configure IIS and move the files. Now even our most junior QA person can refresh the server, because all they need to know is which button to push.
The IDE build systems I've used are all usable from automated build / CI tools, so there is no need to have a separate build script as such.
However on top of that build system you need to automate testing, versioning, source control tagging, and deployment (and anything else you need to release your product).
So you create scripts that extend your IDE build and do the extras.
One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate with changes made by other developers (ie. merge).
If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. It may be using a text-based format like XML, but XML is notoriously hard to handle with standard diff/merge tools. Just the fact that people are using a GUI to make edits makes it more likely that you end up with unnecessary changes in the project files.
With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.

Is anyone actually successfully using MSTest across the team?

I've been using MSTest so far for my unit tests, and found that it would sometimes randomly break my builds for no reason. The builds would fail in VS but compile fine in MSBuild, with an error like 'option strict does not allow IFoo to cast to type IFoo'. I believe I have finally fixed it, but after the bug coming back and struggling to make it go away again, with little help from MS, it left a bad taste in my mouth. I also noticed, looking at this forum and other blogs and such, that most people are using NUnit, xUnit, or MBUnit. We are on VS2008 at work, BTW. So now I am looking to explore other options.
I'm working on moving our team to start doing TDD and real unit testing and have some training planned, but first I would like to come up with a set of standard tools and best practices. To this end I've been looking online to come up with the right infrastructure for both a build server and dev machines. I was looking at the Typemock website, as I've heard great things about their mocking framework, and noticed that they seem to promote MSTest, and even have some links about people moving TO MSTest from NUnit.
This is making me re-think my decision, so I guess I'm asking: is anyone using MSTest as part of their TDD infrastructure? Are there any known limitations if I want to integrate with a build / CI server, code coverage, or any other kind of TDD tool I may need? I did search these forums and mostly found people comparing the 3rd-party frameworks to each other and not even giving MSTest much of a chance. Is there a good reason why?
Thanks for the advice
EDIT: Thanks to the replies in this thread, I've confirmed MSTest works for my purposes and integrates gracefully with CI tools and build servers.
But does anyone have any experience with FinalBuilder? This is the tool that I'd like us to use for the build scripts, to avoid having to write a ton of XML compared to other build tools. Are there any limitations here that I should be aware of before committing to MSTest?
I should also note - we are using VSS =(. I'm hoping we can ax this soon - hopefully as part of, maybe even the first step, of setting up all of this infrastructure.
At Safewhere we currently use MSTest for TDD, and it works out okay.
Personally, I love the IDE integration, but dislike the API. If it ever becomes possible to integrate xUnit.NET with the VS test runner, we will migrate very soon thereafter.
At least with TFS, MSTest works pretty well as part of our CI.
All in all I find that MSTest works adequately for me, but I don't cling to it.
If you are evaluating mock libraries, take a look at this comparison.
I've been using MS Test since VS 2008 came out, but I haven't managed to strong-arm anything like TDD or CI here at work, although I've messed with Cruise Control a little in an attempt to build a CI server on my local box.
In general I've found MS Test to be pretty decent for testing locally, but there are some pain points for institutional use.
First, MS Test adds quite a few things that probably don't belong in source control. The .VSMDI files are particularly annoying; just running MS Test creates anywhere from 1 to 5 of them and adds them to the solution file, which means churn on your .SLN in source control, and churn of that sort is bad.
I understand the supposed point behind these extra files -- tracking test run history and such -- but I don't find them particularly useful for anything but a single developer. You should use your build service and CI for that sort of thing!
Second, you either must have Team Foundation Server to run your unit tests as part of CI, or you have to have a copy of Visual Studio installed on your build server if you use, for example, Cruise Control.NET. See this Stack Overflow question for details.
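In practice that just means the CI job shells out to MSTest.exe itself; the command line looks roughly like this (the path is the VS2008 default install location, and the test assembly name is made up):

    "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /testcontainer:MyApp.Tests.dll /resultsfile:results.trx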
In general, there's nothing wrong with MS Test. But going CI will not be as smooth as it could be.
I have been using MSTest very successfully in our company. We are currently setting up standardised build processes, and so far we have had good success with TeamCity. For continuous integration, we use out-of-the-box TeamCity configurations. For the actual release builds, we set up large MSBuild scripts that automate the entire process.
I really like MSTest because of the IDE integration and also because all our devs can automatically use it without installing any 3rd-party dependencies. I would not recommend switching just because of the problem you are experiencing. I have come full circle: we went over to NUnit and then came back again. These frameworks are all much the same at the end of the day, so pick the one that is easiest for most of your devs to get access to and start using.
What I suspect your problem might be: it sounds like an obscure problem I have had before, where incorrect DLL references (e.g. adding explicit references via Browse to projects in your solution instead of using project references) lead to out-of-date problems that only show up after clean checkouts or builds.
The other really suspect issue that I have found before is if you have some visual component or control with a public property of some custom type that is being serialised into the form's .resx file. I typically need to flag such properties with the DesignerSerializationVisibility.Hidden attribute, which tells the designer not to try to serialise the property value (typically some object graph). Just a thought. Could be way out.
I trust the tools; they don't really lie about there being a genuine problem. They only misrepresent it or report it as something completely obscure. It sounds to me like that's what you have. I suspect this because the error message doesn't make sense if all is in order, but it does make sense if some piece of code has loaded an out-of-date or modified version of the DLL at that point.
I have successfully deployed several FinalBuilder installations and the customers have been very happy with the outcome. I can highly recommend it.

What is the purpose of a dedicated "Build Server"? [closed]

I haven't worked for very large organizations and I've never worked for a company that had a "Build Server".
What is their purpose?
Why aren't the developers building the project on their local machines, or are they?
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
The only place I see a build server being useful is for continuous integration, with the build server constantly building what is committed to the repository. Is it that I have just not worked on projects large enough?
Someone, please enlighten me: What is the purpose of a build server?
The reason you give is actually a huge benefit. Builds that go to QA should only ever come from a system that builds only from the repository. This way build packages are reproducible and traceable. Developers manually building code for anything except their own testing is dangerous: too much risk of stuff not getting checked in, being out of date with other people's changes, and so on.
Joel Spolsky on this matter.
Build servers are important for several reasons.
They isolate the environment. The local Code Monkey developer says "It compiles on my machine" when it won't compile on yours. This can mean out-of-sync check-ins or it could mean a dependent library is missing. JAR hell isn't nearly as bad as DLL hell; either way, using a build server is cheap insurance that your builds won't mysteriously fail or package the wrong libraries by mistake.
They focus the tasks associated with builds. This includes updating the build tag, creating any distribution packaging, running automated tests, creating and distributing build reports. Automation is the key.
They coordinate (distributed) development. The standard case is where multiple developers are working on the same code base. The version control system is the heart of this sort of distributed development, but depending on the tool, the developers may not interact much with each other's code. Instead of forcing developers to risk bad builds or worry about merging code overly aggressively, design the build process so that the automated build can see the appropriate code and process the build artifacts in a predictable way. That way, when a developer commits something with a problem, like not checking in a new file dependency, they can be notified quickly. Doing this in a staged area lets you flag the code that has built so that developers don't pull code that would break their local build. PVCS did this quite well using the idea of promotion groups. ClearCase could do it too using labels, but would require more process administration than a lot of shops care to provide.
What is their purpose?
They take load off developer machines and provide a stable, reproducible environment for builds.
Why aren't the developers building the project on their local machines, or are they?
Because with complex software, amazingly many things can go wrong when just "compiling through". Problems I have actually encountered:
Incomplete dependency checks of different kinds, resulting in binaries not being updated.
Publish commands failing silently, with the error message in the log ignored.
Builds including local sources not yet committed to source control
(fortunately, no "damn customers" message boxes yet).
When trying to avoid the above problem by building from another folder, some files getting picked up from the wrong folder.
The target folder where binaries are aggregated containing additional stale developer files that should not be included in the release.
We've seen an amazing increase in stability since all public releases start with a get from source control onto an empty folder. Before, there were lots of "funny problems" that "went away when Joe gave me a new DLL".
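The mechanical version of "get from source control onto an empty folder" is short; a sketch, with the URL and build command as placeholders:

    #!/bin/sh
    # Clean release build: everything starts from an empty folder, so stale
    # local files can't sneak into the output.
    set -e

    BUILD_DIR=$(mktemp -d)
    svn export http://svn.example.com/project/tags/release-1.2 "$BUILD_DIR/src"
    cd "$BUILD_DIR/src"
    ./build.sh --release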
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
What's "reasonable"? If I run a batch build on my local machine, there are many things I can't do. Rather than pay developers for builds to complete, pay IT to buy a real build machine already.
Is it that I have just not worked on projects large enough?
Size is certainly one factor, but not the only one.
A build server is a distinct concept from a continuous integration server. The CI server exists to build your projects when changes are made. By contrast, a build server exists to build the project (typically a release, against a tagged revision) in a clean environment. It ensures that no developer hacks, tweaks, unapproved config/artifact versions or uncommitted code makes it into the released code.
The build server is used to build everyone's code when it is checked in. Your code may compile locally, but you most likely won't have all the change made by everyone else all the time.
To add on what has already been said :
An ex-colleague worked on the Microsoft Office team and told me a complete build sometimes took 9 hours. That would suck to do on YOUR machine, wouldn't it?
It's necessary to have a "clean" environment free of artifacts of previous versions (and configuration changes) in order to ensure that builds and tests work and don't depend on the artifacts. An effective way to isolate is to create a separate build server.
I agree with the answers so far in regards to stability, traceability, and reproducibility. (Lots of 'ity's, right?) Having ONLY ever worked for large companies (health care, finance) with MANY build servers, I would add that it's also about security. Ever seen the movie Office Space? If a disgruntled developer builds a banking application on his local machine and no one else looks at it or tests it... BOOM. Superman III.
These machines are used for several reasons, all trying to help you provide a superior product.
One use is to simulate a typical end user configuration. The product might work on your computer, with all your development tools and libraries set up, but the end user most likely won't have the same configuration as you. For that matter, other developers won't have the exact same setup as you either. If you have a hardcoded path somewhere in your code, it will probably work on your machine, but when Dev El O'per tries to build the same code, it won't work.
They can also be used to monitor who broke the product last, with what update, and where the product regressed. Whenever new code is checked in, the build server builds it, and if it fails, it's clear that something is wrong and the user who committed last is at fault.
For consistent quality, and to get the build 'off your machine' so that environment errors are spotted and any files you forget to check in to source control also show up as build errors.
I also use it to create installers as these take a lot of time to do on the desktop with code signing etc.
We use one so that we know that the production/test boxes have the same libraries and versions of those libraries installed as what is available on the build server.
It's about management and testing for us. With a build server we always know that we can build our main "trunk" line from version control. We can create a master install with one click and publish it to the web. We can run all of our unit tests each time code is checked in to make sure it works. By collecting all these tasks onto a single machine it becomes easier to get it right, repeatedly.
You are right that developers could build on their own machines.
But these are some of the things our build server buys us, and we're hardly sophisticated build makers:
Version control issues (some have been mentioned in earlier responses)
Efficiency. Devs don't have to stop to make builds locally. They can kick it off on the server and get on to the next task. If builds are large, then that is even more time the dev's machine is not occupied. For those doing continuous integration and automated testing, even better.
Centralization. Our build machine has scripts that make the build, distribute it to UAT environments, and even to production staging. Keeping them in one place reduces the hassle of keeping them in sync.
Security. We don't do much special here, but I'm sure a sysadmin can make it such that production migration tools can only be accessed on a build server by certain authorized entities.
Maybe I'm the only one...
I think everyone agrees that one should
use a source repository
do builds from the repository (and in a clean environment)
use a continuous test server (e.g. CruiseControl) to see if anything is broken after your "fixes"
But no one cares about automatically built versions.
When something was broken in an automatic build, but it's not anymore - who cares? It's a work in progress. Someone fixed it.
When you want to do a release version, you run a build from the repository. And I'm pretty sure you want to tag the version in the repository at that time, not every six hours when the server does its work.
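Tagging at that moment is cheap in Subversion, for example (the URLs and revision number are placeholders):

    svn copy http://svn.example.com/project/trunk@4711 \
             http://svn.example.com/project/tags/release-2.3 \
             -m "Tag release 2.3 (built from r4711)"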
So, maybe a "build server" is just a misnomer and it's actually a "continous test server". Otherwise it sounds pretty much useless.
A build server gets you a sort of second opinion of your code. When you check it in, the code is checked. If it works, the code has a minimum quality.
Additionally, remember that low-level languages take much longer to compile than high-level languages. It's easy to think "Well look, my .NET project compiles in a couple of seconds! What's the big deal?" A while back I had to mess with some C code, and I had forgotten how much longer it takes to compile.
A build server is used to schedule compile tasks (e.g. nightly builds) of usually large projects located in a repository that can sometimes take more than a couple of hours.
A build server also gives you a basis for escrow, being able to capture all the parts necessary to reproduce a build in the case that others may have rights to take ownership.