Distributed build tools in Visual C++?

We're working on pretty large middleware software, and it takes 10-20 minutes each time we need to re-build the whole solution in VS2008 (quad-core parallel build on a single machine). I've heard there are rather expensive VS extensions like Incredibuild that make use of other machines in your network (we have about 10 machines). Have you ever used or heard about those tools? Do they make the build process enough faster and smarter to be worth the money? (e.g. by re-using object files cached on different machines)
Thanks in advance

Well we had similar problems except that in our case it took over 40 minutes to build.
With Visual Studio 2008 we used Xoreax Incredibuild (see the Xoreax web site) and it dramatically reduced the build time (to half the time or better). So I would say yes, such a tool can help. And in my opinion it is not so expensive (between 250 and 500 USD per seat depending on extra options, with some discounts for volume purchases).
Now we are at VS 2010 and at the time we upgraded there was no support for 2010 in Incredibuild so we rely on the VS2010 build. All developers have quad cores and the build in VS 2010 makes good use of that so the build times are acceptable. We also reviewed and improved our includes and made better use of precompiled headers and so on.
So you could try Xoreax Incredibuild (they have a fully functional 30-day trial version that can be extended when it expires) to determine whether the price/benefit ratio is acceptable for your case, or revise and improve your includes and/or upgrade to VS2010 if you have multi-core machines.

Agree with previous answer. We had similar experiences; build times do go down quite significantly. We first went with the simple Incredibuild setup, using spare capacity on other machines. The benefits were clear, but we recognized we could get even quicker builds by adding a dedicated build server (we do have far more developers, though - more than 100 clients - so YMMV).
As for the costs, it's quite likely that Incredibuild will increase your developer productivity by at least 1%.

Microsoft now has BuildXL, but you have to create a separate project file for it.
FastBuild is another option, but again, it uses a separate project format. msfastbuild can build Visual Studio projects with FastBuild, but... you have to adjust it for your build chain.
I'm still trying to get one of them running...

Related

Is Embarcadero C++ Builder a good choice as an IDE? [closed]

As we (me and the people I work with) grow more and more frustrated working with 250,000+ LOC C++ projects in VS2010 SP1 (the slowness of this IDE is just unbelievable), in my company we were talking about migrating our code to some different IDE. We did some research, and a strong candidate seems to be Embarcadero C++ Builder 2011 XE. Any thoughts on it? Is it any good? How does it compare to VS2010 Ultimate?
I've been using C++ Builder since 1.0 and I hate it with a passion. You would think after all these years the simple little annoyances would have been fixed by now, but they are not. Here is a list of issues I have with the C++ Builder IDE.
Your layout or "personality" is never maintained. You create one, save it, and it only applies to certain things. For example, the debugger window will not maintain its position, nor will the message window. If you detach the project explorer it will sometimes disappear. Most of the time reloading your personality doesn't fix this either. You are stuck dragging your windows back into place.
The debugger will sometimes work and sometimes not work. In a debug build if you set a break point and begin stepping through code, you can hover over a variable to inspect it. Sometimes this works and sometimes it doesn't work on the exact same variable. Crazy!
Eclipse looks for code mistakes: if you forget to put a semicolon at the end of your statement, it puts a little ? mark in the margin. C++ Builder doesn't do anything like this. It gives you a cryptic compile-time error message.
Recent versions of C++ Builder use a makefile similar to VS; it's an XML mess. Eclipse works with CMake and Makefiles. I've read in places that the CMake maintainers are looking for a C++Builder generator but last I checked this doesn't exist. I do embedded and cross compiling so sometimes my C++ Builder code is copied to my embedded development environment or shared with it and I wind up maintaining two build environments.
Not really an IDE issue, but C++Builder does not take advantage of multiple CPUs to compile code. There is, however, a 3rd-party tool you can spend more money on to get this. It's called TwineCompile (http://www.jomitech.com/twine.php). Eclipse calls out to whatever compiler you're using (gcc, etc...), and those compilers and make support the -j option.
C++Builder comes with a limited version of AQTime, which is a dynamic code profiler. Spend more and you get the more advanced version. Eclipse supports many dynamic and static code-analysis tools (which also cost $$), but at least the plugins are there. We use Klocwork.
C++ Builder has no support, that I'm aware of, for external source control like Git. Eclipse does. C++ Builder comes with Subversion, I think, built in. If it supports Git, I could never get it to work. It tells me it doesn't understand the URL scheme when I give it a git path.
Certain template code I write causes the compiler to segfault, and I have to completely restart the IDE. This is nuts to me. You have a compiler that is 10+ years old and it's still segfaulting. I have a piece of C++ template code that compiles OK when I take it to my work computer running the exact same version of C++ Builder, but on my home machine it segfaults. I'm absolutely sure there are no adverse factors at play like viruses, etc...
While compiling a large project that may take a long time, you are unable to browse code with the IDE. Sometimes you may see a compiler warning scroll by and you have to either wait for the compile job to complete to inspect the mentioned line or use an alternate means to open the file.
C++ Builder's IDE has a concept of a Project Group with sub-projects that are more or less self-contained. The Project Group has no concept of a group-level include/link path like the sub-projects have. Sub-projects have base, debug, and release paths, where debug and release can inherit from or block the base, but you don't have this at the project-group level. The IDE has global settings which can be inherited, but those apply to everything you do in the IDE. So there is no way to modify just the include/linker paths for the set of sub-projects in a given project group. I just think they could have done a better job with this.
C++ Builder's build output is not color-coded to, for example, show errors in red and warnings in some other color. Everything is black and white. VC and Eclipse color-code and give the option to change colors for various warnings and errors. The output tab in C++ Builder is the same way. On big projects, it's very difficult to investigate compiler warnings amid the other noise. In C++ Builder's IDE you can select the level of warnings, but this only affects output in the Output tab, and you still get other stupid noise like it letting me know it's deleting linker state files ("CleanLinkerStateFiles").
Unless you're doing Windows desktop GUI development, stay away from Embarcadero/C++ Builder. I started using C++ Builder version 1 back in the Borland days and have a few large projects that are heavily invested in the VCL, so I'm stuck with it for those projects, but for all my new projects I've been using Eclipse.
On a positive note about C++ Builder, the VCL is quite nice. It's not multi-threaded, but it's nice for creating a desktop GUI app really quickly. I think it's much faster to get a C++-based GUI app up in CBuilder than it is in VS. And there appears to be a ton of free and paid GUI components for CBuilder, again with a C++ focus. I know C# + VS has a wealth of GUI controls.
UPDATE:
I just ran into a problem today that is the same as the one mentioned in this forum:
http://qc.embarcadero.com/wc/qcmain.aspx?d=57631
[ILINK32 Warning] Warning: Error detected (ILI4536)
Make up your mind. Is it a warning or a goddamn error?
Scroll all the way to the end where you find individuals modifying ILINK32.EXE to get it working again. As of this morning, our builds stop working. We're dead in the water as we scramble to understand and find out what to do about this.
Is this the kind of compiler/IDE you want to depend on? Again, this product has been around for more than a decade and it still has issues like this. I find this completely unacceptable. Crap product from a company that doesn't give a shit.
Not actually an answer, but I'll just leave it here:
It costs money (yes, VS too, but you already own that, don't you?)
It will not be too easy to migrate a big enough project to a new IDE (and compiler), not to mention the people you work with and their habits (I would probably just quit).
There's a new compiler too, with its own brand-new bugs and caveats to learn about. And it's much less widely used than VC++. However, it's based on Clang, which should support the standards better than VC++ and make it easier to port existing C++ code.
The difficulty of migrating hugely depends on the nature of your project (is it GUI based, how deeply does it rely on MS VC++ being the compiler?)
There is nothing positive about Embarcadero XE, neither their aging IDE nor their aging compiler. Only use it if you're bound to it (legacy software) or if you want to do Delphi.
For C++, do yourself a favor and join the 21st century: stick with something more powerful, versatile and modern such as VC++ or Qt.
This question is really a matter of personal opinion.
I personally HATE Visual Studio with a passion; I avoid it like the plague. My exposure to Eclipse has been limited to Java, but even then I've had a hard time working with it.
I have been using C++Builder for 15 years, since v3.0, all the way up to the latest XE6. Yes, it has quirks and limitations, but I still find it the easiest IDE for me to work with and be productive with, once you know how to work with (or around) them. Maybe my experience with it is hindering my ability to work with other IDEs, but so be it. I still prefer C++Builder over any other. But I only use it for Windows development (the VCL is very mature and robust); I don't do cross-platform development with it yet (FireMonkey still has a ways to go to evolve and mature). And I do use plenty of open-source projects with it. Yes, sometimes I have to tweak their projects and/or code to make them compile, but that is usually a one-time deal and then they work fine.
I'd suggest Eclipse.
As an IDE, it takes a little while to get used to, but it is well worth the effort.
It's available for Mac OS, Linux and Windows.
You need to have Java installed on your computer, but that's really a non-issue.
It supports the Cygwin, MinGW, and Microsoft Visual C++ toolchains. The built-in CDT Builder is pretty good too.
You can use it to develop in languages other than C++ (Java, JavaScript, PHP...).
You can extend its functionality by installing plugins.
IT'S FREE!
Did I mention that it has a built-in web browser? Really useful for referring to online documentation while coding.
1.
We have a solution over 1M LOC and VS2010 handles it OK. We especially like the /MP switch for compiling on all available CPU cores.
You did not specify your hardware. If you don't yet run on at least i7-2600 + fast SSD, I suggest trying hardware upgrade first.
2.
I used to use Borland tools a lot in the past. Delphi was rather stable; C++ Builder was much more buggy. A couple of years ago I helped upgrade old Delphi projects to a newer Delphi IDE with some service packs installed. And it had bugs even in the basic file I/O APIs, which had worked since Turbo Pascal. We had to downgrade to a previous version. I expect that the quality of C++ Builder won't be much better than that of VS2010.
3.
You did not specify what exactly is slow. You may want to convert some projects into components compiled separately. Also make sure you use PCH.
It is also worth investigating whether you abuse the C++ inclusion model by including a lot of unneeded header files in each and every unit. If, after preprocessing, IntelliSense and the compiler have to deal with a huge amount of code, no IDE can help.
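A rough first check along these lines is to count the direct #include directives per file and look at the worst offenders. A minimal sketch in Python (the source root and extension list are assumptions about your layout; direct counts only hint at the problem, and MSVC's /showIncludes flag gives the full transitive picture):

    import re
    from pathlib import Path

    INCLUDE_RE = re.compile(r'^\s*#\s*include\s+[<"]([^">]+)[">]')

    def include_counts(root):
        """Count direct #include directives per source/header file under root."""
        counts = {}
        for path in Path(root).rglob('*'):
            if path.suffix.lower() in ('.cpp', '.cxx', '.h', '.hpp'):
                text = path.read_text(errors='ignore')
                counts[path] = sum(1 for line in text.splitlines()
                                   if INCLUDE_RE.match(line))
        return counts

    # Report the heaviest includers first; these are the candidates for cleanup,
    # forward declarations, or the precompiled header.
    for path, n in sorted(include_counts('src').items(), key=lambda kv: -kv[1])[:20]:
        print(f'{n:4d}  {path}')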
I have not used Visual Studio 2010 Ultimate for C++, rather C# and C# web services development. That being said, as a test between VS 2010 Ultimate and C++Builder XE, I have created a simple VS C++ Windows Forms application to click a button and show "Hello World" through an event handler. Getting the button onto the VS Window Designer is okay, as long as you remember to access View | Toolbox. If not, it will take some time to track down where the visual components are hanging out.
For reasons that do not make any language sense, the button click event handler has a signature that looks like:
    System::Void button1_Click(System::Object^ sender, System::EventArgs^ e) {
    }
and it goes to the header file as one would expect. The ^ symbol makes little sense. Does using it tie into the CLI/CLR better? I expected a * to indicate a pointer.
After using the default Form1 (only a header file was created) and subsequently adding a new Windows Form, I finally obtained the respective cpp file. Maybe the C++ Windows Form wizard has a bug. Who knows? Anyhow, when adding the button click event by double-clicking the button in the designer, the cpp does not obtain the method in either form I tested. Maybe this is normal, I do not know. The end result is that after trying to use the MessageBox function within the cpp, it only caused compilation errors. I am sure there is yet another header file that has to be in the include path. I spent no time tracking this down. Trying to set a label component's text property caused compilation errors too. About 20 minutes later, I went to C++Builder XE3 in frustration.
In C++Builder, I have tested VCL Forms, FireMonkey Desktop, and FireMonkey Metropolis application creation from the project wizard. Sure enough, I have three different applications saying "Hello World" in about three minutes total, all calling C++Builder's built-in global shortcut function ShowMessage("insert message here"). The timing could have been slightly different, as I did not time it with a stopwatch. It took longer to save files with meaningful names than to write the code itself: one line of typing in the respective click event body in each cpp (not the header).
The other main daily use gotcha with VS, for those of us who love the Brief key map, is that VS is highly challenging to configure into Brief. When doing heavy development in C#, I use C++Builder's editor in Brief mode, saving files as often as I want. VS does correctly detect file updates as you click back to the VS IDE.
On the slowness mentioned by the OP above, I suggest also looking very closely at the hardware platform relative to running Visual Studio. I have noticed that if the .Net framework is out of date, VS will be slow within the IDE. It does not seem to matter which language the project is in either. I use Visual Studio 2010 Ultimate on Parallels with Windows XP Pro, with 2 virtual cores. Generally, VS responds normally within the IDE. While using it, I am NOT thinking, "VS is soooo slow."
Regarding migrating a quarter million lines to C++Builder from VS, I am not sure whether VS event handlers will convert via some wizard or other migration tool. The ^ symbol, if consistently used in all event handlers, may not be a big deal for a custom-written regular-expression conversion. If the project is very thin on the user-interface layer and heavy in business rules and data, converting to C++Builder should be relatively easy. I would expect some new coding for the new user-interface click events passing the user interaction into the other layers. For prototyping, using data-aware components is likely your best bet. In normal application running, expect to have the business-rules layer use the STL and built-in C++Builder data structures (even the AnsiString c_str() method) to interact with non-data-aware components. The performance and user experience will likely improve.
Start Edit
A big knock on C++Builder XE3 (note this is two releases behind the current one of five) is that the 64-bit Windows support is only for console applications. The knock is more that it is not frequently broadcast how to use the Add Platform sub-menu that appears when right-clicking the mouse over the Target Platforms choice in the Project tree view. This quick method to add more platforms to a project, after it may first be targeting 32-bit Windows, is virtually painless. A new sub-dialog appears after clicking the sole sub-menu choice, and a drop-down box appears to select the new operating system and respective 32-bit or 64-bit version. In my opinion, Embarcadero is not demonstrating often enough how simple it is to add other target platforms. So, to ease every developer's pain if this is not known in advance, I have found three web pages on the Embarcadero site. The first one has pretty pictures of creating a FireMonkey desktop application. Step 5 has the screen capture of the Target Platforms | Add Platform sub-menu choice for adding the Mac OS X platform. It is here, titled Creating Your First FireMonkey Application for Desktop Platforms (C++): http://docwiki.embarcadero.com/RADStudio/XE2/en/Creating_Your_First_FireMonkey_Application_for_Desktop_Platforms_%28C%2B%2B%29
The more terse and no picture procedure is here titled Steps in Creating Cross-Platform Applications:
http://docwiki.embarcadero.com/RADStudio/XE2/en/Steps_in_Creating_Cross-Platform_Applications
The Windows centric procedure and a small screen capture is here titled 64-bit Cross-Platform Application Development for Windows:
http://docwiki.embarcadero.com/RADStudio/XE3/en/64-bit_Cross-Platform_Application_Development_for_Windows
I have found in an Embarcadero forum post that upgrading to the Update 1 XE3 release from the original XE3 release has a Target Platform selection issue. There can be an internal path setting or two that is incorrect, and you may have to change an original XE3 project file (.cbproj) to enable Win64. Apparently the original release's project file has this set to false.
XE5 (note: version five as of December 2013) is supposed to have 64-bit Windows support for both console and forms applications (e.g. VCL, FireMonkey Desktop, FireMonkey Metropolis), plus OS X and iOS (with Android coming sometime soon). For the complete list, review the C++Builder feature matrix PDF for all of the XE5 details:
http://www.embarcadero.com/products/cbuilder/cbuilder-feature-matrix.pdf
Since XE3 Update 1 has been shown to resolve the Target Platform selection issues of the original XE3, there should not be any weird behaviors. I have also come across an Embarcadero post from a TeamB member stating that for mobile applications, the Target Platform choices are filtered, such that mixing a desktop-platform project with a mobile one is not allowed. So, if one wanted to try creating a desktop application and then, with a mouse click, force it onto an iPhone, some other development tool will have to be used. C++Builder and/or Delphi will not attempt to squeeze desktop components onto a mobile device. You have to start with a mobile application project. Here is the forum link:
https://forums.embarcadero.com/thread.jspa?threadID=96371
(End Edit)
If curious about my overall background, I have used C++Builder since version one, Visual Studio .NET (C# 1.0), and Visual Studio 2010 Ultimate. It seems like Visual Studio concentrates on C# more than any other language. There are eighteen C# projects and fifteen C++ projects when selecting File | New Project. To reach the Visual Studio C++ project area, make sure to open the "Other Languages" sub-tree.
In recent Internet posts comparing the latest and greatest Visual Studio with the latest and greatest C++Builder, purchase prices vary by thousands of dollars. Even if you never have an existing installation of either tool to upgrade, C++Builder remains a bargain compared to Visual Studio. Please conduct thorough research before spending your hard-earned cash. Hopefully both tools have 30-day trial installations to compare side by side, as your mileage may vary.

Continuous build infrastructure recommendations for primarily C++; GreenHills Integrity

I need your recommendations for continuous build products for a large (1-2MLOC) software development project. Characteristics:
ClearCase revision control
Approx 80% C++; 15% Java; 5% script or low-level
Compiles for Green Hills Integrity OS, but also some windows and JVM chunks
Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc...)
Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc... (~10 separate images; 5 distinct operating systems)
Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
Build cycle time is a major issue on the project, need support for whatever features help address this (mostly need to manage a large farm of build machines, I guess..)
Operates in a secure environment (this is a gov't program) (Edited to add: This is a classified program; outsourcing the build infrastructure is a non-starter.)
Interested in any best practices or peripheral guidance you might offer. Build automation is one of several overlapping best practices that appear to be missing on the program, but please keep your answers focused on the build-infrastructure piece and directly related observations.
Cost is not the driving concern. Scalability and ease of retrofitting onto an existing infrastructure are key.
(Edited to address #Dan's comment. ;-)
From my experience with similar systems, there are approximately two parts to this problem:
A repeatable method for checking out sources, building the software, and testing it (if you want to do continual testing as well as building), using a small number of command-line invocations.
A means of calling these command lines on various servers in the build farm.
For the latter, we've been using BuildBot, which seems to work pretty well.
For the former, we have a homegrown solution that started out as a simple bash shell script and grew... rather substantially. From experience, I'd suggest starting out in Python rather than bash -- far more of the code goes into handling setup and configuration than into actually invoking programs. (Also, it's probably easier to run it on Windows if you're doing that.)
The things I've found to be really key in our script's usefulness are:
Ironclad repeatability. We have a standard set of build tools, and the scripts start out by scrubbing environment variables. There are very few command-line options; everything goes into configuration files, and those go in version control.
Logging. We produce a log of every command that the build script executes.
Configuration file inheritance. Each variant of our software gets a configuration file, and those files can include more-general settings (which include even-more-general settings).
Extensibility. When we add a new source component, it's pretty easy to add a set of instructions for building that component (and the instructions can be arbitrary bash code). The "can be arbitrary code" part is probably key here; no way is a pre-existing product going to be able to do all of the quirky things that you need for a large complex real-world system.
You can get started with a reasonably simple script and let it grow organically as the need arises; honestly, although ours is a bit messy, I think we got a much more usable result that way than we would have with heavy top-down design.
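To make the shape of such a driver concrete, here is a minimal sketch; it is not our actual tool, and the config format, file names, and allowed environment variables are illustrative assumptions. It scrubs the environment, layers configuration files through inheritance, and logs every command it runs:

    import json, os, subprocess, sys

    ALLOWED_ENV = ('SYSTEMROOT', 'TEMP')  # scrub everything else for repeatability

    def load_config(path):
        """Load a config file, merging in any more-general files it inherits from."""
        with open(path) as f:
            cfg = json.load(f)
        merged = {}
        for parent in cfg.pop('inherits', []):   # parents first, so children override
            merged.update(load_config(parent))
        merged.update(cfg)
        return merged

    def run(cmd, env, log):
        log.write(' '.join(cmd) + '\n')          # log every command we execute
        log.flush()
        subprocess.check_call(cmd, env=env)      # stop the build on the first failure

    def main(config_path):
        cfg = load_config(config_path)
        env = {k: v for k, v in os.environ.items() if k in ALLOWED_ENV}
        env.update(cfg.get('env', {}))           # tool paths come from config, not the caller
        with open('build.log', 'w') as log:
            for step in cfg['steps']:            # each component contributes its own steps
                run(step, env, log)

    if __name__ == '__main__':
        main(sys.argv[1])

A real version grows per-component build instructions and many more options, but keeping the configuration in version control and the environment scrubbed is what buys the repeatability.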
Cost isn't an object? I've worked for GreenHills, and they've solved these issues for their in-house build/test farms. Ask them to do the same for you.
When I see emphasis on things like scalability and security in a build system, I start thinking that you might be a candidate for the enterprise-class build systems / CI systems. Conveniently, it sounds like you can afford them as well. A year-old SD Times article provides a basic breakdown between the enterprise- and team-level build tools.
My company makes AnthillPro and we've worked with a number of companies on large embedded projects as well as highly secure projects. IBM is probably the largest other player in the space with BuildForge.
AnthillPro puts some extra emphasis on what you do with the images in the minutes/hours/days post-build (do you install them onto simulators/hardware and run automated tests? stage them? promote them?), but we also see folks using it just for builds.

Why should one use a build system over that which is included as part of an IDE?

I've heard more than one person say that if your build process is clicking the build button, then your build process is broken. Frequently this is accompanied by advice to use things like make, CMake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?
EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.
EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about Continuous Integration: yes, I understand completely that when you have many developers on a project, having CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. They are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
Aside from continuous integration needs which everyone else has already addressed, you may also simply want to automate some other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, or running your unit tests, or resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
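As an illustration of how small such a step can be, here is a sketch of a version-stamping post-build action in Python; the version.h file name and the BUILD_NUMBER macro are hypothetical stand-ins for whatever your project actually uses:

    import re

    PATH = 'version.h'  # hypothetical header holding the build number

    text = open(PATH).read()
    # Find '#define BUILD_NUMBER <n>' and rewrite it as n + 1.
    text = re.sub(r'(#define\s+BUILD_NUMBER\s+)(\d+)',
                  lambda m: m.group(1) + str(int(m.group(2)) + 1),
                  text)
    open(PATH, 'w').write(text)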
Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.
One reason for using a build system that I'm surprised nobody else has mentioned is flexibility. In the past, I also used my IDE's built-in build system to compile my code. I ran into a big problem, however, when the IDE I was using was discontinued. My ability to compile my code was tied to my IDE, so I was forced to re-do my entire build system. The second time around, though, I didn't make the same mistake. I implemented my build system via makefiles so that I could switch compilers and IDEs at will without needing to re-implement the build system yet again.
I encountered a similar problem at work. We had an in-house utility that was built as a Visual Studio project. It's a fairly simple utility and hasn't needed updating for years, but we recently found a rare bug that needed fixing. To our dismay, we found out that the utility was built using a version of Visual Studio that was 5-6 versions older than what we currently have. The new VS wouldn't read the old-version project file correctly, and we had to re-create the project from scratch. Even though we were still using the same IDE, version differences broke our build system.
When you use a separate build system, you are completely in control of it. Changing IDEs or versions of IDEs won't break anything. If your build system is based on an open-source tool like make, you also don't have to worry about your build tools being discontinued or abandoned because you can always re-build them from source (plus fix bugs) if needed. Relying on your IDE's build system introduces a single point of failure (especially on platforms like Visual Studio that also integrate the compiler), and in my mind that's been enough of a reason for me to separate my build system and IDE.
On a more philosophical level, I'm a firm believer that it's not a good thing to automate away something that you don't understand. It's good to use automation to make yourself more productive, but only if you have a firm understanding of what's going on under the hood (so that you're not stuck when the automation breaks, if for no other reason). I used my IDE's built-in build system when I first started programming because it was easy and automatic. I later started to become more aware that I didn't really understand what was happening when I clicked the "compile" button. I did a little reading and started to put together a simple build script from scratch, comparing my output to that of the IDE's build system. After a while I realized that I now had the power to do all sorts of things that were difficult or impossible through the IDE. Customizing the compiler's command-line options beyond what the IDE provided, I was able to produce a smaller, slightly faster output. More importantly, I became a better programmer by having real knowledge of the entire development process from writing code all the way down through the generation of machine language. Understanding and controlling the entire end-to-end process allows me to optimize and customize all of it to the needs of whatever project I'm currently working on.
If you have a hands-off, continuous-integration build process, it's going to be driven by an Ant- or make-style script. When changes are detected, your CI process will check the code out of version control onto a separate build machine, compile, test, package, deploy, and create a summary report.
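In outline, that cycle is just a loop. A hedged sketch, assuming a reasonably recent svn client; the repository URL, solution name, and test runner are placeholders, and a real server would mail the report rather than print it:

    import subprocess, time

    REPO = 'http://svn.example.com/trunk'    # placeholder repository URL

    def head_revision():
        out = subprocess.check_output(['svn', 'info', '--show-item', 'revision', REPO])
        return out.strip()

    def report(rev, ok):
        # A real CI server mails a summary report here.
        print(f'revision {rev.decode()}: {"SUCCESS" if ok else "BROKEN"}')

    last = None
    while True:
        rev = head_revision()
        if rev != last:                      # a commit happened since the last cycle
            subprocess.check_call(['svn', 'checkout', REPO, 'work'])
            ok = subprocess.call(['msbuild', 'work/app.sln', '/t:Build']) == 0
            ok = ok and subprocess.call(['work/run_tests.exe']) == 0
            report(rev, ok)                  # packaging and deployment would follow here
            last = rev
        time.sleep(60)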
Let's say you have 5 people working on the same set of code. Each of those 5 people is making updates to the same set of files. Now, you may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's changes and try. This is easy every once in a while, but it quickly becomes tiresome to do over and over again.
With a build server that does it automatically, it checks whether the code compiles for everyone, all the time. Everyone always knows if something is wrong with the build, and what the problem is, and no one has to do any work to figure it out. Small things add up; it may take a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially if multiple people are doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.
Here's another cool thing, too. Our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. It reloads snapshots of all the databases it needs to apply patches to and runs them to make sure that they all work and run in the order they are supposed to. The build server is also smart enough to run all the unit tests/automation tests and return the results. Making sure it can compile is fine, but with an automation server, it can handle many, many steps automatically that would take a person maybe an hour to do.
Taking this a step further, if you have an automated deployment process along with the build server, the deployment is automatic. Anyone who can press the button to run the process can deploy code to QA or production. This means that a programmer doesn't have to spend time doing it manually, which is error-prone. When we didn't have the process, it was always a crapshoot as to whether or not everything would be installed correctly, and generally it was a network admin or a programmer who had to do it, because they had to know how to configure IIS and move the files. Now even our most junior QA person can refresh the server, because all they need to know is which button to push.
The IDE build systems I've used are all usable from things like automated build / CI tools, so there is no need to have a separate build script as such.
However on top of that build system you need to automate testing, versioning, source control tagging, and deployment (and anything else you need to release your product).
So you create scripts that extend your IDE build and do the extras.
One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate with changes made by other developers (ie. merge).
If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. It may be using a text-based format like XML, but XML is notoriously hard to handle with standard diff/merge tools. Just the fact that people are using a GUI to make edits makes it more likely that you end up with unnecessary changes in the project files.
With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.

What is the purpose of a dedicated "Build Server"? [closed]

I haven't worked for very large organizations and I've never worked for a company that had a "Build Server".
What is their purpose?
Why aren't the developers building the project on their local machines, or are they?
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
The only place I see a build server being useful is for continuous integration, with the build server constantly building what is committed to the repository. Is it that I have just not worked on projects large enough?
Someone, please enlighten me: What is the purpose of a build server?
The reason given is actually a huge benefit. Builds that go to QA should only ever come from a system that builds only from the repository. This way build packages are reproducible and traceable. Developers manually building code for anything except their own testing is dangerous. Too much risk of stuff not getting checked in, being out of date with other people's changes, etc. etc.
Joel Spolsky on this matter.
Build servers are important for several reasons.
They isolate the environment. The local Code Monkey developer says "it compiles on my machine" when it won't compile on yours. This can mean out-of-sync check-ins or it could mean a dependent library is missing. Jar hell isn't nearly as bad as .dll hell; either way, using a build server is cheap insurance that your builds won't mysteriously fail or package the wrong libraries by mistake.
They focus the tasks associated with builds. This includes updating the build tag, creating any distribution packaging, running automated tests, creating and distributing build reports. Automation is the key.
They coordinate (distributed) development. The standard case is where multiple developers are working on the same code base. The version control system is the heart of this sort of distributed development, but depending on the tool, the developers may not interact with each other's code much. Instead of forcing developers to risk bad builds or worry about merging code overly aggressively, design the build process so that the automated build can see the appropriate code and process the build artifacts in a predictable way. That way, when a developer commits something with a problem, like not checking in a new file dependency, they can be notified quickly. Doing this in a staged area lets you flag the code that has built, so that developers don't pull code that would break their local build. PVCS did this quite well using the idea of promotion groups. ClearCase could do it too using labels, but that would require more process administration than a lot of shops care to provide.
What is their purpose?
Take load of developer machines, provide a stable, reproducible environment for builds.
Why aren't the developers building the project on their local machines, or are they?
Because with complex software, amazingly many things can go wrong when just "compiling through". Problems I have actually encountered:
Incomplete dependency checks of different kinds, resulting in binaries not being updated.
Publish commands failing silently, the error message in the log ignored.
Builds including local sources not yet committed to source control (fortunately, no "damn customers" message boxes yet...).
When trying to avoid the above problem by building from another folder, some files were picked up from the wrong folder.
The target folder where binaries are aggregated contains additional stale developer files that should not be included in the release.
We've seen an amazing stability increase since all public releases start with a get from source control onto an empty folder. Before, there were lots of "funny problems" that "went away when Joe gave me a new DLL".
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
What's "reasonable"? If I run a batch build on my local machine, there are many things I can't do. Rather than pay developers for builds to complete, pay IT to buy a real build machine already.
Is it I have just not worked on projects large enough?
Size is certainly one factor, but not the only one.
A build server is a distinct concept from a Continuous Integration server. The CI server exists to build your projects when changes are made. By contrast, a build server exists to build the project (typically a release, against a tagged revision) in a clean environment. It ensures that no developer hacks, tweaks, unapproved config/artifact versions, or uncommitted code makes it into the released code.
The build server is used to build everyone's code when it is checked in. Your code may compile locally, but you most likely won't have all the changes made by everyone else all the time.
To add to what has already been said:
An ex-colleague worked on the Microsoft Office team and told me a complete build sometimes took 9 hours. That would suck to do it on YOUR machine, wouldn't it?
It's necessary to have a "clean" environment free of artifacts of previous versions (and configuration changes) in order to ensure that builds and tests work and don't depend on the artifacts. An effective way to isolate is to create a separate build server.
I agree with the answers so far in regards to stability, tracability, and reproducability. (Lots of 'ity's, right?). Having ONLY ever worked for large companies (Health Care, Finance) with MANY build servers, I would add that it's also about security. Ever seen the movie Office Space? If a disgruntled developer builds a banking application on his local machine and no one else looks at it or tests it... BOOM. Superman III.
These machines are used for several reasons, all trying to help you provide a superior product.
One use is to simulate a typical end user configuration. The product might work on your computer, with all your development tools and libraries set up, but the end user most likely won't have the same configuration as you. For that matter, other developers won't have the exact same setup as you either. If you have a hardcoded path somewhere in your code, it will probably work on your machine, but when Dev El O'per tries to build the same code, it won't work.
They can also be used to monitor who broke the product last, with what update, and where the product regressed. Whenever new code is checked in, the build server builds it, and if it fails, it's clear that something is wrong and the user who committed last is at fault.
For consistent quality, to get the build "off your machine" to spot environment errors, and so that any files you forget to check in to source control also show up as build errors.
I also use it to create installers as these take a lot of time to do on the desktop with code signing etc.
We use one so that we know that the production/test boxes have the same libraries and versions of those libraries installed as what is available on the build server.
It's about management and testing for us. With a build server we always know that we can build our main "trunk" line from version control. We can create a master install with one-click and publish it to the web. We can run all of our unit tests each time code is checked in to make sure it works. By collecting all these tasks into a single machine it makes it easier to get it right repeatedly.
You are right that developers could build on their own machines.
But these are some of the things our build server buys us, and we're hardly sophisticated build makers:
Version control issues (some have been mentioned in earlier responses)
Efficiency. Devs don't have to stop to make builds locally. They can kick it off on the server and get on to the next task. If builds are large, then that is even more time the dev's machine is not occupied. For those doing continuous integration and automated testing, even better.
Centralization. Our build machine has scripts that make the build, distribute it to UAT environments, and even to production staging. Keeping them in one place reduces the hassle of keeping them in sync.
Security. We don't do much special here, but I'm sure a sysadmin can make it such that production migration tools can only be accessed on a build server by certain authorized entities.
Maybe I'm the only one...
I think everyone agrees that one should
use a file repository
do builds from the repository (and in a clean environment)
use a continuous testing server (e.g. CruiseControl) to see if anything is broken after your "fixes"
But no one cares about automatically built versions.
When something was broken in an automatic build, but it's not anymore - who cares? It's a work in progress. Someone fixed it.
When you want to do a release version, you run a build from the repository. And I'm pretty sure you want to tag the version in the repository at that time, not every six hours when the server does its work.
So maybe "build server" is just a misnomer and it's actually a "continuous test server". Otherwise it sounds pretty much useless.
A build server gets you a sort of second opinion of your code. When you check it in, the code is checked. If it works, the code has a minimum quality.
Additionally, remember that low-level languages take much longer to compile than high-level languages. It's easy to think, "Well look, my .NET project compiles in a couple of seconds! What's the big deal?" A while back I had to mess with some C code, and I had forgotten how much longer it takes to compile.
A build server is used to schedule compile tasks (e.g. nightly builds) of usually large projects located in a repository that can sometimes take more than a couple of hours.
A build server also gives you a basis for escrow, being able to capture all the parts necessary to reproduce a build in the case that others may have rights to take ownership.

Continuous Integration: Unmanaged C++ on Visual Studio 2008

I've spent 4 years developing C++ using Visual Studio 2008 for a commercial company; it's now time for me to upgrade my development process.
Here's the problem: I don't have one-button build automation. I also don't have a CI server that automatically builds when a commit happens and emails me whether the build is broken or not. Worse, we don't even have a single unit test!!
Can someone please point to me how I can get started?
I have looked at many many tools and I think I might go with:
Visual Build (for build automation) (Note: I also considered Final Builder)
Cruise (for CI server)
I am also now just starting to practice TDD... so I will want to automate my unit tests as well. I chose Google Test/Mock for their extensive documentation. (Can't go wrong with the Google brand, can I? =p)
Price is not the issue, I want what's best and easiest to get started.
Can people that use real CI/automation tool for unmanaged MSVC++ tell me their tools and how I can go about starting?
Our source control is Subversion.
Last point: I'm also considering a project management/tracking tool that integrates right into VSTD... and thinking about using OnTime. VSTS costs too much. I tried FogBugz, but I think it's too simple. Any others?
I would take some time to seriously consider TeamCity. We used CruiseControl.NET for a while and TeamCity completely demolishes it. Plus it has built-in plugins for Boost and CppUnit, so your unit testing will come for free.
Best of all, the tool is free for < 20 users and gives you three build agents.
I just finished implementing this for our C++ product at work, and it was fairly simple. We did it with MSBuild, basically using the MSBuild task to compile the solution. Other targets can be used to copy files, run unit tests, etc.
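Their setup drives everything from MSBuild's own project files; as a rough script-shaped equivalent (the solution name and test runner are hypothetical), the whole driver can be as small as:

    import subprocess

    def msbuild(solution, config):
        # msbuild builds the solution; /m enables parallel project builds
        subprocess.check_call(['msbuild', solution,
                               f'/p:Configuration={config}', '/m'])

    msbuild('Product.sln', 'Release')              # compile step
    subprocess.check_call(['run_unit_tests.exe'])  # further steps: copy files, run tests, etc.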
The last time I worked on an unmanaged MSVC++ project (which was moderately sized I might add), we used FinalBuilder to do the automated build & versioning (and even executing PCLint and other profiling tools as well).
Having said that, if you're willing to invest the time, MSBuild (or nAnt perhaps?) can do everything you need - even for unmanaged solutions.
Which brings us to the trade-off: tools like Visual Build Pro and Final Builder get you up and running quickly. If you want something which offers a greater range of customization, you'll probably be spending a decent amount of time learning and understanding it; MSBuild, CIFactory, NAnt, etc. are no cakewalk.
So if price isn't an issue, is time an issue? If time is at a premium, I'd investigate the GUI-driven tools; they'll get you to where you want to go quickly. If you know you're going to need to extend on the simple one-button build + unit tests + deploy scenario (which happens a lot!), then decide whether you can invest the time into the more complex tools like MSBuild.
We use a combination of Boost.Build, NAnt, CPPUnit and either Cruise Control.NET or Hudson (we've used them both for various projects but are starting to prefer Hudson).
They are all good tools though we're considering replacing CPPUnit - the Google unit test system is pretty good from what I've seen.
If you're happy running on just Windows you can lose Boost.Build and just call out to Visual Studio from NAnt.
As for issue tracking/project management, we settled on Vision Project after a long investigation. It's not well known (yet), but we've found it a very good fit in our environment. FogBugz is great, with a nice, clear interface, but we came to the conclusion you did too: way too simple for our needs.
Although the .NET world is spoilt for these kinds of tools, continuous integration is still pretty easy to set up for C++! I wouldn't think of starting a non-trivial project without putting these systems in place.
We use Subversion + CruiseControl + WiX to accomplish CI automated builds outputting one-click installers. This combo has worked very well for us. We've created our own site for admin of SVN user groups and permissioning and added the web interface of CC to it. We have a SQL Server storing all the collected stats from SVN and CC and use them for custom reports available on our site. We are looking to add other tools to the mix for checking various attributes of the code stored in SVN.
At my company we use CruiseControl (http://cruisecontrol.sourceforge.net/). The Java version, not .NET, to build our wxWidgets application on Windows and OS X. Working great for us so far.