How to optimize build speed in Visual Studio 2008 (C++)

Could someone give me tips to increase build speed in Visual Studio 2008?
I have a large project with many modules, all with full source. Every time it is built, every file is recompiled, even ones that have not changed. Can I prevent these files from being rebuilt?
I turned on the property "Enable Minimal Rebuild" (/Gm), but the compiler threw this warning:
Command line warning D9030 : '/Gm' is incompatible with multiprocessing; ignoring /MP switch
Any tips to increase build speed would help me a lot.
Thanks,

C++ compile times are a battle for every project above a certain size. Luckily, with so many people writing large C++ projects, there are a variety of solutions:
The classic cheap solution is the "unity build". This just means using #include to pull all of your .cpp files into a single file for compilation. "Unity build" has come up in a number of questions here on Stack Overflow; here is the most prominent one that I'm aware of. This screencast demonstrates how to set up such a build in Visual Studio.
My understanding is that unity builds are much faster than classic builds because they effectively cache work done by the preprocessor and linker. One drawback to the unity build is that if you touch one cpp file you'll have to recompile your "big" cpp file. You can work around this by breaking the cpp file you're iterating on out of the unity build and compiling it on its own.
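A minimal sketch of what such a unity file looks like; the .cpp names here are hypothetical placeholders for your own translation units:

// unity_build.cpp -- compile only this file instead of the
// individual .cpp files listed below (hypothetical names)
#include "player.cpp"
#include "enemy.cpp"
#include "physics.cpp"
#include "renderer.cpp"

The header parsing and template instantiation work is then done once for the whole group instead of once per .cpp file.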
Beyond Unity builds, here's a list of my best practices:
Use #include only when necessary; prefer forward declarations
Use the pimpl idiom to keep class implementation details out of commonly included header files. Doing so allows you to add members to an implementation without suffering a long recompile (see the sketch after this list)
Make use of precompiled headers (pch) for commonly included header files that change rarely
Make sure that your build system is using all of the cores available on the local hardware
Keep the list of directories the preprocessor has to search minimal; use precise paths in #include statements
Use #pragma once at the top of header files instead of #ifndef FOO_H #define FOO_H ... #endif. With the #ifndef guard the compiler typically has to open the header file each time it is included, whereas #pragma once allows the compiler to skip it entirely
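As a rough sketch of the forward-declaration and pimpl points above (Widget, Renderer, and the member names are invented for illustration; a raw pointer is used to stay C++03-friendly for VS2008):

// widget.h -- no heavy #includes; clients see only a forward declaration
#pragma once

class Renderer;            // forward declaration instead of #include "renderer.h"

class Widget {
public:
    Widget();
    ~Widget();             // defined in widget.cpp, where Impl is complete
    void draw(Renderer& r);
private:
    struct Impl;           // pimpl: private members live in widget.cpp
    Impl* impl_;
};

// widget.cpp -- the only file that pays for the heavy headers
#include "widget.h"
#include "renderer.h"      // hypothetical heavy header

struct Widget::Impl {
    int frameCount;        // new members go here without touching widget.h
};

Widget::Widget() : impl_(new Impl()) {}
Widget::~Widget() { delete impl_; }
void Widget::draw(Renderer&) { /* ... */ }

Adding a member to Impl now recompiles widget.cpp alone, not every file that includes widget.h.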
If you're doing all that (the unity build will make the biggest difference, in my experience), the last option is distributed building. distcc is the best free solution I'm aware of; Incredibuild is the proprietary industry standard. I'm of the opinion that distributed computing is going to be the only way to get great iteration times out of the messy C++ compilation process. If you have access to a reasonably large number of machines (say, 10-20), this is totally worth looking into. I should mention that unity builds and distributed builds don't combine well, because a traditional compile can be split into smaller chunks of work than a unity build. If you want to go distributed, it's probably not worth setting up a unity build.

Enable Minimal Rebuild (/Gm) is incompatible with Build with Multiple Processes (/MP<n>), so you can only use one of the two at a time.
/Gm: Project Properties > Configuration Properties > C/C++ > Code Generation
or
/MP<n>: Project Properties > Configuration Properties > C/C++ > Command Line
Also, to prevent unnecessary builds (of untouched files), structure your code properly and follow this rule:
In the [.h] header files, place only what is needed by the contents of the header file itself, plus whatever must be shared across multiple translation units.
Everything else goes in the implementation [.c / .cpp] files.
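A small sketch of that split, with invented file and function names: the header exposes only the declaration it must share, while the heavy includes stay in the implementation file:

// logger.h -- only what other translation units need
#pragma once
#include <string>              // required: std::string appears in the interface

void log_message(const std::string& text);

// logger.cpp -- heavy includes stay here; changing them recompiles one file
#include "logger.h"
#include <fstream>             // implementation detail, invisible to clients

void log_message(const std::string& text) {
    std::ofstream out("app.log", std::ios::app);
    out << text << '\n';
}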

One simple way is to compile in debug mode (i.e. with zero optimizations); this of course is only for internal testing.
You can also use precompiled headers* to speed up processing, or break off 'unchanging' segments into static libs, removing those from the recompile.
*With /MP you need to create the precompiled header before doing the multiprocess compilation, as /MP can read a .pch but not write one, according to MSDN.
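A hedged command-line sketch of that two-step ordering; the file names are the usual defaults plus placeholders, and every .cpp compiled with /Yu is assumed to start with #include "stdafx.h":

cl /c /Yc"stdafx.h" stdafx.cpp
cl /c /MP /Yu"stdafx.h" file1.cpp file2.cpp file3.cpp

The first invocation writes the .pch in a single process; the second fans out across cores and only reads it.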

Can you provide more information on the structure of your project and which files are being rebuilt?
Unchanged C++ files may be rebuilt because they include header files that have changed; in that case the /Gm option will not help

Rebuilding all files after changing one header file is a common problem. The solution is to closely examine which #include directives your header files use and remove all that can be removed. If your code only uses pointers and references to a class, you can replace
#include "foo.h"
with
class Foo;
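For instance, a header like the following compiles without ever seeing foo.h (Bar and its members are invented for illustration); only the .cpp file that calls through the pointer needs the real include:

// bar.h -- no #include "foo.h" needed: pointers and references
// to an incomplete type are fine in declarations
class Foo;

class Bar {
public:
    void process(Foo& foo);    // reference parameter: forward declaration suffices
private:
    Foo* current_;             // pointer member: the size of Foo is not needed here
};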

Related

Do I have to recompile C++ code every time during development?

Let's say we have a large code base and we are doing development in C++. Do we have to recompile every time in order to test the code?
If yes, then development is going to take ages.
What's the solution to this problem?
Yes, you'll definitely need to compile C++ code if you want to test it. C++ code can't be executed without being compiled.
However, if you organize your project smartly, compilation could take only a few seconds, or maybe up to a minute, even if there are thousands (or even more) of files.
By default, your build system will run an incremental build, unless you explicitly request a "rebuild" or did a "clean" previously. It will then invoke the compiler/linker accordingly and make sure it compiles/links only what needs to be: if a cpp file did not change, there is no need to compile it. This is all based on file timestamps: if the generated "object" file is newer than the cpp source file, the build system knows it is up-to-date and won't generate it again. Whether you use Visual Studio, CMake, or any other IDE or build system, they all support this!
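A minimal sketch of that timestamp rule in C++ (illustrative only; it uses C++17 <filesystem>, far newer than VS2008, and ignores the header dependencies that real build systems also track):

#include <filesystem>
namespace fs = std::filesystem;

// Recompile when there is no object file yet, or when the source
// was modified after the object file was produced.
bool needs_recompile(const fs::path& source, const fs::path& object) {
    return !fs::exists(object) ||
           fs::last_write_time(object) < fs::last_write_time(source);
}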
Additionally, you can follow some guidelines to make this even faster:
Firstly, organize your project in modules (libraries), ideally dynamically linked. When a file from a library is changed, only that library needs to be compiled (other libraries or programs using the modified library won't have to be compiled again).
When you modify only an implementation file (a cpp file), only that file needs to be compiled, plus a relink of the module using it.
When you modify a header file (an h file), all cpp files including it will need to be recompiled, so you must be careful to optimize your includes. Prefer forward declarations (see why here) to includes whenever possible; otherwise your header becomes a dependency of every cpp file using the other header file that includes yours, and as a cascade, modifying this header file ends up requiring the compilation of tons of cpp files. Don't include files you don't need (because it will fire a new, useless build when the header file changes). Possibly use precompiled headers to speed up compilation.
Note: As commented, there are apparently some tools that can interpret C++ without compiling it... but that's not what C++ was designed for in the first place. And I doubt they will be as fast as compiled code at runtime... so you'll probably save 20 seconds of incremental build time and then lose minutes at runtime...

Visual Studio 2010, Intellisense and PCH: what are the alternatives to ugly stdafx.h?

I recently switched to Visual Studio 2010, and for Intellisense not to take half a minute to show up when using Boost libraries, Microsoft's suggestion seems to be to use precompiled headers.
Except that I never used them before (except when forced to by Ugly ATL Wizards (TM)), so I searched around to figure out how they work.
Basically, the Big Centralized stdafx.h approach seems plain wrong. I never want to include (even cheaply) a whole bunch of header files in all my sources. Since I don't use windows libraries (I make C++/CLI higher level wrappers, then use .NET for talking to the outside world), I don't have "a whole truckload of non-changing enormous headers". Just boost and standard library headers scattered around.
There is an interesting approach to this problem, but I can't quite figure out how to make it work. It seems that each source file must be compiled twice (please correct me if I'm wrong): once with /Yc and once with /Yu. This adds a burden on the developer, who must manually tweak the build system.
I was hoping to find some "automatically generate one precompiled header for each source file" trick, or at least some "best practices", but most people seem happy with including the world into stdafx.h.
What are the options available to me to use precompiled headers on a per-source-file basis? I don't really care about build times (as long as they don't skyrocket); I just want Intellisense to work fast.
For starters, you are reading the article wrong. Every file is NOT compiled twice. The file stdafx.cpp gets compiled once with /Yc (c for create) before anything else, and then every other file in your project gets compiled once with /Yu (u for use), importing the saved state previously created from stdafx.cpp.
Secondly, the article is 7 years old and is talking about VC++ 6, so you should start off distrusting it. But even assuming the information in it still applies to VC++ 2008 or 2010, it seems like bad advice. The approach it recommends, using #pragma hdrstop, is a solution looking for a problem. If you have headers that contain things you don't want in every file, then they simply shouldn't go in your precompiled header.
Your problem basically seems to be that Intellisense is slow for Boost in VS2010? I don't have a direct solution for that, but could Visual Assist X be an option for you? I have used it in various versions of Visual Studio with great pleasure; not a direct solution, but it might work for you.
Precompiled headers aren't too bad if you use them properly.
Don't use them as a replacement for proper and precise #includes, but as a way to speed things up. Achieve this by making the precompiled header do nothing in release builds, only speeding stuff up in debug.
You are wrong; each file is only compiled once. You have one .cpp file that is compiled with /Yc, and the rest are compiled with /Yu. The /Yc file, which is stdafx.cpp by default, contains one line: #include "myMainHeader.h" (changing the name from the default). All other .cpp files must start with #include "myMainHeader.h". When your /Yc file is compiled, the entire internal state of the compiler is saved; that saved state is then loaded when each of your other files is compiled. That is why you must start by including the PCH header: so that the /Yu option doesn't change the result of compilation, only the time it takes. Xcode does not impose this requirement and will use a PCH regardless of whether your .cpp file starts with the right include directive. I have used libraries that relied on this and could not be built without PCH.
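A sketch of the layout that answer describes, using its myMainHeader.h example name (the Boost include is just a stand-in for any heavy, rarely changing header):

// myMainHeader.h -- the headers worth precompiling
#pragma once
#include <vector>
#include <string>
#include <boost/shared_ptr.hpp>

// stdafx.cpp -- compiled with /Yc"myMainHeader.h"; its only job is to
// produce the .pch snapshot of the compiler's state
#include "myMainHeader.h"

// anyOther.cpp -- compiled with /Yu"myMainHeader.h"; must begin with this
// include so the saved state can substitute for everything up to that line
#include "myMainHeader.h"
// ... rest of the implementation ...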

Sharing Pre-compiled Headers efficiently

I have a framework which is being used by several projects (which includes several samples to show how the framework works). The framework has components such as the core, graphics, physics, gui etc. Each one is a separate library. There are several configurations as well.
A main solution file compiles the complete project with all the possible configurations so that the projects can use the libraries. Since the framework is rarely recompiled, especially by someone (including me) working on a project that utilizes the framework, it makes sense to pre-compile the many headers.
Initially, each project/sample had its own precompiled header used for the whole project, and each time I would have to rebuild the same PCH (for example, Debug). So I decided that a shared PCH would reduce the redundant PCH compilation. So far so good: I have a project that compiles the PCH along with the libraries, and all the subsequent projects/samples now use the same PCH. This has worked wonderfully.
The only problem is that I have seen an increase in file size. This is not a roadblock: if a project that uses the framework is intended to be released, it can sever itself from the shared PCH and make its own. I have done this for the sake of rapid development (I have actually created a tool which creates the VS project files and source files for a new project/sample ready to be built, and which also facilitates upgrading a previous project that was using an older version of the framework).
Anyway, I presume that the increase in file size is because the independent VS project file that creates the shared PCH includes all the headers from all the libraries. My question is whether I can use conditional compilation (#ifndef) to reduce the size of the final executable. Or maybe I can share multiple PCH files somehow (as far as I know, that is not possible, but I may be wrong). If I am not making sense, please say so (in kind words :) ), as my knowledge of PCH files is very limited.
Thanks!
Note: To reiterate and make it clear: so far, I have one solution file that compiles all the libraries, including the shared PCH. If I now recompile all the samples and projects, they compile in a couple of seconds or so at most; before, each project would recreate its own PCH file. Also, initially I wanted a PCH for each library, but then I found out that a source file cannot use multiple PCH files, so that option was not feasible. Another option is to compile all possible combinations of PCH files, but that is too time-consuming, cumbersome, and error-prone.
It sounds like the size problem comes from using headers you don't actually need, but that it still makes sense to use those headers when developing because of the faster turnaround.
On using #ifndefs: precompilation is crude. You lose the ability to share the precompilation work from the point where there is a difference. If you use #ifndefs to make different variants of what you include, i.e. if you have
#ifndef FOO
then the precompiled header must stop before the point where FOO is defined differently in two files that use that precompiled header. So #ifndef is not going to solve the problem: the net result is that FOO must be the same, or you're back to separate .pch files for the different projects. Neither solves things.
As to sharing multiple .pch files: a fundamental limitation of .pch files is that each .obj can only use one. Of course, .pch files can contain arbitrary combinations of headers: you could have a .pch for core+graphics, a .pch for core+physics, one for core+ai, and so on. This would work just dandy if no source file needed to 'talk' to more than core plus one module at a time, but that does not sound realistic to me. Such a scheme, and variants on it, sound like a lot of restructuring work for no real gain. You don't want to be building zillions of combinations and keeping track of them all; it's possible, but it is not going to save you time.
In my view you're doing exactly the right thing by sacrificing executable size for fast turn-around during development/debugging, and then having a slower but leaner way of building for the actual release.
In the past I've found that you quite quickly run into diminishing returns as you put more into the precompiled headers, so if you're trying to put more in to make them useful in a larger number of projects, at some point the extra content will start to slow things down. On our projects the PCH files take longer than most source files to compile, but still only a few seconds at most. I would suggest making the PCH files specific to each project you are using. You are right in saying that a source file can only refer to a single PCH file, but one way of getting around this is to use the 'force include' option (in the Advanced tab, I think) to ensure that all files include the PCH file for that project.
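The 'force include' property mentioned above maps to the /FI compiler switch; a hedged sketch of the resulting command line, with a hypothetical per-project PCH header name:

cl /c /Yu"project_pch.h" /FI"project_pch.h" some_file.cpp

/FI injects the include at the top of the source file, so the files themselves don't need a literal #include "project_pch.h" line.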

Precompiled headers question

I am right now reorganizing my project, and what was recently a simple application has now become a pair of C++ projects: a static library and the real application.
I would like to share one precompiled header between the two projects, but I am facing some trouble setting up the .pdb file paths.
Assume my first project is called Library and builds its .lib file with a corresponding Library.pdb file. The second project is called Application and builds everything into the same folder (the .exe and another Application.pdb file).
Right now, both my projects create their own precompiled header files (Library.pch and Application.pch) based on one actual header file. It works, but I think it's a waste of time, and there should be a way to share one precompiled header between the two projects.
If in my Application project I set the Use Precompiled Header (/Yu) option and point it at Library.pch, it doesn't work, because of the following error:
error C2858: command-line option 'program database name "Application.pdb"' inconsistent with precompiled header, which used "Library.pdb".
So, does anyone know some trick or way to share one precompiled header between two projects preserving proper debug information?
The question is why you want to share the precompiled header (PCH) files. Generally I would say that does not make sense: a PCH is used to speed up compiling, not to share any information between different projects.
Since you also write about the PDB file, you probably want to debug the library code with your applications. This can be achieved by setting the /Fd parameter when compiling the library. When you link the library into your application and the linker finds the corresponding PDB file, you get full debug support.
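A hedged command-line sketch of that /Fd usage, with illustrative file names:

cl /c /Zi /Fd"Library.pdb" library_source.cpp
lib /OUT:Library.lib library_source.obj

/Zi emits debug information and /Fd names the PDB that goes with it; keep Library.pdb next to Library.lib and the application's linker/debugger can pick it up.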
This sounds complicated and cumbersome to set up. More than that, it may not be possible at all.
Instead, you can include the precompiled header from one application into the second. It will still be compiled once for the second project, but maintenance becomes easy and you do not have to redefine the dependencies in the second project (just include them).

Why does stdafx.h work the way it does?

As usual, when my brain's messing with something I can't figure out myself, I come to you guys for help :)
This time I've been wondering why stdafx.h works the way it does. To my understanding, it does 2 things:
Includes standard headers which we might (?) use and which are rarely changed
Works as a compiler bookmark for the point after which code is no longer precompiled.
Now, these 2 things seem like two very different tasks to me, and I wonder why they didn't use two separate steps to take care of them. To me it seems reasonable to have a #pragma command do the bookmarking, and to optionally have a header file along the lines of windows.h do the including of often-used headers... Which brings me to my next point: why are we forced to include often-used headers through stdafx.h? Personally, I'm not aware of any often-used headers that I'm not already doing my own includes for; but maybe these headers are necessary for .dll generation?
Thx in advance
stdafx.h is ONE way of having Visual Studio do precompiled headers. It's a simple-to-use, easy-to-generate approach that works well for smaller apps, but it can cause problems for larger, more complex apps: because it effectively encourages the use of a single header file, it can cause coupling across components that are otherwise independent. If used just for system headers it tends to be OK, but as a project grows in size and complexity it's tempting to throw other headers in there, and then suddenly changing any header file results in the recompilation of everything in the project.
See here: Is there a way to use pre-compiled headers in VC++ without requiring stdafx.h? for details of an alternative approach.
You are not forced to use "stdafx.h". You can turn off Use Precompiled Headers in the project properties (or when creating the project) and you won't need stdafx.h anymore.
The compiler uses it as a clue so it can precompile the most-used headers separately into a .pch file, reducing compilation time (it doesn't have to compile them every time).
It keeps the compile time down, as the stuff in it is always compiled first (see details in the quote below):
stdafx.h is a file that describes both standard system and project specific include files that are used frequently but hardly ever changed.
Compatible compilers will pre-compile this file to reduce overall compile times. Visual C++ will not compile anything before the #include "stdafx.h" in the source file, unless the compile option /Yu'stdafx.h' is unchecked (by default); it assumes all code in the source up to and including that line is already compiled.
It will help reduce long compilations.