Sharing Pre-compiled Headers efficiently - c++

I have a framework which is being used by several projects (which includes several samples to show how the framework works). The framework has components such as the core, graphics, physics, gui etc. Each one is a separate library. There are several configurations as well.
A main solution file compiles the complete project with all the possible configurations so that the projects can use the libraries. Since the framework is rarely recompiled, especially by someone (including me) working on a project that utilizes the framework, it makes sense to pre-compile the many headers.
Initially I had each project/sample use its own pre-compiled header for the whole project. Each time I would have to rebuild the same PCH (for example, Debug), so I decided that a shared PCH would reduce the redundant PCH compilation. So far so good. I have a project that compiles the PCH along with the libraries. All the subsequent projects/samples now use the same PCH. This has worked wonderfully.
The only problem is that I have seen an increase in file size. This is not a roadblock: if a project that uses the framework is intended to be released, it can sever itself from the shared PCH and make its own. I have done this for the sake of rapid development (I have actually created a tool which creates the VS project files and source files for a new project/sample ready to be built, and which also helps upgrade a previous project that was using an older version of the framework).
Anyway, I am presuming that the increase in file size is because the independent VS project file that creates the shared PCH includes all the headers from all the libraries. My question is whether I can use conditional compilation (#ifndef) to reduce the size of the final executable? Or maybe share multiple PCH files somehow (as far as I know that is not possible, but I may be wrong). If I am not making sense, please say so (in kind words :) ) as my knowledge of PCH files is very limited.
Thanks!
Note: To reiterate and make it clear: so far, I have one solution file that compiles all the libraries, including the shared PCH. Now if I recompile all the samples and projects, they each compile in a couple of seconds at most. Before, each project would recreate its own PCH file. Also, I initially wanted a PCH for each library, but then I found out that a source file cannot use multiple PCH files, so that option was not feasible. Another option is to compile all possible combinations of PCH files, but that is too time-consuming, cumbersome and error-prone.

It sounds like the size problem is coming from using headers you don't actually need, but that it still makes sense to use these headers when developing because of the faster turnaround.
On using #ifndefs: precompilation is crude. You lose the ability to share the precompilation work at the point where there is a difference. If you use #ifndefs to make different variants of what you include, i.e. if you have
#ifndef FOO
then the precompiled header must stop before the point where FOO is defined differently in two files that use that precompiled header. So #ifndef is not going to solve the problem: either FOO must be the same everywhere, or you're back to separate PCH files for the different projects, and neither gets you anywhere.
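To make that concrete, here is a minimal sketch (the header and macro names are made up, not from the question):

// stdafx.h -- the shared precompiled header; every .cpp that uses the
// resulting .pch must see exactly the same preprocessor state for it.
#include "core.h"
#include "graphics.h"

#ifndef FOO                // FOO differs between two projects that use this...
#include "foo_fallback.h"  // ...so the shareable part of the .pch has to end
#endif                     // before this point, which defeats the purpose.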
As to sharing multiple .pch files: A fundamental limitation of .pch files is that each .obj can only use one. Of course .pch files can have arbitrary combinations of headers. You could have a .pch for core+graphics, a .pch for core+physics, core+ai etc. This would work just dandy if none of the source files needed to 'talk' to more than core+one-module at a time. That does not sound realistic to me. Such a scheme and variants on it sound like a lot of restructuring work for no real gain. You don't want to be building zillions of combinations and keeping track of them all. It's possible, but it is not going to save you time.
In my view you're doing exactly the right thing by sacrificing executable size for fast turn-around during development/debugging, and then having a slower but leaner way of building for the actual release.

In the past I've found that you quite quickly run into diminishing returns as you put more into the precompiled headers, so if you're trying to put more in to make them useful to a larger number of projects, you will hit a point where it slows things down. On our projects the PCH files take longer than most source files to compile, but still only a few seconds at most. I would suggest making the PCH files specific to each project you are using. You are right in saying that a source file can only refer to a single PCH file, but one way of getting around this is to use the 'force include' option (on the Advanced tab, I think) to ensure that all files include the PCH file for that project.
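For illustration, a sketch of what the force-include buys you (the file name is made up):

// sample.cpp -- with the "Forced Include File" option (/FI) set to the
// project's PCH header, the compiler acts as if the first line here were
//   #include "stdafx.h"
// so no source file can forget to pull in the PCH.
#include <vector>          // the PCH contents are already injected above this

int main() {
    std::vector<int> samples{1, 2, 3};
    return static_cast<int>(samples.size());
}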

Related

gather actually used C++ code lines

Is there a way (most likely using gcc/g++ itself?) to determine which lines of code, across all files included by a single compilation, are actually used?
I use many third-party includes and would like to strip them down to the code that is actually used, to speed up compilation.
Maybe g++ can output this through some combination of its many options?
The best advice I can offer is to break apart huge include files into smaller pieces. This allows developers to include only the files they need to resolve symbols.
My favorite example is windows.h. That mega include file declares the entire Windows API, whether you need it or not. If you only want the APIs for processing files, you get the APIs for dialog boxes as well.
Some shops like the monster include files because they only have to include the one file in their sources. One drawback is that every source file now depends on the mega include file: if I change one of the include files, the entire system will be rebuilt instead of just the few modules that depend on the header file that I changed.
In my projects, only a couple of files are compiled at a time; usually less than 5. This makes the average turnaround time (modify then build) very fast. The entire system is rebuilt overnight by a server or by developers. There is no need to rebuild source files that have not changed.
So split up your modules, so you are not rebuilding your system every time.
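A sketch of the idea with made-up file names; each source file includes only the slice it actually uses, so editing one slice does not rebuild everything:

// framework_files.h -- only the file-handling declarations
#ifndef FRAMEWORK_FILES_H
#define FRAMEWORK_FILES_H
bool CopyDataFile(const char* from, const char* to);
#endif

// framework_dialogs.h -- only the dialog declarations
#ifndef FRAMEWORK_DIALOGS_H
#define FRAMEWORK_DIALOGS_H
void ShowErrorDialog(const char* message);
#endif

// file_tools.cpp -- includes framework_files.h and nothing else, so changes
// to framework_dialogs.h never trigger a rebuild of this module.

For windows.h in particular, defining WIN32_LEAN_AND_MEAN before including it trims out some of the rarely used parts of the API.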

How to optimize build speed in Visual Studio 2008

Could someone give me tips to increase build speed in Visual Studio 2008?
I have a large project with many modules, all with full source. Every single time it's built, every file is rebuilt, even ones that have not changed. Can I prevent these files from being rebuilt?
I turned on the "Enable Minimal Rebuild" (/Gm) property, but the compiler threw this warning:
Command line warning D9030 : '/Gm' is incompatible with multiprocessing; ignoring /MP switch
Any tips to increase build speed will help me a lot.
Thanks,
C++ compile times are a battle for every project above a certain size. Luckily, with so many people writing large C++ projects, there are a variety of solutions:
The classic cheap solution is the "unity build". This just means using #include to put all of your .cpp files into a single file for compilation. "Unity build" has come up in a number of questions here on stackoverflow, here is the most prominent one that I'm aware of. This screencast demonstrates how to set up such a build in Visual Studio.
My understanding is that unity builds are much faster than classic builds because they effectively cache work done by the preprocessor and linker. One drawback to the unity build is that if you touch one cpp file you'll have to recompile your "big" cpp file. You can work around this by breaking the cpp file you're iterating on out of the unity build and compiling it on its own.
Beyond Unity builds, here's a list of my best practices:
Use #include only when necessary; prefer forward declarations
Use the pimpl idiom to keep class implementation details out of commonly included header files. Doing so lets you add members to an implementation without suffering a long recompile (a short sketch follows this list)
Make use of precompiled headers (pch) for commonly included header files that change rarely
Make sure that your build system is using all of the cores available on the local hardware
Keep the list of directories the preprocessor has to search minimal, use precise paths in #include statements
Use #pragma once at the top of header files instead of #ifndef __FOO_H #define __FOO_H ... #endif; with the #ifndef guard the compiler may have to open the header file each time it is included, whereas #pragma once lets it skip the file entirely
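As mentioned above, here is a minimal pimpl sketch (class and file names are illustrative):

// widget.h -- what other translation units include
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                   // defined in the .cpp, where Impl is complete
    void Draw();
private:
    struct Impl;                 // only a forward declaration is exposed
    std::unique_ptr<Impl> impl_;
};

// widget.cpp -- members can be added to Impl without touching widget.h,
// so files that include widget.h do not need to be recompiled.
struct Widget::Impl {
    int frameCount = 0;
};

Widget::Widget() : impl_(new Impl) {}
Widget::~Widget() = default;
void Widget::Draw() { ++impl_->frameCount; }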
If you're doing all that (the unity build will make the biggest difference, in my experience), the last option is distributed building. distcc is the best free solution I'm aware of, incredibuild is the proprietary industry standard. I'm of the opinion that distributed computing is going to be the only way to get great iteration times out of the messy C++ compilation process. If you have access to a reasonably large number of machines (say, 10-20) this is totally worth looking into. I should mention that unity builds and distributed builds are not totally symbiotic because a traditional compile can be split into smaller chunks of work than a unity build. If you want to go distributed, it's probably not worth setting up a unity build.
Enable Minimal Rebuild (/Gm) is incompatible with Build with Multiple Processes (/MP<n>).
Thus you can only use one of these two at a time.
/Gm > Project Properties : Configuration Properties > C/C++ > Code Generation
or
/MP<n> > Project Properties : Configuration Properties > C/C++ > Command Line
Also, to prevent unnecessary rebuilds (of untouched files), structure your code properly; follow this rule:
In the [.h] header files, place only what's needed by the contents of the header file itself,
and whatever you need to share across multiple translation units.
The rest goes in the implementation [.c / .cpp] files.
One simple way is to compile in debug mode (i.e. with zero optimizations); this of course is only for internal testing.
You can also use precompiled headers* to speed up processing, or break off 'unchanging' segments into static libs, removing those from the recompile.
*With /MP you need to create the precompiled header before doing multi-process compilation, as /MP can read a PCH but not write one, according to MSDN.
Can you provide more information on the structure of your project and which files are being rebuilt?
Unchanged C++ files may be rebuilt because they include header files that have been changed; in that case the /Gm option will not help.
Rebuilding all files after changing one header file is a common problem. The solution is to closely examine which #includes your header files use and remove all that can be removed. If your code only uses pointers and references to a class, you can replace
#include "foo.h"
with
class Foo;
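For example (names made up), a header that only stores a pointer or passes a reference compiles fine against the forward declaration; only the .cpp that actually calls into Foo needs the real header:

// bar.h
class Foo;                      // forward declaration instead of #include "foo.h"

class Bar {
public:
    void Process(Foo& foo);     // references and pointers need no full type
private:
    Foo* current_ = nullptr;
};
// Only bar.cpp, which calls methods on Foo, includes "foo.h"; editing foo.h
// then rebuilds bar.cpp but not everything that includes bar.h.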

Precompiled headers question

I am reorganizing my project right now, and what was recently a simple application has become a pair of C++ projects: a static library and the real application.
I would like to share one precompiled header between the two projects, but I'm running into trouble setting up the .pdb file paths.
Assume my first project is called Library and builds its .lib file with a corresponding Library.pdb file. Now, the second project is called Application and builds everything into the same folder (the .exe and another Application.pdb file).
Right now both my projects create their own precompiled header files (Library.pch and Application.pch) based on one actual header file. It works, but I think it's a waste of time, and I also think there should be a way to share one precompiled header between the two projects.
If, in my Application project, I set the Use Precompiled Header (/Yu) option and point it at Library.pch, it doesn't work, because of the following error:
error C2858: command-line option 'program database name "Application.pdb" inconsistent with precompiled header, which used "Library.pdb".
So, does anyone know some trick or way to share one precompiled header between two projects preserving proper debug information?
The question is why you want to share the precompiled header (PCH) files. Generally I would say that does not make sense: PCHs are used to speed up compiling, not to share any information between different projects.
Since you also write about the PDB file, you probably want to debug the library code with your applications. This can be achieved by setting the /Fd parameter when compiling the library. When you link the library in your application and the linker finds the corresponding PDB file, you get full debug support.
This sounds complicated and cumbersome to set up. More than that, it may not be possible at all.
Instead, you can include the precompiled header from one application into the second. It will still be compiled once for the second project, but maintenance becomes easy and you do not have to redefine the dependencies in the second project (just include them).
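A sketch of what that looks like (the paths are illustrative): the Application project still builds its own Application.pch, but the list of heavy headers is maintained in a single place.

// Application/stdafx.h
#include "../Library/stdafx.h"   // reuse the library's header list
// Application-only additions go below this line.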

Sharing precompiled headers between projects in Visual Studio

I have a solution with many Visual C++ projects, all using PCH, but some have particular compiler switches turned on for project-specific needs.
Most of these projects share the same set of headers in their respective stdafx.h (STL, boost, etc). I'm wondering if it's possible to share PCH between projects, so that instead of compiling every PCH per-project I could maybe have one common PCH that most projects in the solution could just use.
It seems possible to specify the location of the PCH as a shared location in the project settings, so I have a hunch this could work. I'm also assuming that all source files in all projects that use a shared PCH would have to have the same compiler settings, or else the compiler would complain about inconsistencies between the PCH and the source file being compiled.
Has anyone tried this? Does it work?
A related question: should such a shared PCH be overly inclusive, or would that hurt overall build time? For example, a shared PCH could include many STL headers that are widely used, but some projects might only need <string> and <vector>. Would the time saved by using a shared PCH have to be paid back at a later point in the build process, when the optimizer has to discard all the unused stuff dragged into the project by the PCH?
Yes it is possible and I can assure you, the time savings are significant. When you compile your PCH, you have to copy the .pdb and .idb files from the project that is creating the PCH file. In my case, I have a simple two file project that is creating a PCH file. The header will be your PCH header and the source will be told to create the PCH under project settings - this is similar to what you would do normally in any project. As you mentioned, you have to have the same compile settings for each configuration otherwise a discrepancy will arise and the compiler will complain.
Copying the above mentioned files every time there is a rebuild or every time the PCH is recompiled is going to be a pain, so we will automate it. To automate copying, perform a pre-build event where the above mentioned files are copied over to the appropriate directory. For example, if you are compiling Debug and Release builds of your PCH, copy the files from Debug of your PCH project over to your dependent project's Debug. So a copy command would look like this
copy PchPath\Debug\*.pdb Debug\ /-Y
Note the /-Y at the end. After the first build, each subsequent build is compiled incrementally; therefore, if you replace the files again, Visual Studio will complain about corrupted symbols. If they do get corrupted, you can always perform a rebuild, which will copy the files again (this time it won't skip them, as they no longer exist - the clean-up deletes the files).
I hope this helps. It took me quite some time to be able to do this, but it was worth it. I have several projects that depend on one big framework, and the PCH needs to be compiled only once. All the dependent projects now compile very quickly.
EDIT: Along with several other people, I have tested this under VS2010 and VS2012 and it does appear to work properly.
While this is an old question I want to give a new answer which works in Visual Studio 2017 and does not involve any copying. Only disadvantage: Edit and continue doesn't work anymore.
Basically you have to create a new project for the precompiled header and have all other projects depend on it. Here is what I did:
Step by step:
Create a new project within your solution which includes the header (called pch.h from here on) and a one-line .cpp file which includes pch.h (see the sketch after these steps). The project should create a static lib. Set up the new project to create a precompiled header. The output file needs to be accessible by all projects; for me this is relative to $(IntDir), but with default settings it could be relative to $(SolutionDir). The pch project must only use preprocessor defines that all other projects have too.
Have all other projects depend on this new project. Otherwise the build order might be wrong.
Set up all other projects to use pch.h. The output file parameters need to be the same as in the pch project. Additional include directories also need to point to the pch.h directory. Optionally, you can force-include the pch file in every .cpp (or include it manually in the first line of every .cpp file).
Set up all projects (including the pch project) to use the same compiler symbol file (the linker symbol file is not affected). Again, in my example this is relative to $(OutDir), but in your solution this might vary; it has to point to the same file on disk. The Debug Information Format needs to be set to C7 compatible (/Z7), otherwise Visual Studio will not be able to compile projects in parallel.
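A sketch of the two files in the dedicated PCH project from the first step (the pch.h name follows the answer; the header contents are only examples):

// pch.h -- the shared precompiled header
#include <vector>
#include <string>
#include <memory>
// ...the expensive, rarely changing headers every project uses...

// pch.cpp -- the one-line source file that is set to Create (/Yc) the PCH
#include "pch.h"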
I hope I didn't forget anything. For my solution (130k LOC, 160 projects) this led to a compile time of ~2:30 min instead of ~3:30 min.
It seems it's not possible, because each source file has to be compiled against the same PDB against which the PCH was compiled. Darn it.
Samaursa's answer worked for me.
I also saw this link that works (look for Reginald's answer near the bottom).
This one uses copy while Reginald's uses xcopy (I prefer xcopy). Either way, thanks--this sped up my builds considerably.
This sounds like a case of "diminishing returns" to me. Suppose including the common headers directly wastes 1 second per .cpp file, and each target (DLL/EXE) has 10 .cpp files. By using a .pch per target, you save 10 seconds per target. If your whole project has 10 targets, that's 100 seconds saved on the whole build, which is good.
But by reducing it to one .pch for the whole project, you'd only save a further 9 seconds. Is it worth it? The extra effort (which may be a lot more fiddly to set up, being a non-standard configuration unsupported by VS wizards) produces only a 10th of the saving.
On VS2012, you can share a PDB and just build the PCH from a lib project that builds only the PCH, with the main project that depends on the PCH lib building into the same directory (no copying). Unfortunately this doesn't work with 2013+ except via a long-winded workaround.

Why does stdafx.h work the way it does?

As usual, when my brain's messing with something I can't figure out myself, I come to you guys for help :)
This time I've been wondering why stdafx.h works the way it does? To my understanding it does 2 things:
Includes standard headers which we might (?) use and which are rarely changed
Works as a compiler bookmark for the point where code is no longer precompiled.
Now, these two things seem like very different tasks to me, and I wonder why they didn't use two separate steps to take care of them. To me it seems reasonable to have a #pragma command do the bookmarking and to optionally have a header file, along the lines of windows.h, do the including of often-used headers... Which brings me to my next point: why are we forced to include often-used headers through stdafx.h? Personally, I'm not aware of any often-used headers I use that I'm not already doing my own includes for - but maybe these headers are necessary for .dll generation?
Thx in advance
stdafx.h is ONE way of having Visual Studio do precompiled headers. It's a simple-to-use, easy-to-generate approach that works well for smaller apps, but it can cause problems for larger, more complex apps, where the fact that it effectively encourages the use of a single header file can cause coupling across components that are otherwise independent. If it is used just for system headers it tends to be OK, but as a project grows in size and complexity it's tempting to throw other headers in there, and then suddenly changing any header file results in the recompilation of everything in the project.
See here: Is there a way to use pre-compiled headers in VC++ without requiring stdafx.h? for details of an alternative approach.
You are not forced to use "stdafx.h". You can turn off Use Precompiled Headers in the project properties (or when creating the project) and you won't need stdafx.h anymore.
The compiler uses it as a marker so it can precompile the most-used headers separately into a .pch file, to reduce compilation time (it doesn't have to compile them every time).
It keeps the compile time down, as the stuff in it is always compiled first (see details in the quote below):
stdafx.h is a file that describes both standard system and project specific include files that are used frequently but hardly ever changed. Compatible compilers will pre-compile this file to reduce overall compile times. Visual C++ will not compile anything before the #include "stdafx.h" in the source file, unless the compile option /Yu'stdafx.h' is unchecked (by default); it assumes all code in the source up to and including that line is already compiled.
It will help reduce long compilations.
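A small sketch of that rule (my_module.h is a made-up name): with /Yu"stdafx.h", everything up to and including the marker include is assumed to be already compiled into the .pch.

#include <map>          // ignored under /Yu: assumed to be already compiled
#include "stdafx.h"     // the .pch is substituted here; compilation resumes
#include "my_module.h"  // compiled normally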