
What techniques can be used to speed up C++ compilation times?
This question came up in some comments to Stack Overflow question C++ programming style, and I'm interested to hear what ideas there are.
I've seen a related question, Why does C++ compilation take so long?, but that doesn't provide many solutions.

Language techniques
Pimpl Idiom
Take a look at the Pimpl idiom here, and here, also known as an opaque pointer or handle classes. Not only does it speed up compilation, it also increases exception safety when combined with a non-throwing swap function. The Pimpl idiom lets you reduce the dependencies between headers and reduces the amount of recompilation that needs to be done.
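A minimal sketch of the idiom (the Widget/Impl names are made up for illustration; this assumes C++14 for std::make_unique):
// widget.hpp - clients see no implementation details
#include <memory>
class Widget {
public:
    Widget();
    ~Widget();   // defined in the .cpp, where Impl is a complete type
    void draw();
private:
    struct Impl;                   // forward declaration only
    std::unique_ptr<Impl> pImpl;   // opaque pointer to the implementation
};
// widget.cpp - heavy includes live here, so changing them never touches clients
#include "widget.hpp"
#include <iostream>
struct Widget::Impl {
    void draw() { std::cout << "drawing\n"; }
};
Widget::Widget() : pImpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::draw() { pImpl->draw(); }
Only widget.cpp has to be recompiled when the implementation changes.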
Forward Declarations
Wherever possible, use forward declarations. If the compiler only needs to know that SomeIdentifier is a struct or a pointer or whatever, don't include the entire definition, forcing the compiler to do more work than it needs to. This can have a cascading effect, making builds far slower than they need to be.
The I/O streams are particularly known for slowing down builds. If you need them in a header file, try #including <iosfwd> instead of <iostream> and #include the <iostream> header in the implementation file only. The <iosfwd> header holds forward declarations only. Unfortunately, most other standard headers don't have a corresponding forward-declaration header.
Prefer pass-by-reference to pass-by-value in function signatures. This will eliminate the need to #include the respective type definitions in the header file and you will only need to forward-declare the type. Of course, prefer const references to non-const references to avoid obscure bugs, but this is an issue for another question.
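For example, a header along these lines compiles without pulling in any full definitions (the Customer type, customer.hpp, and its name() accessor are invented for the example):
// report.hpp - declarations only
#include <iosfwd>      // forward declarations of the stream classes
class Customer;        // forward declaration instead of #include "customer.hpp"
void printReport(const Customer &customer, std::ostream &out);
// report.cpp - the complete definitions are needed only here
#include "report.hpp"
#include "customer.hpp"
#include <ostream>
void printReport(const Customer &customer, std::ostream &out) {
    out << customer.name() << '\n';   // assumes Customer provides name()
}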
Guard Conditions
Use guard conditions to keep header files from being included more than once in a single translation unit.
#pragma once
#ifndef filename_h
#define filename_h
// Header declarations / definitions
#endif
By using both the pragma and the ifndef, you get the portability of the plain macro solution, as well as the compilation speed optimization that some compilers can do in the presence of the pragma once directive.
Reduce interdependency
The more modular and less interdependent your code design is in general, the less often you will have to recompile everything. You can also end up reducing the amount of work the compiler has to do on any individual block at the same time, by virtue of the fact that it has less to keep track of.
Compiler options
Precompiled Headers
These are used to compile a common section of included headers once for many translation units. The compiler compiles it once, and saves its internal state. That state can then be loaded quickly to get a head start in compiling another file with that same set of headers.
Be careful that you only include rarely changed stuff in the precompiled headers, or you could end up doing full rebuilds more often than necessary. This is a good place for STL headers and other library include files.
ccache is another utility that takes advantage of caching techniques to speed things up.
Use Parallelism
Many compilers / IDEs support using multiple cores/CPUs to do compilation simultaneously. In GNU Make (usually used with GCC), use the -j [N] option. In Visual Studio, there's an option under preferences to allow it to build multiple projects in parallel. You can also use the /MP option for file-level parallelism, instead of just project-level parallelism.
Other parallel utilities:
Incredibuild
Unity Build
distcc
Use a Lower Optimization Level
The more the compiler tries to optimize, the harder it has to work.
Shared Libraries
Moving your less frequently modified code into libraries can reduce compile time. By using shared libraries (.so or .dll), you can reduce linking time as well.
Get a Faster Computer
More RAM, faster hard drives (including SSDs), and more CPUs/cores will all make a difference in compilation speed.

I work on the STAPL project, which is a heavily templated C++ library. Once in a while, we have to revisit all the techniques to reduce compilation time. Here, I have summarized the techniques we use. Some of these techniques are already listed above:
Finding the most time-consuming sections
Although there is no proven correlation between symbol lengths and compilation time, we have observed that smaller average symbol sizes can improve compilation time on all compilers. So your first goal is to find the largest symbols in your code.
Method 1 - Sort symbols based on size
You can use the nm command to list the symbols based on their sizes:
nm --print-size --size-sort --radix=d YOUR_BINARY
In this command, the --radix=d lets you see the sizes in decimal numbers (the default is hex). Now, looking at the largest symbol, identify whether you can break up the corresponding class, and try to redesign it by factoring the non-templated parts into a base class, or by splitting the class into multiple classes.
Method 2 - Sort symbols based on length
You can run the regular nm command and pipe it to your favorite script (AWK, Python, etc.) to sort the symbols by their length. In our experience, this method identifies the biggest trouble-making candidates better than method 1.
Method 3 - Use Templight
"Templight is a Clang-based tool to profile the time and memory consumption of template instantiations and to perform interactive debugging sessions to gain introspection into the template instantiation process".
You can install Templight by checking out LLVM and Clang (instructions) and applying the Templight patch on top. The default configuration for LLVM and Clang is a debug build with assertions enabled, and these can impact your compilation time significantly. It does seem that Templight needs both, so you have to use the default settings. The process of installing LLVM and Clang should take about an hour or so.
After applying the patch you can use templight++ located in the build folder you specified upon installation to compile your code.
Make sure that templight++ is in your PATH. Now to compile add the following switches to your CXXFLAGS in your Makefile or to your command line options:
CXXFLAGS+=-Xtemplight -profiler -Xtemplight -memory -Xtemplight -ignore-system
Or
templight++ -Xtemplight -profiler -Xtemplight -memory -Xtemplight -ignore-system
After compilation is done, you will have a .trace.memory.pbf and .trace.pbf generated in the same folder. To visualize these traces, you can use the Templight Tools that can convert these to other formats. Follow these instructions to install templight-convert. We usually use the callgrind output. You can also use the GraphViz output if your project is small:
$ templight-convert --format callgrind YOUR_BINARY --output YOUR_BINARY.trace
$ templight-convert --format graphviz YOUR_BINARY --output YOUR_BINARY.dot
The callgrind file generated can be opened using kcachegrind in which you can trace the most time/memory consuming instantiation.
Reducing the number of template instantiations
Although there is no exact solution for reducing the number of template instantiations, a few guidelines can help:
Refactor classes with more than one template argument
For example, if you have a class,
template <typename T, typename U>
struct foo { };
and both T and U can each take 10 different types, you have increased the possible template instantiations of this class to 100. One way to resolve this is to abstract the common part of the code into a different class. The other method is to use inheritance inversion (reversing the class hierarchy), but make sure that your design goals are not compromised before using this technique.
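As a hedged sketch of that refactoring (the member names are invented), code that does not depend on U can be hoisted into a base that is instantiated once per T instead of once per (T, U) pair:
// Before: everything below is stamped out once per (T, U) combination
template <typename T, typename U>
struct foo {
    void bookkeeping_on_t();   // note: does not use U at all
    void combine(T t, U u);
};
// After: the U-independent part is instantiated only once per T
template <typename T>
struct foo_base {
    void bookkeeping_on_t();   // 10 instantiations instead of 100
};
template <typename T, typename U>
struct foo : foo_base<T> {
    void combine(T t, U u);    // only the genuinely (T, U)-dependent code stays here
};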
Refactor non-templated code to individual translation units
Using this technique, you can compile the common section once and link it with your other TUs (translation units) later on.
Use extern template instantiations (since C++11)
If you know all the possible instantiations of a class you can use this technique to compile all cases in a different translation unit.
For example, in:
enum class PossibleChoices {Option1, Option2, Option3};
template <PossibleChoices pc>
struct foo { };
We know that this class can have three possible instantiations:
template class foo<PossibleChoices::Option1>;
template class foo<PossibleChoices::Option2>;
template class foo<PossibleChoices::Option3>;
Put the above in a translation unit and use the extern keyword in your header file, below the class definition:
extern template class foo<PossibleChoices::Option1>;
extern template class foo<PossibleChoices::Option2>;
extern template class foo<PossibleChoices::Option3>;
This technique can save you time if you are compiling different tests with a common set of instantiations.
NOTE : MPICH2 ignores the explicit instantiation at this point and always compiles the instantiated classes in all compilation units.
Use unity builds
The whole idea behind unity builds is to include all the .cc files that you use in one file and compile that file only once. Using this method, you can avoid reinstantiating common sections of different files and if your project includes a lot of common files, you probably would save on disk accesses as well.
As an example, let's assume you have three files foo1.cc, foo2.cc, foo3.cc and they all include tuple from STL. You can create a foo-all.cc that looks like:
#include "foo1.cc"
#include "foo2.cc"
#include "foo3.cc"
You compile this file only once and potentially reduce the common instantiations among the three files. It is hard to predict in general whether the improvement will be significant. But one evident fact is that you would lose parallelism in your builds (you can no longer compile the three files at the same time).
Further, if any of these files happen to take a lot of memory, you might actually run out of memory before the compilation is over. On some compilers, such as GCC, this might ICE (Internal Compiler Error) your compiler for lack of memory. So don't use this technique unless you know all the pros and cons.
Precompiled headers
Precompiled headers (PCHs) can save you a lot of time in compilation by compiling your header files to an intermediate representation recognizable by a compiler. To generate precompiled header files, you only need to compile your header file with your regular compilation command. For example, on GCC:
$ g++ YOUR_HEADER.hpp
This will generate a YOUR_HEADER.hpp.gch file (.gch is the extension for PCH files in GCC) in the same folder. This means that if you include YOUR_HEADER.hpp in some other file, the compiler will use the YOUR_HEADER.hpp.gch in that folder instead of reparsing YOUR_HEADER.hpp.
There are two issues with this technique:
You have to make sure that the header file being precompiled is stable and is not going to change (you can always change your makefile)
You can only include one PCH per compilation unit (on most compilers). This means that if you have more than one header file to be precompiled, you have to include them in one file (e.g., all-my-headers.hpp). But that means that you have to include the new file in all places. Fortunately, GCC has a solution for this problem: use -include and give it the new header file. You can repeat -include to add more than one file this way.
For example:
g++ foo.cc -include all-my-headers.hpp
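For illustration, such a combined header (the all-my-headers.hpp name above is just a placeholder) would simply aggregate the stable, rarely changing includes:
// all-my-headers.hpp - only headers that almost never change belong here
#include <algorithm>
#include <map>
#include <string>
#include <vector>
Precompile it once (g++ all-my-headers.hpp) and the resulting .gch is picked up automatically by the -include trick shown above.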
Use unnamed or anonymous namespaces
Unnamed namespaces (a.k.a. anonymous namespaces) can reduce the generated binary sizes significantly. Unnamed namespaces use internal linkage, meaning that the symbols generated in those namespaces will not be visible to other TUs (translation units). Compilers usually generate unique names for unnamed namespaces. This means that if you have a file foo.hpp:
namespace {
template <typename T>
struct foo { };
} // Anonymous namespace
using A = foo<int>;
and you happen to include this file in two TUs (two .cc files compiled separately), the two foo template instantiations will not be the same. This violates the One Definition Rule (ODR). For the same reason, using unnamed namespaces is discouraged in header files. Feel free to use them in your .cc files to avoid symbols showing up in your binary files. In some cases, changing all the internal details of a .cc file showed a 10% reduction in the generated binary sizes.
Changing visibility options
In newer compilers you can select your symbols to be either visible or invisible in the Dynamic Shared Objects (DSOs). Ideally, changing the visibility can improve compiler performance, link time optimizations (LTOs), and generated binary sizes. If you look at the STL header files in GCC you can see that it is widely used. To enable visibility choices, you need to change your code per function, per class, per variable and more importantly per compiler.
With the help of visibility you can hide the symbols that you consider private from the generated shared objects. On GCC you can control the default visibility of symbols by passing default or hidden to the -fvisibility option of your compiler. This is in some sense similar to the unnamed namespace, but in a more elaborate and intrusive way.
If you would like to specify the visibilities per case, you have to add the following attributes to your functions, variables, and classes:
__attribute__((visibility("default"))) void foo1() { }
__attribute__((visibility("hidden"))) void foo2() { }
class __attribute__((visibility("hidden"))) foo3 { };
void foo4() { }
The default visibility in GCC is default (public), meaning that if you compile the above into a shared library (-shared), foo2 and class foo3 will not be visible outside the library (foo1 and foo4 will be visible). If you compile with -fvisibility=hidden, then only foo1 will be visible; even foo4 would be hidden.
You can read more about visibility on the GCC wiki.
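In practice the attribute is usually hidden behind a macro, in the spirit of the pattern described on that wiki, so the annotation can be disabled on compilers that don't support it. A rough sketch (the macro and function names are made up):
// mylib_export.hpp - hypothetical export macro
#if defined(__GNUC__) && __GNUC__ >= 4
  #define MYLIB_API __attribute__((visibility("default")))
#else
  #define MYLIB_API
#endif
// Build the library with -fvisibility=hidden so everything is hidden by default,
// and mark only the intended entry points:
MYLIB_API void public_entry_point();
void internal_helper();   // stays hidden, much like an unnamed-namespace symbol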

I'd recommend these articles from "Games from Within, Indie Game Design And Programming":
Physical Structure and C++ – Part 1: A First Look
Physical Structure and C++ – Part 2: Build Times
Even More Experiments with Includes
How Incredible Is Incredibuild?
The Care and Feeding of Pre-Compiled Headers
The Quest for the Perfect Build System
The Quest for the Perfect Build System (Part 2)
Granted, they are pretty old - you'll have to re-test everything with the latest versions (or versions available to you), to get realistic results. Either way, it is a good source for ideas.

One technique which worked quite well for me in the past: don't compile multiple C++ source files independently, but rather generate one C++ file which includes all the other files, like this:
// myproject_all.cpp
// Automatically generated file - don't edit this by hand!
#include "main.cpp"
#include "mainwindow.cpp"
#include "filterdialog.cpp"
#include "database.cpp"
Of course this means you have to recompile all of the included source code in case any of the sources changes, so the dependency tree gets worse. However, compiling multiple source files as one translation unit is faster (at least in my experiments with MSVC and GCC) and generates smaller binaries. I also suspect that the compiler is given more potential for optimizations (since it can see more code at once).
This technique breaks in various cases; for instance, the compiler will bail out in case two or more source files declare a global function with the same name. I couldn't find this technique described in any of the other answers though, that's why I'm mentioning it here.
For what it's worth, the KDE Project has used this exact same technique since 1999 to build optimized binaries (possibly for a release). The switch for the build configure script was called --enable-final. Out of archaeological interest I dug up the posting which announced this feature: http://lists.kde.org/?l=kde-devel&m=92722836009368&w=2

I will just link to my other answer: How do YOU reduce compile time, and linking time for Visual C++ projects (native C++)?. Another point I want to add, though it often causes problems, is the use of precompiled headers. But please, only use them for parts which hardly ever change (like GUI toolkit headers). Otherwise, they will cost you more time than they save you in the end.
Another option is, when you work with GNU make, to turn on -j<N> option:
-j [N], --jobs[=N] Allow N jobs at once; infinite jobs with no arg.
I usually have it at 3 since I've got a dual core here. It will then run compilers in parallel for different translation units, provided there are no dependencies between them. Linking cannot be done in parallel, since there is only one linker process linking together all object files.
But the linker itself can be threaded, and this is what the GNU gold ELF linker does. It's optimized threaded C++ code which is said to link ELF object files an order of magnitude faster than the old ld (and it has actually been included in binutils).

There's an entire book on this topic, which is titled Large-Scale C++ Software Design (written by John Lakos).
The book pre-dates templates, so to the contents of that book add "using templates, too, can make the compiler slower".

Once you have applied all the code tricks above (forward declarations, reducing header inclusion to the minimum in public headers, pushing most details inside the implementation file with Pimpl...) and nothing else can be gained language-wise, consider your build system. If you use Linux, consider using distcc (distributed compiler) and ccache (cache compiler).
The first one, distcc, executes the preprocessor step locally and then sends the output to the first available compiler in the network. It requires the same compiler and library versions in all the configured nodes in the network.
The latter, ccache, is a compiler cache. It again executes the preprocessor and then checks an internal database (held in a local directory) to see whether that preprocessed file has already been compiled with the same compiler parameters. If it has, it simply returns the cached object file and compiler output from the first run.
Both can be used at the same time, so that if ccache does not have a local copy, it can send the job through the net to another node with distcc; otherwise it can just inject the cached result without further processing.

Here are some:
Use all processor cores by starting a multiple-compile job (make -j2 is a good example).
Turn off or lower optimizations (for example, GCC is much faster with -O1 than -O2 or -O3).
Use precompiled headers.

When I came out of college, the first real production-worthy C++ code I saw had these arcane #ifndef ... #endif directives surrounding the header definitions. I asked the guy who was writing the code about these overarching things in a very naive fashion and was introduced to the world of large-scale programming.
Coming back to the point, using these directives to prevent duplicate header inclusion was the first thing I learned when it came to reducing compile times.

More RAM.
Someone talked about RAM drives in another answer. I did this with an 80286 and Turbo C++ (shows age) and the results were phenomenal. As was the loss of data when the machine crashed.

You could use Unity Builds.

Use
#pragma once
at the top of header files, so if they're included more than once in a translation unit, the text of the header will only get included and parsed once.

Use forward declarations where you can. If a class declaration only uses a pointer or reference to a type, you can just forward declare it and include the header for the type in the implementation file.
For example:
// T.h
class Class2; // Forward declaration
class T {
public:
void doSomething(Class2 &c2);
private:
Class2 *m_Class2Ptr;
};
// T.cpp
#include "T.h"
#include "Class2.h"
void T::doSomething(Class2 &c2) {
// Whatever you want here
}
Fewer includes means far less work for the preprocessor if you do it enough.

Just for completeness: a build might be slow because the build system is being stupid as well as because the compiler is taking a long time to do its work.
Read Recursive Make Considered Harmful (PDF) for a discussion of this topic in Unix environments.

Not about the compilation time, but about the build time:
Use ccache if you have to rebuild the same files when you are working on your buildfiles.
Use ninja-build instead of make. I am currently compiling a project with ~100 source files and everything is cached by ccache. make needs 5 minutes, ninja less than 1.
You can generate your ninja files from cmake with -GNinja.

Upgrade your computer
Get a quad core (or a dual-quad system)
Get LOTS of RAM.
Use a RAM drive to drastically reduce file I/O delays. (There are companies that make IDE and SATA RAM drives that act like hard drives).
Then you have all your other typical suggestions
Use precompiled headers if available.
Reduce the amount of coupling between parts of your project. Changing one header file usually shouldn't require recompiling your entire project.

I had an idea about using a RAM drive. It turned out that for my projects it doesn't make that much of a difference after all. But then they are pretty small still. Try it! I'd be interested in hearing how much it helped.

Dynamic linking (.so) can be much, much faster than static linking (.a), especially when you have a slow network drive. This is because with static linking all of the code in the .a file needs to be processed and written out. In addition, a much larger executable file needs to be written out to disk.

Where are you spending your time? Are you CPU bound? Memory bound? Disk bound? Can you use more cores? More RAM? Do you need RAID? Do you simply want to improve the efficiency of your current system?
Under gcc/g++, have you looked at ccache? It can be helpful if you are doing make clean; make a lot.

Starting with Visual Studio 2017 you have the capability to have some compiler metrics about what takes time.
Add those parameters to C/C++ -> Command line (Additional Options) in the project properties window:
/Bt+ /d2cgsummary /d1reportTime
You can find more information in this post.

Faster hard disks.
Compilers write many (and possibly huge) files to disk. Work with an SSD instead of a typical hard disk and compilation times will be much lower.

On Linux (and maybe some other *NIXes), you can really speed the compilation by NOT STARING at the output and changing to another TTY.
Here is the experiment: printf slows down my program

Network shares will drastically slow down your build, as the seek latency is high. For something like Boost, it made a huge difference for me, even though our network share drive is pretty fast. Time to compile a toy Boost program went from about 1 minute to 1 second when I switched from a network share to a local SSD.

If you have a multicore processor, both Visual Studio (2005 and later) as well as GCC support multi-processor compiles. It is something to enable if you have the hardware, for sure.

First of all, we have to understand what is so different about C++ that sets it apart from other languages.
Some people say it's that C++ has too many features. But hey, there are languages that have a lot more features and they are nowhere near that slow.
Some people say it's the size of a file that matters. Nope, source lines of code don't correlate with compile times.
But wait, how can it be? More lines of code should mean longer compile times, what's the sorcery?
The trick is that a lot of lines of code are hidden behind preprocessor directives. Yes. Just one #include can ruin your module's compilation performance.
You see, C++ doesn't have a module system. Every *.cpp file is compiled from scratch, re-parsing every header it includes. So having 1000 *.cpp files means compiling those shared headers a thousand times. You have more than that? Too bad.
That's why C++ developers hesitate to split classes into multiple files. All those headers are tedious to maintain.
So what can we do other than using precompiled headers, merging all the cpp files into one, and keeping the number of headers minimal?
C++20 brings us preliminary support of modules! Eventually, you'll be able to forget about #include and the horrible compile performance that header files bring with them. Touched one file? Recompile only that file! Need to compile a fresh checkout? Compile in seconds rather than minutes and hours.
The C++ community should move to C++20 as soon as possible. C++ compiler developers should put more focus on this, C++ developers should start testing preliminary support in various compilers and use those compilers that support modules. This is the most important moment in C++ history!
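For the curious, a minimal sketch of what module code looks like (the module name and file extensions are illustrative; extensions and build steps still differ between compilers):
// math.cppm (or .ixx with MSVC) - the module interface unit
export module math;
export int add(int a, int b) {
    return a + b;
}
// main.cpp - importing consumes a compiled interface instead of reparsing header text
import math;
int main() {
    return add(2, 3) == 5 ? 0 : 1;
}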

Although not a "technique", I couldn't figure out how Win32 projects with many source files compiled faster than my "Hello World" empty project. Thus, I hope this helps someone like it did me.
In Visual Studio, one option to speed up builds is Incremental Linking (/INCREMENTAL). It's incompatible with Link-time Code Generation (/LTCG), so remember to disable incremental linking when doing release builds.

Using dynamic linking instead of static linking makes your builds noticeably faster.
If you use CMake, activate the property:
set(BUILD_SHARED_LIBS ON)
For release builds, static linking can still yield a better-optimized binary.

From Microsoft: https://devblogs.microsoft.com/cppblog/recommendations-to-speed-c-builds-in-visual-studio/
Specific recommendations include:
DO USE PCH for projects
DO include commonly used system, runtime and third party headers in PCH
DO include rarely changing project specific headers in PCH
DO NOT include headers that change frequently
DO audit PCH regularly to keep it up to date with product churn
DO USE /MP
DO Remove /Gm in favor of /MP
DO resolve conflict with #import and use /MP
DO USE linker switch /incremental
DO USE linker switch /debug:fastlink
DO consider using a third party build accelerator

Related

Why put function declarations and definitions in separate files?

In Codecademy's lessons on functions, they teach you to use three files if you are going to call functions in your program:
the file containing int main(), which I've found through trial and error is an indispensable part of a C++ program (I guess...), with a .cpp file extension
a header file for DECLARING functions, with a .hpp file extension.
a separate file with function DEFINITIONS, with a .cpp file extension
Would it work to both declare and define functions within the header file by itself and simply include them above int main()? To me, having separate files for declarations and definitions just seems like it would confuse matters in a larger project.
In a large project you often only need type and function declarations, not the definitions. For example, in other header files. If all definitions were in the headers, then the combined result of including multiple other headers and their transitive includes would lead to huge compilation units. This significantly hurts compile times, since the amount of code the compiler needs to process explodes to orders of magnitude more than needed. It would also hurt link times, since the linker would have more work to do discarding duplicates included in many more object files.
You also easily run into ODR (One Definition Rule) issues unless everything is marked inline.
In large projects, function declarations may be needed by many files, but the function definition should only be compiled once. It is combined with all the places that need it at link time.
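A minimal sketch of that split, with invented names:
// area.hpp - the declaration; cheap for any number of files to include
double rectangle_area(double width, double height);
// area.cpp - the definition; compiled exactly once
#include "area.hpp"
double rectangle_area(double width, double height) {
    return width * height;
}
// main.cpp - needs only the declaration to compile; the linker supplies the body
#include "area.hpp"
#include <iostream>
int main() {
    std::cout << rectangle_area(3.0, 4.0) << '\n';
}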
A small C++ program can be (and often is made of) a single translation unit of e.g. a few thousand lines of C++ code. In that case, you could have a single myprog.cc C++ source files (with several #include-s inside).
But when you work on a larger program, in teams, it is convenient to have several C++ source files.
Some C++ files are generated by another program (this is called metaprogramming or source-to-source compilation) and could have a million lines of C++. ANTLR or GNU Bison or TypeScript2Cxx are capable of generating C++ code.
But if you work in a team of people like Alice and Bob, it is convenient to decide that Alice is responsible of alice.cc and Bob is writing bob.cc, and both cooperate on a common header file header.hh which is #include-d in both alice.cc and bob.cc. That header.hh would practically define the API of the software project.
Read more about version control systems (I prefer git) and build automation tools (such as ninja or make).
Look for inspiration inside the C++ code of existing open source projects on gitlab or github or elsewhere (in particular, inside the source code of Clang and of GCC, both being major C++ compilers).
FWIW, in GCC 10.1 (of May 2020) the gcc/go/gofrontend/expressions.cc file is handwritten and has 19711 lines of C++ code, so nearly twenty thousand lines. They are compiled daily. I do know the people working on that; they are brilliant and nice professionals. The biggest file of FLTK 1.4 is its src/Fl_Text_Display.cxx with 4175 C++ lines.
From personal experience, you might have a single C++ function of several tens of thousands of lines of C++ (this makes practical sense only when that C++ code is generated), but then the compilation time with an optimizing compiler is dissuasive. You could adapt my manydl.c program to generate C++ files of arbitrary size (it currently generates "random" C files with functions of "tunable" size). But C++ code generated by Fluid or Qt Designer might be quite large, and C++ code generated for GUIs is often made of long but conceptually simple functions.
Nothing in the C++11 standard (see n3337) requires several translation units. You might have (see SQLite for an example) a single C++ file foo.cc of a million lines. And you could generate some of the C++ source code, as the Qt project and the GCC compiler both do. Jacques Pitrat's book Artificial Beings: The Conscience of a Conscious Machine (ISBN 978-1848211018) explains over many pages why such an approach is worthwhile.
There are really two answers to this question:
why would YOU the programmer want to split things out into multiple .h/.hpp and .cpp files?
I believe the answer here is that it can help organization when your .cpp files become very large, with a lot of code that may not be relevant to someone who only needs to use the functionality provided by the file. Here's an example:
Let's say you have some C++ code that displays images on a screen. You, as the person who wants to use that code, are likely interested in the functions/classes it exposes, which let you control that functionality. Maybe the code exposes the following helpful functions:
WriteImageToScreen(int position_x, int position_y)
ClearScreen()
It can be much easier to look through a header file which only tells you what you are allowed to use, rather than how all of it is implemented. It's very possible that implementing those two functions so that you can call them requires thousands of lines of code and a bunch of variables and statements you don't care about. Not having to read all that helps you focus on the important part of the code: the pieces you want to interact with.
I've presented this example as if you were calling someone else's code, but the same applies to your own code. As your projects get bigger, it can be convenient to have a summary of what functionality each file exposes.
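To make the hypothetical example concrete, the header might contain nothing but the interface, while the thousands of implementation lines stay in the .cpp file:
// screen.hpp - hypothetical interface; callers read only this
void WriteImageToScreen(int position_x, int position_y);
void ClearScreen();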
Now that all being said, not everyone agrees this is the correct way to do things or that it is helpful.
why is it necessary for the compiler for things to be split out into multiple .h/.hpp and .cpp files?
Just in case you aren't familiar with the term, a compiler is the program which turns your source code text into a program which your computer can execute.
So why does the compiler need separate .hpp/.cpp files? Others have pretty much already hit the nail on the head with this one, but c++ compilers get confused if something is defined multiple times. If you put everything in a header file, then when you include that header in multiple files it will be defined multiple times. So in essence this circles back to the organizational question.
I have seen programmers who just have a single main file and then all code is included directly to that main file during compilation
#include "SomeFile.cpp"
#include "AnotherFile.cpp"
// ...
#include "SoManyFiles.cpp"
int main()
{
DoStuff();
}
I believe this is called a monolith build and it's not recommended.
If you have a toy project, you can.
If you have a 1,000,000 lines of code, your build times will be horrid.
C++20 introduces modules, which should make the whole issue go away.
Other languages have tools that can extract an interface from a "module".
Hopefully, when C++20 arrives, tools will become available.
The only good reason to split an interface from an implementation is if there are multiple implementations for one interface (as in VHDL, and as will be possible with C++20 modules). The pragmatic reasons are compilation speed and legibility.

very long compilation times

I'm working on a large solution that has thousands of source files, some of them can have over a thousand includes due to the use of Boost and dependency problems. While compiling in parallel on a 12 core Xeon E5-2690 v2 machine (windows 7) it takes up to 4 hours to rebuild the solution using Waf 1.7.13. What can I do to speed things up?
A few things that come to mind:
Use forward declarations and PIMPL.
Review your code base to see if you have created unnecessary templates when normal classes or functions would have been sufficient.
Try to implement your own templates in terms of non-generic implementations operating on void* where applicable, the templates serving merely as type-safe wrappers (see "Item 42: Use private inheritance judiciously" in "More Effective C++" for a nice example; a sketch follows this list).
Check your compiler's documentation for precompiled headers.
Refactor your application's entire architecture so that there are more internal libraries and a smaller application layer, the goal being that in the long run the libraries become so stable that you don't have to rebuild them all the time.
Experiment with optimisation flags. See if you can turn down optimisation in specific compilation units where optimisation doesn't make a measurable difference in execution speed or binary size, yet significantly increases compilation times.
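A hedged sketch of the void*-wrapper suggestion above (class names invented; composition is used here where the book's example uses private inheritance): the untyped implementation is compiled once in its own .cpp, and the per-type template adds only trivial casts.
#include <vector>
// The real logic, written once and compiled once
class GenericStack {
public:
    void push(void *p) { items_.push_back(p); }
    void *pop() { void *p = items_.back(); items_.pop_back(); return p; }
    bool empty() const { return items_.empty(); }
private:
    std::vector<void *> items_;
};
// The template is only a thin, type-safe wrapper, so each
// instantiation adds almost nothing to compile time or code size
template <typename T>
class Stack {
public:
    void push(T *p) { impl_.push(p); }
    T *pop() { return static_cast<T *>(impl_.pop()); }
    bool empty() const { return impl_.empty(); }
private:
    GenericStack impl_;
};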
If refactoring to remove dependencies is not an option, and the build architecture is so unreliable that you have to clean and rebuild all from time to time (in a perfect world this should never happen), you can:
Improve your machine: use an SSD and/or lots of RAM. If the project has thousands of source files, you will need to hit the disk many times, with random but cacheable accesses. For a 12-core machine with hyper-threading I would suggest no less than 24 GB of RAM, probably better going toward 32. You should measure your source code size and how much RAM is used by the 24 compiler processes you can run concurrently.
Use a compiler cache. I don't know how to setup such environment from windows, and with your build system, so I can't suggest much here.
The culprit is likely the structure of your project itself: the expected time for rebuilding is roughly proportional to the number of source files times the number of headers they include. With strong template based programming, that effort grows with the square of source files.
So, you have exactly two contrary options to get your total compilation time down:
You reduce the number of headers included by each source file. That means you have to avoid any templates that use templates. Try to reduce inter-template dependencies as much as you can.
Program only in headers and use only a single .cpp file which instantiates all the different templates in your application. This avoids any recompilation of a header.
A possible variant of this is to create a precompiled header from all header files in your project, so that precompiling the header takes the bulk of the building time.
Of course, option 2 means that there is no such thing as an incremental build; you will need to recompile the entire project every time you compile. This can make your project entirely unmaintainable. So I would strongly suggest going for option 1. But that requires a different programming style (one that uses templates very sparingly) than your project seems to be written in.
In either case, there is no magic bullet: it is likely not possible to make a significant change to the compilation time without restructuring the entire project.
Some suggestions you could try relatively easily:
Use distributed compilation using IncrediBuild or such
Use "unity" builds, i.e. including several cpp files in a single cpp file and compiling only the unity cpp's (you can have a simple tool building the unity cpp's with given number of cpp files in each). Even though I'm not big fan of this technique due to its shortcomings I have seen it employed quite often in projects with bad physical structure to optimize compile & link times.
Use "#pragma once" in headers
Use precompiled headers
Optimize redundant #include's in header files. I don't know if there's a tool for it, but you could easily build a tool which comments out #includes in headers and tries to compile them to see which ones are not needed.
Profile #include file compilation speeds and focus on addressing the worst performers and the ones that are deepest in the #include chain. I.e. use forward declarations instead of #include's, etc.
Buy more/better hardware

Multiple source file executable slower than single source file executable

I had a single source file which had all the class definitions and functions.
For better organization, I moved the class declarations(.h) and implementations(.cpp) into separate files.
But when I compiled them, it resulted in a slower executable than the one I get from the single-source build. It's around 20-30 seconds slower for the same input. I didn't change any code.
Why is this happening? And how can I make it faster again?
Update: The single source executable completes in 40 seconds whereas the multiple source executable takes 60. And I'm referring to runtime and not compilation.
I think your program runs faster when compiled as a single file because in this case the compiler has more information available to optimize the code. For example, it can automatically inline some functions, which is not possible in the case of separate compilation.
To make it faster again, you can try to enable link-time optimizer (or whole program optimizer) with this option: -flto.
If -flto option is not available (and it is available only starting with gcc 4.6) or if you don't want to use it for some reason, you have at least 2 options:
If you split your project only for better organization, you can create a single source file (like all.cxx) and #include all the other source files (all other *.cxx files) into this file. Then you need to build only this all.cxx, and all compiler optimizations are available again. Or, if you split it also to make compilation incremental, you may prepare two build options: incremental build and unity build. The first builds all separate sources, the second only all.cxx. See more information on this here.
You can find the functions that cost you performance after splitting the project and move them either to the compilation unit where they are used, or to a header file. To do this, start with profiling (see "What can I use to profile C++ code in Linux?"). Further investigate the parts of the program that significantly impact its performance; here are two options: either use the profiler again to compare results of incremental and unity builds (but this time you need a sampling profiler, like OProfile, while an instrumenting profiler, like gprof, is most likely too heavy for this task); or apply the 'experimental' strategy, as described by gbulmer.
This probably has to do with link time optimization. When all your code is in a single source file, the compiler has more knowledge about what your code does so it can perform more optimizations. One such optimization is inlining: the compiler can only inline a function if it knows its implementation at compile time!
These optimizations can also be done at link time (rather than compile time) by passing the -flto flag to gcc, both for the compile and for the link stage (see here).
This is a slower approach to get back to the faster runtime, but if you wanted to get a better understanding of what is causing the large change, you could do a few 'experiments'
One experiment would be to find which function might be responsible for the large change.
To do that, you could 'profile' the runtime of each function.
For example, use GNU gprof, part of GNU binutils:
http://www.gnu.org/software/binutils/
Docs at: http://sourceware.org/binutils/docs-2.22/gprof/index.html
This will measure the time consumed by each function in your program, and where it was called from. Taking these measurements will likely have a 'Heisenberg effect'; the measurements themselves will affect the performance of the program. So you might want to try an experiment to find which class is making the most difference.
Try to get a picture of how the runtime varies between having the class source code in the main source, and the same program but with the class compiled and linked in separately.
To get a class implementation into the final program, you can either compile and link it, or just #include it into the 'main' program then compile main.
To make it easier to try permutations, you could switch a #include on or off using #if:
#if defined(CLASSA) // same as #ifdef CLASSA
#include "classa.cpp"
#endif
#if defined(CLASSB)
#include "classb.cpp"
#endif
Then you can control which files are #included using command line flags to the compiler, e.g.
g++ -DCLASSA -DCLASSB ... main.c classc.cpp classd.cpp classf.cpp
It might only take you a few minutes to generate the permutations of the -D flags and link commands. Then you'd have a way to generate all permutations of compiling 'in one unit' vs. separately linked, then run (and time) each one.
I assume your header files are wrapped in the usual
#ifndef _CLASSA_H_
#define _CLASSA_H_
//...
#endif
You would then have some information about the class which is important.
These types of experiment might yield some insight into the behaviour of the program and compiler which might stimulate some other ideas for improvements.

How to find compilation bottlenecks?

How do I find which parts of code are taking a long time to compile?
I am already using precompiled headers for all of my headers, and they definitely improve the compilation speed. Nevertheless, whenever I make a change to my C++ source file, compiling it takes a long time (this is CPU/memory-bound, not I/O-bound -- it's all cached). Furthermore, this is not related to the linking portion, just the compilation portion.
I've tried turning on /showIncludes, but of course, since I'm using precompiled headers, nothing is getting included after stdafx.h. So I know it's only the source code that takes a while to compile, but I don't know what part of it.
I've also tried doing a minimal build, but it doesn't help. Neither does /MP, because it's a single source file anyway.
I could try dissecting the source code and figuring out which part is a bottleneck by adding/removing it, but that's a pain and doesn't scale. Furthermore, it's hard to remove something and still let the code compile -- error messages, if any, come back almost immediately.
Is there a better way to figure out what's slowing down the compilation?
Or, if there isn't a way: are there any language constructs (e.g. templates?) that take a lot longer to compile?
What I have in my C++ source code:
Three (relatively large) ATL dialog classes (including the definitions/logic).
They could very well be the cause, but they are the core part of the program anyway, so obviously they need to be recompiled whenever I change them.
Random one-line (or similarly small) utility functions, e.g. a byte-array-to-hex converter
References to (inline) classes found inside my header files. (One of the header files is gigantic, but it uses templates only minimally, and of course it's precompiled. The other one is the TR1 regex -- it's huge, but it's barely used.)
Note:
I'm looking for techniques that I can apply more generally in figuring out the cause of these issues, not specific recommendations for my very particular situation. Hopefully that would be more useful to other people as well.
Two general ways to improve the compilation time :
instead of including headers in headers, use forward declarations (include headers only in the source files)
minimize templated code (if you can avoid using templates)
Only these two rules will greatly improve your build time.
You can find more tricks in "Large-Scale C++ Software Design" by Lakos.
For Visual Studio (I am not sure whether it is too old), take a look at this: How should I detect unnecessary #include files in a large C++ project?
Template code generally takes longer to compile.
You could investigate using "compiler firewalls", which reduce the frequency of a .cpp file having to build (they can reduce time to read included files as well because of the forward declarations).
You can also shift time spent doing code generation from the compiler to the linker by using Link-Time Code Generation and/or Whole Program Optimization, though generally you lose time in the long run.

How do YOU reduce compile time, and linking time for Visual C++ projects (native C++)?

How do YOU reduce compile time, and linking time for VC++ projects (native C++)?
Please specify if each suggestion applies to debug, release, or both.
It may sound obvious to you, but we try to use forward declarations as much as possible, even if it requires writing out the long namespace names the type(s) is/are in:
// Forward declaration stuff
namespace plotter { namespace logic { class Plotter; } }
// Real stuff
namespace plotter {
namespace samples {
class Window {
logic::Plotter * mPlotter;
// ...
};
}
}
It greatly reduces compile times, also on other compilers. Indeed, it applies to all configurations :)
Use the Handle/Body pattern (also sometimes known as "pimpl", "adapter", "decorator", "bridge" or "wrapper"). By isolating the implementation of your classes into your .cpp files, they need only be compiled once. Most changes do not require changes to the header file so it means you can make fairly extensive changes while only requiring one file to be recompiled. This also encourages refactoring and writing of comments and unit tests since compile time is decreased. Additionally, you automatically separate the concerns of interface and implementation so the interface of your code is simplified.
If you have large complex headers that must be included by most of the .cpp files in your build process, and which are not changed very often, you can precompile them. In a Visual C++ project with a typical configuration, this is simply a matter of including them in stdafx.h. This feature has its detractors, but libraries that make full use of templates tend to have a lot of stuff in headers, and precompiled headers are the simplest way to speed up builds in that case.
These solutions apply to both debug and release, and are focused on a codebase that is already large and cumbersome.
Forward declarations are a common solution.
Distributed building, such as with Incredibuild is a win.
Pushing code from headers down into source files can work. Small classes, constants, enums and so on might start off in a header file simply because it could have been used in multiple compilation units, but in reality they are only used in one, and could be moved to the cpp file.
A solution I haven't read about but have used is to split large headers. If you have a handful of very large headers, take a look at them. They may contain related information, and may also depend on a lot of other headers. Take the elements that have no dependencies on other files...simple structs, constants, enums and forward declarations and move them from the_world.h to the_world_defs.h. You may now find that a lot of your source files can now include only the_world_defs.h and avoid including all that overhead.
Visual Studio also has a "Show Includes" option that can give you a sense of which source files include many headers and which header files are most frequently included.
For very common includes, consider putting them in a pre-compiled header.
I use Unity Builds (Screencast located here).
The compile speed question is interesting enough that Stroustrup has it in his FAQ.
We use Xoreax's Incredibuild to run compilation in parallel across multiple machines.
Also an interesting article from Ned Batchelder: http://nedbatchelder.com/blog/200401/speeding_c_links.html (about C++ on Windows).
Our development machines are all quad-core and we use Visual Studio 2008, which supports parallel compiling. I am uncertain whether all editions of VS can do this.
We have a solution file with approximately 168 individual projects, and compile this way takes about 25 minutes on our quad-core machines, compared to about 90 minutes on the single core laptops we give to summer students. Not exactly comparable machines but you get the idea :)
With Visual C++, there is a method, some refer to as Unity, that improves link time significantly by reducing the number of object modules.
This involves concatenating the C++ code, usually in groups by library. This of course makes editing the code much more difficult, and you will run into namespace collisions unless you use namespaces well. It keeps you from using using namespace foo; in those files.
Several teams at our company have elaborate systems to take the normal C++ files and concatenate them at compile time as a build step. The reduction in link times can be enormous.
Another useful technique is blobbing. I think it is something similar to what was described by Matt Shaw.
Simply put, you just create one cpp file in which you include other cpp files. You may have two different project configurations, one ordinary and one blob. Of course, blobbing puts some constraints on your code; e.g., class names in unnamed namespaces may clash.
One technique to avoid recompiling the whole blob (as David Rodríguez mentioned) when you change one cpp file is to have a 'working' blob created from recently modified files, plus the other ordinary blobs.
We use blobbing at work most of the time, and it reduces project build time, especially link time.
Compile Time:
If you have IncrediBuild, compile time won't be a problem. If you don't have IncrediBuild, try the "unity build" method. It combines multiple cpp files into a single cpp file, so the whole compile time is reduced.
Link Time:
The "unity build" method also contribute to reduce the link time but not much. How ever, you can check if the "Whole global optimization" and "LTCG" are enabled, while these flags make the program fast, they DO make the link SLOW.
Try turning off the "Whole Global Optimization" and set LTCG to "Default" the link time might be reduced by 5/6. (LTCG stands for Link Time Code Generation)