I'm currently trying to optimise build speed for a big project with the following in mind:
build speed is priority 1
resulting binaries size does not matter
Infos:
Environment: Visual Studio 2012 (required, because of the software I'm developing for) + Windows machine
Build time: 12 min (clean build), 1 min for small changes, and every now and then a small change results in 5-6 min because of slow linking (this is what I want to address)
Custom files in project: approx. 2500 (SDK I need to use excluded, a big SDK for a CAD system)
Lines of code in custom files: approx. 500000
I'm using an up-to-date CAD capable computer (32GB RAM, >3GHz QuadCore, SSD)
Ideas:
use precompiled headers => done, but does not have the effect I want; helps speed up compile time most of the time, but every now and then does not
split up project into libraries => not sure if this helps
Questions
I could not find anything about using libraries and build speed, but I assume if I precompile libraries, the linker will be faster.
Is this assumption true?
If I make a static library with the core functions, will this have an effect on build time? Or will the linker need as long as it does currently?
If I make a dynamic library, will this have an effect on build time? Or will the linker again check the dll completely and will need the same time?
I assume if I precompile libraries, the linker will be faster. Is this assumption true?
No, not likely. If at all (because the linker has to open fewer files), then the difference will be marginal.
If I make a static library with the core functions, will this have an effect on build time? Or will the linker need as long as it does currently?
It may make a huge difference in compile time. On a truly clean rebuild you still have to compile everything as before, but on a normal "mostly clean" rebuild, rebuilding the support libraries is superfluous since nothing inside them ever changes. All you really need to rebuild is the user code, so you compile a lot fewer files.
Note that every sane build system normally builds a dependency graph and tries to compile as few files as possible anyway (and, to the extent possible, with some level of parallelism), unless you explicitly tell it to do a clean build (which is rarely necessary). Doctor, it hurts when I do this -- well, don't do it.
The difference for the linker will, again, be marginal. The linker still needs to look up the exact same number of symbols, and still needs to copy the same amount of code into the executable.
You may want to play with link order. Funny as it sounds, sometimes the order in which libraries and object files are linked makes a 5x difference on how long it takes the linker to do its job.
That being said, 12 minutes for a clean build indeed isn't a lot. Your non-clean builds will likely be in the two-digit second range, of which linking probably takes 90%. That's normally not a showstopper. Come back when a build takes 4 hours :-)
If I make a dynamic library, will this have an effect on build time? Or will the linker again check the dll completely and will need the same time?
The linker will still have to do some work for every function you call, which might be slightly faster, but will still be more or less the same.
Note that you add runtime (startup) overhead by moving code into a DLL. It is more work for the loader to load a program with parts of the code in a DLL, as it needs to load another image, parse its header, resolve symbols, set up some pointers, run per-thread init functions, etc. That's usually not an issue (the difference is not really noticeable), just letting you know it's not free.
12 minutes is a short full build time and 500KLOC is not that big. Many free software projects (GCC, Qt, ...) have longer ones (hours) and millions of C++ lines.
You might want to use a serious and parallel build automation tool, such as ninja. Perhaps you could do some distributed build (like what distcc permits) if you can compile on remote machines.
You could configure your IDE to run an external command (such as ninja) for builds. This doesn't change autocompletion abilities. You could adopt another source code editor (e.g. GNU emacs).
C++ is not (yet) modular (it does not have genuine modules, like e.g. Ocaml or Go), and that makes its compilation slow (e.g. because standard container headers are big, e.g. <vector> brings about 10KLOC of included code, probably used and included in most of your C++ code). So you should avoid having many small files (e.g. merging two files of 250 lines each into one of 500 lines could decrease build time), and it looks like you have too many small C++ files. I would recommend source files of more than a thousand lines each. Having only one class implementation (or one function) per source file slows down the total build time.
You surely want to use more indirection in your code. Use PIMPL idioms, virtual method tables, closures, and std::function more systematically. Remember the rule of five.
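For illustration, here is a minimal PIMPL sketch (the names Widget and Impl are invented for the example); the point is that the header pulls in almost nothing, so files that include it don't recompile when the implementation changes:

    // widget.h -- light header, no heavy includes
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                   // defined in widget.cpp, where Impl is complete
        void draw() const;
    private:
        struct Impl;                 // forward declaration only
        std::unique_ptr<Impl> pImpl;
    };

    // widget.cpp -- the only file that sees the heavy headers
    #include "widget.h"
    #include <vector>                // e.g. the ~10KLOC of <vector> stays out of widget.h

    struct Widget::Impl {
        std::vector<int> data;
    };

    Widget::Widget() : pImpl(new Impl()) {}
    Widget::~Widget() {}             // out of line, so unique_ptr<Impl> can be destroyed here
    void Widget::draw() const { /* use pImpl->data ... */ }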
Related
I once worked on a C++ project that took about an hour and a half for a full rebuild. Small edit, build, test cycles took about 5 to 10 minutes. It was an unproductive nightmare.
What is the worst build times you ever had to handle?
What strategies have you used to improve build times on large projects?
Update:
How much do you think the language used is to blame for the problem? I think C++ is prone to massive dependencies on large projects, which often means even simple changes to the source code can result in a massive rebuild. Which language do you think copes with large project dependency issues best?
Forward declaration
pimpl idiom
Precompiled headers
Parallel compilation (e.g. MPCL add-in for Visual Studio).
Distributed compilation (e.g. Incredibuild for Visual Studio).
Incremental build
Split build in several "projects" so not compile all the code if not needed.
[Later Edit]
8. Buy faster machines.
My strategy is pretty simple - I don't do large projects. The whole thrust of modern computing is away from the giant and monolithic and towards the small and componentised. So when I work on projects, I break things up into libraries and other components that can be built and tested independently, and which have minimal dependencies on each other. A "full build" in this kind of environment never actually takes place, so there is no problem.
One trick that sometimes helps is to include everything into one .cpp file. Since includes are processed once per file, this can save you a lot of time. (The downside to this is that it makes it impossible for the compiler to parallelize compilation)
You should be able to specify that multiple .cpp files should be compiled in parallel (-j with make on Linux, /MP on MSVC - MSVC also has an option to compile multiple projects in parallel. These are separate options, and there's no reason why you shouldn't use both)
In the same vein, distributed builds (Incredibuild, for example), may help take the load off a single system.
SSD disks are supposed to be a big win, although I haven't tested this myself (but a C++ build touches a huge number of files, which can quickly become a bottleneck).
Precompiled headers can help too, when used with care. (They can also hurt you, if they have to be recompiled too often).
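For reference, a minimal sketch of the classic MSVC-style setup (stdafx.h/stdafx.cpp are just the conventional names, and /Yc and /Yu are the usual project settings; adjust to your own project):

    // stdafx.h -- put only big, rarely changing headers here
    #pragma once
    #include <vector>
    #include <string>
    #include <map>
    // #include "big_sdk.h"       // hypothetical: a large, stable third-party SDK header

    // stdafx.cpp -- compiled with /Yc"stdafx.h" to create the precompiled header
    #include "stdafx.h"

    // every_other.cpp -- compiled with /Yu"stdafx.h"; this include must come first
    #include "stdafx.h"
    #include "widget.h"

If a header in stdafx.h changes often, it forces the whole PCH (and therefore everything that uses it) to rebuild, which is the "can also hurt you" case.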
And finally, trying to minimize dependencies in the code itself is important. Use the pImpl idiom, use forward declarations, keep the code as modular as possible. In some cases, use of templates may help you decouple classes and minimize dependencies. (In other cases, templates can slow down compilation significantly, of course)
But yes, you're right, this is very much a language thing. I don't know of another language which suffers from the problem to this extent. Most languages have a module system that allows them to eliminate header files, which are a huge factor. C has header files, but is such a simple language that compile times are still manageable. C++ gets the worst of both worlds. A big complex language, and a terrible primitive build mechanism that requires a huge amount of code to be parsed again and again.
Multi core compilation. Very fast with 8 cores compiling on the I7.
Incremental linking
External constants
Removed inline methods on C++ classes.
The last two gave us a reduced linking time from around 12 minutes to 1-2 minutes. Note that this is only needed if things have a huge visibility, i.e. seen "everywhere" and if there are many different constants and classes.
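As a rough sketch of what "external constants" and "removed inline methods" can look like (names invented), the idea is that widely visible headers carry declarations only, so the definitions exist in exactly one object file:

    // constants.h -- seen "everywhere", but contains no values
    extern const int kMaxEntities;

    // constants.cpp -- the single definition; changing the value rebuilds one file
    #include "constants.h"
    const int kMaxEntities = 4096;

    // entity.h -- no inline bodies in the header
    class Entity {
    public:
        int id() const;              // body moved out of line
    private:
        int id_;
    };

    // entity.cpp
    #include "entity.h"
    int Entity::id() const { return id_; }   // one copy of the code for the linker to handle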
Cheers
IncrediBuild
Unity Builds
Incredibuild
Pointer to implementation
forward declarations
compiling "finished" sections of the proejct into dll's
ccache & distcc (for C/C++ projects) -
ccache caches compiled output, using the pre-processed file as the 'key' for finding the output. This is great because pre-processing is pretty quick, and quite often changes that force a recompile don't actually change the source for many files. Also, it really speeds up a full re-compile. Also nice is that you can have a shared cache among team members. This means that only the first guy to grab the latest code actually compiles anything.
distcc does distributed compilation across a network of machines. This is only good if you HAVE a network of machines to use for compilation. It goes well with ccache, and only moves the pre-processed source around, so the only thing you have to worry about on the compiler engine systems is that they have the right compiler (no need for headers or your entire source tree to be visible).
The best suggestion is to build makefiles that actually understand dependencies and do not automatically rebuild the world for a small change. But, if a full rebuild takes 90 minutes, and a small rebuild takes 5-10 minutes, odds are good that your build system already does that.
Can the build be done in parallel? Either with multiple cores, or with multiple servers?
Check in pre-compiled bits for pieces that really are static and do not need to be rebuilt every time. 3rd party tools/libraries that are used, but not altered, are a good candidate for this treatment.
Limit the build to a single 'stream' if applicable. The 'full product' might include things like a debug version, or both 32 and 64 bit versions, or may include help files or man pages that are derived/built every time. Removing components that are not necessary for development can dramatically reduce the build time.
Does the build also package the product? Is that really required for development and testing? Does the build incorporate some basic sanity tests that can be skipped?
Finally, you can re-factor the code base to be more modular and to have fewer dependencies. Large Scale C++ Software Design is an excellent reference for learning to decouple large software products into something that is easier to maintain and faster to build.
EDIT: Building on a local filesystem as opposed to a NFS mounted filesystem can also dramatically speed up build times.
Fiddle with the compiler optimisation flags,
use option -j4 for gmake for parallel compilation (multicore or single core)
if you are using clearmake, use winking
we can take out the debug flags, in extreme cases.
Use some powerful servers.
This book Large-Scale C++ Software Design has very good advice I've used in past projects.
Minimize your public API
Minimize inline functions in your API. (Unfortunately this also increases linker requirements).
Maximize forward declarations (a small sketch follows at the end of this list).
Reduce coupling between code. For instance, pass two integers to a function, for coordinates, instead of your custom Point class that has its own header file.
Use Incredibuild. But it has some issues sometimes.
Do NOT put code that get exported from two different modules in the SAME header file.
Use the PImpl idiom. Mentioned before, but bears repeating.
Use Pre-compiled headers.
Avoid C++/CLI (i.e. managed C++). Linker times are impacted too.
Avoid using a global header file that includes 'everything else' in your API.
Don't put a dependency on a lib file if your code doesn't really need it.
Know the difference between including files with quotes and angle brackets.
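A small sketch of the forward-declaration and include points above (class and file names invented):

    // renderer.h -- kept light: only declarations of what it really needs
    #pragma once

    class Point;                         // forward declaration instead of #include "point.h"
    class Scene;                         // only references/pointers are used below

    class Renderer {
    public:
        void render(const Scene& scene);
        void moveTo(const Point& p);
    };

    // renderer.cpp -- full definitions are pulled in only here
    #include "renderer.h"                // quotes: your own headers, searched locally first
    #include <vector>                    // angle brackets: system / external headers
    #include "point.h"
    #include "scene.h"

    void Renderer::render(const Scene& scene) { /* ... */ }
    void Renderer::moveTo(const Point& p)     { /* ... */ }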
Powerful compilation machines and parallel compilers. We also make sure the full build is needed as little as possible. We don't alter the code to make it compile faster.
Efficiency and correctness is more important than compilation speed.
In Visual Studio, you can set the number of projects to compile at a time. Its default value is 2; increasing that can save some time.
This will help if you don't want to mess with the code.
This is the list of things we did for a development under Linux :
As Warrior noted, use parallel builds (make -jN)
We use distributed builds (currently icecream, which is very easy to set up); with this we can have tens of processors at a given time. This also has the advantage of giving the builds to the most powerful and least loaded machines.
We use ccache so that when you do a make clean, you don't have to really recompile your sources that didn't change, it's copied from a cache.
Note also that debug builds are usually faster to compile since the compiler doesn't have to make optimisations.
We tried creating proxy classes once.
These are really a simplified version of a class that only includes the public interface, reducing the number of internal dependencies that need to be exposed in the header file. However, they came with a heavy price of spreading each class over several files that all needed to be updated when changes to the class interface were made.
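For illustration, one way such a proxy can look is an abstract interface plus a factory (names invented); clients see only the public surface, and all internal dependencies stay in the .cpp:

    // database_proxy.h -- the only header clients include
    #pragma once
    #include <memory>
    #include <string>

    class DatabaseProxy {
    public:
        virtual ~DatabaseProxy() {}
        virtual bool open(const std::string& path) = 0;
        virtual void close() = 0;
    };

    // Factory; the concrete implementation lives entirely in a .cpp file.
    std::unique_ptr<DatabaseProxy> makeDatabase();

The cost mentioned above is real: every interface change now touches the proxy header, the concrete class, and the factory.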
In general large C++ projects that I've worked on that had slow build times were pretty messy, with lots of interdependencies scattered through the code (the same include files used in most cpps, fat interfaces instead of slim ones). In those cases, the slow build time was just a symptom of the larger problem, and a minor symptom at that. Refactoring to make clearer interfaces and break code out into libraries improved the architecture, as well as the build time. When you make a library, it forces you to think about what is an interface and what isn't, which will actually (in my experience) end up improving the code base. If there's no technical reason to have to divide the code, some programmers through the course of maintenance will just throw anything into any header file.
Cătălin Pitiș covered a lot of good things. Other ones we do:
Have a tool that generates reduced Visual Studio .sln files for people working in a specific sub-area of a very large overall project
Cache DLLs and pdbs from when they are built on CI for distribution on developer machines
For CI, make sure that the link machine in particular has lots of memory and high-end drives
Store some expensive-to-regenerate files in source control, even though they could be created as part of the build
Replace Visual Studio's checking of what needs to be relinked by our own script tailored to our circumstances
It's a pet peeve of mine, so even though you already accepted an excellent answer, I'll chime in:
In C++, it's less the language as such, but the language-mandated build model that was great back in the seventies, and the header-heavy libraries.
The only thing that is wrong about Cătălin Pitiș' reply: "buy faster machines" should go first. It is the easiest way with the least impact.
My worst was about 80 minutes on an aging build machine running VC6 on W2K Professional. The same project (with tons of new code) now takes under 6 minutes on a machine with 4 hyperthreaded cores, 8G RAM Win 7 x64 and decent disks. (A similar machine, about 10..20% less processor power, with 4G RAM and Vista x86 takes twice as long)
Strangely, incremental builds are most of the time slower than full rebuilds now.
Full build is about 2 hours. I try to avoid making modifications to the base classes, and since my work is mainly on the implementation of these base classes I only need to build small components (a couple of minutes).
Create some unit test projects to test individual libraries, so that if you need to edit low level classes that would cause a huge rebuild, you can use TDD to know your new code works before you rebuild the entire app. The John Lakos book as mentioned by Themis has some very practical advice for restructuring your libraries to make this possible.
Are there any compelling performance reasons to choose static linking over dynamic linking or vice versa in certain situations? I've heard or read the following, but I don't know enough on the subject to vouch for its veracity.
1) The difference in runtime performance between static linking and dynamic linking is usually negligible.
2) (1) is not true if using a profiling compiler that uses profile data to optimize program hotpaths because with static linking, the compiler can optimize both your code and the library code. With dynamic linking only your code can be optimized. If most of the time is spent running library code, this can make a big difference. Otherwise, (1) still applies.
Dynamic linking can reduce total resource consumption (if more than one process shares the same library (including the version in "the same", of course)). I believe this is the argument that drives its presence in most environments. Here "resources" include disk space, RAM, and cache space. Of course, if your dynamic linker is insufficiently flexible there is a risk of DLL hell.
Dynamic linking means that bug fixes and upgrades to libraries propagate to improve your product without requiring you to ship anything.
Plugins always call for dynamic linking.
Static linking means that you can know the code will run in very limited environments (early in the boot process, or in rescue mode).
Static linking can make binaries easier to distribute to diverse user environments (at the cost of sending a larger and more resource-hungry program).
Static linking may allow slightly faster startup times, but this depends to some degree on both the size and complexity of your program and on the details of the OS's loading strategy.
Some edits to include the very relevant suggestions in the comments and in other answers. I'd like to note that the way you break on this depends a lot on what environment you plan to run in. Minimal embedded systems may not have enough resources to support dynamic linking. Slightly larger small systems may well support dynamic linking because their memory is small enough to make the RAM savings from dynamic linking very attractive. Full-blown consumer PCs have, as Mark notes, enormous resources, and you can probably let the convenience issues drive your thinking on this matter.
To address the performance and efficiency issues: it depends.
Classically, dynamic libraries require some kind of glue layer which often means double dispatch or an extra layer of indirection in function addressing and can cost a little speed (but is the function calling time actually a big part of your running time???).
However, if you are running multiple processes which all call the same library a lot, you can end up saving cache lines (and thus winning on running performance) when using dynamic linking relative to using static linking. (Unless modern OS's are smart enough to notice identical segments in statically linked binaries. Seems hard, does anyone know?)
Another issue: loading time. You pay loading costs at some point. When you pay this cost depends on how the OS works as well as what linking you use. Maybe you'd rather put off paying it until you know you need it.
Note that static-vs-dynamic linking is traditionally not an optimization issue, because they both involve separate compilation down to object files. However, this is not required: a compiler can in principle, "compile" "static libraries" to a digested AST form initially, and "link" them by adding those ASTs to the ones generated for the main code, thus empowering global optimization. None of the systems I use do this, so I can't comment on how well it works.
The way to answer performance questions is always by testing (and use a test environment as much like the deployment environment as possible).
1) is based on the fact that calling a DLL function always uses an extra indirect jump. Today, this is usually negligible. Inside the DLL there is some more overhead on i386 CPUs, because they can't generate position-independent code. On amd64, jumps can be relative to the program counter, so this is a huge improvement.
2) This is correct. With optimizations guided by profiling you can usually win about 10-15 percent performance. Now that CPU speed has reached its limits it might be worth doing it.
I would add: (3) the linker can arrange functions in a more cache-efficient grouping, so that expensive cache misses are minimised. It might also especially affect the startup time of applications (based on results I have seen with the Sun C++ compiler).
And don't forget that with DLLs no dead code elimination can be performed. Depending on the language, the DLL code might not be optimal either. Virtual functions are always virtual because the compiler doesn't know whether a client is overriding them.
For these reasons, in case there is no real need for DLLs, then just use static compilation.
EDIT (to answer the comment, by user underscore)
Here is a good resource about the position independent code problem http://eli.thegreenplace.net/2011/11/03/position-independent-code-pic-in-shared-libraries/
As explained, x86 does not have them, AFAIK, for anything other than 15-bit jump ranges, and not for unconditional jumps and calls. That's why functions (from generators) larger than 32K have always been a problem and needed embedded trampolines.
But on a popular x86 OS like Linux, you do not need to care whether the .so/DLL file is generated with the gcc switch -fpic (which enforces the use of indirect jump tables). If you don't, the code is simply fixed up the way a normal linker would relocate it. But doing this makes the code segment non-shareable, and it would need a full mapping of the code from disk into memory, touching it all before it can be used (emptying most of the caches, hitting TLBs), etc. There was a time when this was considered slow.
So you would not have any benefit anymore.
I do not recall what OS (Solaris or FreeBSD) gave me problems with my Unix build system because I just wasn't doing this and wondered why it crashed until I applied -fPIC to gcc.
Dynamic linking is the only practical way to meet some license requirements such as the LGPL.
I agree with the points dnmckee mentions, plus:
Statically linked applications might be easier to deploy, since there are fewer or no additional file dependencies (.dll / .so) that might cause problems when they're missing or installed in the wrong place.
One reason to do a statically linked build is to verify that you have full closure for the executable, i.e. that all symbol references are resolved correctly.
As a part of a large system that was being built and tested using continuous integration, the nightly regression tests were run using a statically linked version of the executables. Occasionally, we would see that a symbol would not resolve and the static link would fail even though the dynamically linked executable would link successfully.
This usually occurred when symbols that were deep seated within the shared libs had a misspelt name and so would not statically link. The dynamic linker does not completely resolve all symbols, irrespective of using depth-first or breadth-first evaluation, so you can finish up with a dynamically linked executable that does not have full closure.
1/ I've been on projects where dynamic linking vs static linking was benchmarked, and the difference wasn't judged small enough to justify switching to dynamic linking (I wasn't part of the test, I just know the conclusion)
2/ Dynamic linking is often associated with PIC (Position Independent Code, code which doesn't need to be modified depending on the address at which it is loaded). Depending on the architecture, PIC may bring another slowdown but is needed in order to get the benefit of sharing a dynamically linked library between two executables (and even two processes of the same executable if the OS uses randomization of the load address as a security measure). I'm not sure that all OSes allow separating the two concepts, but Solaris and Linux do, and ISTR that HP-UX does as well.
3/ I've been on other projects which used dynamic linking for the "easy patch" feature. But this "easy patch" makes the distribution of a small fix a little easier, and of a complicated one a versioning nightmare. We often ended up having to push everything, plus having to track problems at customer sites because the wrong version was taken.
My conclusion is that I'd use static linking except:
for things like plugins which depend on dynamic linking
when sharing is important (big libraries used by multiple processes at the same time like C/C++ runtime, GUI libraries, ... which often are managed independently and for which the ABI is strictly defined)
If one wants to use the "easy patch", I'd argue that the libraries have to be managed like the big libraries above: they must be nearly independent, with a defined ABI that must not be changed by fixes.
Static linking is a process at compile time in which the linked content is copied into the primary binary, producing a single binary.
Cons:
compile time is longer
output binary is bigger
Dynamic linking is a process at runtime in which the linked content is loaded. This technique allows you to:
upgrade the linked binary without recompiling the primary one, which increases ABI stability
share a single copy of the linked content
Cons:
startup time is slower (the linked content has to be loaded)
linker errors show up at runtime
It is pretty simple, really. When you make a change in your source code, do you want to wait 10 minutes for it to build or 20 seconds? Twenty seconds is all I can put up with. Beyond that, I either get out the sword or start thinking about how I can use separate compilation and linking to bring it back into the comfort zone.
The best example of dynamic linking is when the library depends on the hardware in use. In ancient times, the C math library was made dynamic so that each platform could use all processor capabilities to optimize it.
An even better example might be OpenGL. OpenGL is an API that is implemented differently by AMD and NVidia. You cannot use an NVidia implementation on an AMD card, because the hardware is different, and for that reason you cannot link OpenGL statically into your program. Dynamic linking is used here to let the API be optimized for all platforms.
Dynamic linking requires extra time for the OS to find the dynamic library and load it. With static linking, everything is together and it is a one-shot load into memory.
Also, see DLL Hell. This is the scenario where the DLL that the OS loads is not the one that came with your application, or the version that your application expects.
On Unix-like systems, dynamic linking can make life difficult for 'root' to use an application with the shared libraries installed in out-of-the-way locations. This is because the dynamic linker generally won't pay attention to LD_LIBRARY_PATH or its equivalent for processes with root privileges. Sometimes, then, static linking saves the day.
Alternatively, the installation process has to locate the libraries, but that can make it difficult for multiple versions of the software to coexist on the machine.
Another issue not yet discussed is fixing bugs in the library.
With static linking, you not only have to rebuild the library, but will have to relink and redistribute the executable. If the library is just used in one executable, this may not be an issue. But the more executables that need to be relinked and redistributed, the bigger the pain is.
With dynamic linking, you just rebuild and redistribute the dynamic library and you are done.
Static linking includes the files that the program needs in a single executable file.
Dynamic linking is what you would consider the usual, it makes an executable that still requires DLLs and such to be in the same directory (or the DLLs could be in the system folder).
(DLL = dynamic link library)
Dynamically linked executables are compiled faster and aren't as resource-heavy.
Static linking gives you only a single exe; in order to make a change you need to recompile your whole program. With dynamic linking you only need to change the DLL, and when you run your exe, the changes are picked up at runtime. It's easier to provide updates and bug fixes via dynamic linking (e.g. Windows).
There are a vast and increasing number of systems where an extreme level of static linking can have an enormous positive impact on applications and system performance.
I refer to what are often called "embedded systems", many of which are now increasingly using general-purpose operating systems, and these systems are used for everything imaginable.
An extremely common example is devices using GNU/Linux systems with Busybox. I've taken this to the extreme with NetBSD by building a bootable i386 (32-bit) system image that includes both a kernel and its root filesystem, the latter of which contains a single static-linked (by crunchgen) binary with hard links to all programs, which itself contains all (well, at last count, 274) of the standard full-feature system programs (most except the toolchain), and it is less than 20 megabytes in size (and probably runs very comfortably on a system with only 64MB of memory (even with the root filesystem uncompressed and entirely in RAM), though I've been unable to find one that small to test it on).
It has been mentioned in earlier posts that the start-up time of static-linked binaries is faster (and it can be a lot faster), but that is only part of the picture, especially when all object code is linked into the same file, and even more especially when the operating system supports demand paging of code direct from the executable file. In this ideal scenario the startup time of programs is literally negligible, since almost all pages of code will already be in memory and be in use by the shell (and by init and any other background processes that might be running), even if the requested program has not ever been run since boot, since perhaps only one page of memory needs to be loaded to fulfill the runtime requirements of the program.
However that's still not the whole story. I also usually build and use the NetBSD operating system installs for my full development systems by static-linking all binaries. Even though this takes a tremendous amount more disk space (~6.6GB total for x86_64 with everything, including toolchain and X11 static-linked) (especially if one keeps full debug symbol tables available for all programs another ~2.5GB), the result still runs faster overall, and for some tasks even uses less memory than a typical dynamic-linked system that purports to share library code pages. Disk is cheap (even fast disk), and memory to cache frequently used disk files is also relatively cheap, but CPU cycles really are not, and paying the ld.so startup cost for every process that starts every time it starts will take hours and hours of CPU cycles away from tasks which require starting many processes, especially when the same programs are used over and over, such as compilers on a development system. Static-linked toolchain programs can reduce whole-OS multi-architecture build times for my systems by hours. I have yet to build the toolchain into my single crunchgen'ed binary, but I suspect when I do there will be more hours of build time saved because of the win for the CPU cache.
Another consideration is the number of object files (translation units) that you actually consume in a library vs the total number available. If a library is built from many object files, but you only use symbols from a few of them, this might be an argument for favoring static linking, since you only link the objects that you use when you static link (typically) and don't normally carry the unused symbols. If you go with a shared lib, that lib contains all translation units and could be much larger than what you want or need.
I was asked this question while mentoring an entry-level programmer. I had always taken this compile + link process as so official and usual that I never thought about why.
One thing I could think of is that it improves development productivity, but are there any other, more compiler-related reasons?
Efficiency.
When you compile a program, you create an object file for each source file; if you change a source file, you only need to recompile that module and then relink (relinking is cheap).
If the compiler did everything in one pass it would have to recompile everything for every change.
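A toy example of what that buys you (file names invented; the commands in the comments assume g++, but any compiler works the same way):

    // math_utils.h
    int square(int x);

    // math_utils.cpp   -- compiled alone:  g++ -c math_utils.cpp   -> math_utils.o
    #include "math_utils.h"
    int square(int x) { return x * x; }

    // main.cpp         -- compiled alone:  g++ -c main.cpp         -> main.o
    #include <iostream>
    #include "math_utils.h"
    int main() { std::cout << square(7) << '\n'; }

    // Link step:  g++ main.o math_utils.o -o app
    // Editing only math_utils.cpp means recompiling math_utils.o and relinking;
    // main.o is reused as-is.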
It also fits with the unix philosophy of small programs that do one thing, so you have a pre-processor, a compiler, a linker, a library creator. These steps might now be different modes of the same tool.
However there are reasons why you want the compiler to link in one step, there are some optimizations you can do if you allow the compiler to change object files at link time - most modern compilers allow this but it requires them to put extra info into the object files at compile time.
It would be better if the compiler could store the entire project in a single database, rather than the mess of sources, resources, browse info files, object files etc - but developers are very conservative!
Part of this is historical. Back in the dark ages, computers had little memory. It was easy to create a program with enough source code that all of it couldn't be processed at one time. So the processing had to be done in stages: preprocessing source code, compiling source to assembly (one by one), assembly to object code, all object files linked into the final executable. Each of these steps had one or more stand-alone tools to do its task. Over the years the tools were improved incrementally, but no major redesign of the process has ever become mainstream.
It's important that the build time, even for a very large project, be under 24 hours. And being able to build overnight is better. Separate compilation, which is to say dividing a program into "compilation units" and compiling them independently, is the way to reduce build time:
If a compilation unit hasn't been changed, and if nothing it depends on has changed, you can reuse the result of an old compilation.
You can often compile multiple units in parallel, or even distributed over a network of workstations. The lowly Make will compile in parallel, and other tools like ccdist exist to distribute the work of compilation.
Linking provides few benefits in and of itself but is necessary to use the results of separate compilation.
What an excellent time to teach your protégé about the Single Responsibility Principle!
Compiling the file changes the code into binary that the computer can read. Linking the file tells the computer how to complete a command. So it's impossible to generate it all at once, without the two steps.
I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.
It turns out the author of the projects has a single ALL.cpp which #includes all other .cpp files. The *All configurations just build this one ALL.cpp file; it is of course excluded from the regular configurations, which don't build it.
I just wondered if this was a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)
What kinds of pitfalls are you likely to encounter with this? One I can think of is if you have anonymous namespaces in your .cpps, they're no longer 'private' to that cpp but now visible in other cpps as well?
All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?
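For concreteness, ALL.cpp is essentially just a list of includes of the other source files, something like this (file names invented):

    // ALL.cpp -- built only by the "Release All" / "Debug All" configurations;
    // the regular configurations exclude it and build the individual .cpp files instead.
    #include "geometry.cpp"
    #include "renderer.cpp"
    #include "scene.cpp"
    // ... and so on for every .cpp in the project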
It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily for incremental building.
And it's a PITA to maintain.
EDIT: here's the first google link for more info: http://buffered.io/posts/the-magic-of-unity-builds/
The thing that makes it fast is that the compiler only needs to read in everything once, compile out, then link, rather than doing that for every .cpp file.
Bruce Dawson has a much better write up about this on his blog: http://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/
Unity builds improve build speeds for three main reasons. The first reason is that all of the shared header files only need to be parsed once. Many C++ projects have a lot of header files that are included by most or all CPP files, and the redundant parsing of these is the main cost of compilation, especially if you have many short source files. Precompiled header files can help with this cost, but usually there are a lot of header files which are not precompiled.
The next main reason that unity builds improve build speeds is because the compiler is invoked fewer times. There is some startup cost with invoking the compiler.
Finally, the reduction in redundant header parsing means a reduction in redundant code-gen for inlined functions, so the total size of object files is smaller, which makes linking faster.
Unity builds can also give better code-gen.
Unity builds are NOT faster because of reduced disk I/O. I have profiled many builds with xperf and I know what I'm talking about. If you have sufficient memory then the OS disk cache will avoid the redundant I/O - subsequent reads of a header will come from the OS disk cache. If you don't have enough memory then unity builds could even make build times worse by causing the compiler's memory footprint to exceed available memory and get paged out.
Disk I/O is expensive, which is why all operating systems aggressively cache data in order to avoid redundant disk I/O.
I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the ability for the compiler to optimize the program for size?
Normally some optimizations are only performed within distinct compilation units, such as removal of duplicate code and inlining.
That said, I seem to remember that recent compilers (Microsoft's, Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect that this 'trick' is unnecessary.
That said, it would be curious to see if there is indeed any difference.
I agree with Bruce; from my experience, I tried implementing a unity build for one of my .dll projects, which had a ton of header includes and lots of .cpps, to bring down the overall compilation time on VS2010 (I had already exhausted the incremental build options). But rather than cutting down the compilation time, I ran out of memory and the build was not even able to finish compiling.
However, to add: I did find that enabling the Multi-Processor Compilation option in Visual Studio helps quite a bit in cutting down compilation time; I am not sure whether such an option is available in other platforms' compilers.
In addition to Bruce Dawson's excellent answer the following link brings more insights onto the pros and cons of unity builds - https://github.com/onqtam/ucm#unity-builds
We have a multi-platform project for MacOS and Windows. We have lots of large templates and many headers to include. We halved build time with Visual Studio 2017 msvc using precompiled headers. For MacOS we tried for several days to reduce build time, but precompiled headers only reduced build time to 85%. Using a single .cpp file was the breakthrough. We simply create this ALL.cpp with a script. Our ALL.cpp contains a list of includes of all .cpp files of our project. We reduced build time to 30% with Xcode 9.0. The only drawback was that we had to give names to all private unnamed compilation-unit namespaces in the .cpp files. Otherwise the local names will clash.
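To illustrate the clash mentioned at the end (names invented): in separate translation units the two helpers below are invisible to each other, but once concatenated into ALL.cpp they live in the same unnamed namespace and collide:

    // a.cpp
    namespace {
        int helper() { return 1; }   // fine while a.cpp is its own translation unit
    }

    // b.cpp
    namespace {
        int helper() { return 2; }   // fine on its own as well
    }

    // ALL.cpp
    #include "a.cpp"
    #include "b.cpp"                 // error: redefinition of helper() -- both unnamed
                                     // namespaces are now part of one translation unit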
We have a project which uses gcc and make files. The project also consists of one big subproject (SDK) and a lot of relatively small subprojects which use that SDK and some shared framework.
We use precompiled headers, but that only helps make recompilation faster.
Are there any known techniques and tools to help with build-time optimizations? Or maybe you know some articles/resources about this or related topics?
You can tackle the problem from two sides: refactor the code to reduce the complexity the compiler is seeing, or speed up the compiler execution.
Without touching the code, you can throw more compilation power at it. Use ccache to avoid recompiling files you have already compiled, and distcc to distribute the build among more machines. Use make -jN, where N is the number of cores + 1 if you compile locally, or a bigger number for distributed builds. That flag will run more than one compiler in parallel.
Refactoring the code. Prefer forward declaration to includes (simple). Decouple as much as you can to avoid dependencies (use the PIMPL idiom).
Template instantiation is expensive; templates are recompiled in every compilation unit that uses them. If you can, refactor your templates so that you forward declare them and then instantiate them in only one compilation unit.
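A rough sketch of that pattern, assuming a C++11 compiler with extern template (the Matrix names and matrix_impl.h are invented):

    // matrix.h -- what users include
    template <typename T>
    class Matrix {
    public:
        void fill(const T& value);
        // ...
    };

    extern template class Matrix<double>;   // tells users NOT to instantiate it themselves

    // matrix_double.cpp -- the one translation unit that pays the instantiation cost
    #include "matrix.h"
    #include "matrix_impl.h"                 // hypothetical header with the member definitions

    template class Matrix<double>;           // explicit instantiation, compiled exactly once

Users can then only use the instantiations you provide this way; anything else fails at link time, which is usually the point.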
The best I can think of with make is the -j option. This tells make to run as many jobs as possible in parallel:
make -j
If you want to limit the number of concurrent jobs to n you can use:
make -j n
Make sure the dependencies are correct so make doesn't run jobs it doesn't have to.
Another thing to take into account is the optimizations that gcc does with the -O switch. You can specify various levels of optimization. The higher the optimization, the longer the compile and link times. A project I work on takes 2 minutes to link with -O3, and half a minute with -O1. You should make sure you're not optimizing more than you need to. You could build without optimization for development builds and with optimization for deployment builds.
Compiling with debug info (gcc -g) will probably increase the size of your executable and may impact your build time. If you don't need it, try removing it to see if it affects you.
The type of linking (static vs. dynamic) should make a difference. As far as I understand static linking takes longer (though I may be wrong here). You should see if this affects your build.
From the description of the project I guess that you have one Makefile per directory and are using recursive make a lot. In that case techniques from "Recursive Make Considered Harmful" should help very much.
If you have multiple computers available, gcc distributes well via distcc.
You can also use ccache in addition.
All this works with very little changes of the makefiles.
Also, you'll probably want to keep your source code files as small and self-contained as possible/feasible, i.e. prefer many smaller object files over one huge single object file.
This will also help avoid unnecessary recompilations; in addition, you can have one static library with object files for each source code directory or module, basically allowing the compiler to reuse as much previously compiled code as possible.
Something else, which wasn't yet mentioned in any of the previous responses, is making symbol linkage as 'private' as possible, i.e. prefer static linkage (functions, variables) for your code if it doesn't have to be visible externally.
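A small sketch of what that looks like in practice (names invented); everything below the include is invisible outside this translation unit, so the linker never has to consider it when resolving other object files:

    // parser.cpp
    #include "parser.h"

    namespace {                              // unnamed namespace: internal linkage
        int countTokens(const char* s) { /* ... */ return 0; }
    }

    static const int kBufferSize = 4096;     // file-local, no external symbol emitted

    void parseFile(const char* path) {
        (void)kBufferSize;
        (void)countTokens(path);
        // ...
    }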
In addition, you may also want to look into using the GNU gold linker, which is much more efficient at linking C++ code for ELF targets.
Basically, I'd advise you to carefully profile your build process and check where the most time is spent; that'll give you some hints as to how to optimize your build process or your project's source code structure.
You could consider switching to a different build system (which obviously won't work for everyone), such as SCons. SCons is much smarter than make. It automatically scans header dependencies, so you always have the smallest set of rebuild dependencies. By adding the line Decider('MD5-timestamp') to your SConstruct file, SCons will first look at the time stamp of a file, and if it's newer than the previously built time stamp, it will use the MD5 of the file to make sure you actually changed something. This works not just on source files but object files as well. This means that if you change a comment, for instance, you don't have to re-link.
The automatic scanning of header files has also ensured that I never have to type scons --clean. It always does the right thing.
If you have a LAN with developer machines, perhaps you should try implementing a distributed compiler solution, such as distcc.
This might not help if all of the time during the build is spent analyzing dependencies, or doing some single serial task. For the raw crunch of compiling many source files into object files, parallel building obviously helps, as suggested (on a single machine) by Nathan. Parallelizing across multiple machines can take it even further.
http://ccache.samba.org/ speeds up big time.
I work on a middle sized project, and that's the only thing we do to speed up the compile time.
You can use distcc distributed compiler to reduce the build time if you have access to several machines.
Here's an article from IBM developerWorks related to distcc and how you can use it:
http://www.ibm.com/developerworks/linux/library/l-distcc.html
Another method to reduce build time is to use precompiled headers. Here's a starting point for gcc.
Also don't forget to use -j when building with make if your machine has more than one CPU/core (2x the number of cores/CPUs is just fine).
Using small files may not always be a good recommendation. A disk has a 32 or 64K minimum allocation unit (cluster) size, with a file taking at least one unit. So 1024 files of 3K each (small code inside) will actually take 32 or 64 MB on disk, instead of the expected 3 MB; that is 32/64 MB that needs to be read by the drive. If the files are dispersed around the disk, you increase read time even more with seek time. This is helped by the disk cache, obviously, up to a limit. Precompiled headers can also be of good help in alleviating this.
So, with due respect to coding guidelines, there is no point in going outside them just to place each struct, typedef or utility class into a separate file.