Do you have any tips to really speed up compiling a large C++ code base?
I compiled Qt 5 with the latest Visual Studio 2013 compiler; this took at least 3 hours on an Intel quad-core at 3.2 GHz with 8 GB of memory and an SSD drive.
What solutions do I have if I want to do this in 30 minutes?
Thanks.
Forward declarations and PIMPL.
example.h:
// no include
class UsedByExample;
class Example
{
// ...
UsedByExample *ptr; // forward declaration is good enough
UsedByExample &ref; // forward declaration is good enough
};
example.cpp:
#include "used_by_example.h"
// ...
UsedByExample object; // need #include
A little-known / underused fact is that forward declarations are also good enough for function return values:
class Example;
Example f(); // forward declaration is good enough
Only the code which calls f() and has to operate on the returned Example object actually needs the definition of Example.
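As a sketch of that distinction (the file names here are hypothetical):
// f_decl.h
class Example;
Example f(); // forward declaration of the return type is enough here
// caller.cpp
#include "f_decl.h"
#include "example.h" // the full definition is needed to actually call f()
void g()
{
    Example e = f(); // constructing/copying the result requires the complete type
}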
The purpose of PIMPL, an idiom depending on forward declarations, is to hide private members completely from outside compilation units. This can thus also reduce compile time.
So, if you have this class:
example.h:
#include "other_class.h"
#include "yet_another_class.h"
#include "and_yet_another_class.h"
class Example
{
// ...
public:
void f();
void g();
private:
OtherClass a;
YetAnotherClass b;
AndYetAnotherClass c;
};
You can actually turn it into two classes, one being the implementation and the other the interface.
example.h:
// no more includes
class ExampleImpl; // forward declaration
class Example
{
// ...
public:
void f();
void g();
private:
ExampleImpl *impl;
};
example_impl.h:
#include "other_class.h"
#include "yet_another_class.h"
#include "and_yet_another_class.h"
class ExampleImpl
{
// ...
void f();
void g();
// ...
OtherClass a;
YetAnotherClass b;
AndYetAnotherClass c;
};
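For completeness, a sketch of what example.cpp might look like (this assumes Example also declares a constructor and destructor, and that ExampleImpl exposes f() and g() publicly - neither is shown in the headers above):
example.cpp:
#include "example.h"
#include "example_impl.h"
Example::Example() : impl(new ExampleImpl) {}
Example::~Example() { delete impl; }
void Example::f() { impl->f(); } // the public interface just forwards to the impl
void Example::g() { impl->g(); }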
Disadvantages may include higher complexity, memory-management issues and an added layer of indirection.
Use a fast SSD setup. Or even create a RAM disk, if suitable on your system.
Project -> Properties -> Configuration Properties -> C/C++ -> General -> Multi-processor Compilation: Yes (/MP)
Tools -> Options -> Projects and Solutions -> Build and Run: and set the maximum number of parallel project builds. (already set to 8 on my system, probably determined on first run of VS2013)
1. Cut down on the number of dependencies, so that if one part of the code changes, the rest doesn't have to be recompiled. Any .cpp/.cc file that includes a particular header needs to be recompiled when that header is changed, so forward-declare things where you can.
2. Avoid compiling as much as possible. If there are modules you don't need, leave them out. If you have code that rarely changes, put it in a static library.
3. Don't use excessive amounts of templates. The compiler has to generate a new copy of the template for each instantiation, and all code for a template goes in the header and needs to be re-read over and over again. That in itself is not a problem, but it is the opposite of forward-declaring, and adds dependencies.
4. If you have headers that every file uses, and which change rarely, see if you can put them in a precompiled header. A precompiled header is only compiled once and saved in a compiler-specific format that is fast to read, so for widely used headers this can lead to great speed-ups.
Note that this only works with code you have written. For code from third parties, only #2 and #4 can help, but they will not improve absolute compile times by much; they only reduce the number of times the code needs to be analyzed again after you've built it once.
To actually make things faster, your options are more limited. You already have an SSD, so you're probably not hard disk bound anymore, and swapping with an SSD should also be faster, so RAM is probably not the most pressing issue. So you are probably CPU-bound.
There are a couple of options:
1: You can make one (or a few) .cpp files that include lots of .cpp files from your project.
Then, you compile only those files and ignore the rest.
Advantages:
compilation is a lot faster on one machine
there should already be tools to generate those files for you. In any case the technique is really simple: you could build a simple script that parses the project files and generates the compilation units, after which you only need to include those in the project and ignore the rest of the files
Disadvantages:
changing one .cpp file will trigger the rebuild of the unit that includes that .cpp file, so for minor changes it takes a while longer to compile
you might need to change a bit of code to make it work; it might not work out of the box, for example if you have a function with the same name in two different .cpp files you will need to change that (see the sketch below)
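A minimal sketch of such a bulk/unity file (the file names are made up):
// unity_01.cpp - the only file from this group handed to the compiler
#include "widget.cpp"
#include "renderer.cpp"
#include "audio.cpp"
// ...more .cpp files; they are now all built as a single translation unit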
2: Use a tool like IncrediBuild.
Advantages:
works out of the box for your project. Install the app and you can already compile your project
compilation is really fast, even for small changes
Disadvantages:
it is not free
you will need more computers to achieve a speedup
You might find alternatives for option 2; here is a related question.
Another tip to improve compilation time is to move as much of the code as possible into .cpp files and avoid inline declarations in headers. Extensive use of metaprogramming also adds build time.
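A sketch of that move (the names are hypothetical):
// counter.h - before: the body lives in the header and is reparsed by every includer
class Counter
{
public:
    void bump() { ++n; } // inline definition in the header
private:
    int n;
};
// counter.h - after: only the declaration stays in the header
class Counter
{
public:
    void bump();
private:
    int n;
};
// counter.cpp - the body is now compiled exactly once
#include "counter.h"
void Counter::bump() { ++n; }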
Simply find your bottleneck, and improve that part of your PC. For example, HDD / SSD performance is often a bottleneck.
On the code side of things, use techniques like forward declarations to avoid including headers where possible and thus improve compilation time further.
Don't use templates. Really. If every class you use is templated (and if everything is a class), you effectively have only a single translation unit. Consequently, your build system is powerless to reduce compilation to only the few parts that actually need rebuilding.
If you have a large number of templated classes but a fair amount of untemplated classes, the situation is not much better: any templated class that is used in more than one compilation unit has to be compiled several times!
Of course, you don't want to throw out the small, useful templated helper classes, but for all code you write, you should think twice before you make a template out of it. In particular, you should avoid using templates for complex classes that use five or more different templated classes. Your code might actually become a lot more readable as a result.
If you want to go radical, write in C. The only thing that compiles faster than that is assembler (thanks for reminding me of that, @black).
In addition to what's been said previously, you should avoid things like:
1. Dynamic binding. The more complex it is, the more work the compiler has to do.
2. High levels of optimization: compiling for a certain architecture with optimizations enabled (/Ox) takes longer.
Thanks for all your answers.
I have to enable multicore compilation and do some optimizations a little everywhere.
Most of the time cost is because of templates.
Thanks.
There are many slim laptops that are cheap and great to use. Programming has the advantage that it can be done anywhere there is silence and comfort, since being able to concentrate for long hours is an important factor in doing effective work.
I'm kind of old-fashioned, as I like my statically compiled C or C++, and those languages can be pretty slow to compile on those power-constrained laptops, especially C++11 and C++14.
I like to do 3D programming, and the libraries I use can be large and won't be forgiving: Bullet Physics, Ogre3D, SFML, not to mention the power hunger of modern IDEs.
There are several solutions to make building just faster:
Solution A: Don't use those large libraries, and come up with something lighter of your own to relieve the compiler. Write appropriate makefiles; don't use an IDE.
Solution B: Set up a build server elsewhere, have a makefile set up on a powerful machine, and automatically download the resulting executable. I don't think this is a casual solution, as you have to target your laptop's CPU.
Solution C: use the (still unofficial) C++ modules
???
Any other suggestions?
Compilation speed is something that can really be boosted, if you know how to. It is always wise to think carefully about a project's design (especially in the case of large projects consisting of multiple modules) and modify it so that the compiler can produce output efficiently.
1. Precompiled headers.
A precompiled header is a normal header (.h file) that contains the most common declarations, typedefs and includes. During compilation, it is parsed only once - before any other source is compiled. During this process, the compiler generates data in some internal (most likely binary) format, then uses this data to speed up code generation.
This is a sample:
#pragma once
#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__
//Include common headers
#include "BaseConfig.h"
#include "Atomic.h"
#include "Limits.h"
#include "DebugDefs.h"
#include "CommonApi.h"
#include "Algorithms.h"
#include "HashCode.h"
#include "MemoryOverride.h"
#include "Result.h"
#include "ThreadBase.h"
//Others...
namespace Asx
{
    //Forward declare common types
    class String;
    class UnicodeString;

    //Declare global constants
    enum : Enum
    {
        ID_Auto = Limits<Enum>::Max_Value,
        ID_None = 0
    };
    enum : Size_t
    {
        Max_Size = Limits<Size_t>::Max_Value,
        Invalid_Position = Limits<Size_t>::Max_Value
    };
    enum : Uint
    {
        Timeout_Infinite = Limits<Uint>::Max_Value
    };

    //Other things...
}
#endif /* __Asx_Core_Prerequisites_H__ */
In a project, when PCH is used, every source file usually contains an #include of this file (I don't know about other compilers, but in VC++ this is actually a requirement - every source file attached to a project configured for using PCH must start with: #include "PrecompiledHeaderName.h"). Configuration of precompiled headers is very platform-dependent and beyond the scope of this answer.
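For example, a typical VC++ source file would then begin like this (a sketch; the header name is made up):
// Person.cpp
#include "Asx_Core_Prerequisites.h" // the PCH - in VC++ this must be the first include
#include "Person.h"
// ...rest of the implementation...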
Note one important matter: things that are defined/included in the PCH should be changed only when absolutely necessary - every change can cause recompilation of the whole project (and other dependent modules)!
More about PCH:
Wiki
GCC Doc
Microsoft Doc
2. Forward declarations.
When you don't need the whole class definition, forward-declare it to remove unnecessary dependencies in your code. This also implies extensive use of pointers and references when possible. Example:
#include "BigDataType.h"
class Sample
{
protected:
BigDataType _data;
};
Do you really need to store _data by value? Why not this way:
class BigDataType; //That's enough, #include not required
class Sample
{
protected:
BigDataType* _data; //So much better now
};
This is especially profitable for large types.
3. Do not overuse templates.
Meta-programming is a very powerful tool in a developer's toolbox. But don't try to use templates when they are not necessary.
They are great for things like traits, compile-time evaluation, static reflection and so on. But they introduce a lot of trouble:
Error messages - if you have ever seen errors caused by improper usage of std:: iterators or containers (especially the complex ones, like std::unordered_map), then you know what this is all about.
Readability - complex templates can be very hard to read/modify/maintain.
Quirks - many of the techniques that templates are used for are not so well known, so maintenance of such code can be even harder.
Compile time - the most important for us now:
Remember, if you define function as:
template <class Tx, class Ty>
void sample(const Tx& xv, const Ty& yv)
{
//body
}
it will be compiled once for each distinct combination of Tx and Ty. If such a function is used often (and for many such combinations), it can really slow down the compilation process. Now imagine what will happen if you start to overuse templating for whole classes...
4. The PIMPL idiom.
This is a very useful technique that allows us to:
hide implementation details
speed up code generation
easy updates, without breaking client code
How does it work? Consider a class that contains a lot of data (for example, representing a person). It could look like this:
class Person
{
protected:
string name;
string surname;
Date birth_date;
Date registration_date;
string email_address;
//and so on...
};
Our application evolves and we need to extend/change Person's definition. We add some new fields, remove others... and everything crashes: the size of Person changes, names of fields change... cataclysm. In particular, every piece of client code that depends on Person's definition needs to be changed/updated/fixed. Not good.
But we can do it the smart way - hide the details of Person:
class Person
{
protected:
class Details;
Details* details;
};
Now, we do a few nice things:
the client cannot write code that depends on how Person is defined
no recompilation is needed as long as we don't modify the public interface used by the client code
we reduce the compilation time, because the definitions of string and Date no longer need to be present (in the previous version, we had to include the appropriate headers for these types, which adds additional dependencies).
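A sketch of the matching implementation file (the constructor/destructor and the exact fields are assumptions for illustration - the original answer does not show them):
// Person.cpp
#include "Person.h"
#include <string>
#include "Date.h" // hypothetical header for the Date type
class Person::Details
{
public:
    std::string name;
    std::string surname;
    Date birth_date;
    // ...the rest of the hidden fields...
};
Person::Person() : details(new Details) {}
Person::~Person() { delete details; }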
5. #pragma once directive.
Although it may give no speed boost, it is clearer and less error-prone. It is basically the same thing as using include guards:
#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__
//Content
#endif /* __Asx_Core_Prerequisites_H__ */
It prevents multiple parses of the same file. Although #pragma once is not standard (in fact, no pragma is - pragmas are reserved for compiler-specific directives), it is quite widely supported (examples: VC++, GCC, Clang, ICC) and can be used without worrying - compilers should ignore unknown pragmas (more or less silently).
6. Elimination of unnecessary dependencies.
A very important point! When code is being refactored, dependencies often change. For example, if you decide to do some optimizations and use pointers/references instead of values (see points 2 and 4 of this answer), some includes can become unnecessary. Consider:
#include "Time.h"
#include "Day.h"
#include "Month.h"
#include "Timezone.h"
class Date
{
protected:
Time time;
Day day;
Month month;
Uint16 year;
Timezone tz;
//...
};
This class has been changed to hide implementation details:
//These are no longer required!
//#include "Time.h"
//#include "Day.h"
//#include "Month.h"
//#include "Timezone.h"
class Date
{
protected:
class Details;
Details* details;
//...
};
It is good to track such redundant includes, whether using your head, built-in tools (like the VS Dependency Visualizer) or external utilities (for example, GraphViz).
Visual Studio also has a very nice option - if you right-click on any file, you will see an option 'Generate Graph of include files' - it will generate a nice, readable graph that can be easily analyzed and used to track down unnecessary dependencies.
(A sample graph generated for my String.h file was shown here.)
As Mr. Yellow indicated in a comment, one of the best ways to improve compile times is to pay careful attention to your use of header files. In particular:
Use precompiled headers for any header that you don't expect to change, including operating system headers, third-party library headers, etc.
Reduce the number of headers included from other headers to the minimum necessary.
Determine whether an include is needed in the header or whether it can be moved to the .cpp file. This sometimes causes a ripple effect because someone else was depending on you to include the header for them, but it is better in the long term to move the include to the place where it's actually needed.
Using forward-declared classes, etc. can often eliminate the need to include the header in which that class is declared. Of course, you still need to include the header in the .cpp file, but that only happens once, as opposed to every time the corresponding header file is included.
Use #pragma once (if it is supported by your compiler) rather than include guard symbols. This means the compiler does not even need to open the header file to discover the include guard. (Of course many modern compilers figure that out for you anyway.)
Once you have your header files under control, check your make files to be sure you no longer have unnecessary dependencies. The goal is to rebuild everything you need to, but no more. Sometimes people err on the side of building too much because that is safer than building too little.
If you've tried all of the above, there's a commercial product that does wonders, assuming you have some available PCs on your LAN. We used to use it at a previous job. It's called IncrediBuild (www.incredibuild.com) and it shrank our build time from over an hour (C++) to about 10 minutes. From their website:
IncrediBuild accelerates build time through efficient parallel computing. By harnessing idle CPU resources on the network, IncrediBuild transforms a network of PCs and servers into a private computing cloud that can best be described as a “virtual supercomputer.” Processes are distributed to remote CPU resources for parallel processing, dramatically shortening build time by up to 90% or more.
Another point that's not mentioned in the other answers: Templates. Templates can be a nice tool, but they have fundamental drawbacks:
The template, and all the templates it depends upon, must be included. Forward declarations don't work.
Template code is frequently compiled several times. In how many .cpp files do you use a std::vector<>? That is how many times your compiler will need to compile it!
(I'm not advocating against the use of std::vector<>, on the contrary you should use it frequently; it's simply an example of a really frequently used template here.)
When you change the implementation of a template, you must recompile everything that uses that template.
With template-heavy code, you often have relatively few compilation units, but each of them is huge. Of course, you can go all-template and have only a single .cpp file that pulls in everything. This would avoid multiple compilations of template code, but it renders make useless: every compilation will take as long as a compilation after a clean.
I would recommend going the opposite direction: Avoid template-heavy or template-only libraries, and avoid creating complex templates. The more interdependent your templates become, the more repeated compilation is done, and the more .cpp files need to be rebuilt when you change a template. Ideally any template you have should not make use of any other template (unless that other template is std::vector<>, of course...).
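One related C++11 mitigation, not covered in the answer above, is explicit instantiation with extern template: all but one translation unit skip instantiating a known-common specialization. A sketch (MyType and the file names are made up):
// my_type_vec.h
#include <vector>
#include "my_type.h"
extern template class std::vector<MyType>; // don't instantiate in includers
// instantiations.cpp - the single TU that pays the instantiation cost
#include "my_type_vec.h"
template class std::vector<MyType>; // explicitly instantiated exactly once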
I am developing a fairly large C++ support library, and have found myself moving towards a header-only approach. In C++ this almost works, because you can implement methods right where you declare them in a class. For templated methods the implementation has to be in the same file anyway, so I find it much easier to just keep the implementation with the definition.
However, there are several times where "sources" must be used. As just one example, circular dependencies sometimes occur and the implementation has to be written outside the class definition. Here is how I am handling it:
//part of libfoo.h
class Bar
{
void CircularDependency(void);
};
#ifdef LIBFOO_COMPILE_INLINE
void Bar::CircularDependency(void)
{
//...
}
#endif
Then the project that uses libfoo would do the following in main.cpp:
//main.cpp
#define LIBFOO_COMPILE_INLINE
#include "libfoo.h"
And in any other .cpp:
//other.cpp
#include "libfoo.h"
The point is that the compile-inline section only gets compiled once (in main.cpp).
And finally my question: is there a name for this idiom, or are there other projects that work this way? It just seems to be a natural outcome of the line between implementation and definition being blurred by templating and class methods. And: are there any reasons why this is a bad idea or why it would potentially not scale well?
Quick aside: I know that many coders, with good reason, prefer their headers to resemble interfaces and not implementations, but IMHO documentation generators are better suited to describing interfaces, because I like to hide private members altogether :-)
You should be able to use the inline keyword if the circular dependency problem is resolved by the time the definition of Bar::CircularDependency() would show up in the libfoo.h header:
//part of libfoo.h
class Bar
{
void CircularDependency(void);
};
// other stuff that 'resolves' the circular dependency
// ...
// ...
inline void Bar::CircularDependency(void)
{
//...
}
That would make your library easier to use (the user wouldn't need to deal with the fact that LIBFOO_COMPILE_INLINE needed to be defined in exactly one place where the header is included).
Circular dependencies are not really a problem, as you can forward-declare that functions are inline. I wouldn't really suggest this, though: while the "source-less" approach initially makes it easier to use a library coming from somewhere, it causes longer compile times and tighter coupling between files. In a huge source base this essentially kills the hope of getting code built in reasonable times. Sure, huge only starts at a couple million lines of code, but who cares about trivial programs...? (and, yes, the place I work at has several tens of millions of lines of code being built into single executables)
My reasons why this is a bad idea:
Increase in compile time
Every compilation unit that includes the header must compile all the included header files in addition to its own source. This will probably increase compile time and can be frustrating when testing your code with small changes. One could argue that the compiler might optimize it, but IMO it could not optimize beyond a point.
Bigger code segment
If all the functions are written inline, the compiler has to put all that code wherever the function is called. This is going to blow up the code segment; it will affect the load time of the program, and the program will take more memory.
Creates dependency in client code with tight coupling
Whenever you change your implementation, every client has to be updated (by recompiling the code). But if the implementation had been put into an independent shared object (.so or .dll), the client would just need to link to the new shared object.
Also I am not sure why one would do this.
//main.cpp
#define LIBFOO_COMPILE_INLINE
#include "libfoo.h"
If at all one has to do this, they could simply put the implementation code in main.cpp itself. In any case, you can define LIBFOO_COMPILE_INLINE in only one compilation unit; otherwise you will get duplicate definitions.
I am actually much interested in developing an idiom for writing cohesive template code. Sometime in the future, the C++ compiler should support writing cohesive templates. By this, I mean that the client should not have to recompile the code whenever the template implementation is modified.
I have seen many explanations on when to use forward declarations over including header files, but few of them go into why it is important to do so. Some of the reasons I have seen include the following:
compilation speed
reducing complexity of header file management
removing cyclic dependencies
Coming from a .NET background I find header management frustrating. I have this feeling I need to master forward declarations, but I have been scraping by on includes so far.
Why cannot the compiler work for me and figure out my dependencies using one mechanism (includes)?
How do forward declarations speed up compilations since at some point the object referenced will need to be compiled?
I can buy the argument for reduced complexity, but what would a practical example of this be?
"to master forward declarations" is not a requirement, it's a useful guideline where possible.
When a header is included, and it pulls in more headers, and yet more, the compiler has to do a lot of work processing a single translation unit.
You can see how much, for example, with gcc -E:
A single #include <iostream> gives my g++ 4.5.2 additional 18,560 lines of code to process.
A #include <boost/asio.hpp> adds another 74,906 lines.
A #include <boost/spirit/include/qi.hpp> adds 154,024 lines, that's over 5 MB of code.
This adds up, especially if carelessly included in some file that's included in every file of your project.
Sometimes going over old code and pruning unnecessary includes improves compilation dramatically just because of that. Replacing includes with forward declarations in the translation units where only references or pointers to some class are used improves this even further.
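You can measure this for your own headers the same way (a sketch, assuming g++ on a Unix-like shell):
// probe.cpp - a minimal file to gauge preprocessed size
#include <iostream>
int main() { return 0; }
// Preprocess only, then count the lines the compiler must parse:
//   g++ -E probe.cpp | wc -l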
Why cannot the compiler work for me and figure out my dependencies using one mechanism (includes)?
It cannot because, unlike some other languages, C++ has an ambiguous grammar:
int f(X);
Is it a function declaration or a variable definition? To answer this question the compiler must know what X means, so X must be declared before that line.
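A sketch of the two readings (the names are made up):
struct X {};
int f(X); // X names a type: this declares a function f taking an X
int Y = 42;
int g(Y); // Y names a variable: this defines an int g, direct-initialized from Y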
Because when you're doing something like this:
bar.h:
class Foo; // forward declaration
class Bar {
    int foo(Foo &);
};
Then the compiler does not need to know how the Foo struct / class is defined; so including the header that defines Foo is useless. Moreover, including the header that defines Foo might also mean including the header that defines some other class that Foo uses; and this might mean including the header that defines some other class, etc.... turtles all the way.
In the end, the file that the compiler works on is almost like the result of copy-pasting all the headers; it gets big for no good reason, and when someone makes a typo in a header file that you don't actually need, compiling your class starts to take waaay too much time (or fail for no obvious reason).
So it's a good thing to give as little info as needed to the compiler.
How do forward declarations speed up compilations since at some point the object referenced will need to be compiled?
1) Reduced disk I/O (fewer files to open, fewer times)
2) Reduced memory/CPU usage
Most translation units need only a name. If you use/allocate the object, you'll need its definition.
This is probably where it will click for you: each file you compile compiles everything visible in its translation unit.
A poorly maintained system will end up including a ton of stuff it does not need - and then that gets compiled for every file that sees it. By using forward declarations where possible, you can bypass that, and significantly reduce the number of times a public interface (and all of its included dependencies) must be compiled.
That is to say: the content of the header won't be compiled once. It will be compiled over and over. Everything in that translation unit must be parsed, checked to be a valid program, checked for warnings, optimized, etc., many, many times.
Including lazily only adds significant disk/CPU/memory load, which turns into intolerable build times for you, while introducing significant dependencies (in non-trivial projects).
I can buy the argument for reduced complexity, but what would a practical example of this be?
Unnecessary includes introduce dependencies as side effects. When you edit an include (necessary or not), then every file which includes it must be recompiled (not trivial when hundreds of thousands of files must be unnecessarily opened and compiled).
Lakos wrote a good book which covers this in detail:
http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620/ref=sr_1_1?ie=UTF8&s=books&qid=1304529571&sr=8-1
Header file inclusion rules specified in this article will help reduce the effort in managing header files.
I use forward declarations simply to reduce the amount of navigation between source files. E.g. if module X calls some glue or interface function F in module Y, then using a forward declaration means writing the function and the call can be done by visiting only 2 places, X.c and Y.c. Not so much of an issue when a good IDE helps you navigate, but I tend to prefer coding bottom-up, creating working code and then figuring out how to wrap it, rather than top-down interface specification... as the interfaces themselves evolve it's handy not to have to write them out in full.
In C (or C++ minus classes) it's possible to truly keep structure details private by only defining them in the source files that use them, and only exposing forward declarations to the outside world - a level of black-boxing that requires performance-destroying virtuals in the C++/classes way of doing things. It's also possible to avoid needing to prototype things (visiting the header) by listing 'bottom-up' within the source files (good old static keyword).
The pain of managing headers can sometimes expose how modular your program is or isn't - if it's truly modular, the number of headers you have to visit and the amount of code & data structures declared within them should be minimized.
Working on a big project with 'everything included everywhere' through precompiled headers won't encourage this real modularity.
Module dependencies can correlate with data flow relating to performance issues, i.e. both i-cache & d-cache issues. If a program involves many modules that call each other & modify data in many random places, it's likely to have poor cache coherency - the process of optimizing such a program will often involve breaking up passes and adding intermediate data... often playing havoc with many 'class diagrams'/'frameworks' (or at least requiring the creation of many intermediate data structures). Heavy template use often means complex pointer-chasing, cache-destroying data structures. In its optimized state, dependencies & pointer chasing will be reduced.
I believe forward declarations speed up compilation because the header file is ONLY included where it is actually used. This reduces the need to open and read the file repeatedly. You are correct that at some point the object referenced will need to be compiled, but if I am only using a pointer to that object in my other .h file, why actually include it? If I tell the compiler I am using a pointer to a class, that's all it needs (as long as I am not calling any methods on that class).
This is not the end of it. Those .h files include other .h files... So, for a large project, opening, reading, and closing all the .h files which are included repeatedly can become a significant overhead. Even with #if include guards, you still have to open and close them a lot.
We practice this at my place of employment. My boss explained this in a similar way, but I'm sure his explanation was clearer.
How do forward declarations speed up compilations since at some point the object referenced will need to be compiled?
Because #include is a preprocessor directive, which means it is handled by brute-force textual inclusion when parsing the file. Your object will be compiled once (compiler) and then linked (linker) as appropriate later.
In C/C++, when you compile, you've got to remember there is a whole chain of tools involved (preprocessor, compiler, linker plus build management tools like make or Visual Studio, etc...)
Good and evil. The battle continues, but now on the battlefield of header files. Header files are a necessity and a feature of the language, but they can create a lot of unnecessary overhead if used in a non-optimal way, e.g. not using forward declarations etc.
How do forward declarations speed up compilations since at some point the object referenced will need to be compiled?
I can buy the argument for reduced complexity, but what would a practical example of this be?
Forward declarations are badass. My experience is that a lot of C++ programmers are not aware of the fact that you don't have to include any header file unless you actually want to use some type, e.g. you need to have the type defined so the compiler understands what you want to do. It's important to try to refrain from including header files in other header files.
Just passing around a pointer from one function to another, only requires a forward declaration:
// someFile.h
class CSomeClass;
void SomeFunctionUsingSomeClass(CSomeClass* foo);
Including someFile.h does not require you to include the header file of CSomeClass, since you are merely passing a pointer to it, not using the class. This means that the compiler only needs to parse one line (class CSomeClass;) instead of an entire header file (which might be chained to other header files, etc.).
This reduces both compile time and link time, and we are talking big optimizations here if you have many headers and many classes.
What are the best ways to speed up compilation time in Visual Studio 2005 for a solution containing mainly C++ projects?
Besides pre-compiling headers, there are number of other things that could be slowing you down:
Virus checking software - can have a nasty impact on builds. If you have a virus checker running try to turn it off and see what sort of improvement you get.
Low RAM - Not enough RAM will cause many more disc reads and slow you down.
Slow HDD - You have to write to disc regardless and a slow drive, like those present in many laptops and lower-end systems, will kill your builds. You can get a faster drive, a RAID array or SSD
Slow processor(s)... of course.
Unlikely, but: check and see if your projects are referencing network shares. Having to pull files across the network with each build would be a big slowdown.
EDIT: A few more thoughts:
If your solution contains a large number of projects you could consider creating other "Sub" solutions that contain only the projects that you're actively working on. This possibility depends on how interrelated your projects are.
The project builds can have pre and post build event commands associated with them. Check the properties of your projects to see if there are any costly build events specified.
You can create a precompiled header from the files which normally won't change frequently. This can dramatically reduce your compilation time.
At the code level, it's helpful to minimize the number of headers included by other headers. Repeatedly loading and reading files can have a big impact. Instead, forward declare things wherever possible. For example:
#include <iosfwd>
#include <string>
#include "M.h"
#include "R2.h"
#include "A2.h"
class M2;
class A;
class R;
template<class C> class T;
class X{
M member; // definition of M is required
M2 *member2; // forward declaration is sufficient for pointers & references
public:
// forward declaration of argument type A is sufficient
X(const A &arg);
// definition required for most std templates and specializations
X(const std::string &arg);
// forward declaration of your own templates is usually okay...
void f(T<int> t);
// unless you're defining a new one. The complete definition
// must be present in the header
template<class Z>
void whatever(){
// blah blah
}
R f(); // forward declaration of return type R is sufficient
// definitions of instantiated types R2 and A2 are required
R2 g(A2 arg){
return arg.h();
}
// ostream is forward-declared in iosfwd
friend std::ostream &operator << (std::ostream &o, const X &x);
};
Users of this class will have to #include whatever files provide the class definitions if they actually call any of X(const A&), f(T<int>), f(), or operator <<.
Templates will almost certainly add to inclusion overhead. This is generally worth their power, in my opinion. When you make your own templates, the complete definition must reside in the header file; it cannot go in a cpp file for separate compilation. Standard templates can't be forward declared, because implementations are allowed to provide additional template parameters (with default arguments) and forward declarations of templates must list all of the template parameters. The iosfwd header forward-declares all of the standard stream templates and classes, among other things. Your own templates can be forward-declared as long as you don't instantiate any specializations.
Precompiled headers can also help, but most compilers limit the quantity that you can include in a source file (Visual Studio limits you to one), so they aren't a panacea.
Unity builds.
Precompiled headers.
Windows 7 builds a lot faster than Windows XP (if that's relevant to you).
Turn off anti-virus scanners.
Minimise dependencies (i.e. #include's from header files).
Minimise amount of template code.
Precompiled headers can be helpful if you include complex headers (STL, Boost for example) directly in a lot of files.
Make sure your build does not access the network inadvertently, or intentionally.
If you have a multicore CPU, use the /MP compiler option.
Edit: What techniques can be used to speed up C++ compilation times?
If you have a lot of projects in the solution remove some of them. If they're referenced as projects you can reference the binary output instead (use the browse button in the add reference dialog). This removes a lot of dependency checking in the solution. It's not ideal but it does speed things up.
The solution we used was IncrediBuild. Just throw more hardware at the problem. In addition to all the developer machines (fairly powerful themselves) we had a number of 4-core servers doing nothing but compiling.
Change your Solution Platform to the "Any CPU" option at the top of Visual Studio; then your program's build speed will definitely be increased.
I've just started learning Qt, using their tutorial. I'm currently on tutorial 7, where we've made a new LCDRange class. The implementation of LCDRange (the .cpp file) uses the Qt QSlider class, so in the .cpp file is
#include <QSlider>
but in the header is a forward declaration:
class QSlider;
According to Qt,
This is another classic trick, but one that's used much less often. Because we don't need QSlider in the interface of the class, only in the implementation, we use a forward declaration of the class in the header file and include the header file for QSlider in the .cpp file.
This makes the compilation of big projects much faster, because the compiler usually spends most of its time parsing header files, not the actual source code. This trick alone can often speed up compilations by a factor of two or more.
Is this worth doing? It seems to make sense, but it's one more thing to keep track of - I feel it would be much simpler just to include everything in the header file.
Absolutely. The C/C++ build model is ...ahem... an anachronism (to say the least). For large projects it becomes a serious PITA.
As Neil notes correctly, this should not be the default approach for your class design, don't go out of your way unless you really need to.
Breaking circular include references is the one case where you have to use forward declarations.
// a.h
#include "b.h"
struct A { B * b; };
// b.h
#include "a.h" // circular include reference
struct B { A * a; };
// Solution: break the circular reference by forward-declaring B or A
Reducing rebuild time - Imagine the following code
// foo.h
#include <QSlider>
class Foo
{
    QSlider * someSlider;
};
now every .cpp file that directly or indirectly pulls in foo.h also pulls in QSlider.h and all of its dependencies. That may be hundreds of .cpp files! (Precompiled headers help a bit - and sometimes a lot - but they turn disk/CPU pressure into memory/disk pressure, and thus soon hit the "next" limit)
If the header requires only a reference declaration, this dependency can often be limited to a few files, e.g. foo.cpp.
Reducing incremental build time - The effect is even more pronounced, when dealing with your own (rather than stable library) headers. Imagine you have
// bar.h
#include "foo.h"
class Bar
{
Foo * kungFoo;
// ...
};
Now if most of your .cpp files need to pull in bar.h, they also indirectly pull in foo.h. Thus, every change to foo.h triggers a build of all these .cpp files (which might not even need to know Foo!). If bar.h uses a forward declaration for Foo instead, the dependency on foo.h is limited to bar.cpp:
// bar.h
class Foo;
class Bar
{
Foo * kungFoo;
// ...
};
// bar.cpp
#include "bar.h"
#include "foo.h"
// ...
It is so common that it is a pattern - the PIMPL pattern. Its use is two-fold: first, it provides true interface/implementation isolation; second, it reduces build dependencies. In practice, I'd weigh their usefulness 50:50.
You need a reference or pointer in the header; you can't have a direct instantiation of the dependent type. This limits the cases where forward declarations can be applied. If you do it explicitly, it is common to use a utility class (such as boost::scoped_ptr) for that.
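A minimal sketch of that (using boost::scoped_ptr, as the answer suggests; std::unique_ptr works the same way). Note that the destructor must be defined in the .cpp, where the type is complete:
// widget.h
#include <boost/scoped_ptr.hpp>
class WidgetImpl; // forward declaration only
class Widget
{
public:
    Widget();
    ~Widget(); // must be defined where WidgetImpl is complete
private:
    boost::scoped_ptr<WidgetImpl> impl;
};
// widget.cpp
#include "widget.h"
#include "widget_impl.h" // full definition of WidgetImpl
Widget::Widget() : impl(new WidgetImpl) {}
Widget::~Widget() {} // scoped_ptr destroys a complete type here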
Is build time worth it? Definitely, I'd say. In the worst case build time grows polynomially with the number of files in the project. Other techniques - like faster machines and parallel builds - can provide only percentage gains.
The faster the build, the more often developers test what they did, the more often unit tests run, the faster build breaks can be found and fixed, and the less often developers end up procrastinating.
In practice, managing your build time, while essential on a large project (say, hundreds of source files), still makes a "comfort difference" on small projects. Also, adding improvements after the fact is often an exercise in patience, as a single fix might shave only seconds (or less) off a 40-minute build.
I use it all the time. My rule is: if it doesn't need the header, then I put a forward declaration ("use headers if you must, use forward declarations if you can"). The only thing that sucks is that I need to know how the class was declared (struct/class, and if it is a template I need its parameters, ...). But in the vast majority of cases, it just comes down to "class Slider;" or something along those lines. If something requires more hassle to be declared, one can always provide a special forward-declaration header, like the Standard does with iosfwd.
Not including the header file will not only reduce compile time but also will avoid polluting the namespace. Files including the header will thank you for including as little as possible so they can keep using a clean environment.
This is the rough plan:
/* --- --- --- Y.hpp */
class X;
class Y {
X *x;
};
/* --- --- --- Y.cpp */
#include <x.hpp>
#include <y.hpp>
...
There are smart pointers that are specifically designed to work with pointers to incomplete types. One very well known one is boost::shared_ptr.
Yes, it sure does help. Another thing to add to your repertoire is precompiled headers if you are worried about compilation time.
Look up FAQ 39.12 and 39.13
The standard library does this for some of the iostream classes in the standard header <iosfwd>. However, it is not a generally applicable technique - notice there are no such headers for the other standard library types, and it should not (IMHO) be your default approach to designing class hierarchies.
Although this seems to be a favourite "optimisation" for programmers, I suspect that like most optimisations, few of them have actually timed the build of their projects both with and without such declarations. My limited experiments in this area indicate that the use of precompiled headers in modern compilers makes it unnecessary.
There is a HUGE difference in compile times for larger projects, even ones with carefully managed dependencies. You'd better get into the habit of forward-declaring and keeping as much as possible out of header files, because at a lot of software shops that use C++ it's required. The reason you don't see it all that much in the standard header files is that those make heavy use of templates, at which point forward-declaring becomes hard. For MSVC you can use /P to take a look at how the preprocessed file looks before actual compilation. If you haven't done any forward declaration in your project, it would probably be an interesting experience to see how much extra processing needs to be done.
In general, no.
I used to forward declare as much as I could, but no longer.
As far as Qt is concerned, you may notice that there is a <QtGui> include file that will pull in all the GUI widgets. Also, there are <QtCore>, <QtWebKit>, <QtNetwork> etc. - a header file for each module. It seems the Qt team believes this is the preferred method also. They say so in their module documentation.
True, the compilation time may be increased. But in my experience it's just not that much. And if it were, using precompiled headers would be the next step.
When you write ...
#include "foo.h"
... you thereby instruct a conventional build system: "Any time there is any change whatsoever in the library file foo.h, discard this compilation unit and rebuild it, even if all that happened to foo.h was the addition of a comment, or the addition of a comment to some file which foo.h includes; even if all that happened was some ultra-fastidious colleague re-balancing the curly braces; even if nothing happened other than a pressured colleague checking in foo.h unchanged and inadvertently changing its timestamp."
Why would you want to issue such a command? Library headers, because in general they have more human readers than application headers, have a special vulnerability to changes that have no impact on the binary, such as improved documentation of functions and arguments or the bump of a version number or copyright date.
The C++ rules allow a namespace to be re-opened at any point in a compilation unit (unlike a struct or class), in order to support forward declaration.
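For example (a sketch; the names are made up), you can re-open a library's namespace just to forward-declare one of its types:
// my_header.h
namespace lib // re-opening the library's namespace...
{
    class Widget; // ...only to forward-declare one of its classes
}
class Holder
{
    lib::Widget *w; // a pointer member, so the forward declaration suffices
};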
Forward declarations are very useful for breaking circular dependencies, and sometimes may be OK to use with your own code, but using them with library code may break the program on another platform or with other versions of the library (this will happen even with your own code if you're not careful enough). IMHO it's not worth it.