I understand the idea that precompiling headers can speed up build times, but there are a handful of questions that have thus far prevented me from grokking them.
Why does using precompiled headers require the developer to configure anything?
Why can't the compiler (or linker/IDE?) just have individual precompiled header object files, in the same way it does for the source files (i.e. .obj files)? Dependencies are indicated by which source/header files include which other files, and it can already detect when source files change, so a regular build is normally not a full rebuild. Instead of requiring me to specify which headers get precompiled, etc., why isn't this all just always automatically on, and transparent to the developer?
As I understand the precompiled headers methodology in Visual Studio, the idea is that you get this one big header file (stdafx.h) that includes all the other header files that you want to be precompiled, and then you include that in all your source files that use any of those headers.
a. Am I understanding correctly?
b. Doesn't this break encapsulation? Each source file effectively includes various (likely unrelated) items it doesn't need, which makes it harder to tell which libraries you're actually using and what comes from where.
It seems to me that this implementation forces bad practices. What am I missing?
How do I utilize precompiled headers in Visual Studio (2013)?
Is there a cross-platform way to use or facilitate precompiled headers?
Thanks.
Why can't the compiler (or linker/IDE?) just have individual precompiled header object files, in the same way it does for the source files (i.e. .obj files)?
The answer to 1 and 2 lies in the way precompiled headers work. Assume you have a my_source_file.c:
#include "header1.h"
#include "header2.h"
int func(int x) { return x+1; }
my_other_source_file.c:
#include "header1.h"
int func2(int x) { return x-1; }
When you call compiler.exe my_source_file.c, the compiler starts parsing your file. All the internal variables of the compiler (things like which types have been defined, which variables have been declared, etc.) are called the compiler state.
After it has parsed header1.h it can save the state to the disk. Then, when compiling my_other_source_file.c, instead of parsing header1.h again, it can just load the state and continue.
That state is a precompiled header. It is literally just a dump of all the compiler variables in the moment after it has parsed the entire header.
Now, the question is: why can't you have two state dumps, one for header1.h and one for header2.h, and just load them both? Well, the states are not independent. The second file would be the state of header1.h + header2.h. So, what is usually done is that you have one state dump taken after all the common header files have been compiled, and use that.
In theory, you could have one for every combination and use the appropriate one, but that is much more hassle than it's worth.
Some things that are side effects of how this is done:
Different compilers (even different minor versions) have different internal variables, so you can't reuse the precompiled headers across them.
Since the dumped state started from the top of the file, your precompiled header must be the first include. There must be nothing that could influence the state (i.e. no #defines, typedefs, or declarations) before including it.
Any defines passed on the command line (-DMY_DEFINE=0) when compiling a source file will not be picked up inside the already-precompiled header.
Any defines passed on the command line while precompiling will be in effect for all source files that use the precompiled header.
For 3), refer to MSFT documentation.
For 4), most compilers support precompiled headers, and they generally work in the same way. You could configure your makefiles/build scripts to always precompile a certain header (e.g. stdafx.h) which would include all the other headers. As far as your source code goes, you'd always just #include "stdafx.h", regardless of the platform.
Why can't the compiler (or linker/IDE?) just have individual
precompiled header object files
C and C++ have no concept of modules. The traditional compiler has a preprocessor phase (which may be invoked as a separate program) that will include the files and the whole thing will get compiled to intermediate code. The compiler per se does not see includes (or comments, or trigraphs, etc.).
Add to this that the behaviour of a header file can change depending on the context in which it is included (think macros, for example) and you end up with either many precompiled versions of the same header, or an intermediate form that is basically the language itself.
Am I understanding correctly?
Mostly. The actual name is irrelevant, as it can be specified in the project options. stdafx.h is a relic of the early development of MFC, which was originally named AFX (Application Framework eXtensions). The preprocessor also treats includes of the precompiled header differently, as they are not looked up in the include paths. If the name matches what is in the project settings, the .pch is used automatically.
Doesn't this break encapsulation
Not really. Encapsulation is an object-oriented feature and has nothing to do with include files. It might increase coupling and dependencies by making some names available across all files, but in general, this is not a problem. Most includes in a precompiled header are standard headers or third-party libraries, that is, headers that may be large and fairly static.
As an example, a project I'm currently working on includes GTK, standard headers, boost and various internal libraries. It can be assumed that these headers never change. Even if they changed once a day, I probably compile every minute or so on average, so it is more than worth it.
The fact that all these names are available project-wide makes no difference. What would I gain by including boost/tokenizer.hpp in only one .cpp file? Perhaps some intellectual satisfaction of knowing that I can only use boost::char_separator in that particular file. But it certainly creates no problem. All these headers are part of a collection of utilities that my program can use. I am completely dependent on them, because I made a design decision early on to integrate them. I am tightly coupled with them by choice.
However, this program needs to access system-specific graphical facilities, and it needs to be portable on (at least) Debian and Windows. Therefore, I centralized all these operations in two files: windows.cpp and x11.cpp. They both include their own X11/Xlib.h and windows.h. This makes sure I don't use non-portable stuff elsewhere (which would however quickly be caught as I keep switching back and forth) and it satisfies my obsession with design. In reality, they could have been in the precompiled header. It doesn't make much of a difference.
Finally, none of the headers that are part of this specific program are in the precompiled header. This is where coupling and dependencies come into play. Reducing the number of available names forces you to think about design and architecture. If you try to use something and get an error saying that that name isn't declared, you don't blindly include the file. You stop and think: does it make sense for this name to be available here, or am I mixing up my user interface and data acquisition? It helps you separate the various parts of your program.
It also serves as a "compilation firewall", where modifying a header won't require you to rebuild the whole thing. This is more of a language issue than anything else, but in practice, it's still damn useful.
Trying to localize the GTK includes, for example, would not be helpful: all of my user interface uses it. I have no intention of supporting a different kind of toolkit. Indeed, I chose GTK because it was portable and I wouldn't have to port the interface myself.
What would be the point of only including the GTK headers in the user interface files? Obviously, it will prevent me from using GTK in files where I don't need to. But this is not solving any problem. I'm not inadvertently using GTK in places I shouldn't. It only slows down my build time.
How do I utilize precompiled headers in Visual Studio
This has been answered elsewhere. If you need more help, I suggest you ask a new question, as this one is already pretty big.
Is there a cross-platform way to use or facilitate precompiled headers?
A precompiled header is a feature provided by your compiler or build system. It is not inherently tied to a platform. If you are asking whether there is a portable way of using precompiled headers across compilers, then no. They are highly compiler-dependent.
The usual approach is to have one precompiled header in a project that contains the most common includes.
The problem is that it is either too small or too big. When it is too small, it doesn't cover all the headers used, so those still have to be processed over and over in every module. When it is too large, it slows down compilation too much, for two reasons:
The project needs to be recompiled too often when you change something in a header contained in the precompiled header.
The precompiled header is too large, so including it in every file actually slows down compilation.
What if I made all of the header files in a project precompiled? This would add some additional compiler work to precompile them, but then it would work very nicely: no header would have to be processed twice (even preparing a precompiled header would use precompiled headers recursively), no extra stuff would have to be put into modules, and only the modules that actually need to be recompiled would be recompiled. In other words, for O(N) extra work I would (theoretically) reduce the O(N^2) complexity of C++ includes. Preprocessing would drop to O(N); processing the precompiled data would still be O(N^2), but at least minimised.
Has anyone tried this? Can it boost compile times in real-life scenarios?
With GCC, the reliable way to use precompiled headers is to have one single (big) header (which #include-s many standard headers ...), and perhaps include some small header after the precompiled one.
See this answer for a more detailed explanation (for GCC specifically).
My own experience with GCC and Clang is that you can only give a single precompiled header per compilation. See also the GCC documentation; I quote:
A precompiled header file can be used only when these conditions apply:
Only one precompiled header can be used in a particular compilation.
...
In practice, it's possible to compile every header to a precompiled header. (This is recommended if you want to verify that everything is included, but not if you want to speed up compilation.)
Based on your code, you can decide to use a different precompiled header based on the code that needs to be compiled. However, in general, it's a balancing act between compile time of the headers, compile-time of the CPP files and maintenance.
Adding a simple precompiled header that already contains several standard headers like string, vector, map, utility ... can already speed up your compilation by a remarkable percentage. (A long time ago, I noticed a 15-20% improvement on a small project.)
The main gains you get from a precompiled header are that the compiler:
only has to read one file instead of many, which improves disk access
reads a binary format that's optimized for reading, instead of plain text
doesn't need to redo all of the error checking, as that was already done when the header was precompiled
Even if you add a few headers that you don't use everywhere, it can still be much faster.
Lately, I also found the Clang build analyzer. It isn't ideal for big projects (see the issue on GitHub), though it can give you some insight into where the time is being spent and what you can improve in the codebase.
In all fairness, I don't use precompiled headers at this point in time. However, I do want to see it enabled on the project I'm working on.
Some other interesting reads:
https://medium.com/@unicorn_dev/speeding-up-the-build-of-c-and-c-projects-453ce85dd0e1
https://llunak.blogspot.com/2019/05/why-precompiled-headers-do-not-improve.html
https://www.bitsnbites.eu/faster-c-builds/
Is there some kind of generally agreed-upon standard for how C++ libraries should be structured, file-wise? I'm thinking my library would be static. It's fairly expansive, so right now, I have classes split into different files. Each source file has a header file to go with it.
I don't want the end user to have to #include every one of my header files, obviously. I thought about having a single header file named "libraryname.h" that just #includes all the header files for the user, but I've never seen any other libraries do that, and I fear that's for a reason.
I've seen libraries use a single header file and multiple source files, which seems simple but also a bit cluttered. I've also seen libraries completely nix the idea of separating source and header files and just have one file with #define guards that has all the code. That seems like a pretty good way to dramatically increase compile time, which I'd like to avoid, but if there's a really compelling reason to do libraries that way, I'd love to know it. Any advice on what style to use would be appreciated!
Having a single header file really slows down your build (i.e. a single change in one of your class declarations requires a full library build).
Also, you'll find that most source files will not need all the headers. You can use forward declarations too, which helps.
If your compiler supports precompiled headers, then that is the place to put all standard C++ library includes. But, don't put your headers in there or you'll force a whole library rebuild on a single change.
I've been reading about the C++ modules proposal (latest draft) but I don't fully understand what problem(s) it aims to solve.
Is its purpose to allow a module built by one compiler to be used by any other compiler (on the same OS/architecture, of course)? That is, does the proposal amount to standardizing the C++ ABI?
If not, is there another proposal being considered that would standardize the C++ ABI and allow compilers to interoperate?
Pre-compiled headers (PCH) are special files that certain compilers can generate for a .cpp file. What they are is exactly that: pre-compiled source code. They are source code that has been fed through the compiler and built into a compiler-dependent format.
PCHs are commonly used to speed up compilation. You put commonly used headers in the PCH, then just include the PCH. When you do a #include on the PCH, your compiler does not actually do the usual #include work. It instead loads these pre-compiled symbols directly into the compiler. No running a C++ preprocessor. No running a C++ compiler. No #including a million different files. One file is loaded and symbols appear fully formed directly in your compiler's workspace.
I mention all that because modules are PCHs in their perfect form. PCHs are basically a giant hack built on top of a system that doesn't allow for actual modules. The purpose of modules is ultimately to be able to take a file, generate a compiler-specific module file that contains symbols, and then some other file loads that module as needed. The symbols are pre-compiled, so again, there is no need to #include a bunch of stuff, run a compiler, etc. Your code says, import thing.foo, and it appears.
Look at any of the STL-derived standard library headers. Take <map> for example. Odds are good that this file is either gigantic or has a lot of #inclusions of other files that make the resulting file gigantic. That's a lot of C++ parsing that has to happen. It must happen for every .cpp file that has #include <map> in it. Every time you compile a source file, the compiler has to recompile the same thing. Over. And over. And over again.
Does <map> change between compilations? Nope, but your compiler can't know that. So it has to keep recompiling it. Every time you touch a .cpp file, it must compile every header that this .cpp file includes. Even though you didn't touch those headers or source files that affect those headers.
PCH files were a way to get around this problem. But they are limited, because they're just a hack. You can only include one per .cpp file, because it must be the first thing included by .cpp files. And since there is only one PCH, if you do something that changes the PCH (like add a new header to it), you have to recompile everything in that PCH.
Modules have essentially nothing to do with cross-compiler ABI (though having one of those would be nice, and modules would make it a bit easier to define one). Their fundamental purpose is to speed up compile times.
Modules are what Java, C#, and a lot of other modern languages offer. They immensely reduce compile time simply because the code that's in today's header doesn't have to be parsed over and over again, every time it's included. When you say #include <vector>, the content of <vector> gets copied into the current file. #include really is nothing else than copy and paste.
In the module world, you simply say import std.vector; for example and the compiler loads the query/symbol table of that module. The module file has a format that makes it easy for the compiler to parse and use it. It's also only parsed once, when the module is compiled. After that, the compiler-generated module file is just queried for the information that is needed.
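For illustration, this is roughly what the module syntax looks like (modules were eventually standardized in C++20). Compiler flags and file extensions vary and support is uneven, so this sketch only writes the files without building them; math and add are invented names:

```shell
# A module interface unit: exported symbols are compiled once into a
# binary module file that importers query.
cat > math.cppm <<'EOF'
export module math;
export int add(int a, int b) { return a + b; }
EOF

# A consumer: no textual inclusion, just a query of the compiled module.
cat > use_math.cpp <<'EOF'
import math;
int main() { return add(2, 2) == 4 ? 0 : 1; }
EOF

# With GCC one would build roughly like this (flags differ per compiler):
#   g++ -std=c++20 -fmodules-ts -c math.cppm
#   g++ -std=c++20 -fmodules-ts use_math.cpp math.o -o use_math
```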
Because module files are compiler-generated, they'll be pretty closely tied to the compiler's internal representation of the C++ code (AST) and will as such most likely not be portable (just like today's .o/.so/.a files, because of name mangling etc.).
Modules in C++ primarily have to be an improvement over today's solution, in which a library consists of a *.so file and a *.h file with the API. They have to solve the problems that today's #includes have, namely that they:
require macro guards (macros that prevent definitions from being provided multiple times)
are strictly text-based (so they can be tricked, and under normal conditions they are reinterpreted in every translation unit, which also gives the same header a chance to look different in different compilation units that are later linked together)
do not distinguish between dependent libraries that are merely used as tools and those that are derived from (especially if the header provides inline function templates)
Despite what Xeo says, modules do not exist in Java or C#. In fact, in those languages "loading modules" means: "OK, here you have the CLASSPATH; search through it to find whatever modules may provide the symbols that the source file actually uses." The "import" declaration in Java is not a "module request" at all; it is the same as "using" in C++ ("import ns.ns2.*" in Java is the same as "using namespace ns::ns2" in C++). I don't think such a solution can be used in C++. The closest approximations I can imagine are packages in Vala or modules in Tcl (those from version 8.5).
I imagine that C++ modules will probably be neither cross-platform nor dynamically loaded (that would require a dedicated C++ dynamic module loader; not impossible, but hard to define today). They will definitely be platform-dependent and should also be configurable when requested. But a stable C++ ABI is practically only required within the scope of one system, just as it is with the C++ ABI now.
How can I check if gcc precompiled headers are supported with autoconf? Is there a macro like AC_CHECK_GCH? The project I'm working on has a lot of templates and includes, I have tried writing a .h with the most commonly used includes and compiling it manually. It could be nice to integrate it with the rest of autotools.
It's not clear what you are hoping to accomplish. Are you hoping to distribute a precompiled header in your tarball? Doing so would be almost completely useless. (I actually think it would be completely useless, but I say "almost" because I might be missing something.)
The system headers on the target box (the machine on which your project is being built) are almost certainly different than the ones on your development box. If they are the same, then there's no need to be using autoconf.
If the user of your package happens to be using gcc and wants to use precompiled headers, then they will put the .gch files in the appropriate location and gcc will use them. You don't need to do anything in your package.
I work as the Mac coder on a C++ application that I share with PC coders who use VS2008. When they make changes to a source file that require a non-included header file, they get no warnings, as most of their headers are in a precompiled header. What setting can they use to be warned that they failed to add the required include?
That would make my life easier, as GCC requires the includes to actually be present.
Er... Your question as stated is based on an incorrect premise.
All headers in VS compiler are required to be included. There's no way around it.
The precompiled headers feature does not affect this general principle in any way. The only difference is that in projects that plan to use precompiled headers, the headers are normally included indirectly, through a special intermediate header file. Nevertheless, you can't just forget to include some header file, regardless of whether the project is using precompiled headers or not. All headers must be included in all cases, directly or indirectly.
A project that's using precompiled headers will compile perfectly fine on any compiler that knows nothing about any precompiled headers at all.
So, the situation you describe simply cannot happen in practice. If it does, then you must be leaving out some important detail about the problem.
I would make the precompiled headers conditional on a define that is only present in the PC code, or vice versa for the Mac code.