I have read on a Codeforces blog that if we add #include <bits/stdc++.h> to a C++ program, there is no need to include any other header files. How does #include <bits/stdc++.h> work, and is it OK to use it instead of including individual header files?
It is basically a header file that includes every standard library and STL header. The only purposes I can see for it are testing and education.
See e.g. the GCC 4.8.0 <bits/stdc++.h> source.
Using it would pull in a lot of unnecessary stuff and increase compilation time.
Edit: As Neil says, it's an implementation file for precompiled headers. If you set it up for precompilation correctly, it could in fact speed up compilation, depending on your project. (https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html)
I would, however, suggest that you take the time to learn about each of the standard library headers and include them separately instead, and not use "super headers" except for precompilation purposes.
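For the precompilation use case, a minimal sketch (the file name pch.h is made up; the .gch mechanism is the one described in the GCC documentation linked above):

// pch.h -- a project-specific "super header", used only for precompilation
#include <algorithm>
#include <string>
#include <vector>

// Precompile it once:  g++ -std=c++17 pch.h    (this emits pch.h.gch)
// Then build normally: g++ -std=c++17 main.cpp
// GCC transparently uses pch.h.gch wherever a source file does #include "pch.h".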
#include <bits/stdc++.h> is an implementation file for a precompiled header.
From a software engineering perspective, it is a good idea to minimize what you include. Using <bits/stdc++.h> actually pulls in a lot of files your program may not need, thus increasing both compile time and program size unnecessarily. [Edit: as pointed out by @Swordfish in the comments, the output program's size remains unaffected. Still, it's good practice to include only the libraries you actually need, unless it's a competitive programming contest.]
But in contests, using this file is a good idea when you want to reduce time wasted on chores, especially when your rank is time-sensitive.
It works in most programming contest environments, including ACM-ICPC (sub-regionals, regionals, and world finals) and many online judges.
The disadvantages of it are that it:
increases the compilation time.
uses an internal, non-standard header file of the GNU C++ library, and so will not compile with MSVC, Xcode, and many other compilers
That header file is not part of the C++ standard, is therefore non-portable, and should be avoided.
Moreover, even if there were some catch-all header in the standard, you would want to avoid it in favor of specific headers, since the compiler has to actually read in and parse every included header (including recursively included headers) every single time the translation unit is compiled.
Unfortunately that approach is not portable C++ (so far).
All standard names are in namespace std, and moreover you cannot know which names are NOT defined by including any given header (in other words, it's perfectly legal for an implementation to declare the name std::string directly or indirectly when you use #include <vector>).
Despite this, the language requires you to know and tell the compiler which standard header includes which part of the standard library. This is a source of portability bugs: if you forget, for example, #include <map> but use std::map, the program may compile anyway, silently and without warnings, on a specific version of a specific compiler, and you may get errors only later when porting to another compiler or version.
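As a concrete sketch of how this bites (libstdc++ is one implementation where this particular snippet happens to compile; others may reject it):

// Note: <string> is never included.
#include <iostream>   // on some implementations this indirectly pulls in <string>

int main()
{
    std::string s = "hello";   // may compile here without #include <string>...
    std::cout << s << '\n';    // ...and fail on another compiler or version;
    return 0;                  // the fix is to add #include <string> explicitly
}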
In my opinion there are no valid technical excuses for why this is necessary for the general user: the compiler binary could have the whole standard namespace built in, and this could actually improve performance even more than precompiled headers do (e.g. by using perfect hashing for lookups, and by removing standard-header parsing or loading/demarshalling altogether).
The use of standard headers simplifies the life of those who build compilers or standard libraries, and that's all. It's not something that helps users.
However, this is the way the language is defined, and you need to know which header defines which names, so plan for some extra neurons to be burnt in pointless configurations to remember that (or try to find an IDE that automatically adds the standard headers you use and removes the ones you don't... a reasonable alternative).
Why must I #include various libraries, such as iostream?
I have learned that libraries such as iostream are part of the C++ Standard Library, "which [is] written in the core language and part of the C++ ISO Standard itself."
This is pretty neat, but if they are so 'standard' and such a core part of the language, why do I need to #include them? Why is it that I can write a program that uses +, -, *, /, or other things without #including any libraries? [As a person completely unqualified to have an opinion on this matter,] I feel like being able to add two operands together is just as essential as being able to retrieve input from the user (std::cin).
What exactly is the issue? Is it that my specific compiler (I've been using Visual Studio exclusively so far) doesn't have those functions built in, and the designers chose to build in simple arithmetic operators instead? I understand the need to do this for an obscure or lesser-known library, but not so much for iostream.
Secondly, how can I 'look inside' a standard header file, such as iostream? How can I inspect the source code of common header files to see how string compare or cin works? I understand the beauty of abstraction and not having to know the nitty gritty details, but I'm obsessed with knowing how everything works.
Third, are there penalties associated with #including header files? For example, if I #include iostream, but I only use a single operator, such as std::cin, is the entire header file included in the final product? Would my program/file or the machine running it be burdened with the unused portions of iostream?
When looking at a C++ "implementation", you are really looking at two separate parts: the compiler and the library. Basics like integer arithmetic, pointers, references and the like are defined by the core language and implemented in the compiler core.
Higher level functions are defined in libraries. Of these, the Standard Library is only one, and defined in the same standard document as the language core.
But the standard library can be, and often is, a separate project that only happens to work in tandem with the compiler. You might use the compiler with a different library implementation, or may use the library implementation with a different compiler.
That you have to #include headers before you can use the library has to do with efficiency. Both the compiler and you (or any other developer reading your source later on) only have to "know" about those parts of any libraries that are actually #included by your source. The compiler has less looking-up to do when parsing your code, and a developer knows pretty exactly which part of the documentation to look at for further information -- especially if you resist the urge to write using namespace ... in your source. This becomes even more important later on, when you will be using several different libraries with different namespaces.
You should not use the header files for information. Highly abstracted C++ source can be somewhat of a handful to understand, and the headers are usually not written for the end user anyway. You are very much better off by referring to actual documentation, like cppreference.com, instead.
As for the third part of your question, including a header will introduce all the identifiers from that header (and, with C++, likely several other headers as well) into your translation unit. (With C, the library implementation is actually forbidden to introduce any identifiers from other standard headers unless that header was explicitly included.) This should not bother you, though. Things that are not needed do not result in actual code in your resulting binary. With really small example programs and <iostream> specifically, the "burden" might seem comparatively large because something "simple" like std::cin requires significant support (files, streams, buffers and the like). But this effect rapidly diminishes as your program becomes larger, because whatever support code is added to your binary is only added once.
And finally, most of the above is the same in other languages as well, with only minor differences. Java, C# or Python might not have "headers" per se, but in the end the issues with the standard library being separate from the compiler, identifier namespacing, and (as far as compiled languages are concerned) effect on resulting binary are similar across the board.
This is a follow-up question to this one, which says that:
In C++, unlike C, standard headers are allowed to #include other standard headers.
Is there any way to know which headers were automatically included, since it may be difficult to guess which symbols are defined in which headers?
Motivation: My homework compiles and works correctly on my computer, but the TA told me it was not compiling and needed a couple of extra headers (mutex and algorithm) to compile. How can I be sure that the code I submit in the future will be bulletproof?
My compiler is not giving any warning about implicit declaration.
I'm using clang++ -std=c++11 to compile my code.
The standard lists the symbols made available by each header. There are no guarantees beyond that: a header may declare symbols it isn't required to, and you cannot rely on any symbol it isn't required to declare. You'll need to include the right header for every name you use; you should not rely on indirect includes.
On the positive side, there is no case in the standard library where a header requires you to include another header before it.
If you want to know what other headers a particular header file pulls in, the easiest way is to run the file through the compiler's preprocessor phase only, instead of compiling it fully. For example, if you want to know what <iostream> pulls in, create a file containing only:
#include <iostream>
then preprocess it. With gcc, the -E option runs the preprocessor only, without compiling the file, and dumps the preprocessed file to standard output. The resulting output begins with:
# 1 "t.C"
That's my one-line source file.
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
Apparently, gcc automatically pulls in this header file, no matter what. This can be ignored.
# 1 "<command-line>" 2
# 1 "t.C"
# 1 "/usr/include/c++/6.2.1/iostream" 1 3
Ok, now we finally get to the actual #include statement in my one-line source file. That's where my <iostream> is:
# 36 "/usr/include/c++/6.2.1/iostream" 3
# 37 "/usr/include/c++/6.2.1/iostream" 3
# 1 "/usr/include/c++/6.2.1/x86_64-redhat-linux/bits/c++config.h" 1 3
Ok, so iostream itself #includes this "c++config.h" header file, obviously an internal compiler header.
If I keep going, I can see that <iostream> pulls in, unsurprisingly, <ios>, <type_traits>, as well as C header files like stdio.h.
It shouldn't be too hard to write a quick little script that takes a header file, runs the compiler in preprocessing phase, and produces a nice, formatted list of all header files that got pulled in.
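As a shortcut, GCC can also print this list directly: the -H option writes every header it opens to standard error, indented by inclusion depth, so the same information can be had without post-processing the -E output:

g++ -H -fsyntax-only t.C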
As far as I know, there is no way to do what you want.
If you try to compile your code on several example platforms, and it is successful, there is a greater chance that it will compile on any other platform, but there is no easy way to be sure.
In my experience, the MinGW C++ headers #include each other less, so MinGW can be a practical tool for checking portability.
This question has considerable overlap with Are there tools that help organizing #includes?, and the latter would nowadays be considered "off topic" because it asks for external tools/resources. Strictly speaking, this question is only about a way to find indirect includes, but the goal is certainly the same.
The problem of using "the right" include statements is distressingly hard. What is described in the question is mainly about indirect inclusion: one header happens to provide a certain symbol, but on another machine, with another compiler and another set of headers, that symbol might not be provided.
Some of the comments and other answers suggested seemingly pragmatic but unrealistic approaches:
Try it out with another compiler: This is brittle, and does not tell you anything about a possible third compiler
Read the included headers, to see which other headers they include: No. The idea of headers is exactly the opposite, namely not having to read them, but just to use the API that they offer
Generate preprocessor output and write a tool to solve the problem semi-automatically: Nope. This won't solve the problem of different header versions anyhow.
Explicitly include the headers that you need for the thing that you are using: This is basically impossible to maintain. When you move one function from your file to another during a refactoring, you never know which includes you can safely remove in one file, and which ones you have to add to the other one.
(And when you change your includes, you'll likely break third-party code that included your headers, because everybody is facing the same problem...)
All this does not cover the caveat that indirect inclusion is not always a problem, but sometimes intended: When you want to use std::vector, you'd #include <vector>, and not #include <bits/stl_vector.h>, even though the latter contains the definition...
So the only realistic solution for this is to rely on tools that support the developer. Some tools are mentioned in the answers of Are there tools that help organizing #includes? , and originally, I mentioned CDT in a comment to the question three years ago, but I think it's worth being mentioned here as well:
Eclipse CDT (C/C++ Development Tooling)
This is an IDE that offers an "Organize Includes" feature. The feature is described in the article at https://www.eclipse.org/community/eclipse_newsletter/2013/october/article3.php , which points out the difficulties and caveats (I mentioned some of them above) and how they are tackled in CDT.
I have tried this out (quite a while ago), although only in a very simple test project. So I can say that it basically works, but considering the complexity of the task, this comes with a disclaimer: It's close to magic, but there might still be cases where it fails.
C++ is not intended to be parsed and compiled. This statement is true, because one of my previous answers here on Stack Overflow contained the line #define not certainly. (Apply that macro to the first sentence.)
The article also points to another tool:
Include What You Use
I have not tried this out, but it seems to be quite actively maintained. The documentation pages also list some of the difficulties and goals. From a first glance, it does not seem to be as powerful and configurable as CDT (and of course, does not come in an IDE), but some might want to try it out.
Let's say I want to use the hex() manipulator. I know it is defined in the <ios> header, and I also know that <ios> is included by the <iostream> header. The difference is that <iostream> contains many more functions and other things I don't need.
From a performance standpoint, should I care about including/defining fewer functions, classes, etc. rather than more?
There is no run time performance hit.
However, there could be an excessive compile-time hit if tons of unnecessary headers are included.
Also, when this is done, you can create unnecessary recompiles if, for instance, a header is changed but a file that doesn't use it includes it.
In small projects (with small headers included), this doesn't matter. As a project grows, it may.
If the standard says it is defined in header <ios>, then include <ios>, because you can't guarantee it will be included in or through any other header.
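For instance, a minimal sketch:

#include <ios>        // std::hex is specified to live here, so include it...
#include <iostream>   // ...even though <iostream> also provides std::cout

int main()
{
    std::cout << std::hex << 255 << '\n';   // prints "ff"
}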
TL;DR: In general, it is better to only include what you need. Including more can have an adverse effect on binary size and startup (should be insignificant), but mostly hurts compilation-time without precompiled headers.
Well, naturally you have to include at least those headers that together are guaranteed to cover all your uses.
It might sometimes happen to "work" anyway, because the standard C++ headers are all allowed to include each other as the implementer sees fit, and the headers are allowed to introduce additional symbols into the std namespace anyway (see Why is "using namespace std" considered bad practice?).
Next, including an additional header might sometimes lead to the creation of additional objects (see std::ios_base::Init), though a well-designed library minimizes such cases (that is the only instance in the standard library, as far as I know).
But the big issue isn't actually the size and efficiency of the compiled (and optimized) binary (which should be unaffected, aside from the previous point, whose effect should be minuscule), but compilation time while actively developing (see also How does #include <bits/stdc++.h> work in C++?).
And the latter is adversely affected by adding superfluous headers (severely, so much so that the committee is working on a modules proposal; see C++ Modules - why were they removed from C++0x? Will they be back later on?).
Unless, naturally, you are using precompiled headers (see Why use Precompiled Headers (C/C++)?), in which case including more in the precompiled header, and thus everywhere, instead of only where needed will actually reduce compile times most of the time, as long as those headers are not modified.
There is a clang-based tool for finding out the minimum headers, called include-what-you-use.
It analyzes the clang AST to make that decision, which is both a strength and a weakness:
You don't need to teach it about all the symbols a header makes available, but it also doesn't know whether things just worked out that way in that revision, or whether they are contractual.
So you need to double-check its results.
Including unnecessary headers has the following downsides.
Longer compile time; the linker also has to weed out all the unused symbols.
If you have added extra headers in a .cpp file, it will only affect your code.
But if you are distributing your code as a library and have added unnecessary headers in your header files, client code will be burdened with locating all the headers you used.
Do not trust indirect inclusion; use the header in which the required function is actually defined.
Also, as a good programming practice, headers in a project should be included in order of decreasing dependency on other headers:
//local header -- most dependent on other headers
#include <project/impl.hpp>
//Third party library headers -- moderately dependent on other headers
#include <boost/optional.hpp>
//standard C++ header -- least dependent on other headers
#include <string>
What won't be affected is run time: the linker gets rid of unused symbols when the program is linked.
Including unneeded header files has some value.
It takes less coding effort to paste in a block of the usually-needed includes. Of course, later coding is then encumbered by not knowing what was truly needed.
Especially in C, with its limited namespace control, including unneeded headers promptly detects collisions. Say the code defined a global non-static variable or function that happened to match the standard, like erfc() for some text processing. By including <math.h>, the collision with double erfc(double x) is detected right away, even though this .c file does no FP math while other .c files do.
#include <math.h>
char *erfc(char *a, char *b);
OTOH, had this .c file not included <math.h>, the collision would only be detected at link time. The impact of this delayed notice could be great if the code base did not need FP math for years and now does, only to find char *erfc(char *a, char *b) used in many places.
IMO: Make a reasonable effort to not include unneeded header files, but do not worry about including a few extra, especially if they are common ones. If an automated method exist, use it to control header file inclusion.
In C++ the standard library is wrapped in the std namespace, and the programmer is not supposed to define anything inside that namespace. Of course the standard include files don't step on each other's names inside the standard library (so it's never a problem to include a standard header).
Then why isn't the whole standard library included by default instead of forcing programmers to write for example #include <vector> each time? This would also speed up compilation as the compilers could start with a pre-built symbol table for all the standard headers.
Pre-including everything would also solve some portability problems: for example, when you include <map>, it is defined which symbols are brought into the std namespace, but it is not guaranteed that other standard symbols are not loaded into it as well; in theory, for example, std::vector could also become available.
It happens sometimes that a programmer forgets to include a standard header but the program compiles anyway because of an include dependency of the specific implementation. When moving the program to another environment (or just another version of the same compiler), however, the same source code could stop compiling.
From a technical point of view, I can imagine a compiler just preloading (with mmap) an optimal perfect-hash symbol table for the standard library.
This should be faster than loading and C++-parsing even a single standard include file, and it should provide faster lookup for std:: names. The data would also be read-only (thus probably allowing a more compact representation, and also being shareable between multiple instances of the compiler).
These are, however, just "shoulds", as I never implemented this.
The only downside I see is that we C++ programmers would lose compilation coffee breaks and Stack Overflow visits :-)
EDIT
Just to clarify: the main advantage I see is for programmers who today, despite the C++ standard library being a single monolithic namespace, are required to know which sub-part (include file) contains which function/class. To add insult to injury, when they make a mistake and forget an include file, the code may still compile or not depending on the implementation (thus leading to non-portable programs).
The short answer is: because that is not the way the C++ language is supposed to be used.
There are good reasons for that:
namespace pollution - even if this could be mitigated, because the std namespace is supposed to be self-coherent and programmers are not forced to write using namespace std;. But including the whole library combined with using namespace std; would certainly lead to a big mess...
forcing the programmer to declare the modules he wants to use avoids inadvertently calling a wrong standard function, because the standard library is now huge and not all programmers know all its modules
history: C++ still has a strong inheritance from C, where namespaces do not exist and where the standard library is supposed to be used like any other library.
To go in your direction, the Windows API is an example where you have only one big include (windows.h) that loads many other smaller include files. And in fact, precompiled headers allow that to be fast enough.
So IMHO a new language derived from C++ could decide to automatically declare the whole standard library. A new major release could also do it, but it could break code that makes intensive use of the using namespace directive or that has custom implementations using the same names as some standard modules.
But all the common languages I know (C#, Python, Java, Ruby) require the programmer to declare the parts of the standard library he wants to use, so I suppose that systematically making every piece of the standard library available is still more awkward than useful for the programmer, at least until someone finds a way to declare the parts that should not be loaded - that's why I spoke of a new derivative of C++.
Most of the C++ standard library is template-based, which means that the code it generates ultimately depends on how you use it. In other words, there is very little that could be compiled before a template is instantiated, as in std::vector<MyType> m_collection;.
Also, C++ is probably the slowest mainstream language to compile, and there is a lot of parsing work that the compiler has to do when you #include a header file that in turn includes other headers.
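For illustration, a minimal sketch of that point (MyType is a made-up name):

#include <vector>

struct MyType { double x, y; };

// The header only provides the template; object code is generated
// separately for each instantiation, at the point of use:
std::vector<int>    a;   // instantiates std::vector<int>
std::vector<MyType> b;   // instantiates an unrelated std::vector<MyType>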
Well, first things first: C++ tries to adhere to "you only pay for what you use".
The standard library is sometimes not part of what you use at all, or even of what you could use if you wanted to.
Also, you can replace it if there's a reason to do so: See libstdc++ and libc++.
That means just including it all without question isn't actually such a bright idea.
Anyway, the committee is slowly plugging away at creating a module system (it takes lots of time; hopefully it will be ready for C++1z: C++ Modules - why were they removed from C++0x? Will they be back later on?), and when that's done most downsides of including more of the standard library than strictly necessary should disappear, and the individual modules should more cleanly exclude symbols they need not contain.
Also, as those modules are pre-parsed, they should give the compilation-speed improvement you want.
You offer two advantages of your scheme:
Compile-time performance. But nothing in the standard prevents an implementation from doing what you suggest[*], with a very slight modification: the pre-compiled table is only mapped in when the translation unit includes at least one standard header. From the POV of the standard, it's unnecessary to impose a potential implementation burden over a QoI issue.
Convenience to programmers: under your scheme we wouldn't have to specify which headers we need. We do this in order to support C++ implementations that have chosen not to implement your idea of making the standard headers monolithic (which currently is all of them), so from the POV of the C++ standard it's a matter of "supporting existing practice and implementation freedom at a cost to programmers considered acceptable". Which is kind of the slogan of C++, isn't it?
Since no C++ implementation (that I know of) actually does this, my suspicion is that in point of fact it does not grant the performance improvement you think it does. Microsoft provides precompiled headers (via stdafx.h) for exactly this performance reason, and yet it still doesn't give you an option for "all the standard libraries"; instead it requires you to say what you want in it. It would be dead easy for this or any other implementation to provide an implementation-specific header defined to have the same effect as including all standard headers. This suggests to me that, at least in Microsoft's opinion, there would be no great overall benefit to providing that.
If implementations were to start providing monolithic standard libraries with a demonstrable compile-time performance improvement, then we'd discuss whether or not it's a good idea for the C++ standard to continue permitting implementations that don't. As things stand, it has to.
[*] Except perhaps for the fact that <cassert> is defined to behave differently according to whether NDEBUG is defined at the point it's included. But I think implementations could just preprocess the user's code as normal, and then map in one of two different tables according to whether it's defined.
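A minimal sketch of that <cassert> quirk (the multiple-inclusion behavior is specified by the standard):

#include <cassert>
// assert(cond) is active here: a failing assert aborts the program.

#define NDEBUG
#include <cassert>   // legal: <cassert> may be included again, and
                     // assert is re-defined according to NDEBUG

int main()
{
    assert(false);   // compiled out, because NDEBUG was defined above
}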
I think the answer comes down to C++'s philosophy of not making you pay for what you don't use. It also gives you more flexibility: you aren't forced to use parts of the standard library if you don't need them. And then there's the fact that some platforms might not support things like throwing exceptions or dynamically allocating memory (like the processors used in the Arduino, for example). And there's one other thing you said that is incorrect: as long as it's not a class template, you are allowed to specialize std::swap in the std namespace for your own classes.
First of all, I am afraid that introducing a prelude now is a bit late in the game. Or rather, since preludes are not easily extensible, we would have to content ourselves with a very thin one (built-in types...).
As an example, let's say that I have a C++03 program:
#include <boost/unordered_map.hpp>
using namespace std;
using boost::unordered_map;
static unordered_map<int, string> const Symbols = ...;
It all works fine, but suddenly when I migrate to C++11:
error: ambiguous symbol "unordered_map", do you mean:
- std::unordered_map
- boost::unordered_map
Congratulations, you have invented the least backward-compatible scheme for growing the standard library (just kidding: whoever uses using namespace std; is to blame...).
Alright, let's not pre-include them, but still bundle the perfect hash table anyway. The performance gain would be worth it, right?
Well, I seriously doubt it. First of all because the Standard Library is tiny compared to most other header files that you include (hint: compare it to Boost). Therefore the performance gain would be... smallish.
Oh, not all programs are big; but the small ones compile fast already (by virtue of being small) and the big ones include much more code than the Standard Library headers so you won't get much mileage out of it.
Note: and yes, I did benchmark the file look-up in a project with "only" a hundred -I directives; the conclusion was that pre-computing the "include path" to "file location" map and feeding it to gcc resulted in a 30% speed-up (after using ccache already). Generating it and keeping it up-to-date was complicated, so we never used it...
But could we at least include a provision that the compiler could do it in the Standard?
As far as I know, that provision is already there. I cannot remember if there is a specific blurb about it, but the Standard Library is really part of the "implementation", so resolving #include <vector> to an internal hash map would fall under the as-if rule anyway.
But they could do it, still!
And lose any flexibility. For example, Clang can use either libstdc++ or libc++ on Linux, and I believe it to be compatible with the Dinkumware derivative that ships with VC++ (if not completely, then at least to a great extent).
This is another point of customization: if the Standard Library does not fit your needs or your platform, you can, by virtue of it being treated mostly like any other library, replace part or most of it with relative ease.
But! But!
#include <stdafx.h>
If you work on Windows, you will recognize it. This is called a precompiled header. It must be included first (or all benefits are lost), and in exchange, instead of parsing files you pull in an efficient binary representation of those already-parsed files (i.e., a serialized AST, possibly with some type resolution already performed), which saves maybe 30% to 50% of the work. Yep, this is close to your proposal; this is Computer Science for you: there's always someone else who thought of it first...
Clang and gcc have a similar mechanism; though from what I've heard it can be so painful to use that people prefer the more transparent ccache in practice.
And all of these will come to naught with modules.
This is the true solution to this pre-processing/parsing/type-resolving madness. Because modules are truly isolated (ie, unlike headers, not subject to inclusion order), an efficient binary representation (like pre-compiled headers) can be pre-computed for each and every module you depend on.
This not only means the Standard Library, but all libraries.
Your solution, more flexible, and dressed to the nines!
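(With hindsight, this is roughly where the language ended up. A minimal sketch of the C++23 incarnation, assuming a compiler and standard library new enough to ship the std module:)

import std;   // one pre-built binary module instead of textual headers

int main()
{
    std::cout << "modules at last\n";
}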
One could use an alternative implementation of the C++ Standard Library to the one shipped with the compiler. Or wrap headers with one's definitions, to add, enable or disable features (see GNU wrapper headers). Plain text headers and the C inclusion model are a more powerful and flexible mechanism than a binary black box.
Suppose I have the following code (literally) in a C++ source file:
// #include <iostream> // superfluous, commented-out
using std::cout;
using std::endl;
int main()
{
cout << "Hello World" << endl;
return 0;
}
I can compile this code even though #include <iostream> is commented-out:
g++ -include my_cpp_std_lib_hack source.cpp
Where my_cpp_std_lib_hack is a file in some central location that includes all the files of the C++ Standard Library:
#include <ciso646>
#include <climits>
#include <clocale>
...
#include <valarray>
#include <vector>
Of course, I can use the proper compilation options for all the compilers I care about (that being MS Visual Studio and maybe a few others), and I also use precompiled headers.
Using such a hack gives me the following advantages:
Fast compilation (because all of the Standard Library is precompiled)
No need to add #includes when all I want is to add some debugging output
No need to remember or look up all the time where the heck std::max is declared
A feeling that the STL is magically built into the language
So I wonder: am I doing something very wrong here?
Will this hack break down when writing large projects?
Maybe everyone else already uses this, and no one told me?
So I wonder: am I doing something very wrong here?
Yes. Sure, your headers are precompiled, but the compiler still has to do things like name lookup on the entire included mass of stuff, which slows down compilation.
Will this hack break down when writing large projects?
Yes, that's pretty much the problem. Plus, if anyone else looks at that code, they're going to wonder where std::cout (well, assume that's a user-defined type) came from. Without the #includes they're going to have no idea whatsoever.
Not to mention, now you have to link against a ton of standard library features that you may have (probably could have) avoided linking against in the first place.
If you want to use precompilation that's fine, but someone should be able to build each and every implementation file even when precompilation is disabled.
The only thing "wrong" is that you are relying upon a compiler-specific command-line flag to make the files compilable. You'd need to do something different if not using GCC. Most compilers probably do provide an equivalent feature, but it is best to write portable source code rather than to unnecessarily rely on features of your specific build environment.
Other programmers shouldn't have to puzzle over your Makefiles (or Ant files, or Eclipse workspaces, or whatever) to figure out how things are working.
This may also cause problems for users of IDEs. If the IDE doesn't know which files are being included, it may not be able to provide automatic completion, source browsing, refactoring, and other such features.
(FWIW, I do think it is a good idea to have one header file that includes all of the Standard Library headers that you are using in your project. It makes precompilation easier, makes it easier to port to a non-standard environment, and also helps deal with those issues that sometimes arise when headers are included in different orders in different source files. But that header file should be explicitly included by each source file; there should be no magic.)
Forget the compilation speed-up: a precompiled header with templates isn't really "precompiled" beyond the name and the parse, as far as I've heard. I won't believe in the compilation speed-up until I see it in benchmarks. :)
As for the usefulness:
I prefer to have an IDE that handles my includes for me (this is still bad for C++, but Eclipse already adds known includes with Ctrl+Shift+N with... well, acceptable reliability :)).
Doing 'clandestine' includes like this would also make testing more difficult. You want to compile a smallest-possible subset of code when testing a particular component. Figuring out what that subset is would be difficult if the headers/sources aren't being honest about their dependencies, so you'd probably just drag your my_cpp_std_lib_hack into every unit test. This would increase compilation time for your test suites a lot. Established code bases often have more than three times as much test code as regular code, so this is likely to become an issue as your code base grows.
From the GCC manual:
-include file
Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal.
So what you're doing is essentially equivalent to starting each file with the line
#include "my_cpp_std_lib_hack"
which is what Visual Studio does when it gathers up commonly-included files in stdafx.h. There are some benefits to that, as outlined by others, but your approach hides this include in the build process, so that nobody looking directly at one of your source files would know of this hidden magic. Making your code opaque in this way does not seem like good style to me, so if you're keen on all the precompiled-header benefits, I suggest you explicitly include your hack file.
You are doing something very wrong. You are effectively including lots of headers that may not be needed. In general, this is a very bad idea, because you are creating unnecessary dependencies, and a change in any header would require recompilation of everything. Even if you avoid this by using precompiled headers, you are still linking against lots of objects that you may not need, making your executable much larger than it needs to be.
There is really nothing wrong with the standard way of using headers. You should include everything you are using, and no more (forward declarations are your friends). This makes code easier to follow, and helps you keep dependencies under control.
We try not to include unused or even rarely used stuff. For example, in VC++ there is
#define WIN32_LEAN_AND_MEAN //exclude rarely used stuff
What we hate in MFC is that if you want to make a simple application, you get a large executable containing the whole library (if statically linked). That's not a good idea -- what if you only want to use cout and nothing else?
Another thing: I don't like passing arguments via the command line, because I may leave the project for a while and forget what the arguments were. For example, I prefer using
#pragma comment(lib, "xxx.lib")
over specifying it on the command line; it at least reminds me which file I want.
That's just my opinion: make your code stable and easy to compile so that it doesn't rot, because code rot is a very nasty thing!