Sometimes in C++ the order of the includes matters. That is the case with OpenGL on Windows:
1. Right way:
#include <windows.h> // Header File For Windows
#include <gl\glu.h> // Header File For The GLu32 Library
2. Wrong way:
#include <gl\glu.h> // Header File For The GLu32 Library
#include <windows.h> // Header File For Windows
Does this happen just for some specific headers, or is it a somewhat random problem that is difficult to prevent a priori?
If that is the case:
How can I know the right order of the includes?
Just some specific headers. Some might call it a design flaw.
You can't. Look at the error messages you get and sort them out carefully. On Windows, putting windows.h first is probably a good idea.
The order does not matter for C++ Standard Library includes.
For other libraries it should not usually matter (unless they specifically say so).
For specific platforms, it may matter and it is usually specified clearly when it does.
For example:
On Windows #include <windows.h> comes before all the other includes.
Also,
#include <stdafx.h>
which is an MSVC++-specific header, needs to be included before everything else if you're using precompiled headers.
For Windows you need #include <windows.h> first.
Then, in your header files, avoid #include where you can; prefer forward declarations instead.
That saves recompilation when you change just one header file.
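A minimal sketch of the forward-declaration approach (Gadget, gadget.h, and process_gadget are made-up names for illustration):
// gadget_user.h -- no #include "gadget.h" needed here
class Gadget; // forward declaration suffices: Gadget is only used by reference
void process_gadget(const Gadget& g);
// gadget_user.cpp
#include "gadget_user.h"
#include "gadget.h" // the full definition is needed only in this .cpp
void process_gadget(const Gadget& g)
{
    // ... code that uses Gadget's members goes here
}
Now changing gadget.h only recompiles gadget_user.cpp, not every file that includes gadget_user.h.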
At one time, a fair number of well-known C programmers advised that no header should include any other header -- it should be up to the user to include the correct headers in the correct order to make things work. This worked (and continues to work) pretty well for small projects that don't involve too many headers.
For larger projects, however, keeping track of all the header dependencies can/does become substantially more difficult, to the point that it's nearly unmanageable in many modern code bases. Most modern headers themselves include any other headers upon which they depend.
Unfortunately, that means we frequently end up with a rather confusing mixture of the two. There's not much you can do beyond just dealing with it when it arises, by finding what headers you need to include and in what order.
Related
I've been learning more about headers, translation units, and precompiled headers lately. I think I understand them well now. I do have a couple of queries, though; I'll just ask one here. This is just a theoretical example, so it may or may not work in practice, but please try to understand what I'm asking.
Say I have a precompiled header like so:
// stdafx.h
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
and some source file.
// foo.cpp
#include "stdafx.h"
#include <Windows.h>
void bar()
{
// I need to access something in Windows.h that requires
// WIN32_LEAN_AND_MEAN to NOT be defined.
}
It doesn't have to be precompiled headers; I mean any similar structure in general, though I'm not sure it would apply otherwise. I can work around this problem with Windows.h easily enough, but I'm sure there are much more difficult situations.
Is it possible in any way to remove the already-included Windows.h from the precompiled header and re-include it with adjusted #defines? To my understanding, #undef won't work.
The only solution that comes to mind is to create a separate source file and mark it as not using precompiled headers. I'd like to learn of any alternatives.
Also, what would be a more general way to express what I'm asking for the sake of future endeavors and search engine results?
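For reference, the separate-source-file workaround mentioned above might look like this (a sketch; PlaySystemSound is a made-up wrapper name):
// full_windows.cpp -- compiled WITHOUT the precompiled header
// (in MSVC: per-file setting "Not Using Precompiled Headers")
#include <Windows.h> // full header: WIN32_LEAN_AND_MEAN is not defined here
#pragma comment(lib, "winmm") // PlaySoundW lives in winmm.lib
// mmsystem.h, which declares PlaySoundW, is excluded by WIN32_LEAN_AND_MEAN,
// so this wrapper could not live in a source file that uses the PCH.
bool PlaySystemSound(const wchar_t* alias)
{
    return PlaySoundW(alias, nullptr, SND_ALIAS | SND_ASYNC) != FALSE;
}
The rest of the project calls PlaySystemSound through a small header of its own and never sees the full Windows.h.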
Let's say I want to use the std::hex function. I know it is defined in the <ios> header, and I also know that <ios> is included by the <iostream> header. The difference is that <iostream> brings in many more functions and other stuff I don't need.
From a performance stand-point, should I care about including/defining less functions, classes etc. than more?
There is no run time performance hit.
However, there can be an excessive compile-time hit if tons of unnecessary headers are included.
Also, when this is done, you can create unnecessary recompiles if, for instance, a header is changed but a file that doesn't use it includes it.
In small projects (with small headers included), this doesn't matter. As a project grows, it may.
If the standard says it is defined in header <ios>, then include header <ios>, because you can't guarantee it will be included by or through any other header.
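For example, a minimal program that spells out both includes instead of relying on <iostream> happening to pull in <ios>:
#include <ios>      // guarantees std::hex
#include <iostream> // needed for std::cout anyway
int main()
{
    std::cout << std::hex << 255 << '\n'; // prints "ff"
}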
TL;DR: In general, it is better to only include what you need. Including more can have an adverse effect on binary size and startup (should be insignificant), but mostly hurts compilation-time without precompiled headers.
Well, naturally you have to include at least a set of headers that together are guaranteed to cover all your uses.
It might sometimes happen to "work" anyway, because the standard C++ headers are allowed to include each other as the implementer sees fit, and headers are allowed to introduce additional symbols into namespace std anyway (see Why is "using namespace std" considered bad practice?).
Next, sometimes including an additional header might lead to creation of additional objects (see std::ios_base::Init), though a well-designed library minimizes such (that is the only instance in the standard library, as far as I know).
But the big issue isn't actually the size and efficiency of the compiled (and optimized) binary (which should be unaffected, aside from the previous point, whose effect should be minuscule), but compilation time while actively developing (see also How does #include <bits/stdc++.h> work in C++?).
And the latter is adversely affected by adding superfluous headers (severely, so much so that the committee is working on a modules proposal; see C++ Modules - why were they removed from C++0x? Will they be back later on?).
Unless, naturally, you are using precompiled headers (see Why use Precompiled Headers (C/C++)?), in which case including more in the precompiled header, and thus everywhere, will actually reduce compile times most of the time, as long as those headers are not modified.
There is a clang-based tool for finding the minimal set of headers, called include-what-you-use.
It analyzes the clang AST to decide that, which is both a strength and a weakness:
You don't need to teach it about all the symbols a header makes available, but it also doesn't know whether things just worked out that way in that revision, or whether they are contractual.
So you need to double-check its results.
Including unnecessary headers has the following downsides.
Longer compile times, and the linker has to remove all the unused symbols.
If you have added extra headers in a .cpp file, it will only affect your code.
But if you are distributing your code as a library and you have added unnecessary headers in your header files, client code will be burdened with locating the headers that you have used.
Do not trust indirect inclusion; use the header in which the required function is actually declared.
Also, as a good programming practice, headers in a project should be included in order of decreasing dependency:
//local header -- most dependent on other headers
#include <project/impl.hpp>
//Third party library headers -- moderately dependent on other headers
#include <boost/optional.hpp>
//standard C++ header -- least dependent on other header
#include <string>
What won't be affected is run time; the linker gets rid of unused symbols at link time.
Including unneeded header files has some value.
It takes less coding effort to cut and paste the usual set of includes. Of course, later coding is now encumbered with not knowing what was truly needed.
Especially in C, with its limited name space control, including unneeded headers promptly detects collisions. Say code defined a global non-static variable or function that happened to match the standard, like erfc() to do some text processing. By including <math.h>, the collision is detected with double erfc(double x), even though this .c file does no FP math yet other .c files do.
#include <math.h>
char *erfc(char *a, char *b); // compile-time error: conflicts with double erfc(double) from <math.h>
OTOH, had this .c file not included <math.h>, the collision would only be detected at link time. The impact of this delayed notice could be great if the code base did not need FP math for years and now does, only to find char *erfc(char *a, char *b) used in many places.
IMO: Make a reasonable effort to not include unneeded header files, but do not worry about including a few extra, especially if they are common ones. If an automated method exists, use it to control header file inclusion.
I recently started working on a project where I came across this:
#include <string.h> // includes before include guards
#include "whatever.h"
#ifndef CLASSNAME_H // header guards
#define CLASSNAME_H
// The code
#endif
My question: considering all (included) header files were written in that same style, could this lead to problems (cyclic references, etc.)? And is there any (good) reason to do this?
Potentially, having #include outside the include guards could lead to circular references, etc. If the other files are properly protected, there isn't an issue. If the other files are written like this one, there could be problems.
No, there isn't a good reason that I know of to write the code with the #include lines outside the include guards.
The include guards should be around the whole contents of the header; I can't think of an exception to this (when header guards are appropriate in the first place — the C header <assert.h> is one which does not have header guards for a good reason).
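For comparison, here is the header from the question restated in the conventional layout, with everything inside the guard (a sketch):
// classname.h -- conventional layout
#ifndef CLASSNAME_H
#define CLASSNAME_H
#include <string.h> // dependencies belong inside the guard
#include "whatever.h"
// The code
#endif // CLASSNAME_H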
As long as you don't have circular includes (whatever1.h includes whatever2.h which includes whatever1.h) this should not be a problem, as the code itself is still protected against multiple inclusion.
It will however almost certainly impact compile time (how much depends on the project size) for two reasons:
Modern compilers usually detect "classical" include guards and just ignore any further #includes of that file (just like #pragma once). The structure you are showing prevents that optimization.
Each compilation unit becomes much larger, as each file will be included many more times, right before the preprocessor deletes all the inactive blocks again.
In any case, I can't think of any benefit such a structure would have. Maybe it is the result of some strange historical reasons, like some obscure analysis tool that was used on your codebase in pre-standard C++ time.
I know in C++ you have to use the #include directive to include libraries and headers, like:
#include <iostream>
#include <string>
#include "my_header.h"
But after some time, that can look very ugly, having a load of includes in your files. I wanted to know if there is some way where we can call just 1 file to include EVERYTHING we'll need in the C++ project. Kind of like...
"include_everything.h"
#include <iostream>
#include <string>
#include "header1.h"
#include "header2.h"
and then in our .cpp file, could we just do this?
#include "include_everything.h"
// write code here...
If it is possible, is it lawful? Does it abide by the C++ coding laws?
If that is the only rule you decide not to follow you will probably be fine. But, depending on what some of those headers contain (macros, #defines, etc) you could get a horrible mess in the end.
I've seen cases where people included what they thought they needed (often far more than was needed, as they failed to consider forward declarations). On a large software project, each CPP file ended up loading nearly every include file anyway (due to chaining). Often the same file would get loaded multiple times and only excluded once the #ifndef at the top was triggered. The OS ended up opening over 100k files for each compile even though there were only 50k files in the project. In a horrible situation like that, it might help.
It can save time for developers as they, generally, won't have to search out where something is defined and add it to their include list for every file they work on. Do it once and forget. Let the compiler tell you if a new file needs to be added to 'everything'.
Aside from the macro problem you might run into issues with overlapping names and such if you don't properly use namespaces. Also, template classes often have real code in the header file. This can create churn.
It might also cause problems if people are using global variables and they are eclipsing local variables. It might be hard to figure out where that global 'i' is defined and being pulled in from if you have to search all files in 'everything' instead of just the files in your include list.
You can do it. If it's just you and the includes bug you, go for it. If you and your buds are doing something small, sure. But you are going to get serious looks from people if you try to do this on a medium-sized project or larger.
At best, I wouldn't suggest going beyond grouping tightly bundled headers together into a more coherent API.
You can do this, but it might affect compilation time.
The C/C++ #include directive means nothing more than "before you compile, copy and paste the content of that file here". That means that when you include more headers than necessary, it will take a bit longer to compile each source file. But if that's not a concern for you, go ahead.
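A minimal illustration of that textual model (the file names are made up); running g++ -E main.cpp shows the pasted result:
// answer.h -- a one-line header
int answer();
// answer.cpp
int answer() { return 42; }
// main.cpp -- compiled as if the line from answer.h were typed right here
#include "answer.h"
int main()
{
    return answer() == 42 ? 0 : 1;
}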
Yes, you can do it. You may recursively include headers.
However, if you are including so many headers that this is a problem, either you're including things you don't need to include, or your source files are way too expansive.
Consequently, it's very rare to do this and I'd go so far as to consider it a big code smell. The only time you really want to do it is when you're going to precompile that include_everything.h.
Does including the same header files multiple times increase the compilation time?
For example, suppose every file in my project uses <iostream> <string> <vector> and <algorithm>. And if I include a lot of files in my source code, then does that increase the compile time?
I always thought that header guards served the important purpose of avoiding double definitions, and as a by-product also eliminated duplicated code.
Actually, someone I know proposed some ideas to remove such multiple inclusions. However, I consider them to be completely against good design practices in C++. But I was still wondering what his reasons for suggesting the changes might be.
Most of these answers are wrong... For modern compilers, there is zero overhead for including the same file multiple times, assuming the header uses the usual "include guard" idiom.
The GCC preprocessor, for example, has special code to recognize the include guard idiom. It will not even open the header file (never mind reading it) for the second and subsequent #include directives.
I am not sure about other compilers, but I would be very surprised if most of them did not implement the same optimization.
Another technique besides precompiled headers is the compiler firewall idiom, explained here:
http://www.gotw.ca/publications/mill04.htm
http://www.gotw.ca/publications/mill05.htm
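A minimal sketch of that idiom (Widget and its members are made-up names): the header exposes only a pointer to a private implementation, so clients never include the implementation's headers and are not recompiled when the implementation changes.
// widget.h -- stable interface with no heavy includes
#include <memory>
class Widget {
public:
    Widget();
    ~Widget(); // must be defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl; // defined only in widget.cpp
    std::unique_ptr<Impl> pimpl_;
};
// widget.cpp -- the only file that sees the implementation details
#include "widget.h"
#include <iostream> // heavy dependencies stay out of the header
struct Widget::Impl {
    void draw() { std::cout << "drawing\n"; }
};
Widget::Widget() : pimpl_(new Impl) {}
Widget::~Widget() = default;
void Widget::draw() { pimpl_->draw(); }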
Every time #include <something.h> occurs in your source file, something.h has to be found along the include path and read. But there is the #ifndef _SOMETHING_H_ check, so the content of something.h will not be compiled again.
Thus there is some overhead, but it is really small.
If compile times were an issue, people used to use the optimisation recommended by Praetorian, originally recommended in Large Scale Software Design. However, most modern compilers automatically optimise for this case. For example, see the GCC documentation.
The best option is to use precompiled headers. I do not know which compiler you are using, but most of them have this feature. I suggest you refer to your compiler manual on how to achieve this.
It basically collects all the header files and compiles them once into an intermediate form which the compiler can then reuse in every translation unit. That speeds up compiling very much.
Minor Drawback:
You need to have one "uberheader" which is included in every compilation unit (.cpp).
In that uberheader, only include stable headers from libraries, not your own. Then the compiler does not need to recompile it very often.
It helps especially when using header-only libraries such as Boost, GLM, Eigen, etc.
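With GCC, for instance, the mechanism looks like this (a sketch; pch.h is a made-up name). Once pch.h.gch exists, GCC uses it automatically wherever pch.h is included:
// pch.h -- the "uberheader": only stable third-party includes
#include <string>
#include <vector>
#include <boost/optional.hpp> // header-only library, rarely changes
// Precompile once:
//     g++ -x c++-header pch.h -o pch.h.gch
// Then start every .cpp with:
//     #include "pch.h"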
Yes, including the same header multiple times means that the file needs to be opened before the preprocessor guards kick in and prevent multiple definitions. The Mozilla source code uses the following trick to prevent this:
Foo.h
#ifndef FOO_H
#define FOO_H
// whatever
#endif /* FOO_H */
In all files that need to include foo.h:
#ifndef FOO_H
#include "foo.h"
#endif
This prevents foo.h from having to be opened multiple times. Of course, this depends on everyone following a particular naming convention for their preprocessor guards.
You can't do this with standard headers, since there is no common naming convention for their preprocessor guards.
EDIT:
After reading your question again, I think you're asking about the same header being included in different source files. What I talked about above does not help with that. Each header file will still have to be opened and included at least once in every translation unit. The only way I know of to prevent this is to use precompiled headers, as #scorcher24 mentioned in his answer. But unless your compile times are absolutely prohibitive, I'd stay away from that solution, because there is no standard way of generating precompiled headers across compilers.
Some compilers, most notably Microsoft's, have a #pragma once directive that you can use to automatically skip an include file once it's already been included. This removes any performance penalty.
http://en.wikipedia.org/wiki/Pragma_once
It can be an issue. As others have said, most modern compilers handle the case intelligently, and will only re-open the file in degenerate cases. Most is not all, however, and one of the major exceptions is Microsoft, which a lot of people do have to support. The surest solution (if this is really a problem in your environment) is to use the Lakos convention, putting the include guards around the #include directives as well as in the header. This requires, of course, a standard convention for generating the guard names. (For external includes, wrap them in your own header which respects your local convention.)
Alternatively, you can use both the guards and #pragma once. The guards will always work, and most compilers will avoid the extra opens; #pragma once will usually avoid the extra opens with Microsoft as well. (#pragma once cannot be implemented reliably in complex networked situations, but as long as all of your files are on your local drive, it's quite reliable.)
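A header following that advice would look like this (a sketch):
// foo.h -- belt and braces: #pragma once plus classic guards
#pragma once // fast path on compilers that support it
#ifndef FOO_H
#define FOO_H
// contents
#endif // FOO_H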