When I #include a header file, and I also need other files that are already #included from the first one, should I rely on the first #include or should I #include all of them?
I know that it will work either way, but I want to know what the best practice is.
If I don't rely on it, I can end up with a list of a few dozen #includes in my file. Does that make sense?
Well, if someone else is maintaining the first header file, then no, you can't rely on it!
For exactly this reason, I prefer to explicitly include all dependencies (i.e. headers declaring symbols that are directly used) in source files. I doubt you'll find a One True Best Practice, though. There are pros and cons of each approach.
But once you choose an approach, please apply it consistently! There's nothing worse than a project with a mish-mash of different include styles.
That depends on the policy you adopt. I always follow this one:
In headers, always include anything that is needed for the header itself to compile when included from an otherwise empty .c/.cpp file.
In implementation files, always include everything that is directly used.
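A minimal sketch of that policy, using a hypothetical logger module (all names are illustrative):

// logger.h - self-contained: includes <string> because the
// declaration itself uses std::string
#ifndef LOGGER_H
#define LOGGER_H

#include <string>

void log_message(const std::string& text);

#endif // LOGGER_H

// logger.cpp - includes its own header first (which also verifies
// that the header is self-contained), then everything it uses
// directly, without relying on what logger.h happens to drag in
#include "logger.h"

#include <iostream>
#include <string>

void log_message(const std::string& text) {
    std::cout << text << '\n';
}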
You should include only the base header file, of course. But even if you happen to include the others as well, your header files should have include guards, which will prevent multiple inclusion.
When I #include a header file, and I also need other files that are already #included from the first one, should I rely on the first #include or should I #include all of them?
In general no, because which header files a header file drags in is an implementation detail.
However, it is in practice not possible to write code that will include all necessary headers for all platforms. Especially in C++ where standard library headers are free to drag in other headers from the standard library. For example, your code may compile because, unknown to you, <iostream> for your compiler drags in <string>.
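Concretely, code like this may build on one implementation and fail on another; the explicit #include <string> is the portable fix (a minimal sketch):

// main.cpp - without the second include, this compiles only if this
// implementation's <iostream> happens to drag in <string>
#include <iostream>
#include <string>   // don't rely on <iostream> pulling this in

int main() {
    std::string greeting = "hello";   // needs <string>
    std::cout << greeting << '\n';    // needs <iostream>
}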
So, a reasonable effort to include all relevant headers is reasonable, and as that implies, an unreasonable effort to do so, is unreasonable. :-)
Cheers & hth.,
Is it good programming practice to have #include statements in a header file? I personally feel that, especially when you're going through a code base someone else wrote, you end up missing or spending time looking for definitions that you could have found sooner if they were in the .c file.
In some (in my experience, most) cases, it's impossible to do what you say. You often end up returning various other classes from your code, and - obviously - you need that information in the function declarations. In that case, the compiler already has to know what those other objects are, so you either have to include a header with the declaration, or provide a forward declaration.
Since you'll end up including the header anyhow, there's no real point in doing an additional forward declaration. That's of course a matter of choice, but it doesn't make your code clearer in my opinion.
Also - most IDEs have an option to find a symbol in the included files.
If (and only if) you're at a point where you only need classes/functions inside your definitions, then you may opt to include the header in the *.c file instead. It may seem clearer at first glance, but you may find that - when modifying the class someday - you'll end up moving the #include to the *.h file anyway.
The short answer is yes. If a particular header defines/declares classes, types, structs, etc. that are composed of classes, types, structs, etc. defined/declared in other header files, then the most expedient and effective practice is to include those header files in the header file.
By doing so, the header files on which the header you are creating depends will always be there.
There may be issues with files being included multiple times, which is why most header files contain either an #ifndef guard to ensure the file is included only once, or something like #pragma once to the same effect.
To sum up, you should design your header files so that, even if they are included multiple times through several uses of #include, the header's contents will appear only once in the preprocessor output. By including the header files on which your header file depends in the header file itself, you localize the use of the header and make sure that any dependencies will be available.
In addition, by #including dependency header files in your header file, if the include path is incorrect and a dependency header file is unavailable, it will be easy to find the header that depends on the unavailable file.
Header files should manage their own dependencies; having to get the order of #includes in a .c file just right is an annoying way to waste an afternoon, and it will be a perpetual maintenance headache going forward. Have your headers #include whatever they need directly, use #include guards in your own headers, and life will be much easier.
It is not in bad style if it is necessary to make the header file self-contained in the sense that it does not depend on any other header being manually included. For example, if the header contains declarations that use data types from stdint.h then it should have #include <stdint.h> rather than expect everyone to include it separately, and in the correct order.
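For instance, a hypothetical C header along these lines:

/* sensor.h - self-contained: the declaration uses uint8_t and
   uint32_t, so the header includes <stdint.h> itself instead of
   expecting every user to include it first */
#ifndef SENSOR_H
#define SENSOR_H

#include <stdint.h>

uint32_t sensor_read(uint8_t channel);

#endif /* SENSOR_H */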
Meanwhile unnecessary #includes are generally in bad style regardless of where they are (.h or .c). Arguably an exception to this might be an entire collection of libraries that could in theory be used individually but are intended to be used as a whole, such as a complete GUI toolkit – in that case I think it's cleaner to have some master header that pulls in all the declarations rather than have the user learn which header is required for which specific function.
I prefer to not have #include in header files, if only to make the dependencies visible on one place for each source file. But that is certainly arguable.
The golden rule is readability. The silver rule is to follow existing practice.
Don't #include anything you don't need.
Don't #include anything you don't need, because including unnecessary files leads to unnecessary dependencies and to longer compile times in larger projects. As a consequence, if you modify an existing class to replace vector with list, take an extra few seconds to look through the file, make sure you don't have any vector remnants left over, and delete the #include <vector> from the top of the file.
Use forward declarations instead of #include where possible.
Use forward declarations where possible because it reduces dependencies and minimizes compile times. A forward declaration will do if your class header does not declare an object of the class; if you only use it by pointer or (in C++) by reference, then you can get by with a forward declaration.
Use #ifdef or #pragma guards to design your header files such that they're not included multiple times.
// MyClass.h
#ifndef MYCLASS_HEADER
#define MYCLASS_HEADER
// [header declaration]
#endif // MYCLASS_HEADER
Alternatively on a supporting compiler such as Visual Studio:
// MyClass.h
#pragma once
// [header declaration]
These guards will allow you to #include "MyClass.h" as often as desired without worrying about including it multiple times in the same translation unit.
Include/forward-declare in the header if the included file is needed by the header.
If the header itself needs the forward declaration or the full include, then doing so is a no-brainer.
Include in the .c/.cpp file if the included file is not needed by the header, but is needed by the implementation.
If the header has no use for the included file (because a forward declaration is sufficient, or because only the .c/.cpp file needs it), then don't #include it in the header. Remember: push whatever you can into the .c/.cpp file to reduce dependencies and minimize compile times.
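A small sketch of those last two rules, using hypothetical Widget and Gadget classes (Gadget::reset is made up for the example):

// Widget.h - Gadget appears only by reference and pointer, so a
// forward declaration is enough; no #include "Gadget.h" needed here
#ifndef WIDGET_HEADER
#define WIDGET_HEADER

class Gadget;

class Widget {
public:
    void attach(Gadget& g);
private:
    Gadget* gadget_;
};

#endif // WIDGET_HEADER

// Widget.cpp - the implementation calls a Gadget member, so the full
// definition is needed here, and the #include stays out of the header
#include "Widget.h"
#include "Gadget.h"

void Widget::attach(Gadget& g) {
    g.reset();        // calling a member requires Gadget's definition
    gadget_ = &g;
}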
If I have multiple files that #include each other, and all #include <iostream>, is this considered bad, and if so, how would I avoid it?
No, there is nothing wrong with it. Every file that needs to directly use functionality from <iostream>, should include it directly. Header guards will take care of multiple inclusions.
There is a potential problem when you have circular dependencies. For example, see this question: Resolve header include circular dependencies
However, since <iostream> is unlikely to be including or depending on any of your files, circular dependencies are not a problem in this case.
The first question is whether you really need to include <iostream>. In most cases headers don't really need <iostream>, but only the lesser <ostream> (no need for cin, cout... just the type std::ostream& for operator<<). Even then, the correct header would be <iosfwd>, which contains just forward declarations of those elements.
That is, of course, unless you need the full declarations for the types or the real iostreams... then just include them.
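For example, a header that only declares a stream inserter can get by with <iosfwd> (Point is a hypothetical class):

// point.h - <iosfwd> forward-declares std::ostream, which is all the
// operator<< declaration below needs
#ifndef POINT_H
#define POINT_H

#include <iosfwd>

struct Point {
    int x;
    int y;
};

std::ostream& operator<<(std::ostream& os, const Point& p);

#endif // POINT_H

// point.cpp - only the definition needs the real <ostream>
#include "point.h"

#include <ostream>

std::ostream& operator<<(std::ostream& os, const Point& p) {
    return os << '(' << p.x << ", " << p.y << ')';
}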
No, this is not a problem; at least, I have never heard of one.
The preprocessor will do the required work, and I think it's also good style for every class/source file that needs <iostream> to include it.
That way, everyone knows that the file uses functionality provided by <iostream>.
By the way: using namespace std; should be avoided in any case, so that everyone can see which namespace a used name comes from.
There's no problem in doing this; the include guard ensures the standard header file is included only once.
Not an issue, as long as you make sure you don't have too many headers stacked inside each other. With too many, certain OSes can't handle it, namely older ones. But unless you have an ancient computer with very old software, it should be perfectly fine. Good luck!
Obviously, there are two "schools of thought" as to whether to put #include directives into C++ header files (or, as an alternative, put #include only into cpp files). Some people say it's ok, others say it only causes problems. Does anybody know whether this discussion has reached a conclusion what is to be preferred?
I am not aware of any schools of thought concerning this. Put them in the header when they are needed there; otherwise, forward declare and put them in the .cpp files that require them. There is no benefit in including headers where they are not needed.
What I found effective is following a few simple rules:
Headers shall be self-sufficient, i.e., they shall declare classes they need names for and include headers for any definition they use.
Headers should minimize dependencies as much as possible without violating the previous point.
Getting the first point right is fairly easy: include the header first thing from the source file implementing what it declares. Getting the second point exactly right isn't trivial, though, and I think it requires tool support to get it exactly right. However, a few unnecessary dependencies generally aren't that bad.
As a rule of thumb, you don't include headers in a header unless their full definitions are necessary there. Most of the time you deal with pointers to classes in a header file, so it's just fine to forward declare them there.
I think the issue was settled a long time ago: headers should be self-contained (that is, they should not depend on the user having included other headers before - that aspect has been settled for so long that some aren't even aware there was a debate about it, but your "put includes only in .cpp" seems to hint at it) but minimal (i.e., they should not include definitions when a declaration would be enough for self-containment).
The reason for self-containment is maintenance: should a header be modified and come to depend on something new, you'd have to track down all the places it is used to include the new dependency. By the way, the standard trick to ensure self-containment is to include the header that provides declarations for the things defined in a .cpp first in that .cpp.
These are not schools of thought so much as religions. In reality, both approaches have their advantages and disadvantages, and there are certain practices to be followed for either approach to be successful. But only one of these approaches will "scale" to large projects.
The advantage of not including headers inside headers is faster compilation. However, this advantage does not come from headers being read only once, because even if you include headers inside headers, smart compilers can work that out. The speed advantage comes from the fact that you include only those headers which are strictly necessary for a given source file. Another advantage is that if we look at a source file, we can see exactly what its dependencies are: the flat list of header files gives that to us plainly.
However, this practice is hard to maintain, especially in large projects with many programmers. It's quite an inconvenience when you want to use module foo, but you cannot just #include "foo.h": you need to include 35 other headers.
What ends up happening is this: programmers are not going to waste their time discovering the exact, minimal set of headers that they need just to add module foo. To save time, they will go to some example source file similar to the one they are working on, and cut and paste all of the #include directives. Then they will try compiling it, and if it doesn't build, then they will cut and paste more #include directives from yet elsewhere, and repeat that until it works.
The net result is that, little by little, you lose the advantage of faster compiling, because your files are now including unnecessary headers. Moreover, the list of #include directives no longer shows the true dependencies. Moreover, when you do incremental compiles now, you compile more than is necessary due to these false dependencies.
Once every source file includes nearly every header, you might as well have a big everything.h which includes all the headers, and then #include "everything.h" in every source file.
So this practice of including just specific headers is best left to small projects that are carefully maintained by a handful of developers who have plenty of time to maintain the ethic of minimal include dependencies by hand, or write tools to hunt down unnecessary #include directives.
Does including the same header files multiple times increase the compilation time?
For example, suppose every file in my project uses <iostream> <string> <vector> and <algorithm>. And if I include a lot of files in my source code, then does that increase the compile time?
I always thought that header guards served the important purpose of avoiding double definitions but, as a by-product, also eliminated duplicate code.
Actually, someone I know proposed some ideas to remove such multiple inclusions. However, I consider them to be completely against good design practice in C++. But I was still wondering what his reasons for suggesting the changes might be.
Most of these answers are wrong... For modern compilers, there is zero overhead for including the same file multiple times, assuming the header uses the usual "include guard" idiom.
The GCC preprocessor, for example, has special code to recognize the include guard idiom. It will not even open the header file (never mind reading it) for the second and subsequent #include directives.
I am not sure about other compilers, but I would be very surprised if most of them did not implement the same optimization.
Another technique besides precompiled headers is the compiler firewall idiom, explained here:
http://www.gotw.ca/publications/mill04.htm
http://www.gotw.ca/publications/mill05.htm
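The technique those articles describe is the pimpl idiom: the header exposes only an opaque pointer to the implementation, so private details can change without forcing clients to recompile. A rough sketch (all names are made up):

// database.h - clients that include this never see the private
// members, so editing those members doesn't trigger their rebuilds
#ifndef DATABASE_H
#define DATABASE_H

#include <memory>

class Database {
public:
    Database();
    ~Database();   // defined in the .cpp, where Impl is complete
    void connect();
private:
    struct Impl;   // defined only in database.cpp
    std::unique_ptr<Impl> impl_;
};

#endif // DATABASE_H

// database.cpp
#include "database.h"

struct Database::Impl {
    bool connected = false;   // private details live here
};

Database::Database() : impl_(new Impl) {}
Database::~Database() = default;
void Database::connect() { impl_->connected = true; }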
Every time #include <something.h> occurs in your source file, something.h has to be found along the include path and read. But thanks to the #ifndef _SOMETHING_H_ check, the content of something.h will not be compiled again.
Thus there is some overhead, but it is really small.
If compile times were an issue, people used to use the optimisation recommended by Praetorian, originally recommended in Large-Scale C++ Software Design. However, most modern compilers automatically optimise for this case. For example, see the documentation for GCC.
The best option is to use precompiled headers. I do not know which compiler you are using, but most of them have this feature. I suggest you refer to your compiler's manual on how to achieve this.
It basically collects the header files and compiles them once into an intermediate form that the compiler can reload quickly on subsequent compilations. That speeds up compiling very much.
Minor Drawback:
You need to have one "uberheader" which is included in every compilation unit (.cpp).
In that uberheader, only include stable headers from libraries, not your own. Then the compiler does not need to recompile it very often.
It helps especially when using header-only libraries such as Boost, GLM, or Eigen.
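A sketch of such an uberheader (the file name and header selection are just examples; consult your compiler's manual for the flags that actually precompile it):

// pch.h - only stable, rarely-changing headers belong here; project
// headers change too often and would force constant PCH rebuilds
#ifndef PCH_H
#define PCH_H

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>
// plus stable third-party headers, e.g. from Boost or Eigen

#endif // PCH_H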
HTH
Yes, including the same header multiple times means that the file needs to be opened before the preprocessor guards kick in and prevent multiple definitions. The Mozilla source code uses the following trick to prevent this:
Foo.h
#ifndef FOO_H
#define FOO_H
// whatever
#endif /* FOO_H */
In all files that need to include foo.h:
#ifndef FOO_H
#include "foo.h"
#endif
This prevents foo.h from having to be opened multiple times. Of course, this depends on everyone following a particular naming convention for their preprocessor guards.
You can't do this with standard headers, since there is no common naming convention for their preprocessor guards.
EDIT:
After reading your question again, I think you're asking about the same header being included in different source files. What I talked about above does not help with that. Each header file will still have to be opened and included at least once in every translation unit. The only way I know of to prevent this is to use precompiled headers, as #scorcher24 mentioned in his answer. But I'd stay away from this solution, because there is no standard way of generating precompiled headers across compilers, unless the compile times are absolutely prohibitive.
Some compilers, most notably Microsoft's, have a #pragma once directive that you can use to automatically skip an include file once it's already been included. This removes any performance penalty.
http://en.wikipedia.org/wiki/Pragma_once
It can be an issue. As others have said, most modern compilers handle the case intelligently, and will only re-open the file in degenerate cases. Most is not all, however, and one of the major exceptions is Microsoft, which a lot of people do have to support. The surest solution (if this is really a problem in your environment) is to use the Lakos convention, putting the include guards around the #include as well as in the header. This means, of course, a standard convention for generating the guard names. (For external includes, wrap them in your own header, which respects your local convention.)
Alternatively, you can use both the guards and #pragma once. The guards will always work, and most compilers will avoid the extra opens, and #pragma once will usually avoid the extra opens with Microsoft. (#pragma once cannot be implemented reliably in complex networked situations, but as long as all of your files are on your local drive, it's quite reliable.)
Concerning headers in a library, I see two options, and I'm not sure if the choice really matters. Say I created a library, let's call it foobar. Please help me choose the most appropriate option:
Have one include file at the very root of the library project, let's call it foobar.h, which includes all of the headers in the library, such as "src/some_namespace/SomeClass.h" and so on. Then from outside the library, in the file where I want to use anything to do with the foobar library, just #include <foobar.h>.
Don't have a main include, and instead include only the headers we need in the places that I am to use them, so I may have a whole bunch of includes in a source file. Since I'm using namespaces sometimes as deep as 3, including the headers seems like a bit of a chore.
I've opted for option 1 because of how easy it is to implement. OpenGL and many other libraries seem to do this, so it seemed sensible. However, the standard C++ library can require me to include several headers in any given file, so why didn't they just have one header file? Unless it's me being an idiot, and they're separate libraries...
Update:
Further to answers, I think it makes sense to provide both options, correct? I'd be pretty annoyed if I wanted to use a std::string but had to include a mass of header files; that would be silly. On the other hand, I'd be irritated if I had to type a mass of #include lines when I wanted to use most of a library anyway.
Forward headers:
Thanks to all that advised me of forward headers, this has helped me make the header jungle less complicated! :)
The STL, Boost, and others that have a lot of header files provide you with independent tools, and you can use them independently.
So if your library is a set of decoupled tools, you should give users the choice to include them as separate parts, as well as to include the whole library as one file.
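For the hypothetical foobar library, offering both might look like this (the sub-header names are illustrative):

// foobar.h - umbrella header for users who want the whole library
#ifndef FOOBAR_H
#define FOOBAR_H

#include "foobar/string_tools.h"
#include "foobar/math_tools.h"
#include "foobar/io_tools.h"

#endif // FOOBAR_H

// Users who only need one part can skip the umbrella and write:
// #include "foobar/string_tools.h"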
Think a bit about how your library will be used, and organize it that way. If someone is unlikely to use one small part without using the whole thing, structure it as one big include. If a small part is independent and useful on its own, make sure you can include just enough for that part. If there's some logical grouping that makes sense, create include files for each group.
As with most programming questions, there's no one-size-fits-all answer.
All #included headers have to be processed. This isn't as bad as it could be, since modern compilers provide some sort of option for not processing them repeatedly (perhaps with something like #pragma once, or an ifndef guard). Still, every #included header has to be processed once for each translation unit, and that can add up fast.
The usual practice is for header files to #include only those header files they need, and to use forward declarations (class foo;) as much as possible. That way, you don't get the overhead.
If you want to #include everything and its brother, you can provide your own header file that #includes everything. You don't have to explicitly write everything out in every header and source file. That option is something you can provide, but if everything in std came as one monolithic header, you wouldn't have an option.
Every time you #include a header file you make the compiler do some pretty hard work. The fewer headers you #include, the less work it has to do and the faster your compilations will be.
Every include file should make sense on its own. And you should choose the header structure from the library user's position: how will users use my library? What structure will be best for them?
Examples:
if your library provides string algorithms, it is better to make one header containing them all: string_algorithms.h;
if your library provides a single facade object, it is better to use one header file (maybe with a few other files for extensions or helpers);
if you provide a collection of objects that will be used independently, make separate header files (container libraries provide different containers);
Forward declare instead of including all those header files at once, then include as and when you need.
However you decide on the header file(s) that you make available (one, several or some combination thereof) for the library's public API, it's always a good idea to have at least one separate header for the private API. (No need to expose the prototypes of the non-exported functions and classes or the definitions that are only intended to be used internally.)