I have a comprehension issue about precompiled headers and the usage of the #include directive.
So I have my "stdafx.h" here, which includes, for example, vector, iostream and string. The associated "stdafx.cpp" only includes "stdafx.h"; that's clear.
So if I design my own header file that uses, for example, code that's in vector or iostream, I have to include the corresponding header, because the compiler doesn't know those declarations at that point. So why do some posts here (include stdafx.h in header or source file?) say it's not good to include "stdafx.h" in other header files, even though it includes the needed declarations of e.g. vector? Basically it shouldn't matter whether I include vector directly or the precompiled header file; both do the same thing.
I know, of course, that I don't have to include a header file in another header file if the associated source file already includes the needed header, because the declarations are known by then. Well, that only works if the header file is included somewhere.
So my question is: should I avoid including the precompiled header file in any source file, and why? And I am a bit confused, because I'm reading contradictory statements on the web saying that I shouldn't include anything in header files at all. So is it O.K. to include headers in header files, or not?
So which is right?
This will be a bit of a blanket statement with intent. The typical setup for PCH in a Visual Studio project follows this general design, and is worth reviewing. That said:
Design your header files as if there is no PCH master-header.
Never build include-order dependencies in your headers that you expect the including source files will fulfill prior to your headers.
The PCH master-header notwithstanding (I'll get to that in a moment), always include your custom headers before standard headers in your source files. This makes it more likely that your custom header is properly self-contained and not reliant on the including source file's previous inclusion of some standard header file.
Always set up appropriate include guards or pragmas to avoid multiple inclusion. They're critical for this to work correctly.
The PCH master-header is not to be included in your header files. When designing your headers, do so with the intent that everything needed (and only that which is needed) by the header to compile is included. If an including source file needs additional includes for its implementation, it can pull them in as needed after your header.
The following is an example of how I would set up a project that uses multiple standard headers in both the .h and .cpp files.
myobject.h
#ifndef MYAPP_MYOBJECT_H
#define MYAPP_MYOBJECT_H
// we're using std::map and std::string here, so..
#include <map>
#include <string>
class MyObject
{
    // some code
private:
    std::map<std::string, unsigned int> mymap;
};
#endif
Note that the above header should compile in whatever .cpp it is included in, with or without PCH being used. On to the source file...
myobject.cpp
// apart from myobject.h, we also need some other standard stuff...
#include "myobject.h"
#include <iostream>
#include <fstream>
#include <algorithm>
#include <numeric>
// code, etc...
Note that myobject.h does not expect you to include something it relies on. It isn't using <iostream> or <algorithm>, etc. in the header; we're using them here.
That is a typical setup with no PCH. Now we add the PCH master.
Adding the PCH Master Header
So how do we set up the PCH master-header to turbo-charge this thing? For the sake of this answer, I'm only dealing with pulling in standard headers and 3rd-party library headers that will not undergo change with the project development. You're not going to be editing <map> or <iostream> (and if you are, get your head examined). Anyway...
See this answer for how a PCH is typically configured in Visual Studio. It shows how one file (usually stdafx.cpp) is responsible for generating the PCH, the rest of your source files then use said-PCH by including stdafx.h.
Decide what goes in the PCH. As a general rule, that is how your PCH should be configured: put non-volatile stuff in there, and leave the rest for the regular source includes. We're using a number of system headers, and those are going to be our choices for our PCH master.
Ensure each source file participating in the PCH turbo-mode is including the PCH master-header first, as described in the linked answer from (1).
So, first, the PCH master header:
stdafx.h
#ifndef MYAPP_STDAFX_H
#define MYAPP_STDAFX_H
// MS has other stuff here. keep what is needed
#include <algorithm>
#include <numeric>
#include <iostream>
#include <fstream>
#include <map>
#include <string>
#endif
Finally, the source files configured to use the PCH then pick it up. The minimal change needed is:
UPDATED: myobject.cpp
#include "stdafx.h" // <=== only addition
#include "myobject.h"
#include <iostream>
#include <fstream>
#include <algorithm>
#include <numeric>
// code, etc...
Note I said minimal. In reality, none of those standard headers need appear in the .cpp anymore, as the PCH master is pulling them in. In other words, you can do this:
UPDATED: myobject.cpp
#include "stdafx.h"
#include "myobject.h"
// code, etc...
Whether you choose to or not is up to you. I prefer to keep them. Yes, it can lengthen the preprocessor phase for the source file as it pulls in the headers, runs into the include-guards, and throws everything away until the final #endif. If your platform supports #pragma once (and VS does) that becomes a near no-op.
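For illustration, here is a sketch of the same header using #pragma once instead of macro guards (non-standard, but supported by VS and most other compilers); the body is unchanged:
// myobject.h, guarded with #pragma once
#pragma once

#include <map>
#include <string>

class MyObject
{
    // some code
private:
    std::map<std::string, unsigned int> mymap;
};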
But make no mistake: The most important part of all of this is the header myobject.h was not changed at all, and does not include, or know about, the PCH master header. It shouldn't have to, and should not be built so it has to.
Precompiled headers are a method to shorten the build time. The idea is that the compiler could "precompile" declarations and definitions in the header and not have to parse them again.
With the speed of today's computers, precompilation is only significant for huge projects: projects with well over 50k lines of code. "Significant" here usually means tens of minutes to build.
There are many issues surrounding Microsoft's stdafx.h. In my experience, the effort and time spent discovering and resolving those issues make this feature more of a hassle for smaller projects. I have my build set up so that, most of the time, I am compiling only a few files; the files that don't change are not recompiled. Thus, I don't see any huge impact or benefit from the precompiled header.
When using the precompiled header feature, every .cpp file must begin by including the stdafx.h header. If it does not, a compiler error results. So there is no point in putting the include in some header file. That header file cannot be included unless the stdafx.h has already been included first.
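As a rough sketch (the file names are hypothetical), a source file in a PCH-enabled MSVC project has to look like this:
// person.cpp - compiled with "Use Precompiled Header" (/Yu"stdafx.h")
#include "stdafx.h"   // must come first; otherwise MSVC stops with an error
                      // (typically C1010: unexpected end of file while
                      // looking for precompiled header)
#include "person.h"

// implementation of the Person class goes here...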
Related
I am currently abstracting OpenGL in C++, and I was wondering whether a certain practice I am using is considered clean and efficient or the complete opposite. I have a header file called "pd.h" which is included in almost every single header file of the abstraction, and in it I include everything my program needs, like so:
#pragma once
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <algorithm>
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <vector>
Is this a very unclean and inefficient way of structuring my code? This is just how I've taught myself to do things, and for some reason, looking at it now, it looks a bit dodgy. If it is not good practice, could someone explain why I shouldn't be doing this?
This is something that works, but it has two drawbacks, assuming you use a build system which compiles each .cpp file into its own .o file and only later links these into the executable.
Each time your pd.h file changes, all including files need to be recompiled. This means that your whole project has to be recompiled when you change this header. If you know that this file will not change often, then this is not a big drawback for a small project.
Build time is increased for each .cpp file because all those header files need to be processed, even though they are not needed. Compilers can precompile headers (see precompiled headers in VS), though this is not part of the ISO C++ standard. Not including headers that are not used is simpler and scales better.
The size of the executable will not change, nor the performance of the resulting application. Just the compilation time is increased.
So instead, add an include only when you really need it. Is it sufficient to have it in the .cpp file? Then do so. Only if it needs to be included in a header file, put it there.
Sometimes you can get away with a forward declaration only. This is the case when the compiler does not need to know the size of the object because you are only defining a pointer to it in the current header.
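A minimal sketch of that case, with made-up names:
// scene.h - stores only a pointer, so a forward declaration is enough
#pragma once

class Renderer;               // forward declaration instead of #include "renderer.h"

class Scene
{
public:
    void draw(Renderer& r);   // pointers and references to an incomplete type are fine
private:
    Renderer* renderer;
};

// scene.cpp would then #include "renderer.h", because the implementation
// actually calls into Renderer and needs its full definition.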
When you use something, this is the order that I would try:
Include that header in the source file only
Use a forward declaration in the header file (might need to keep the include in the source file)
Include that header in the header file
There is a header, <iosfwd>, which has forward declarations for the iostream classes; it can be helpful if you just provide overloads of operator<< for your class. These only take std::ostream&, and therefore the compiler does not need to know the size.
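For instance (the class name is made up):
// point.h - declares operator<< without pulling in all of <iostream>
#pragma once

#include <iosfwd>   // forward declarations of std::ostream and friends

struct Point
{
    int x;
    int y;
};

std::ostream& operator<<(std::ostream& os, const Point& p);

// point.cpp then includes <ostream> (or <iostream>) and defines the operator.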
Is there any valid reason for placing #include directives BEFORE the include guards in a header file like this:
#include "jsarray.h"
#include "jsanalyze.h"
#include "jscompartment.h"
#include "jsgcmark.h"
#include "jsinfer.h"
#include "jsprf.h"
#include "vm/GlobalObject.h"
#include "vm/Stack-inl.h"
#ifndef jsinferinlines_h___
#define jsinferinlines_h___
//main body mostly inline functions
#endif
Note, this example is taken from a real-life, high-profile open source project that is presumably developed by seasoned programmers - the Mozilla SpiderMonkey open source JavaScript engine used in Firefox 10 (the same header file also exists in the latest version).
In high-profile projects, I expect there must be some valid reasons behind such a design. What are the valid reasons to have #include directives before the include guard? Is it a pattern applicable to inline-function-only header files? Also note that this header file (jsinferinlines.h) is actually including itself through the last #include "vm/Stack-inl.h" (this header file includes a lot of other headers and one of them actually includes this jsinferinlines.h again) directive before the include guard; this makes even less sense to me.
Because you expect those headers to have include guards of their own, it doesn't really make any difference.
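A minimal illustration with made-up file names: because each header guards itself, pulling it in again before another header's guard is harmless:
// a.h
#ifndef A_H
#define A_H
// declarations for a.h
#endif

// b.h - includes placed before its own guard, like the SpiderMonkey header
#include "a.h"        // safe: a.h's own guard prevents double processing
#ifndef B_H
#define B_H
// declarations for b.h, which may use things from a.h
#endif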
Also note that this header file (jsinferinlines.h) is actually including itself through the last #include "vm/Stack-inl.h" (this header file includes a lot of other headers and one of them actually includes this jsinferinlines.h again) directive before the include guard; this makes even less sense to me.
It won't make any difference, because the workflow when including that file will be:
include all headers up to "vm/Stack-inl.h"
include "vm/Stack-inl.h" up to its last #include "jsinferinlines.h" (your file)
include the file jsinferinlines.h again (skipping all the previous includes, because of their include guards)
reach past #include "vm/Stack-inl.h"
finally process from #ifndef jsinferinlines_h___ on
But in general, mutual header recursion is bad and should be avoided at all costs.
There were lots of include cycles in SpiderMonkey headers back then, and placing the header guard at the top caused difficult-to-understand compile errors - I suspect that putting the header guard below the includes was just an incremental step toward sanitizing the includes.
I couldn't tell you the exact sequence of includes that caused it to make a difference, though.
Nowadays there shouldn't be any include cycles in SM headers, and their style is enforced through a Python script written by Nicholas Nethercote. If you look at jsinferinlines.h today, you'll see the header guard is at the top where it belongs.
Recently, I encountered the following approach to managing headers. I could not find much info about its problems on the internet, so I decided to ask here.
Imagine you have a program where you have main.c, and also other sources and headers like: person.c, person.h, settings.c, settings.h, maindialog.c, maindialog.h, othersource.c, othersource.h
Sometimes settings.c might need person.c and maindialog.c.
Sometimes some other source might need to include other source files.
Typically one would do inside settings.c:
//settings.c
#include "person.h"
#include "maindialog.h"
But, I encountered approach where one has global.h and inside it:
//global.h
//all headers we need
#include "person.h"
#include "maindialog.h"
#include "settings.h"
#include "otherdialog.h"
Now, from every other source file you only have to include "global.h" and you are done; you get the functionality from the respective source files.
Does this approach with one global.h header have any real problems?
This is to please both pedantic purists and lazy minimalists. If all sub-headers are done the correct way, there is no functional harm in including them via global.h, only a possible compile-time increase.
A sub-header would look somewhat like this:
#ifndef unique_token
#define unique_token
#pragma once
// the useful payload
#endif
If there are common headers that all the source files need (like config.h), you can do so (but treat it like a precompiled header...).
But by including unnecessary headers, you are actually increasing the compilation time. So if you have a really big project and every source file includes all the headers, compilation time can become very long...
I suggest not including headers unless you must.
Even if you use a type, but only as a pointer or reference (especially in headers), prefer a forward declaration to an include.
I've seen some C++ projects (which are quite large, but not huge) on various source code hosting sites, and some of their source files and headers don't include the required headers (such as the programmer's own classes) or the standard ones (vector, string), yet they compile perfectly.
One of these projects is Ogitor (an Ogre scene editor). I looked at its source and it doesn't include some essential header files; I see only the class definitions and very few forward class declarations. I don't make any changes and it compiles perfectly too!
I want to understand how C/C++ developers achieve this. I don't want to end up with a mess of #include directives, or with incomplete-class-definition errors. I've heard that "precompiled headers" have something to do with this.
Precompiled headers are meant to speed up compilation time by being, well... precompiled. They may, but don't have to, include other headers, and when they do, you can skip including those in the file that includes the precompiled header.
The reason you're not seeing all essential headers being included in every file which uses them is because they are indirectly included through other headers. There is no other way of making a header file accessible in a compilation unit, whether you use precompiled headers or not.
Example:
header1.h
#include <vector>
#include <string>
header2.h
#include "header1.h"
#include <array>
header3.h
#include "header2.h"
std::vector<std::string> from_header1;
std::array<int, 4> from_header2;
I have a quick question regarding header files, include statements, and good coding style. Suppose I have 2 classes with associated source and header files, and then a final source file where main() is located.
Within Foo.hpp I have the following statements:
#include <string>
#include <iostream>
#include <exception>
Now within Bar.hpp I have the following statements:
#include "Foo.hpp"
#include <string>
And finally within Myprogram.cpp I have the following statements:
#include "Bar.hpp"
#include <string>
#include <iostream>
#include <exception>
I know the include statements in <> in Myprogram.cpp and Bar.hpp aren't necessary for the program to compile and function, but what is the best practice or the right way of doing things? Is there any reason not to explicitly include the necessary header files in each file?
You should include all necessary files in every file that needs them. If MyProgram.cpp needs string, include it, instead of relying on it being included by Bar.hpp. There's no guarantee 2 weeks from now Bar.hpp will still include it, and then you'll be left with compiler errors.
Note the word necessary - i.e. make sure you actually need an include, and that a forward declaration (or even leaving it out completely) won't do instead.
Also, note that some system headers might include others - but apart from a few exceptions, there's no requirement that they do. So if you need both <iostream> and <string>, include both, even if the code happens to compile with only one of them.
The order in which the includes appear (system includes vs user includes) is up to the coding standard you follow - consistency is more important than the choice itself.
Include everything you need in every file, but do not include any file that you do not need. Normally, it is the job of the included file to make sure it is not included twice, using preprocessor include guards, etc...
For example, if a header is needed by Foo.cpp but not by Foo.h, include it in Foo.cpp, not in Foo.h. If it is required in both, include it in both.
Tangentially, as a best practice, never use using directives in a header file. If you need them, you can use using directives in implementation files (.cpp).
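A short sketch of that last point, with hypothetical names:
// logger.h - keep names fully qualified; no "using namespace std;" here,
// because it would leak into every file that includes this header
#pragma once
#include <string>
void log_message(const std::string& msg);

// logger.cpp - a using directive here affects only this translation unit
#include "logger.h"
#include <iostream>
using namespace std;
void log_message(const string& msg) { cout << msg << '\n'; }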