Single header file with all the necessary #include statements - c++

I am currently working on a program with a lot of source files. Sometimes it is difficult to keep track of what libraries I have already #included. Theoretically, I could make a single header file called Headers.h that just contains all the #include statements I need, then make all other header files #include "Headers.h".
Why is this a good/bad idea?

Pros:
Slightly less maintenance, as you don't have to keep track of which of your files are including headers from which libraries or other components.
Cons:
Definitions in included files might conflict with each other, especially in C, where you don't have namespaces (you tagged the question with both C and C++).
Macros in particular can cause hard-to-debug problems, where a macro definition unexpectedly conflicts with some name in your file or in one of the other included files.
Depending on which compiler you use, compilation times might blow out. A compiler that pre-compiles headers might actually see reduced compilation time, but if yours does not, the opposite will happen.
You will often unnecessarily trigger rebuilds of files. If you have your build system set up correctly, then each source file will get rebuilt if any of the included files gets modified. If you always include all headers in your project, then a change to any of your headers will force recompilation of all your source files. Not likely to be an issue for system headers but it will be if you include your own headers in the master file as well.
On the whole I would not recommend that approach. The last con listed above is particularly important.
Best practice is to include only the headers that are needed for the code in each file, as the sketch below illustrates.
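To illustrate, a minimal sketch with hypothetical names: the header pulls in only what its own declarations need, and forward-declares the rest.

// widget.h -- includes only what the declarations below require
#ifndef WIDGET_H
#define WIDGET_H

#include <string>   // needed here: std::string is used by value

class Logger;       // forward declaration: a pointer member needs no full definition

class Widget {
public:
    Widget(std::string name, Logger* log);
private:
    std::string name_;
    Logger* log_;
};

#endif

// widget.cc -- the full Logger definition is only needed here
#include "widget.h"
#include "logger.h"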

To complement Harmic's answer: indeed, the main issue is the build system (most build tools work on file timestamps, not on file contents; omake is a notable exception).
Notice that if your concern is managing many dependencies, GNU make can be used with automatic dependencies, generated with the -M* options passed to GCC (i.e. to g++, and actually to the preprocessor); see the commands below.
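For instance, a sketch (assuming an OBJS variable listing your object files; see the GCC manual for the exact flag semantics): you ask g++ to write a .d dependency fragment per object file and pull those fragments into the makefile.

$ g++ -MMD -MP -c foo.cc -o foo.o    # also writes foo.d, listing the headers foo.o depends on
# then, in the makefile:
-include $(OBJS:.o=.d)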
However, many libraries offer their users a single header (e.g. <gtk/gtk.h>).
Also, a single header file is more friendly to precompiled-header technology. In particular, GCC works best with a single header for precompilation.
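For example, with GCC you can precompile such a single header yourself; any later inclusion of it picks up the .gch file automatically, provided the compiler options match (a sketch, with a hypothetical header name):

$ g++ -std=c++17 headers.h           # produces headers.h.gch
$ g++ -std=c++17 -c main.cc          # #include "headers.h" now uses the .gch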
See also ccache.

Tracking all the required includes would be more difficult, as they are abstracted away from their C source files, and it does not really support modularisation; plus all the cons from Harmic's answer.

Related

Including all header files in application

I was recently looking through the source code of a C++ application and saw that each class did not #include its needed components, but instead #include'd a "Precompiled.h" header. In this Precompiled header was an inclusion of almost every header in the application (not all of them; it was clear that the length and order of the list was deliberate). Essentially, this would mean that every class had an inclusion of every other class in the application.
Is this wise? Why or why not?
Usually, if you write an application, you should only include the header files which are really needed in the cpp files. If you have a really big application, you should use forward declarations in the header and include the necessary files in the cpp file. With that, changes in code affect only a minimum of cpp files, so the compiler only has to compile what has really changed.
The situation can totally flip when it comes to libraries or code which does not change very often. The filename "Precompiled.h" is already a hint. The compiler can precompile the headers to a special object file, often called a PCH file. With that, the compiler does not have to resolve every include on every compile. With heavily nested includes, this has a high impact on compile speed, because instead of many files to load and parse, there is only one preparsed file. To achieve that you have to declare one or more headers as a kind of central file for building a precompiled header. How you do that differs between compilers.
For example, Visual Studio uses the header file "stdafx.h" as the center of the precompilation of header files. Because of that, only header files which are not altered very often should be included there. Also, the file has to be included first in every cpp file. That is because the compiler can no longer detect whether an include file that comes before it might influence the precompiled file. To avoid that, includes before the precompiled include are not allowed.
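As a rough sketch of that classic Visual Studio arrangement (the contents are illustrative):

// stdafx.h -- only stable, rarely-changing headers belong here
#include <vector>
#include <string>
#include <windows.h>

// every .cpp file in the project then starts like this:
#include "stdafx.h"     // must come first
#include "MyClass.h"    // project headers that change often come after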
Back to your question. Including every file in one header file to use it as a precompiled header makes no sense at all, as it counteracts the purpose of a precompiled header file.
It is a very bad idea.
For a .cpp file only include the minimum number of #include files.
That way, when one of them changes, make (or its moral equivalent) will not require the whole lot to be recompiled.
Saves lots of time during development.
PS Use forward declarations in preference to #include

How to include certain header files by default so that I don't have to type them in every program

How do I include certain header files by default, so that I don't have to type them in every program, in Dev-C++ and Code::Blocks?
Make a global header file that in turn includes whatever files you need in every project, and then you only have to include that single file.
However, I would recommend against it, unless all your different projects are very similar. Different projects have different needs and also need different header files.
You could issue a compiler directive in your project file or make script to do "per project" includes, but in general I would avoid that.
Source code should be as clear as possible to any reader just by its content. Whenever I have source code that dramatically changes its semantics, e.g. by headers that are unknown to me, this can be quite confusing.
On top of that, if you "inject" those headers for certain compilation units that don't need them, that will negatively impact compile time.
As an alternative, what about introducing a common.h/hpp header that includes that certain set of header files? You can then include your common header in all files that need it and change this common set of headers for all dependent files at once. It also opens the door to using precompiled header files, which may be worth a look for you.
From the GCC documentation (AFAIK GCC is the default compiler used by the development environments you are citing):
-include file
Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal.
If multiple -include options are given, the files are included in the order they appear on the command line.
-imacros file
Exactly like -include, except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations.
All files specified by -imacros are processed before all files specified by -include.
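For illustration, that means an invocation such as this (hypothetical file names):

$ g++ -include headers.h -c main.cpp   # as if main.cpp began with #include "headers.h"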
But it is usually a bad idea to use these.
Dev-C++ works with the MinGW compiler, which is GCC for Windows. GCC supports precompiled headers, so you can try that. Precompiled headers are header files that you want compiled and added to every object file in a project. Try searching for that on Google for some information.
Code::Blocks supports them too when used with GCC, even better, so there it may be even easier.
If your editor of choice supports macros, make one that adds your preferred set of include files. Once made, all you have to do is invoke your macro to save yourself the repetitive typing and you're golden.
Hope this helps.

Combining C++ header files

Is there an automated way to take a large amount of C++ header files and combine them in a single one?
This operation must, of course, concatenate the files in the right order, so that types, etc. are defined before they are used by upcoming classes and functions.
Basically, I'm looking for something that allows me to distribute my library in two files (libfoo.h, libfoo.a), instead of the current bunch of include files + the binary library.
As your comment says:
.. I want to make it easier for library users, so they can just do one single #include and have it all.
Then you could just spend some time including all your headers in a "wrapper" header, in the right order. 50 headers are not that many. Just do something like:
// libfoo.h
#include "header1.h"
#include "header2.h"
// ..
#include "headerN.h"
This will not take much time if you do it manually.
Also, adding new headers later is a matter of seconds: just add them to this "wrapper" header.
In my opinion, this is the most simple, clean and working solution.
A little bit late, but here it is. I just recently stumbled into this same problem myself and coded this solution: https://github.com/rpvelloso/oneheader
How does it work?
Your project's folder is scanned for C/C++ headers and a list of headers found is created;
For every header in the list, it analyzes its #include directives and assembles a dependency graph in the following way:
If the included header is not located inside the project's folder then it is ignored (e.g., if it is a system header);
If the included header is located inside the project's folder, then an edge is created in the dependency graph, linking the included header to the current header being analyzed;
The dependency graph is topologically sorted to determine the correct order in which to concatenate the headers into a single file. If a cycle is found in the graph (i.e., it is not a DAG), the process is interrupted; a minimal sketch of this step is shown below.
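For reference, a minimal sketch of that sorting step (Kahn's algorithm) over an already-parsed include graph; the names here are illustrative, not the tool's actual code:

#include <map>
#include <queue>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

// deps[h] = project headers that h directly #includes;
// a valid order lists every header after all of its dependencies
std::vector<std::string> topo_sort(
    const std::map<std::string, std::set<std::string>>& deps) {
    std::map<std::string, std::vector<std::string>> dependents; // dep -> headers needing it
    std::map<std::string, std::size_t> pending;                 // header -> unmet dep count
    for (const auto& [h, ds] : deps) pending[h] = ds.size();
    for (const auto& [h, ds] : deps)
        for (const auto& d : ds) {
            dependents[d].push_back(h);
            pending.emplace(d, 0); // headers that include nothing themselves
        }
    std::queue<std::string> ready;
    for (const auto& [h, n] : pending)
        if (n == 0) ready.push(h);
    std::vector<std::string> order;
    while (!ready.empty()) {
        std::string h = ready.front(); ready.pop();
        order.push_back(h);
        for (const auto& next : dependents[h])
            if (--pending[next] == 0) ready.push(next);
    }
    if (order.size() != pending.size())
        throw std::runtime_error("include cycle detected"); // not a DAG
    return order;
}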
Limitations:
It currently only detects single-line #include directives;
It does not handle headers with the same name in different paths;
It only gives you a correct order in which to combine all the headers; you still need to concatenate them (maybe you want to remove or modify some of them prior to merging).
Compiling:
g++ -Wall -ggdb -std=c++1y oneheader.cpp -o oneheader[.exe] -lstdc++fs
Usage:
./oneheader[.exe] project_folder/ > file_sequence.txt
(Adapting an answer to my dupe question:)
There are several other libraries which aim for a single-header form of distribution, but are developed using multiple files; and they too need such a mechanism. For some (most?) it is opaque and not part of the distributed code. Luckily, there is at least one exception: Lyra, a command-line argument parsing library; it uses a Python-based include file fuser/joiner script, which you can find here.
The script is not well documented, but the way you use it is with 3 command-line arguments:
--src-include - The include file to convert, i.e. to merge its include directives into its body. In your case it's libfoo.h which includes the other files.
--dst-include - The output file to write - the result of the merging.
--src-include-dir - The directory relative to which include files are specified (i.e. an "include search path" of one directory; the script doesn't support the complex mechanism of multiple include paths and search priorities which the C++ compiler offers)
The script acts recursively, so if file1.h includes another file under the --src-include-dir, that should be merged in as well.
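Putting it together, an invocation might look like this (the script's filename here is my assumption; check the repository):

$ python single_include.py --src-include=libfoo.h --dst-include=single/libfoo.h --src-include-dir=include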
Now, I could nitpick at the code of that script, but - hey, it works and it's FOSS - distributed with the Boost license.
If your library is so big that you cannot build and maintain a single wrapping header file like Kiril suggested, this may mean that it is not architected well enough.
So if your library is really huge (above a million lines of source code), you might consider automating that, with tools like
GCC's make-dependency-generator preprocessor options such as -M, -MD, -MF, etc., with another hand-made script sorting them
expensive commercial static analysis tools like Coverity
customizing a compiler through plugins or (for GCC 4.6) MELT extensions
But I don't understand why you want an automated way of doing this. If the library is of reasonable size, you should understand it and be able to write and maintain a wrapping header by hand. Automating that task will take you some effort (probably weeks, not minutes), so it is worthwhile only for very large libraries.
If you have a master include file that includes all others available, you could simply hack a C preprocessor re-implementation in Perl. Process only ""-style includes and recursively paste the contents of these files. Should be a twenty-liner.
If not, you have to write one up yourself or try at random. Automatic dependency tracking in C++ is hard. Like in "let's see if this template instantiation causes an implicit instantiation of the argument class" hard. The only automated way I see is to shuffle your include files into a random order, see if the whole bunch compiles, and re-shuffle them until it compiles. Which will take n! time, you might be better off writing that include file by hand.
While the first variant is easy enough to hack, I doubt the sensibility of this hack, because you want to distribute on a package level (source tarball, deb package, Windows installer) instead of a file level.
You really need a build script to generate this as you work, and a preprocessor flag to disable use of the amalgamated header (the non-amalgamated form could be for your own use).
To simplify this script/program, it helps to have your header structures and include hygiene in top form.
Your program/script will need to know your discovery paths (hint: minimise the count of search paths to one if possible).
Run the script or program (which you create) to replace include directives with header file contents.
Assuming your headers are all guarded, as is typical, you can keep track of which files you have already physically included and perform no action if there is another request to include them. If a header is not found, leave it as-is (as an include directive) -- this is required for system/third-party headers -- unless you use a separate header for external includes (which is not at all a bad idea). A sketch of this approach follows below.
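A minimal sketch of such a script in C++, under the assumptions above (it handles only "..."-style includes, pastes each guarded header once, and leaves unresolved directives in place):

#include <fstream>
#include <iostream>
#include <regex>
#include <set>
#include <string>

std::set<std::string> pasted; // headers already physically included

void paste(const std::string& path, std::ostream& out) {
    if (!pasted.insert(path).second) return;   // second request: perform no action
    std::ifstream in(path);
    if (!in) {                                 // not found: leave as an include directive
        out << "#include \"" << path << "\"\n";
        return;
    }
    static const std::regex inc(R"(\s*#\s*include\s*"([^"]+)".*)");
    std::smatch m;
    for (std::string line; std::getline(in, line); ) {
        if (std::regex_match(line, m, inc))
            paste(m[1].str(), out);            // recurse into local headers
        else
            out << line << '\n';
    }
}

int main(int argc, char** argv) {
    for (int i = 1; i < argc; ++i)
        paste(argv[i], std::cout);             // e.g. ./amalgamate libfoo.h > single.h
}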
It's good to have a build phase/translation that compiles the header alone and produces zero warnings or errors (with warnings treated as errors).
Alternatively, you can create a special distribution repository so they never need to do more than pull from it occasionally.
What you want to do sounds "javascriptish" to me :-) . But if you insist, there is always "cat" (or the equivalent in Windows):
$ cat file1.h file2.h file3.h > my_big_file.h
Or if you are using gcc, create a file my_decent_lib_header.h with the following contents:
#include "file1.h"
#include "file2.h"
#include "file3.h"
and then use
$ gcc -C -E my_decent_lib_header.h -o my_big_file.h
and this way you even get file/line directives that will refer to the original files (although that can be disabled, if you wish).
As for how automatic is this for your file order, well, it is not at all; you have to decide the order yourself. In fact, I would be surprised to hear that a tool that orders header dependencies correctly in all cases for C/C++ can be built.
Usually you don't want to include every bit of information from all your headers in the special header that enables the potential user to actually use your library. The non-trivial removal of type definitions, further includes, or defines that the user of your interface does not need to know about cannot be done automatically, as far as I know.
Short answer to your main question:
No.
My suggestions:
manually make a new header that contains all relevant information (nothing more, nothing less) for the user of your library interface. Add nice documentation comments for each component it contains.
use forward declarations where possible instead of full-fledged included definitions. Put the actual includes in your implementation files. The fewer include statements you have in your headers, the better.
don't build a deeply nested hierarchy of includes. This makes it extremely hard to keep an overview of the contents of every bit you include. The user of your library will look into the header to learn how to use it, and he will probably not be able to distinguish relevant code from irrelevant at first sight. You want to maximize the ratio of relevant code to total code in the main header of your library.
EDIT
If you really do have a toolkit library, and the order of inclusion really does not matter, and you have a bunch of independent headers that you want to enumerate just for convenience into a single header, then you can use a simple script. Like the following Python (untested):
import glob

with open("convenience_header.h", 'w') as f:
    for header in glob.glob("*.h"):
        if header != "convenience_header.h":  # don't include the output file itself
            f.write('#include "%s"\n' % header)

C++ Single Header File Structure

I want to speed up the build time of my c++ project, and I am wondering if my current structure may cause unnecessary recompilations.
I have *.cc and corresponding *.h files, but all my *.cc files include a single header file which is main.h.
In main.h, I include everything necessary, declare extern global variables, and declare the functions I use. Basically, I'm not using any namespaces.
Is this a bad design that could cause unnecessary recompiles and slow builds?
It depends. If main.h is seldom modified, you could use precompiled headers, which will greatly improve compilation time.
On the other hand, if main.h is regularly used, it's probably not a good design.
An additional problem introduced by putting everything in one include file is that it doesn't really promote structure in your application. In well-designed applications you often have a layered structure. By putting everything in one include file, you obfuscate the structure of your application. This may work for a small application, but if your application grows, you will end up one day with complete spaghetti, where everything depends on everything else.
Try to split the include file in multiple parts. Typically you will have one .cpp and one .h file per class. Try to use forward declarations as much as possible in your include file, and only include (in .h and .cpp) what's really needed.
That design will definitely lead to slow build times. What makefiles and IDEs do when you start a build is check which source (.cc) files have been modified since the last time you compiled. They also check whether any files that a source file depends on have been modified. A source file depends on all the header files it includes, all the header files those header files include, and so on. If any modifications are detected, that source file is recompiled.
Since your setup means that each source file includes every single header file, any time you modify even a single header file you need to recompile every source file.
You'll definitely want to try to separate things a bit more and get rid of your main.h file. By the way, people usually try to minimize the number of header files included in a header file and prefer to keep the includes in source files.

Are there general guidelines for solving undefined reference/unresolved symbol issues?

I'm having several "undefined reference" (during linkage) and "unresolved symbol" (during runtime after dlopen) issues where I work. It is quite a large makefile system.
Are there general rules and guidelines for linking libraries and using compiler flags/options to evade these types of errors?
If you were using MSVC:
You cannot evade this type of error by setting a flag: it means some units (.cpp) don't have definitions of declared identifiers. It's certainly caused by missing includes or missing object definitions (often static objects) somewhere.
While developing, you can follow these guidelines (from those articles) to be sure all your cpp files include all the headers they need, but no more:
Every cpp file includes its own header file first. This is the most important guideline; everything else follows from here. The only exception to this rule are precompiled header includes in Visual Studio; those always have to be the first include in the file. More about precompiled headers in part two of this article.
A header file must include all the header files necessary to parse it. This goes hand in hand with the first guideline. I know some people try to never include header files within header files, claiming efficiency or something along those lines. However, if a file must be included before a header file can be parsed, it has to be included somewhere. The advantage of including it directly in the header file is that we can always decide to pull in a header file we're interested in and we're guaranteed that it'll work as is. We don't have to play the "guess what other headers you need" game.
A header file should have the bare minimum number of header files necessary to parse it. The previous rule said you should have all the includes you need in a header file. This rule says you shouldn't have any more than you have to. Clearly, start by removing (or not adding in the first place) useless include statements. Then, use as many forward declarations as you can instead of includes. If all you have are references or pointers to a class, you don't need to include that class' header file; a forward reference will do nicely and much more efficiently.
But as commenters have suggested, it seems you're using g++...
Setting up a build system where X depends on Y which depends on Z helps. It's when you get into circles (Z depends on X) that things get ugly.
Oftentimes it's the order libraries are linked in ("-lZ -lY -lX" vs "-lX -lY -lZ") that causes grief. More rarely, you have the same library name in multiple places on your search path, or you're linking against outdated versions that have not yet been recompiled.
"nm --demangle" can let you see where things are defined/used.
"ldd" can be used to see what dynamic libraries you depend on.
The gcc/g++ flag -print-file-name=LIBRARY can help track down exactly which library is being used.
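For instance (illustrative invocations; the file names are hypothetical):

$ nm --demangle libfoo.a | grep ' T '   # symbols libfoo.a defines
$ nm --demangle main.o | grep ' U '     # symbols main.o still needs
$ ldd ./myapp                           # dynamic libraries myapp depends on
$ g++ -print-file-name=libfoo.so        # which libfoo.so the toolchain would pick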
Afterthought: (Since you ask about rules/guidelines.)
It is possible to set up a makefile system such that:
If module D depends on modules A, B, & C,
then trying to make module D would first make modules A, B, & C.
And, more importantly, module D would automatically determine its libraries (-lA, etc.), library paths (-LA), and include paths (-IA) from the makefiles for modules A, B, & C.
That can get a little hairy to set up. The last time I did it, I favored merely caching the information rather than forking an excessive number of make subprocesses, coupled with makefile importing and a little Perl script to remove duplicates. Kludgey, I know. (The powers that be didn't want to spend time on infrastructure.) But it can be done.
Then again, I was using GNU-make, which has a few extensions.
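A minimal sketch of the idea in GNU make (the directory layout and variable names are hypothetical):

# modules/D/module.mk -- D only declares what it depends on
D_DEPS := A B C
# the flags are derived automatically from that list:
D_CPPFLAGS := $(foreach m,$(D_DEPS),-Imodules/$(m)/include)
D_LDFLAGS  := $(foreach m,$(D_DEPS),-Lmodules/$(m)) $(foreach m,$(D_DEPS),-l$(m))

modules/D/libD.a: $(foreach m,$(D_DEPS),modules/$(m)/lib$(m).a)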