How should I detect unnecessary #include files in a large C++ project? - c++

I am working on a large C++ project in Visual Studio 2008, and there are a lot of files with unnecessary #include directives. Sometimes the #includes are just artifacts and everything will compile fine with them removed, and in other cases classes could be forward declared and the #include could be moved to the .cpp file. Are there any good tools for detecting both of these cases?

While it won't reveal unneeded include files, Visual Studio has a /showIncludes option (right-click a .cpp file, Properties -> C/C++ -> Advanced) that outputs a tree of all files included at compile time. This can help in identifying files that shouldn't need to be included.
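For example, compiling a file with /showIncludes enabled produces indented output along these lines (paths are illustrative), where deeper indentation means deeper nesting in the include chain:
Note: including file: C:\Program Files\Microsoft Visual Studio 9.0\VC\include\vector
Note: including file:  C:\Program Files\Microsoft Visual Studio 9.0\VC\include\memory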
You can also take a look at the pimpl idiom to let you get away with fewer header file dependencies to make it easier to see the cruft that you can remove.
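As a minimal sketch of that idiom (names are illustrative), the public header only needs a forward declaration of the implementation class, so the headers the implementation depends on move into the .cpp file:
// widget.h - no heavy includes needed here
class WidgetImpl; // forward declaration is enough for a pointer member
class Widget {
public:
    Widget();
    ~Widget();
private:
    WidgetImpl* m_impl; // implementation details hidden behind a pointer
};

// widget.cpp
#include "widget.h"
#include "widget_impl.h" // the heavy dependencies stay here, out of the public header
Widget::Widget() : m_impl(new WidgetImpl) {}
Widget::~Widget() { delete m_impl; }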

PC-Lint works quite well for this, and it finds all sorts of other goofy problems for you too. It has command-line options that can be used to create External Tools in Visual Studio, but I've found that the Visual Lint add-in is easier to work with. Even the free version of Visual Lint helps. But give PC-Lint a shot. Configuring it so it doesn't give you too many warnings takes a bit of time, but you'll be amazed at what it turns up.

There's a new Clang-based tool, include-what-you-use, that aims to do this.
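A typical run (assuming the binary and its companion fix_includes.py script are installed; the tool writes its suggestions to stderr) looks something like:
include-what-you-use -I. foo.cpp 2> iwyu_output.txt
python fix_includes.py < iwyu_output.txt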

!!DISCLAIMER!! I work on a commercial static analysis tool (not PC Lint). !!DISCLAIMER!!
There are several issues with a simple non-parsing approach:
1) Overload Sets:
It's possible that an overloaded function has declarations that come from different files. It might be that removing one header file results in a different overload being chosen rather than a compile error! The result will be a silent change in semantics that may be very difficult to track down afterwards.
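A deliberately small sketch of how that can happen (names are hypothetical):
// a.h
void process(double d);

// b.h
void process(int i);

// main.cpp
#include "a.h"
#include "b.h" // looks removable: the file still compiles without it
int main() {
    process(42); // with b.h this is an exact match for process(int);
                 // without b.h, 42 converts to double and process(double)
                 // is silently called instead
}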
2) Template specializations:
Similar to the overload example, if you have partial or explicit specializations for a template you want them all to be visible when the template is used. It might be that specializations for the primary template are in different header files. Removing the header with the specialization will not cause a compile error, but may result in undefined behaviour if that specialization would have been selected. (See: Visibility of template specialization of C++ function)
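Again, a hypothetical sketch:
// serialize.h
struct Packet {};
template <typename T>
struct Serializer {
    static void write(const T&) { /* generic path */ }
};

// serialize_packet.h (assumes serialize.h was included first)
template <>
struct Serializer<Packet> {
    static void write(const Packet&) { /* packet-specific path */ }
};

// user.cpp
#include "serialize.h"
#include "serialize_packet.h" // nothing below names this header directly, yet
                              // removing it silently selects the primary template
void send(const Packet& p) { Serializer<Packet>::write(p); }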
As pointed out by 'msalters', performing a full analysis of the code also allows for analysis of class usage. By checking how a class is used through a specific path of files, it is possible that the definition of the class (and therefore all of its dependencies) can be removed completely, or at least moved to a level closer to the main source in the include tree.

I don't know of any such tools, and I have thought about writing one in the past, but it turns out that this is a difficult problem to solve.
Say your source file includes a.h and b.h; a.h contains #define USE_FEATURE_X and b.h uses #ifdef USE_FEATURE_X. If #include "a.h" is commented out, your file may still compile, but may not do what you expect. Detecting this programmatically is non-trivial.
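For instance (a deliberately tiny sketch):
// a.h
#define USE_FEATURE_X

// b.h
#ifdef USE_FEATURE_X
typedef int handle_t; // one definition with the feature enabled...
#else
typedef long handle_t; // ...and a different one without it
#endif
Comment out #include "a.h" and everything still compiles, but handle_t has silently changed type.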
Whatever tool does this would need to know your build environment as well. If a.h looks like:
#if defined( WINNT )
#define USE_FEATURE_X
#endif
Then USE_FEATURE_X is only defined if WINNT is defined, so the tool would need to know what directives are generated by the compiler itself as well as which ones are specified in the compile command rather than in a header file.

Like Timmermans, I'm not familiar with any tools for this. But I have known programmers who wrote a Perl (or Python) script to try commenting out each include line one at a time and then compile each file.
It appears that Eric Raymond now has a tool for this.
Google's cpplint.py has an "include what you use" rule (among many others), but as far as I can tell, no "include only what you use." Even so, it can be useful.
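If memory serves, the relevant filter category is build/include_what_you_use, so an invocation along these lines should run just that check:
python cpplint.py --filter=-,+build/include_what_you_use foo.cpp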

If you're interested in this topic in general, you might want to check out Lakos' Large Scale C++ Software Design. It's a bit dated, but goes into lots of "physical design" issues like finding the absolute minimum of headers that need to be included. I haven't really seen this sort of thing discussed anywhere else.

Give Include Manager a try. It integrates easily into Visual Studio and visualizes your include paths, which helps you find unnecessary stuff.
Internally it uses Graphviz, but there are many more cool features. And although it is a commercial product, it has a very low price.

You can build an include graph using C/C++ Include File Dependencies Watcher, and find unneeded includes visually.

If your header files generally start with
#ifndef SOMEHEADER_H
#define SOMEHEADER_H
// header contents
#endif
(as opposed to using #pragma once) you could change that to:
#ifndef SOMEHEADER_H
#define SOMEHEADER_H
// header contents
#else
#pragma message("Someheader.h superfluously included")
#endif
And since the compiler outputs the name of the cpp file being compiled, that would let you know at least which cpp file is causing the header to be brought in multiple times.

PC-Lint can indeed do this. One easy way is to configure it to report only unused include files and ignore all other issues. This is pretty straightforward: to enable just message 766 ("Header file not used in module"), include the options -w0 +e766 on the command line.
The same approach can also be used with related messages such as 964 ("Header file not directly used in module") and 966 ("Indirectly included header file not used in module").
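A minimal invocation along those lines might look like this (lint-nt is PC-Lint's Windows executable; a .lnt file holding your project options is assumed):
lint-nt -w0 +e766 +e964 +e966 project.lnt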
FWIW I wrote about this in more detail in a blog post last week at http://www.riverblade.co.uk/blog.php?archive=2008_09_01_archive.xml#3575027665614976318.

Adding one or both of the following #defines before the first #include <windows.h> will exclude often-unnecessary header files and may substantially improve compile times, especially if the code is not using Windows API functions.
#define WIN32_LEAN_AND_MEAN // trims rarely-used parts of the Windows API
#define VC_EXTRALEAN
#include <windows.h>
See http://support.microsoft.com/kb/166474

If you are looking to remove unnecessary #include files in order to decrease build times, your time and money might be better spent parallelizing your build process using cl.exe /MP, make -j, Xoreax IncrediBuild, distcc/icecream, etc.
Of course, if you already have a parallel build process and you're still trying to speed it up, then by all means clean up your #include directives and remove those unnecessary dependencies.

Start with each include file, and ensure that each include file only includes what is necessary to compile itself. Any include files that are then missing for the C++ files can be added to the C++ files themselves.
For each include and source file, comment out each include file one at a time and see if it compiles.
It is also a good idea to sort the include files alphabetically, and where this is not possible, add a comment.
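For example (file names are hypothetical):
#include "audio.h"
#include "network.h"
#include "platform_fixup.h" // must come after network.h: it undoes macros network.h defines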

If you aren't already, using a precompiled header to include everything that you're not going to change (platform headers, external SDK headers, or static already completed pieces of your project) will make a huge difference in build times.
http://msdn.microsoft.com/en-us/library/szfdksca(VS.71).aspx
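As a sketch, such a precompiled header might look like this (the contents are entirely project-specific):
// stdafx.h - only stable headers belong here; touching any of them rebuilds the PCH
#include <windows.h>
#include <string>
#include <vector>
#include <boost/smart_ptr.hpp>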
Also, although it may be too late for your project, organizing your project into sections and not lumping all local headers into one big main header is good practice, although it takes a little extra work.

If you work with Eclipse CDT, you could try out http://includator.com to optimize your include structure. However, Includator might not know enough about VC++'s predefined includes, and setting up CDT to use VC++ with the correct includes is not built into CDT yet.

The latest JetBrains IDE, CLion, automatically shows (in gray) the includes that are not used in the current file.
It is also possible to get a list of all the unused includes (and also functions, methods, etc.) from the IDE.

Some of the existing answers state that it's hard. That's indeed true, because you need a full compiler to detect the cases in which a forward declaration would be appropriate. You can't parse C++ without knowing what the symbols mean; the grammar is simply too ambiguous for that. You must know whether a certain name names a class (which could be forward-declared) or a variable (which can't). Also, you need to be namespace-aware.
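The classic illustration:
T * x; // if T names a type, this declares x as a pointer to T;
       // if T names a variable, it multiplies T by x and discards the result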

Maybe a little late, but I once found a WebKit Perl script that did just what you wanted. It'll need some adapting I believe (I'm not well versed in Perl), but it should do the trick:
http://trac.webkit.org/browser/branches/old/safari-3-2-branch/WebKitTools/Scripts/find-extra-includes
(this is an old branch because trunk doesn't have the file anymore)

If there's a particular header that you think isn't needed anymore (say string.h), you can comment out that include, then put this below all the includes:
#ifdef _STRING_H_
# error string.h is included indirectly
#endif
Of course your interface headers might use a different #define convention to record their inclusion with the preprocessor, or no convention at all, in which case this approach won't work.
Then rebuild. There are three possibilities:
1) It builds OK. string.h wasn't compile-critical, and the include for it can be removed.
2) The #error trips. string.h was included indirectly somehow. You still don't know whether string.h is required; if it is, you should #include it directly (see below).
3) You get some other compilation error. string.h was needed and isn't being included indirectly, so the include was correct to begin with.
Note that depending on indirect inclusion when your .h or .c directly uses another .h is almost certainly a bug: you are in effect promising that your code will only require that header as long as some other header you're using requires it, which probably isn't what you meant.
The caveats mentioned in other answers, about headers that modify behavior rather than declaring things which cause build failures, apply here as well.

Related

Show #include hierarchy in C++Builder

I'm having trouble with someone else's code, what seems to be header files included out of order. (E.g., I'm getting redefinition errors, some of which are even in the same file!) It would be useful to see the #include tree the C++Builder compiler is using, similar to Visual Studio's -showIncludes flag. Is there any such functionality; if so, how do I access it? I am specifically using C++Builder 2007.
This usually happens if you include files containing global constants, variables, and sometimes even #defines multiple times. It is very common in MDI apps, where the master Form includes the child Forms and some of them use the same libs.
The include hierarchy would not help with this unless you plan to edit the #include order of all source files, which can lead to problems later on (especially with compatibility).
To remedy this you should encapsulate all such files with
#ifndef _file_name_h
#define _file_name_h
// here your source and includes
#endif
statements. Like in this example:
OpenGL drawing a cube
That will prevent multiple definitions and recompilation at the preprocessor level, as the source is processed only the first time it is included (on later inclusions _file_name_h is already defined).
Sadly, there are no Borland C Compiler options for displaying the hierarchy of #included files. See Embarcadero's BCC32 CLI docs.
However, an alternative (granted, not as clean) is to use the Borland C Compiler Preprocessor, e.g.
CPP32 -Sr source.cpp # outputs source.i with comments and indentation retained

Show an error (in Visual studio), when someone tries to use STL in a current .cpp or .h file

In our company we sometimes write .cpp and .h files that are used both in projects for old WM (we use Embedded Visual C++ 3.0 or something like it for this) and in more modern code (VS 2010).
This Embedded Visual C++ does not support the STL.
So if one of the developers working in VS2010 changes a shared file and adds a function that uses std::vector, for instance, everything will be OK on his side, but the build (which is quite long) will fail.
So to see this mistake sooner, I would like to add something like
#if defined(%%STL%%)
#error("!!!!")
#endif
in all files that are compiled with the old toolset. In this case the developer could see a compile-time error even in VS2010.
But I could not find what I can put instead %%STL%% there.
Any ideas? Or maybe someone knows a better way how I can do this?
Based on a comment to the question, you could go through each of the header files that aren't supported and see what symbols they define for their include guard. Then check for those symbols being defined.
E.g., the Microsoft C++ header <algorithm> defines _ALGORITHM_, so you can check for that:
#ifdef _ALGORITHM_
#error("<algorithm> included")
#endif
A bunch of these could be collected up and put into a single header file that you could include in each shared source file, at the end.
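A sketch of such a collected header (the guard macro names follow MSVC's convention and should be verified against your toolset's actual headers):
// no_stl.h - include at the end of each shared source file
#ifdef _ALGORITHM_
#error("<algorithm> included")
#endif
#ifdef _VECTOR_
#error("<vector> included")
#endif
#ifdef _STRING_
#error("<string> included")
#endif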
There is quite a nice solution (at least I do not see pitfalls): %%STL%% should be _STD_BEGIN.
This macro is used for "namespace std {" in the VS STL implementation.
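That reduces the whole check to the following (keeping in mind that _STD_BEGIN is an undocumented internal of the VS implementation):
#ifdef _STD_BEGIN
#error("STL header included in a file shared with the old toolset")
#endif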

Combining C++ header files

Is there an automated way to take a large amount of C++ header files and combine them in a single one?
This operation must, of course, concatenate the files in the right order, so that types, etc. are defined before they are used in upcoming classes and functions.
Basically, I'm looking for something that allows me to distribute my library in two files (libfoo.h, libfoo.a), instead of the current bunch of include files + the binary library.
As your comment says:
.. I want to make it easier for library users, so they can just do one single #include and have it all.
Then you could just spend some time including all your headers in a "wrapper" header, in the right order. 50 headers are not that many. Just do something like:
// libfoo.h
#include "header1.h"
#include "header2.h"
// ..
#include "headerN.h"
This will not take much time if you do it manually.
Also, adding new headers later is a matter of seconds: just add them to this wrapper header.
In my opinion, this is the simplest, cleanest solution that works.
A little bit late, but here it is. I just recently stumbled into this same problem myself and coded this solution: https://github.com/rpvelloso/oneheader
How does it work?
Your project's folder is scanned for C/C++ headers and a list of headers found is created;
For every header in the list it analyzes its #include directives and assemble a dependency graph in the following way:
If the included header is not located inside the project's folder then it is ignored (e.g., if it is a system header);
If the included header is located inside the project's folder, then an edge is created in the dependency graph, linking the included header to the current header being analyzed;
The dependency graph is topologically sorted to determine the correct order in which to concatenate the headers into a single file. If a cycle is found in the graph (i.e., it is not a DAG), the process is interrupted. A compact sketch of this ordering step follows below.
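For illustration only (this is not the tool's actual code), the ordering step amounts to Kahn's topological sort over the include graph, shown here with hypothetical header names:
#include <cstddef>
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <string>

typedef std::map<std::string, std::set<std::string> > Graph; // header -> headers that include it

int main() {
    Graph g; // illustrative data: types.h is included by util.h, util.h by api.h
    g["types.h"].insert("util.h");
    g["util.h"].insert("api.h");
    g["api.h"]; // api.h includes nothing further inside the project

    // Kahn's algorithm: a header may be emitted once everything it includes is emitted
    std::map<std::string, int> indeg;
    for (Graph::const_iterator i = g.begin(); i != g.end(); ++i) {
        if (!indeg.count(i->first)) indeg[i->first] = 0;
        for (std::set<std::string>::const_iterator j = i->second.begin(); j != i->second.end(); ++j)
            ++indeg[*j];
    }
    std::queue<std::string> ready;
    for (std::map<std::string, int>::const_iterator i = indeg.begin(); i != indeg.end(); ++i)
        if (i->second == 0) ready.push(i->first);
    std::size_t emitted = 0;
    while (!ready.empty()) {
        std::string h = ready.front(); ready.pop();
        std::cout << h << "\n"; // safe to concatenate this header now
        ++emitted;
        const std::set<std::string>& users = g[h];
        for (std::set<std::string>::const_iterator j = users.begin(); j != users.end(); ++j)
            if (--indeg[*j] == 0) ready.push(*j);
    }
    if (emitted != indeg.size())
        std::cerr << "cycle detected: the include graph is not a DAG\n";
    return 0;
}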
Limitations:
It currently only detects single-line #include directives (e.g., #include <header> written on one line);
It does not handle headers with the same name in different paths;
It only gives you the correct order in which to combine all the headers; you still need to concatenate them yourself (maybe you want to remove or modify some of them prior to merging).
Compiling:
g++ -Wall -ggdb -std=c++1y -lstdc++fs oneheader.cpp -o oneheader[.exe]
Usage:
./oneheader[.exe] project_folder/ > file_sequence.txt
(Adapting an answer to my dupe question:)
There are several other libraries which aim for a single-header form of distribution, but are developed using multiple files; and they too need such a mechanism. For some (most?) it is opaque and not part of the distributed code. Luckily, there is at least one exception: Lyra, a command-line argument parsing library; it uses a Python-based include file fuser/joiner script, which you can find here.
The script is not well documented, but the way you use it is with 3 command-line arguments:
--src-include - The include file to convert, i.e. to merge its include directives into its body. In your case it's libfoo.h which includes the other files.
--dst-include - The output file to write - the result of the merging.
--src-include-dir - The directory relative to which include files are specified (i.e. an "include search path" of one directory; the script doesn't support the complex mechanism of multiple include paths and search priorities which the C++ compiler offers)
The script acts recursively, so if file1.h includes another file under the --src-include-dir, that should be merged in as well.
Now, I could nitpick at the code of that script, but - hey, it works and it's FOSS - distributed with the Boost license.
If your library is so big that you cannot build and maintain a single wrapping header file like Kiril suggested, this may mean that it is not architected well enough.
So if your library is really huge (above a million lines of source code), you might consider automating that, with tools like
GCC's dependency-generating preprocessor options (-M, -MD, -MF, etc.), with another hand-made script sorting their output
expensive commercial static analysis tools like Coverity
customizing a compiler through plugins or (for GCC 4.6) MELT extensions
But I don't understand why you want an automated way of doing this. If the library is of reasonable size, you should understand it and be able to write and maintain a wrapping header by hand. Automating that task will take some effort (probably weeks, not minutes), so it is worthwhile only for very large libraries.
If you have a master include file that includes all others available, you could simply hack a C preprocessor re-implementation in Perl. Process only ""-style includes and recursively paste the contents of these files. Should be a twenty-liner.
If not, you have to write one up yourself or try at random. Automatic dependency tracking in C++ is hard. Like in "let's see if this template instantiation causes an implicit instantiation of the argument class" hard. The only automated way I see is to shuffle your include files into a random order, see if the whole bunch compiles, and re-shuffle them until it compiles. Which will take n! time, you might be better off writing that include file by hand.
While the first variant is easy enough to hack, I doubt the wisdom of this hack, because you want to distribute at the package level (source tarball, deb package, Windows installer) instead of the file level.
You really need a build script to generate this as you work, and a preprocessor flag to disable use of the amalgamation (the non-amalgamated form could be for your own use).
To simplify this script/program, it helps to have your header structures and include hygiene in top form.
Your program/script will need to know your discovery paths (hint: minimise the count of search paths to one if possible).
Run the script or program (which you create) to replace include directives with header file contents.
Assuming your headers are all guarded as is typical, you can keep track of what files you have already physically included and perform no action if there is another request to include them. If a header is not found, leave it as-is (as an include directive) -- this is required for system/third party headers -- unless you use a separate header for external includes (which is not at all a bad idea).
It's good to have a build phase/translation that compiles the header alone and produces zero warnings or errors (warnings treated as errors).
Alternatively, you can create a special distribution repository so they never need to do more than pull from it occasionally.
What you want to do sounds "javascriptish" to me :-) . But if you insist, there is always "cat" (or the equivalent in Windows):
$ cat file1.h file2.h file3.h > my_big_file.h
Or if you are using gcc, create a file my_decent_lib_header.h with the following contents:
#include "file1.h"
#include "file2.h"
#include "file3.h"
and then use
$ gcc -C -E my_decent_lib_header.h -o my_big_file.h
and this way you even get file/line directives that will refer to the original files (although that can be disabled, if you wish).
As for how automatic is this for your file order, well, it is not at all; you have to decide the order yourself. In fact, I would be surprised to hear that a tool that orders header dependencies correctly in all cases for C/C++ can be built.
Usually you don't want to include every bit of information from all your headers in the special header that enables the potential user to actually use your library. The non-trivial removal of type definitions, further includes, or defines that the user of your interface does not need to know about cannot be automated, as far as I know.
Short answer to your main question:
No.
My suggestions:
manually make a new header that contains all relevant information (nothing more, nothing less) for the user of your library interface. Add nice documentation comments for each component it contains.
use forward declarations where possible, instead of full-fledged included definitions. Put the actual includes in your implementation files. The less include statements you have in your headers, the better.
don't build a deeply nested hierarchy of includes. This makes it extremely hard to keep an overview of the contents of every bit you include. The user of your library will look into the header to learn how to use it, and he will probably not be able to distinguish relevant code from irrelevant at first sight. You want to maximize the ratio of relevant code to total code in the main header of your library.
EDIT
If you really do have a toolkit library, and the order of inclusion really does not matter, and you have a bunch of independent headers, that you want to enumerate just for convenience into a single header, then you can use a simple script. Like the following Python (untested):
import glob

# write one #include line for every header found in the current directory
with open("convenience_header.h", 'w') as f:
    for header in glob.glob("*.h"):
        f.write("#include \"%s\"\n" % header)

C++ Compiler Macro for Status of "Use Precompiled Headers"

Is there a predefined C++ compiler macro that I can use to tell whether a file is compiled with "Use Precompiled Headers", "Create Precompiled Headers", or "Don't Use Precompiled Headers"?
See @IronMensan's answer for the purpose of such a macro!
I don't think there is anything, though I certainly understand the desire for one. Whenever I have to build my cross-platform library on a system that doesn't support PCH, it takes forever, since a lot of files are pulling in way more than they really need, and it would be nice to trim that out. Unfortunately I can't because of how Visual Studio handles PCH, namely that the inclusion of the PCH must be the first non-comment line of the file. From the way you worded your question, I suspect that you are also working with Visual Studio.
I am not sure if this will work for you but you could try something like this:
#include MY_PCH_FILE
And use
/DMY_PCH_FILE="myfile.h"
on the command line to control what the first include file is. After that you have full control over what gets included, and proper header guards, along with the optimization in most modern compilers that detects header guards, could reduce build times. You can change the definition of the macro for an individual file in the build settings of your project, in a similar manner to how you can change the PCH settings for each file.
Though I must admit that I am not sure what you are trying to do, and I suspect this is really an XY problem.
Visual Studio/MSC does not provide a predefined macro that carries the setting of the /Y[-cdu] compiler switch for inspection from source code.
However, there is a solution to the problem you are trying to solve, i.e. controlling whether or not the first non-comment line of a source file should be #include "<my pch.h>": MSC offers the /FI (Name Forced Include File) compiler switch.
This option has the same effect as specifying the file with double quotation marks in an #include directive on the first line of every source file specified on the command line [...]
This compiler switch can either be specified on the compiler's command line, or on a per-project basis through the IDE's GUI (Project -> Properties: C/C++ -> Advanced: Forced Include File).
With a combination of the /Y[-cdu] and /FI compiler switches you can both control the use and meet the requirements for using precompiled headers, from outside the source code.
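A sketch of how the two combine on the command line (file names are illustrative):
cl /c /Yu"stdafx.h" /FI"stdafx.h" source.cpp  # use the PCH; stdafx.h is injected without appearing in the source
cl /c /Y- source.cpp                          # ignore all PCH options for this one file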
In this case, I think you can create the macro manually yourself.
You can define USE_PRECOMPILEDHDR and FORCED_INCLUDEHDR when you use precompilation, like this:
#if USE_PRECOMPILEDHDR
#ifndef FORCED_INCLUDEHDR
#include "stdafx.h"
#endif
#else
// ..manually include all your headers
#endif
But as others are saying, unless you switch to another compiler, you have no reason to use guards for this.
This feature is unlikely to exist. The whole point of precompiled headers is that the headers will be compiled with exactly the same compiler options as when compiling for real. If the compiler were to offer a way for your code to tell the difference, then you could make your code behave differently (at a preprocessor level) depending on whether the compiler is precompiling or actually compiling.
If you're looking to include header files based on whether or not precompiled headers are enabled, you should use an Include Guard instead.

precompiled header files usage for library builders

According to this answer, Boost and STL headers belong in the precompiled header file (stdafx.h in the MSVC world). So I changed the headers of my dynamic link library project and moved all STL/Boost headers into the stdafx.h of my project.
Before
#include <boost/smart_ptr.hpp>
namespace XXX
{
class CLASS_DECL_BK CExampleClass // CLASS_DECL_BK is just a standard dll import/export macro
{
private:
boost::scoped_ptr<Replica> m_replica;
};
}
After
namespace XXX
{
class CLASS_DECL_BK CExampleClass
{
private:
boost::scoped_ptr<Replica> m_replica;
};
}
Now I have the advantage of decreased compile times, but all the users of my library are getting build errors (e.g. unknown boost::scoped_ptr...) because of the missing includes (which are now moved to my stdafx.h).
What could be a solution for this dilemma?
I want reduced compile times, and compile errors after including my header files are not acceptable for any user of the DLL.
Could this help?
Leave all include directives as they are, but duplicate them in my stdafx.h? Since stdafx.h is always included first inside any cpp file of my project, I should be fine, and the users won't get any errors. Or do I lose the speed advantage if multiple includes of the same header occur in one translation unit (they all have header guards)?
Thanks for any hints!
You should get nearly the same speed increase when you leave the header-includes in place in the library headers and just additionally put them into stdafx.h.
Alternatively, you could add an additional define (a bulk external include guard)
// stdafx.h
#define MY_LIB_STD_HEADERS_ALREADY_INCLUDED
// library_file.h
#ifndef MY_LIB_STD_HEADERS_ALREADY_INCLUDED
#include <boost/smart_ptr.hpp>
...
#endif
But I would only do that if you are sure it helps. Just take a stopwatch and run a few re-compilations. (No need to link.) Then you'll see if there's any difference.
Aside
I'm not sure if adding all Boost headers that are needed somewhere in the project is such a good idea. I'd say shared_ptr and friends, boost/foreach, maybe Boost.Format, ... are a good idea, but I'd already think twice about the Boost.Regex headers. Note: I did not do any speed measurements, but I dimly remember a problem with the size of the PCH file and some compiler hiccup. I really should do some tests.
Also check whether the Boost library in question provides forwarding headers and whether you should include them instead. Bloating the precompiled header file can have its downsides.
You could create a build configuration for this purpose (Debug, Release, CheckDependencies). A simple way to achieve this would be to use the preprocessor to include/exclude the includes based on the current configuration. Using this, you can test and build using Debug or Release (which contain the larger set of includes), then build all configurations before distribution.
To clarify, the conditional include MON_LIBRARY_VALIDATE_DEPENDENCIES is not to be used in the library headers or sources, only in the precompiled header:
// my pch:
#include <file1.hpp>
#include <file2.hpp>
// ...
#if !defined(MON_LIBRARY_VALIDATE_DEPENDENCIES)
#include <boost/stuff.hpp>
// ...
#endif
Then you would append MON_LIBRARY_VALIDATE_DEPENDENCIES to the list of preprocessor definitions in the CheckDependencies configuration.
Regarding guards: they should not be a problem under normal circumstances. Compilers detect idiomatic include guards, which means they can avoid reopening a file in many cases if it has been multiply included and guarded correctly. In fact, attempts to outsmart the compiler in this arena can actually slow down the process. I'd say just leave it as typical unless your library/dependencies are huge and you really have noticeable problems.
Making your compilation units self-contained (let them include everything they use) is very desirable. This will prevent compilation errors for others who do not use precompiled headers, and, as you assume, header guards will keep the cost of these extra includes minimal.
It also has the desirable side effect that a glance at the headers tells users which other headers are in use in the unit, and that a single unit can be compiled on its own without any fuss.