Is there a way for the preprocessor to detect whether the code in the current
translation unit uses (or is creating) precompiled headers?
---
The actual problem I'm facing right now is that I'm on a project that is
abusing PCH by precompiling virtually all header files. That means there is none of
the clear dependency management you can get from #includes, and the compile times are awful.
Practically every change triggers a full rebuild.
The application is way too big to just fix in one go, and some of the old guys refuse
to believe that precompiling everything is bad in any way; I will have to prove it first.
So I must do it step by step and make sure my changes do not affect
code that is compiled the old PCH way.
My plan is to #ifdef out the PCH.h include and work on the non-PCH version whenever I have some time to spare:
#ifdef USES_PCH
#include "PCH.h"
#else
// include only what's needed
#endif
I would like to avoid defining USES_PCH on the command line and manually keeping it in
sync with the /Y compiler options; besides not being very elegant, that would be a pain. There are a lot of configurations
and modules to juggle, and a lot of files that don't follow the project defaults.
If Visual C++ defined a constant to indicate whether precompiled headers were in use, it would probably be listed in Predefined Macros. It isn't documented there, so it probably doesn't exist. (If it does exist, it's undocumented and may change in a future version.)
This will not work: when using precompiled headers in Visual C++, you cannot have any code at all before including the precompiled header. I was trying to do something similar when I came across your question, and after a little trial and error I found that there can be no code prior to the #include directive for the precompiled header when the /Yu compiler option is used.
#ifdef USES_PCH
#include "stdafx.h"
#endif
result: fatal error C1020: unexpected #endif
As far as I know, it can't, but there are some heuristics: VC++ uses StdAfx.h, Borland uses #pragma hdrstop, etc.
Related
First of all, I want to say that I have read about precompiled headers and I understand that this is an optimization that saves me the time of compiling headers over and over on every build.
I'm reading the documentation of boost and I see that in the instructions they say:
In Configuration Properties > C/C++ > Precompiled Headers, change Use Precompiled Header (/Yu) to Not Using Precompiled Headers
And then they explain it:
There's no problem using Boost with precompiled headers; these instructions merely avoid precompiled headers because it would require Visual Studio-specific changes to the source code used in the examples.
Can someone explain the sentence I marked in bold? Which Visual Studio-specific changes are they talking about? (Here is the link to the documentation I'm reading: http://www.boost.org/doc/libs/1_55_0/more/getting_started/windows.html#pch)
Why and when would I want to turn off precompiled headers?
What is the difference between "Create" and "Use" in the precompiled header options?
Originally a comment, but I may as well post it. Note: this is specific to VC++:
The bold sentence is their way of saying the samples don't follow the mantra of a unified use-this-lead-in-header-for-PCH-generation model. IOW, their samples aren't PCH-friendly, but you can still use PCH with Boost in your projects if properly configured.
You would turn them off for a variety of reasons. Some source modules, particularly ones from 3rd parties, don't follow the PCH model of including "the" PCH through-header at their outset. Their samples are such code (and thus the advice to turn PCH off for their samples). Sometimes source files require different preprocessor configurations for those files only, not for all files in the project; that is another reason to disable PCH for those files.
You typically use a source/header pair to generate "the One": the precompiled header image. This header file typically includes:
Any system standard lib headers used by your project
3rd-party SDK headers
Just about everything else that is NOT in active development for your project.
The single source file tagged as Create typically includes one line of code: #include "YourHeaderFile.h", where YourHeaderFile.h is the header you filled with the stuff from the list above. Tagging it as "Create" through header YourHeaderFile.h tells VC it is the file needed for rebuilding the PCH through that header when compiling other source files. All other source files are tagged as Use (except the ones where PCH is turned off) and should include, as their first line of code, the same #include "YourHeaderFile.h".
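A minimal sketch of that layout (the file names here are conventions, not requirements):
// YourHeaderFile.h - "the One", containing only stable, rarely-changing includes
#pragma once
#include <windows.h>            // system standard lib / platform headers
#include <vector>               // C++ standard library headers
#include <boost/shared_ptr.hpp> // 3rd-party SDK headers

// CreatePCH.cpp - the single source file tagged as Create (/Yc)
#include "YourHeaderFile.h"

// AnyOther.cpp - tagged as Use (/Yu); this include must be the first line of code
#include "YourHeaderFile.h"
#include "anyOther.h" // headers under active development come after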
In short (hard to believe), Boost is telling you their samples aren't set up as described above, and as such you should turn PCH off when building them.
When you use pre-compiled headers, you need to do something like:
#include <foo>
#include <bar>
#include <baz>
#pragma hdrstop
// other code here
Everything before the #pragma goes into the precompiled header. Everything after it depends on the precompiled header. The VC++ specific "magic" to make pre-compiled header work is that #pragma.
There's a little more to the story than just that though. To make pre-compiled headers work well, you want to include exactly the same set of headers in exactly the same order in every source file.
That leads to (typically) creating one header that includes all the other common headers and has the #pragma hdrstop right at its end, then including that in all the other source files.
Then, when the compiler does its thing, there are two phases: first you need to create a pre-compiled header. This means running the compiler with one switch. The compiler only looks at what comes before the #pragma hdrstop, builds a symbol table (and such) and puts the data into a .pch file.
Then comes the phase when you do a build using the pre-compiled header. In this phase, the compiler simply ignores everything in the file up to the #pragma hdrstop. When it gets to that, it reads its internal state from the .pch file, and then starts compiling that individual file.
This means each source file typically includes a lot of headers it doesn't actually need. That, in turn, means that if you don't use pre-compiled headers, you end up with compilation that's much slower than if you hadn't done anything to support pre-compiled headers at all.
In other words, although the only part that's absolutely required is the #pragma hdrstop, which is fairly innocuous, a great deal more file restructuring is needed to get much benefit from them; and those changes are likely to be actively harmful to compilation time if you're using anything that doesn't support pre-compiled headers (and in the same way VC++ does them, at that).
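For the record, under VC++ the two phases correspond to the /Yc (create) and /Yu (use) compiler switches. A sketch, assuming the conventional stdafx.h naming, where stdafx.cpp contains only #include "stdafx.h":
> cl /c /Ycstdafx.h stdafx.cpp
> cl /c /Yustdafx.h other.cpp
The first invocation writes the .pch file; the second loads it instead of recompiling everything above the PCH boundary in other.cpp.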
When precompiled headers are on, every .cpp source file must start with #include "stdafx.h".
So you would turn them off if you do not want to edit all the Boost source files.
When precompiled headers are on, stdafx.cpp "creates" the precompiled header. All other files "use" the precompiled header.
Is there a predefined C++ compiler macro that I can use to tell whether a file is compiled with "Use Precompiled Headers", "Create Precompiled Headers", or "Don't Use Precompiled Headers"?
See IronMensan's answer for the purpose of such a macro!
I don't think there is anything, though I certainly understand the desire for one. Whenever I have to build my cross-platform library on a system that doesn't support PCH, it takes forever, since a lot of files are pulling in way more than they really need, and it would be nice to trim that out. Unfortunately, I can't, because of how Visual Studio handles PCH, namely that the inclusion of the PCH must be the first non-comment line of the file. From the way you worded your question, I suspect that you are also working with Visual Studio.
I am not sure if this will work for you, but you could try something like this:
#include MY_PCH_FILE
And use
/DMY_PCH_FILE=\"myfile.h\"
on the command line (the quotes must be escaped so they survive the shell) to control what the first include file is. After that you have full control over what gets included, and proper header guards, along with the optimization in most modern compilers that detects header guards, could reduce build times. You can change the definition of the macro for individual files in the build settings of your project, in a similar manner to how you can change the PCH settings for each file.
Though I must admit that I am not sure what you are trying to do, and I suspect this is really an XY problem.
Visual Studio/MSC does not provide a predefined macro that carries the setting of the /Y[-cdu] compiler switch for inspection from source code.
However, there is a solution to the problem you are trying to solve, i.e. controlling whether or not the first non-comment line of a source file should be #include "<my pch.h>": MSC offers the /FI (Name Forced Include File) compiler switch.
This option has the same effect as specifying the file with double quotation marks in an #include directive on the first line of every source file specified on the command line [...]
This compiler switch can either be specified on the compiler's command line, or on a per-project basis through the IDE's GUI (Project -> Properties: C/C++ -> Advanced: Forced Include File).
With a combination of the /Y[-cdu] and /FI compiler switches you can both control the use and meet the requirements for using precompiled headers, from outside the source code.
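As a sketch, assuming the conventional stdafx.h/stdafx.cpp names, the combination might look like this:
> cl /c /Ycstdafx.h /FIstdafx.h stdafx.cpp
> cl /c /Yustdafx.h /FIstdafx.h other.cpp
Here /FI injects #include "stdafx.h" at the top of every listed file, so no source file has to mention the precompiled header at all.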
In this case, I think you can create the macro manually yourself.
You can define USE_PRECOMPILEDHDR and FORCED_INCLUDEHDR when you use precompilation, like this:
#if USE_PRECOMPILEDHDR
#ifndef FORCED_INCLUDEHDR
#include "stdafx.h"
#endif
#else
// ... manually include all your headers
#endif
But as others have said, unless you switch to another compiler, you have no reason to use guards for this.
This feature is unlikely to exist. The whole point of precompiled headers is that the headers will be compiled with exactly the same compiler options as when compiling for real. If the compiler were to offer a way for your code to tell the difference, then you could make your code behave differently (at a preprocessor level) depending on whether the compiler is precompiling or actually compiling.
If you're looking to include header files based on whether or not precompiled headers are enabled, you should use an Include Guard instead.
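For reference, a minimal include guard looks like this (the file name and macro are just examples):
// some_header.h
#ifndef SOME_HEADER_H
#define SOME_HEADER_H
// declarations ...
#endif // SOME_HEADER_H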
According to this answer, Boost and STL headers belong in the precompiled header file (stdafx.h in the MSVC world). So I changed the headers of my dynamic link library project and moved all STL/Boost headers into the stdafx.h of my project.
Before
#include <boost/smart_ptr.hpp>
namespace XXX
{
class CLASS_DECL_BK CExampleClass // CLASS_DECL_BK is just a standard dll import/export macro
{
private:
boost::scoped_ptr<Replica> m_replica;
};
}
After
namespace XXX
{
class CLASS_DECL_BK CExampleClass
{
private:
boost::scoped_ptr<Replica> m_replica;
};
}
Now I have the advantage of decreased compile times, but all the users of my library are getting build errors (e.g. unknown boost::scoped_ptr...) because of the missing includes (which are now moved to my stdafx.h).
What could be a solution for this dilemma?
I want reduced compile times, and compile errors after including my header files are not acceptable for any users of the DLL.
Could this help?
Leave all include directives as they are, but duplicate them in my 'stdafx.h'? Since stdafx.h is always included first inside any .cpp file of my project, I should be fine, and the users won't get any errors. Or do I lose the speed advantage if multiple includes of the same header occur in one translation unit (I've got header guards)?
Thanks for any hints!
You should get nearly the same speed increase if you leave the header includes in place in the library headers and just additionally put them into stdafx.h.
Alternatively, you could add an additional define (a bulk external include guard)
// stdafx.h
#define MY_LIB_STD_HEADERS_ALREADY_INCLUDED
// library_file.h
#ifndef MY_LIB_STD_HEADERS_ALREADY_INCLUDED
#include <boost/smart_ptr.hpp>
...
#endif
But I would only do that if you are sure it helps. Just take a stopwatch and run a few re-compilations. (No need to link.) Then you'll see if there's any difference.
Aside
I'm not sure if adding all the Boost headers that are needed somewhere in the project is such a good idea. I'd say shared_ptr and friends, boost/foreach, maybe Boost.Format, ... are a good idea, but I'd already think twice about the Boost.Regex headers. Note: I did not do any speed measurements, but I dimly remember a problem with the size of the PCH file and some compiler hiccup. I really should do some tests.
Also check whether the Boost library in question provides forwarding headers and whether you should include them instead. Bloating the precompiled header file can have its downsides.
You could create a build configuration for this purpose (Debug, Release, CheckDependencies). A simple way to do this would be to use the preprocessor to include/exclude the includes based on the current configuration. Using this, you can test and build using Debug or Release (which contain the larger set of includes), then build all configurations before distribution.
To clarify, the conditional include MON_LIBRARY_VALIDATE_DEPENDENCIES is not to be used in the library headers or sources, only in the precompiled header:
// my pch:
#include <file1.hpp>
#include <file2.hpp>
// ...
#if !defined(MON_LIBRARY_VALIDATE_DEPENDENCIES)
#include <boost/stuff.hpp>
// ...
#endif
Then you would append MON_LIBRARY_VALIDATE_DEPENDENCIES to the list of preprocessor definitions in the CheckDependencies configuration.
Regarding guards: they should not be a problem under normal circumstances. Compilers use optimizations to detect these cases, which means they can avoid opening a file that has been multiply included and guarded correctly. In fact, attempts to outsmart the compiler in this arena can actually slow down the process. I'd say just leave it as typical, unless your library/dependencies are huge and you really have noticeable problems.
Making your compilation units self-contained (letting them include everything they use) is very desirable. This will prevent compilation errors for users who do not use precompiled headers, and, as you assume, header guards will keep the cost of these extra includes minimal.
This will also have the desirable side effect that a glance at the headers will tell users which other headers are in use in the unit, and it leaves the option of compiling the single unit without any fuss.
I have a Visual Studio C++ based program that uses pre-compiled headers (stdafx.h). Now we are porting the application to Linux using gcc 4.x.
The question is how to handle pre-compiled header in both environments.
I've googled but can not come to a conclusion.
Obviously I want leave stdafx.h in Visual Studio since the code base is pretty big and pre-compiled headers boost compilation time.
But the question is what to do in Linux. This is what I found:
Leave the stdafx.h as is. gcc compiles code considerably faster than VC++ (or maybe it's just that my Linux machine is stronger... :) ), so I may be happy with this option.
Use the approach from here - make stdafx.h look like this (define USE_PRECOMPILED_HEADER for VS only):
#ifdef USE_PRECOMPILED_HEADER
... my stuff
#endif
Use the approach from here - compile VC++ with /FI to implicitly include stdafx.h in each cpp file. Therefore in VS your code can be switched easily to be compiled without pre-compiled headers and no code will have to be changed.
I personally dislike the dependencies and the mess stdafx.h pushes a big code base towards. Therefore this option is appealing to me: on Linux you don't have stdafx.h, while still being able to turn on precompiled headers in VS via /FI only.
On Linux compile stdafx.h only as a precompiled header (mimic Visual Studio)
Your opinion? Are there other approaches to treat the issue?
You're best off still using precompiled headers for the fastest compilation.
You can use precompiled headers in gcc as well. See here.
The compiled precompiled header will have the extension .gch instead of .pch.
So, for example, if you precompile stdafx.h, you will have a precompiled header called stdafx.h.gch that will be searched for automatically any time you include stdafx.h.
Example:
stdafx.h:
#include <string>
#include <stdio.h>
a.cpp:
#include "stdafx.h"
int main(int argc, char**argv)
{
std::string s = "Hi";
return 0;
}
Then compile as:
> g++ -c stdafx.h -o stdafx.h.gch
> g++ a.cpp
> ./a.out
Your compilation will work even if you remove stdafx.h after step 1.
I used option 3 last time I needed to do this same thing. My project was pretty small but this worked wonderfully.
I'd either go for option 4 or option 2. I've experimented with precompiled headers on various VS versions and GCC on Linux (blog posts about this here and here). In my experience, VS is a lot more sensitive than G++ to the length of the include paths, the number of directories in the include path, and the number of include files. When I measured build times, properly arranged precompiled headers made a massive difference to the compile time under VS, whereas G++ was pretty much unimpressed by this.
Actually, based on the above, the last time I worked on a project where this was necessary to rein in the compile time, I precompiled the equivalent of stdafx.h under Windows, where it made sense, and simply used it as a regular file under Linux.
Very simple solution.
Add a dummy stdafx.h file in the Linux environment.
I would only use option 1 in a big team of developers.
Options 2, 3, and 4 will often halt the productivity of other members of your team, just so you can save a few minutes a day in compile time.
Here's why:
Let's assume that half of your developers use VS and half use gcc.
Every now and then some VS developer will forget to include a header in a .cpp file.
He won't notice, because stdafx.h implicitly includes it. So he pushes his changes to version control, and then a few other members of the gcc team get compiler errors.
So, for every 5 minutes a day you gain by using precompiled headers, 5 other people waste time fixing your missing headers.
If you don't share the same code across all of your compilers, you will run into problems like that every day. If you force your VS developers to check for compilation on gcc before pushing changes, then you will throw away all your productivity gains from using precompiled headers.
Option 4 sounds appealing, but what if you want to use another compiler at some point in time? Option 4 only works if you use only VS and gcc.
Notice that option 1 may make gcc compilation a few seconds slower, although it may not be noticeable.
It's simple, really:
Project->Project Settings (Alt + F7)
Project-Settings-Dialog:
C++ -> Category: Precompiled Headers -> Precompiled Headers radio buttons --> disable
Since stdafx.h is by default all the Windows-specific stuff, I've put an empty stdafx.h on my other platform. That way your source code stays identical, while effectively disabling stdafx on Linux without having to remove all the #include "stdafx.h" lines from your code.
If you are using CMake in your project, there are modules which automate this for you, which is very convenient; for example, see cmake-precompiled-header here. To use it, just include the module and call:
include( cmake-precompiled-header/PrecompiledHeader.cmake )
add_precompiled_header( ${target} ${header} FORCEINCLUDE SOURCE_CXX ${source} )
Another module, called Cotire, creates the header file to be precompiled (no need to manually write StdAfx.h) and speeds up builds in other ways - see here.
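For what it's worth, CMake 3.16 and later can also do this natively, with no third-party module (mytarget and stdafx.h are placeholders):
# requires CMake >= 3.16
target_precompile_headers(mytarget PRIVATE stdafx.h)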
I've done both option 2 (#ifdef) and option 4 (PCH for gcc) for cross-platform code, with no issues.
I find gcc compiles much faster than VS, so the precompiled headers are generally not that important, unless you are referencing some huge header file.
I have a situation where #2 in particular didn't work for me (there are numerous VS build configs where an #ifdef around #include "stdafx.h" does not work). Other solutions were suboptimal because the files themselves were cross-project as well as cross-platform. I did not want to force preprocessor macros to be set, or force Linux or even Windows builds to use (or not use) PCH, so...
What I did, given a file named notificationEngine.cpp for example, was remove the #include "stdafx.h" line entirely and create a new file in the same directory called pchNotificationEngine.cpp, with the following contents:
#include "stdafx.h"
#include "notificationEngine.cpp"
Any given project can just include the correct version of the file. This is admittedly probably not the best option for .cpp files that are only used by a single project.
I am working on a large C++ project in Visual Studio 2008, and there are a lot of files with unnecessary #include directives. Sometimes the #includes are just artifacts and everything will compile fine with them removed, and in other cases classes could be forward declared and the #include could be moved to the .cpp file. Are there any good tools for detecting both of these cases?
While it won't reveal unneeded include files, Visual Studio has a setting, /showIncludes (right-click a .cpp file, Properties -> C/C++ -> Advanced), that will output a tree of all included files at compile time. This can help in identifying files that shouldn't need to be included.
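The same switch can be passed on the command line; each included file produces a "Note" line, indented by nesting depth (file names here are hypothetical):
> cl /c /showIncludes foo.cpp
Note: including file: foo.h
Note: including file:  bar.h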
You can also take a look at the pimpl idiom to let you get away with fewer header file dependencies to make it easier to see the cruft that you can remove.
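A bare-bones pimpl sketch (heavy_library.h stands in for whatever expensive dependency you want to hide):
// widget.h - no heavy includes needed here
class Widget
{
public:
    Widget();
    ~Widget();
private:
    class Impl;   // defined only in widget.cpp
    Impl* m_impl; // callers never see the headers Impl depends on
};

// widget.cpp - the heavy headers are confined to this one translation unit
#include "widget.h"
#include <heavy_library.h>
class Widget::Impl { /* uses heavy_library types */ };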
PC-Lint works quite well for this, and it finds all sorts of other goofy problems for you too. It has command-line options that can be used to create External Tools in Visual Studio, but I've found that the Visual Lint add-in is easier to work with. Even the free version of Visual Lint helps. But give PC-Lint a shot. Configuring it so it doesn't give you too many warnings takes a bit of time, but you'll be amazed at what it turns up.
There's a new Clang-based tool, include-what-you-use, that aims to do this.
!!DISCLAIMER!! I work on a commercial static analysis tool (not PC Lint). !!DISCLAIMER!!
There are several issues with a simple non-parsing approach:
1) Overload Sets:
It's possible that an overloaded function has declarations that come from different files. It might be that removing one header file results in a different overload being chosen rather than a compile error! The result will be a silent change in semantics that may be very difficult to track down afterwards.
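A contrived sketch of how that can happen (all names hypothetical):
// log_double.h
void log_value(double);

// log_int.h
void log_value(int);

// user.cpp
#include "log_double.h"
#include "log_int.h" // looks removable: removing it still compiles...
void f() { log_value(42); } // ...but this then silently calls log_value(double)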
2) Template specializations:
Similar to the overload example, if you have partial or explicit specializations for a template you want them all to be visible when the template is used. It might be that specializations for the primary template are in different header files. Removing the header with the specialization will not cause a compile error, but may result in undefined behaviour if that specialization would have been selected. (See: Visibility of template specialization of C++ function)
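A sketch of the same hazard with a specialization (names hypothetical):
// traits.h - primary template
template <typename T> struct Traits { static const bool fast = false; };

// traits_int.h - explicit specialization
template <> struct Traits<int> { static const bool fast = true; };

// user.cpp: removing #include "traits_int.h" still compiles, but
// Traits<int>::fast silently becomes false; and if other translation
// units still see the specialization, the program has undefined behaviour.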
As pointed out by 'msalters', performing a full analysis of the code also allows for analysis of class usage. By checking how a class is used through a specific path of files, it is possible that the definition of the class (and therefore all of its dependencies) can be removed completely, or at least moved to a level closer to the main source in the include tree.
I don't know of any such tools, and I have thought about writing one in the past, but it turns out that this is a difficult problem to solve.
Say your source file includes a.h and b.h; a.h contains #define USE_FEATURE_X and b.h uses #ifdef USE_FEATURE_X. If #include "a.h" is commented out, your file may still compile, but may not do what you expect. Detecting this programmatically is non-trivial.
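Concretely:
// a.h
#define USE_FEATURE_X

// b.h
#ifdef USE_FEATURE_X
#define BUFFER_SIZE 4096
#else
#define BUFFER_SIZE 512
#endif
Commenting out #include "a.h" still compiles cleanly, but BUFFER_SIZE silently drops to 512.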
Whatever tool does this would need to know your build environment as well. If a.h looks like:
#if defined( WINNT )
#define USE_FEATURE_X
#endif
Then USE_FEATURE_X is only defined if WINNT is defined, so the tool would need to know what directives are generated by the compiler itself as well as which ones are specified in the compile command rather than in a header file.
Like Timmermans, I'm not familiar with any tools for this. But I have known programmers who wrote a Perl (or Python) script to try commenting out each include line one at a time and then compile each file.
It appears that Eric Raymond now has a tool for this.
Google's cpplint.py has an "include what you use" rule (among many others), but as far as I can tell, no "include only what you use." Even so, it can be useful.
If you're interested in this topic in general, you might want to check out Lakos' Large Scale C++ Software Design. It's a bit dated, but goes into lots of "physical design" issues like finding the absolute minimum of headers that need to be included. I haven't really seen this sort of thing discussed anywhere else.
Give Include Manager a try. It integrates easily in Visual Studio and visualizes your include paths which helps you to find unnecessary stuff.
Internally it uses Graphviz, but there are many more cool features. And although it is a commercial product, it has a very low price.
You can build an include graph using C/C++ Include File Dependencies Watcher, and find unneeded includes visually.
If your header files generally start with
#ifndef __SOMEHEADER_H__
#define __SOMEHEADER_H__
// header contents
#endif
(as opposed to using #pragma once), you could change that to:
#ifndef __SOMEHEADER_H__
#define __SOMEHEADER_H__
// header contents
#else
#pragma message("Someheader.h superfluously included")
#endif
And since the compiler outputs the name of the cpp file being compiled, that would let you know at least which cpp file is causing the header to be brought in multiple times.
PC-Lint can indeed do this. One easy way to do this is to configure it to detect just unused include files and ignore all other issues. This is pretty straightforward - to enable just message 766 ("Header file not used in module"), just include the options -w0 +e766 on the command line.
The same approach can also be used with related messages such as 964 ("Header file not directly used in module") and 966 ("Indirectly included header file not used in module").
FWIW I wrote about this in more detail in a blog post last week at http://www.riverblade.co.uk/blog.php?archive=2008_09_01_archive.xml#3575027665614976318.
Adding one or both of the following #defines will exclude often-unnecessary header files and may substantially improve compile times, especially if the code is not using many Windows API functions. They must appear before <windows.h> is included:
#define WIN32_LEAN_AND_MEAN
#define VC_EXTRALEAN
#include <windows.h>
See http://support.microsoft.com/kb/166474
If you are looking to remove unnecessary #include files in order to decrease build times, your time and money might be better spent parallelizing your build process using cl.exe /MP, make -j, Xoreax IncrediBuild, distcc/icecream, etc.
Of course, if you already have a parallel build process and you're still trying to speed it up, then by all means clean up your #include directives and remove those unnecessary dependencies.
Start with each include file and ensure that it includes only what is necessary to compile itself. Any include files that are then missing from the C++ files can be added to the C++ files themselves.
For each include and source file, comment out each include file one at a time and see if it compiles.
It is also a good idea to sort the include files alphabetically, and where this is not possible, add a comment.
If you aren't already, using a precompiled header to include everything that you're not going to change (platform headers, external SDK headers, or static, already-completed pieces of your project) will make a huge difference in build times.
http://msdn.microsoft.com/en-us/library/szfdksca(VS.71).aspx
Also, although it may be too late for your project, organizing your project into sections and not lumping all local headers to one big main header is a good practice, although it takes a little extra work.
If you work with Eclipse CDT, you could try out http://includator.com to optimize your include structure. However, Includator might not know enough about VC++'s predefined includes, and setting up CDT to use VC++ with the correct includes is not built into CDT yet.
The latest Jetbrains IDE, CLion, automatically shows (in gray) the includes that are not used in the current file.
It is also possible to have the list of all the unused includes (and also functions, methods, etc...) from the IDE.
Some of the existing answers state that it's hard. That's indeed true, because you need a full compiler to detect the cases in which a forward declaration would be appropriate. You can't parse C++ without knowing what the symbols mean; the grammar is simply too ambiguous for that. You must know whether a certain name names a class (which could be forward-declared) or a variable (which can't). Also, you need to be namespace-aware.
Maybe a little late, but I once found a WebKit Perl script that did just what you wanted. It'll need some adapting, I believe (I'm not well versed in Perl), but it should do the trick:
http://trac.webkit.org/browser/branches/old/safari-3-2-branch/WebKitTools/Scripts/find-extra-includes
(this is an old branch because trunk doesn't have the file anymore)
If there's a particular header that you think isn't needed anymore (say string.h), you can comment out that include, then put this below all the includes:
#ifdef _STRING_H_
# error string.h is included indirectly
#endif
Of course, your interface headers might use a different #define convention to record their inclusion in preprocessor memory. Or no convention at all, in which case this approach won't work.
Then rebuild. There are three possibilities:
It builds OK. string.h wasn't compile-critical, and the include for it can be removed.
The #error trips. string.h was included indirectly somehow. You still don't know whether string.h is required; if it is, you should #include it directly (see below).
You get some other compilation error. string.h was needed and isn't being included indirectly, so the include was correct to begin with.
Note that depending on indirect inclusion when your .h or .c directly uses
another .h is almost certainly a bug: you are in effect promising that your
code will only require that header as long as some other header you're using
requires it, which probably isn't what you meant.
The caveats mentioned in other answers, about headers that modify behavior rather than declaring things whose absence causes build failures, apply here as well.