Why does stdafx.h work the way it does?

As usual, when my brain's messing with something I can't figure out myself, I come to you guys for help :)
This time I've been wondering why stdafx.h works the way it does. To my understanding, it does two things:
Includes standard headers which we might (?) use and which are rarely changed.
Works as a compiler bookmark for when code is no longer precompiled.
Now, these two things seem like very different tasks to me, and I wonder why they didn't use two separate steps to take care of them. To me it seems reasonable to have a #pragma command do the bookmarking and, optionally, a header file along the lines of windows.h to do the including of often-used headers... Which brings me to my next point: why are we forced to include often-used headers through stdafx.h? Personally, I'm not aware of any often-used headers that I'm not already doing my own includes for - but maybe these headers are necessary for .dll generation?
Thx in advance

stdafx.h is ONE way of having Visual Studio do precompiled headers. It's a simple-to-use, easy-to-generate approach that works well for smaller apps, but it can cause problems for larger, more complex apps: because it effectively encourages the use of a single header file, it can create coupling across components that are otherwise independent. If it's used just for system headers it tends to be OK, but as a project grows in size and complexity it's tempting to throw other headers in there, and then suddenly changing any header file results in the recompilation of everything in the project.
See here: Is there a way to use pre-compiled headers in VC++ without requiring stdafx.h? for details of an alternative approach.

You are not forced to use "stdafx.h". You can turn off the Use Precompiled Headers setting in the project properties (or when creating the project) and you won't need stdafx.h anymore.
The compiler uses it as a clue so it can precompile the most-used headers separately into a .pch file, reducing compilation time (they don't have to be compiled every time).
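For illustration, a hedged sketch of what such a header might contain (the specific headers are assumptions; the rule of thumb is big, used everywhere, and hardly ever edited):

// stdafx.h -- illustrative sketch of a typical precompiled header
#pragma once
#include <windows.h>   // huge system header that essentially never changes
#include <vector>      // standard library headers used throughout the project
#include <string>
#include <map>

The matching stdafx.cpp normally contains nothing but #include "stdafx.h"; compiling that one file with /Yc is what actually produces the .pch, which the other source files then reuse via /Yu.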

It keeps the compile time down, as the stuff in it is always compiled first (see details in the quote below):
stdafx.h is a file that describes both standard system and project-specific include files that are used frequently but hardly ever changed. Compatible compilers will pre-compile this file to reduce overall compile times. Visual C++ will not compile anything before the #include "stdafx.h" in the source file, unless the compile option /Yu'stdafx.h' is unchecked (by default); it assumes all code in the source up to and including that line is already compiled.

It will help reduce long compilations.

Related

Visual Studio 2010, Intellisense and PCH: what are the alternatives to ugly stdafx.h?

I recently switched to Visual Studio 2010, and for Intellisense not to take half a minute to show up when using Boost libraries, Microsoft's suggestion seems to be to use precompiled headers.
Except that I never used them before (except when forced to by Ugly ATL Wizards (TM)), so I searched around to figure out how they work.
Basically, the Big Centralized stdafx.h approach seems plain wrong. I never want to include (even cheaply) a whole bunch of header files in all my sources. Since I don't use windows libraries (I make C++/CLI higher level wrappers, then use .NET for talking to the outside world), I don't have "a whole truckload of non-changing enormous headers". Just boost and standard library headers scattered around.
There is an interesting approach to this problem, but I can't quite figure out how to make it work. It seems that each source file must be compiled twice (please correct me if I'm wrong): once with /Yc and once with /Yu. This adds a burden on the developer, who must manually tweak the build system.
I was hoping to find some "automatically generate one precompiled header for each source file" trick, or at least some "best practices", but most people seem happy with including the world into stdafx.h.
What are the options available to me to use precompiled headers on a per-source-file basis? I don't really care about build times (as long as they don't skyrocket), I just want Intellisense to work fast.
For starters, you are reading the article wrong. Every file is NOT compiled twice. The file stdafx.cpp gets compiled once with /Yc (c, for create) before anything else, and then every other file in your project gets compiled once with /Yu (u, for use), importing the compiler state that was saved when stdafx.cpp was compiled.
Secondly, the article is 7 years old and is talking about VC++ 6, so you should start off distrusting it. But even assuming the information in it still applies to VC++ 2008 or 2010, it seems like bad advice. The approach it recommends, using #pragma hdrstop, is a solution looking for a problem. If you have headers that contain things you don't want in every file, then they simply shouldn't go in your pre-compiled header.
Your problem basically seems to be that Intellisense is slow for Boost in VS2010? I don't have a direct solution for that, but could Visual Assist X be an option for you? I have used it in various versions of Visual Studio now, with great pleasure. It's not a direct solution, but it might work for you.
Precompiled headers aren't too bad if you use them properly.
Don't use them as a replacement for proper and precise #includes, but as a way to speed things up. Achieve this by making the precompiled header do nothing in release builds, only speeding stuff up in debug.
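A minimal sketch of that idea, assuming the usual stdafx.h setup and the _DEBUG macro that MSVC defines for debug configurations (the headers listed are just examples):

// stdafx.h -- only does real work in debug builds (sketch)
#pragma once
#ifdef _DEBUG
// Heavy, rarely-changing headers get precompiled to speed up debug builds.
#include <vector>
#include <string>
#include <map>
#include <algorithm>
#endif
// In release builds this header is effectively empty, so every .cpp still
// has to #include exactly what it uses, keeping the includes precise.

The trade-off is that a file which forgets one of its own includes will build in debug and break in release, so an occasional release build is needed to keep things honest.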
You are wrong; each file is only compiled once. You have one .cpp file that is compiled with /Yc and the rest are compiled with /Yu. The /Yc file, which is stdafx.cpp by default, contains a single line, #include "myMainHeader.h" (I've changed the name from the default). All other .cpp files must start with #include "myMainHeader.h". When your /Yc file is compiled, the entire internal state of the compiler is saved, and that saved state is loaded when each of your other files is compiled. That is why each file must start by including the PCH header: so that the /Yu option doesn't change the result of compilation, only the time it takes. Xcode does not make this requirement and will use a PCH regardless of whether your .cpp file starts with the right include directive. I have used libraries that relied on this and could not be built without PCH.
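To make that layout concrete, here's a hedged sketch using the answer's placeholder name myMainHeader.h (file names and contents are illustrative, not defaults):

// myMainHeader.h -- the header that gets precompiled
#pragma once
#include <vector>
#include <string>

// stdafx.cpp -- compiled with /Yc"myMainHeader.h"; its only job is to
// include the header once so the compiler can save its state to the .pch.
#include "myMainHeader.h"

// anyother.cpp -- compiled with /Yu"myMainHeader.h"; the include must be
// the very first thing in the file, since everything before it is skipped.
#include "myMainHeader.h"
#include "anyother.h"
// ... normal code follows ...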

Sharing Pre-compiled Headers efficiently

I have a framework which is being used by several projects (which includes several samples to show how the framework works). The framework has components such as the core, graphics, physics, gui etc. Each one is a separate library. There are several configurations as well.
A main solution file compiles the complete project with all the possible configurations so that the projects can use the libraries. Since the framework is rarely recompiled, especially by someone (including me) working on a project that utilizes the framework, it makes sense to pre-compile the many headers.
Initially I had each project/sample have its own pre-compiled header used for the whole project. Each time I would have to rebuild the same PCH (for example, Debug), so I decided that a shared PCH would reduce the redundant PCH compilation. So far so good. I have a project that compiles the PCH along with the libraries. All the subsequent projects/samples are now using the same PCH. This has worked wonderfully.
The only problem is that I have seen an increase in file size. This is not a roadblock: if a project that uses the framework is intended to be released, it can sever itself from the shared PCH and make its own. I have done this for the sake of rapid development (I have actually created a tool which creates the VS project files and source files for a new project/sample ready to be built, as well as facilitating the upgrade of a previous project that was using an older version of the framework).
Anyway, (I am presuming that) the increase in file size is because the independent VS project file that is creating the shared PCH is including all the headers from all the libraries. My question is whether I can use conditional compilation (#ifndef) to reduce the size of the final executable? Or maybe share multiple PCH files somehow (as far as I know that is not possible, but I may be wrong). If I am not making sense, please say so (in kind words :) ) as my knowledge of PCH files is very limited.
Thanks!
Note: To reiterate and make it clear: so far, I have one solution file that compiles all the libraries, including the shared PCH. Now if I recompile all the samples and projects, they compile in a couple of seconds each at most. Before, each project would recreate a PCH file. Also, initially I wanted a PCH for each library, but then I found out that a source file cannot use multiple PCH files, so this option was not feasible. Another option is to compile all possible combinations of PCH files, but that is too time-consuming, cumbersome and error-prone.
It sounds like the size problem is coming from using headers you don't actually need, but that it still makes sense to use these headers when developing because of the faster turn around.
On using #ifndefs: precompilation is crude. You lose the ability to share the precompilation work at the point where there is a difference. If you use #ifndefs to make different variants of what you include, i.e. if you have
#ifndef FOO
Then the precompiled header must stop before the point where FOO is defined differently in two files that use that precompiled header. So #ifndef is not going to solve the problem. The net result is that FOO must be the same, or you're back to separate pch files for the different projects. Neither solves things.
As to sharing multiple .pch files: A fundamental limitation of .pch files is that each .obj can only use one. Of course .pch files can have arbitrary combinations of headers. You could have a .pch for core+graphics, a .pch for core+physics, core+ai etc. This would work just dandy if none of the source files needed to 'talk' to more than core+one-module at a time. That does not sound realistic to me. Such a scheme and variants on it sound like a lot of restructuring work for no real gain. You don't want to be building zillions of combinations and keeping track of them all. It's possible, but it is not going to save you time.
In my view you're doing exactly the right thing by sacrificing executable size for fast turn-around during development/debugging, and then having a slower but leaner way of building for the actual release.
In the past I've found that you quite quickly run into diminishing returns as you put more into the precompiled headers, so if you're trying to put more in to make it more useful in a larger number of projects, it will hit a point where it slows things down. On our projects the PCH files take longer than most source files to compile, but still only a few seconds maximum. I would suggest making the PCH files specific to each project you are using. You are right in saying that a source file can only refer to a single PCH file, but one way of getting around this is to use the 'Force Include' option (in the Advanced tab, I think) to ensure that all files include the PCH file for that project.

Using precompiled headers, header file changes aren't picked up - expected?

Visual Studio C++ lib project
Project is set to use precompiled headers
stdafx.cpp is set to create precompiled header
I have a header file, MyClass.h
If I build, then make a change to MyClass.h that should fail to compile, compile still succeeds.
If I do a rebuild, or if I make a change to a cpp file that includes "MyClass.h", then the compile fails as expected.
Is this expected because I'm using precompiled headers? Is there any way to fix it so a second build picks up header changes, without turning off precompiled headers?
Make sure that the header file you are altering is referenced by your project in Solution Explorer. If this is the case, the full build should trigger when it is changed.
In the project properties, set "Enable Minimal Rebuild" to No
Are you sure that stdafx.cpp includes the header in question?
Visual Studio can often get pretty damn stupid over changes. It can go either way, but usually it goes the way you're running into.
I've had it catch onto changes in a header used by one file, but not the fact that it's used in others. So it compiles the one but not the others. Then I get really weird linker errors.
It could of course still be your own damn fault, but VS is, in fact, notoriously stupid. Sometimes a complete rebuild will fix the issue permanently, until next time. Other times you have somehow hosed the project file and hopefully you can get back to the original (like a source server revert). "Undo" most usually does not undo this kind of fubar.
I've noticed this several times even when the header isn't one that's in the precompiled header. It seems to be somewhat random, but one common correlation is that the header is full of templates. VS is notoriously bad with templates.

Visual C++ 'Force Includes' option

I have just come across a Visual C++ option that allows you to force file(s) to be included - this came about when I was looking at some code that was missing a #include "StdAfx.h" in each .cpp file, but was actually including it via this option.
The option can be found on the Advanced C/C++ Configuration Properties page and equates to the /FI compiler option.
This option could prove really useful but before I rush off and start using it I thought I'd ask if there are any gotchas?
I would say the opposite of litb's answer below if you're using precompiled headers. If you use "stdafx.h" as your precompiled header and have code like this:
#include "afile.h"
#include "stdafx.h"
then you'll be spending an age trying to figure out why "afile.h" isn't being included. When using precompiled headers, all the includes and #defines are ignored up to the "stdafx.h". So, if you force include the "stdafx.h" then the above will never happen and you'll get a project that uses the precompiled option efficiently.
As for litb's comment about finding macros, good IDEs usually have an option to jump to the definition of a symbol, whether it be a #define, function, class, etc.
I would discourage use of /FI (MSDN says it's called /FI; not sure whether I looked at the right page, though), simply because people reading the files - or you yourself - won't notice that a header is magically included anyway.
You can be sure this will cost someone a lot of debugging time when they try to figure out where specific macros come from, even though there are no #include lines at the top of the file.
Force includes is also helpful for automatically generated source files. Our system uses a tool that generates many source files but they don't include our pre-compiled header file. With "force includes" compilation of these files is now faster.
Optionally, it would be possible to write a script to insert the #include line in those files after generation and before compilation. But why go to that trouble?
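As a hedged sketch of what that looks like on the compiler command line (file names are made up; /FI injects the named header at the very top of the translation unit):

// Invocation for a generated file that never mentions stdafx.h itself:
//   cl /c /Yu"stdafx.h" /FI"stdafx.h" generated_lexer.cpp
// The effect is as if generated_lexer.cpp began with
//   #include "stdafx.h"
// so it can use the project's precompiled header without the code
// generator having to be modified.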
I'd side with litb: don't do the forced includes. Having the code be explicit makes it easier to know what's going on for a new user. It also makes it easier to know what's going on if you ever need to port the code to a new platform.
Note that if the code uses templates, Visual Studio usually can't track down the definitions correctly. Perhaps 2010 will be better, but even VS 2008 is problematic on this front.
I wouldn't use it that often, but it has its uses. I have used it to add a header that suppressed some warnings to all cpp files, so that I could turn on /W4 or /Wall for the project without having to edit all of the cpp files to include the warning-suppression header first. Once everything was working I DID go back and edit all the cpp files, but for a proof of concept /FI was useful.
Likewise you can use it to force a precompiled header into cpp files in some build configurations but not in all (in case you want to have a build configuration that DOESN'T use precompiled headers and that makes sure that each cpp only includes exactly what it needs). However, using #pragma hdrstop is, IMHO, a better way to achieve this.
I've talked about all of this on my blog here: http://www.lenholgate.com/blog/2004/07/fi-stlport-precompiled-headers-warning-level-4-and-pragma-hdrstop.html in a little more detail.
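For reference, a hedged sketch of the #pragma hdrstop approach mentioned above; it relies on the documented MSVC behaviour that, when /Yc or /Yu is given without a header name, the compiler saves or restores the precompiled state at the point of the pragma (headers shown are illustrative):

// somefile.cpp
#include <windows.h>   // slow, rarely-changing headers go above the stop
#include <vector>

#pragma hdrstop        // precompiled state ends here

#include "widget.h"    // fast-changing project headers go below
// ... normal code follows ...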
Save this function for when something weird comes up - like if you want to include a header in a generated source file. Even then there are likely better ways.

C++ include file browser

I have a very large project with tons of convoluted header files that all include each other. There's also a massive number of third-party libraries that it depends on. I'm trying to straighten out the mess, but I'm having some trouble, since a lot of the time I'll remove one #include directive only to find that the stuff it was including is still included through one of the other files. Is there any tool that can help me understand this? I'd really like to be able to click on a .h file and ask it which CPP files it's included in (directly or indirectly), and the paths through which it is included, and likewise click a cpp file and ask it which .h files are included (directly and indirectly). I've never heard of a tool that does this, and a bit of quick googling hasn't turned anything up, but maybe I don't know what to search for.
http://www.profactor.co.uk/includemanager.php
For VS2003 there is the /showIncludes flag (in C/C++/Advanced properties). This will print all the headers each .cpp file includes, and what they include, so you can go from there.
I'm sure the same option is in the same place for VS2008.
If you use GCC compilers, try this:
g++ -M abc.cpp
It will show all the include dependencies for the file abc.cpp.
Your situation reminds me of my own. I have a bunch of headers that I have created that I use as a library instead of bothering with a DLL.
Of course the cyclic includes can become troublesome, so I find that a tool like Visual Assist X helps with this sort of thing. It has a function that can find references to stuff, so that you can easily weed out where something is being defined/declared/included, etc. It also has a lot of other useful features, so I consider it to be pretty useful.
There are probably other tools/plugins that have a referencing function, but usually as one feature among the other refactoring and productivity functions of the utility.
HTH
It's pretty tedious, but you can binary-search your way to where an #include happens by using #error (and #pragma message) to narrow down which include line is pulling in the third party. I've done this in the case of a single file I was trying to track down, but it sounds like your problem is bigger so probably one of the tools others have mentioned would be more effective.
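A hedged sketch of that binary-search trick (the offending header's name and its include-guard macro are made up for illustration):

#include "a.h"
#include "b.h"
// If the unwanted third-party header has a known include guard, this
// fails the build as soon as some include above has pulled it in:
#ifdef EVIL_THIRD_PARTY_H
#error "evil_third_party.h is already included at this point"
#endif
#include "c.h"
#include "d.h"
// Move the check up or down the include list until the error appears or
// disappears; the include directly above it is the one dragging it in.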