I wonder why, with modern C++ compilers
like gcc, clang, or cl.exe (MS), and with modern build
tools like ninja, waf, scons, etc., we still
have to manage precompiled headers by hand.
Why not, for example, have the build create a database for headers,
with keys like (modification date, defines in effect in addition to the system ones, compiler version) mapping to data (the AST), and use it during the project build?
At first sight this database should reduce compilation time
and remove the need for developers to manage precompiled headers manually,
but for some reason this is not implemented, and I wonder why.
Too much effort for too little benefit?
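To make the idea concrete, here is a minimal sketch of what such a cache key might look like. The names are entirely hypothetical; no mainstream compiler implements anything like this today:

    // Hypothetical sketch only; nothing like this exists in gcc/clang/cl.exe.
    #include <cstdint>
    #include <string>
    #include <vector>

    struct HeaderCacheKey {
        std::string              header_path;       // which header was parsed
        std::int64_t             mtime;             // modification date
        std::vector<std::string> defines;           // macros in effect, besides system ones
        std::string              compiler_version;  // e.g. "g++ 12.2.0"
    };

    // The build would keep a map from HeaderCacheKey to a serialized AST
    // and reuse the AST across translation units whenever the key matches.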
Update:
I found a mail on the clang dev list about the NetBeans IDE (for C++):
What we had to make as a tradeoff: parse headers a "minimal amount of
times". So, we introduced something like "chained PCHs" a long time ago.
It allows reusing what was parsed in the context of one translation
unit for parsing other translation units, if the "controlling macro"
check for the header says it is OK to reuse (so one header sometimes
has several versions of PCH state).
But this is about the IDE parser, not the compiler; as far as I know, most IDEs use a lightweight C++ parser, and I could not find a corresponding feature in the Sun C++ compiler.
Related
I have recently switched a project to making use of precompiled headers due to compilation becoming slow. Before doing so, it built without any significant warnings.
However, after adding all the Qt headers I use in my project (of which there are 40-50) to the stdafx.h file, when stdafx.h gets built during the solution build I receive a huge number (1000s) of warnings relating to Qt functions. In particular, I get a lot of "Warning C4251", e.g.
1>c:\Qt\5.9\msvc2015_64\include\QtGui/qrawfont.h(154): warning C4251: 'QRawFont::d': class 'QExplicitlySharedDataPointer' needs to have dll-interface to be used by clients of class 'QRawFont' (compiling source file TitleBar.cpp)
The other two common warning types (albeit far fewer) are C4800 and C4244.
I am using Qt 5.9 64-bit, on a Windows 10 box running VS2015.
I can obviously just disable them, but I don't really like to do such a thing without understanding why this is happening.
A lot of cross-platform code creates warnings, and it's not always possible to avoid them all: if a parameter is unused, one compiler might warn; if it is artificially used to silence that warning, another might warn about unreachable code. Then MS warns about things like the C standard string library, which is often impractical to avoid. You have to remember that MS and Apple have very mixed feelings about something like Qt. The last thing they want is to sell undifferentiated platforms for running Qt applications on. So there's not much motivation to provide appropriate warnings.
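If you do decide to silence them, one common approach (a sketch, assuming MSVC and that the warnings originate inside the Qt headers themselves) is to disable the warnings only around the third-party includes in stdafx.h, so your own code still gets checked:

    // stdafx.h -- silence third-party warnings around the Qt includes only
    #pragma warning(push)
    #pragma warning(disable: 4251) // 'X' needs to have dll-interface...
    #pragma warning(disable: 4800) // forcing value to 'bool'
    #pragma warning(disable: 4244) // conversion, possible loss of data
    #include <QtGui/qrawfont.h>
    // ... the other 40-50 Qt headers ...
    #pragma warning(pop)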
PROBLEM SUMMARY
We took a crack at using cotire, the Compile-Time Reducer, as our precompiled header system due to the extremely long compile times caused by usage of the Boost C++ template library. We are getting poor to dangerous results-- the precompiled headers seem to be constantly rebuilt, and are occasionally masking build problems. Specifically, back-to-back builds result in differing hashes of the actual cotire precompiled headers themselves, and, when attempting incremental builds, the precompiled headers are rebuilt every time, even when no header material has changed.
BACKGROUND
The project is a Linux CMake build using g++ 6.3.1, which produces, as its artifacts, an installable shared-library and several executables. Possibly of note, we have no interest in using unity builds, due to constant requirements for rapid-iteration development. However, the project is very large, hence the interest in cotire.
It is worth noting that, to avoid unexpected interactions while troubleshooting this issue, we have disabled ccache, the compiler cache. (Our intent is to eventually enable ccache if and only if we get consistent, expected results from cotire, or after we give up and remove cotire.)
The project uses C++03 (though building with 11 did not change our results for purposes of this question) and relies extensively on the Boost C++ library. Specifically, we make constant use of Boost Signal-Slot, boost::bind, boost::function, and quite a few of the iteration capabilities. Boost implements all of these via extensive template meta-programming and variant self-inclusion.
It is possible, though not entirely straightforward, to "blacklist" header files from cotire, excluding them from the generated precompiled headers. We attempted to blacklist various subdirectories within Boost itself (Boost includes some 900 different header files, so we mostly restricted ourselves to its immediate subdirectories). This seemed to produce improved results-- blacklisting certain parts of Boost resulted in back-to-back clean builds that produced matching precompiled header hashes.
Unfortunately, after a more significant delay-- perhaps ten minutes-- a third attempt to do a clean build resulted in a different hash for the precompiled header.
At this point, our working theory was that various special preprocessor symbols such as __TIME__ and __DATE__ were somehow getting mangled into cotire's input. Searching for these in Boost's headers does indeed produce a number of hits in wave, spirit, etc. Presumably this causes cotire to believe that the headers have changed in some way, because it attempts to rebuild the entire precompiled header with every build (though not with every compilation unit; we tentatively believe we have it integrated into the build process correctly).
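To illustrate the theory (a generic sketch, not cotire-specific): any header that expands these symbols produces different preprocessed output on every run, so any content hash of the precompiled result must differ between builds.

    // stamp.h -- anything like this makes the preprocessed output,
    // and therefore any hash computed over it, change from build to build.
    static const char build_stamp[] = __DATE__ " " __TIME__;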
QUESTION
Has anyone successfully used cotire with gcc/g++ and Boost? Were any unusual steps required, as opposed to using cotire in a project without Boost?
We are interested in cotire specifically; results from, for example, Visual Studio's precompiled header system might be interesting but are unlikely to be helpful. However, if you are troubleshooting similar issues and wish to provide your observations here, please feel free to do so.
I have been learning Chapel with small programs, and they are working great. But as a program becomes longer, the compilation time also becomes longer. So I looked for a way to compile multiple files one by one, but without success yet. Searching the internet, I found this page and this one, and the latter says
All of these incremental compilation features are enabled with the new --incremental flag in the Chapel compiler, which will be made available in Chapel 1.14.0 release.
Although the Chapel compiler on my computer accepts this option, it does not seem to generate any *.o (or *.a?) when compiling a file containing only a procedure (i.e. no main()). Is this because the above project is experimental...? In that case, can we expect this feature to be included in some future version of Chapel?
(Or does "incremental compilation" above not mean what it usually does for compilers like GCC?)
My environment: Chapel 1.14.0 installed via Homebrew on Mac OS X 10.11.6.
The Chapel implementation only fully compiles code that is used through the execution of the main() routine. As a starting foray, the incremental compilation project tried to minimize the difference between executables produced by normal compilation and those compiled with the --incremental flag. This was to ensure that the user would not encounter a different set of errors when developing in one mode than in the other. As a consequence, a file containing only a procedure would not be compiled until a compilation attempt in which that file/procedure was used.
The project you reference was an excellent first step, but it exposed many issues to the team that we had not previously considered (including the one you have raised). We're still discussing the future direction of this feature, so it isn't entirely clear what it will entail. One possible expansion is "separate compilation", where code could be compiled into a .o or .a that could be linked into other programs. Again, though, this is still under discussion.
If you have thoughts on how this feature should develop, we would love to hear them via an issue on our GitHub page, or via our developers or users mailing lists.
Since C++ uses headers, for now, sometimes one header includes another and you fail to notice it; as a result, the code may compile with one compiler and not with another, or maybe a feature is implemented in one compiler but not the other. So I was wondering: is there a way to have Visual Studio check whether the project will compile with both the VS compiler and g++? Or do I just have to recompile with both compilers manually every time? And if I have to recompile with both compilers, how can I automate that process from Visual Studio so I don't have to do it by hand?
The development cycle of gcc is faster than that of MS Visual Studio. So asking Visual Studio "will my code be compiled correctly by g++?" is meaningless - it will never know which features and bugfixes the latest version of gcc has.
Also, there is no point in asking such a question at all - if you want to know "what will happen if I do X", just do X!
What you really want to know, I guess, is something like "Are my #include directives compatible with the C++ Standard?". MS Visual Studio is just an implementation; it's not a portability-checking tool. It does its best to compile the code, and to do it correctly, which is a great task by itself (C++ being so hard to compile).
gcc is yet another implementation of C++, with its own headers. Actually, from my experience, the mingw C++ headers are more "minimalistic": they try not to include one another when possible. My guess is they try to minimize compilation time for systems that don't use precompiled headers, while MS Visual Studio regards precompiled headers as an essential feature.
So better use gcc/mingw as your "standard compliance" check.
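As a concrete illustration of the transitive-include trap the question describes (a sketch; whether it compiles depends on which headers your standard library happens to pull in transitively):

    // widget.h -- relies on a transitive include and may not be portable
    #include <iostream>  // one implementation's <iostream> may drag in <string>...

    struct Widget {
        std::string name;  // ...so this can compile with one compiler and fail
                           // with another; the fix is to #include <string> here.
    };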
VS2015 supports the clang compiler, which is generally the most compliant compiler of the three. You could compile your code with clang.
It's important to note that a compiler and a standard library implementation are not the same thing; you can mix and match compilers with the standard libraries that ship with different toolchains. So compiling with clang will not pick up issues caused by variance in standard library implementations.
To do that, you would have to supply the headers for, and link against, the standard library implementation you would like to test. Visual Studio by default links against its own runtime and includes the MSVC runtime headers; you would have to change your project configuration to achieve this, but it should be possible. It would be a lot less hassle to simply compile with the target platform's own toolchain, however.
Pre-compiled headers seem like they can save a lot of time in large projects, but also seem to be a pain in the ass that has some gotchas.
What are the pros & cons of using pre-compiled headers, and specifically as it pertains to using them in a Gnu/gcc/Linux environment?
The only potential benefit to precompiled headers is that if your builds are too slow, precompiled headers might speed them up. Potential cons:
More Makefile dependencies to get right; if they are wrong, you build the wrong thing fast. Not good.
In principle, not every header can be precompiled. (Think about putting some #define's before a #include; see the sketch below.) So which cases does gcc actually get right? How much do you want to trust this bleeding-edge feature?
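For example (a sketch with made-up names), two translation units that configure the same header differently cannot share one precompiled version of it:

    // a.cpp
    #define FOO_USE_LOGGING 1   // changes how foo.h expands
    #include "foo.h"

    // b.cpp
    #include "foo.h"            // expands differently than in a.cpp

    // A single precompiled version of foo.h cannot serve both files;
    // the compiler has to fall back to a full parse for at least one.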
If your builds are fast enough, there is no reason to use precompiled headers. If your builds are too slow, I'd consider
Buying faster hardware, which is cheap compared to salaries
Using a tool like AT&T nmake or like ccache (Dirk is right on), both of which use trustworthy techniques to avoid recompilations.
I can't speak to GNU/gcc/Linux, but I've dealt with pre-compiled headers in VS2005:
Pros:
Saves compile time when you have large headers that lots of modules include.
Works well on headers (say, from a third party) that change very infrequently.
Cons:
If you use them for headers that change a lot, it can increase compile time.
Can be fiddly to set up and maintain.
There are cases where changes to headers are apparently ignored if you don't force the pre-compiled header to compile.
The ccache caching frontend to gcc, g++, gfortran, ... works great for me. As its website says:
ccache is a compiler cache. It acts as a caching pre-processor to C/C++ compilers, using the -E compiler switch and a hash to detect when a compilation can be satisfied from cache. This often results in a 5 to 10 times speedup in common compilations.
On Debian / Ubuntu, just do 'apt-get install ccache' and create soft-links in, say, /usr/local/bin with names gcc, g++, gfortran, c++, ... that point to /usr/bin/ccache.
[EDIT] To make this more explicit in response to some early comments: This provides essentially pre-compiled headers and sources by caching a larger chunk of the compilation step. So it uses an idea that is similar to pre-compiled headers, and carries it further. The speedups can be dramatic -- a factor of 5 to 10 as the website says.
For plain C, I would avoid precompiled headers. As you say, they can potentially cause problems, and preprocessing time is really small compared to the regular compilation.
For C++, precompiled headers can potentially save a lot of time, as C++ headers often contain large template code whose compilation is expensive. I have no practical experience with them, so I recommend you measure how much compilation time you save in your project. To do so, compile the entire project with precompiled headers once, then delete a single object file and measure how long it takes to recompile that file.
The GNU gcc documentation discusses possible pitfalls with pre-compiled headers.
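For reference, the gcc workflow is roughly: compile the header once (e.g. g++ -x c++-header pch.h, which produces pch.h.gch) and then make it the first include of every translation unit; -Winvalid-pch will tell you when a .gch was found but rejected. A minimal sketch, with hypothetical file names:

    // pch.h -- heavyweight, rarely-changing includes only
    #include <map>
    #include <string>
    #include <vector>

    // widget.cpp
    #include "pch.h"    // must be the first include, or the .gch is not used
    #include "widget.h"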
I am using PCH in a Qt project, which uses CMake as its build system, and it saves a lot of time. I grabbed some PCH CMake scripts, which needed some tweaking since they were quite old, but it was generally easier to set up than I expected. I should add, I am not much of a CMake expert.
I am now including a big part of Qt (QtCore, QtGui, QtOpenGL) and a few stable headers at once.
Pros:
For Qt classes, no forward declarations are needed, and of course no includes.
Fast.
Easy to setup.
Cons:
You can't include the PCH include in headers. This isn't much of a problem, except if you use Qt and let the build system translate the moc files separately, which happens to be exactly my configuration. In this case, you need to #include the Qt headers in your headers, because the mocs are generated from headers. The solution was to put additional include guards around the #include in the header, as sketched below.
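Concretely, the workaround looks something like this (a sketch; the guard macro must match the one inside the Qt header, here assumed to be QOBJECT_H, so check your Qt version):

    // myclass.h -- parsed both by moc (without the PCH) and by normal TUs (with it)
    #ifndef QOBJECT_H      // redundant external include guard:
    #include <QObject>     // skipped cheaply when the PCH already provided it
    #endif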