Recompilation upon Comment Modification - C++

I have a header that a lot of files depend on. I changed a comment and this caused a full recompilation. I heard that whether code requires recompilation depends on comparing file timestamps against the last compilation, but is there a way I can freely modify comments and keep VS2008 from recompiling everything?

There is no way to freely modify comments and keep Visual Studio (any version) from recompiling "because only comments have changed". Tracking what has actually changed is the job of the version control system (e.g. git or SVN).
Your question seems to arise from working on a solution that takes a long time to build (incrementally or fully), and there are effective ways of improving that situation.
This Visual C++ PCH howto helped me reduce our build times significantly. We also apply all three points explained in this article, and on top of all that we use IncrediBuild (a commercial product). Each of these steps helped us keep C++ build times in check.
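For reference, here is a minimal sketch of the classic Visual C++ precompiled-header layout that such a howto describes (the file and header names are just the usual convention, not taken from the article):

```cpp
// stdafx.h -- collect large, rarely changing headers here
#pragma once
#include <windows.h>
#include <string>
#include <vector>

// stdafx.cpp -- compiled with /Yc"stdafx.h" so it *creates* the .pch
#include "stdafx.h"

// any_other_file.cpp -- compiled with /Yu"stdafx.h" so it *uses* the .pch;
// the precompiled header must be the very first include
#include "stdafx.h"
#include "project_specific_header.h"
```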

Related

Faster build times in C++ [duplicate]

I once worked on a C++ project that took about an hour and a half for a full rebuild. Small edit, build, test cycles took about 5 to 10 minutes. It was an unproductive nightmare.
What are the worst build times you ever had to handle?
What strategies have you used to improve build times on large projects?
Update:
How much do you think the language used is to blame for the problem? I think C++ is prone to massive dependencies on large projects, which often means even simple changes to the source code can result in a massive rebuild. Which language do you think copes with large project dependency issues best?
1. Forward declaration
2. Pimpl idiom (both of the above are illustrated in the sketch after this list)
3. Precompiled headers
4. Parallel compilation (e.g. the MPCL add-in for Visual Studio).
5. Distributed compilation (e.g. Incredibuild for Visual Studio).
6. Incremental build
7. Split the build into several "projects" so that not all the code is compiled when it isn't needed.
[Later Edit]
8. Buy faster machines.
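A minimal sketch of points 1 and 2 (the class and file names are invented for illustration):

```cpp
// widget.h -- what clients include; no heavy headers needed here
#pragma once
#include <memory>

class Renderer;                    // forward declaration instead of #include "renderer.h"

class Widget {
public:
    Widget();
    ~Widget();                     // defined in the .cpp, where Impl is complete
    void draw(Renderer& r);
private:
    struct Impl;                   // pimpl: private members hide behind this pointer,
    std::unique_ptr<Impl> impl_;   // so changing them no longer touches widget.h
};

// widget.cpp -- the only file that pays for the heavy includes
#include "widget.h"
#include "renderer.h"
#include <vector>

struct Widget::Impl {
    std::vector<int> vertices;
};

Widget::Widget() : impl_(new Impl) {}
Widget::~Widget() = default;

void Widget::draw(Renderer& /*r*/) { /* use impl_->vertices ... */ }
```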
My strategy is pretty simple - I don't do large projects. The whole thrust of modern computing is away from the giant and monolithic and towards the small and componentised. So when I work on projects, I break things up into libraries and other components that can be built and tested independently, and which have minimal dependencies on each other. A "full build" in this kind of environment never actually takes place, so there is no problem.
One trick that sometimes helps is to #include everything into one .cpp file. Since each header is then processed only once, this can save you a lot of time. (The downside is that it makes it impossible for the compiler to parallelize compilation.)
You should be able to specify that multiple .cpp files be compiled in parallel (-j with make on Linux, /MP on MSVC; MSVC also has an option to compile multiple projects in parallel. These are separate options, and there's no reason why you shouldn't use both.)
In the same vein, distributed builds (Incredibuild, for example), may help take the load off a single system.
SSDs are supposed to be a big win, although I haven't tested this myself (a C++ build touches a huge number of files, which can quickly become a bottleneck).
Precompiled headers can help too, when used with care. (They can also hurt you, if they have to be recompiled too often).
And finally, trying to minimize dependencies in the code itself is important. Use the pImpl idiom, use forward declarations, keep the code as modular as possible. In some cases, use of templates may help you decouple classes and minimize dependencies. (In other cases, templates can slow down compilation significantly, of course)
But yes, you're right, this is very much a language thing. I don't know of another language that suffers from the problem to this extent. Most languages have a module system that allows them to eliminate header files, which are a huge factor. C has header files, but is such a simple language that compile times are still manageable. C++ gets the worst of both worlds: a big, complex language, and a terribly primitive build mechanism that requires a huge amount of code to be parsed again and again.
Multi-core compilation. Very fast with 8 cores compiling on an i7.
Incremental linking
External constants
Removed inline methods on C++ classes.
The last two gave us a reduction in linking time from around 12 minutes to 1-2 minutes. Note that this is only needed if things have huge visibility, i.e. are seen "everywhere", and if there are many different constants and classes.
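A rough sketch of those last two points (the names are made up): moving constants and method bodies out of widely included headers means their definitions are compiled and emitted once instead of in every translation unit, which is what shrinks the link:

```cpp
// settings.h -- widely included, so keep it declaration-only
#pragma once

extern const int kMaxRetries;      // "external constant": declared here...

class Settings {
public:
    int timeoutSeconds() const;    // ...and no inline body in the header
private:
    int timeout_ms_ = 30000;
};

// settings.cpp -- the single definition site; edits here recompile one file
#include "settings.h"

const int kMaxRetries = 5;

int Settings::timeoutSeconds() const { return timeout_ms_ / 1000; }
```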
IncrediBuild
Unity Builds
Incredibuild
Pointer to implementation
forward declarations
compiling "finished" sections of the proejct into dll's
ccache & distcc (for C/C++ projects) -
ccache caches compiled output, using the pre-processed file as the 'key' for finding the output. This is great because pre-processing is pretty quick, and quite often changes that force a recompile don't actually change the source for many files. It also really speeds up a full recompile. Another nice feature is that you can share the cache among team members, which means that only the first person to grab the latest code actually compiles anything.
distcc does distributed compilation across a network of machines. This is only good if you HAVE a network of machines to use for compilation. It goes well with ccache, and only moves the pre-processed source around, so the only thing you have to worry about on the compiler engine systems is that they have the right compiler (no need for headers or your entire source tree to be visible).
The best suggestion is to build makefiles that actually understand dependencies and do not automatically rebuild the world for a small change. But, if a full rebuild takes 90 minutes, and a small rebuild takes 5-10 minutes, odds are good that your build system already does that.
Can the build be done in parallel? Either with multiple cores, or with multiple servers?
Check in pre-compiled bits for pieces that really are static and do not need to be rebuilt every time. Third-party tools/libraries that are used but not altered are a good candidate for this treatment.
Limit the build to a single 'stream' if applicable. The 'full product' might include things like a debug version, or both 32 and 64 bit versions, or may include help files or man pages that are derived/built every time. Removing components that are not necessary for development can dramatically reduce the build time.
Does the build also package the product? Is that really required for development and testing? Does the build incorporate some basic sanity tests that can be skipped?
Finally, you can re-factor the code base to be more modular and to have fewer dependencies. Large Scale C++ Software Design is an excellent reference for learning to decouple large software products into something that is easier to maintain and faster to build.
EDIT: Building on a local filesystem as opposed to a NFS mounted filesystem can also dramatically speed up build times.
Fiddle with the compiler optimisation flags,
use the -j4 option of gmake for parallel compilation (multi-core or single core)
if you are using clearmake, use winking (reuse of derived objects already built by others)
in extreme cases, take out the debug flags.
Use some powerful servers.
This book Large-Scale C++ Software Design has very good advice I've used in past projects.
Minimize your public API
Minimize inline functions in your API. (Unfortunately this also increases linker requirements).
Maximize forward declarations.
Reduce coupling between code. For instance, pass two integers to a function for coordinates, instead of your custom Point class that has its own header file (see the sketch after this list).
Use Incredibuild. But it has some issues sometimes.
Do NOT put code that gets exported from two different modules in the SAME header file.
Use the PImpl idiom. Mentioned before, but it bears repeating.
Use Pre-compiled headers.
Avoid C++/CLI (i.e. managed C++). Linker times are impacted too.
Avoid using a global header file that includes 'everything else' in your API.
Don't put a dependency on a lib file if your code doesn't really need it.
Know the difference between including files with quotes and angle brackets.
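As a sketch of the point about coupling (the function and header names are hypothetical):

```cpp
// geometry.h -- decoupled version: takes plain ints, so no extra header is required
#pragma once

double distanceFromOrigin(int x, int y);

// Compare with the coupled version, which forces point.h (and everything it
// includes) on every file that includes geometry.h:
//   #include "point.h"
//   double distanceFromOrigin(const Point& p);
```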
Powerful compilation machines and parallel compilers. We also make sure the full build is needed as little as possible. We don't alter the code to make it compile faster.
Efficiency and correctness are more important than compilation speed.
In Visual Studio, you can set the number of projects to build in parallel. The default value is 2; increasing it can save some time.
This will help if you don't want to mess with the code.
This is the list of things we did for a development under Linux :
As Warrior noted, use parallel builds (make -jN)
We use distributed builds (currently icecream, which is very easy to set up); with this we can have tens of processors at a given time. This also has the advantage of giving the builds to the most powerful and least loaded machines.
We use ccache so that when you do a make clean, you don't have to really recompile your sources that didn't change, it's copied from a cache.
Note also that debug builds are usually faster to compile since the compiler doesn't have to make optimisations.
We tried creating proxy classes once.
These are really simplified versions of a class that include only the public interface, reducing the number of internal dependencies that need to be exposed in the header file. However, they came at the heavy price of spreading each class over several files that all needed to be updated whenever the class interface changed.
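Roughly what those proxy classes looked like (names invented): the proxy repeats only the public interface and forwards to the real class, so client code includes a lightweight header:

```cpp
// engine_proxy.h -- the lightweight header that clients include
#pragma once

class Engine;                 // the real class is only forward-declared here

class EngineProxy {
public:
    EngineProxy();
    ~EngineProxy();
    void start();             // mirrors Engine's public interface
private:
    Engine* impl_;            // created and destroyed in the .cpp
};

// engine_proxy.cpp -- the only file that needs the heavy engine.h
#include "engine_proxy.h"
#include "engine.h"

EngineProxy::EngineProxy() : impl_(new Engine) {}
EngineProxy::~EngineProxy() { delete impl_; }
void EngineProxy::start() { impl_->start(); }
```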
In general large C++ projects that I've worked on that had slow build times were pretty messy, with lots of interdependencies scattered through the code (the same include files used in most cpps, fat interfaces instead of slim ones). In those cases, the slow build time was just a symptom of the larger problem, and a minor symptom at that. Refactoring to make clearer interfaces and break code out into libraries improved the architecture, as well as the build time. When you make a library, it forces you to think about what is an interface and what isn't, which will actually (in my experience) end up improving the code base. If there's no technical reason to have to divide the code, some programmers through the course of maintenance will just throw anything into any header file.
Cătălin Pitiș covered a lot of good things. Other ones we do:
Have a tool that generates reduced Visual Studio .sln files for people working in a specific sub-area of a very large overall project
Cache DLLs and pdbs from when they are built on CI for distribution on developer machines
For CI, make sure that the link machine in particular has lots of memory and high-end drives
Store some expensive-to-regenerate files in source control, even though they could be created as part of the build
Replace Visual Studio's checking of what needs to be relinked by our own script tailored to our circumstances
It's a pet peeve of mine, so even though you already accepted an excellent answer, I'll chime in:
In C++, it's less the language as such than the language-mandated build model, which was great back in the seventies, and the header-heavy libraries.
The only thing that is wrong about Cătălin Pitiș' reply: "buy faster machines" should go first. It is the easiest way and has the least impact.
My worst was about 80 minutes on an aging build machine running VC6 on W2K Professional. The same project (with tons of new code) now takes under 6 minutes on a machine with 4 hyperthreaded cores, 8 GB RAM, Win 7 x64 and decent disks. (A similar machine with about 10-20% less processor power, 4 GB RAM and Vista x86 takes twice as long.)
Strangely, incremental builds are most of the time slower than full rebuilds now.
A full build is about 2 hours. I try to avoid making modifications to the base classes, and since my work is mainly on the implementation of these base classes I only need to build small components (a couple of minutes).
Create some unit test projects to test individual libraries, so that if you need to edit low level classes that would cause a huge rebuild, you can use TDD to know your new code works before you rebuild the entire app. The John Lakos book as mentioned by Themis has some very practical advice for restructuring your libraries to make this possible.

GNU tool to analyze and reduce compile time for my application

I am using SUSE10 (64 bit)/AIX (5.1) and HP I64 (11.3) to compile my application. Just to give some background, my application has around 200 KLOC (200,000 lines) of code (without templates). It is purely C++ code. From measurements, I see that compile time ranges from 45 minutes (SUSE) to around 75 minutes (AIX).
Question 1 : Is this time normal (acceptable)?
Question 2 : I want to re-engineer the code arrangement and reduce the compile time. Is there any GNU tool which can help me to do this?
PS:
a. Most of the questions on Stack Overflow were related to Visual Studio, so I had to post a separate question.
b. I use gcc version 4.1.2.
c. Another piece of info (which might be useful): the code is spread across around 130 .cpp files, but the code per file varies from 1 KLOC to 8 KLOC.
Thanks in advance for your help!
Edit 1 (after comments)
#PaulR "Are you using makefiles for this ? Do you always do a full (clean) build or just build incrementally ?"
Yes, we are using makefiles for building the project.
Sometimes we are forced to do a full build (such as an overnight build/run, an automated run, or a complete refresh of the code after many members have changed many files), so I have posted in a general sense.
Excessive (or seemingly excessive) compilation times are often caused by an overly complicated include file hierarchy.
While not exactly a tool for this purpose, doxygen could be quite helpful: among other charts it can display the include file hierarchy for every source file in the project. I have found many interesting and convoluted include dependencies in my projects.
Read John Lakos's Large-Scale C++ Software Design for some very good methods of analysing and re-organising the structure of the project in order to minimise dependencies. Ultimately the time taken to build a large project increases as the amount of code increases, but also as the dependencies increase (or at least the impact of changes to header files increases as the dependencies increase). So minimising those dependencies is one thing to aim for. Lakos's concept of levelization is very helpful in working out how to split several large monolithic inter-dependent libraries into something with a much better structure.
I can't address your specific questions but I use ccache to help with compile times, which caches object files and will use the same ones if source files do not change. If you are using SuSE, it should come with your distribution.
In addition to the already mentioned ccache, have a look at distcc. Throwing more hardware at such a scalable problem is cheap and simple.
Long compile times in large C++ projects are almost always caused by inappropriate use of header files. Section 9.3.2 of The C++ Programming Language provides some useful points on this. Precompiling header files can considerably reduce the compile time of large projects. See the GNU documentation on Precompiled Headers for more information.
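A rough sketch of how that looks with GCC (the header names are placeholders); GCC picks up a matching .gch automatically, provided the precompiled header is the first include and the compile flags match:

```cpp
// common.h -- collect the stable, widely used headers
#pragma once
#include <string>
#include <vector>
#include <map>

// Build the precompiled header once, with the same flags as the rest of the build:
//   g++ -O2 -x c++-header common.h -o common.h.gch

// some_file.cpp -- include common.h first so the .gch can be used
#include "common.h"
#include "some_file.h"
```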
Make sure that your main make targets can be executed in parallel (make -j <CPU_COUNT+1>) and of course try to use ccache. In addition, we experimented with ccache and RAM disks: if you export CCACHE_DIR and point it at a RAM disk, this will speed up your compilation process as well.

Finding bugs in Subversion's mixed revision working copies

The project I work on has recently been switched from a horribly antiquated revision control system to Subversion. I felt like I had a fairly good understanding of Subversion a few years ago, but once I learned about Mercurial, I forgot about Subversion quickly.
My question is targeted at those who work with a sizable number (15+) developers on the same branch in Subversion.
Let's say you check out rev N of the repository. You make some changes and then commit. Meanwhile, other developers have made changes to other files. You hear about another developer's changes to subsystem X and decide you need them immediately. You don't want to update the entire working copy because it would pull in all sorts of stuff and then you would have to do a lengthy compile (C++ project). The risk I see is that the developer updates only subsystem X, not realizing that the new code depends on a recent change in subsystem Y. The code compiles, but crashes at runtime.
How do you deal with this?
Does the developer report what they think might be a bug (even though it's not a bug)?
Do you require developers to update their entire working copy before reporting a bug? Wouldn't this deter bug reports?
Do you prevent this situation from occurring through some mechanism I haven't thought of?
Since you committed all your work in progress, you have no reason not to update your copy with the entire latest revision. The lengthy compile is part of the price of a large project. The compile time is almost always less than the time spent trying to determine whether you have a bug, or whether there's some obscure incompatibility because you didn't check everything out.
That project had distributed compiles to all the workstations in the group. Since we had about 15 computers for the task, that meant what would normally be a 6 hour or so build took about 25 minutes.
The responsibility for keeping track of dependencies lies with the developer who introduces them.
In this case, developer X should make sure the change works with the current versions of the other subsystems, or at least document which versions it works with.
The ways I have seen that help developers deal with this are:
Include dependency checking in the build system. Many open source projects do this in a configure script.
Configure the version control system to handle this for you. I don't know how to do this in Subversion.
This is obviously not a cure for long compile times, but it helps to avoid unnecessary rebuilds.
A complex dependency graph is also an indication of questionable design. It may be a good idea to refactor the code to reduce coupling between subsystems.

Will splitting code into several .cpps decrease compilation time?

Suppose I have a fairly complex class I'm working on. Half the methods are done and tested, but I'm still developing the other half. If I put the finished code in one .cpp and the rest in another, will Visual Studio (or any other IDE for that matter) compile faster when I only change code that's in the "work-in-progress" .cpp?
Thanks!
Yes, I believe Visual Studio compiles incrementally, so as long as you hit Build and not Rebuild All you should get faster compile times by splitting out.
However, you should really be splitting things out for code-factoring reasons, i.e. each class should have a single purpose, etc. I'm sure you know.
It really depends. For a very large project, link time can often be considerably more expensive than the time to compile a single file. In our codebase at work (a game based on the Unreal Engine) we actually found that making "bulk.cpp" files that include many other files (effectively fewer translation units) decreases the turn around time significantly.
Even though individual compile time for a small change was increased, overall compile time (full rebuild) and link time (which happens even for a small change) both decreased dramatically.
As long as the header file doesn't change (assuming both .cpp files include the same header), then only the changed .cpp files will be compiled.
This is true for most IDEs at the very least. I haven't had experience in directly invoking compilers like gcc so I can't comment on that.
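A minimal sketch of that split (names invented): both source files include the same widget.h, so as long as the header is untouched, editing widget_wip.cpp recompiles only that one file:

```cpp
// widget.h -- shared declaration, included by both .cpp files
#pragma once

class Widget {
public:
    void stableMethod();   // finished and tested
    void wipMethod();      // still under development
};

// widget_stable.cpp -- untouched, so not recompiled
#include "widget.h"
void Widget::stableMethod() { /* ... */ }

// widget_wip.cpp -- the only file that recompiles while you iterate
#include "widget.h"
void Widget::wipMethod() { /* ... */ }
```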
The answer is probably yes, because you'll probably be performing incremental builds (compiling only the .cpp that changed) and using pre-compiled headers. If you are not using either of these features, then you'll get slower builds. I'm pretty sure that a default Visual Studio C++ project uses both incremental builds and pre-compiled headers.
Yes it will be faster.
But more importantly: Don't worry about this, if your class is so large that it's taking a lot of time on a modern processor it's god's way of saying your class needs to get refactored into smaller pieces.

#include all .cpp files into a single compilation unit?

I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.
It turns out the author of the projects has a single ALL.cpp which #includes all other .cpp files. The *All configurations just build this one ALL.cpp file; it is of course excluded from the regular configurations, which don't build ALL.cpp.
I just wondered if this was a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)
What kinds of pitfalls are you likely to encounter with this? One I can think of is if you have anonymous namespaces in your .cpps, they're no longer 'private' to that cpp but now visible in other cpps as well?
All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?
It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily for incremental building.
And it's a PITA to maintain.
EDIT: here's the first google link for more info: http://buffered.io/posts/the-magic-of-unity-builds/
The thing that makes it fast is that the compiler only needs to read in everything once, compile it, then link, rather than doing that for every .cpp file.
Bruce Dawson has a much better write up about this on his blog: http://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/
Unity builds improved build speeds for three main reasons. The first reason is that all of the shared header files only need to be parsed once. Many C++ projects have a lot of header files that are included by most or all CPP files and the redundant parsing of these is the main cost of compilation, especially if you have many short source files. Precompiled header files can help with this cost, but usually there are a lot of header files which are not precompiled.
The next main reason that unity builds improve build speeds is because the compiler is invoked fewer times. There is some startup cost with invoking the compiler.
Finally, the reduction in redundant header parsing means a reduction in redundant code-gen for inlined functions, so the total size of object files is smaller, which makes linking faster.
Unity builds can also give better code-gen.
Unity builds are NOT faster because of reduced disk I/O. I have profiled many builds with xperf and I know what I'm talking about. If you have sufficient memory then the OS disk cache will avoid the redundant I/O - subsequent reads of a header will come from the OS disk cache. If you don't have enough memory then unity builds could even make build times worse by causing the compiler's memory footprint to exceed available memory and get paged out.
Disk I/O is expensive, which is why all operating systems aggressively cache data in order to avoid redundant disk I/O.
I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the ability for the compiler to optimize the program for size?
Normally some optimizations are only performed within distinct compilation units, such as removal of duplicate code and inlining.
That said, I seem to remember that recent compilers (Microsoft's, Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect that this 'trick' is unnecessary.
That said, it would be curious to see if there is indeed any difference.
I agree with Bruce; from my experience, I tried implementing a unity build for one of my .dll projects, which had a ton of header includes and lots of .cpps, to bring down the overall compilation time in VS2010 (I had already exhausted the incremental build options). But rather than cutting down the compilation time, I ran out of memory and the build was not even able to finish compiling.
However, to add: I did find that enabling the Multi-Processor Compilation option in Visual Studio helps quite a bit in cutting down compilation time; I am not sure whether such an option is available in compilers on other platforms.
In addition to Bruce Dawson's excellent answer, the following link offers more insight into the pros and cons of unity builds - https://github.com/onqtam/ucm#unity-builds
We have a multi-platform project for macOS and Windows. We have lots of large templates and many headers to include. We halved the build time with Visual Studio 2017 MSVC using precompiled headers. For macOS we tried for several days to reduce the build time, but precompiled headers only brought it down to 85%. Using a single .cpp file was the breakthrough. We simply create this ALL.cpp with a script. Our ALL.cpp contains a list of includes of all the .cpp files of our project. We reduced build time to 30% with Xcode 9.0. The only drawback was that we had to give names to all the private unnamed compilation-unit namespaces in the .cpp files; otherwise the local names will clash.
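For illustration, a generated ALL.cpp amounts to nothing more than this (the file names are placeholders), plus the convention of using named instead of anonymous namespaces for file-local helpers so they don't collide once everything lands in one translation unit:

```cpp
// ALL.cpp -- generated by a script; only the *All configurations compile this file,
// and it is excluded from the regular configurations (which build the .cpp files directly)
#include "audio.cpp"
#include "render.cpp"
#include "network.cpp"
// ... one #include per .cpp file in the project ...

// render.cpp (fragment) -- file-local helpers get a named namespace instead of
// an anonymous one, so identical helper names in other .cpp files can't clash
namespace render_detail {
    int clampToByte(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }
}
```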