Compilation in VS2010 takes too long [duplicate] - c++

Possible Duplicate:
What strategies have you used to improve build times on large projects?
I have some 800 lines of code written in C++; the .cpp file has some 7-8 classes with an equal number of objects as well, but the program takes a good 7 seconds to build. This is my first program in C++, so I want to know: is that normal, or way too much? Also, it would be really great if someone who's an expert in C++ could share some insights that would help a beginner like me.
If it helps in any way I am using Visual Studio 2010.

The time to compile C++ probably varies more than with any other language I've ever used.
One thing that can make a significant difference is what headers you're including. Even though your code may only be 800 lines, if a few of those are #includes, the compiler may easily be looking at thousands of lines (just for reference, #include <windows.h>, by itself, generally means the compiler will look at over 10,000 lines).
A few of us in the C++ chat room were recently doing some tests on a particularly nasty piece of code that has a lot of recursive templates. Even though it's only about 30 lines of code, depending on the parameters you set, it's pretty easy to get compile times of an hour or more -- and with most compilers (including VC++10 and 11/2012) it's pretty easy to outright crash the compiler.
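Just to illustrate the mechanism (this is not the chat-room code, only a made-up sketch): each instantiation of this Blowup template forces two more, so the number of distinct types the compiler has to generate grows exponentially with the first parameter.

// blowup.cpp -- hypothetical example of exponential template instantiation.
template <unsigned N, unsigned Tag>
struct Blowup {
    static const unsigned long value =
        Blowup<N - 1, Tag * 2>::value + Blowup<N - 1, Tag * 2 + 1>::value;
};

// Base case stops the recursion.
template <unsigned Tag>
struct Blowup<0, Tag> {
    static const unsigned long value = 1;
};

int main() {
    // With N = 16 the compiler must instantiate roughly 2^17 distinct types;
    // bumping N up by one roughly doubles the compile time.
    return static_cast<int>(Blowup<16, 1>::value & 0xFF);
}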
If the code has little or nothing in the way of headers and/or templates (especially things like recursive templates), then 7.5 seconds to compile seems fairly excessive. Just for comparison, I did a quick test compiling a program I had lying around that's close to the same size (926 lines). That took 0.3 seconds. My machine is something like 5 or 6 years old, so its speed isn't even close to cutting edge either. At the same time, I should add that for compiling that small an amount of code, CPU speed probably isn't the main determining factor. I'd expect an SSD to make a lot more difference than a faster CPU.

C++ is a complicated language that requires more time to compile than many other languages. On top of that, Visual Studio itself has additional overhead for building IntelliSense databases and such. There's also a linking phase to consider after the actual compilation.
When Visual Studio creates a new project, it typically creates a precompiled header that includes a lot of Windows header files. This would add many thousands of lines to your 800-line source.
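For reference, this is roughly what that generated precompiled header looks like for a Win32 project (the exact contents vary by project template; this is only a sketch):

// stdafx.h : include file for standard system includes that are used often
// but changed rarely; compiled once into a .pch and reused by every .cpp.
#pragma once

#include <windows.h>   // on its own this pulls in tens of thousands of lines
#include <stdio.h>
#include <tchar.h>

// Every .cpp in the project starts with #include "stdafx.h"; the project
// compiles stdafx.cpp with /Yc to create the .pch and the rest with /Yu.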
7 seconds seems a little slow, but not out of line.

Related

Idea: C++ auto unity build system, would it work?

So this may sound a bit crazy, but I had the following idea...
In C++, a "unity build" is when you put all your code in one file, either using #includes or simply by copying and pasting. This allows for lightning-fast compilation.
Now, for simplicity, let's assume we have a header-only library and we restrict ourselves to only using classes (that way we have a guaranteed namespace).
Now I was wondering how well this would work if made automatic. A script/tool would preprocess the .h files, picking out includes and dependencies. The tool would walk every .h file recursively and construct a mapping of include dependencies. It would then place these dependencies in one file (main.cpp) using #includes and compile it.
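Concretely, the file such a tool spits out might look something like this (the header names are invented for illustration; the point is that each header appears once, after everything it depends on):

// unity.cpp -- generated by the hypothetical unity build tool.
// Headers are emitted in dependency order, so each one sees what it needs
// and the whole project becomes a single translation unit.
#include "math/vector.h"
#include "math/matrix.h"     // depends on vector.h, so it is emitted after it
#include "engine/mesh.h"     // depends on both of the above
#include "app/main_window.h"

int main() {
    // ... program entry point, same as before ...
}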
Now I want to get your opinions on this idea of an automatic unity build creator. Is this smart to do? Would it work as I expect?
Unity builds are overrated, really.
This allows for lightning-fast compilation.
This is not quite true these days. Compilation is not I/O-bound, assuming you have a dedicated build machine with enough RAM. Every modern operating system caches files after the first read and then serves them from memory. You can avoid intermediate files by piping or by storing them on a RAM drive. So the preprocessor's work is mostly just concatenating files, which is fast. The real time is spent in compilation and linking.
And any serious application has thousands of files, so the single unity file will be huge. A real-world example: we work with an app whose source code takes more than 100MB (without system and third-party includes), so your tool would have to operate on a file of roughly 150MB. I wouldn't be surprised if some compilers refused to compile it, not to mention the time it would take. It is far better to split into smaller files and compile in parallel. Again, a real example: a single-threaded compile took 40 minutes to finish; running on one server with 16 threads, around 2.5-3 minutes; and compiling on a farm of 16 servers, each with 16 cores, using distcc, 30 seconds plus some time to link.
The other benefit of unity builds, better optimization, vanishes once you use LTO and a static build.
Answering your question: yes, you can do that. Does it make sense? Doubtful.
What do you accomplish?
You hate "header only" because you" need to recompile the world every time". In your "brave new world" there is only 1 file so it is always going to be compiled every time - no matter what.
People have spent a lot of time and effort making C++ such that it can use libraries etc to help speed up compilation. Oh, by the way the system I'm looking at today has 17 million + lines of code. You want that in one file?
Links provided by OP in the comments are a good explanation. The big problem I see is scalability. It did remind me of one area in my (inherited) code base where the guards are outside the includes to avoid opening files. e.g.
#ifndef BLAH_H
#include <blah.h>
#endif

Performance Gains with Visual Studio Whole Program Optimization

Our product is a library which we deliver as a DLL or a static library. I've noticed that using Whole Program Optimization in Visual Studio improves performance by around 30%. This is good, but referring to
http://blogs.msdn.com/b/vcblog/archive/2009/02/24/quick-tips-on-using-whole-program-optimization.aspx
I see that it is not suggested to use whole program optimization for libraries that are delivered to customers.
The same article mentions around a 3-4% improvement in performance. Now that we see ten times the expected performance gain, I am wondering whether we are doing something wrong.
Not sure how to formulate this, but I'll give it a try: apparently our code base has a "problem" that WPO can solve very well. Whatever this "problem" (or problems?) is, it is less important in other software, hence WPO usually has a relatively small impact. Now my question is: what might this problem be? We would like to optimize our code manually, since turning on WPO is not an option.
Probably you have some functions that are called many times but can't be inlined without WPO because they are defined in source files. You can use a profiler to identify them, then move them into headers and mark them inline.
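A sketch of what that change looks like (dot3 is an invented example, not from the question): a small, frequently called function defined only in a .cpp file can be inlined across translation units only by WPO, whereas defining it in the header lets every caller inline it even with WPO off.

// Hypothetical hot function, originally defined only in vector_math.cpp:
//
//     double dot3(const double* a, const double* b);   // declared in the header
//
// Without WPO, calls from other .cpp files can never be inlined. Moving the
// definition into vector_math.h and marking it inline fixes that:

inline double dot3(const double* a, const double* b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}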

How to find compilation bottlenecks?

How do I find which parts of code are taking a long time to compile?
I am already using precompiled headers for all of my headers, and they definitely improve the compilation speed. Nevertheless, whenever I make a change to my C++ source file, compiling it takes a long time (this is CPU/memory-bound, not I/O-bound -- it's all cached). Furthermore, this is not related to the linking portion, just the compilation portion.
I've tried turning on /showIncludes, but of course, since I'm using precompiled headers, nothing is getting included after stdafx.h. So I know it's only the source code that takes a while to compile, but I don't know what part of it.
I've also tried doing a minimal build, but it doesn't help. Neither does /MP, because it's a single source file anyway.
I could try dissecting the source code and figuring out which part is a bottleneck by adding/removing it, but that's a pain and doesn't scale. Furthermore, it's hard to remove something and still let the code compile -- error messages, if any, come back almost immediately.
Is there a better way to figure out what's slowing down the compilation?
Or, if there isn't a way: are there any language constructs (e.g. templates?) that take a lot longer to compile?
What I have in my C++ source code:
Three (relatively large) ATL dialog classes (including the definitions/logic).
They could very well be the cause, but they are the core part of the program anyway, so obviously they need to be recompiled whenever I change them.
Random one-line (or similarly small) utility functions, e.g. a byte-array-to-hex converter
References to (inline) classes found inside my header files. (One of the header files is gigantic, but it uses templates only minimally, and of course it's precompiled. The other one is the TR1 regex -- it's huge, but it's barely used.)
Note:
I'm looking for techniques that I can apply more generally in figuring out the cause of these issues, not specific recommendations for my very particular situation. Hopefully that would be more useful to other people as well.
Two general ways to improve compilation time:
instead of including headers in other headers, use forward declarations (include headers only in the source files; see the sketch after this list)
minimize templated code (avoid templates where you can)
These two rules alone will greatly improve your build time.
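A small sketch of the first rule (Widget and Texture are invented names): the header mentions Texture only through a pointer, so a forward declaration is enough and the expensive texture.h is included only in the .cpp file.

// widget.h
#pragma once

class Texture;              // forward declaration instead of #include "texture.h"

class Widget {
public:
    void setTexture(Texture* t);
private:
    Texture* texture_;      // pointers/references don't need the full definition
};

// widget.cpp
// #include "widget.h"
// #include "texture.h"     // the heavy include is paid only here
// void Widget::setTexture(Texture* t) { texture_ = t; }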
You can find more tricks in "Large-Scale C++ Software Design" by Lakos.
For Visual Studio (I am not sure whether this applies to versions as old as yours), take a look at this: How should I detect unnecessary #include files in a large C++ project?
Template code generally takes longer to compile.
You could investigate using "compiler firewalls", which reduce how often a .cpp file has to be rebuilt (and, thanks to the forward declarations, they can reduce the time spent reading included files as well).
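For reference, a "compiler firewall" usually means the pimpl idiom; here is a minimal sketch (Logger and its members are invented for illustration):

// logger.h -- the only thing clients include; no heavy headers appear here.
#pragma once

class Logger {
public:
    Logger();
    ~Logger();
    void write(const char* message);
private:
    struct Impl;    // defined only in logger.cpp, behind the "firewall"
    Impl* impl_;    // clients never see the implementation's includes
};

// logger.cpp would include the expensive headers, define Logger::Impl, and
// forward the calls to it, so changing the implementation never forces
// clients of logger.h to recompile.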
You can also shift time spent doing code generation from the compiler to the linker by using Link-Time Code Generation and/or Whole Program Optimization, though generally you lose time in the long run.

Visual Studio 2005 C++ Compiler slower that Visual Studio 6 Compiler?

One of our old C++ projects is still on Visual Studio 6. Once a year I try to convert it to a newer Visual Studio version, but it's not easy because not all of the code was written by us.
Anyway, I finally succeeded in converting the project to VS2005 after fixing a few hundred lines of code. But compiling the project takes a very long time! Much longer than in VS6.
Some classes have a lot of lines of code, a few thousand even. These are just arrays that get filled in the code with a lot of items. I know it's not the perfect solution, but this is how it is at the moment, and VS6 never had a problem with it.
Maybe there are just some settings I have to adjust to speed things up, but if it stays the way it is now, I will keep it as a VS6 project, since I don't want to sit at my desk all day doing nothing.
Any ideas?
Differences in compile times are normal. The C++ compiler in VS2005 is significantly more compliant with standard C++ than VC6 was. There is a huge difference between these two compilers.
VS2005 produces more optimized code and thus has to spend extra time figuring out how to make it faster.
See if you can find the smallest modules that compile quickly and the ones that compile very slowly in VS2005, and look at what they don't have in common. Add elements from the slow module to the fast one until you get a sudden slowdown; that is the cause of the problem.
Sounds like you are a few years behind in your "once-a-year-upgrade", no?
Check to make sure you didn't turn off pre-compiled headers.
Get Incredibuild.
Definitely worth the money you pay for it.
What it does is delegate compilation of files to idle build "agents" on the network, get the results back, and link them on the build coordinator. The more machines, the better. I was impressed with the reduction in build time.

Why do compilations take so long? [closed]

Why do programming languages take so long to compile? I assumed C++ takes a long time because it must parse and compile a header every time it compiles a file, but I have heard that precompiled headers take just as long? I suspect C++ is not the only language that has this problem.
One C++-specific problem that makes it horribly slow to compile is that, unlike almost any other language, you can't parse it independently of semantic analysis.
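The classic illustration: the same tokens parse completely differently depending on earlier declarations, so the parser has to do name lookup while it parses.

// Whether "x * y;" is a declaration or an expression depends on what x is,
// which the parser can only know by consulting the symbol table (semantic
// analysis) at the same time as parsing.

typedef int x;      // x names a type here...

int main() {
    x * y = 0;      // ...so this line declares y as a pointer to int.
    (void)y;
    // If x had instead been declared as "int x = 3;", the tokens "x * y"
    // would have to be parsed as a multiplication expression instead.
    return 0;
}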
Compiling is a complicated process which involves quite a few steps:
Scanning/Lexing
Parsing
Intermediate code generation
Possibly Intermediate code optimization
Target machine code generation
Optionally Machine-dependent code optimization
(Leaving aside linking.)
Naturally, this will take some time for longer programs.
Precompiled headers are way faster, as has been known at least since 1988.
The usual reason for a C compiler or C++ compiler to take a long time is that it has to #include, preprocess, and then lex gazillions of tokens.
As an exercise you might find out how long it takes just to run cpp over a typical collection of header files---then measure how long it takes to lex the output.
gcc -O uses a very effective but somewhat slow optimization technique developed by Chris Fraser and Jack Davidson. Most other optimizers can be slow because they involve repeated iteration over fairly large data structures.
Language design does have an effect on compiler performance. C++ compilers are typically slower than C# compilers, which has a lot to do with the design of the language. (This also depends on the compiler implementer, Anders Hejlsberg implemented C# and is one of the best around.)
The simplistic "header file" structure of C++ contributes to its slower performance, although precompiled headers can often help. C++ is a much more complex language than C, and C compilers are therefore typically faster.
Compilation does not need to take long: tcc compiles ANSI C fast enough to be useful as an interpreter.
Some things to think about:
Complexity in the scanning and parsing passes. Presumably requiring long look-ahead will hurt, as will context-sensitive (as opposed to context-free) languages.
Internal representation. Building and working on a large and featureful AST will take some time. Presumably you should use the simplest internal representation that will support the features you want to implement.
Optimization. Optimization is fussy. You need to check for a lot of different conditions. You probably want to make multiple passes. All of this is going to take time.
They take as long as they take and it usually depends on how much extraneous stuff you inject into your compilation units. I'd like to see you hand-compile them any faster :-)
The first time you compile a file, you should have no headers at all. Then add them as you need them (and check when you're finished whether you still need them).
Other ways of reducing that time is to keep your compilation units small (even to the point of one function per file, in an extreme case) and use a make-like tool to ensure you only build what's needed.
Some compilers (IDEs, really) do incremental compilation in the background so that they're (almost) always close to fully compiled.
I think the other answers here have missed some important factors that slow down C++ compilation:
Compilation model that saves .obj/.o files to disk, reads them back, then links them
Linking in general and bad slow linkers in particular
Overly complex macro preprocessor
Arbitrarily complex Turing-complete template system
Nested and repeated inclusion of source files, even with #pragma once
User-inflicted fragmentation, splitting code into too many files (even to the point of one function per file, in an extreme case)
Bloated or low-effort internal data structures in the compiler
Overbloated standard library, template abuse
By contrast, these don't slow C++ compilation:
Scanning/Lexing
Parsing
Intermediate code generation
Target machine code generation
As an aside, optimization is one of the biggest slowdowns, but it is the only slowdown here that is actually necessary by some measure, and also it is entirely optional.
Run Idera RAD Studio (there is a free version). It comes with C++ and Delphi. The Delphi code compiles in a tiny fraction of the time that C++ code doing the same thing does. This is because C++ evolved haphazardly over the decades, without much thought to the compiler consequences of its complex, context-dependent macros and, to some degree, the so-called ".hpp" hell. Ada has similar issues. The Delphi dialect of Pascal was designed from the ground up to be an efficient language to compile, so a compile-and-run cycle takes seconds instead of minutes, making iterative debugging fast and easy. Debugging slow-to-compile languages is a huge time waster, and a pain in the you-know-what! BTW, Anders also wrote Delphi before Microsoft hired him away!