Why is my Visual Studio 2013 slow at compiling C++ code?

I have a C++ project where even minor code changes lead to a long compile, as shown below.
I also found that while compiling, cl.exe keeps loading memory at a rate of several MB per second. Does anyone have an idea what causes this?
The code's output is correct, but a lot of time is wasted on every compile. I hope someone can help.

That's actually not a lot of memory.
Modern C++ compilers assume that developers have decent PCs. Unlike JIT compilers (as in Java JVMs), normal C++ compilers have a very strong bias towards better code generation at the expense of compile-time efficiency.
This is also driven by the fact that PCs are generally a lot cheaper than developers. 8-16 GB of memory is not excessively expensive.

Related

Compiler memory consumption with template libraries (boost + Eigen)

I am writing a template algorithm that makes use of boost::accumulators and the Eigen linear algebra library.
While compiling, the Visual Studio compiler's (cl.exe) memory consumption peaks at over 2.5 GB of RAM, and my PC (Windows 7 32-bit with 3 GB of virtual address space) becomes unresponsive for quite a long time (~1 minute). The binary files (.obj) are 10-20 MB for these compilation units.
My questions (not directed towards these specific libraries):
Is this normal behavior for code that heavily uses templates?
Is there something that can be done to reduce the memory demands and compile time?
If there is no good solution to the problem, why isn't this addressed by the people that design the programming language? The more people understand C++, the more likely they are to use templates, generate hard-to-compile code, and produce bloated binaries.
If there is no good solution to the problem, why isn't this addressed by the people that design the programming language?
Because there is no good solution, full stop.
The problem you are talking about has nothing to do with C++. It's an artefact inherited from C: the old "translation unit" model. Fixing this problem would require redoing the compilation model. The C++ committee has been trying for years to make this happen without breaking every single line of existing C++ out there (which is a bigger consideration), but it's not a trivial problem. Fixing it would require vast changes.
Also, Clang has much better performance here, and newer versions of GCC, which are variadic-template-equipped, can do just as well.
Yes. Compiling templates is memory consuming. Some implementations suck more than others. In my personal experience, out of GCC, MSVC and Clang, the latter is the best at managing its memory use.
You can split your huge source files into several smaller ones. That would even out the load over several compile steps.
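A minimal sketch of one way to split the work, assuming C++11's extern template is available (the sum_of_squares template and file names here are made up): the header declares the instantiation so every includer skips it, and a single .cpp pays the instantiation cost once.

// stats.h -- included by many translation units
#pragma once
#include <vector>

template <typename T>
T sum_of_squares(const std::vector<T>& v);

// Explicit instantiation *declaration* (C++11): tells includers not to
// instantiate the double version themselves.
extern template double sum_of_squares<double>(const std::vector<double>&);

// stats.cpp -- the one translation unit that pays the instantiation cost
#include "stats.h"

template <typename T>
T sum_of_squares(const std::vector<T>& v)
{
    T acc = T();
    for (const T& x : v)
        acc += x * x;
    return acc;
}

// Explicit instantiation *definition*: the code is generated here, once.
template double sum_of_squares<double>(const std::vector<double>&);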
The people who designed the programming language cared only very little about the implementation, to give compiler writers enough freedom to excel and compete. Or suck.

Intel C++ compiler as an alternative to Microsoft's?

Is anyone here using the Intel C++ compiler instead of Microsoft's Visual C++ compiler?
I would be very interested to hear your experience about integration, performance and build times.
The Intel compiler is one of the most advanced C++ compilers available. It has a number of advantages over, for instance, the Microsoft Visual C++ compiler, and one major drawback. The advantages include:
Very good SIMD support; as far as I've been able to find out, it is the compiler with the best support for SIMD instructions.
Supports both automatic parallelization (multi-core optimizations) and manual parallelization (through OpenMP), and does both very well (a minimal OpenMP sketch follows this list).
Supports CPU dispatching; this is really important, since it allows the compiler to target the processor for optimized instructions when the program runs. As far as I can tell, this is the only C++ compiler available that does this, unless G++ has introduced it in theirs yet.
It is often shipped with optimized libraries, such as math and image libraries.
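As promised above, here is a minimal OpenMP sketch of the manual side of that; it is not Intel-specific, and the usual flags are /Qopenmp for icl, /openmp for cl and -fopenmp for gcc:

#include <cstdio>
#include <vector>

int main()
{
    const int n = 1 << 20;
    std::vector<double> a(n, 1.0), b(n, 2.0);
    double dot = 0.0;

    // Ask the compiler to split the loop across cores and combine
    // the per-thread partial sums at the end.
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < n; ++i)
        dot += a[i] * b[i];

    std::printf("dot = %f\n", dot);
    return 0;
}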
However, it has one major drawback: the dispatcher mentioned above only works on Intel CPUs. This means that advanced optimizations will be left out on AMD CPUs. There is a workaround for this, but it is still a major problem with the compiler.
To work around the dispatcher problem, it is possible to replace the dispatcher code produced with a version that works on AMD processors; one can, for instance, use Agner Fog's asmlib library, which replaces the compiler-generated dispatcher function. Much more information about the dispatching problem, and more detailed technical explanations of some of the topics, can be found in the Optimizing software in C++ paper - also from Agner (which is really worth reading).
On a personal note, I have used the Intel C++ compiler with Visual Studio 2005, where it worked flawlessly. I didn't experience any problems with Microsoft-specific language extensions; it seemed to understand the ones I used, but perhaps the ones mentioned by John Knoeller were different from the ones in my projects.
While I like the Intel compiler, I'm currently working with the Microsoft C++ compiler, simply because of the extra financial investment the Intel compiler requires. I would only use the Intel compiler as an alternative to Microsoft's or the GNU compiler if performance were critical to my project and I had the finances in order ;)
I'm not using the Intel C++ compiler at work or personally (I wish I were).
I would use it because it has:
Excellent inline assembler support. Intel C++ supports both Intel and AT&T (GCC) assembler syntaxes, for x86 and x64 platforms. Visual C++ can handle only Intel assembly syntax, and only for x86 (a small syntax sketch follows this list).
Support for SSE3, SSSE3, and SSE4 instruction sets. Visual C++ has support for SSE and SSE2.
Is based on EDG C++, which has a complete ISO/IEC 14882:2003 standard implementation. That means you can use / learn every C++ feature.
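To make the inline-assembler point concrete, a rough x86 sketch of the two syntaxes (the computation is only an illustration):

// Intel/MASM-style inline assembly (accepted by cl and icl, x86 only):
int intel_style()
{
    int v;
    __asm { mov v, 42 }
    return v;
}

// AT&T/GCC-style extended inline assembly (accepted by gcc and icc):
int att_style()
{
    int w;
    asm("movl $42, %0" : "=r"(w));
    return w;
}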
I've had only one experience with this compiler, compiling STLPort. It took MSVC around 5 minutes to compile it and ICC was compiling for more than an hour. It seems that their template compilation is very slow. Other than this I've heard only good things about it.
Here's something interesting:
Intel's compiler can produce different versions of pieces of code, with each version being optimised for a specific processor and/or instruction set (SSE2, SSE3, etc.). The system detects which CPU it's running on and chooses the optimal code path accordingly; the CPU dispatcher, as it's called.
"However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string," Fog details. "If the vendor string says 'GenuineIntel' then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."
OSnews article here
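For contrast, a dispatcher does not have to key on the vendor string; a hand-rolled one can select purely on the CPU's feature bits. A rough sketch using the MSVC/ICC __cpuid intrinsic, with made-up sum routines:

#include <intrin.h>   // __cpuid

static bool has_sse41()
{
    int info[4];
    __cpuid(info, 1);                     // leaf 1: processor feature flags
    return (info[2] & (1 << 19)) != 0;    // ECX bit 19 = SSE4.1
}

// Two implementations of the same routine; pick one once, at startup,
// based purely on what the CPU reports it can do.
static double sum_scalar(const double* p, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += p[i];
    return s;
}

static double sum_sse41(const double* p, int n)
{
    // A real build would put a hand-vectorized loop here.
    return sum_scalar(p, n);
}

double (*sum_dispatch)(const double*, int) = has_sse41() ? sum_sse41 : sum_scalar;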
I tried using Intel C++ at my previous job. IIRC, it did indeed generate more efficient code at the expense of compilation time. We didn't put it to production use though, for reasons I can't remember.
One important difference compared to MSVC is that the Intel compiler supports C99.
Anecdotally, I've found that the Intel compiler crashes more frequently than Visual C++. Its diagnostics are a bit more thorough and clearly written than VC's. Thus, it's possible that the compiler will give diagnostics that weren't given with VC, or will crash where VC didn't, making your conversion more expensive.
However, I do believe that Intel's compiler allows you to link with Microsoft runtimes like the CRT, easing the transition cost.
If you are interoperating with managed code you should probably stick with Microsoft's compiler.
Recent Intel compilers achieve significantly better performance on floating-point heavy benchmarks, and are similar to Visual C++ on integer heavy benchmarks. However, it varies dramatically based on the program and whether or not you are using link-time code generation or profile-guided optimization. If performance is critical for you, you'll need to benchmark your application before making a choice. I'd only say that if you are doing scientific computing, it's probably worth the time to investigate.
Intel allows you a month-long free trial of its compiler, so you can try these things out for yourself.
I've been using the Intel C++ compiler since the first Release of Intel Parallel Studio, and so far I haven't felt the temptation to go back. Here's an outline of dis/advantages as well as (some obvious) observations.
Advantages
Parallelization (vectorization, OpenMP, SSE) is unmatched by other compilers.
Toolset is simply awesome. I'm talking about the profiling, of course.
Inclusion of optimized libraries such as Threading Building Blocks (okay, so Microsoft replicated TBB with PPL), the Math Kernel Library (standard routines, and some implementations have MPI (!!!) support), Integrated Performance Primitives, etc. What's also great is that these libraries are constantly evolving (a tiny TBB sketch follows this list).
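As mentioned in the list above, a minimal TBB sketch (assuming the TBB headers and library are set up; the loop body is just a stand-in):

#include <tbb/parallel_for.h>
#include <cstddef>
#include <vector>

int main()
{
    std::vector<float> data(1000000, 1.0f);

    // Distribute the loop iterations across the available cores;
    // TBB's scheduler handles chunking and load balancing.
    tbb::parallel_for(std::size_t(0), data.size(), [&](std::size_t i) {
        data[i] *= 2.0f;
    });
    return 0;
}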
Disadvantages
Speed-up is Intel-only. Well, duh! It doesn't worry me, however, because on the server side all I have to do is choose Intel machines. I have no problem with that; some people might.
You can't really do OSS or anything like that on this, because the project file format is different. Yes, you can have both VS and IPS file formats, but that's just weird. You'll get lost in synchronising project options whenever you make a change. Intel's compiler has twice the number of options, by the way.
The compiler is a lot more finicky. It is far too easy to set incompatible project settings that will give you a cryptic compilation error instead of a nice meaningful explanation.
It costs additional money on top of Visual Studio.
Neutrals
I think that the performance argument is not a strong one anymore, because plenty of libraries such as Thrust or Microsoft AMP let you use GPGPU, which will outgun your CPU anyway.
I recommend anyone interested to get a trial and try out some code, including the libraries. (And yes, the libraries are nice, but C-style interfaces can drive you insane.)
The last time the company I work for compared the two was about a year ago (maybe two). The Intel compiler generated faster code, usually only a bit faster, but in some cases quite a bit.
But it couldn't handle some of the MS language extensions that we depended on, so we ended up sticking with MS. It was VS 2005 that we were comparing it to. And I'm wracking my brain to remember exactly what MS extension the Intel compiler couldn't handle. I'll come back and edit this post if I can remember.
Intel C++ Compiler has AMAZING (human) support. Talking to Microsoft can literally take days. My non-trivial issue was solved through chat in under 10 minutes (including membership verification time).
EDIT: I have talked to Microsoft about problems in their products such as Office 2007, even got a bug reported. While I eventually succeeded, the overall size and complexity of their products and organization hierarchy is daunting.

Why is it important for C / C++ Code to be compilable on different compilers?

I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable with different compilers.
Without any real-life experience with gcc/g++, it seems to me that it supports every major platform one can imagine, so code that compiles with g++ can run on almost any system. So why would someone bother to make his code compile with the MS compiler, the Intel compiler and others?
I can think of some reasons, too. As the FAQ suggests, I'll try to post them as an answer, as opposed to including them in my own question.
Edit: Conclusion
You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me:
Contributors are much more likely to work on my project, or just use it, if they can use the compiler of their choice
Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards reinforce each other, so it's a good idea
On the other hand, I still believe that there are other things which are more important, and now I know that sometimes it isn't important at all.
And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
Some reasons from the top of my head:
1) To avoid being locked in with a single compiler vendor (open source or not).
2) Compiling code with different compilers is likely to discover more errors: warnings are different and different compilers support the Standard to a different degree.
It is good to be compilable on MSVC, because some people may have projects that they build in MSVC that they want to link your code into, without having to set up an entirely different build system.
It is good to be compilable under the Intel compiler, because it frequently compiles faster code.
It is good to be compilable under Clang, because it can give better error messages and provide a better development experience, and it is an easier project to work on than GCC and so may gain additional benefits in the future.
In general, it is good to keep your options open, because there is no one compiler that fits all needs. GCC is a good compiler, and is great for most purposes, but you sometimes need something else.
And even if you're usually only going to be compiling under GCC, making sure your code compiles under other compilers is also likely to help find problems that could prevent your code from working with past and future versions of GCC, for instance, if there's something that GCC is less strict about now, but later adds checks for, another compiler may catch in advance, helping you keep your code cleaner. I've found this helpful in the reverse case, where GCC caught more potential problems with warnings than MSVC did (MSVC is the only compiler we needed to support, as we were only shipping on Windows, but we did a partial port to the Mac under GCC in our free time), which allowed me to produce cleaner code than I would have otherwise.
Portability. If you want your code to be accessible by the maximum number of people possible, you have to make it work on the widest range of compilers possible. It's the same idea as making a web site run on browsers other than IE.
Some of it is political. Companies have standards, people have favorite tools, etc. Telling someone that they should use X really puts some people off, and makes it really inaccessible to others.
Nemanja brings up a good point too: targeting a certain compiler locks you into using it. In the Open Source world, this might not be as big of a problem (although people could just stop developing on it and it becomes obsolete), but what if the company you buy it from discontinues the product, or goes out of business?
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.)
So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept?
Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code.
Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported.
If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used.
Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C.
Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished.
That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
If you are building for different platforms, you will end up using different compilers. Moreover, C++ compilers tend to be always slightly behind the C++ standard, which means they usually change their adherence to it as time passes. If you target the common denominator to all major compilers then the code maintenance cost will be lower.
It's very common for applications (especially open-source applications) that other developers want to use different compilers. Some would rather use Visual Studio with the MS compiler for development purposes. Some would rather use the Intel compiler for claimed performance benefits, and so on.
So here are the reasons I can think of:
if speed is the biggest concern and there is a special, highly optimized compiler for some platform
if you build a library with a C++ interface (classes and templates, instead of just functions). Because of name mangling and other issues, the library must be compiled with the same compiler as the client code, and if the client wants to use Visual C++, they must be able to compile the lib with it
if you want to support some very rare platform that does not have gcc support
(For me, those reasons are not significant, since I want to build a library that uses C++ internally, but has a C interface.)
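To illustrate that last point, a rough sketch of the "C++ inside, C interface outside" pattern (all names here are made up): the C ABI is stable across compilers, so clients can build with whatever compiler they like.

// mylib.h -- the only thing clients see
#ifdef __cplusplus
extern "C" {
#endif

typedef struct mylib_engine mylib_engine;   // opaque handle

mylib_engine* mylib_create(void);
int           mylib_run(mylib_engine* e, int input);
void          mylib_destroy(mylib_engine* e);

#ifdef __cplusplus
}
#endif

// mylib.cpp -- C++ (classes, templates, STL) hidden behind the handle
#include "mylib.h"
#include <vector>

struct mylib_engine { std::vector<int> history; };

extern "C" mylib_engine* mylib_create(void)          { return new mylib_engine; }
extern "C" int mylib_run(mylib_engine* e, int input) { e->history.push_back(input); return input * 2; }
extern "C" void mylib_destroy(mylib_engine* e)       { delete e; }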
Typically these are the reasons that I've found:
cross-platform (windows, linux, mac)
different developers doing development on different OS's (while not optimal, it does happen - testing usually takes place on the target platform only).
Compiler companies go out of business - or stop development on that language. If you know your program compiles/runs well using another compiler, you've hedged your bets.
I'm sure there are other answers as well, but these are the most common reasons I've run into so far.
Several projects use GCC/G++ as a "day-to-day" compiler for normal use, but every so often will check to make sure their code follows the standards with the Comeau C/C++ compiler. Their website looks like a nightmare, and the compiler isn't free, but it's known as possibly the most standards-compliant compiler around, and will warn you about things many compilers will silently accept or explicitly allow as a nonstandard extension (yes, I'm looking at you, Mr. I-don't-mind-and-actually-actively-support-your-efforts-to-do-pointer-arithmetic-on-void-pointers-GCC).
Compiling every so often with a compiler as strict as Comeau (or, even better, compiling with as many compilers as you can get your hands on) will let you know of errors people might experience when trying to compile your code, things your compiler allows you to do that it shouldn't, and potentially things that other compilers don't allow you to do that you should. Writing ANSI C or C++ should be an important goal for code you intend to use on multiple platforms, and using the most standards-compliant compiler around is a good way to do that.
(Disclaimer: I don't have Comeau, and don't plan on getting it, and can't get it because I'm on OS X. I do C, not C++, so I can actually know the whole language, and the average C compiler is much closer to the C standard than the average C++ compiler to the C++ standard, so it's less of an issue for me. Just wanted to put this in here because this started to look like an ad for Comeau. It should be seen more as an ad for compiling with many different compilers.)
This is one of those "it depends" questions. For open-source code, it's good to be portable to multiple compilers. After all, having people in diverse environments build the code is sort of the point.
But for closed source, this is a lot less important. You never want to unnecessarily tie yourself to a specific compiler. But in most of the places I've worked, compiler portability didn't even make it into the top 10 of things we cared about. Even if you never use anything other than standard C/C++, switching a large code base to a new compiler is a dangerous thing to do. Compilers have bugs. Sometimes your code will have bugs that are benign on one compiler but suddenly become a problem on another.
I remember one transition, where one compiler thought this code was just fine:
for (int ii = 0; ii < n; ++ii) { /* some code */ }
for (int ii = 0; ii < y; ++ii) { /* some other code */ }
The newer compiler, however, complained that ii had been declared twice, so we had to go through all of our code and declare loop variables before the loops in order to switch.
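The mechanical fix looked roughly like this (a sketch, not the original code): hoist the declaration so both loops reuse the same variable.

int ii;
for (ii = 0; ii < n; ++ii) { /* some code */ }
for (ii = 0; ii < y; ++ii) { /* some other code */ }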
One place I worked was so careful about unintended side effects of compiler switches, that they checked specific compilers into each source tree, and once the code shipped would only use that one compiler to do updates on that code base - forever.
Another place would try out a new compiler for 6 months to a year before they switched over to it.
I find gcc a slow compiler on Windows (I have nothing to compare against under Linux). So I (sometimes) want to compile my code under other compilers, just for faster development cycles.
I don't think anyone has mentioned it so far, but another reason may be access to certain platform-specific features: Many operating system vendors have special versions of GCC, or even their own home-grown (or licensed and modified) compilers. So if you want your code to run well on several platforms, you may need to choose the right compiler on each platform. Be that an embedded system, MacOS, Windows etc.
Also, speed may be an issue (both compilation speed and execution speed). Back in the PPC days, GCC produced notoriously slow code on PowerPC CPUs, so Apple put a bunch of engineers on GCC to improve that (GCC was very new for the Mac, and all other PowerPC platforms were small). Platforms that are used less may be optimized less in GCC, so using another compiler that's been written for that platform can be faster.
But as a final summary: While there is ideal value in compiling on several compilers, in practice, this is mainly interesting for cross-platform software (and open-source software, because it often gets made cross-platform fairly quickly, and contributors have it easier if they can use their compiler of choice instead of having to learn a new one). If you need to ship on one platform only, shipping and maintenance are usually much more important than investing in building on several compilers if you're only releasing the builds made with one of them. However, you will want to clearly document any deviations from the standard (GCC-isms, for instance) to make the job of porting easier, should you ever have to do it.
Both the Intel compiler and LLVM are faster than gcc. The real reasons to use gcc are:
Infinite hardware support (on no other compiler can you compile Lego Mindstorms code on your old DEC).
it's cheap
best spaghetti optimizer in the business.

How to measure performance in a C++ (MFC) application?

What good profilers do you know?
What is a good way to measure and tweak the performance of a C++ MFC application?
Is analysis of algorithms really necessary? http://en.wikipedia.org/wiki/Algorithm_analysis
I strongly recommend AQTime if you are staying on the Windows platform. It comes with a load of profilers, including static code analysis, and works with most important Windows compilers and systems, including Visual C++, .NET, Delphi, Borland C++, Intel C++ and even gcc. And it integrates into Visual Studio, but can also be used standalone. I love it.
If you're (still) using Visual C++ 6.0, I suggest using the built-in profiler. For more recent versions you could try Compuware DevPartner Performance Analysis Community Edition.
For Windows, check out Xperf, which ships free with the Windows SDK. It uses sampled profiling, has a useful UI, and does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like:
Who is using the most CPU? Drill down to function name using call stacks.
Who is allocating the most memory?
Who is doing the most registry queries?
Disk writes? etc.
You will be quite surprised when you find the bottlenecks, as they are probably not where you expected!
It's been a while since I profiled unmanaged code, but when I did I had good results with Intel's vtune. I'm sure somebody else will tell us if that's been overtaken.
Algorithmic analysis has the potential to improve your performance more profoundly than anything you'll find with a profiler, but only for certain classes of application. If you operate over reasonably large sets of data, algorithmic analysis might find ways to be more efficient in CPU/Memory/both, but if your app is mainly form-fill with a relational database for storage, it might not offer you much.
Intel Thread Checker via the VTune performance analyzer. Check this picture for the view I use the most; it tells me which functions eat up most of my time.
I can drill down further and decompose which functions inside them eat up more time, etc. There are different views based on what you are watching: total time (time within the function + children), self time (time spent only in code running inside the function itself), etc.
This tool does a lot more than profiling, but I haven't explored it all. I would definitely recommend it. The tool is also available for download as a fully functional trial version that can run for 30 days. If you have cost constraints, I would say this window is all that you require to pinpoint your problem.
Trial download here - https://registrationcenter.intel.com/RegCenter/AutoGen.aspx?ProductID=907&AccountID=&ProgramID=&RequestDt=&rm=EVAL&lang=
PS: I have also played with Rational, but for some reason I did not take to it much. I suspect Rational might be more expensive than Intel, too.
Tools (like true time from DevPartner) that let you see hit counts for source lines let you quickly find algorithms that have bad 'Big O' complexity. You still have to analyse the algorithm to determine how to reduce the complexity.
I second AQTime, having both AQTime and Compuware's DevPartner, for most cases. The reason being that AQTime will profile any executable that has a valid PDB file, whereas TrueTime requires you to make an instrumented build. This greatly speeds up and simplifies ad hoc profiling. DevPartner is also quite a bit more expensive, if this is an issue. Where DevPartner comes into its own is with BoundsChecker, which I still rate as a better tool for catching leaks and overwrites than AQTime's execution profiler. TrueTime can be slightly more accurate than AQTime, but I have never found this to be an issue.
Is profiling worthwhile? IMO yes, if you need performance gains in a local application. I think you also learn a lot about how your program and algorithms really work, and the cost implications of using certain types of object classes for storing and iterating through your data.
Glowcode is a very nice profiler (when it works). It can attach to a running program and requires only symbol files - you don't need to rebuild.
Some versions of Visual Studio 2005 (and maybe 2008) actually come with a pretty good performance profiler.
If you have it, it should be available under the Tools menu,
or you can search for a way to open the "Performance Explorer" window to start a new performance session.
A link to MSDN
FYI, some versions of Visual Studio only come with a non-optimizing compiler. For one of my MFC apps, if I compile it with MinGW/MSYS (the gcc compiler) with -O3, it runs about 5-10x as fast as my release compile with Visual Studio.
For example, I have an OpenStreetMap XML compiler, and the gcc-compiled version takes about 3 minutes to process a 2.7 GB XML file. My Visual Studio compile of the same code takes about 18 minutes to run.

Difference between Visual C++ 2008 and 2005

I couldn't find any useful information on Microsoft's site, so here is the question: has the compiler in Visual C++ 2008 been improved significantly since the 2005 version? I'm especially looking for better optimization.
Straight from the horse's mouth...
http://msdn.microsoft.com/en-us/library/bb384632.aspx
Somasegar has some notes in this blog post.
Mainly about incremental build improvements and multi core improvements.
According to one of our senior developers VS2008 features extended support for multicore compilation ( file-wise instead of project-wise I'm told ), so there might be a reasonable performance optimization for your project.
Have you looked here, here or here?
If yes, and no information was there, you could start by checking the compiler version (cl.exe) and the linker version (link.exe), and then run some performance (optimization) tests and see who the winner is.
Usually a newer version of cl.exe will be better. The same cannot be said about the user interface of Visual Studio (at least from my experience).
In my experience, compiler optimizations rarely improve more than a few percent between versions at most; if you really need more performance, that few percent just isn't going to cut it--you're going to have to get down and dirty in the code if you want more.
Remember, compilers are extremely dumb, and can usually be outwitted by a smart programmer; the only question is whether it's worth your time and effort to do so. If you have a single core function that makes up 90% of your CPU time, it might definitely be so. If runtime is spread equally over ten thousand lines of code, probably not.
Of course, if your speed problem is due to slow algorithms, no compiler can save you.