I couldn't find any useful information on Microsoft's site, so here is the question: has the compiler in Visual C++ 2008 been improved significantly since the 2005 version? I'm especially looking for better optimization.
Straight from the horse's mouth...
http://msdn.microsoft.com/en-us/library/bb384632.aspx
Somasegar has some notes in this blog post.
Mainly about incremental build improvements and multi-core improvements.
According to one of our senior developers, VS2008 features extended support for multicore compilation (file-wise instead of project-wise, I'm told; presumably the new /MP compiler switch), so there may be a reasonable build-speed improvement for your project.
Have you looked here, here, or here?
If so, and no information was there, you could start by checking the compiler version (cl.exe) and the linker version (link.exe), then run some performance/optimization tests of your own and see who the winner is.
Usually a newer version of cl.exe will be better. The same can't be said about the user interface of Visual Studio (at least in my experience).
In my experience, compiler optimizations rarely improve by more than a few percent between versions; if you really need more performance, that few percent isn't going to cut it, and you're going to have to get down and dirty in the code if you want more.
Remember, compilers are extremely dumb and can usually be outwitted by a smart programmer; the only question is whether it's worth your time and effort to do so. If you have a single core function that accounts for 90% of your CPU time, it may well be. If runtime is spread evenly over ten thousand lines of code, probably not.
Of course, if your speed problem is due to slow algorithms, no compiler can save you.
Related
I have a C++ project where, even if I only change minor code, a lot of time is spent on compiling, as below.
I've also noticed that while compiling, cl.exe consumes a huge amount of memory, growing by several MB per second. Does anyone have an idea about this situation?
The code's result is correct, but a lot of time is wasted on compiling every build. I hope someone can help.
That's actually not a lot of memory.
Modern C++ compilers assume that developers have decent PCs. Unlike JIT compilers (as in Java JVMs), normal C++ compilers have a very strong bias towards better code generation at the expense of compile-time efficiency.
This is also driven by the fact that PCs are generally a lot cheaper than developers. 8-16 GB of memory is not excessively expensive.
Is anyone here using the Intel C++ compiler instead of Microsoft's Visual C++ compiler?
I would be very interested to hear your experience about integration, performance and build times.
The Intel compiler is one of the most advanced C++ compilers available; it has a number of advantages over, for instance, the Microsoft Visual C++ compiler, and one major drawback. The advantages include:
Very good SIMD support; as far as I've been able to find out, it is the compiler with the best support for SIMD instructions.
Supports both automatic parallelization (multi-core optimizations) and manual parallelization (through OpenMP), and does both very well; see the OpenMP sketch after this list.
Supports CPU dispatching. This is really important, since it allows the compiler to target the processor the program is actually running on with optimized instructions. As far as I can tell, it is the only C++ compiler available that does this, unless G++ has introduced it in theirs by now.
It is often shipped with optimized libraries, such as math and image libraries.
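To make the OpenMP item above concrete, here is a minimal sketch of a manually parallelized loop; it is my own generic example, not Intel-specific code. It should build with /Qopenmp on the Intel compiler or /openmp on Visual C++:

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0);
        double sum = 0.0;
        // Each thread accumulates a private partial sum; the reduction
        // clause combines them at the end of the parallel region.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];
        std::printf("dot = %f\n", sum);  // expect 2097152.0
        return 0;
    }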
However, it has one major drawback: the dispatcher mentioned above only works on Intel CPUs, which means the advanced optimizations are left out on AMD CPUs. There is a workaround for this, but it is still a major problem with the compiler.
To work around the dispatcher problem, it is possible to replace the generated dispatcher code with a version that also works on AMD processors; one can, for instance, use Agner Fog's asmlib library, which replaces the compiler-generated dispatcher function. Much more information about the dispatching problem, and more detailed technical explanations of some of these topics, can be found in the Optimizing software in C++ paper, also from Agner (which is really worth reading).
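For illustration, here is a hedged sketch of the kind of vendor check being discussed; the function name is mine and this is not Intel's actual dispatcher code. It uses the __cpuid intrinsic shared by MSVC and the Intel compiler on Windows; a vendor-neutral dispatcher would branch on the feature bits from CPUID leaf 1 instead of on this string:

    #include <intrin.h>   // __cpuid (MSVC / Intel C++ intrinsic)
    #include <cstring>
    #include <string>

    // Returns the 12-character vendor ID, e.g. "GenuineIntel" or "AuthenticAMD".
    std::string CpuVendor() {
        int regs[4] = { 0 };
        __cpuid(regs, 0);                      // leaf 0: vendor ID in EBX, EDX, ECX
        char vendor[13] = { 0 };
        std::memcpy(vendor + 0, &regs[1], 4);  // EBX
        std::memcpy(vendor + 4, &regs[3], 4);  // EDX
        std::memcpy(vendor + 8, &regs[2], 4);  // ECX
        return std::string(vendor);
    }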
On a personal note, I have used the Intel C++ compiler with Visual Studio 2005, where it worked flawlessly. I didn't experience any problems with Microsoft-specific language extensions; it seemed to understand the ones I used, but perhaps the ones mentioned by John Knoeller were different from the ones I had in my projects.
While I like the Intel compiler, I'm currently working with the Microsoft C++ compiler, simply because of the extra financial investment the Intel compiler requires. I would only use the Intel compiler as an alternative to Microsoft's or the GNU compiler if performance were critical to my project and I had the financial side in order ;)
I'm not using the Intel C++ compiler at work or personally (I wish I were).
I would use it because it has:
Excellent inline assembler support. Intel C++ supports both Intel and AT&T (GCC) assembler syntaxes, for x86 and x64 platforms. Visual C++ can handle only Intel assembly syntax and only for x86.
Support for the SSE3, SSSE3, and SSE4 instruction sets. Visual C++ has support for SSE and SSE2. (See the intrinsics sketch after this list.)
It is based on the EDG C++ front end, which has a complete ISO/IEC 14882:2003 standard implementation. That means you can use and learn every C++ feature.
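As a taste of the SSE3 support mentioned in the list, a small sketch of my own using the horizontal-add intrinsic that SSE3 introduced; the function name is mine:

    #include <pmmintrin.h>  // SSE3 intrinsics (also pulls in SSE/SSE2 headers)

    // Sums the four floats in v using SSE3 horizontal adds.
    float HorizontalSum(__m128 v) {
        __m128 t = _mm_hadd_ps(v, v);  // (v0+v1, v2+v3, v0+v1, v2+v3)
        t = _mm_hadd_ps(t, t);         // every lane now holds v0+v1+v2+v3
        float r;
        _mm_store_ss(&r, t);           // store the lowest lane
        return r;
    }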
I've had only one experience with this compiler, compiling STLPort. It took MSVC around 5 minutes to compile it and ICC was compiling for more than an hour. It seems that their template compilation is very slow. Other than this I've heard only good things about it.
Here's something interesting:
Intel's compiler can produce different versions of pieces of code, with each version being optimised for a specific processor and/or instruction set (SSE2, SSE3, etc.). The system detects which CPU it's running on and chooses the optimal code path accordingly; the CPU dispatcher, as it's called.
"However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string," Fog details. "If the vendor string says 'GenuineIntel' then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."
OSnews article here
I tried using Intel C++ at my previous job. IIRC, it did indeed generate more efficient code at the expense of compilation time. We didn't put it to production use though, for reasons I can't remember.
One important difference compared to MSVC is that the Intel compiler supports C99.
Anecdotally, I've found that the Intel compiler crashes more frequently than Visual C++. Its diagnostics are a bit more thorough and clearly written than VC's. Thus, it's possible that the compiler will give diagnostics that weren't given with VC, or will crash where VC didn't, making your conversion more expensive.
However, I do believe that Intel's compiler allows you to link with Microsoft runtimes like the CRT, easing the transition cost.
If you are interoperating with managed code you should probably stick with Microsoft's compiler.
Recent Intel compilers achieve significantly better performance on floating-point heavy benchmarks, and are similar to Visual C++ on integer heavy benchmarks. However, it varies dramatically based on the program and whether or not you are using link-time code generation or profile-guided optimization. If performance is critical for you, you'll need to benchmark your application before making a choice. I'd only say that if you are doing scientific computing, it's probably worth the time to investigate.
Intel allows you a month-long free trial of its compiler, so you can try these things out for yourself.
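Whichever compiler you benchmark, measure optimized release builds with a high-resolution timer. A minimal sketch of a timing helper (the name is mine) built on the Win32 QueryPerformanceCounter API:

    #include <windows.h>

    // Times one call to f() in seconds using the high-resolution counter.
    // Run the workload several times and take the best or median result.
    template <typename F>
    double TimeSeconds(F f) {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        f();
        QueryPerformanceCounter(&t1);
        return double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    }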
I've been using the Intel C++ compiler since the first Release of Intel Parallel Studio, and so far I haven't felt the temptation to go back. Here's an outline of dis/advantages as well as (some obvious) observations.
Advantages
Parallelization (vectorization, OpenMP, SSE) is unmatched by other compilers.
Toolset is simply awesome. I'm talking about the profiling, of course.
Inclusion of optimized libraries such as Threading Building Blocks (okay, so Microsoft replicated TBB with PPL), Math Kernel Library (standard routines, and some implementations have MPI (!!!) support), Integrated Performance Primitives, etc. What's also great is that these libraries are constantly evolving.
Disadvantages
Speed-up is Intel-only. Well, duh! It doesn't worry me, however, because on the server side all I have to do is choose Intel machines. I have no problem with that; some people might.
You can't really do OSS or anything like that on this, because the project file format is different. Yes, you can have both VS and IPS file formats, but that's just weird. You'll get lost in synchronising project options whenever you make a change. Intel's compiler has twice the number of options, by the way.
The compiler is a lot more finicky. It is far too easy to set incompatible project settings that will give you a cryptic compilation error instead of a nice meaningful explanation.
It costs additional money on top of Visual Studio.
Neutrals
I think that the performance argument is not a strong one anymore, because plenty of libraries such as Thrust or Microsoft AMP let you use GPGPU, which will outgun your CPU anyway.
I recommend anyone interested to get a trial and try out some code, including the libraries. (And yes, the libraries are nice, but C-style interfaces can drive you insane.)
The last time the company I work for compared the two was about a year ago, (maybe 2). The Intel compiler generated faster code, usually only a bit faster, but in some cases quite a bit.
But it couldn't handle some of the MS language extensions that we depended on, so we ended up sticking with MS. It was VS 2005 that we were comparing it to. And I'm wracking my brain to remember exactly what MS extension the Intel compiler couldn't handle. I'll come back and edit this post if I can remember.
Intel C++ Compiler has AMAZING (human) support. Talking to Microsoft can literally take days. My non-trivial issue was solved through chat in under 10 minutes (including membership verification time).
EDIT: I have talked to Microsoft about problems in their products such as Office 2007, even got a bug reported. While I eventually succeeded, the overall size and complexity of their products and organization hierarchy is daunting.
I have asked my team to port our VC6 application to VC2005, and they are ready to allot some time to do it. Now they need to know what the advantages of porting are.
I don't think they really understand what it means to adhere to the standard.
Help me list the advantages of doing the port.
The problems I am facing are:
1) No debugging support for standard containers
2) Not able to use the Boost libraries
3) We do a lot of query generation but use the CString Format function, which is not type safe
4) Much time is spent troubleshooting VC6 problems, like vector<vector<int>> failing to compile without a space between the closing >>
Advantages:
More standards-compliant compiler. This is a good thing because it will make it easier to port to another platform (if you ever want to do that). It also means you can look things up in the standard rather than in Microsoft's documentation. In the end you will have to upgrade your compiler at some point in the future; the sooner you do it, the less work it will be.
VC6 is no longer supported by MS. The new SDK doesn't work with it, 64-bit doesn't work, and I don't think they're still fixing bugs either.
Nicer IDE. Personally, I really prefer tabs to MDI. I also think that it's much easier to configure Visual Studio (create custom shortcuts, menu bars, etc.). Of course that's subjective. Check out an express edition and see if you agree.
Better plugin support. Some plugins aren't available for VC6.
Disadvantages:
Time it takes to port. This very much depends on what kind of code you have. If your code heavily uses non-standards compliant VC6 features, it might take some time. As Andrew said, if you're maintaining an old legacy project, it might not be worth it.
Worse Performance. If you're developing on really old computers, Visual Studio may be too slow.
Cost. I just had a quick look and Visual Studio licenses seem to be a bit more expensive than VC6's.
Why VC2005? If you are going to invest the time (and testing!) to upgrade from VC6, why not target VC2008?
If you're maintaining a legacy project then there may be no advantage in porting. Simply converting projects and fixing up compiler problems could take weeks of time and introduce instability.
If you're actively developing a product then the main advantage is that you'll no longer be using a product that's over eight years old - which is clearly a good thing.
More recent versions of the Windows SDK don't work with VC6 - if you want to use the latest Windows features, you'll need a more recent compiler.
The later compilers are said to be more standards conforming. I'm sorry I can't be more specific. I do know that VC6 generates lots of compiler warnings just for using standard template classes.
If you use any external libraries that are compiled with a later compiler, you'll need to use something compatible.
Prepare for something of a harsh transition - the IDEs are more different than they should be.
To ensure complete compatibility of the application with different versions of the base platform, and to rectify any errors found in the process, so that end users have the freedom to use their own version of the base platform.
I'm not saying you shouldn't convert, but to take your specific points:
1) No debugging support for standard containers
I debug code using standard containers with VC++ 6 all the time. What's your problem here?
2)Not able to use boost libraries
True. You may find you can use some of the simpler stuff.
4) Much time is spent troubleshooting VC6 problems, like vector<vector<int>> failing to compile without a space between the closing >>
Um, that is a syntax error (at least in the version of C++ understood by VC++6) and will be flagged as such. If your team is spending "much time" on this sort of thing, you need another team.
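For reference, the two-character difference being discussed:

    #include <vector>

    std::vector<std::vector<int> > v1;  // VC6 / C++98: the space is required;
                                        // without it, >> lexes as the shift operator
    std::vector<std::vector<int>> v2;   // only became legal later, when C++0x
                                        // added a special parsing rule for >>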
Edit:
3) We do a lot of query generation but use the CString Format function, which is not type safe
It will be equally type-unsafe under VS2005. I don't see why this is a reason for porting. If you want type safety use the standard C++ I/O mechanisms.
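As an illustration, a minimal sketch (the function is hypothetical) of building a query with iostreams, where each argument's type is checked at compile time instead of trusted to match a format string:

    #include <sstream>
    #include <string>

    std::string MakeQuery(const std::string& table, int id) {
        std::ostringstream os;
        // operator<< overloads are selected per argument type; there is
        // no format string to get out of sync with the arguments.
        os << "SELECT * FROM " << table << " WHERE id = " << id;
        return os.str();
    }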
If your team can't see any advantage and you are unable to explain any advantage, why are you asking them to do this?
Sounds like you are porting just for the sake of it.
What good profilers do you know?
What is a good way to measure and tweak the performance of a C++ MFC application?
Is analysis of algorithms really necessary? http://en.wikipedia.org/wiki/Algorithm_analysis
I strongly recommend AQTime if you are staying on the Windows platform. It comes with a load of profilers, including static code analysis, and works with most important Windows compilers and systems, including Visual C++, .NET, Delphi, Borland C++, Intel C++ and even gcc. And it integrates into Visual Studio, but can also be used standalone. I love it.
If you're (still) using Visual C++ 6.0, I suggest using the built-in profiler. For more recent versions you could try Compuware DevPartner Performance Analysis Community Edition.
For Windows, check out Xperf, which ships free with the Windows SDK. It uses sampled profiling, has a useful UI, and does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like:
Who is using the most CPU? Drill down to function name using call stacks.
Who is allocating the most memory?
Who is doing the most registry queries?
Disk writes? etc.
You will be quite surprised when you find the bottlenecks, as they are probably not where you expected!
It's been a while since I profiled unmanaged code, but when I did I had good results with Intel's VTune. I'm sure somebody else will tell us if it's been overtaken.
Algorithmic analysis has the potential to improve your performance more profoundly than anything you'll find with a profiler, but only for certain classes of application. If you operate over reasonably large sets of data, algorithmic analysis might find ways to be more efficient in CPU/Memory/both, but if your app is mainly form-fill with a relational database for storage, it might not offer you much.
Intel Thread Checker via the VTune performance analyzer. Check this picture for the view I use the most, which tells me which function eats up most of my time.
I can drill down further and decompose which functions inside them eat up more time, etc. There are different views based on what you are watching (total time = time within the function plus its children; self time = time spent only in code running inside the function itself, etc.).
This tool does a lot more than profiling, but I haven't explored it all. I would definitely recommend it. The tool is also available for download as a fully functional trial version that can run for 30 days. If you have cost constraints, I would say this window is all that you require to pinpoint your problem.
Trial download here - https://registrationcenter.intel.com/RegCenter/AutoGen.aspx?ProductID=907&AccountID=&ProgramID=&RequestDt=&rm=EVAL&lang=
PS: I have also played with Rational's profiler, but for some reason I did not take much to it. I suspect Rational might be more expensive than Intel too.
Tools (like TrueTime from DevPartner) that let you see hit counts for source lines let you quickly find algorithms with bad 'Big O' complexity. You still have to analyse the algorithm to determine how to reduce the complexity.
I second AQTime, having both AQTime and Compuware's DevPartner, for most cases. The reason being that AQTime will profile any executable that has a valid PDB file, whereas TrueTime requires you to make an instrumented build. This greatly speeds up and simplifies ad hoc profiling. DevPartner is also quite a bit more expensive, if this is an issue. Where DevPartner comes into its own is with BoundsChecker, which I still rate as a better tool for catching leaks and overwrites than AQTime's execution profiler. TrueTime can be slightly more accurate than AQTime, but I have never found this to be an issue.
Is profiling worthwhile? IMO, yes, if you need performance gains in a local application. I think you also learn a lot about how your program and algorithms really work, and the cost implications of using certain types of object classes for storing and iterating through your data.
Glowcode is a very nice profiler (when it works). It can attach to a running program and requires only symbol files - you don't need to rebuild.
Some versions of Visual Studio 2005 (and maybe 2008) actually come with a pretty good performance profiler.
If you have it, it should be available under the Tools menu, or you can search for a way to open the "Performance Explorer" window to start a new performance session.
A link to MSDN
FYI, some versions of Visual Studio only come with a non-optimizing compiler. For one of my MFC apps, if I compile it with MinGW/MSYS (the gcc compiler) with -O3, it runs about 5-10x as fast as my release build from Visual Studio.
For example, I have an OpenStreetMap XML compiler, and the gcc-compiled version takes about 3 minutes to process a 2.7 GB XML file. My Visual Studio build of the same code takes about 18 minutes.
I use the STL quite heavily in performance-critical C++ code under Windows. One possible "cheap" way to get some extra performance would be to change to a faster STL library.
According to this post, STLport is faster and uses less memory; however, it's a few years old.
Has anyone made this change recently and what were your results?
I haven't compared the performance of STLport to MSVC's library, but I'd be surprised if there were a significant difference. (In release mode, of course; debug builds are likely to be quite different.) Unfortunately the link you provided - and every other comparison I've seen - is too light on details to be useful.
Before even considering changing standard library providers I recommend you heavily profile your code to determine where the bottlenecks are. This is standard advice; always profile before attempting any performance improvements!
Even if profiling does reveal performance issues in standard library containers or algorithms I'd suggest you first analyse how you're using them. Algorithmic improvements and appropriate container selection, especially considering Big-O costs, are far more likely to bring greater returns in performance.
Before making the switch, be sure to test the MS (in fact, Dinkumware) library with checked iterators turned off. For some weird reason, they are turned on by default even in release builds and that makes a big difference when it comes to performance.
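For the VS2005/2008-era library, disabling them looks roughly like this; verify the macros for your exact version, and define them identically in every translation unit and linked library, or you risk ABI mismatches:

    // Usually set project-wide in the preprocessor settings rather than
    // in code, and always before any standard header is included.
    #define _SECURE_SCL 0               // checked iterators (on by default, even in release)
    #define _HAS_ITERATOR_DEBUGGING 0   // iterator debugging (debug builds)
    #include <vector>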
We did the opposite recently. Our application is a cross-platform C++ server program, built on Windows with VS 2008 (x86) and on HP-UX ia64 and Linux with gcc 4.3. On every platform we used STLport 5.1.7 as the STL library and Boost 1.38.
Some time ago, in order to compare performance, we also built our application without STLport and measured the results.
On Windows, performance was slightly better without it, so we chose to stop using STLport with VS 2008 and use the default VS 2008 STL library.
On HP-UX ia64 there was a 20% decrease in performance. Caliper (the HP-UX profiler) showed that string assignments took more time, and that inside string assignment in the default gcc STL library there were calls to pthread_mutex_unlock. As far as I know, pthread_mutex_lock/pthread_mutex_unlock are used because the default gcc STL library uses COW (copy-on-write) strings. Our application does lots of string assignments, and as a result of the COW strings we got worse performance. So we still use STLport on HP-UX with gcc.
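As a minimal illustration of the pattern that hurt us, assuming a refcounted COW string like that era's libstdc++ implementation:

    #include <string>

    void AssignHeavy(std::string& out, const std::string& in) {
        for (int i = 0; i < 1000000; ++i)
            out = in;  // under COW: bump and drop a shared refcount each time,
                       // with mutex/atomic protection in threaded builds
    }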
In a project I worked on that made quite heavy use of the STL, switching to STLport resulted in getting things done in half the time it took with Microsoft's STL implementation. It's no proof, but it's a good sign of performance, I guess. I believe it's partly due to STLport's advanced memory management system.
I do remember getting some warnings when making this change, but nothing that couldn't be worked around quickly. As a drawback, I'd add that debugging with STLport is less easy with Visual Studio's debugger than with Microsoft's STL (update: it seems there is a way to tell the debugger how to handle STLport containers - thanks, Jalf!).
The latest version goes back to October 2008 so there are still people working on it. See here for downloading it.
I did the exact opposite a year ago, and here is why:
STLport is updated very rarely (as far as I know, only one developer is working on it; you can take a look at their commit history).
There are problems building it whenever you switch to a new Visual Studio release. You wait for the new make file, or you create it yourself, and sometimes you can't build it at all because of some configuration option you're using. Then you wait for them to fix the build.
When you submit a bug report you wait forever, so basically there is no support (maybe if you pay). You usually end up fixing it yourself, if you know how.
The STL in Visual Studio has checked iterators and debug iterator support that is much better than STLport's. This is where most of the slowdown comes from, especially in debug. Checked iterators are enabled in both debug and release, which is not something everybody knows (you have to disable them yourself).
The STL in Visual Studio 2008 SP1 comes with TR1, which you don't get in STLport.
The STL in Visual Studio 2010 uses rvalue references from C++0x, and this is where you get a real performance benefit (see the sketch below).
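A minimal sketch of that rvalue-reference benefit (requires VS2010 / C++0x):

    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<std::string> v;
        std::string s(1024, 'x');
        v.push_back(std::move(s));  // steals s's buffer instead of deep-copying;
                                    // vector growth can likewise move elements
        return 0;
    }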
If you use the STLPort you will enter a world where every STL-based third party library you use will have to be recompiled with STLPort as well to avoid problems...
STLPort does have a different memory strategy, but if this is your bottleneck then your performance gain path is changing the allocator (switching to Hoard for example), not changing the STL.
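To illustrate what changing the allocator means in practice, a hedged C++03-style sketch; PoolAlloc is my own name, and the malloc/free calls stand in for whatever pool (Hoard, for example) you would actually delegate to:

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    template <typename T>
    struct PoolAlloc {
        typedef T value_type;
        typedef T* pointer;
        typedef const T* const_pointer;
        typedef T& reference;
        typedef const T& const_reference;
        typedef std::size_t size_type;
        typedef std::ptrdiff_t difference_type;
        template <typename U> struct rebind { typedef PoolAlloc<U> other; };

        PoolAlloc() {}
        template <typename U> PoolAlloc(const PoolAlloc<U>&) {}

        pointer allocate(size_type n, const void* = 0) {
            // Swap these calls for your pool's allocation routine.
            void* p = std::malloc(n * sizeof(T));
            if (!p) throw std::bad_alloc();
            return static_cast<pointer>(p);
        }
        void deallocate(pointer p, size_type) { std::free(p); }

        void construct(pointer p, const T& v) { new (p) T(v); }
        void destroy(pointer p) { p->~T(); }
        pointer address(reference x) const { return &x; }
        const_pointer address(const_reference x) const { return &x; }
        size_type max_size() const { return size_type(-1) / sizeof(T); }
    };

    template <typename T, typename U>
    bool operator==(const PoolAlloc<T>&, const PoolAlloc<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const PoolAlloc<T>&, const PoolAlloc<U>&) { return false; }

    // Usage: the container's memory strategy changes, the STL itself does not.
    // std::vector<int, PoolAlloc<int> > v;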
I haven't tried it, but as far as I know there have been no major changes to Microsoft's STL implementation. (There are no huge new optimizations in the VS2008 compiler over 2005 either.) So if STLport was faster then, it's probably still the case.
But that's just speculation. :)
Be sure to report back on the results if you try it out.
One benefit of STLport is that it's open source.