I have a client who is still using Visual Studio 6 for building production systems. They write multi-threaded systems that use STL and run on multi-processor machines.
Occasionally, when they change the spec of or increase the load on one of their server machines, they get 'weird', difficult-to-reproduce errors...
I know that there are several issues with Visual Studio 6 development and I'd like to convince them to move to Visual Studio 2005 or 2008 (they have Visual Studio 2005 and use it for some projects).
The purpose of this question is to put together a list of known issues or reasons to upgrade along with links to where these issues are discussed or reported. It would also be useful to have real life 'horror stories' of how these issues have bitten you.
It is not supported on 64-bit systems, has compatibility issues with Vista, and was moved out of extended support by Microsoft on April 8, 2008:
http://msdn.microsoft.com/en-us/vbrun/ms788708.aspx
Unpatched, the VC6 STL is not thread-safe. See here: http://www.amanjit-gill.de/articles/vc6_stl.html - the patches aren't included in the service packs, and you have to get them from Dinkumware directly (from http://www.dinkumware.com/vc_fixes.html) and then apply them to each installation...
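A minimal sketch (my own, not taken from the article above) of the kind of pattern that bites: each thread only works on its own copy of a string, which looks safe, but the unpatched VC6 implementation uses reference-counted (copy-on-write) internals shared between those copies without proper synchronization. The names here are purely illustrative.

    #include <string>
    #include <windows.h>
    #include <process.h>

    std::string g_shared = "some shared text";

    unsigned __stdcall worker(void*)
    {
        for (int i = 0; i < 100000; ++i) {
            std::string local = g_shared;  // copy shares the COW buffer in VC6's STL
            local += 'x';                  // triggers an unsynchronized copy-on-write
        }
        return 0;
    }

    int main()
    {
        HANDLE threads[2];
        for (int i = 0; i < 2; ++i)
            threads[i] = (HANDLE)_beginthreadex(0, 0, worker, 0, 0, 0);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        CloseHandle(threads[0]);
        CloseHandle(threads[1]);
        return 0;  // fine with a patched or modern STL; unpatched VC6 can corrupt memory here
    }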
The biggest problem that we've seen at my workplace is its inability to handle even marginally complex templated classes or functions. This fact alone has forced some of the most devoted VS6 fans in the company to upgrade and start using VS2005. In addition to the template problem, IntelliSense is much better, debugging is easier and more accurate, and many people find the IDE easier to navigate. The only downside that we have seen thus far is that builds take a bit longer in 2005 than they did in 6 (but that's probably a side effect of the compiler being more robust).
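For reference, a small sketch (hypothetical names, my own example) of the kind of perfectly standard C++98 template code commonly cited as breaking under VC6 while VS2005 accepts it - partial specialization and out-of-class member-template definitions are the usual offenders:

    template <typename T>
    struct Traits { static const bool is_pointer = false; };

    // Partial specialization: standard C++98, but not supported by VC6.
    template <typename T>
    struct Traits<T*> { static const bool is_pointer = true; };

    struct Container {
        // Member template declared in the class...
        template <typename Iter>
        void assign(Iter first, Iter last);
    };

    // ...and defined outside it: another construct VC6 chokes on.
    template <typename Iter>
    void Container::assign(Iter first, Iter last)
    {
        for (; first != last; ++first) { /* store *first */ }
    }

    int main()
    {
        bool is_ptr = Traits<int*>::is_pointer;  // true with a conforming compiler
        (void)is_ptr;
        int values[3] = { 1, 2, 3 };
        Container c;
        c.assign(values, values + 3);
        return 0;
    }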
You can also check out these sites for a sampling of known issues in VS6:
http://louisville.edu/~ecrouc01/CECS302/VisualCPP.htm
http://www.acceleratedcpp.com/details/msbugs.html
I'm sure you could find more if you poked around a bit.
VS6 does not compile code according to the current C/C++ standard. For example,
it has incorrect (outdated) scoping rules for loops. At least one Microsoft SDK has now been updated with code that expects the correct semantics, so that SDK won't even compile with VS6 any more.
It has trouble being able to compile all but the most trivial template constructs.
It will compile some template constructs that have been declared illegal in recent standards updates (because the constructs don't actually do what normal users expect).
operator new doesn't conform to the C++ spec: it doesn't throw exceptions on allocation failure, and fixing this is non-trivial.
see: http://msdn.microsoft.com/en-us/magazine/cc164087.aspx
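To make two of the conformance gaps above concrete, here is a minimal sketch of my own (not from the linked article) that is legal standard C++ and behaves as commented on a conforming compiler, but not on VC6:

    #include <new>
    #include <cstdio>

    int main()
    {
        // For-loop scoping: under VC6's outdated rules the first `i` leaks out of
        // its loop, so the second declaration is flagged as a redefinition there,
        // while conforming compilers accept both loops.
        for (int i = 0; i < 3; ++i) { /* ... */ }
        for (int i = 0; i < 3; ++i) { /* ... */ }

        // operator new: the standard says a failed plain `new` throws std::bad_alloc.
        // VC6's runtime returns NULL instead, so exception-based handling like this
        // silently does nothing there, and old-style null checks silently "work".
        try {
            char* p = new char[64];
            delete[] p;
        }
        catch (const std::bad_alloc&) {
            std::printf("allocation failed\n");
        }
        return 0;
    }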
One of the biggest reasons for me to upgrade was the more standards-compliant C++ compiler (although still not 100%), so I could leverage more C++ features in my projects and not worry about strange hacks and workarounds that can lead to hard-to-find bugs.
Not compatible with Vista. Heck, there's a long list of issues VS 2005 has with Vista.
That being said, most of the improvements in VS seem to apply to everything other than C++ native code. What I'm seeing is more standards compliance, which is important but hardly dramatic.
Visual Studio 6 is not compatible with the latest Windows SDKs, so it cannot utilize (at least not easily) the latest OS features.
Though I no longer have concrete details, I'll just throw in that when we upgraded at work, the new compiler found quite a few errors that VC 6 let slip through quietly. Improved product robustness just from the upgrade.
If they use the STL, they may be interested in the recently-released feature pack, which includes an implementation of TR1.
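For example (a minimal sketch, assuming the feature pack / SP1 headers are installed; Widget is just a placeholder type of my own), the TR1 pieces live in namespace std::tr1:

    #include <memory>          // std::tr1::shared_ptr
    #include <unordered_map>   // std::tr1::unordered_map
    #include <string>

    struct Widget { int value; };

    int main()
    {
        std::tr1::shared_ptr<Widget> w(new Widget());
        w->value = 42;

        std::tr1::unordered_map<std::string, int> counts;
        counts["widgets"] = 1;
        return 0;
    }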
I have upgraded my stuff, but it's relatively uncomplicated. A con of upgrading is the VS 2005 DLL hell.
The VS 2008 version of the STL compiles with /clr, so if they're interested in transitioning to the managed world, they don't have to lose all their old code.
By default, newer versions have a better compiler and better libraries. But it's not always easy to port existing projects to a newer Studio, and you can upgrade both the compiler and the libraries manually.
I was using VS 6.0 with the Intel compiler just a year ago. We had a bunch of old code then which was treating iterators as pointers and vice versa, and it was all really messy and scary, so this held us back from an upgrade.
But I had to upgrade after all, because the framework I'm currently using simply doesn't run on VS 6.0. I think that is the ultimate reason :-)
Third-party libraries support only a limited number of compilers, too. Your client may not be able to accept bug fixes or feature upgrades as a result.
For instance, even a library as widely used as Boost supports only VS 7.1 and later (source).
And you might have some problems with Data Execution Prevention (DEP) as well, because VC6 ships with an old ATL version. As usual, see Raymond Chen for details.
According to MSDN, Microsoft still ships nothrownew.obj with the Visual C++ 10 (Visual Studio 2010) runtime library, so that users can link against it and have sub-standard behavior of "ordinary" (not nothrow flavor) new returning null on allocation failure. This sub-standard behavior dates back to Visual C++ 6 which is now considered extremely old.
Why would it do so? I mean, they make each new version of the compiler more and more standards-compliant. For example, Visual C++ 7 would accept "default int", but Visual C++ 9 would not. And the old sub-standard behavior of new can easily be achieved by slightly changing the code to use the nothrow flavor of new - this is straightforward and very easy.
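For illustration, a minimal sketch of that change (my own example): instead of linking nothrownew.obj to keep the old null-returning behavior globally, call the nothrow flavor explicitly wherever a null check is actually wanted.

    #include <new>
    #include <cstdio>

    int main()
    {
        // VC6-era style relied on a non-standard null return from plain new:
        //   char* p = new char[1024]; if (!p) { /* handle failure */ }
        // The conforming equivalent with the same null-check semantics:
        char* p = new (std::nothrow) char[1024];
        if (p == 0) {
            std::printf("allocation failed\n");
            return 1;
        }
        delete[] p;
        return 0;
    }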
Why is this option so important that Microsoft still supports it?
Well, this is sort of an open question, since nobody except someone responsible at Microsoft can say for sure - if at all. So, I'll take a stab at it:
I'll guess it is for convenience:
Microsoft itself may need it in some of their products and it is just easier having it together with the compiler tools.
Microsoft may know that someone (say a big vendor/app) still needs it and it is just easier (or even necessary if compiler specific) to still provide it.
Microsoft may know/anticipate that it is generally still "widely" used in legacy apps. Big or small.
"It doesn't hurt", well arguably. For example, Microsoft has a long record of maintaining backward compatibility in Windows (see Raymond Chens blog), again, arguably not always for the better.
Documentation, Tests, etc. would need to be altered (or removed, but still).
That is, removing it may be more trouble than just keeping it.
At the very least they need to (or should) provide a deprecation notice a version before removing it. I don't know if they did that for VS2010 or any prior version.
Because I am now (2012) porting a product from Visual C++ 6.0 to Visual Studio 2010 and that helps greatly to bring the development up to speed. We also will not make the Unicode transition for a few years to come. If Microsoft would not provide the compatibility feature I would build it myself.
As a side note, we are a major ISV in a specialized field. If we decide to change OS, an entire industry would probably change too. (Before Windows we also used to build a specialised OS.)
My department writes a mixture of Windows, Linux and cross-platform (RHEL Linux and Windows Server 2003) C++ code for in-house applications. We use the STL and Boost 1.39.
VS2010 is now available in my organisation. If we were to move to VS2010 I'd have to make a significant business case for it. What would be some of the most noticeable benefits we would see from the move? Do you think it would be worth the time cost to move?
Update
Given the size of our code base and the cross platform nature of our code, I'm mainly interested in what the new IDE offers, e.g. how good is the intellisense (say, compared to VS for .net). Does the intellisense work well for very large code bases? What's the refactoring support like? How is raw IDE performance? What is the debugger like, i.e. if I hover over a pointer to a collection of smart pointers is it relatively easy to see what's in the collection?
Thanks in advance
New C++0x features, e.g. lambda expressions, are really nice to have.
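For example (a minimal sketch of my own), a VS2010 lambda replaces the functor or bind boilerplate the same loop needs under VS2005:

    #include <algorithm>
    #include <vector>
    #include <iostream>

    int main()
    {
        std::vector<int> v;
        v.push_back(1);
        v.push_back(2);
        v.push_back(3);

        int sum = 0;
        std::for_each(v.begin(), v.end(), [&sum](int x) { sum += x; });
        std::cout << "sum = " << sum << '\n';
        return 0;
    }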
The only real difference in the two compilers is some C++0x support in VS2010. The IDE has improved a lot more, but VS2005 is fine for me too. Now are these worth the time cost to move? Up to you...
Greatly improved IntelliSense. C++0x, which means shared_ptr, unordered_set/map, function, lambdas, etc. This will in practice simplify things for you since you don't need as much from Boost. You also get access to Parallel Patterns Library (parallel for_each, etc) which really helps if you are targeting multi-core. I'd say go for it!
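A minimal sketch of the Parallel Patterns Library point (my own example, assuming VS2010's <ppl.h> is available); parallel_for_each spreads independent per-element work across cores:

    #include <ppl.h>
    #include <vector>
    #include <cmath>

    int main()
    {
        std::vector<double> data(1000, 1.0);

        Concurrency::parallel_for_each(data.begin(), data.end(), [](double& x) {
            x = std::sqrt(x) * 2.0;  // independent per-element work
        });
        return 0;
    }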
If you're only interested in IDE improvements, and you make big use of smart pointers, I'd suggest waiting for SP1 (or whichever SP comes with fixes to IntelliSense).
As some people pointed out, there are BIG changes in C++ intellisense, to support a lot of features that other languages already had for years. The thing is that they accidentally broke the intellisense of smart pointers when instantiated with a template type.
I've posted a question with that issue a couple of weeks ago, and as suggested by someone I sent the issue to Microsoft Connect. Sadly the response from the VC++ team was that it won't be fixed soon.
Since you use STL and Boost, performance might be a pretty big deal. VC2010 supports rvalue references and move semantics, which, even if you don't use them in your own code, speed up Boost and STL code significantly. (Although I doubt Boost 1.39 utilizes this a lot. But if at some point you upgrade to a recent version of Boost, you'll get the benefit.)
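A small sketch of what that means in practice (my own example): VS2010's strings and containers gain move constructors, so returning large values by value no longer deep-copies, even without touching your code:

    #include <string>
    #include <vector>

    std::vector<std::string> load_names()
    {
        std::vector<std::string> names;
        names.push_back("alpha");
        names.push_back("beta");
        return names;  // moved (or elided) in VS2010 rather than copied
    }

    int main()
    {
        std::vector<std::string> names = load_names();
        names.push_back("gamma");  // growth moves the existing strings instead of copying them
        return 0;
    }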
Intellisense was reworked in a big way for 2010. It's still a bit wonky, and falls over the moment it sees a template, as it'll always do for C++, but I have to admit it works much better than it used to.
As I'm more and more disappointed with VS 2010, I'm trying to find an alternative, and I was looking at Embarcadero's new edition of their C++ environment.
Is there any point in learning a new (and, I think, not very popular) product when VS practically dominates the market?
Thanks.
Although I'm not really a Windows programmer, I have been using Borland/Embarcadero to-and-fro during the past 10 years. Here are my personal opinions of why you should not consider it:
The general quality of Builder has dropped significantly over the years. Borland Builder 6 was the last high-quality product, from there the IDE itself has become more and more buggy. The IDE typically crashes once per 1-2 weeks of usage.
No undo in the RAD designer. Yes, I know, it is quite unbelievable. Even the earliest versions of utter crap like VB had this. But Builder, in the year 2011, doesn't! If you slip on your keyboard and accidentally alter a component, you shall be punished!
The debugger is next to useless. This might have been fixed in the latest version, but in several versions you can't single step through the program without collapsing struct/class variables in the watch window, which is of course very frustrating.
Documentation is very poor, often non-existent, and may be written in Object Pascal, even though you ordered the C++ IDE. The help files also have a tendency to linger as evil ghost processes in your computer, making it impossible to shut down Windows before the ghost is busted.
Personally I'm considering switching to Visual Studio.
I've been using both the Embarcadero (formerly Borland) C++ environment, now RAD Studio 2010, and VS2008 every day for the last 6 months. My programming philosophy has always been to use the right tool for the project, no matter what that particular tool is. So a couple of my observations/opinions are -
Advantages
The WYSIWYG screen designer is good. It acts a lot like the WinForms editor in VS2008, but for C++. In VS2008, the only package for C++ that I've used that comes close is Qt. My biggest complaint is documentation, but that applies to most software, so it isn't just their problem.
Many built-in classes are derived from the TObject class. This base class functions a lot like Object in C#. The biggest advantage this gives you as a C++ programmer, if you follow a few rules, is mostly automatic memory management. It's not garbage collection; rather, related objects are kept in a list and deleted together.
Disadvantages
The RAD Studio 2010 C++ environment exists primarily to support Delphi; that is their real strength anyway. Nowhere does anything say this - it is just an overall feel that I've gotten from using the system.
Limited support for 3rd party libraries.
It cannot link with any Microsoft-compatible C++ library. This includes both Microsoft and third-party libraries. They use a different "name mangling" format from Microsoft, so everything has to be wrapped in a C-language wrapper.
We use the Boost libraries a lot in VS2008, but RAD Studio has only limited support for Boost.
I've found the overall speed of the generated code to be significantly slower than that produced by VS2008.
Please remember, that these are just one person's opinions.
I would suggest that you download a demo version of the product and try it for yourself.
If you want to be 100% up-to-date, you have to use the development environment provided by the platform's vendor.
If you do not mind waiting a few months/years for new things to get ported over (or your market allows for it), then surely you can venture into the unknown.
It's not that Borland's IDEs (unlike MS's VS family) ever needed any advanced training to start using them and be productive right away. That is the main reason why they remain popular in many niches.
I was asking my team to port our VC6 application to VC2005, and they are ready to allot some time to do so. Now they need to know what the advantages of porting are.
I don't think they really understand what it means to adhere to the standard.
Help me list out the advantages of doing the port.
Problems I am facing are:
1) No debugging support for standard containers
2) Not able to use Boost libraries
3) We use a lot of query generation but use the CString Format function, which is not type safe
4) Much time is spent on troubleshooting VC6 problems, like having
vector<vector<int>>
without a space between the >>
Advantages:
More standards-compliant compiler. This is a good thing because it will make it easier to port to another platform (if you ever want to do that). It also means you can look things up in the standard rather than in Microsoft's documentation. In the end you will have to upgrade your compiler at some point in the future. The sooner you do it, the less work it will be.
Not supported by MS. The new SDK doesn't work. 64-bit doesn't work. And I don't think they're still fixing bugs either.
Nicer IDE. Personally, I really prefer tabs to MDI. I also think that it's much easier to configure Visual Studio (create custom shortcuts, menu bars, etc.). Of course that's subjective. Check out an express edition and see if you agree.
Better plugin support. Some plugins aren't available for VC6.
Disadvantages:
Time it takes to port. This very much depends on what kind of code you have. If your code heavily uses non-standards compliant VC6 features, it might take some time. As Andrew said, if you're maintaining an old legacy project, it might not be worth it.
Worse Performance. If you're developing on really old computers, Visual Studio may be too slow.
Cost. I just had a quick look, and Visual Studio licenses seem to be a bit more expensive than VC6's.
Why VC2005? If you are going to invest the time (and testing!) to upgrade from VC6, why not target VC2008?
If you're maintaining a legacy project then there may be no advantage in porting. Simply converting projects and fixing up compiler problems could take weeks of time and introduce instability.
If you're actively developing a product then the main advantage is that you'll no longer be using a product that's over eight years old - which is clearly a good thing.
More recent versions of the Windows SDK don't work with VC6 - if you want to use the latest Windows features, you'll need a more recent compiler.
The later compilers are said to be more standards conforming. I'm sorry I can't be more specific. I do know that VC6 generates lots of compiler warnings just for using standard template classes.
If you use any external libraries that are compiled with a later compiler, you'll need to use something compatible.
Prepare for something of a harsh transition - the IDEs are more different than they should be.
To ensure complete compatibility of the application with different versions of the base platform, and to rectify any errors found along the way, so as to give end users the freedom to use their own version of the base platform.
I'm not saying you shouldn't convert, but to take your specific points:
1) No debugging support for standard containers
I debug code using standard containers with VC++ 6 all the time. What's your problem here?
2) Not able to use Boost libraries
True. You may find you can use some of the simpler stuff.
4) Much time is spent on troubleshooting VC6 problems, like having vector<vector<int>> without a space between the >>
Um, that is a syntax error (at least in the version of C++ understood by VC++6) and will be flagged as such. If your team is spending "much time" on this sort of thing, you need another team.
Edit:
3) We use a lot of query generation but use the CString Format function, which is not type safe
It will be equally type-unsafe under VS2005. I don't see why this is a reason for porting. If you want type safety use the standard C++ I/O mechanisms.
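A minimal sketch of what that can look like for their query generation (my own example): ostringstream deduces the type of each argument, so a wrong type is a compile error instead of a silent %s/%d mismatch.

    #include <sstream>
    #include <string>
    #include <iostream>

    int main()
    {
        int id = 42;
        std::string name = "Smith";

        // CString style: query.Format("SELECT ... WHERE id = %d AND name = '%s'", id, name);
        // (passing a std::string through %s there compiles and then misbehaves)
        std::ostringstream query;
        query << "SELECT * FROM users WHERE id = " << id
              << " AND name = '" << name << "'";

        std::cout << query.str() << '\n';
        return 0;
    }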
If your team can't see any advantage and you are unable to explain any advantage, why are you asking them to do this?
Sounds like you are porting just for the sake of it.
I'm using the STL quite heavily in performance-critical C++ code under Windows. One possible "cheap" way to get some extra performance would be to change to a faster STL implementation.
According to this post STLport is faster and uses less memory, however it's a few years old.
Has anyone made this change recently and what were your results?
I haven't compared the performance of STLport to MSVC's, but I'd be surprised if there were a significant difference. (In release mode, of course - debug builds are likely to be quite different.) Unfortunately the link you provided - and any other comparison I've seen - is too light on details to be useful.
Before even considering changing standard library providers I recommend you heavily profile your code to determine where the bottlenecks are. This is standard advice; always profile before attempting any performance improvements!
Even if profiling does reveal performance issues in standard library containers or algorithms I'd suggest you first analyse how you're using them. Algorithmic improvements and appropriate container selection, especially considering Big-O costs, are far more likely to bring greater returns in performance.
Before making the switch, be sure to test the MS (in fact, Dinkumware) library with checked iterators turned off. For some weird reason, they are turned on by default even in release builds and that makes a big difference when it comes to performance.
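For reference, a sketch of how that is typically done with the documented _SECURE_SCL / _HAS_ITERATOR_DEBUGGING switches; set them project-wide, since mixing translation units or libraries built with different values causes ABI mismatches.

    // Usually set on the compiler command line or in the project settings, e.g.
    //   cl /O2 /DNDEBUG /D_SECURE_SCL=0 /D_HAS_ITERATOR_DEBUGGING=0 ...
    // or, equivalently, before any standard header is included:
    #define _SECURE_SCL 0              // checked iterators off (on by default even in release)
    #define _HAS_ITERATOR_DEBUGGING 0  // iterator debugging off (relevant in debug builds)

    #include <vector>
    #include <algorithm>

    int main()
    {
        std::vector<int> v(1000, 1);
        std::sort(v.begin(), v.end());
        return 0;
    }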
We have done the opposite task recently. Our application is a cross-platform C++ server program and it is built on Windows with VS 2008 (x86) and on HP-UX ia64 and Linux with gcc 4.3 . On every platform we used the STLport 5.1.7 as an STL library and Boost 1.38.
In order to compare performance some time ago we also built our application without STLport and after that we measured performance.
After that on Windows the performance became slightly better. So we chose to stop using the STLport with VS 2008 and to use the default VS 2008 STL library.
On HP-UX ia64 there was a 20% decrease in performance. Caliper (the HP-UX profiler) showed that string assignments took more time, and inside the string assignment in the default gcc STL library there were calls to pthread_mutex_unlock. As far as I know, pthread_mutex_lock/pthread_mutex_unlock are used because the default gcc STL library uses COW strings. In our application we do lots of string assignments, and as a result of the COW strings we get worse performance. So we still use STLport on HP-UX with gcc.
In a project I worked on that makes quite heavy use of the STL, switching to STLport resulted in getting things done in half the time it took with Microsoft's STL implementation. It's no proof, but it's a good sign of performance, I guess. I believe it's partly due to STLport's advanced memory management system.
I do remember getting some warnings when making this change, but nothing that couldn't be worked around quickly. As a drawback, I'd add that debugging with STLport is less easy with Visual Studio's debugger than with Microsoft's STL (Update: it seems there is a way to tell the debugger how to handle STLport containers, thanks Jalf!).
The latest version goes back to October 2008 so there are still people working on it. See here for downloading it.
I've done the exact opposite a year ago and here is why:
StlPort is updated very rarely (as far as I know, only one developer is working on it; you can take a look at their commit history)
Problems building it whenever you switch to a new Visual Studio release. You wait for the new makefile, or you create it yourself, but sometimes you can't build it because of some configuration option that you're using. Then you wait for them to make it build.
When you submit a bug report you wait forever, so basically no support (maybe if you pay). You usually end up fixing it yourself, if you know how.
STL in Visual Studio has checked iterators and debug iterator support that is much better than the one in StlPort. This is where most of the slowdown comes from especially in debug. Checked iterators are enabled in both debug and release and this is not something everybody knows (you have to disable them yourself).
STL in Visual Studio 2008 SP1 comes with TR1 and you don't have this in StlPort
STL in Visual Studio 2010 uses rvalue references from C++0x and this is where you get a real performance benefit.
If you use STLport you will enter a world where every STL-based third-party library you use will have to be recompiled with STLport as well, to avoid problems...
STLPort does have a different memory strategy, but if this is your bottleneck then your performance gain path is changing the allocator (switching to Hoard for example), not changing the STL.
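To illustrate the allocator point, a minimal sketch of my own (a replacement such as Hoard is usually dropped in at link time to replace malloc, but the same idea can also be expressed through the STL's allocator parameter). This pass-through allocator just forwards to malloc/free; the same shape would wrap a pool or other specialized allocator.

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    template <typename T>
    class MallocAllocator {
    public:
        typedef T value_type;
        typedef T* pointer;
        typedef const T* const_pointer;
        typedef T& reference;
        typedef const T& const_reference;
        typedef std::size_t size_type;
        typedef std::ptrdiff_t difference_type;

        template <typename U> struct rebind { typedef MallocAllocator<U> other; };

        MallocAllocator() {}
        template <typename U> MallocAllocator(const MallocAllocator<U>&) {}

        pointer address(reference r) const { return &r; }
        const_pointer address(const_reference r) const { return &r; }

        pointer allocate(size_type n, const void* = 0) {
            void* p = std::malloc(n * sizeof(T));   // swap in a pool here
            if (!p) throw std::bad_alloc();
            return static_cast<pointer>(p);
        }
        void deallocate(pointer p, size_type) { std::free(p); }

        size_type max_size() const { return static_cast<size_type>(-1) / sizeof(T); }
        void construct(pointer p, const T& v) { new (p) T(v); }
        void destroy(pointer p) { p->~T(); }
    };

    template <typename T, typename U>
    bool operator==(const MallocAllocator<T>&, const MallocAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const MallocAllocator<T>&, const MallocAllocator<U>&) { return false; }

    int main()
    {
        std::vector<int, MallocAllocator<int> > v;  // container unchanged, allocation policy swapped
        v.push_back(42);
        return 0;
    }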
I haven't tried it, but as far as I know, there have been no major changes to Microsoft's STL implementation. (There are no huge new optimizations in VS2008 compiler over 2005 either) So if STLPort was faster then, it's probably still the case.
But that's just speculation. :)
Be sure to report back on the results if you try it out.
One benefit of STLport is that it's open source.