OpenHMPP in GCC - C++

The gist of the question is:
Do you know of any projects that aim to bring OpenHMPP support to GCC? I could possibly live with an affordable commercial compiler, but that's unlikely, because I prefer Linux and would like the compiler to support non-x86 architectures as well.
And the background story:
I know the OpenCL and CUDA people will bash me, but here is my experience/opinion: I've been pursuing some toy projects to get into many-core processing using CUDA and OpenCL. I find it a mess to set up those development environments (especially under Linux, and especially if there's the slightest bit of irregularity in your system). Even once you set them up, it's still a mess to run the results anywhere other than your development environment. Finally (and probably most importantly), these languages are very verbose and tiresome; they feel like the assembler of many-core processing. Compare them to OpenMP and you see how things could actually be.
At this point, OpenHMPP comes into the scene. It uses #pragma directives like OpenMP, and it seems like a very good step in the right direction. However, it's very hard to find compilers for it. CAPS Entreprise and PathScale do have OpenHMPP support, but they're very expensive (€4000 for CAPS; I couldn't find a price for PathScale). And correct me if I'm wrong, but CAPS seems to support C, not C++.
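For concreteness, the directive style looks roughly like the sketch below — a minimal example I've adapted from published OpenHMPP material (the codelet name and CUDA target are just illustrative, and it needs an HMPP-capable compiler to actually offload anything):

    /* Mark a pure function as an offloadable "codelet"; y is read and written. */
    #pragma hmpp saxpy codelet, target=CUDA, args[y].io=inout
    void saxpy(int n, float a, float *x, float *y) {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        float x[1024], y[1024];
        for (int i = 0; i < 1024; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        /* The runtime handles device allocation, transfers and kernel launch. */
        #pragma hmpp saxpy callsite
        saxpy(1024, 2.0f, x, y);
        return 0;
    }

Compare that to the boilerplate CUDA or OpenCL needs for the same loop, and you see why I find the directive approach attractive.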
So, we return to the gist. It would be like a dream to have OpenHMPP support in GCC. Do you know of any open-source projects or any affordable alternatives? Or maybe you know of alternatives to OpenHMPP that are easier to find support for?

If I understand you correctly, you are looking for ways to simplify access to accelerator devices, which may be GPUs as well as multi-core CPUs.
This is a field with a lot of academic work happening right now, resulting in many publications describing such frameworks; however, only a few are actually available.
In fact, the reasons you state are the basis of my own research, which is also far from complete or in a state usable by anyone else...
The only thing I know of that comes close to what you seek (using #pragmas to access accelerators) would be MGP from the Virtual OpenCL package.
All other solutions are more intrusive by requiring the use of their API.
I have not yet had a closer look at C++ AMP, but it might be interesting if it picks up some pace.

What is the current status of C++ AMP

I am working on high performance code in C++ and have been using both CUDA and OpenCL, and more recently C++ AMP, which I like very much. I am, however, a little worried that it is not being developed and extended, and that it will die out.
What leads me to this thought is that even the MS C++AMP blogs have been silent for about a year. Looking at the C++ AMP algorithms library http://ampalgorithms.codeplex.com/wikipage/history it seems nothing at all has happened for over a year.
The only development I have seen is that LLVM now sort of supports C++ AMP, so it is no longer Windows-only, but that is all, and it is not something that has been publicized far and wide.
What kind of work is going on, if any, that you know of?
What leads me to this thought is that even the MS C++AMP blogs have been silent for about a year. Looking at the C++ AMP algorithms library http://ampalgorithms.codeplex.com/wikipage/history it seems nothing at all has happened for over a year.
I used to work on the C++ AMP algorithms library. After the initial release, which Microsoft put together, I built a number of additional features and ported it to newer versions of VS. It seemed like there was a loss of momentum around C++ AMP. I have no plans to do further work on the project.
Make of this what you will. Perhaps someone from Microsoft can clarify things?
I've found that AMD is still using C++ AMP:
http://developer.amd.com/community/blog/2015/09/15/programming-models-for-heterogeneous-systems/
http://developer.amd.com/community/blog/2015/01/19/bolt-1-3-whats-new/
and there are some forum references where Intel mentions it too.
The main thing I see is that we programmers are finally starting to play with the idea that we can use the GPU for ordinary tasks too. Especially now that HBM is coming to APUs, you could do a lot on a relatively cheap system.
So there is no copying of data to the graphics card or back to main memory; you keep it in a big HBM "cache" where it can be accessed in "real time", i.e. without GPU transfer latency.
So Microsoft built a really, really nice technology which will become relevant only in the next few years, i.e. when the hardware is finally "user friendly".
But the thing could become obsolete if they don't advance as others do. Not that something wouldn't work in C++ AMP, but the speed of change is so great lately that programmers won't risk starting to use it if they don't see some advancement... at least a blog post or two per year where they test something with it, so you can see Microsoft still believes in it.
FWIW, we are also using C++ AMP in the financial world. It has been very successful and is relatively easy to code. CUDA is probably a safer choice, but if anyone is considering learning AMP, I suggest brushing up on your basic STL first, then reading up on array views.
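For anyone weighing it up, the core programming model really is small — a minimal sketch (assumes a compiler with C++ AMP support, e.g. MSVC; the doubling kernel is just illustrative):

    #include <amp.h>
    #include <vector>

    int main() {
        std::vector<int> data(1024, 1);
        // array_view wraps host memory; copies to/from the accelerator are implicit.
        concurrency::array_view<int, 1> av(static_cast<int>(data.size()), data);
        // Run the lambda once per index on the accelerator.
        concurrency::parallel_for_each(av.extent,
            [=](concurrency::index<1> idx) restrict(amp) {
                av[idx] *= 2;
            });
        av.synchronize();  // bring the results back into 'data'
        return 0;
    }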
I'm still using AMP. Right now I'm making a GPU path tracer (hopefully) for games use.
It seems that AMP doesn't have much documentation at the moment or many new updates, sadly. It's definitely something I would like to see updated and used more, but it seems dead.

Has Boost ever been used in a regulated project (FDA, FAA)?

While posting a comment recently, I found myself remarking that, in my experience, Boost is not widely used in regulated industries (FDA, FAA). In fact, I don't know of any project that uses it or has used it. I realize, though, that my experience may be lacking here, so I wanted to know if anybody had knowledge of a project using Boost in a medical device or in an aviation flight system (lighting, cabin controls, cockpit equipment, etc.).
I am not sure this is the right place to ask it (maybe some other SO site), but I thought this would be a good place to start.
This is not a question about whether or not boost should be used in these areas, it is a question about anybody knowing if it has been used.
EDIT Some examples projects that might help clarify this: Aircraft cabin lighting systems, cabin management systems, cockpit instrumentation, infusion/food/insulin pumps, dialysis machines, laboratory diagnostic devices, blood center data collection systems, etc. Some are life sustaining or potentially flight critical, some collect data, some collect data used to make medical decisions, etc., but I believe all are covered as regulated devices by the FAA/FDA.
EDIT Outside libraries (ones that did not come with the development chain) are often brought into these types of projects for other purposes (graphics libraries, drivers, USB stacks, etc.). These are treated as SOUP. The use of Boost would fall under this approach. Does anybody know of a project where Boost was used this way?
EDIT Boost is a very large framework, with multiple components. I'm looking for any part of it that has been used in a project. For example "Boost smart pointers" or "Boost Enable" or "Boost Array" or "Boost Optional", etc. But used in "whole", not part. Not used by looking at the Boost code and re-using the idea; used as a whole component of the system (i.e. the legal sense).
This is central to the question, because used in this way means that tradeoffs of handling the SOUP must be dealt with. This may place this question outside the scope of this SO site...not sure.
I would say that a lot of it comes down to how the system designer(s) are handling system safety at an architectural level.
Triplication
For instance, if the approach taken is triplicate redundancy with a trusted voter, then system trials/testing is going to be the major step in approving the implementation. Suppose that one of the triplicates' development teams chose to use Boost. If the system as a whole passed all its test vectors, then one could argue that one didn't need to trawl through Boost itself looking for implementation errors. Obviously, if all three triplicates had chosen to use Boost, that would be cause for concern, because then the scope of the test vectors becomes unmanageable.
Triplication is a standard approach to handling the problem of using software resources like compilers, libraries and programmers all of which are at risk of error. Boost is just another one of these. One could argue that Boost shared pointers are clearly a way of reducing the risk of programmer error. That in the round is most likely to be beneficial to the system as a whole.
Important, but Not Safety Critical
Where it gets interesting is when triplication is not being used, and one is now in the realms of really having to trust stuff. Interestingly the way a lot of systems seem to get round the problem is to say that ultimately there is a human in control who is supervising and able to intervene in the event of system error.
For example the car industry has a set of programming rules called MISRA. The software for an ABS system is supposed to be written to this rule set, and the development tools are supposed to be set to enforce those rules on the source code. The idea is that this will reduce the risk of undetected bugs to an acceptable level. And because ultimately there is a driver driving the car they can always do their own cadence braking. And thus the car industry has avoided having to have a triplicate implementation of ABS.
They are extending the same philosophy to more complex car systems like adaptive cruise control and self-driving cars. Personally I think that such an extension is unreasonable for self-driving cars. The relevant legislation makes it clear that it is the 'driver's' fault if such a vehicle has a crash (i.e. you are still 'driving' it), but the glossy advertising won't dwell on that important aspect.
It's the same in the world of medical devices; there's supposed to be a nurse or someone monitoring the patient anyway, so the occasional blip is covered by that supervision. The whole thing is very poor anyway; whilst the software for a medical device may have been written, tested and approved, quite often these things run on embedded Windows XP. They all get networked up and end up being infested with computer viruses, etc. The FDA won't let you have an auto-update system in place; only the device vendor can update it, and of course they can't ever hope to keep up. So you end up with a nicely written, well-tested piece of medical software running on top of an OS installation which has had all the world's hackers running around inside it doing god knows what. I think that the use of Boost in these circumstances is not going to add much to the overall system risk.
So if a MISRA compliant toolchain offered Boost as part of that toolchain then I don't see why that would be any different to a toolchain offering a standard C library. If the toolchain vendor is certifying it then it's no different to the situation with anything else.
There are weaknesses with that approach. In my experience I have come across a MISRA-compliant toolchain in widespread use whose compiler turned out junk object code when all optimisations were turned on. I was actually able to verify this in the disassembly. I then took a look at the source code for the toolchain's standard C library, and it clearly wasn't itself written to the MISRA rule set, and furthermore it contained glaring, horrible bugs.
And yet there is no regulatory block to building, testing and selling a car ABS system using this tool chain so long as you tick the MISRA checkbox in the project settings. Adding Boost to that toolchain would hardly make matters worse.
Safety Critical Without Triplication
The final approach is no triplication and no human supervision. This is really hard because you then need formal proof of the correctness of every component of the tool chain, OS, drivers, chips, etc. AFAIK it's never been done for a truly safety critical system like nuclear reactors, flight control avionics, or other systems that really will definitely kill people if they go wrong.
The only thing that comes close so far as I can tell is Green Hills' compiler suite and their INTEGRITY operating system. They can give you (for a large fee) formal testing and verification evidence for every single line of the OS, all their libraries and their compiler, everything. If one were ever to attempt a truly safety-critical system without triplication, that would be a starting point.
I don't think they've done a C++11 compiler yet, though I have added Boost to their toolchain and it worked just fine (it wasn't in a safety-critical system, I hasten to add).
Conclusion
Certainly if outfits like Green Hills, with a well-deserved reputation for reliable and thoroughly tested toolchains, offer Boost, then I think one would be in a good position to use it in a regulated system. However, I doubt that the whole of Boost would be offered that way; they are more likely to follow the compiler standards.
I also know that GCC has in the past been put through formal compiler validation testing so that it could be used in Stuff That Matters. I expect that will get repeated sooner or later for the more recent incarnations that have taken on aspects of Boost.
I think the best answer we can have here is "yes and no." I will try to explain why.
Boost is a huge umbrella for many constituent libraries. Some of them depend on each other in various ways (e.g. when a higher-level library needs features provided by a lower-level part of Boost like Type Traits). This raises questions about the usefulness of a simple answer to the question, because if three parts of Boost have been used in a regulated project, but they are different parts than you want to use, it is of little value to know this. And we will never know the full answer regarding all parts, because you cannot prove a negative (and there are too many parts to ever expect a "100% yes" answer).
Boost is (and always has been) rapidly evolving. Entirely new libraries are added all the time. ASIO is a big one that didn't exist at all until somewhat recently. This makes it even more difficult to answer the question, because over time there are parts of Boost which are young and not as well tested as others. Additionally, existing libraries sometimes go through backward-incompatible revisions (e.g. "Boost Filesystem 3" not too long ago).
Many parts of Boost end up in projects not by a traditional dependency but rather by copy-pasting code from Boost, and perhaps modifying it to taste (e.g. adding or removing support for specific compilers). Likewise, many parts of Boost end up in projects via the fact that Boost is sort of a proving ground for many new C++ standard library features, such as shared_ptr (C++11) and unordered_map (TR1). Some features which are part of the language today were originally part of Boost, so many people have used "Boost code" without even knowing it.
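To illustrate that last point (a sketch, not taken from any particular project): code written against the Boost original often differs from the standardized version only in the namespace.

    #include <boost/shared_ptr.hpp>  // the proving-ground original
    #include <memory>                // the standardized descendant (C++11)

    struct Widget { int value; };

    int main() {
        // Identical reference-counted shared-ownership semantics.
        boost::shared_ptr<Widget> a(new Widget());
        std::shared_ptr<Widget>   b(new Widget());
        a->value = 1;
        b->value = 2;
        return 0;
    }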
Note that code does not somehow become safer when it transitions to official status within the language--GCC has had bugs which did not exist in the Boost equivalent implementations of the same concepts. This matters when considering practical questions like "Should we allow the use of Boost in our project or should we restrict ourselves to what the compiler vendor gives us?" If you are thinking of using a feature which has been implemented very recently by your compiler vendor (say, within the past year), you may well be better off using a third-party (e.g. Boost) implementation which is more mature.
Finally, since it seems that the impetus for this question is to gain some reassurance that using Boost is not a bad idea for a production project: I would certainly say that in general using Boost is fine and good, with the huge caveat that you need a local expert in Boost who knows which parts of Boost should not be used in your domain. For example, Boost Spirit, Phoenix, and Wave are examples of libraries which have been in Boost for a while but which very few people truly, deeply understand. It's one thing to use library code you don't fully understand (we all do), but quite another to use code which almost no person on earth understands.
In summary, I don't think anyone will be able to give you the reassurance that you seek that Boost is OK for safety-critical systems. You need to evaluate it on your own, the same as you need to evaluate your own compiler vendor's software, your other third-party dependencies, and the code you write yourself. I have used all four categories of software quite a lot, and in my experience Boost had fewer critical bugs than any of the others, and fewer regressions than either GCC or my own code.

Why does the chip control the choice of language?

I've asked before what language I should learn for embedded development. Most embedded engineers said C and C++ are a must, but they also pointed out that it depends on the chip.
Can someone clarify? Is it a compiler issue, or what? Do chips come with their own specific compilers (like a C compiler or C++ compiler), and is that why you have to use the language the compiler knows? Is it not possible to code and compile it elsewhere, then burn it to the chip directly in its compiled state? (I think I heard an acquaintance say something to this effect.)
I'm not sure how this works, as clearly I don't know much about embedded systems or how they work. It's probably an easy answer for those of you who know.
Probably they meant that some toolchains do not support C++. Yes, many chips and boards do come with their own toolchains. Different processors have different instruction sets, which means a different compiler (or, more specifically, a different backend). That doesn't mean you always have to relearn everything; many of these are based on GCC (often considered the most ported compiler). The final executable/image formats also vary, so you need a specific linker. Most likely you will be (cross-)compiling the code for the chip on a "regular" computer, then burning it to the chip. However, that doesn't mean you can use a typical compiler and linker targeted at a desktop operating system.
It "depends on the chip" in three possible ways:
Some very constrained architectures are not suited to C++, or at least C++ provides constructs not suited to such architectures and so offers no benefit over C. Most 8-bit devices fall into this category, but by no means all; I have seen useful C++ code implemented on MegaAVR, for example.
Some devices are not supported by a C++ compiler. For example Microchip's dsPIC/PIC24 compiler is C only (third-party tools may have C++ support).
The chip architecture may be designed specifically for a particular language; for example, INMOS Transputers invariably ran Occam.
As well as C and C++, other possibilities are assembler, Forth, Ada, Pascal and many others, but C is almost ubiquitous; few chip vendors will release a new architecture or device without a C compiler being available from day one. For other languages you will generally have to wait until a third party decides to develop one, and that wait may be forever for a niche architecture.
Is it not possible to code and compile it elsewhere, then burn it to the chip directly in its compiled state?
That is called cross-compilation or cross-development, and it is the usual development method for embedded systems. Most embedded systems lack the OS, file, performance and memory resources to self-host a compiler, and most developers want the comfort of a sophisticated development environment with IDEs, debuggers etc. in a familiar user-oriented desktop OS.
I'm not sure how this works, as clearly I don't know much about embedded systems or how they work.
Get up-to-speed with some of these:
http://www.state-machine.com/arm/Building_bare-metal_ARM_with_GNU.pdf
http://www.eetimes.com/design/embedded
http://www.amazon.com/exec/obidos/ASIN/020179523X
http://www.amazon.com/Embedded-Systems-Firmware-Demystified-CD-ROM/dp/1578200997
Yes, there are many architectures for which a C compiler exists but a C++ compiler does not. The smaller and less fully-featured a processor you choose, the more likely this situation is to occur.
For embedded development, you almost always compile the code 'elsewhere', as you say, and then send it to the chip for execution/debugging. The process of compiling code for a different architecture than the compiler itself is built for is called 'cross-compiling'.
You are correct: compilers vary from chip to chip. Most/many modern chips have a GCC port, but not all.
The term 'embedded' is used to describe a vast range of hardware. Most embedded software engineering will consist of writing C/C++ code to produce a binary for a target microprocessor, but there are devices that you may work with that are not coded with compiled binary.
One example is a Programmable Logic Controller (PLC). These devices use a language called "Ladder Logic". It's a wonderful language. I have enjoyed working with it in the past.
Another thing you may encounter, as I have in the past, is devices that have interpreted BASIC emulators. Hopefully that is rare today.
C/C++ is a very good choice for firmware development, where the software you write runs on an embedded CPU/microcontroller. In order to properly program the device, you will need to know both the language and the device architecture.
The same code probably will not work on different devices, so you have to learn the language and each device's architecture.
Another option is FPGAs, which are not microcontrollers. FPGAs are devices with specialized cells capable of being configured into any type of synchronous circuit, including a microcontroller. FPGAs are programmed with hardware description languages (HDLs) like Verilog and VHDL. The "compiled" (synthesized) version of the software is called gateware.
The same HDLs are also used for ASIC design. The path to properly learning those languages is long, so I recommend starting with C/C++ on a PIC from Microchip, which is a low-cost and widely accepted microcontroller.
If you intend to do FPGA development, the knowledge gained with C/C++ and the PIC will be helpful and important, because most FPGAs have an embedded CPU/microcontroller inside.
There is no direct scientific reason for it. In a lot of cases it has to do with the management and politics of the specific company.
Some companies are driven to create a turnkey system and force you to buy that system and pay for maintenance. It locks out individual developers, but there are many companies, and especially government agencies, that prefer this model because the support is often much better and you can often drive the direction of their products to suit your needs.
Other companies do not have the staff or the talent, so they outsource the solution and sometimes take whatever they can get. You might end up with a one-time-developed tool that, after the contractor leaves, is never updated or fixed again; or if it is fixed, it is a patch job by someone else. It takes money to make money, but if you run out of money before you can sell your product, you still fail.
Sometimes you have companies that both maintain an in-house, must-buy-from-them tool AND have individuals who also contribute to open tools like GCC.
Sometimes the politics or management in the company include individuals who have a strong opinion of how the world must be and only allow tools to be developed for a specific language. Or perhaps they are owned by, partner with, or simply like a company that has a specific language, and this chip product came to be simply to support that language.
On top of all of this you have the very real technical problems of memory space, the quality and efficiency of the instruction set and how compiler friendly it is. Some architectures may be fine for assembler, but higher level compiled code chews up the limited memory resources too quickly.
GCC in particular has a lot of problems internally (not the people but the software/source code itself). I challenge you to write a backend, even with the tutorials that are out there. A company requires specialised talent to create and then maintain a GCC backend year after year; otherwise your port gets dropped. If your chip architecture is not 32-bit or bigger, you are already fighting a losing battle with GCC; your chip architecture might be compiler-friendly but just not a good fit for the popular compilers' design.
In the near future LLVM is going to shine as a cross-compiler relative to GCC because it has not yet built up this internal bulk, and perhaps because its internal guts are themselves a defined language/system, it may never suffer what has happened to GCC. As more folks get comfortable with LLVM, we will see a number of architectures ported to it. The MSP430 backend was done specifically to demonstrate that you can add a target literally in an afternoon. By the end of next month, some motivated individual could have all of the targets most of us have ever heard of ported to LLVM. And you don't have to build a cross-compiler; it is always a cross-compiler. I only mention LLVM because the door is now open for targets that have suffered from bad tools to recover.
Some companies, microcontroller vendors in particular, can and will make the programming interface proprietary so that you must use their programming tool (or hack it and take your chances publishing those results, and/or play cat and mouse as they change it to defeat you). And they may have made tools only for Windows, leaving the Linux and Apple folks hanging in the wind. Or they make it so that the only binaries it will load are the ones generated by their tools; here again you may hack through the binary format to allow an alternate compiler, and they may or may not work to defeat you.
Despite the technical problems, the biggest factors are the company's politics, management, marketing teams, and the supply of (or lack of) talent in the engineering staff. Bottom line: follow the dollars, not the technology or science, to understand why this language is supported and not that one, or why the support for a given language is good, bad, or marginal.
What language should you learn as a result of all of this? Start with assembler on at least three different architectures. Then C, and then C++ if you feel you really need it. C and assembler are your primary languages for embedded (depending on your definition of embedded). In practice, we write assembler mostly for initial boot code and to support C: interrupt handling or special instructions that are needed that the compiler cannot generate. There are places like microcontrollers where it may very well make sense to use assembler for various reasons: tools, limited chip resources, etc. Even if you don't use assembler, knowing it makes you a much better high-level programmer.
You do need to decide what your definition of embedded is. Is it API and library calls for an application on a(n embedded) Linux system (indistinguishable from the same program/calls on a desktop system)? Or, at the other end of the spectrum, are you talking about a microcontroller with maybe 256 or 1024 bytes (not mega or giga, but bytes) of program space? Or something in the middle? The majority of the "embedded" folks out there are closer to the API calls for applications on an operating system (RTOS, Linux, WinCE, etc.) than to the deeply embedded end, so that means C, maybe C++ (always be able to fall back on C), while avoiding Python and other scripty languages that are resource hogs.
Some 8-bit parts cannot efficiently access data on a stack. Instead of using a stack to pass parameters, auto variables and parameters are statically allocated; typically, a linker allocates the automatic variables for main() at one end of memory, then allocates the variables for functions that are called by main() and nothing else, then the variables for functions that are called by those functions and nothing else, etc. This will yield an optimal allocation fairly easily, subject to some caveats:
Recursion can only be supported by adding code to explicitly copy variables onto some sort of stack arrangement; in many compilers, it's simply not supported at all.
If a function looks like it "might" call another function, the linker will assume it can do so in all cases (e.g. it may be that when 'foo' calls 'bar', one of its parameters might always have a value such that 'bar' won't call 'boz', but the linker won't know that).
Any call to a function pointer with a certain signature will be regarded as a call to all functions with the same signature whose address is taken.
If the evaluation of more than one parameter to a function requires making additional function calls, additional temporary storage must generally be pessimistically allocated even if optimal placement of the parameter storage could have avoided that.
There are many types of C programs for which the above restrictions pose no problem at all, and many more for which they pose a nuisance but not a huge one (e.g. by adding dummy parameters or return values to ensure that different classes of indirectly-called functions have different signatures, as sketched below). Unfortunately, the code generated by a C++-to-C pre-compiler will almost always involve function pointers whose call graph cannot be reasonably divined, so using C++ on such a platform is apt to be difficult if not impossible.
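A sketch of that dummy-parameter workaround (the handler names are purely illustrative):

    // Two classes of indirectly-called functions are given deliberately
    // different signatures via dummy parameters, so a static-allocation
    // linker can keep their call graphs (and overlaid locals) separate.
    typedef void (*TimerHandler)(char /* dummy */);
    typedef void (*UartHandler)(int /* dummy */);

    void onTick(char) { /* periodic work */ }
    void onByte(int)  { /* consume a received byte */ }

    int main(void) {
        TimerHandler t = onTick;
        UartHandler  u = onByte;
        // A call through 't' is assumed to reach only the char-signature
        // functions, and a call through 'u' only the int-signature ones.
        t(0);
        u(0);
        return 0;
    }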

Data parallel libraries in C/C++

I have a C# prototype that is heavily data-parallel, and I've had extremely successful order-of-magnitude speed-ups using the Parallel.For construct in .NET 4. Now I need to write the program in native code, and I'm wondering what my options are. I would prefer something somewhat portable across various operating systems and compilers, but if push comes to shove, I can go with something Windows/VC++.
I've looked at OpenMP, which looks interesting, but I have no idea whether this is looked upon as a good solution by other programmers. I'd also love suggestions for other portable libraries I can look at.
If you're happy with the level of parallelism you're getting from Parallel.For, OpenMP is probably a pretty good solution for you -- it does roughly the same kinds of things. There's also work and research being done on parallelized implementations of the algorithms in the standard library. Code that uses the standard algorithms heavily can gain from this with even less work.
Some of this is done using OpenMP, while other work uses hand-written code to (attempt to) provide greater benefits. In the long term we'll probably see more of the latter, but for now the OpenMP route is probably a bit more practical.
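To give a flavor of how close OpenMP is to Parallel.For, here is a minimal sketch (compile with -fopenmp on GCC/Clang or /openmp on MSVC; the loop body is just illustrative):

    #include <cmath>
    #include <vector>

    int main() {
        std::vector<double> v(1000000, 1.0);
        // The runtime splits the iteration range across a team of threads,
        // much as Parallel.For does in .NET.
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(v.size()); ++i)
            v[i] = std::sqrt(v[i]) * 2.0;
        return 0;
    }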
If you're using Parallel.For in .NET 4.0, you should also look at the Parallel Patterns Library (PPL) in VS2010 for C++; many things are similar, and it is library-based.
If you need to run on another platform besides Windows, the PPL is also implemented in Intel's Threading Building Blocks, which has an open-source version.
Regarding the other comment on similarities/differences vs. .NET 4.0: the parallel loops in the PPL (parallel_for, parallel_for_each, parallel_invoke) are virtually identical to the .NET 4.0 loops. The task model is slightly different, but should be straightforward.
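To make the similarity concrete, a minimal sketch of the PPL loop (Windows-only header <ppl.h>; TBB's parallel_for has the same shape):

    #include <ppl.h>
    #include <vector>

    int main() {
        std::vector<double> v(1000000, 1.0);
        // First index, one-past-last index, body lambda -- structurally
        // the same as .NET 4.0's Parallel.For.
        concurrency::parallel_for(size_t(0), v.size(), [&](size_t i) {
            v[i] *= 2.0;
        });
        return 0;
    }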
You should check out Intel's Threading Building Blocks. Visual Studio 2010 also offers a native Concurrency Runtime. The .NET 4.0 libraries and the ConcRT are designed very similarly, if not identically.
If you want something that is versatile, as in portable across various OSes and environments, it would be very difficult not to consider Java. It is very similar to C#, so it would be a very easy transition.
Unless you want to pull out your ninja scalpel and make your code extremely efficient, I would say Java over VC++ or C++.

Advantage of porting VC6 to VC2005/VC2008?

I was asking my team to port our VC6 application to VC2005, and they are ready to allot some time to do it. Now they need to know what the advantages of porting are.
I don't think they really understand what it means to adhere to the standard.
Help me list out the advantages of porting.
Problems I am facing are:
1) No debugging support for standard containers
2) Not able to use Boost libraries
3) We do a lot of query generation, but use the CString Format function, which is not type-safe
4) Much time is spent troubleshooting VC6 problems, like vector<vector<int>> needing a space between the closing >> (see the illustration below)
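For illustration, the spelling VC6 rejects versus the one it accepts:

    #include <vector>
    using std::vector;

    // vector<vector<int>> a;  // VC6 (like any pre-C++11 compiler) parses '>>' as right-shift: error
    vector<vector<int> > b;    // the extra space parses correctly everywhere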
Advantages:
More standards-compliant compiler. This is a good thing because it will make it easier to port to another platform (if you ever want to do that). It also means you can look things up in the standard rather than in Microsoft's documentation. In the end you will have to upgrade your compiler at some point in the future; the sooner you do it, the less work it will be.
No longer supported by MS. The new SDK doesn't work, 64-bit doesn't work, and I don't think they're still fixing bugs either.
Nicer IDE. Personally, I really prefer tabs to MDI. I also think that it's much easier to configure Visual Studio (create custom shortcuts, menu bars, etc.). Of course that's subjective. Check out an express edition and see if you agree.
Better plugin support. Some plugins aren't available for VC6.
Disadvantages:
Time it takes to port. This very much depends on what kind of code you have. If your code heavily uses non-standards compliant VC6 features, it might take some time. As Andrew said, if you're maintaining an old legacy project, it might not be worth it.
Worse Performance. If you're developing on really old computers, Visual Studio may be too slow.
Cost. I just had a quick look, and Visual Studio licenses seem to be a bit more expensive than VC6's.
Why VC2005? If you are going to invest the time (and testing!) to upgrade from VC6, why not target VC2008?
If you're maintaining a legacy project then there may be no advantage in porting. Simply converting projects and fixing up compiler problems could take weeks of time and introduce instability.
If you're actively developing a product then the main advantage is that you'll no longer be using a product that's over eight years old - which is clearly a good thing.
More recent versions of the Windows SDK don't work with VC6 - if you want to use the latest Windows features, you'll need a more recent compiler.
The later compilers are said to be more standards conforming. I'm sorry I can't be more specific. I do know that VC6 generates lots of compiler warnings just for using standard template classes.
If you use any external libraries that are compiled with a later compiler, you'll need to use something compatible.
Prepare for something of a harsh transition - the IDEs are more different than they should be.
To ensure complete compatibility of the application with different versions of the base platform, and to rectify any errors found, thereby giving the end user the freedom to use his own version of the base platform.
I'm not saying you shouldn't convert, but to take your specific points:
1) No debugging support for standard containers
I debug code using standard containers with VC++ 6 all the time. What's your problem here?
2) Not able to use Boost libraries
True. You may find you can use some of the simpler stuff.
4) Much time is spent troubleshooting VC6 problems, like vector<vector<int>> needing a space between the closing >>
Um, that is a syntax error (at least in the version of C++ understood by VC++6) and will be flagged as such. If your team is spending "much time" on this sort of thing, you need another team.
Edit:
3) We do a lot of query generation, but use the CString Format function, which is not type-safe
It will be equally type-unsafe under VS2005. I don't see why this is a reason for porting. If you want type safety, use the standard C++ I/O mechanisms.
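For example, a stream-based query builder gets its type checking from overload resolution rather than from a format string (a sketch; the function and names are just illustrative):

    #include <sstream>
    #include <string>

    std::string makeQuery(const std::string& table, int id) {
        std::ostringstream q;
        // Each '<<' is overload-resolved at compile time, so a mismatched
        // argument type is a compile error, not a runtime surprise.
        q << "SELECT * FROM " << table << " WHERE id = " << id;
        return q.str();
    }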
If your team can't see any advantage and you are unable to explain any advantage, why are you asking them to do this?
Sounds like you are porting just for the sake of it.