How ubiquitous is hash_map?

The hash_map and hash_set headers aren't included in the C++ standard yet, but they're available as extensions with all the compilers I've used lately.
I'm wondering how much I can rely on these in real code without sacrificing portability. I'm working on tools projects that need to run on a host of architectures and compilers, including:
Linux (x86_64, AMD/Intel): GCC, Intel, Portland Compilers
AIX (Power): GCC, xlC
Cray XT Series (AMD): GCC, Portland, Pathscale Compilers
IBM Blue Gene Series (Power): xlC, GCC
SGI Altix (Itanium): Intel compilers
Windows: Not really a priority, but feel free to provide useful answers.
I realize some of these are pretty exotic, but that's not the point. What are your experiences with STL extensions across multiple platforms and compilers? Are they ubiquitous yet? Would you use them in your project?

I would probably look for the Boost equivalent and use that. At least they have some pressure from their users to be platform independent. I can't imagine what would happen if you filed a bug against the GCC and Intel compilers and told them to reconcile their differences on how hash_map was implemented. At best you would get them to talk to each other. Suppose you even achieved that; then you've only fixed how the Intel and GCC compilers differ. Good luck getting everyone else together and solving the problem within a few years.
At least with Boost you know any differences across platforms are being worked out by one organization.
EDIT
The Boost equivalent is apparently unordered_set or unordered_map. (thanks Head Geek)
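In the meantime, one way to keep real code portable across hash_map, Boost, and the eventual standard version is to hide the choice behind a single name. A minimal sketch, assuming Boost as the fallback; the HAVE_STD_UNORDERED_MAP macro is a hypothetical configure-time switch, not from any real build system:

// Select one hash map implementation in one place; client code only
// ever names HashMap<K, V>::type (the pre-C++11 "template typedef" idiom).
#if defined(HAVE_STD_UNORDERED_MAP)
  #include <unordered_map>
  template <typename K, typename V>
  struct HashMap { typedef std::unordered_map<K, V> type; };
#else
  #include <boost/unordered_map.hpp>
  template <typename K, typename V>
  struct HashMap { typedef boost::unordered_map<K, V> type; };
#endif

// Usage:
// HashMap<std::string, int>::type word_counts;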

NO, you would write your own if you're a large enough organization/project. That way you can tailor the containers to better suit your needs and address portability issues. EA did this with their EASTL, intended to target all gaming platforms: PC, Mac, Xbox 360, Wii, PS2, PS3, etc.

Related

How to compile c/c++ to ms-dos .com programs?

I use Code::Blocks with GNU GCC Compiler.
My question is: is there any way to compile c/c++ code to ms-dos 16bit (.com) executable format?
I tried to set the build options and search the compiler parameters on the net, but I couldn't find anything.
You can certainly compile C and/or (an ancient dialect of) C++ to a 16-bit MS-DOS .com file. The compiler/linker you have with Code::Blocks almost certainly can't do that though.
In particular, at least to my knowledge, gcc has never even attempted to generate code for a 16-bit, segmented-memory environment. There was at least one port of gcc to DOS (DJGPP), but it produces .exe files, not .com, and it requires a proprietary DOS extender. It originally used an ancient version of gcc, but has since been updated to a much newer version.
If you really need to generate a .com file, there are quite a few options, but all the compilers are quite old, so especially with respect to C++ the language they accept is quite limited.
Tool chains that target(ed) MS-DOS.
Caveat: As already noted, all of these are very old. Generally speaking, the C they accept is reasonably conformant C89, but only for fairly small programs (both in terms of code and data size--of necessity: .com files are basically limited to a combined total of 64Kbytes of data and code). The differences between the C++ they accept and anything even sort of close to modern is much more profound (e.g., some didn't support templates at all). All mention of conformance here is relative to other compilers of the time; by modern standards, their conformance is uniformly terrible.
Microsoft: Only sold C++ compilers for MS-DOS for a fairly short time--they were somewhat late into the market, and moved out of it to compilers that produced only 32-bit Windows executables fairly early. Known more for optimization than language conformance.
Borland: Mirror image of Microsoft. Better conformance, poorer optimization, probably the last to abandon the MS-DOS market. Their last few compilers for MS-DOS even supported C++ templates (fairly new at the time).
Watcom: one of the few that's still available as a free download, but without commercial support. When it was new, this was generally considered one of the best available for both conformance and optimization. It's apparently been updated (to at least some extent) relatively recently, but I haven't used a recent version so I can't really comment on those updates.
Metaware: Quite an expensive option at the time. I never used it, but some people I respected highly considered it the best compiler you could get. Mostly targeted embedded systems.
Datalight/Zortech/Symantec/Digital Mars: the other one that's still officially available. Had a small but extremely loyal following. I tried it for a while, but never found a compelling reason to prefer it over others. Digital Mars still maintains this compiler, so it's one of the few that still gets fairly regular updates.
There were quite a few more back then as well, but these probably account for well over 90% of the market at the time.
What you are looking for is exe2bin. This was a utility that came with DOS (and with some compilers/assemblers) to convert .EXE-format object code into the .COM format (code and data in one 64K segment).
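To make that concrete, here is a minimal sketch; the program is trivial on purpose (everything must fit the .COM constraints), and the build commands in the trailing comment are from memory of old toolchains, so verify them against your compiler's manual:

/* hello.c - small enough for the tiny memory model (code + data < 64K). */
#include <stdio.h>

int main(void)
{
    printf("Hello from a .COM program!\n");
    return 0;
}

/* Possible builds (from memory; treat as assumptions and check the docs):
     tcc -mt -lt hello.c           Borland Turbo C: tiny model, /t linker
                                   switch, producing HELLO.COM directly
   or build an .EXE and convert it with the DOS utility:
     exe2bin hello.exe hello.com   works only if the .EXE has no relocations
                                   and fits one 64K segment */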

C++ Standard Library Portability

I work on large-scale, multi-platform, real-time networked applications. The projects I work on lack any real use of containers or the Standard Library in general, no smart pointers or really any "modern" C++ language features. Lots of raw dynamically allocated arrays are commonplace.
I would very much like to start using the Standard Library and some of the C++11 spec; however, there are many people also working on my projects that are against it because "STL / C++11 isn't as portable, we take a risk using it". We run software on a wide variety of embedded systems as well as fully fledged Ubuntu/Windows/Mac OS systems.
So, to my question: what are the actual portability issues concerning the Standard Library and C++11? Is it just a case of having g++ past a certain version? Are there some platforms that have no support? Are compiled libraries required and, if so, are they difficult to obtain/compile? Has anyone been seriously burnt by non-portable pure C++?
Library support for the new C++11 Standard is pretty complete in Visual C++ 2012, gcc >= 4.7, and Clang >= 3.1, apart from some concurrency stuff. Compiler support for all the individual language features is another matter. See this link for an up-to-date overview of supported C++11 features.
For an in-depth analysis of C++ in an embedded/real-time environment, Scott Meyers's presentation materials are really great. They discuss the costs of virtual functions, exception handling, templates, and much more. In particular, you might want to look at his analysis of C++ features such as heap allocations, runtime type information and exceptions, which have indeterminate worst-case timing, something that matters for real-time systems.
It's those kinds of issues, not portability, that should be your major concern (if you care about your granny's pacemaker...)
Any compiler for C++ should support some version of the standard library. The standard library is part of C++. Not supporting it means the compiler is not a C++ compiler. I would be very surprised if any of the compilers you're using at the moment didn't portably support the C++03 standard library, so there's no excuse there. Of course, the compiler will have to have been updated since 2003, but unless you're compiling for some archaic system that is only supported by an archaic compiler, you'll have no problems.
As for C++11, support is pretty good at the moment. Both GCC and MSVC have a large portion of the C++11 standard library supported already. Again, if you're using the latest versions of these compilers and they support the systems you want to compile for, then there's no reason you can't use the subset of the C++11 standard library that they support - which is almost all of it.
C++ without the standard library just isn't C++. The language and library features go hand in hand.
There are lists of supported C++11 library features for GCC's libstdc++ and MSVC 2012. I can't find anything similar for LLVM's libc++, but they do have a clang c++11 support page.
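If you have to straddle old and new toolchains like this, the usual trick is to key optional C++11 usage off the __cplusplus macro. A minimal sketch (the alias name is illustrative, and some older compilers misreport __cplusplus, so treat this as a starting point rather than a guarantee):

// Conforming C++11 compilers define __cplusplus as 201103L or greater.
#if __cplusplus >= 201103L
  #include <memory>
  typedef std::shared_ptr<int> IntHandle;   // C++11 standard library type
#else
  #include <boost/shared_ptr.hpp>
  typedef boost::shared_ptr<int> IntHandle; // pre-C++11 fallback via Boost
#endif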
The people you are talking to are confusing several different issues. C++11 isn't really portable today. I don't think any compiler supports it 100% (although I could be wrong); you can get away with using large parts of it if (and only if) you limit yourself to the most recent compilers on two or three platforms (Windows and Linux, and probably Apple). While these are the most visible platforms, they represent but a small part of all machines. (If you're working on large scale networked applications, Solaris will probably be important, and Sun CC. Unless Sun has greatly changed since I last worked on it, that means that there are even parts of C++03 that you can't count on.)
The STL is a completely different issue. It depends partially on what you mean by the STL, but there is certainly no portability problem today in using std::vector. locale might be problematic on a very few compilers (it was with Sun CC—with both the Rogue Wave and the Stlport libraries), and some of the algorithms, but for the most part, you can pretty much count on all of C++03.
And in the end, what are the alternatives? If you don't have std::vector, you end up implementing something pretty much like it. If you're really worried about the presence of std::vector, wrap it in your own class—if ever it's not available (highly unlikely, unless you go back with a time machine), just reimplement it, exactly like we did in the pre-standard days.
Use STLPort with your existing compiler, if it supports it. This is no more than a library of code, and you use other libraries without problem, right?
Every permitted implementation-defined behaviour is listed in the publicly available standard draft. There is next to nothing less portable in C++11 than in C++98.

Advantages and disadvantages of Open Watcom [closed]

In a post on StackOverflow it was recommended to support multiple (in this case C/C++) compilers if feasible, since this forces you to write more standard-compliant code and helps in finding bugs.
So I was looking for additional free C/C++ compilers I could add support for to my project (it is written in a combination of C and C++). I found Open Watcom to be an interesting candidate.
So my question is: what are the advantages and disadvantages of the Open Watcom C/C++ compiler in comparison to others (for example gcc/g++, Visual C++, etc.)?
There are probably no particular advantages, since if portable code is your aim you would generally try to restrict your code to the standard subset implemented by all compilers. I would say lowest common denominator, but that may seem somewhat derogatory.
The advantages of one compiler over another generally lie in the extensions it provides, the libraries it includes, or the performance of the generated code; if portability is your aim, you are probably interested in none of these. It is not the advantages of one compiler over another that should interest you in this case, but rather its adherence to and compliance with the ISO standards.
In its earlier commercial incarnation, Watcom was famously one of the best optimising compilers available; I doubt, however, whether it has kept pace with processor development since then (or even the transition from 16-bit to 32-bit x86!).
Its one feature that may be seen as an advantage in some cases is that it supports DOS, OS/2 and Windows, but that is probably only an advantage if legacy systems maintenance is your aim. Efforts to port it to Linux and BSD and processors other than x86 exist but are not complete, while GCC is already there and has been for years.
I would suggest that if you can support GCC and VC++ you probably have sufficient compiler independence (but recommend you compile with high warning level settings: -Wall -Werror in GCC and /W4 /WX in VC++). I think that compiler portability is a trivial issue compared with OS portability, and what you really need to consider is cross-platform library support rather than compiler independent code support.
If however playing with compilers is your thing, also consider the Digital Mars compiler. Like Watcom, this also has commercial compiler heritage, having been the Zortech/Symantec C/C++ compiler in a previous life.
Something Watcom has in its favor, if you're a 'haxxor', is the fact that you can define out-of-the-ordinary calling conventions using #pragma aux. Other than that, I see no reason to even attempt to use such a dated compiler unless you had horrible hardware restrictions. IMO, there are only three to worry about: GCC, ICC and MSVC.
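For illustration, the kind of thing #pragma aux allows, reconstructed from memory of the Watcom manuals (treat the exact clause syntax as an assumption and check the Open Watcom C/C++ User's Guide):

/* Tell Watcom to pass the two arguments in EAX and EDX, return the
   result in EAX, and note that ECX is clobbered -- in effect declaring
   a custom register calling convention for this one function. */
int add(int a, int b);
#pragma aux add parm [eax] [edx] value [eax] modify [ecx];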
Some people here use expressions having to do with the Watcom (actually OpenWatcom) compiler being "dated." So what does it mean?
It could mean that it doesn't implement the latest C standard. How many "non-dated" compilers do?
It could mean that it doesn't provide frameworks, as it is primarily an environment for C and Fortran, and somewhere far after that comes a C++ implementation which I cannot judge.
It could mean that it cannot generate excellent assembly code from garbage C code.
It could mean that it doesn't support x64 development.
It could mean that the debugger is rudimentary and supports assembly debugging.
Now to what it does do - in addition to supporting 16-bit real and protected mode code:
It produces excellent 32-bit protected mode code in the flat memory model everyone uses for the Win32 environment.
Its code generating capabilities are excellent and it's right up there at the top with more "non-dated" compilers.
It's easy to tune multi-threaded code using its profiler.
How do you "feel" a compiler? I for one don't know how to do that. Is it how the error messages are written? Is it in the messages on the console log?
The world's greatest network operating system - Novell Netware - had Watcom as its development environment. That says a great deal about Watcom. And lest anyone forget: Netware died due to poor marketing management combined with Redmond foul play. It did not die from lack of technological excellence.
I guess what I'm trying to say is that you guys that don't know what you're talking about should perhaps be a little less eager to write answers.
I know I know it's all about getting those coveted points and badges and what have you. And how you get them is irrelevant, right?
The Open Watcom compiler is somewhat outdated, and it feels like it. It is based on what was, a long time ago, a good compiler for making MS-DOS games. Currently it is not very standard compliant and its standard library is in an immature state.
I would prefer more modern and popular compilers like Intel CC, g++, VC++ or Clang. Not sure about Borland C; I haven't tried it in a long time.
Advantages:
it's free
it's open source. You can alter it and its runtime libraries any way you like
it is cross-platform. You can run it, among other platforms, on Windows and Linux. Moreover, you can build programs with it for platforms other than the one you are building on
Disadvantages:
it is a bit outdated, though not as much as in the past
Positive
The code and projects are not bloated like the projects in Microsoft Visual Studio/C++ (not hundreds of vcproj and other files and folders). You can just generate a makefile like in GCC (which is easier to understand than the Visual Studio project makefiles...)
Even the installation takes little time (on x64 Win 7), in comparison to the 2+ GB Visual Studio install...
Compared to GCC it may seem easier to handle
Negative
The C library is missing the strn... functions (strndup, strncmpi, etc.) and getopt_long
No ARM support (as of 1 July 2015)
As an editor you should really use Notepad++, not the internal editor

Intel C++ compiler as an alternative to Microsoft's?

Is anyone here using the Intel C++ compiler instead of Microsoft's Visual C++ compiler?
I would be very interested to hear your experience about integration, performance and build times.
The Intel compiler is one of the most advanced C++ compilers available. It has a number of advantages over, for instance, the Microsoft Visual C++ compiler, and one major drawback. The advantages include:
Very good SIMD support; as far as I've been able to find out, it is the compiler with the best support for SIMD instructions.
Supports both automatic parallelization (multi-core optimizations) and manual parallelization (through OpenMP), and does both very well.
Supports CPU dispatching. This is really important, since it allows the compiler to target the processor for optimized instructions when the program runs. As far as I can tell it is the only C++ compiler available that does this, unless G++ has introduced it in the meantime.
It is often shipped with optimized libraries, such as math and image libraries.
However it has one major drawback: the dispatcher mentioned above only works on Intel CPUs, which means that advanced optimizations will be left out on AMD CPUs. There is a workaround for this, but it is still a major problem with the compiler.
To work around the dispatcher problem, it is possible to replace the generated dispatcher code with a version that works on AMD processors as well; one can, for instance, use Agner Fog's asmlib library, which replaces the compiler-generated dispatcher function. Much more information about the dispatching problem, and more detailed technical explanations of some of the topics, can be found in the Optimizing software in C++ paper - also from Agner (which is really worth reading).
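As a rough illustration of what such a dispatcher does (this is not Intel's or asmlib's actual code; it is a sketch using the GCC/Clang builtins __builtin_cpu_init and __builtin_cpu_supports, with illustrative function names):

#include <cstdio>

static void work_sse2()   { std::puts("SSE2-tuned code path"); }
static void work_scalar() { std::puts("generic fallback path"); }

int main()
{
    __builtin_cpu_init();                 // populate CPU feature data
    // Choose the code path once, based on the features the CPU reports --
    // note this checks features only, not the vendor ID string.
    void (*work)() = __builtin_cpu_supports("sse2") ? work_sse2
                                                    : work_scalar;
    work();
    return 0;
}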
On a personal note, I have used the Intel C++ compiler with Visual Studio 2005, where it worked flawlessly. I didn't experience any problems with Microsoft-specific language extensions; it seemed to understand those I used, but perhaps the ones mentioned by John Knoeller were different from the ones I had in my projects.
While I like the Intel compiler, I'm currently working with the Microsoft C++ compiler, simply because of the extra financial investment the Intel compiler requires. I would only use the Intel compiler as an alternative to Microsoft's or the GNU compiler if performance were critical to my project and I had the finances in order ;)
I'm not using Intel C++ compiler at work / personal (I wish I would).
I would use it because it has:
Excellent inline assembler support. Intel C++ supports both Intel and AT&T (GCC) assembler syntaxes, for x86 and x64 platforms (a small example follows this list). Visual C++ can handle only Intel assembly syntax and only for x86.
Support for SSE3, SSSE3, and SSE4 instruction sets. Visual C++ has support for SSE and SSE2.
It is based on the EDG C++ front end, which has a complete ISO/IEC 14882:2003 standard implementation. That means you can use / learn every C++ feature.
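As a small example of the AT&T-syntax extended asm mentioned in the first point, which GCC, Clang and the Intel compiler accept on x86/x64 (MSVC's Intel-syntax __asm blocks look quite different):

#include <cstdio>

int main()
{
    int a = 2, b = 3, sum;
    // AT&T syntax: source before destination; the "0" constraint ties
    // sum's starting value to a, so this computes sum = a + b.
    __asm__("addl %1, %0" : "=r"(sum) : "r"(b), "0"(a));
    std::printf("2 + 3 = %d\n", sum);
    return 0;
}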
I've had only one experience with this compiler, compiling STLPort. It took MSVC around 5 minutes to compile it and ICC was compiling for more than an hour. It seems that their template compilation is very slow. Other than this I've heard only good things about it.
Here's something interesting:
Intel's compiler can produce different versions of pieces of code, with each version being optimised for a specific processor and/or instruction set (SSE2, SSE3, etc.). The system detects which CPU it's running on and chooses the optimal code path accordingly; the CPU dispatcher, as it's called.
"However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string," Fog details. "If the vendor string says 'GenuineIntel' then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."
OSnews article here
I tried using Intel C++ at my previous job. IIRC, it did indeed generate more efficient code at the expense of compilation time. We didn't put it to production use though, for reasons I can't remember.
One important difference compared to MSVC is that the Intel compiler supports C99.
Anecdotally, I've found that the Intel compiler crashes more frequently than Visual C++. Its diagnostics are a bit more thorough and clearly written than VC's. Thus, it's possible that the compiler will give diagnostics that weren't given with VC, or will crash where VC didn't, making your conversion more expensive.
However, I do believe that Intel's compiler allows you to link with Microsoft runtimes like the CRT, easing the transition cost.
If you are interoperating with managed code you should probably stick with Microsoft's compiler.
Recent Intel compilers achieve significantly better performance on floating-point heavy benchmarks, and are similar to Visual C++ on integer heavy benchmarks. However, it varies dramatically based on the program and whether or not you are using link-time code generation or profile-guided optimization. If performance is critical for you, you'll need to benchmark your application before making a choice. I'd only say that if you are doing scientific computing, it's probably worth the time to investigate.
Intel allows you a month-long free trial of its compiler, so you can try these things out for yourself.
I've been using the Intel C++ compiler since the first Release of Intel Parallel Studio, and so far I haven't felt the temptation to go back. Here's an outline of dis/advantages as well as (some obvious) observations.
Advantages
Parallelization (vectorization, OpenMP, SSE) is unmatched in other compilers.
Toolset is simply awesome. I'm talking about the profiling, of course.
Inclusion of optimized libraries such as Threading Building Blocks (okay, so Microsoft replicated TBB with PPL), Math Kernel Library (standard routines, and some implementations have MPI (!!!) support), Integrated Performance Primitives, etc. What's also great is that these libraries are constantly evolving.
Disadvantages
Speed-up is Intel-only. Well duh! It doesn't worry me, however, because on the server side all I have to do is choose Intel machines. I have no problem with that, some people might.
You can't really do OSS development or anything like that with it, because the project file format is different. Yes, you can have both VS and IPS file formats, but that's just weird. You'll get lost synchronising project options whenever you make a change. Intel's compiler has twice the number of options, by the way.
The compiler is a lot more finicky. It is far too easy to set incompatible project settings that will give you a cryptic compilation error instead of a nice meaningful explanation.
It costs additional money on top of Visual Studio.
Neutrals
I think that the performance argument is not a strong one anymore, because plenty of libraries such as Thrust or Microsoft AMP let you use GPGPU which will outgun your cpu anyway.
I recommend anyone interested to get a trial and try out some code, including the libraries. (And yes, the libraries are nice, but C-style interfaces can drive you insane.)
The last time the company I work for compared the two was about a year ago (maybe two). The Intel compiler generated faster code, usually only a bit faster, but in some cases quite a bit.
But it couldn't handle some of the MS language extensions that we depended on, so we ended up sticking with MS. It was VS 2005 that we were comparing it to. And I'm wracking my brain to remember exactly what MS extension the Intel compiler couldn't handle. I'll come back and edit this post if I can remember.
Intel C++ Compiler has AMAZING (human) support. Talking to Microsoft can literally take days. My non-trivial issue was solved through chat in under 10 minutes (including membership verification time).
EDIT: I have talked to Microsoft about problems in their products such as Office 2007, even got a bug reported. While I eventually succeeded, the overall size and complexity of their products and organization hierarchy is daunting.

Why is it important for C / C++ Code to be compilable on different compilers?

I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable on different compilers.
Without any real life experience with gcc / g++, it seems to me that it supports every major platform one can imagine, so code that compiles on g++ can run on almost any system. So why would someone bother to have his code run on the MS compiler, the Intel compiler and others?
I can think of some reasons, too. As the FAQ suggests, I'll try to post them as an answer, as opposed to including them in my own question.
Edit: Conclusion
You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me:
Contributors are much more likely to work on my project, or just use it, if they can use the compiler of their choice
Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards reinforce one another, so it's a good idea
On the other hand, I still believe that there are other things which are more important, and now I know that sometimes it isn't important at all.
And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
Some reasons from the top of my head:
1) To avoid being locked with a single compiler vendor (open source or not).
2) Compiling code with different compilers is likely to discover more errors: warnings are different and different compilers support the Standard to a different degree.
It is good to be compilable on MSVC, because some people may have projects that they build in MSVC that they want to link your code into, without having to set up an entirely different build system.
It is good to be compilable under the Intel compiler, because it frequently compiles faster code.
It is good to be compilable under Clang, because it can give better error messages and provide a better development experience, and it is an easier project to work on than GCC and so may gain additional benefits in the future.
In general, it is good to keep your options open, because there is no one compiler that fits all needs. GCC is a good compiler, and is great for most purposes, but you sometimes need something else.
And even if you're usually only going to be compiling under GCC, making sure your code compiles under other compilers is also likely to help find problems that could prevent your code from working with past and future versions of GCC. For instance, if there's something that GCC is less strict about now but later adds checks for, another compiler may catch it in advance, helping you keep your code cleaner. I've found this helpful in the reverse case, where GCC caught more potential problems with warnings than MSVC did (MSVC was the only compiler we needed to support, as we were only shipping on Windows, but we did a partial port to the Mac under GCC in our free time), which allowed me to produce cleaner code than I would have otherwise.
Portability. If you want your code to be accessible by the maximum number of people possible, you have to make it work on the widest range of possible compilers. It's the same idea as making a web site run on browsers other than IE.
Some of it is political. Companies have standards, people have favorite tools etc. Telling someone that they should use X, really puts some people off, and makes it really inaccessible to others.
Nemanja brings up a good point too: targeting a certain compiler locks you into using it. In the Open Source world, this might not be as big of a problem (although people could just stop developing on it and it becomes obsolete), but what if the company you buy it from discontinues the product, or goes out of business?
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.)
So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept?
Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code.
Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported.
If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used.
Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C.
Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished.
That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
If you are building for different platforms, you will end up using different compilers. Moreover, C++ compilers tend to be always slightly behind the C++ standard, which means they usually change their adherence to it as time passes. If you target the common denominator to all major compilers then the code maintenance cost will be lower.
It's very common for applications (especially open-source ones) that other developers will want to use different compilers. Some would rather use Visual Studio with the MS compiler for development purposes. Some would rather use the Intel compiler for claimed performance benefits and such.
So here are the reasons I can think of
if speed is the biggest concern and there is a special, highly optimized compiler for some platform
if you build a library with a C++ interface (classes and templates, instead of just functions). Because of name mangling and other stuff, the library must be compiled with the same compiler as the client code, and if the client wants to use Visual C++, he must be able to compile the lib with it
if you want to support some very rare platform that does not have gcc support
(For me, those reasons are not significant, since I want to build a library that uses C++ internally, but has a C interface.)
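For what it's worth, a minimal sketch of that parenthetical approach (the names are illustrative): C++ and its containers inside, with an extern "C" boundary so client code can be built with any compiler, since C linkage sidesteps the name-mangling problem described above:

#include <vector>

// Opaque handle: clients never see the C++ type behind it, so the
// library and the client need not share an ABI beyond plain C.
extern "C" {
    typedef void* buf_t;

    buf_t buf_create(void)         { return new std::vector<int>; }
    void  buf_push(buf_t b, int v) { static_cast<std::vector<int>*>(b)->push_back(v); }
    void  buf_destroy(buf_t b)     { delete static_cast<std::vector<int>*>(b); }
}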
Typically these are the reasons that I've found:
cross-platform (Windows, Linux, Mac)
different developers doing development on different OS's (while not optimal, it does happen - testing usually takes place on the target platform only).
Compiler companies go out of business - or stop development on that language. If you know your program compiles/runs well using another compiler, you've covered your bet.
I'm sure there are other answers as well, but these are the most common reasons I've run into so far.
Several projects use GCC/G++ as a "day-to-day" compiler for normal use, but every so often will check to make sure their code follows the standards with the Comeau C/C++ compiler. Their website looks like a nightmare, and the compiler isn't free, but it's known as possibly the most standards-compliant compiler around, and will warn you about things many compilers will silently accept or explicitly allow as a nonstandard extension (yes, I'm looking at you, Mr. I-don't-mind-and-actually-actively-support-your-efforts-to-do-pointer-arithmetic-on-void-pointers-GCC).
Compiling every so often with a compiler as strict as Comeau (or, even better, compiling with as many compilers as you can get your hands on) will let you know of errors people might experience when trying to compile your code, things your compiler allows you to do that it shouldn't, and potentially things that other compilers don't allow you to do that you should. Writing ANSI C or C++ should be an important goal for code you intend to use on multiple platforms, and using the most standards-compliant compiler around is a good way to do that.
(Disclaimer: I don't have Comeau, and don't plan on getting it, and can't get it because I'm on OS X. I do C, not C++, so I can actually know the whole language, and the average C compiler is much closer to the C standard than the average C++ compiler to the C++ standard, so it's less of an issue for me. Just wanted to put this in here because this started to look like an ad for Comeau. It should be seen more as an ad for compiling with many different compilers.)
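To make that pointer-arithmetic jab concrete, here is the kind of code in question; g++ accepts it as an extension (treating sizeof(void) as 1), while a strict ISO compiler such as Comeau rejects it:

#include <cstdio>

int main()
{
    char buffer[] = "abcdefg";
    void* p = buffer;
    p = p + 4;   // ill-formed ISO C++: arithmetic on void* (GCC extension)
    std::printf("%c\n", *static_cast<char*>(p));   // 'e' under the extension
    return 0;
}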
This is one of those "it depends" questions. For open source code, it's good to be portable to multiple compilers. After all, having people in diverse environments build the code is sort of the point.
But for closed source, this is a lot less important. You never want to unnecessarily tie yourself to a specific compiler. But in most of the places I've worked, compiler portability didn't even make it into the top 10 of things we cared about. Even if you never use anything other than standard C/C++, switching a large code base to a new compiler is a dangerous thing to do. Compilers have bugs. Sometimes your code will have bugs that are benign on one compiler, but suddenly a problem on another.
I remember one transition, where one compiler thought this code was just fine:
for (int ii = 0; ii < n; ++ii) { /* some code */ }
for (int ii = 0; ii < y; ++ii) { /* some other code */ }
while the newer compiler complained that ii had been declared twice. We had to go through all of our code and declare loop variables before the loops in order to switch.
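The fix, sketched against the fragment above: hoist the declaration so the variable is declared only once, which both sets of scope rules accept.

int ii;
for (ii = 0; ii < n; ++ii) { /* some code */ }
for (ii = 0; ii < y; ++ii) { /* some other code */ }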
One place I worked was so careful about unintended side effects of compiler switches, that they checked specific compilers into each source tree, and once the code shipped would only use that one compiler to do updates on that code base - forever.
Another place would try out a new compiler for 6 months to a year before they switched over to it.
I find gcc a slow compiler on Windows (nothing to compare against under Linux). So I (sometimes) want to compile my code under other compilers, just for faster development cycles.
I don't think anyone has mentioned it so far, but another reason may be access to certain platform-specific features: Many operating system vendors have special versions of GCC, or even their own home-grown (or licensed and modified) compilers. So if you want your code to run well on several platforms, you may need to choose the right compiler on each platform. Be that an embedded system, MacOS, Windows etc.
Also, speed may be an issue (both compilation speed and execution speed). Back in the PPC days, GCC produced notoriously slow code on PowerPC CPUs, so Apple put a bunch of engineers on GCC to improve that (GCC was very new for the Mac, and all other PowerPC platforms were small). Platforms that are used less may be optimized less in GCC, so using another compiler that's been written for that platform can be faster.
But as a final summary: While there is ideal value in compiling on several compilers, in practice, this is mainly interesting for cross-platform software (and open-source software, because it often gets made cross-platform fairly quickly, and contributors have it easier if they can use their compiler of choice instead of having to learn a new one). If you need to ship on one platform only, shipping and maintenance are usually much more important than investing in building on several compilers if you're only releasing the builds made with one of them. However, you will want to clearly document any deviations from the standard (GCC-isms, for instance) to make the job of porting easier, should you ever have to do it.
Both the Intel compiler and LLVM are faster than gcc. The real reasons to use gcc are:
Infinite hardware support (no other compiler lets you compile Lego Mindstorms code on your old DEC).
it's cheap
best spaghetti optimizer in the business.