How important is standards-compliance? - c++

For a language like C++, the existence of a standard is a must. And good compilers try their best (well, most of the good compilers, at least) to comply. Many compilers have language extensions, some of which are allowed by the standard, some of which are not. Two examples of the latter kind:
gcc's typeof
Microsoft's compilers allow a pure virtual function declaration to have both a pure-specifier (= 0) and a definition (which is prohibited by the standard - let's not discuss why, that's another topic :)
(there are many other examples)
Both examples are useful in the following sense: example1 is a very useful feature which will be available in C++0x under a different name (decltype). example2 is also useful, and Microsoft has decided not to respect a ban that made no sense.
And I am grateful that compilers provide language extensions that help us developers in our routine work. But here's a question: shouldn't there be an option which, when set, mandates that the compiler be as standards compliant as it can be, whether or not the vendor agrees with the standard? For example, Visual Studio has such an option, called "disable language extensions". But hey, they still allow example2.
I want everyone to understand my question correctly. It is a GREAT thing that MSVC allows example2, and I would very much like that feature to be in the standard. It doesn't break any compliant code; it does nothing bad. It just isn't standard.
Would you like Microsoft to disable example2 when "disable language extensions" is set to true? Note that the words Microsoft, example2, etc. are placeholders :)
Why?
Again, just to make sure, the crucial point is: should a compiler bother to provide a compliant version of a feature (optionally set in the settings, and within its limits; I am not talking about export) when it provides a better alternative that is not standard and is perhaps even a superset of the standard, thus not breaking anything?

Standards compliance is important for the fundamental reason that it makes your code easier to maintain. This manifests in a number of ways:
Porting from one version of a compiler to another. I once had to port a 1.2 million-LOC app from VC6 to VC9. VC6 was notorious for being horribly non-compliant, even when it was new. It allowed non-compliant code even on the highest warning levels that the new compiler rejected at the lowest. If the code had been written in a more compliant way in the first place, this project wouldn't (shouldn't) have taken 3 months.
Porting from one platform to another. As you say, the current MS compilers have language extensions. Some of these are shared by compilers on other platforms, some are not. Even if they are shared, the behavior may be subtly different. Writing compliant code, rather than using these extensions, makes your code correct from the word go. "Porting" becomes simply pulling the tree down and doing a rebuild, rather than digging through the bowels of your app trying to figure out why 3 bits are wrong.
C++ is defined by the standard. The extensions used by compilers change the language. New programmers coming online who know C++ but not the dialect your compiler uses will get up to speed more quickly if you write to Standard C++, rather than to the dialect that your compiler supports.

First, a reply to several comments. The MS VC extension in question is like this:
struct extension {
    virtual void func() = 0 { /* function body here */ }
};
The standard allows you to implement the pure virtual function, but not "in place" like this, so you have to write it something like this instead:
struct standard {
    virtual void func() = 0;
};
void standard::func() { /* function body here */ }
As to the original question, yes, I think it's a good idea for the compiler to have a mode in which it follows (and enforces) the standard as accurately as possible. While most compilers have that, the result isn't necessarily as accurate a representation of the standard as you/I would like.
At least IMO, about the only answer to this is for people who care about portability to have (and use) at least a couple of compilers on a regular basis. For C++, one of those should be based on the EDG front-end; I believe it has substantially better conformance than most of the others. If you're using Intel's compiler on a regular basis anyway, that's fine. Otherwise, I'd recommend getting a copy of Comeau C++; it's only $50, and it's the closest thing to a "reference" available. You can also use Comeau online, but if you use it on a regular basis, it's worth getting a copy of your own.
Not to sound like an EDG or Comeau shill or anything, but even if you don't care much about portability, I'd recommend getting a copy anyway -- it generally produces excellent error messages. Its clean, clear error messages (all by themselves) have saved enough time over the years to pay for the compiler several times over.
Edit: Looking at this again, some of the advice is looking pretty dated, especially the recommendation for EDG/Comeau. In the three years since I originally wrote this, Clang has progressed from purely experimental to being quite reasonable for production use. Likewise, the gcc maintainers have (IMO) made great strides in conformance as well.
During the same time, Comeau hasn't released a single new version of their compiler, and there's been a new release of the C++ standard. As a result, Comeau is now fairly out of date with respect to the current standard (and the situation seems to be getting worse, not better -- the committee has already approved a committee draft of a new standard that is likely to become C++14).
As such, although I recommended Comeau at that time, I'd have difficulty (at best) doing so today. Fortunately, most of the advantages it provided are now available in more mainstream compilers -- both Clang and gcc have improved compliance (substantially) as outlined above, and their error messages have improved considerably as well (Clang has placed a strong emphasis on better error messages, almost from its inception).
Bottom line: I'd still recommend having at least two compilers installed and available, but today I'd probably choose different compilers than I did when I originally wrote this answer.

"Not breaking anything" is such a slippery slope in the long run, that it's better to avoid it altogether. My company's main product outlived several generations of compilers (first written in 1991, with RW), and combing through compiler extensions and quiet standards violations whenever it was the time to migrate to a newer dev system took a lot of effort.
But as long as there's an option to turn off or at least warn about 'non-standard extension', I'm good with it.

I would certainly want an option that disables language extensions to disable all language extensions. Why?
All options should do what they say they do.
Some people need to develop portable code, requiring a compiler that only accepts the standard form of the language.
"Better" is a subjective word. Language extensions are useful for some developers, but make things more difficult for others.

I think that it's critical that a compiler provide a standards-only mode if it wants to be the primary one used while developing. All compilers should, of course, compile standards compliant code, but it's not critical that they don't extend if they don't think of themselves as the primary compiler -- for example, a cross-compiler, or a compiler for a less popular platform that is nearly always ported to, rather than targeted.
Extensions are fine for any compiler, but it would be nice if I had to turn them on if I want them. By default, I'd prefer a standards-only compiler.
So, given that, I expect MSVC to be standards-only by default. The same with gcc++.

I think standards compliance is very important.
I always consider source code to be more for the human readers than for the machine(s). So, to communicate the programmer's intention to the reader, abiding by the standard is like speaking the language of the lowest common denominator.
Both at home and work, I use g++, and I have aliased it with the following flags for strict standard compliance.
-Wall -Wextra -ansi -pedantic -std=c++98
Check out this page on Strict ANSI/ISO
I am not a standards expert, but this has served me well. I have written STL-style container libraries which run as-is on different platforms, e.g. 32-bit linux, 64-bit linux, 32-bit solaris, and 32-bit embedded OSE.
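As a concrete illustration (a hypothetical snippet, not from those libraries): GCC's typeof extension, mentioned in the first question above, compiles under g++'s default GNU dialect but is rejected under the strict flags:

// demo.cpp (hypothetical). Plain "g++ demo.cpp" accepts this;
// "g++ -Wall -Wextra -ansi -pedantic -std=c++98 demo.cpp" rejects it,
// because typeof is a GNU extension, not standard C++98.
int main() {
    int x = 42;
    typeof(x) y = x;  // GNU typeof; standard code would name the type (or use C++0x decltype)
    return y - x;     // returns 0
}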

Consider indicators on cars (known as "turn signals" in some jurisdictions); they are a reliable way to determine which direction someone's going to turn off a roundabout... until just one person doesn't use them at all. Then the whole system breaks down.
It didn't "hurt anyone" or obviously "break anything" in IE when they allowed document.someId to be used as a shortcut for document.getElementById('someId').... however, it did spawn an entire generation of coders and even books that consequently thought it was okay and right, because "it works". Then, suddenly, the ten million resulting websites were entirely non-portable.
Standards are important for interoperability, and if you don't follow them then there's little point in having them at all.
Standards-compliance hounds may get hated for "pedanticism" but, really, until everybody follows suit you're going to have portability and compatibility problems forever.

How important standards-compliance is depends on what you are trying to achieve.
If you are writing a program that will never be ported outside of its current environment (especially a program that you're not planning to develop/support for a long time) then it's not very important. Whatever works, works.
If you need your program to remain relevant for a long time, and to be easily portable to different environments, then you will want it to be standards compliant, since that's the only way to (more or less) guarantee that it will work everywhere.
The trick, of course, is figuring out which situation you are actually in. It's very common to start a program thinking it is a short-term hack, and later on find that it's so useful that you're still developing/maintaining it years later. In that situation your life will be much less unpleasant if you didn't make any short-sighted design decisions at the beginning of the program's lifetime.

Related

gnu c++0x backwards compatibility status - can I just switch it on and go?

I have a pretty big c++ code base (not self written).
Numerous libraries, some not so syntactically heavy, some extremely so.
Among others there's heavy use of Boost, some Eigen.
I just love some of the new features of 0x and a quick compile/test tells me that it seems all good.
This question, and this one suggest that there are some things that smell funny.
My current state is:
gcc4.4.3
libstdc++6-4.4
boost-1.40
eigen 3.0 - beta3
using the -std=c++0x flag.
I know the standards committee has agonized about backwards compatibility and endured serious pain.
My question is, did it work? Can I take all that code, switch on c++0x and be certain, that everything does not only compile but also work as expected?
I don't use high 0x magic, just auto and some of the usual favorites explicitly marked "implemented" on GNU C++0x status.
I would certainly recommend using GCC 4.5, as it sports more bug fixes and a more solid implementation of the latest C++0x.
With regard to the questions you linked:
This is just a type definition on a new platform. Don't worry about this, it won't really break anything or be hard to fix.
This is one of the more complicated C++0x features, but shouldn't have that much of an influence on backwards compatibility (except if boost tries to hack itself into a feature that is to become a compiler/language feature).
The only real way to check whether there are problems is to turn on the compiler flag and see if any problems pop up. Better yet, turn on full-fledged warnings (at least on GCC; MSVC has some nagging problems in this regard) to detect as many subtle problems as possible. This isn't watertight, though. You might want to use different compilers (GCC/MSVC/Intel/Clang) to cross-check compatibility, but in the case of C++0x you'll be very limited to the common subset of the compilers you use to check. Currently I use C++0x extensively, and intend to fix any problems that come into being when the finalized standard is implemented.
There is no answer to your question, it depends on your code. Try to compile, fix compile-time problems. Once it compiles, run your test cases, and fix whatever needs to be fixed.
If you don't have test code, start there.
This cannot be answered without knowledge of the code base. In particular, with 100% backwards compatibility you might find the same issues that you can find by upgrading/changing a compiler within the same standard:
If your code is not standard, if it uses features that are specific to the compiler or version, or if it has any instance of undefined behavior or depends on unspecified behaviors that just happen to work on your current platform, then it might not compile (if you are lucky) or exhibit different behavior at runtime (if you are not lucky).
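To make that last point concrete, here is a minimal hypothetical sketch of code that "just happens to work" today but carries no guarantee across compilers or standard modes:

// Undefined behavior: i is modified twice without an intervening
// sequence point, so two compilers (or two versions of the same
// compiler) may legitimately produce different results.
int f() {
    int i = 0;
    return i++ + i++;
}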

What C++(98/03) features are not-so-well supported by poor compilers?

Often I read of some software cutting out some C++ features in order to be compliant with poor/old/exotic C++ compilers.
This one is just the last one I got into: Box2D isn't using namespaces because they need to support:
poor C++ compilers where namespace support can be spotty
A bigger example I can think of is Qt, which relies upon MOC and largely avoids templates, limiting their usage a lot (well, this is at least true for Qt3 and previous versions; Qt4 mostly does that to keep to its conventions).
I'm wondering what compilers are that poor?
There are lots of C++ compilers out there (I have never heard of most of them), but I would hope that all of them support the most common (/simple?) C++ features like namespaces (unless they're dead); isn't this the case?
What are the most unsupported features?
I can easily expect the lack of external templates, maybe template partial specialization and similar features. At most even RTTI or exceptions, but I would never have suspected namespaces.
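(For reference, a minimal sketch of the kind of partial specialization I mean:)

// Primary template: the general case.
template <typename T, typename U>
struct Pair {
    static const bool same = false;
};

// Partial specialization, chosen whenever both arguments are the same type.
// This is one of the C++98 features older compilers reportedly mishandled.
template <typename T>
struct Pair<T, T> {
    static const bool same = true;
};

int main() {
    return Pair<int, int>::same && !Pair<int, char>::same ? 0 : 1;  // returns 0
}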
In my experience, people are just scared of new things and especially things that broke on them once, 20 decades ago. There's no valid reason against using namespaces in anything written during this century.
If you're looking for things to toss out, though: if you happened to be targeting Windows not too long ago, you had to do more than just toss features out of C++ and not use them; you had to use different syntax. Templates come to mind as one of the worst-supported features in VC. They've gotten much better, but still fail sometimes.
Another one that is not supported by that particular compiler (STILL!) is covariant returns under MI: overriding a virtual function so that it returns a pointer to a type derived from the one the base version returned. VC just plain freaks out, and you end up having to write a virtual virtual_xxx() and provide a non-virtual "xxx()" function to replicate the standard behavior.
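A minimal sketch of that workaround (names hypothetical):

struct Base {
    // Non-virtual wrapper presenting the covariant-style interface.
    Base* clone() const { return virtual_clone(); }
    virtual ~Base() {}
protected:
    virtual Base* virtual_clone() const = 0;  // the "virtual_xxx()" part
};

struct Derived : Base {
    // Returns Derived* without relying on the compiler getting
    // covariant returns right under multiple inheritance.
    Derived* clone() const { return static_cast<Derived*>(virtual_clone()); }
protected:
    virtual Base* virtual_clone() const { return new Derived(*this); }
};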
What are the most unsupported features?
Let's abstract away export, which, sadly, was an experiment that failed. Then, by count of compiler users this is probably two-phase lookup, which Visual C++ still doesn't properly implement. And VC has a lot of users.
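For readers unfamiliar with the term, a minimal sketch of what two-phase lookup requires (hypothetical example):

void f(int) {}

template <typename T>
void g(T t) {
    f(t);  // dependent call: completed at instantiation (phase two)
    f(1);  // non-dependent call: must be resolved right here (phase one)
}

void f(double) {}  // a conforming compiler never considers this overload for
                   // the calls in g(); a compiler that delays all lookup to
                   // instantiation time (as old VC++ did) would find it

int main() { g(3.14); }  // conforming behavior: both calls pick f(int)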
Basically anything that was added as part of the standardization process in C++98. That is:
Templates (Not just partial specialization -- I mean all templates)
Exceptions (Often because this one requires runtime support/overhead -- they're often not available on things like microcontrollers)
Namespaces
RTTI
If you go even older, you can add
Most everything in the standard library.
Several old compilers are just plain bad; OTOH, I would strongly recommend not worrying about them. There are already several good, freely available, reasonably standards-compliant compilers for pretty much every platform of which I am aware (mostly via G++).
Namespaces have been around forever. Some of the darker corners of templates, maybe. But namespaces? No way. Even with templates, the vast, vast majority of usage scenarios are fine. Some compilers aren't perfect (they're software), but flat out not supporting a C++03 feature that isn't export? That just doesn't happen. Most compilers extend the language, not reduce it, and they are moving to support C++0x.
RTTI and exceptions are often accused of inefficient implementations, but rarely of non-conforming ones.
If you are programming in the mainstream with a mainstream compiler, then you will usually find everything in the C++03 standard supported (except export, as mentioned by sbi; but then again, nobody used that feature, precisely because it was not supported). The further from the mainstream the compiler is, the fewer features it usually supports, though they are all moving forward (some more slowly than others).
It's only when you get to less-used hardware with its own proprietary compiler that support for features starts to dwindle.
The main example would be mobile devices and their associated compilers (though because of their popularity in the mainstream in recent years, these compilers are getting updated more quickly than before).
A secondary example would be SoC devices. Their compilers are so specific that they are usually in-house compilers, and they only get as much work as the SoC company can afford; as a result they tend to lag a long way (or at least further) behind other compilers.
Value Initialization (C++03 feature) is not properly implemented in MSVC++.
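(To show what that means, a minimal sketch of value-initialization:)

// C++03 value-initialization: the "()" initializer must zero out the
// members of a POD; several MSVC++ releases got cases of this wrong.
struct Point { int x; int y; };

int main() {
    Point p = Point();  // p.x and p.y must both be 0
    int* n = new int(); // *n must be 0
    int result = p.x + p.y + *n;  // a conforming compiler yields 0
    delete n;
    return result;
}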
Qt doesn't use templates much because it is older than templates.

How should the C++ standard be used

I have this classic question of how the C++ Standard (I mean the actual official document of the finalized versions, e.g. C++98, C++03) should be used to learn and teach C++. My idea is only from the point of view of an average C++ user, and not from the point of view of language lawyers or of those who wish to be on the Standards committee, compiler writers, and so on.
Here are my personal thoughts:
a) It is an awful place to start learning C++. Books like "C++ in a Nutshell" and "The C++ Programming Language" do a very good job on that front while closely aligning with the Standard.
b) One needs to refer to the Standard only when
a compiler gives a behavior which is not consistent with what the common books say, or
a certain behavior is inconsistent across compilers, e.g. GCC, VS, Comeau etc. I understand that these compilers are inconsistent only in very few cases / dark corners of the language (e.g. templates, exception handling). However, one really comes to know about the possible differences in compiler behavior only when porting and/or migrating to a different environment, or when there is a compiler upgrade, or
a concept is poorly explained / not explained in the books at hand, e.g. if it is a really advanced concept
Any thoughts/ideas/recommendation on this?
The C++ language standard would be an absolutely terrible place to start learning the language. It is dense, obtuse, and really long. Often the information you are looking for is spread across seven different clauses or hidden in a half of a sentence in a clause completely unrelated to where you think it should be (or worse, a behavior is specified in the sentence you ignored because you didn't think it was relevant).
It does have its uses, of course. To name a few,
If you think you've found a bug in a compiler, it's often necessary to refer to the standard to make sure you aren't just misunderstanding what the specified behavior is.
If you find behavior that is inconsistent between compilers, it's handy to be able to look up which is correct (or which is more correct), though often you'll need to write workarounds regardless.
If you want to know why things are the way they are, it is often a good reference: you can see how different features of the language are related and understand how they interact. Things aren't always clear, of course, but they often are. There are a lot of condensed examples and notes demonstrating and explaining the normative text.
If you reference the C++ standard in a post on Stack Overflow, you get a lot more upvotes. :-)
It's very interesting to learn about the language. It's one thing to write code and stumble through getting things to compile and run. It's another thing altogether to go and try to understand the language as a whole and understand why you have to do things a certain way.
The standard should be used to ensure portability of code.
When writing basic C++ code you shouldn't need to refer to the standard, but when using templates or making advanced use of the STL, reference to the standard is essential for maintaining compatibility with more than one compiler, and forward compatibility with future versions.
I use g++ to compile my C++ programs, and there I use the option -std=c++0x (earlier, -std=c++98) to make sure that my code is always standard compliant. If I get any warning or error regarding standard compliance, I research it to educate myself and fix my code.

To use or not to use C++0x features [duplicate]

Possible Duplicate:
How are you using C++0x today?
I'm working with a team on a fairly new system. We're talking about migrating to MSVC 2010 and we've already migrated to GCC 4.5. These are the only compilers we're using and we have no plans to port our code to different compilers any time soon.
I suggested that after we do it, we start taking advantage of some of the C++0x features already provided like auto. My co-worker suggested against this, proposing to wait "until C++0x actually becomes standard." I have to disagree, but I can see the appeal in the way he worded it. Nevertheless, I can't help but think that this counter-argument comes more out of fear and trepidation of learning C++0x than a genuine concern for standardization.
Given the new state of the system, I want for us to take advantage of the new technology available. Just auto, for instance, would make our daily lives easier (just writing iterator-based for loops until range-based loops come along, e.g.).
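To illustrate the sort of daily win I mean (a hypothetical snippet):

#include <map>
#include <string>
#include <iostream>

int main() {
    std::map<std::string, int> counts;
    counts["example"] = 1;

    // C++98 spells the iterator type out in full:
    //   std::map<std::string, int>::iterator it = counts.begin();
    // C++0x auto deduces it, until range-based for arrives:
    for (auto it = counts.begin(); it != counts.end(); ++it)
        std::cout << it->first << ": " << it->second << "\n";
}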
Am I wrong to think this? It is not as though I'm proposing we radically change our budding codebase, but just start making use of C++0x features where convenient. We know what compilers we're using and have no immediate plans to port (if we ever port the code base, by then surely compilers will be available with C++0x features as well for the target platform). Otherwise it seems to me like avoiding the use of iostreams in 1997 just because the ISO C++ standard was not published yet in spite of the fact that all compilers already provided them in a portable fashion.
If you all agree, could you provide me arguments I could use to strengthen my position? If not, could I get a bit more details on this "until the C++0x is standard" idea? BTW, anyone know when that's going to be?
I'd make the decision on a per-feature basis.
Remember that the standard is really close to completion. All that is left is voting, bugfixing and more voting.
So a simple feature like auto is not going to go away or have its semantics changed. So why not use it?
Lambdas are complex enough that they might have their wording changed and the semantics in a few corner cases fixed up a bit, but on the whole, they're going to behave the way they do today (although VS2010 has a few bugs about the scope of captured variables, MS has stated that they are bugs, and as such may be fixed outside of a major product release).
If you want to play it safe, stay away from lambdas. Otherwise, use them where they're convenient, but avoid the super tricky cases, or just be ready to inspect your lambda usage when the standard is finalized.
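For instance, plain captures like the following (a hypothetical sketch) sit well inside the stable part of the feature; it's the exotic corner cases you would want to re-inspect:

#include <vector>
#include <algorithm>

int main() {
    std::vector<int> v(3, 1);
    int sum = 0;
    // Capturing sum by reference: basic usage, unlikely to change.
    std::for_each(v.begin(), v.end(), [&sum](int x) { sum += x; });
    return sum == 3 ? 0 : 1;  // returns 0
}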
Most features can be categorized like this, they're either so simple and stable that their implementation in GCC/MSVC are exactly how they're going to work in the final standard, or they're tricky enough that they might get a few bugfixes applied, and so they can be used today, but you run the risk of running into a few rough edges in certain border cases.
It does sound silly to avoid C++0x features solely because they're not formalized yet. Avoid the features that you don't trust to be complete, bug-free and stable, but use the rest.
Theoretical but not practical disadvantages of using C++0x:
Makes it harder to port to different compilers.
Not adhering to any published standard.
Practical advantages of using C++0x:
Makes your daily lives easier, hence more productive.
It's a debate between what's theoretically right, and what's practical. If your team has any intent of actually doing something with this code, the practical should outweigh the theoretical tenfold.
One thing you (mostly) don't need to worry about now is features being added or taken away, because the working draft reached "Final Committee Draft" (FCD) back in March. Feature-wise it should be frozen; the standards committee will not accept any more proposals for C++0x.
The downside is that it's still a draft and not finalized yet; the standards committee is in the phase of making corrections and adjustments before finalizing and publishing the ISO standard (the expected release is March 2011). That could mean minor syntactic or semantic/behavioral changes which could make your code fail to compile or work incorrectly once you compile with a compiler that is more standards compliant than the one you were using at the time you wrote the code.
You'll probably have to wait some time for compilers like VC++10 to be updated with any corrections/adjustments made.
We had the exact same problem, so we compromised. We took the C++0x TR1 release and then used only the portions that we knew we wanted. Sounds like a lot of work, but so far it's worked out well. We're using the regex libraries, tuples, and a couple of others. Once the standard is ratified, we'll migrate to full C++0x. This obviously isn't the best solution, but it is one that has worked well for us.
If you intend to make your system open source within a not-too-far future, then that's an argument for not using too many bleeding-edge features. A production system running Debian or Red Hat won't necessarily have a bleeding-edge compiler installed.
You said
if we ever port the code base, by then surely compilers will be available with C++0x features as well for the target platform
but that a compiler exists for a platform doesn't always mean that it's installed/used/wanted, especially on production systems.
If, on the other hand, you intend to do all the compiling yourself, this is not an issue.

Why is it important for C / C++ Code to be compilable on different compilers?

I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable on different compilers.
Without any real-life experience with gcc/g++, it seems to me that it supports every major platform one can imagine, so code that compiles on g++ can run on almost any system. So why would someone bother to have his code run on the MS compiler, the Intel compiler, and others?
I can think of some reasons, too. As the FAQ suggests, I'll try to post them as an answer, as opposed to including them in my own question.
Edit: Conclusion
You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me:
Contributors are much more likely to work on my project, or just to use it, if they can use the compiler of their choice
Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards reinforce one another, so it's a good idea
On the other hand, I still believe that there are other things which are more important, and now I know that sometimes it isn't important at all.
And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
Some reasons from the top of my head:
1) To avoid being locked with a single compiler vendor (open source or not).
2) Compiling code with different compilers is likely to uncover more errors: warnings differ, and different compilers support the Standard to different degrees.
It is good to be compilable on MSVC, because some people may have projects that they build in MSVC that they want to link your code into, without having to set up an entirely different build system.
It is good to be compilable under the Intel compiler, because it frequently compiles faster code.
It is good to be compilable under Clang, because it can give better error messages and provide a better development experience, and it is an easier project to work on than GCC and so may gain additional benefits in the future.
In general, it is good to keep your options open, because there is no one compiler that fits all needs. GCC is a good compiler, and is great for most purposes, but you sometimes need something else.
And even if you're usually only going to be compiling under GCC, making sure your code compiles under other compilers is also likely to help find problems that could prevent it from working with past and future versions of GCC. For instance, if there's something that GCC is less strict about now but later adds checks for, another compiler may catch it in advance, helping you keep your code cleaner. I've found this helpful in the reverse case, where GCC caught more potential problems with warnings than MSVC did (MSVC was the only compiler we needed to support, as we were only shipping on Windows, but we did a partial port to the Mac under GCC in our free time), which allowed me to produce cleaner code than I would have otherwise.
Portability. If you want your code to be accessible to the maximum number of people possible, you have to make it work on the widest range of compilers possible. It's the same idea as making a web site run on browsers other than IE.
Some of it is political. Companies have standards, people have favorite tools, etc. Telling someone that they should use X really puts some people off, and makes the code really inaccessible to others.
Nemanja brings up a good point too: targeting a certain compiler locks you into using it. In the open-source world this might not be as big a problem (although people could just stop developing on the compiler and it becomes obsolete), but what if the company you buy it from discontinues the product, or goes out of business?
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.)
So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept?
Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code.
Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported.
If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used.
Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C.
Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished.
That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
If you are building for different platforms, you will end up using different compilers. Moreover, C++ compilers tend to be always slightly behind the C++ standard, which means they usually change their adherence to it as time passes. If you target the common denominator to all major compilers then the code maintenance cost will be lower.
It's very common for applications (especially open-source applications) that other developers will want to use different compilers. Some would rather use Visual Studio with the MS compiler for development purposes. Some would rather use the Intel compiler for its claimed performance benefits, and such.
So here are the reasons I can think of
if speed is the biggest concern and there is special, highly optimized compiler for some platforms
if you build a library with a C++ interface (classes and templates, instead of just functions). Because of name mangling and other stuff, the library must be compiled with the same compiler as the client code, and if the client wants to use Visual C++, he must be able to compile the lib with it
if you want to support some very rare platform that does not have gcc support
(For me, those reasons are not significant, since I want to build a library that uses C++ internally, but has a C interface.)
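A minimal sketch of that pattern (all names hypothetical): C++ inside, a C interface outside, so the client's compiler and linker never see mangled C++ symbols:

// widget.cpp (hypothetical library source). In a real header, the
// extern "C" block would be wrapped in #ifdef __cplusplus guards.
extern "C" {
    typedef struct Widget Widget;  // opaque handle for C clients
    Widget* widget_create(void);
    void    widget_destroy(Widget* w);
    int     widget_add(Widget* w, int value);
}

// The implementation is free to use C++ internally.
struct Widget {
    int total;
    Widget() : total(0) {}
};

Widget* widget_create(void)              { return new Widget(); }
void    widget_destroy(Widget* w)        { delete w; }
int     widget_add(Widget* w, int value) { return w->total += value; }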
Typically these are the reasons that I've found:
cross-platform (windows, linux, mac)
different developers doing development on different OS's (while not optimal, it does happen - testing usually takes place on the target platform only).
Compiler companies go out of business - or stop development on that language. If you know your program compiles/runs well using another compiler, you've hedged your bets.
I'm sure there are other answers as well, but these are the most common reasons I've run into so far.
Several projects use GCC/G++ as a "day-to-day" compiler for normal use, but every so often will check to make sure their code follows the standards with the Comeau C/C++ compiler. Their website looks like a nightmare, and the compiler isn't free, but it's known as possibly the most standards-compliant compiler around, and will warn you about things many compilers will silently accept or explicitly allow as a nonstandard extension (yes, I'm looking at you, Mr. I-don't-mind-and-actually-actively-support-your-efforts-to-do-pointer-arithmetic-on-void-pointers-GCC).
Compiling every so often with a compiler as strict as Comeau (or, even better, compiling with as many compilers as you can get your hands on) will let you know of errors people might experience when trying to compile your code, things your compiler allows you to do that it shouldn't, and potentially things that other compilers don't allow you to do that you should. Writing ANSI C or C++ should be an important goal for code you intend to use on multiple platforms, and using the most standards-compliant compiler around is a good way to do that.
(Disclaimer: I don't have Comeau, and don't plan on getting it, and can't get it because I'm on OS X. I do C, not C++, so I can actually know the whole language, and the average C compiler is much closer to the C standard than the average C++ compiler to the C++ standard, so it's less of an issue for me. Just wanted to put this in here because this started to look like an ad for Comeau. It should be seen more as an ad for compiling with many different compilers.)
This is one of those "it depends" questions. For open-source code, it's good to be portable to multiple compilers. After all, having people in diverse environments build the code is sort of the point.
But for closed source, this is a lot less important. You never want to unnecessarily tie yourself to a specific compiler. But in most of the places I've worked, compiler portability didn't even make it into the top 10 of things we cared about. Even if you never use anything other than standard C/C++, switching a large code base to a new compiler is a dangerous thing to do. Compilers have bugs. Sometimes your code will have bugs that are benign on one compiler but suddenly become a problem on another.
I remember one transition, where one compiler thought this code was just fine:
for (int ii = 0; ii < n; ++ii) { /* some code */ }
for (int ii = 0; ii < y; ++ii) { /* some other code */ }
The newer compiler, however, complained that ii had been declared twice (under the old pre-standard rule, the scope of a loop variable extended past the end of its loop, so the second declaration counted as a redeclaration), and we had to go through all of our code and declare loop variables before the loop in order to switch.
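The change looked something like this (a reconstructed sketch, not the actual code):

int ii;  // hoisted: declared once, so either scope rule accepts it
for (ii = 0; ii < n; ++ii) { /* some code */ }
for (ii = 0; ii < y; ++ii) { /* some other code */ }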
One place I worked was so careful about unintended side effects of compiler switches, that they checked specific compilers into each source tree, and once the code shipped would only use that one compiler to do updates on that code base - forever.
Another place would try out a new compiler for 6 months to a year before they switched over to it.
I find gcc a slow compiler on Windows (I have nothing to compare it against under Linux). So I (sometimes) want to compile my code under other compilers, just for faster development cycles.
I don't think anyone has mentioned it so far, but another reason may be access to certain platform-specific features: Many operating system vendors have special versions of GCC, or even their own home-grown (or licensed and modified) compilers. So if you want your code to run well on several platforms, you may need to choose the right compiler on each platform. Be that an embedded system, MacOS, Windows etc.
Also, speed may be an issue (both compilation speed and execution speed). Back in the PPC days, GCC produced notoriously slow code on PowerPC CPUs, so Apple put a bunch of engineers on GCC to improve that (GCC was very new for the Mac, and all other PowerPC platforms were small). Platforms that are used less may be optimized less in GCC, so using another compiler that's been written for that platform can be faster.
But as a final summary: While there is ideal value in compiling on several compilers, in practice, this is mainly interesting for cross-platform software (and open-source software, because it often gets made cross-platform fairly quickly, and contributors have it easier if they can use their compiler of choice instead of having to learn a new one). If you need to ship on one platform only, shipping and maintenance are usually much more important than investing in building on several compilers if you're only releasing the builds made with one of them. However, you will want to clearly document any deviations from the standard (GCC-isms, for instance) to make the job of porting easier, should you ever have to do it.
Both the Intel compiler and LLVM are faster than gcc. The real reasons to use gcc are:
Infinite hardware support (with no other compiler can you compile Lego Mindstorms code on your old DEC).
It's cheap.
The best spaghetti optimizer in the business.