In C++, if I write a simple game like Pong on Linux, can that same code be compiled on Windows and OS X? How can I tell where it won't compile?
You have three major portability hurdles.
The first, and simplest, is writing C++ code that all the target compilers understand. Note: this is different from writing to the C++ standard. The problem with "writing to the standard" starts with: which standard? C++98, C++03, TR1, C++11, C++14 or C++17? These are all revisions of C++, and the newer the revision you use, the less likely compilers are to comply with it fully. C++ is very large, and realistically the best you can hope for is C++98 with some C++03 features.
Compilers all add their own extensions, and it's all too easy to use them unknowingly. You would be wise to write to the standard and not to the compiler documentation. Some compilers have a "strict" mode in which they turn off all extensions. You would be wise to do primary development with the compiler that has the most strictures and the best standard compliance. gcc has the -Wstrict family of flags to turn on strict warnings. -ansi will remove extensions which conflict with the standard, and -std=c++98 will tell the compiler to work against the C++98 standard and remove GNU C++ extensions.
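For instance, here is a minimal sketch (the file name is just for illustration) of code that gcc accepts quietly by default but rejects in strict mode, since variable-length arrays are a GNU extension rather than standard C++:

// builds with "g++ vla.cpp"
// fails with "g++ -std=c++98 -pedantic-errors vla.cpp"
int main() {
    int n = 4;
    int buffer[n];  // variable-length array: a GNU extension, not C++98
    buffer[0] = 0;
    return buffer[0];
}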
With that in mind, to remain sane you must restrict yourself to a handful of compilers and only their recent versions. Even writing a relatively simple C library for multiple compilers is difficult. Fortunately, both Linux and OS X use gcc. Windows has Visual C++, but its different versions are more like a squabbling family than a single compiler when it comes to compatibility (with the standard or with each other), so you'll have to pick a version or two to support. Alternatively, you can use one of the gcc-derived compiler environments such as MinGW. Check the list of C++ compilers for compatibility information, but keep in mind that it only covers the latest versions.
Next is your graphics and sound library. It has to be not just cross-platform, it has to look good and be fast on all platforms. These days there are a lot of possibilities; Simple DirectMedia Layer is one. You'll have to choose at what level you want to code. Do you want detailed control? Or do you want an engine to take care of things? There's an existing answer for this so I won't go into details. Be sure to choose one that is dedicated to being cross-platform, not one that just happens to work. Compatibility bugs in your graphics library can sink your project fast.
Finally, there's the simple incompatibilities which exist between the operating systems. POSIX compliance has come a long way, and you're lucky that both Linux and OS X are Unix under the hood, but Windows will always be the odd man out. Things which are likely to bite you mostly have to do with the filesystem. Here's a handful:
Filesystem layout
File path syntax (e.g. C:\foo\bar vs /foo/bar; see the sketch after this list)
Mandatory Windows file locking
Differing file permissions systems
Differing models of interprocess communication (e.g. fork, shared memory, etc.)
Differing threading models (your graphics library should smooth this out)
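As one sketch of dealing with the filesystem differences above (the function names are illustrative, not from any particular library), the usual approach is to confine the #ifdef to a single spot:

#include <string>

// Keep the platform check in one place instead of
// scattering it through the code base.
inline char path_separator() {
#ifdef _WIN32
    return '\\';   // e.g. C:\foo\bar
#else
    return '/';    // e.g. /foo/bar
#endif
}

inline std::string join_path(const std::string& dir, const std::string& file) {
    return dir + path_separator() + file;
}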
There you have it. What a mess, huh? Cross-platform programming is as much a state of mind and statement of purpose as it is a technique. It requires some dedication and extra time. There are some things you can do to make the process less grueling...
Turn on all strictures and warnings and fix them
Turn off all language extensions
Periodically compile and test in Windows, not just at the end
Get a programmer who likes Windows on the project
Restrict yourself to as few compilers as you can
Choose a well maintained, well supported graphics library
Isolate platform specific code (for example, in a subclass)
Treat Windows as a first class citizen
The most important thing is to do this all from the start. Portability is not something you bolt on at the end. Not just your code, but your whole design can become unportable if you're not vigilant.
C++ is ultra portable and has compilers available on more platforms than you can shake a stick at. Languages like Java are typically touted as being massively cross-platform; ironically, they are in fact usually implemented in C++, or C.
That covers "portability". If you actually mean, how cross platform is C++, then not so much: The C++ standard only defines an IO library suitable for console IO - i.e. text based, so as soon as you want develop some kind of GUI, you are going to need to use a GUI framework - and GUI frameworks are historically very platform specific. Windows has multiple "native" GUI frameworks now - the C++ framework made available from Microsoft is still MFC - which wraps the native Win32 API which is a C API. (WPF and WinForms are available to CLR C++).
The Apple Mac's GUI framework is called Cocoa. It is an Objective-C library, but it's easy to access Objective-C from C++ in that development environment.
On Linux there are the GTK+ and Qt frameworks, both of which have actually been ported to Windows and Apple, so either of these C++ frameworks can solve your "how to write a GUI application in C++ once that builds and runs on Windows, Apple Mac and Linux" problem.
Of course, it's difficult to regard Qt as strictly C++ anymore - Qt defines a special markup for signals and slots that requires an extra pre-compile step.
You can read the standard - if a program respects the standard, it should be compilable on all platforms that have a C++ standard-compliant compiler.
As for 3rd party libraries you might be using, the platform availability is usually specified in the documentation.
When the GUI comes into question, there are cross-platform options (such as Qt), but you should probably ask yourself: do I really want portability when it comes to the UI? Sometimes it's better to have the GUI part platform-specific.
If you are thinking of porting from Linux to Windows, using OpenGL for the graphical part gives you the freedom to run your program on both operating systems, as long as you don't use any system-specific functionality.
Compared to C, C++ portability is extremely limited, if not completely nonexistent. For one, you can't disable exceptions (well, you can), because the standard specifically says doing so is undefined behaviour. Many devices don't even support exceptions. So as far as that goes, C++ is zero portable. And given the undefined behaviour, it's obviously a no-go for zero-fail high-performance real-time systems in which exceptions are taboo - undefined behaviour has no place in a zero-fail environment.

Then there's the name mangling, which most, if not every, compiler does completely differently. For good portability and inter-compatibility, extern "C" would have to be used to export symbols, yet this renders any and all namespace information completely void, resulting in duplicate symbols. One can of course choose not to use namespaces and use unique symbol names - yet another C++ feature rendered void.

Then there's the complexity of the language, which results in implementation difficulties in the various compilers for various architectures. Due to these difficulties, true portability becomes a problem. One can solve this with a large chain of compiler directives/#ifdefs/macros. Templates? Not even supported by most compilers.
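As a rough sketch of the name mangling point (the symbol names are illustrative): with C++ linkage two functions named init can coexist in different namespaces, but once you reach for extern "C" to get a stable symbol, the namespace information is gone:

namespace audio { void init(); }  // mangled name differs per compiler
namespace video { void init(); }  // coexists fine under C++ linkage

extern "C" void init();  // portable, unmangled symbol name...
// ...but only one "init" can be exported this way, because the
// namespace no longer appears in the exported symbol.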
What portability? You mean the semi-portability between a couple of mainstream build targets like MSVC for Windows and GCC for Linux? Even there, in that mainstream segment, all the above problems and limitations exist. It's absurd to even think C++ is portable.
In some post on Stack Overflow it was recommended to try to support multiple (in this case C/C++) compilers if feasible, since this forces you to write more standard-compliant code and helps with finding bugs.
So I was looking for additional free C/C++ compilers I could add support for to my project (it is written in a mix of C and C++). I found Open Watcom to be an interesting candidate.
So my question is: what are the advantages and disadvantages of Open Watcom C/C++ compiler in comparison to other ones (for example gcc/g++, Visual C++ etc.)?
There are probably no particular advantages since if portable code is your aim you would generally try to restrict your code to the standard subset implemented by all compilers. I would say lowest common denominator but that may seem somewhat derogatory.
The advantages of one compiler over another generally lie in either the extensions it provides, the libraries it includes, or the performance of the generated code, if portability is your aim, you are probably interested in neither. It is not the advantages of one compiler over another that should interest you in this case, but rather its adherence to and compliance with the ISO standards.
In its earlier commercial incarnation, Watcom was famously one of the best optimising compilers available; I doubt, however, whether it has kept pace with processor development since then (or even the transition from 16-bit to 32-bit x86!).
Its one feature that may be seen as an advantage in some cases is that it supports DOS, OS/2 and Windows, but that is probably only an advantage if legacy systems maintenance is your aim. Efforts to port it to Linux and BSD and processors other than x86 exist but are not complete, while GCC is already there and has been for years.
I would suggest that if you can support GCC and VC++ you probably have sufficient compiler independence (but I recommend you compile with high warning-level settings: -Wall -Werror in GCC and /W4 /WX in VC++). I think that compiler portability is a trivial issue compared with OS portability, and what you really need to consider is cross-platform library support rather than compiler-independent code support.
If however playing with compilers is your thing, also consider the Digital Mars compiler. Like Watcom, this also has commercial compiler heritage, having been the Zortech/Symantec C/C++ compiler in a previous life.
Something Watcom has in its favor, if you're a 'haxxor', is the fact that you can define out-of-the-ordinary calling conventions using #pragma aux. Other than that, I see no reason to even attempt to use such a dated compiler unless you had horrible hardware restrictions. IMO, there are only three to worry about: GCC, ICC and MSVC.
Some people here use expressions having to do with the Watcom (actually OpenWatcom) compiler being "dated." So what does it mean?
It could mean that it doesn't implement the latest C standard. How many "non-dated" compilers do?
It could mean that it doesn't provide frameworks, as it is primarily an environment for C and Fortran, and somewhere far after that comes a C++ implementation which I cannot judge.
It could mean that it cannot generate excellent assembly code from garbage C code.
It could mean that it doesn't support x64 development.
It could mean that the debugger is rudimentary and supports assembly debugging.
Now to what it does do - in addition to supporting 16-bit real and protected mode code:
It produces excellent 32-bit protected mode code in the flat memory model everyone uses for the Win32 environment.
Its code generating capabilities are excellent, and it's right up there at the top with more "non-dated" compilers.
It's easy to tune multi-threaded code using its profiler.
How do you "feel" a compiler? I for one don't know how to do that. Is it how the error messages are written? Is it in the messages on the console log?
The world's greatest network operating system - Novell Netware - had Watcom as its development environment. That says a great deal about Watcom. And lest anyone forget: Netware died due to poor marketing management combined with Redmond foul play. It did not die from lack of technological excellence.
I guess what I'm trying to say is that you guys that don't know what you're talking about should perhaps be a little less eager to write answers.
I know I know it's all about getting those coveted points and badges and what have you. And how you get them is irrelevant, right?
The Open Watcom compiler is somewhat outdated, and it feels like it. It is based on what was, long ago, a good compiler for making MS-DOS games. Currently it is not very standard-compliant and its standard library is in an immature state.
I would prefer more modern and popular compilers like Intel cc, g++, VC++ or Clang. Not sure about Borland C; I haven't tried it in a long time.
Advantages:
it's free
it's open source. You can alter it and its runtime libraries any way you like
it is cross-platform. You can run it, among other platforms, on Windows and Linux. Moreover, you can build programs with it for different platforms, using a single platform
Disadvantages:
it is outdated a bit, but not that much as in the past
Positive
The code and projects are not bloated like the projects in Microsoft Visual Studio/C++ (not hundreds of vcproj and other files and folders). You can just generate a makefile like in GCC (which is easier to understand than the Visual project files...)
Even the installation takes little time (on x64 Win 7), in comparison to the 2+ GB Visual Studio installation...
Compared to GCC, it may seem easier to handle.
Negative
The C library is missing the strn... functions (strndup, strncmpi, etc.) and getopt_long
No ARM support (as of 1 July 2015)
As an editor you should really use Notepad++, not the internal editor
For a language like C++ the existence of a standard is a must. And good compilers try their best (well, most of the good compilers, at least) to comply. Many compilers have language extensions, some of which are allowed by the standard, some of which are not. Of the latter kind, two examples:
gcc's typeof
microsoft's compilers allow a pure virtual function declaration to have both a pure-specifier(=0) and a definition (which is prohibited by the standard - let's not discuss why, that's another topic:)
(there are many other examples)
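For reference, a minimal sketch of example1, gcc's typeof (accepted by g++ as an extension, rejected in strict ISO mode):

int main() {
    int x = 42;
    typeof(x) y = x;  // gcc extension; C++0x spells this decltype(x)
    return y - x;
}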
Both examples are useful in the following sense: example1 is a very useful feature which will be available in C++0x under a different name. example2 is also useful, and microsoft has decided not to respect a ban that made no sense.
And I am grateful that compilers provide language extensions that help us developers in our routine. But here's a question: shouldn't there be an option which, when set, mandates that the compiler be as standards-compliant as it can be, regardless of whether the implementers agree with the standard or not? For example, Visual Studio has such an option, which is called "disable language extensions". But hey, they still allow example2.
I want everyone to understand my question correctly. It is a GREAT thing that MSVC allows example2, and I would very much like that feature to be in the standard. It doesn't break any compliant code, it does nothing bad. It just isn't standard.
Would you like that microsoft disable example2 when disable language extensions is set to true? Note that the words microsoft, example2, etc. are placeholders :)
Why?
Again, just to make sure my question is understood correctly. The crucial point is: should a compiler bother to provide a compliant version of a feature (optionally set in the settings) (within its limits, e.g. I am not talking about export) when it provides a better alternative that is not standard and is perhaps even a superset of the standard, thus not breaking anything?
Standards compliance is important for the fundamental reason that it makes your code easier to maintain. This manifests in a number of ways:
Porting from one version of a compiler to another. I once had to port a 1.2 million-LOC app from VC6 to VC9. VC6 was notorious for being horribly non-compliant, even when it was new. It allowed non-compliant code even on the highest warning levels that the new compiler rejected at the lowest. If the code had been written in a more compliant way in the first place, this project wouldn't (shouldn't) have taken 3 months.
Porting from one platform to another. As you say, the current MS compilers have language extensions. Some of these are shared by compilers on other platforms, some are not. Even if they are shared, the behavior may be subtly different. Writing compliant code, rather than using these extensions, makes your code correct from the word go. "Porting" becomes simply pulling the tree down and doing a rebuild, rather than digging through the bowels of your app trying to figure out why 3 bits are wrong.
C++ is defined by the standard. The extensions used by compilers change the language. New programmers coming online who know C++ but not the dialect your compiler uses will get up to speed more quickly if you write to Standard C++, rather than the dialect that your compiler supports.
First, a reply to several comments. The MS VC extension in question is like this:
struct extension {
    virtual void func() = 0 { /* function body here */ }
};
The standard allows you to implement the pure virtual function, but not "in place" like this, so you have to write it something like this instead:
struct standard {
    virtual void func() = 0;
};

void standard::func() { ; }
As to the original question, yes, I think it's a good idea for the compiler to have a mode in which it follows (and enforces) the standard as accurately as possible. While most compilers have that, the result isn't necessarily as accurate a representation of the standard as you/I would like.
At least IMO, about the only answer to this is for people who care about portability to have (and use) at least a couple of compilers on a regular basis. For C++, one of those should be based on the EDG front-end; I believe it has substantially better conformance than most of the others. If you're using Intel's compiler on a regular basis anyway, that's fine. Otherwise, I'd recommend getting a copy of Comeau C++; it's only $50, and it's the closest thing to a "reference" available. You can also use Comeau online, but if you use it on a regular basis, it's worth getting a copy of your own.
Not to sound like an EDG or Comeau shill or anything, but even if you don't care much about portability, I'd recommend getting a copy anyway -- it generally produces excellent error messages. Its clean, clear error messages (all by themselves) have saved enough time over the years to pay for the compiler several times over.
Edit: Looking at this again, some of the advice is looking pretty dated, especially the recommendation for EDG/Comeau. In the three years since I originally wrote this, Clang has progressed from purely experimental to being quite reasonable for production use. Likewise, the gcc maintainers have (IMO) made great strides in conformance as well.
During the same time, Comeau hasn't released a single new version of their compiler, and there's been a new release of the C++ standard. As a result, Comeau is now fairly out of date with respect to the current standard (and the situation seems to be getting worse, not better -- the committee has already approved a committee draft of a new standard that is likely to become C++14).
As such, although I recommended Comeau at that time, I'd have difficulty (at best) doing so today. Fortunately, most of the advantages it provided are now available in more mainstream compilers -- both Clang and gcc have improved compliance (substantially) as outlined above, and their error messages have improved considerably as well (Clang has placed a strong emphasis on better error messages, almost from its inception).
Bottom line: I'd still recommend having at least two compilers installed and available, but today I'd probably choose different compilers than I did when I originally wrote this answer.
"Not breaking anything" is such a slippery slope in the long run, that it's better to avoid it altogether. My company's main product outlived several generations of compilers (first written in 1991, with RW), and combing through compiler extensions and quiet standards violations whenever it was the time to migrate to a newer dev system took a lot of effort.
But as long as there's an option to turn off or at least warn about 'non-standard extension', I'm good with it.
34, 70, 6.
I would certainly want an option that disables language extensions to disable all language extensions. Why?
All options should do what they say they do.
Some people need to develop portable code, requiring a compiler that only accepts the standard form of the language.
"Better" is a subjective word. Language extensions are useful for some developers, but make things more difficult for others.
I think that it's critical that a compiler provide a standards-only mode if it wants to be the primary one used while developing. All compilers should, of course, compile standards-compliant code, but it's not critical that they don't extend the language if they don't think of themselves as the primary compiler -- for example, a cross-compiler, or a compiler for a less popular platform that is nearly always ported to, rather than targeted.
Extensions are fine for any compiler, but it would be nice if I had to turn them on if I want them. By default, I'd prefer a standards-only compiler.
So, given that, I expect MSVC to be standards-only by default. The same with gcc++.
Stats: 40, 90, 15
I think standards compliance is very important.
I always consider source code to be more for the human readers than for the machine(s). So, to communicate the programmer's intention to the reader, abiding by the standard is like speaking a language of lowest common denominator.
Both at home and work, I use g++, and I have aliased it with the following flags for strict standard compliance.
-Wall -Wextra -ansi -pedantic -std=c++98
Check out this page on Strict ANSI/ISO
I am not a standards expert, but this has served me well. I have written STL-style container libraries which run as-is on different platforms, e.g. 32-bit linux, 64-bit linux, 32-bit solaris, and 32-bit embedded OSE.
Consider indicators on cars (known as "turn signals" in some jurisdictions); they are a reliable way to determine which direction someone's going to turn off a roundabout... until just one person doesn't use them at all. Then the whole system breaks down.
It didn't "hurt anyone" or obviously "break anything" in IE when they allowed document.someId to be used as a shortcut for document.getElementById('someId').... however, it did spawn an entire generation of coders and even books that consequently thought it was okay and right, because "it works". Then, suddenly, the ten million resulting websites were entirely non-portable.
Standards are important for interoperability, and if you don't follow them then there's little point in having them at all.
Standards-compliance hounds may get hated for "pedanticism" but, really, until everybody follows suit you're going to have portability and compatibility problems for ever.
How important standards-compliance is depends on what you are trying to achieve.
If you are writing a program that will never be ported outside of its current environment (especially a program that you're not planning to develop/support for a long time) then it's not very important. Whatever works, works.
If you need your program to remain relevant for a long time, and be easily portable to different environments, than you will want it to be standards compliant, since that's the only way to (more or less) guarantee that it will work everywhere.
The trick, of course, is figuring out which situation you are actually in. It's very common to start a program thinking it is a short-term hack, and later on find that it's so useful that you're still developing/maintaining it years later. In that situation your life will be much less unpleasant if you didn't make any short-sighted design decisions at the beginning of the program's lifetime.
I'm in the process of developing a software library to be used for embedded systems like an ARM chip or a TI DSP (mostly embedded systems, but it would also be nice if it could be used in a PC environment). Obviously this is a pretty broad range of target systems, so being able to easily port to different systems is a priority. The library will be used for interfacing with specific hardware and running some algorithms.
I am thinking C++ is the best option, over C, because it is much easier to maintain and read. I think the additional overhead is worth it for being able to work in the object oriented paradigm. If I was writing for a very specific system, I would work in C but this is not the case.
I'm assuming that these days most compilers for popular embedded systems can handle C++. Is this correct?
Are there any other factors I should consider? Is my line of thinking correct?
If portability is very important for you, especially on an embedded system, then C is certainly a better option than C++. While C++ compilers on embedded platforms are catching up, there's simply no match for the widespread use of C, for which any self-respecting platform has a compliant compiler.
Moreover, I don't think C is inferior to C++ where it comes to interfacing hardware. The amount of abstraction is sufficiently low (i.e. no deep class hierarchies) to make C just as good an option.
There is certainly good support of C++ for ARM. ARM have their own compiler and g++ can also generate EABI compliant ARM code. When it comes to the DSPs, you will have to look at their toolchain to decide what you are going to do. Be aware that the library that comes with a DSP may well not implement the full C or C++ standard library.
C++ is suitable for low-level embedded development and is used in the SymbianOS Kernel. Having said that, you should keep things as simple as possible.
Avoid exceptions, which may demand more library support than is present (therefore use new (std::nothrow) Foo instead of new Foo; a short sketch follows this list).
Avoid memory allocations as much as possible and do them as early as possible.
Avoid complex patterns.
Be aware that templates can bloat your code.
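A minimal sketch of the nothrow form mentioned in the first point above (the struct and function names are illustrative): allocation failure is reported through a null pointer instead of a thrown std::bad_alloc, so no exception machinery is needed:

#include <new>  // std::nothrow

struct Foo { int value; };

Foo* make_foo() {
    Foo* p = new (std::nothrow) Foo;  // returns 0 on failure, never throws
    if (p == 0) {
        return 0;  // handle the failure without exception support
    }
    p->value = 0;
    return p;
}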
I have seen many complaints that C++ is "bloated" and inappropriate for embedded systems.
However, in an interview with Stroustrup and Sutter, Bjarne Stroustrup mentioned that he'd seen heavily templated C++ code going into (IIRC) the braking systems of BMWs, as well as in missile guidance systems for fighter aircraft.
What I take away from this is that experts of the language can generate sophisticated, efficient code in C++ that is most certainly suitable for embedded systems. However, a "C With Classes"[1] programmer that does not know the language inside out will generate bloated code that is inappropriate.
The question boils down to, as always: in which language can your team deliver the best product?
[1] I know that sounds somewhat derogatory, but let me say that I know an awful lot of these guys, and they churn out an awful lot of relatively simple code that gets the job done.
C++ compilers for embedded platforms are much closer to 1983's C with Classes than to 1998's C++ standard, let alone C++0x. For instance, some platforms we use still compile with a special version of gcc made from gcc 2.95!
This means that your library interface will not be able to provide interfaces with containers/iterators, streams, or other such advanced C++ features. You'll have to stick with simple C++ classes that can very easily be expressed as a C interface with a pointer to a structure as the first parameter.
This also means that within your library, you won't be able to use templates to their full power. If you want portability, you will still be restricted to the generic-container use of templates, which is, I'm sure you'll admit, only a very tiny part of the power of C++ templates.
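A minimal sketch of that kind of interface (all names are illustrative): the C++ class hides behind an opaque struct, and every function takes the handle as its first parameter:

/* counter.h -- the C-compatible interface */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct counter counter;  /* opaque handle for the client */

counter* counter_create(void);
void     counter_increment(counter* c);
int      counter_value(const counter* c);
void     counter_destroy(counter* c);

#ifdef __cplusplus
}
#endif

// counter.cpp -- implemented as a plain C++ class behind the handle
#include "counter.h"

struct counter {
    counter() : value(0) {}
    int value;
};

counter* counter_create(void)            { return new counter; }
void     counter_increment(counter* c)   { ++c->value; }
int      counter_value(const counter* c) { return c->value; }
void     counter_destroy(counter* c)     { delete c; }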
C++ has little or no overhead compared to C if used properly in an embedded environment. C++ has many advantages for information hiding, OO, etc. If your embedded processor is supported by gcc in C then chances are it will also be supported with C++.
On the PC, C++ isn't a problem at all -- high quality compilers are extremely widespread and almost every C compiler is directly associated with a C++ compiler that's quite good, though there are a few exceptions such as lcc and the newly revived pcc.
Larger embedded systems like those based on the ARM are generally quite similar to desktop systems in terms of tool chain availability. In fact, many of the same tools available for desktop machines can also generate code to run on ARM-based machines (e.g., lots of them use ports of gcc/g++). There's less variety for TI DSPs (and a greater emphasis on quality of generated code than source code features), but there are still at least a couple of respectable C++ compilers available.
If you want to work with smaller embedded systems, the situation changes in a hurry. If you want to be able to target something like a PIC or an AVR, C++ isn't really much of an option. In theory, you could get (for example) Comeau to produce a custom port that generated code you could compile with that target's C compiler -- but chances are pretty good that even if you did, it wouldn't work out very well. These systems are really just too limited (especially in memory size) for C++ to fit them well.
Depending on what your intended use is for the library, I think I'd suggest implementing it first as C - but the design should keep in mind how it would be incorporated into a C++ design. Then implement C++ classes on top of and/or along side of the C implementation (there's no reason this step cannot be done concurrently with the first). If your C design is done with a C++ design in mind, it's likely to be as clean, readable and maintainable as the C++ design would be. This is somewhat more work, but I think you'll end up with a library that's useful in more situations.
While you'll find C++ used more and more on various embedded projects, there are still many that restrict themselves to C (and I'd guess this is more often the case than not) - regardless of whether or not the tools support C++. It would be a shame to have a nice library of routines that you could bring to a new project you're working on, but be unable to use them because C++ isn't being used on that particular project.
In general, it's much easier to use a well-designed C library from C++ than the other way around. I've taken this approach with several sets of code including parsing Intel Hex files, a simple command parser, manipulating synchronization objects, FSM frameworks, etc. I'm planning on doing a simple XML parser at some point.
Here's an entirely different C++-vs-C argument: stable ABIs. If your library exports a C ABI, it can be compiled with any compiler that works on the system, because C ABIs are generally platform standards. If your library exports a C++ ABI, it can only be compiled with a matching compiler -- because C++ ABIs are usually not platform standards, and often differ from compiler to compiler and even version to version.
Interestingly, one of the rare exceptions to this is ARM; there's an ARM C++ ABI specification, and all compliant ARM compilers follow it. This is not true on x86; on x86, you're lucky if a C++ library compiled with a 4.1 version of GCC will link correctly with an application compiled with GCC 4.4, and don't even ask about 3.4.6.
Even if you export a C ABI, you can have problems. If your library uses C++ internally, it will then link to libstdc++ for things in the C++ std:: namespace. If your user compiles a C++ application that uses your library, they'll also link to libstdc++ -- and so the overall application gets linked to libstdc++ twice, and their libstdc++ may not be compatible with your libstdc++, which can (or so I understand) lead to odd errors from the intersection of the two. Considerably less likely, but still possible.
All of these arguments only apply because you're writing a library, and they're not showstoppers. But they are things to be aware of.
I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable on different compilers.
Without any real-life experience with gcc/g++, it seems to me that it supports every major platform one can imagine, so code that compiles on g++ can run on almost any system. So why would someone bother to have their code run on the MS compiler, the Intel compiler and others?
I can think of some reasons, too. As the FAQ suggests, I'll try to post them as an answer, as opposed to including them in my own question.
Edit: Conclusion
You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me:
Contributors are much more likely to work on my project, or just use it, if they can use the compiler of their choice
Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards reinforce each other, so it's a good idea
On the other hand, I still believe that there are other things which are more important, and now I know that sometimes it isn't important at all.
And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
Some reasons from the top of my head:
1) To avoid being locked with a single compiler vendor (open source or not).
2) Compiling code with different compilers is likely to discover more errors: warnings are different and different compilers support the Standard to a different degree.
It is good to be compilable on MSVC, because some people may have projects that they build in MSVC that they want to link your code into, without having to set up an entirely different build system.
It is good to be compilable under the Intel compiler, because it frequently compiles faster code.
It is good to be compilable under Clang, because it can give better error messages and provide a better development experience, and it is an easier project to work on than GCC and so may gain additional benefits in the future.
In general, it is good to keep your options open, because there is no one compiler that fits all needs. GCC is a good compiler, and is great for most purposes, but you sometimes need something else.
And even if you're usually only going to be compiling under GCC, making sure your code compiles under other compilers is also likely to help find problems that could prevent your code from working with past and future versions of GCC, for instance, if there's something that GCC is less strict about now, but later adds checks for, another compiler may catch in advance, helping you keep your code cleaner. I've found this helpful in the reverse case, where GCC caught more potential problems with warnings than MSVC did (MSVC is the only compiler we needed to support, as we were only shipping on Windows, but we did a partial port to the Mac under GCC in our free time), which allowed me to produce cleaner code than I would have otherwise.
Portability. If you want your code to be accessible by the maximum number of people possible, you have to make it work on the widest range of compilers possible. It's the same idea as making a web site run on browsers other than IE.
Some of it is political. Companies have standards, people have favorite tools etc. Telling someone that they should use X, really puts some people off, and makes it really inaccessible to others.
Nemanja brings up a good point too: targeting a certain compiler locks you into using it. In the open source world, this might not be as big of a problem (although people could just stop developing on it and it becomes obsolete), but what if the company you buy it from discontinues the product, or goes out of business?
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.)
So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept?
Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code.
Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported.
If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used.
Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C.
Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished.
That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
If you are building for different platforms, you will end up using different compilers. Moreover, C++ compilers tend to be always slightly behind the C++ standard, which means they usually change their adherence to it as time passes. If you target the common denominator to all major compilers then the code maintenance cost will be lower.
It's very common for applications (especially open-source application) that other developers would desire to use different compilers. Some would rather be using Visual Studio with MS Compiler for development purposes. Some would rather use Intel compiler for claimed performance benefits and such.
So here are the reasons I can think of
if speed is the biggest concern and there is special, highly optimized compiler for some platforms
if you build a library with a C++ interface (classes and templates, instead of just functions). Because of name mangling and other stuff, the library must be compiled with the same compiler as the client code, and if the client wants to use Visual C++, he must be able to compile the lib with it
if you want to support some very rare platform that does not have gcc support
(For me, those reasons are not significant, since I want to build a library that uses C++ internally, but has a C interface.)
Typically these are the reasons that I've found:
cross-platform (windows, linux, mac)
different developers doing development on different OSes (while not optimal, it does happen - testing usually takes place on the target platform only).
Compiler companies go out of business - or stop development on that language. If you know your program compiles/runs well using another compiler, you've covered your bet.
I'm sure there are other answers as well, but these are the most common reasons I've run into so far.
Several projects use GCC/G++ as a "day-to-day" compiler for normal use, but every so often will check to make sure their code follows the standards by compiling with the Comeau C/C++ compiler. Their website looks like a nightmare, and the compiler isn't free, but it's known as possibly the most standards-compliant compiler around, and it will warn you about things many compilers will silently accept or explicitly allow as a nonstandard extension (yes, I'm looking at you, Mr. I-don't-mind-and-actually-actively-support-your-efforts-to-do-pointer-arithmetic-on-void-pointers GCC).
Compiling every so often with a compiler as strict as Comeau (or, even better, compiling with as many compilers as you can get your hands on) will let you know of errors people might experience when trying to compile your code, things your compiler allows you to do that it shouldn't, and potentially things that other compilers don't allow you to do that you should. Writing ANSI C or C++ should be an important goal for code you intend to use on multiple platforms, and using the most standards-compliant compiler around is a good way to do that.
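For example, a minimal sketch of the void-pointer arithmetic mentioned above: gcc treats sizeof(void) as 1 and accepts the arithmetic as an extension, while a strict ISO compiler (or gcc with -pedantic-errors) rejects it:

int main(void) {
    char buffer[16];
    void* p = buffer;
    void* q = p + 4;  // not standard C or C++: arithmetic on void*
    return (q != p) ? 0 : 1;
}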
(Disclaimer: I don't have Comeau, and don't plan on getting it, and can't get it because I'm on OS X. I do C, not C++, so I can actually know the whole language, and the average C compiler is much closer to the C standard than the average C++ compiler to the C++ standard, so it's less of an issue for me. Just wanted to put this in here because this started to look like an ad for Comeau. It should be seen more as an ad for compiling with many different compilers.)
This one of those "It depends" questions. For open source code, it's good to be portable to multiple compilers. After all having people in diverse environments build the code is sort of the point.
But for closed source, this is a lot less important. You never want to unnecessarily tie yourself to a specific compiler, but in most of the places I've worked, compiler portability didn't even make it into the top 10 of things we cared about. Even if you never use anything other than standard C/C++, switching a large code base to a new compiler is a dangerous thing to do. Compilers have bugs. Sometimes your code will have bugs that are benign on one compiler, but suddenly a problem on another.
I remember one transition, where one compiler thought this code was just fine:
for (int ii = 0; ii < n; ++ii) { /* some code */ }
for (int ii = 0; ii < y; ++ii) { /* some other code */ }
The newer compiler complained that ii had been declared twice, so we had to go through all of our code and declare loop variables before the loops in order to switch.
One place I worked was so careful about unintended side effects of compiler switches, that they checked specific compilers into each source tree, and once the code shipped would only use that one compiler to do updates on that code base - forever.
Another place would try out a new compiler for 6 months to a year before they switched over to it.
I find gcc a slow compiler on windows (nothing to compare against under linux). So I (sometimes) want to compile my code under other compilers, just for faster development cycles.
I don't think anyone has mentioned it so far, but another reason may be access to certain platform-specific features: Many operating system vendors have special versions of GCC, or even their own home-grown (or licensed and modified) compilers. So if you want your code to run well on several platforms, you may need to choose the right compiler on each platform. Be that an embedded system, MacOS, Windows etc.
Also, speed may be an issue (both compilation speed and execution speed). Back in the PPC days, GCC produced notoriously slow code on PowerPC CPUs, so Apple put a bunch of engineers on GCC to improve that (GCC was very new for the Mac, and all other PowerPC platforms were small). Platforms that are used less may be optimized less in GCC, so using another compiler that's been written for that platform can be faster.
But as a final summary: While there is ideal value in compiling on several compilers, in practice, this is mainly interesting for cross-platform software (and open-source software, because it often gets made cross-platform fairly quickly, and contributors have it easier if they can use their compiler of choice instead of having to learn a new one). If you need to ship on one platform only, shipping and maintenance are usually much more important than investing in building on several compilers if you're only releasing the builds made with one of them. However, you will want to clearly document any deviations from the standard (GCC-isms, for instance) to make the job of porting easier, should you ever have to do it.
Both the Intel compiler and LLVM are faster than gcc. The real reasons to use gcc are:
Infinite hardware support (no other compiler lets you compile LEGO Mindstorms code on your old DEC).
it's cheap
best spaghetti optimizer in the business.