C code and C++ compiler

Does anyone know if there is any performance penalty when C code is compiled with a C++ compiler?
I have C-like code and use the MinGW C++ compiler, with qmake to build the project. If there is a performance gain from switching to a C compiler, I'll have to update the code, since there are some syntax incompatibilities, and I want to know whether it's worth it.
Thanks.

There should be very little performance difference, if any - likely too small to measure - assuming you're using a C and a C++ compiler from the same vendor or collection. Using a C compiler from one vendor and a C++ compiler from another will likely show larger differences, but only because different vendors implement different optimization strategies.
There are a small number of optimization opportunities that C++ enables and that C compilers may not exploit - but again, unless the compilers are from different vendors, the difference will be meaningless, and many collections implement similar optimizations in both their C and C++ frontends.
One partial exception is Microsoft's compiler - to my knowledge, Microsoft has long treated C as an afterthought, and for many years its compiler accepted C only as (roughly) C89.
Note: I am assuming that the code does not use C++-specific features like templates or classes.

Related

Is there a performance degradation / penalty in using pure C library in C++ code?

I saw this link, but I'm not asking about performance degradation for code using extern "C". I mean: without extern "C", is there any "context switching" when using a C library in C++?
Are there any problems when using pure C (not class-wrapped) functions in C++ application?
Both C and C++ are programming language specifications (written in English; see e.g. n1570 for the specification of C11) and say nothing about performance - only about the behavior of programs, i.e. about semantics.
However, you are likely to use a compiler such as GCC or Clang, which brings no performance penalty, because it builds the same kind of internal intermediate representation (e.g. GIMPLE for GCC, LLVM IR for Clang) for both C and C++, and because C and C++ code use compatible ABIs and calling conventions.
In practice, extern "C" doesn't change any calling convention; it disables name mangling. However, its exact influence on the compiler is specific to that compiler. It might (or might not) disable inlining (but consider -flto for link-time optimization in GCC).
Some C compilers (e.g. tinycc) produce code with poor performance. Even GCC or Clang might produce slow code when used with -O0 or without explicitly enabling optimization (e.g. by passing -O1 or -O2 etc.); optimizations are disabled by default.
BTW, C++ was designed to be interoperable with C (and that strong constraint explains most of the deficiencies of C++).
In some cases, genuine C++ code might be slightly faster than corresponding genuine C code. For example, to sort an array of numbers, you'd use std::array and std::sort in genuine C++, and the compare operations in the sort are likely to get inlined. With C code, you'd just use qsort, and each compare goes through an indirect function call (because the compiler does not inline calls through qsort's comparator pointer, even if in theory it could...).
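To make that concrete, here is a minimal sketch (not a benchmark) contrasting the two approaches; with std::sort the comparison is visible to the compiler and can be inlined, while with qsort every comparison goes through the function pointer:

#include <algorithm>
#include <cstdlib>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);               // avoids overflow of x - y
}

void sort_c(int *v, std::size_t n) {
    std::qsort(v, n, sizeof *v, cmp_int);   // indirect call per comparison
}

void sort_cpp(int *v, std::size_t n) {
    std::sort(v, v + n);                    // operator< on int, typically inlined
}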
In some other cases, genuine C++ code might be slightly slower; for example, several (but not all) implementations of ::operator new are simply calling malloc (then checking against failure) but are not inlined.
In practice, there is no penalty in calling C code from C++ code, or C++ code from C code, since the calling conventions are compatible.
The C longjmp facility is probably faster than throwing C++ exceptions, but they don't have the same semantics (see stack unwinding), and longjmp doesn't mix well across C++ code (it does not run destructors).
If you care that much about performance, write your code twice (once in genuine C, once in genuine C++) and benchmark. You are likely to observe only a small difference (a few percent at most) between C and C++, so I would not bother at all (your performance concerns are practically unjustified).
Context switching is a concept related to operating systems and multitasking; it happens to processes running machine code when they are preempted. How that executable was obtained (from a C compiler, a C++ compiler, a Go compiler, an SBCL compiler, or an interpreter of some other language like Perl or bytecode Python) is totally irrelevant, since a context switch can happen at any machine instruction (during interrupts). Read a book like Operating Systems: Three Easy Pieces.
At a basic level, no, you won't see any kind of "switching" performance penalty when calling a C library from C++ code. For example, calling from C++ a C function defined in another translation unit should have approximately the same performance as calling an identical function implemented in C++ (in the same C-like way) in another translation unit.
This is because common implementations of C and C++ compilers ultimately compile the source down to native code, and calling an extern "C" function is efficiently supported using the same type of call that might occur for a C++ call. The calling conventions are usually based on the platform ABI and are similar in either case.
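A minimal sketch of what that looks like (names hypothetical): the C++ side declares the C function with extern "C", and the call compiles down to an ordinary direct call through the platform ABI - there is no translation layer to "switch" through.

// In some C translation unit (clib.c):
//     double c_mean(const double *v, int n) { ... }

extern "C" double c_mean(const double *v, int n);  // C linkage: no name mangling

double mean_of(const double *v, int n) {
    return c_mean(v, n);  // same kind of call instruction as any ordinary C++ call
}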
That basic fact aside, there might still be some performance downsides when calling a C function as opposed to implementing the same function in C++:
Functions implemented in C, declared extern "C", and called from C++ code usually won't be inlined (since by definition they aren't implemented in a header), which inhibits a whole host of possibly very powerful optimizations [0].
Most data types used in C++ code [1] can't be used directly by C code, so if you have, say, a std::string in your C++ code, you'll need to pick a different type to pass it to C code - char * is common but loses the explicitly stored length, which may be slower than a C++ solution (see the sketch at the end of this answer). Many types have no direct C equivalent, so you may be stuck with a costly conversion.
C code uses malloc and free for dynamic memory management, while C++ code generally uses new and delete (and usually prefers to hide those calls behind other classes as much as possible). If you need to allocate memory in one language that will be freed in the other, you have a mismatch: either you call back into the "other" language to do the free, or you make unnecessary copies, etc.
C code often makes heavy use of the C standard library routines, while C++ code usually uses the C++ standard library. Since there is a lot of functional overlap, a mix of C and C++ may have a larger code footprint than pure C++ code, since more C library functions get pulled in [2].
The concerns above apply only when contrasting a pure C++ implementation with a C one, and don't really mean there is a performance degradation when calling C: they really answer the question "Why could writing an application in a mix of C and C++ be slower than pure C++?". Furthermore, the above issues are mostly a concern for very short calls, where the overheads may be significant. If you are calling a lengthy function in C, it is less of a problem. The "data type mismatch" might still bite you, but it can be designed around on the C++ side.
[0] Interestingly, link-time optimization actually allows C functions to be inlined into C++ code, which is a little-mentioned benefit of LTO. Of course, this generally depends on building the C library yourself from source with the appropriate LTO options.
[1] E.g., pretty much anything other than a standard-layout type.
[2] This is at least partially mitigated by the fact that many C++ standard library calls ultimately delegate to C library routines for the "heavy lifting", such as how std::copy calls memcpy or memset when possible, and how most operator new implementations ultimately call malloc.
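The sketch promised above, illustrating the data-type point (the C function here is hypothetical): passing a std::string to a C API means handing over a NUL-terminated buffer, so the length the string already stores is lost and the C side must re-scan for the terminator.

#include <cstddef>
#include <string>

extern "C" std::size_t c_count_words(const char *s);  // hypothetical C API

std::size_t count_words(const std::string &text) {
    // c_str() itself is cheap, but the stored length is discarded at the
    // boundary: the C side has to find the NUL terminator all over again.
    return c_count_words(text.c_str());
}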
C++ has grown and changed a lot since its inception, but by design it is backwards-compatible with C. C++ compilers generally grew out of C compiler codebases and share their backends, including modern features like link-time optimization. Lots of software reliably mixes C and C++ code, both in user code and in the libraries used. I answered a question recently that involved passing a C++ class member function pointer to a C-implemented library function; the poster said it worked for him. So C++ may be more compatible with C than many programmers or users would think.
However, C++ works in many different paradigms that C does not, as it is object-oriented, and implements a whole spectrum of abstractions, new data types, and operators. Certain data types are easily translatable (char * C string to a std::string), while others are not. This section on GNU.org about C++ compiler options may be of some interest.
I would not be too worried about any decline in performance when mixing the two languages. The end user, and even the programmer, would hardly notice any measurable change in performance, unless the code were dealing with heavy layers of data abstraction.

Why is there no dedicated compiler for C or C++?

It seems all compilers can deal with both C and C++, like gcc, msvc...
Is it because these two languages are very much alike?
Actually, GCC (GNU Compiler Collection) has two different front-end drivers, gcc and g++. To compile C++ with the gcc driver, you can use a .cpp extension (or a few others), or pass -x c++. However, this requires extra options (like linking in the C++ standard library).
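For example (a sketch; the exact link errors vary by platform), the file extension alone selects the C++ front end, but only the g++ driver links the C++ runtime automatically:

// hello.cpp
//   g++ hello.cpp            -- compiles and links libstdc++ automatically
//   gcc hello.cpp            -- compiles as C++, but typically fails at link time
//   gcc hello.cpp -lstdc++   -- works: the C++ runtime is linked explicitly
//   gcc -x c++ ...           -- forces C++ regardless of the file extension
#include <iostream>

int main() {
    std::cout << "hello\n";
}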
cl, Microsoft's C++ compiler, does not support modern C. However, it will compile C source files as a variant of C89, and you can specify this explicitly with /TC.
For both, you are correct that there's a lot of shared code, regardless of which front-end is used (GCC also has many more). However, the languages do have significant differences, which are discussed elsewhere (this question among others).
There is no dedicated compiler for C/C++ because there is no such language....
If you are going to write a C++ compiler then you will have to be able to compile C too, so you may as well also provide one.
There may still be some C compilers around that do not have a companion C++ compiler with them though.
No, that's not true. Look at Pelles C, which is a C-only compiler.
The semantics of the core language constructs of C and C++ remain more or less identical, and C++ was designed to add structural elements to C rather than to change or remove existing language features. Therefore, if you go to the trouble of building a C++ compiler, having it also compile C is relatively trivial (at least for ISO C90). C99 diverges from C++ in some significant ways, and some C++ compilers either do not support C99 or include C99 features only as extensions in their C++ compiler.
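A short sketch of that divergence: all of the following is valid C99, and all of it is rejected by a C++98/03 compiler (C++20 later adopted a restricted form of designated initializers).

#include <stdlib.h>

struct point { int x, y; };

void c99_only(int n) {
    struct point p = { .y = 2 };  /* designated initializer: C99, not C++98/03 */
    int vla[n];                   /* variable-length array: C99, never standard C++ */
    int *q = malloc(sizeof *q);   /* implicit void* conversion: fine in C, error in C++ */
    (void)p; (void)vla;
    free(q);
}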
C++ is also highly interoperable with C: for example, C++ wholly includes ISO C90's standard library and can link any C library. C++ libraries can be given a C-linkage-compatible interface for use by C code (although that is often less straightforward than C++ calling C code).
Early C++ tools were not true compilers, but rather C++ translators, which generated C code for compilation by a native C compiler. Comeau C++ still takes this approach in order to support C++ on any target with a C compiler, which is useful in embedded environments where some targets are not well served by C++ tools.
TCC is an example of a C compiler which is not a C++ compiler. Actually compiling C++ is a huge pain; the only reason so many C compilers also support C++ is that there's a pretty big demand for C++.
C++ is a superset of C. I don't know if this is still true, but it at least used to be common for C++ compilers to convert code to C as a first step in compiling.
Edit:
I've always heard that it is a superset. Since GMan says no, I looked at Wikipedia, which says, "C++ is often considered to be a superset of C, but this is not strictly true.[21] Most C code can easily be made to compile correctly in C++, but there are a few differences that cause some valid C code to be invalid in C++, or to behave differently in C++." (See http://en.wikipedia.org/wiki/C%2B%2B for details.) So I stand a bit corrected.
Edit 2:
I've read a bit further in the Wikipedia article. Sounds like this is more accurate: C++ started as C; C++ evolved by adding new features to C. At some point, C++ changed enough to no longer be a pure superset of C. Since then, C has also evolved, and now has some features that C++ doesn't. So they're closely related, but no longer wholly compatible with each other.
No. The LCC compiler is C-only.
GCC has gcc and the g++ components which compile C and C++ code.
clang-llvm has a C front-end. There is an experimental, separate C++ front-end.
IBM's Visual Age is split into xlc and xlC compilers.
Portable C Compiler is C only.
Last time I used it, Labwindows/CVI suite by National Instruments was a C only compiler.
It isn't true; there are several, and there were plenty in the 1980s before C++ compiler products started to appear :-) However, given a C++ compiler, the marginal cost of producing a C compiler out of the same codebase is relatively small, and even going the other way isn't a major increment - at least compared to starting from scratch with either.

Do all C++ compilers generate C code?

Probably a pretty vague and broad question, but do all C++ compilers compile code into C first before compiling them into machine code?
Because C compilers are nearly ubiquitous and available on nearly every platform, a lot of (compiled) languages go through this phase in their development to bootstrap the process.
In the early phases of language development to see if the language is feasible the easiest way to get a working compiler out is to build a compiler that converts your language to C then let the native C compiler build the actual binary.
The trouble with this is that language-specific constructs are lost, and thus potential opportunities for optimization may be missed. So in phase two, most languages get their own dedicated compiler front end that understands language-specific constructs and can provide optimization strategies based on them.
C++ went through phase 1 and phase 2 over two decades ago. So it is easy to find a compiler front end that is dedicated to C++ and generates an intermediate format that is passed directly to a back end. But you can still find versions of C++ that are translated into C (as an intermediate format) before being compiled.
Nope. GCC for example goes from C++ -> assembler. You can see this by using the -S option with g++.
Actually, now that I think about it, I don't think any modern compiler goes to C before assembler.
No. C++ -> C was used only in the earliest phases of C++'s development and evolution. Most C++ compilers today compile directly to assembler or machine code. Borland C++ compiles directly to machine code, for example.
No. This is a myth, based around the fact that a very early version of Stroustrup's work was implemented that way. C++ compilers generate machine code in almost exactly the same way that C compilers do.
As of this writing in 2010, the only C++ compiler that I was aware of that created C code was Comeau*. However, that compiler hasn't been heard from in over 5 years now (2022). There may be one or two more for embedded targets, but it is certainly not a mainstream thing.
* - There's a link to their old website on this WP page. I'd suggest not clicking that unless your computer has all its shots up to date
This is not defined by the standard. Certainly, compiling to C-source is a reasonable way to do it. It only requires the destination platform to have a C-compiler with a reasonable degree of compliance, so it is a highly portable way of doing things.
The downside is speed. Probably compilation speed, and perhaps also execution speed (due to loads of casts, e.g. for virtual functions, that prevent the compiler from optimising fully), will suffer.
Not that long ago there was a company with a very nice C++ compiler doing exactly that. Unfortunately, I do not remember the name of the company, and a short google did not bring the name back. The owner of the company was an active participant in the ISO C++ committee, and you could test your code directly on the homepage, which also had some quite decent resources about C++.
Edit: one of my fellow posters just reminded me. I was talking about Comeau, of course.

Why is it important for C / C++ Code to be compilable on different compilers?

I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable with different compilers.
Without any real-life experience with gcc / g++, it seems to me that it supports every major platform one can imagine, so code that compiles on g++ can run on almost any system. So why would someone bother to have their code run on the MS compiler, the Intel compiler, and others?
I can think of some reasons, too. As the FAQ suggests, I'll post them as an answer, as opposed to including them in my own question.
Edit: Conclusion
You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me:
Contributors are much more likely to work on my project, or just use it, if they can use the compiler of their choice
Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards reinforce one another, so it's a good idea
On the other hand, I still believe that there are other things which are more important, and now I know that sometimes it isn't important at all.
And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
Some reasons from the top of my head:
1) To avoid being locked with a single compiler vendor (open source or not).
2) Compiling code with different compilers is likely to discover more errors: warnings are different and different compilers support the Standard to a different degree.
It is good to be compilable on MSVC, because some people may have projects that they build in MSVC that they want to link your code into, without having to set up an entirely different build system.
It is good to be compilable under the Intel compiler, because it frequently generates faster code.
It is good to be compilable under Clang, because it can give better error messages and provide a better development experience, and it is an easier project to work on than GCC and so may gain additional benefits in the future.
In general, it is good to keep your options open, because there is no one compiler that fits all needs. GCC is a good compiler, and is great for most purposes, but you sometimes need something else.
And even if you're usually only going to be compiling under GCC, making sure your code compiles under other compilers is also likely to help find problems that could prevent your code from working with past and future versions of GCC. For instance, if there's something that GCC is less strict about now but later adds checks for, another compiler may catch it in advance, helping you keep your code cleaner. I've found this helpful in the reverse case, where GCC caught more potential problems with warnings than MSVC did (MSVC was the only compiler we needed to support, as we were only shipping on Windows, but we did a partial port to the Mac under GCC in our free time), which allowed me to produce cleaner code than I would have otherwise.
Portability. If you want your code to be accessible by the maximum number of people possible, you have to make it work on the widest range of compilers possible. It's the same idea as making a web site run on browsers other than IE.
Some of it is political. Companies have standards, people have favorite tools etc. Telling someone that they should use X, really puts some people off, and makes it really inaccessible to others.
Nemanja brings up a good point too: targeting a certain compiler locks you into using it. In the open-source world, this might not be as big a problem (although people could just stop developing on it and it becomes obsolete), but what if the company you buy it from discontinues the product, or goes out of business?
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.)
So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept?
Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code.
Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported.
If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used.
Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C.
Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished.
That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
If you are building for different platforms, you will end up using different compilers. Moreover, C++ compilers always tend to lag slightly behind the C++ standard, which means they usually change their adherence to it as time passes. If you target the common denominator of all major compilers, the code maintenance cost will be lower.
It's very common for applications (especially open-source applications) that other developers will want to use different compilers. Some would rather use Visual Studio with the MS compiler for development. Some would rather use the Intel compiler for its claimed performance benefits, and so on.
So here are the reasons I can think of
if speed is the biggest concern and there is a special, highly optimized compiler for some platform
if you build a library with a C++ interface (classes and templates, instead of just functions). Because of name mangling and other stuff, the library must be compiled with the same compiler as the client code, and if the client wants to use Visual C++, he must be able to compile the lib with it
if you want to support some very rare platform that does not have gcc support
(For me, those reasons are not significant, since I want to build a library that uses C++ internally, but has a C interface.)
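A minimal sketch of that last pattern (all names hypothetical): the public interface is plain C behind an opaque handle, while the implementation is free to use C++ internally. Shown here as a single translation unit for brevity.

#include <string>

/* The C-visible interface (would live in widget.h): */
#ifdef __cplusplus
extern "C" {
#endif
typedef struct widget widget;        /* opaque handle: C callers never see inside */
widget *widget_create(void);
void    widget_destroy(widget *w);
#ifdef __cplusplus
}
#endif

/* The C++ implementation (would live in widget.cpp): */
struct widget { std::string name; };

extern "C" widget *widget_create(void)       { return new widget; }
extern "C" void    widget_destroy(widget *w) { delete w; }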
Typically these are the reasons that I've found:
cross-platform (windows, linux, mac)
different developers doing development on different OS's (while not optimal, it does happen - testing usually takes place on the target platform only).
Compiler companies go out of business - or stop development on that language. If you know your program compiles/runs well using another compiler, you've hedged your bet.
I'm sure there are other answers as well, but these are the most common reasons I've run into so far.
Several projects use GCC/G++ as a "day-to-day" compiler for normal use, but every so often will check to make sure their code follows the standards with the Comeau C/C++ compiler. Their website looks like a nightmare, and the compiler isn't free, but it's known as possibly the most standards-compliant compiler around, and will warn you about things many compilers will silently accept or explicitly allow as a nonstandard extension (yes, I'm looking at you, Mr. I-don't-mind-and-actually-actively-support-your-efforts-to-do-pointer-arithmetic-on-void-pointers-GCC).
Compiling every so often with a compiler as strict as Comeau (or, even better, compiling with as many compilers as you can get your hands on) will let you know of errors people might experience when trying to compile your code, things your compiler allows you to do that it shouldn't, and potentially things that other compilers don't allow you to do that you should. Writing ANSI C or C++ should be an important goal for code you intend to use on multiple platforms, and using the most standards-compliant compiler around is a good way to do that.
(Disclaimer: I don't have Comeau, and don't plan on getting it, and can't get it because I'm on OS X. I do C, not C++, so I can actually know the whole language, and the average C compiler is much closer to the C standard than the average C++ compiler to the C++ standard, so it's less of an issue for me. Just wanted to put this in here because this started to look like an ad for Comeau. It should be seen more as an ad for compiling with many different compilers.)
This is one of those "it depends" questions. For open-source code, it's good to be portable to multiple compilers; after all, having people in diverse environments build the code is sort of the point.
But for closed source, this is a lot less important. You never want to unnecessarily tie yourself to a specific compiler, but in most of the places I've worked, compiler portability didn't even make it into the top 10 of things we cared about. Even if you never use anything other than standard C/C++, switching a large code base to a new compiler is a dangerous thing to do. Compilers have bugs, and sometimes your code will have bugs that are benign on one compiler but suddenly a problem on another.
I remember one transition, where one compiler thought this code was just fine:
for (int ii = 0; ii < n; ++ii) { /* some code */ }
for (int ii = 0; ii < y; ++ii) { /* some other code */ }
The newer compiler complained that ii had been declared twice, so we had to go through all of our code and declare the loop variables before the loops in order to switch.
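The workaround, sketched:

int ii;                                        /* hoisted out of the loops */
for (ii = 0; ii < n; ++ii) { /* some code */ }
for (ii = 0; ii < y; ++ii) { /* some other code */ }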
One place I worked was so careful about unintended side effects of compiler switches, that they checked specific compilers into each source tree, and once the code shipped would only use that one compiler to do updates on that code base - forever.
Another place would try out a new compiler for 6 months to a year before they switched over to it.
I find gcc a slow compiler on Windows (I have nothing to compare against under Linux). So I (sometimes) want to compile my code under other compilers, just for faster development cycles.
I don't think anyone has mentioned it so far, but another reason may be access to certain platform-specific features: Many operating system vendors have special versions of GCC, or even their own home-grown (or licensed and modified) compilers. So if you want your code to run well on several platforms, you may need to choose the right compiler on each platform. Be that an embedded system, MacOS, Windows etc.
Also, speed may be an issue (both compilation speed and execution speed). Back in the PPC days, GCC produced notoriously slow code on PowerPC CPUs, so Apple put a bunch of engineers on GCC to improve that (GCC was very new for the Mac, and all other PowerPC platforms were small). Platforms that are used less may be optimized less in GCC, so using another compiler that's been written for that platform can be faster.
But as a final summary: While there is ideal value in compiling on several compilers, in practice, this is mainly interesting for cross-platform software (and open-source software, because it often gets made cross-platform fairly quickly, and contributors have it easier if they can use their compiler of choice instead of having to learn a new one). If you need to ship on one platform only, shipping and maintenance are usually much more important than investing in building on several compilers if you're only releasing the builds made with one of them. However, you will want to clearly document any deviations from the standard (GCC-isms, for instance) to make the job of porting easier, should you ever have to do it.
Both the Intel compiler and LLVM are faster than gcc. The real reasons to use gcc are
Near-infinite hardware support (on no other compiler can you compile Lego Mindstorms code on your old DEC).
It's cheap.
Best spaghetti optimizer in the business.

Should "portable" C compile as C++?

I got a comment to an answer I posted on a C question, where the commenter suggested the code should be written to compile with a C++ compiler, since the original question mentioned the code should be "portable".
Is this a common interpretation of "portable C"? As I said in a further comment to that answer, it's totally surprising to me, I consider portability to mean something completely different, and see very little benefit in writing C code that is also legal C++.
The current C++ (1998) standard incorporates the C (1989) standard. Setting aside some fine print regarding type safety, that means "good" C89 should compile fine with a C++ compiler.
The problem is that the current C standard is that of 1999 (C99) - which is not yet officially part of the C++ standard (AFAIK)(*). That means that many of the "nicer" features of C99 (long long int, stdint.h, ...), while supported by many C++ compilers, are not strictly compliant.
"Portable" C means something else entirely, and has little to do with official ISO/ANSI standards. It means that your code does not make assumptions on the host environment. (The size of int, endianess, non-standard functions or errno numbers, stuff like that.)
From a coding style guide I once wrote for a cross-platform project:
Cross-Platform DNA (Do Not Assume)
There are no native datatypes. The only datatypes you might use are those declared in the standard library.
char, short, int and long are of different size each, just like float, double, and long double.
int is not 32 bits.
char is neither signed nor unsigned.
char cannot hold a number, only characters.
Casting a short type into a longer one (int -> long) breaks alignment rules of the CPU.
int and int* are of different size.
int* and long* are of different size (as are pointers to any other datatype).
You do remember the native datatypes do not even exist?
'a' - 'A' does not yield the same result as 'z' - 'Z'.
'Z' - 'A' does not yield the same result as 'z' - 'a', and is not equal to 25.
You cannot do anything with a NULL pointer except test its value; dereferencing it will crash the system.
Arithmetic involving both signed and unsigned types does not work.
Alignment rules for datatypes change randomly.
Internal layout of datatypes changes randomly.
Specific behaviour of over- and underflows changes randomly.
Function-call ABIs change randomly.
Operands are evaluated in random order.
Only the compiler can work around this randomness. The randomness will change with the next release of the CPU / OS / compiler.
For pointers, == and != only work for pointers to the exact same datatype.
<, > work only for pointers into the same array. For char operands, they work only on chars explicitly declared unsigned.
You still remember the native datatypes do not exist?
size_t (the type of the value returned by sizeof) cannot be cast into any other datatype.
ptrdiff_t (the type of the result of subtracting one pointer from another) cannot be cast into any other datatype.
wchar_t (the type of a "wide" character, the exact nature of which is implementation-defined) cannot be cast into any other datatype.
Any ..._t datatype cannot be cast into any other datatype.
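One common hedge against the size assumptions above (a sketch; _Static_assert is C11, static_assert is C++11 - older code has to simulate it, e.g. with a negative array size):

#include <stdint.h>

typedef int32_t sample_t;   /* fixed width, instead of assuming sizeof(int) == 4 */

_Static_assert(sizeof(void *) >= 4, "this code assumes pointers of at least 32 bits");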
(*): This was true at the time of writing. Things have changed a bit with C++11, but the gist of my answer holds true.
No. My response Why artificially limit your code to C? has some examples of standards-compliant C99 not compiling as C++; earlier C had fewer differences, but C++ has stronger typing and different treatment of the (void) function argument list.
As to whether there is benefit in making C 'portable' to C++: the particular project referenced in that answer was a virtual machine for a traits-based language, so it doesn't fit the C++ object model, and it has a lot of cases where you pull a void* off the interpreter's stack and then convert it to a struct representing the layout of a built-in object type. To make the code 'portable' to C++, it would have to add a lot of casts, which do nothing for type safety.
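Two of those typing differences, sketched:

int f();                /* C: unspecified arguments; C++: no arguments */

int use(void) {
    return f(1, 2);     /* accepted by a C compiler, an error in C++ */
}

/* Character literals differ in type, too:
   sizeof 'a' == sizeof(int) in C, but == sizeof(char) in C++. */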
Portability means writing your code so that it compiles and has the same behaviour using different compilers and/or different platforms (i.e. relying on behaviour mandated by the ISO standard(s) wherever possible).
Getting it to compile using a different language's compiler is a nice-to-have (perhaps) but I don't think that's what is meant by portability. Since C++ and C are now diverging more and more, this will be harder to achieve.
On the other hand, when writing C code I would still avoid using "class" as an identifier for example.
No, "portable" doesn't mean "compiles on a C++ compiler", it means "compiles on any Standard comformant C compiler" with consistent, defined behavior.
And don't deprive yourself of, say, C99 improvements just to maintain C++ compatibility.
But as long as maintaining compatibility doesn't tie your hands, if you can avoid using "class" and "virtual" and the like, all the better. If you're writing open source, someone may want to port your code to C++; if you're writing for hire, your company/client may want to port it sometime in the future. Hey, maybe you'll even want to port it to C++ yourself one day.
Being a "good steward" (not leaving rubbish around the campfire) is just good karma in whatever you do.
And please, do try to keep incompatibilities out of your headers. Even if your code is never ported, people may need to link to it from C++, so having a header that doesn't use any C++ reserved words is cool.
It depends. If you're doing something that might be useful to a C++ user, then it might be a good idea. If you're doing something that C++ users would never need but that C users might find convenient, don't bother making it C++ compliant.
If you're writing a program that does something a lot of people do, you might consider making it as widely usable as possible. If you're writing an addition to the Linux kernel, you can throw C++ compatibility out the window - it'll never be needed.
Try to guess who might use your code, and if you think a lot of C++ fans might find your code useful, consider making it C++ friendly. However, if you don't think most C++ programmers would need it (i.e. it's a feature that is already fairly standardized in C++), don't bother.
It is definitely common practice to compile C code using a C++ compiler in order to do stricter type checking. Though there are C-specific tools to do that like lint, it is more convenient to use a C++ compiler.
Using a C++ compiler to compile C code means that you commonly have to surround your includes with extern "C" blocks to tell the compiler not to mangle function names. However, extern "C" is not legal C syntax, so it has to be hidden behind #ifdef __cplusplus guards (see the sketch below); otherwise your code, which is supposedly C, is effectively C++. Also, a tendency to use "C++ conveniences" starts to creep in, like unnamed unions.
If you need to keep your code strictly C, you need to be careful.
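The standard way to keep a header legal C while still giving C++ callers C linkage is to hide extern "C" behind the __cplusplus macro - a common pattern, sketched here with a hypothetical library name:

/* mylib.h */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {                /* seen only when compiled as C++ */
#endif

int  mylib_init(void);
void mylib_shutdown(void);

#ifdef __cplusplus
}                           /* end extern "C" */
#endif

#endif /* MYLIB_H */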
FWIW, once a project gains a certain size and momentum, it is quite likely to benefit from C++ compatibility: even if it is not going to be ported to C++ directly, there are many modern tools for working with and processing C++ source code.
In this sense, a lack of compatibility with C++ may actually mean that you have to come up with your own tools to do specific things. I fully understand the reasoning behind favoring C over C++ for some platforms, environments and projects, but C++ compatibility generally simplifies project design in the long run, and most importantly: it provides options.
Besides, there are many C projects that eventually become so large that they may actually benefit from C++'s capabilities, for example improved support for abstraction and encapsulation using classes with access modifiers.
Look at the Linux kernel or the gcc project, for example; both are basically "C only", yet there are regularly discussions in both developer communities about the potential gains of switching to C++.
And in fact, there's currently an ongoing gcc effort (in the FSF tree!) to port the gcc sources into valid C++ syntax (see: gcc-in-cxx for details), so that a C++ compiler can be used to compile the source code.
This was basically initiated by a long term gcc hacker: Ian Lance Taylor.
Initially, this is only meant to provide better error checking, as well as improved compatibility (i.e. once this step is completed, you don't necessarily need a C compiler to compile gcc; you could also just use a C++ compiler, if you happen to be 'just' a C++ developer and that's what you've got anyway).
But eventually, this branch is meant to encourage migration towards C++ as the implementation language of gcc, which is a really revolutionary step - a paradigm shift which is being critically perceived by those FSF folks.
On the other hand, it's obvious how severely gcc is already limited by its internal structure, and anything that helps improve this situation should be applauded: getting started contributing to the gcc project is unnecessarily complicated and tedious, mostly due to that complex internal structure, which has already started to emulate many of the more high-level C++ features using macros and gcc-specific extensions.
Preparing the gcc code base for an eventual switch to C++ is the most logical thing to do (no matter when it's actually done, though!), and it is actually required in order to remain competitive, interesting and plainly relevant. This applies in particular because of very promising efforts such as llvm, which do not bring all this cruft and complexity with them.
While writing very complex software in C is of course often possible, doing so is made unnecessarily complicated; many projects have simply outgrown C a long time ago. This doesn't mean that C isn't relevant anymore - quite the opposite. But given a certain code base and complexity, C simply isn't necessarily the ideal tool for the job.
In fact, you could even argue that C++ isn't necessarily the ideal language for certain projects, but as long as a language natively supports encapsulation and provides means to enforce these abstractions, people can work around these limitations.
Just because it is obviously possible to write very complex software in very low-level languages doesn't mean that we should necessarily do so, or that it is really effective in the first place.
I am talking here about complex software with hundreds of thousands lines of code, with a lifespan of several decades. In such a scenario, options are increasingly important.
No; it's a matter of taste.
I hate casting void pointers; it clutters the code for not much benefit:

char *p = (char *)malloc(100);   /* cast required if this must also compile as C++ */

vs

char *p = malloc(100);           /* idiomatic C: malloc's void* converts implicitly */

And when I write an "object-oriented" C library module, I really like using 'this' as my object pointer name. As it is a C++ keyword, such code would not compile in C++ (intentionally, as modules of this kind are pointless in C++, given that they already exist as such in the STL and other libraries).
Why do you see little benefit? It's pretty easy to do and who knows how you will want to use the code in future.
No, being compilable by C++ is not a common interpretation of "portable". Dealing with really old code, K&R-style declarations are highly portable but can't be compiled under C++.
As already pointed out, you may wish to use C99 enhancements. However, I'd suggest you consider all your target users and ensure they can make use of those enhancements. Don't use variable-length arrays etc. just because you have the freedom to, but only when really justified.
Yes it is a good thing to maintain C++ compatibility as much as possible - other people may have a good reason for needing to compile C code as C++. For instance, if they want to include it in an MFC application they would have to build plain C in a separate DLL or library rather than just being able to include your code in a single project.
There's also the argument that running a compiler in C++ mode may pick up subtle bugs, depending on the compiler, if it applies different optimisations.
AFAIK, all of the code in the classic text The C Programming Language, Second Edition can be compiled with a standard C++ compiler like GCC (g++). If your C code is up to the standard followed in that classic text, then good enough: you're ready to compile your C code with a C++ compiler.
Take the instance of the Linux kernel source code, which is mostly written in C with some inline assembler. Compiling the Linux kernel with a C++ compiler is a nightmare, for reasons as small as 'new' being used as a variable name in the kernel code, whereas C++ doesn't allow 'new' as a variable name. I am just giving one example here. Remember that the Linux kernel is portable, and compiles and runs very well on Intel, PPC, SPARC, and other architectures. This is just to illustrate that portability has different meanings in the software world. If you want to compile C code using a C++ compiler, you are migrating your code base from C to C++. I see them as two different programming languages, for the most obvious reason that C programmers don't like C++ much. But I like both of them, and I use both of them a lot. Your C code is portable, but while you migrate it to C++ you should make sure you follow standard techniques to keep your C++ code portable. Read on to see where you'd get those standard techniques.
You have to be very careful porting C code to C++, and the next question I'd ask is: why would you bother, if the C code is portable and running well without any issues? I can't accept manageability as the reason; again, the Linux kernel, a big C code base, is managed very well.
Always see the two programming languages C and C++ as different languages, even though C++ does support C, and its basic notion is to keep supporting that wonderful language for backward compatibility. If you're not looking at these two languages as different tools, then you fall into the land of the popular, ugly C/C++ language wars and make yourself dirty.
Use the following rules when choosing portability:
a) Does your code (C or C++) need to be compiled on different architectures, possibly using the native C/C++ compilers there?
b) Study the C/C++ compilers on the different architectures you wish to run your program on, and plan for code porting. Invest good time in this.
c) As far as possible try to provide a clean layer of separation between C code & C++ code. If your C code is portable, you just need to write C++ wrappers around that portable C code again using portable C++ coding techniques.
Turn to some good C++ books on how to write portable C++ code. I personally recommend The C++ Programming Language by Bjarne Stroustrup himself, the Effective C++ series by Scott Meyers, and the popular DDJ articles at www.ddj.com.
PS: The Linux kernel example in my post is just to illustrate that portability means different things in software programming; it is not a criticism that the Linux kernel is written in C rather than C++.