Say an OS/kernel is written with C++ in mind and does not "do" any pure C style stuff, but instead exposes the C standard library built upon a full-fledged C++ standard library. Is this possible? If not, why?
PS: I know the C library is "part of C++", but let's say it's internally based on a C++-based implementation.
Small update: It seems I've stirred up a discussion as to what is "allowed" by my rules here. Generally speaking: the C standard library implementation should use C++ everywhere that is possible/Right (tm). I'm mostly thinking of algorithms and of acting on static class objects behind the scenes. I'm not really excluding any language features, but rather trying to put the emphasis on a sane C++ implementation. With regards to the setjmp example: valid C that uses only other C library parts already implemented in C++ (or no other library functions at all) would not violate my "rules". If there is no counterpart in the C++ library, there is no reason to debate its use.
Yes, that is possible. It would be much like one exports a C API from a library written in C++, FORTRAN, assembler or most any other language for that matter.
Actually, C++ has the ability to be faster than C in many ways, thanks to its support for compile-time constructs like expression templates. For this reason, C++ matrix libraries tend to be much better optimised than C ones: fewer temporaries, unrolled loops, etc. With new C++0x features like variadic templates, the printf function, for instance, could be both faster and type-safe compared with a version implemented in C. It may even be able to honor the interfaces of many C constructs and evaluate some of their arguments (like string literals) at compile time.
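As a rough illustration of the variadic-template idea (a minimal sketch, not the actual C++0x proposal; safe_printf is a made-up name):

    #include <iostream>

    // Base case: no arguments left, print the rest of the format string.
    void safe_printf(const char* fmt) {
        std::cout << fmt;
    }

    template <typename T, typename... Rest>
    void safe_printf(const char* fmt, T value, Rest... rest) {
        for (; *fmt; ++fmt) {
            if (fmt[0] == '%' && fmt[1] == '%') {  // literal percent sign
                std::cout << '%';
                ++fmt;
                continue;
            }
            if (fmt[0] == '%' && fmt[1] != '\0') {
                std::cout << value;            // the type is known statically
                safe_printf(fmt + 2, rest...); // recurse on remaining args
                return;
            }
            std::cout << *fmt;
        }
    }

    // safe_printf("x = %d, s = %s\n", 42, "hi");
    // Each argument is streamed with its real type; no format mismatch
    // can silently reinterpret memory the way printf's %d/%s can.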
Unfortunately, many people think C is faster than C++ because many people use OOP to mean that all relations and usage must occur through large inheritance hierarchies, virtual dispatch, etc. That caused some early comparisons to be completely different from what is considered good usage these days. If you use virtual dispatch only where it is appropriate (e.g. for filesystems in the kernel, which build vtables through function pointers and thus basically reimplement C++ in C), you get no pessimisation relative to C, and with all of the new features, can be significantly faster.
Speed is not the only possible improvement; there are places where the implementation would benefit from better type safety. There are common tricks in C (like storing data in void pointers when it must be generic) that break type safety, where C++ can provide strong error checking. This won't always translate through the interfaces to the C library, since those have fixed typing, but it will definitely be of use to the implementers of the library, and it could help in places where more information can be extracted from calls by providing "as-if" interfaces (for instance, an interface that takes a void* might be implemented as a generic interface with a concept check that the argument is implicitly convertible to void*).
I think this would be a great test of the power of C++ over C.
Given that "pure C stuff" has such a large overlap with C++, I fail to see how you'd avoid it entirely in anything, much less an OS kernel. After all, is the + operation "pure C stuff"? :)
That said, you could certainly implement certain C library functions using classes and whatnot. Implement qsort using std::sort? Sure, no problem. Just don't forget your extern "C".
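As a rough sketch of the qsort-on-std::sort idea (my_qsort is a hypothetical name, and a real libc qsort would avoid the heap allocations used here for brevity):

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Sort element indices with std::sort, comparing through the C
    // callback, then permute the caller's bytes into sorted order.
    extern "C" void my_qsort(void* base, std::size_t nmemb, std::size_t size,
                             int (*compar)(const void*, const void*)) {
        if (nmemb == 0 || size == 0) return;
        unsigned char* bytes = static_cast<unsigned char*>(base);

        std::vector<std::size_t> idx(nmemb);
        for (std::size_t i = 0; i < nmemb; ++i) idx[i] = i;
        std::sort(idx.begin(), idx.end(),
                  [&](std::size_t a, std::size_t b) {
                      return compar(bytes + a * size, bytes + b * size) < 0;
                  });

        // Copy elements out in sorted order, then back into the buffer.
        std::vector<unsigned char> out(nmemb * size);
        for (std::size_t i = 0; i < nmemb; ++i)
            std::memcpy(&out[i * size], bytes + idx[i] * size, size);
        std::memcpy(bytes, out.data(), nmemb * size);
    }

The extern "C" gives the function an unmangled C-linkage name, so C callers can link against it while the body uses whatever C++ it likes.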
I see no reason why you couldn't do it, but I also see no reason why someone would use such an implementation. It's going to use a lot more memory, and be at least somewhat slower, than a normal implementation... although it might not be much worse than glibc, whose implementation of stdio is already essentially C++ anyway. (Look up GNU libio... you'll be horrified.)
Kernels like Linux have a very strict ABI, based on syscalls, ioctls, and filesystems, and conforming to quite a few standards (POSIX being the major one). Since the ABI has to be stable, its surface is also limited. It would be a lot of work (particularly since you need a minimally useful kernel as well), but these standards could be implemented in any language.
Edit: You mentioned the libc as well. That is not part of the kernel, and the language of the libc can be entirely unrelated to that of the kernel, thanks to the aforementioned ABI. Unlike the kernel, the libc needs to be C or have a very good FFI for C. C++ with parts in extern "C" would fit the bill.
Is it possible to write the complete C++ standard library (including the STL of course, but self-contained, with only internal dependencies) using only C++? I would imagine containers and <cstdlib> functionality would be doable in terms of chars, bit shifts, for loops, and other byte-level tricks, but things like exceptions and perhaps std::cout and std::cin seem hard to me without some dependency to begin with. Let's say there is a set of OS functions available that are completely implemented in assembly (to avoid any C contamination).
I'm assuming the compiler understands everything from classes and virtual functions to templates and function overloading; these are language-level things and have no place in a library IMHO.
If this has been asked before or is a trivially stupid question, please forgive me. I'm not trying to start a C<->C++ war here, just trying to figure out the limitations of implementing a beast such as the Standard library...
Thanks!
Since pretty much anything written in C can be rewritten fairly easily in C++, you're asking whether assembly code is needed, and the answer is generally no.
Unless we're talking about embedded programming, operating systems have all the necessary file and I/O functionality available through system calls, usually (nowadays) in C format. The library needs to call them, likely through extern "C"{ ... } declarations. The operating system functions are not considered part of the C++ library, and typically aren't exact matches to anything defined in the C++ Standard.
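A minimal sketch of what that boundary might look like (os_write, os_read and the mystd namespace are hypothetical names, assuming some platform-defined syscall interface):

    // The OS entry points the library sits on, declared with C linkage.
    extern "C" {
        long os_write(int fd, const void* buf, unsigned long len);
        long os_read(int fd, void* buf, unsigned long len);
    }

    namespace mystd {
        // A std::cout-like stream would ultimately funnel into os_write.
        inline void put(const char* s, unsigned long n) {
            os_write(1, s, n);   // fd 1: standard output by convention
        }
    }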
To implement a C++ standard library, you would need to be familiar with the language itself, know the OS calls you're going to use, and have the algorithms you're going to use. At that point, it's a relatively straightforward matter of writing the software.
First thing: it doesn't matter whether it's C, C++, or D. Any compiled programming language in the end gives you (mostly) the same kind of assembly/object file.
Second thing: the STL is written in C++; you cannot write a C++ library in C or any other language (well, you can, but I assume we're talking about reasonable solutions). You cannot implement the STL containers in C, because they rely heavily on templates.
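To illustrate (a minimal sketch; fixed_array is a made-up name, not a real STL container):

    // One template definition yields a distinct, fully typed container
    // for every (T, N) combination; C would need macros or void* tricks.
    template <typename T, unsigned N>
    struct fixed_array {
        T data[N];
        T&       operator[](unsigned i)       { return data[i]; }
        const T& operator[](unsigned i) const { return data[i]; }
        unsigned size() const { return N; }
    };

    // fixed_array<int, 8> and fixed_array<double, 3> are separate types
    // the compiler generates on demand.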
GCC generates really nice asm output for exceptions now. I recommend reading about the C++ ABI if you're interested in it.
C++ compilers understand all the C++-specific features really well nowadays. Thanks to advanced code analysis and optimizations, they are able to produce fast executables (see the first paragraph).
I hope that I've at least partly answered your question.
The only piece of C++ that needs assembly is the exception handling. I suppose that it might be doable in C++ if there exist libraries to handle the necessary register and stack management.
Of course, those libraries would then include assembly. There just isn't any other way to do direct register management.
The STL is very heavily reliant on #included header files. Those pretty much have to be C++.
Anything that isn't in one of those header files though could in theory be implemented in C, Ada, Assembly, or the other systems-programming language of your choice. However, you'd probably have to be maintaining two interfaces if you don't make at least the top layer C++.
I'm in the process of developing a software library to be used for embedded systems like an ARM chip or a TI DSP (mostly embedded systems, but it would be nice if it could also be used in a PC environment). Obviously this is a pretty broad range of target systems, so being able to easily port to different systems is a priority. The library will be used for interfacing with specific hardware and running some algorithms.
I am thinking C++ is the best option, over C, because it is much easier to maintain and read. I think the additional overhead is worth it to be able to work in the object-oriented paradigm. If I were writing for a very specific system, I would work in C, but this is not the case.
I'm assuming that these days most compilers for popular embedded systems can handle C++. Is this correct?
Are there any other factors I should consider? Is my line of thinking correct?
If portability is very important for you, especially on an embedded system, then C is certainly a better option than C++. While C++ compilers on embedded platforms are catching up, there's simply no match for the widespread use of C, for which any self-respecting platform has a compliant compiler.
Moreover, I don't think C is inferior to C++ when it comes to interfacing hardware. The amount of abstraction is sufficiently low (i.e. no deep class hierarchies) to make C just as good an option.
There is certainly good support of C++ for ARM. ARM have their own compiler and g++ can also generate EABI compliant ARM code. When it comes to the DSPs, you will have to look at their toolchain to decide what you are going to do. Be aware that the library that comes with a DSP may well not implement the full C or C++ standard library.
C++ is suitable for low-level embedded development and is used in the SymbianOS Kernel. Having said that, you should keep things as simple as possible.
Avoid exceptions, which may demand more library support than is present; therefore use new (std::nothrow) Foo instead of new Foo (see the sketch after this list).
Avoid memory allocations as much as possible and do them as early as possible.
Avoid complex patterns.
Be aware that templates can bloat your code.
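A minimal sketch of the nothrow form mentioned above (Foo and make_foo are made-up names):

    #include <new>

    struct Foo { int x; };

    // new (std::nothrow) returns a null pointer on failure instead of
    // throwing, so no exception machinery is pulled in at the call site.
    Foo* make_foo() {
        Foo* f = new (std::nothrow) Foo;
        if (f == 0) {
            return 0;   // handle allocation failure explicitly
        }
        f->x = 42;
        return f;
    }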
I have seen many complaints that C++ is "bloated" and inappropriate for embedded systems.
However, in an interview with Stroustrup and Sutter, Bjarne Stroustrup mentioned that he'd seen heavily templated C++ code going into (IIRC) the braking systems of BMWs, as well as in missile guidance systems for fighter aircraft.
What I take away from this is that experts of the language can generate sophisticated, efficient code in C++ that is most certainly suitable for embedded systems. However, a "C With Classes"[1] programmer that does not know the language inside out will generate bloated code that is inappropriate.
The question boils down to, as always: in which language can your team deliver the best product?
[1] I know that sounds somewhat derogatory, but let me say that I know an awful lot of these guys, and they churn out an awful lot of relatively simple code that gets the job done.
C++ compilers for embedded platforms are much closer to '83's C with Classes than to the '98 C++ standard, let alone C++0x. For instance, some platforms we use still compile with a special version of gcc built from gcc-2.95!
This means that your library interface will not be able to provide containers/iterators, streams, or other such advanced C++ features. You'll have to stick with simple C++ classes, which can very easily be expressed as a C interface with a pointer to a structure as the first parameter.
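Roughly, that mapping looks like this (a hedged sketch with made-up names; the member function becomes a free function taking the object pointer explicitly):

    // A simple C++ class...
    class Counter {
    public:
        void increment() { ++value_; }
        int  value() const { return value_; }
    private:
        int value_;
    };

    // ...and the equivalent C-style interface: 'this' made explicit.
    extern "C" {
        struct counter { int value; };
        void counter_increment(struct counter* c);
        int  counter_value(const struct counter* c);
    }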
This also means that within your library, you won't be able to use templates to their full power. If you want portability, you will still be restricted to using templates for generic containers, which is, I'm sure you'll admit, only a very tiny part of the power of C++ templates.
C++ has little or no overhead compared to C if used properly in an embedded environment. C++ has many advantages for information hiding, OO, etc. If your embedded processor is supported by gcc in C then chances are it will also be supported with C++.
On the PC, C++ isn't a problem at all -- high quality compilers are extremely widespread and almost every C compiler is directly associated with a C++ compiler that's quite good, though there are a few exceptions such as lcc and the newly revived pcc.
Larger embedded systems like those based on the ARM are generally quite similar to desktop systems in terms of tool chain availability. In fact, many of the same tools available for desktop machines can also generate code to run on ARM-based machines (e.g., lots of them use ports of gcc/g++). There's less variety for TI DSPs (and a greater emphasis on quality of generated code than source code features), but there are still at least a couple of respectable C++ compilers available.
If you want to work with smaller embedded systems, the situation changes in a hurry. If you want to be able to target something like a PIC or an AVR, C++ isn't really much of an option. In theory, you could get (for example) Comeau to produce a custom port that generated code you could compile with that target's C compiler -- but chances are pretty good that even if you did, it wouldn't work out very well. These systems are really just too limited (especially in memory size) for C++ to fit them well.
Depending on what your intended use is for the library, I'd suggest implementing it first as C -- but with a design that keeps in mind how it would be incorporated into a C++ design. Then implement C++ classes on top of and/or alongside the C implementation (there's no reason this step cannot be done concurrently with the first). If your C design is done with a C++ design in mind, it's likely to be as clean, readable and maintainable as the C++ design would be. This is somewhat more work, but I think you'll end up with a library that's useful in more situations.
While you'll find C++ used more and more on various embedded projects, there are still many that restrict themselves to C (and I'd guess this is more often the case than not) - regardless of whether or not the tools support C++. It would be a shame to have a nice library of routines that you could bring to a new project you're working on, but be unable to use them because C++ isn't being used on that particular project.
In general, it's much easier to use a well-designed C library from C++ than the other way around. I've taken this approach with several sets of code including parsing Intel Hex files, a simple command parser, manipulating synchronization objects, FSM frameworks, etc. I'm planning on doing a simple XML parser at some point.
Here's an entirely different C++-vs-C argument: stable ABIs. If your library exports a C ABI, it can be compiled with any compiler that works on the system, because C ABIs are generally platform standards. If your library exports a C++ ABI, it can only be compiled with a matching compiler -- because C++ ABIs are usually not platform standards, and often differ from compiler to compiler and even version to version.
Interestingly, one of the rare exceptions to this is ARM; there's an ARM C++ ABI specification, and all compliant ARM compilers follow it. This is not true on x86; on x86, you're lucky if a C++ library compiled with a 4.1 version of GCC will link correctly with an application compiled with GCC 4.4, and don't even ask about 3.4.6.
Even if you export a C ABI, you can have problems. If your library uses C++ internally, it will then link to libstdc++ for things in the C++ std:: namespace. If your user compiles a C++ application that uses your library, they'll also link to libstdc++ -- and so the overall application gets linked to libstdc++ twice, and their libstdc++ may not be compatible with your libstdc++, which can (or so I understand) lead to odd errors from the intersection of the two. Considerably less likely, but still possible.
All of these arguments only apply because you're writing a library, and they're not showstoppers. But they are things to be aware of.
I am writing a program, more specifically a bootloader, for an embedded system. I am going to use a C library to interact with some of the hardware components and I have the choice of writing it either in C or C++. Is there any reason I should choose one over the other? I do not need the object oriented features of C++ but it does have a stronger type system. Could it have other language features that would make the program more robust? I know some people avoid C++ because it can (but not always) generate large firmware images.
This isn't a particularly straightforward question to answer. It depends on a number of factors including:
How you prefer to layout your code.
Whether there's a C++ compiler available for your target (and any other targets you may wish to use the bootloader on).
How critical the code size is for your application (we're talking about 10% extra maybe, not MB as suggested by another answer).
Personally, I really like classes as a way of laying out my code. Even when writing C code, I'll tend to keep everything in modular files with file-scope static functions "simulating" member functions and (a few) file-scope static variables to "simulate" member variables. Having said that, most of my existing embedded projects (all of which are relatively small scale, up to a maximum of 128kB flash including bootloader, but usually less) have tended to be written in C. Now that I have a C++ compiler though, I'm certainly considering moving to C++.
There are considerable benefits to C++ from simply using references, overloading and templates, even if you don't go as far as classes. Certainly, I'd stop short of using many of the more advanced features, including dynamic memory allocation (new). Then again, I'd avoid dynamic memory allocation (malloc etc.) in embedded C as well, if possible.
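For instance (a minimal sketch of the "C++ without classes" style; set_bit and abs_val are made-up helpers):

    #include <cstdint>

    // A reference parameter: no null pointer to check, no deref syntax.
    inline void set_bit(volatile std::uint32_t& reg, unsigned bit) {
        reg |= (std::uint32_t(1) << bit);
    }

    // Overloads let the compiler pick the right version by type,
    // instead of abs/labs-style name suffixes in C.
    inline int  abs_val(int v)  { return v < 0 ? -v : v; }
    inline long abs_val(long v) { return v < 0 ? -v : v; }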
If you have a C++ compiler (even if it's only g++), it is worth running your code through it just for the additional type checking so that you can reduce the number of problems in your code. The C++ compiler can pick up on a few things that even static analysis tools won't spot.
For a good discussion on many invalid reasons people reject C++, see Dan Saks' article on Embedded.com.
For a boot loader the obvious choice is C, especially on an embedded system. The generated code needs to be close to the metal and very easy to debug, likely by dropping into assembly, which quickly becomes difficult in C++ without care. Also, C toolchains are far more ubiquitous than C++ toolchains, allowing your boot loader to be used on more platforms. Lastly, the generated binaries are typically smaller, and use less memory, when written in C style.
If you don't need to use object orientation, use C. Simple choice there. It's simpler and easier, whilst accomplishing the same task.
Some die hards will disagree, but OO is what makes C++ > C, and vice versa in a lot of circumstances.
I would use C unless there is a specific reason to use C++. For a Bootloader you are not really going to need OO.
Use the simplest tool that will accomplish the job.
Writing a program in C is not the same as writing it in C++. If you only know how to do it in C++, then your choice is C++. For writing a bootloader it will be better to minimize the code, so you will probably have to disable the standard C++ library. If you know how to write it in C, then you should use C -- it is the more common choice for this kind of task.
Most of the previous answers assume that your bootloader is small and simple which is typically the case; however, if it becomes more complex (i.e. you need to be able to load from an Ethernet port, a USB port, or a serial port...you need to validate the code that is being loaded before you wipe out your existing code, etc.) you may want to consider C++.
I have also found that the bootloader and the application typically share some amount of common code so you may also want to consider using the same language as your application to facilitate the code sharing.
The C language is substantially easier to parse than C++. This means a program that is both valid C and valid C++ will compile faster as a C program. Probably not a major concern, but it is just another reason why C++ is probably overkill.
Go with C++ and choose only the language features you need. You still have full control of the output object code as long as you understand the C++ abstractions that you're using.
Use of OO can still run well if you avoid virtual functions. Avoid heavyweight object types that require a lot of copying to pass by value, like std::string. But you can still use features like templates without any real impact on runtime performance.
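For example (a minimal sketch; clamp_to is a made-up helper):

    // The template instantiates to a plain function per type used;
    // the generated code matches a hand-written C version.
    template <typename T>
    inline T clamp_to(T v, T lo, T hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main() {
        int duty = clamp_to(300, 0, 255);   // instantiates clamp_to<int>
        return duty;                        // 255
    }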
Use C with uClibc. It will make your code simpler and reduce its footprint. It can be found at www.uclibc.org.
I got a comment to an answer I posted on a C question, where the commenter suggested the code should be written to compile with a C++ compiler, since the original question mentioned the code should be "portable".
Is this a common interpretation of "portable C"? As I said in a further comment to that answer, it's totally surprising to me, I consider portability to mean something completely different, and see very little benefit in writing C code that is also legal C++.
The current C++ (1998) standard incorporates the C (1989) standard. Setting aside some fine print regarding type safety, that means "good" C89 should compile fine with a C++ compiler.
The problem is that the current C standard is that of 1999 (C99), which is not yet officially part of the C++ standard (AFAIK)(*). That means that many of the "nicer" features of C99 (long long int, stdint.h, ...), while supported by many C++ compilers, are not strictly compliant C++.
"Portable" C means something else entirely, and has little to do with official ISO/ANSI standards. It means that your code does not make assumptions on the host environment. (The size of int, endianess, non-standard functions or errno numbers, stuff like that.)
From a coding style guide I once wrote for a cross-platform project:
Cross-Platform DNA (Do Not Assume)
There are no native datatypes. The only datatypes you might use are those declared in the standard library.
char, short, int and long are of different size each, just like float, double, and long double.
int is not 32 bits.
char is neither signed nor unsigned.
char cannot hold a number, only characters.
Casting a short type into a longer one (int -> long) breaks alignment rules of the CPU.
int and int* are of different size.
int* and long* are of different size (as are pointers to any other datatype).
You do remember the native datatypes do not even exist?
'a' - 'A' does not yield the same result as 'z' - 'Z'.
'Z' - 'A' does not yield the same result as 'z' - 'a', and is not equal to 25.
You cannot do anything with a NULL pointer except test its value; dereferencing it will crash the system.
Arithmetic involving both signed and unsigned types does not work.
Alignment rules for datatypes change randomly.
Internal layout of datatypes changes randomly.
Specific behaviour of over- and underflows changes randomly.
Function-call ABIs change randomly.
Operands are evaluated in random order.
Only the compiler can work around this randomness. The randomness will change with the next release of the CPU / OS / compiler.
For pointers, == and != only work for pointers to the exact same datatype.
<, > work only for pointers into the same array. They work only for chars explicitly declared unsigned.
You still remember the native datatypes do not exist?
size_t (the type of the return value of sizeof) cannot be cast into any other datatype.
ptrdiff_t (the type of the result of subtracting one pointer from another) cannot be cast into any other datatype.
wchar_t (the type of a "wide" character, the exact nature of which is implementation-defined) cannot be cast into any other datatype.
Any ..._t datatype cannot be cast into any other datatype.
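In code, the "Do Not Assume" style looks roughly like this (a minimal sketch; read_be32 is a made-up helper):

    #include <cstdint>

    // Assemble a big-endian 32-bit value byte by byte: no assumptions
    // about host endianness, alignment, or native integer widths.
    std::uint32_t read_be32(const unsigned char* p) {
        return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16)
             | (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
    }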
(*): This was true at the time of writing. Things have changed a bit with C++11, but the gist of my answer holds true.
No. My response Why artificially limit your code to C? has some examples of standards-compliant C99 not compiling as C++; earlier C had fewer differences, but C++ has stronger typing and different treatment of the (void) function argument list.
As to whether there is benefit in making C 'portable' to C++: the particular project referenced in that answer was a virtual machine for a traits-based language, so it doesn't fit the C++ object model, and it has a lot of cases where you pull void* off the interpreter's stack and then convert it to structs representing the layout of built-in object types. To make the code 'portable' to C++ it would have to add a lot of casts, which do nothing for type safety.
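The casting difference in question (a hedged sketch; Obj is a made-up type):

    struct Obj { int tag; };

    void example(void* slot) {
        /* Obj* o = slot; */               // legal C, ill-formed in C++
        Obj* o = static_cast<Obj*>(slot);  // the spelling C++ requires
        o->tag = 1;
    }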
Portability means writing your code so that it compiles and has the same behaviour using different compilers and/or different platforms (i.e. relying on behaviour mandated by the ISO standard(s) wherever possible).
Getting it to compile using a different language's compiler is a nice-to-have (perhaps) but I don't think that's what is meant by portability. Since C++ and C are now diverging more and more, this will be harder to achieve.
On the other hand, when writing C code I would still avoid using "class" as an identifier for example.
No, "portable" doesn't mean "compiles on a C++ compiler", it means "compiles on any Standard comformant C compiler" with consistent, defined behavior.
And don't deprive yourself of, say, C99 improvements just to maintain C++ compatibility.
But as long as maintaining compatibility doesn't tie your hands, if you can avoid using "class" and "virtual" and the like, all the better. If you're writing open source, someone may want to port your code to C++; if you're writing for hire, your company/client may want to port it sometime in the future. Hey, maybe you'll even want to port it to C++ yourself one day.
Being a "good steward" not leaving rubbish around the campfire, is just good karma in whatever you do.
And please, do try to keep incompatibilities out of your headers. Even if your code is never ported, people may need to link to it from C++, so having a header that doesn't use any C++ reserved words is cool.
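The classic pattern for such a header (a sketch with hypothetical mylib names; the __cplusplus guard keeps it legal C and legal C++ at the same time):

    #ifndef MYLIB_H
    #define MYLIB_H

    #ifdef __cplusplus
    extern "C" {        /* only seen by C++ compilers */
    #endif

    /* no C++ reserved words (class, new, this, ...) in the interface */
    int mylib_init(void);
    int mylib_process(const void* data, unsigned long len);

    #ifdef __cplusplus
    }
    #endif

    #endif /* MYLIB_H */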
It depends. If you're doing something that might be useful to a C++ user, then it might be a good idea. If you're doing something that C++ users would never need but that C users might find convenient, don't bother making it C++ compliant.
If you're writing a program that does something a lot of people do, you might consider making it as widely usable as possible. If you're writing an addition to the Linux kernel, you can throw C++ compatibility out the window -- it'll never be needed.
Try to guess who might use your code, and if you think a lot of C++ fans might find your code useful, consider making it C++ friendly. However, if you don't think most C++ programmers would need it (i.e. it's a feature that is already fairly standardized in C++), don't bother.
It is definitely common practice to compile C code using a C++ compiler in order to do stricter type checking. Though there are C-specific tools to do that like lint, it is more convenient to use a C++ compiler.
Using a C++ compiler to compile C code means that you commonly have to surround your includes with extern "C" blocks to tell the compiler not to mangle function names; unless guarded with #ifdef __cplusplus, that is not legal C syntax. Effectively you are using C++ syntax, and your code, which is supposedly C, is actually C++. Also, a tendency to use "C++ conveniences" starts to creep in, like unnamed unions.
If you need to keep your code strictly C, you need to be careful.
FWIW, once a project gains a certain size and momentum, it may well benefit from C++ compatibility: even if it is never going to be ported to C++ directly, there are many modern tools for working with and processing C++ source code.
In this sense, a lack of compatibility with C++, may actually mean that you may have to come up with your own tools to do specific things. I fully understand the reasoning behind favoring C over C++ for some platforms, environments and projects, but still C++ compatibility generally simplifies project design in the long run, and most importantly: it provides options.
Besides, there are many C projects that eventually become so large that they could actually benefit from C++'s capabilities, for example improved support for abstraction and encapsulation using classes with access modifiers.
Look at the Linux kernel or the gcc project, for example; both are basically "C only", yet there are regular discussions in both developer communities about the potential gains of switching to C++.
And in fact, there's currently an ongoing gcc effort (in the FSF tree!) to port the gcc sources into valid C++ syntax (see: gcc-in-cxx for details), so that a C++ compiler can be used to compile the source code.
This was basically initiated by a long term gcc hacker: Ian Lance Taylor.
Initially, this is only meant to provide better error checking, as well as improved compatibility (i.e. once this step is completed, you don't necessarily need a C compiler to compile gcc; you could also just use a C++ compiler, if you happen to be 'just' a C++ developer and that's what you have anyway).
But eventually, this branch is meant to encourage migration towards C++ as the implementation language of gcc, which is a really revolutionary step - a paradigm shift which is being critically perceived by those FSF folks.
On the other hand, it's obvious how severely gcc is already limited by its internal structure, and anything that helps improve this situation should be applauded: getting started contributing to the gcc project is unnecessarily complicated and tedious, mostly due to that complex internal structure, which has already started to emulate many higher-level C++ features using macros and gcc-specific extensions.
Preparing the gcc code base for an eventual switch to C++ is the most logical thing to do (no matter when it's actually done!), and it is actually required for gcc to remain competitive, interesting and plainly relevant. This applies in particular in light of very promising efforts such as llvm, which do not carry all this cruft and complexity with them.
While writing very complex software in C is of course possible, it is made unnecessarily complicated to do so; many projects have plainly outgrown C a long time ago. This doesn't mean that C isn't relevant anymore -- quite the opposite. But given a certain code base and complexity, C simply isn't necessarily the ideal tool for the job.
In fact, you could even argue that C++ isn't necessarily the ideal language for certain projects, but as long as a language natively supports encapsulation and provides means to enforce these abstractions, people can work around these limitations.
Just because it is obviously possible to write very complex software in very low-level languages doesn't mean that we should necessarily do so, or that doing so is really effective in the first place.
I am talking here about complex software with hundreds of thousands lines of code, with a lifespan of several decades. In such a scenario, options are increasingly important.
No; it's a matter of taste.
I hate casting void pointers; it clutters the code for little benefit.
char * p = (char *)malloc(100);
vs
char * p = malloc(100);
And when I write an "object-oriented" C library module, I really like using 'this' as the name of my object pointer; since it is a C++ keyword, the code intentionally will not compile as C++ (such modules are pointless in C++ anyway, given that the STL and other libraries already provide them).
Why do you see little benefit? It's pretty easy to do and who knows how you will want to use the code in future.
No, being compilable by C++ is not a common interpretation of portable. Dealing with really old code, K&R style declarations are highly portable but can't be compiled under C++.
As already pointed out, you may wish to use C99 enhancements. However, I'd suggest you consider all your target users and ensure they can make use of the enhancements. Don't just use variable length arrays etc. because you have the freedom to but only if really justified.
Yes it is a good thing to maintain C++ compatibility as much as possible - other people may have a good reason for needing to compile C code as C++. For instance, if they want to include it in an MFC application they would have to build plain C in a separate DLL or library rather than just being able to include your code in a single project.
There's also the argument that running a compiler in C++ mode may pick up subtle bugs, depending on the compiler, if it applies different optimisations.
AFAIK, all of the code in the classic text The C Programming Language, Second Edition can be compiled with a standard C++ compiler like GCC (g++). If your C code is up to the standards followed in that classic text, good enough: you're ready to compile your C code with a C++ compiler.
Take the instance of the Linux kernel source code, which is mostly written in C with some inline assembler. Compiling the Linux kernel with a C++ compiler is a nightmare, for reasons as small as 'new' being used as a variable name, whereas C++ doesn't allow 'new' as a variable name. I am just giving one example here. Remember that the Linux kernel is portable, and compiles and runs very well on Intel, PPC, SPARC and other architectures. This is just to illustrate that portability has different meanings in the software world. If you want to compile C code with a C++ compiler, you are migrating your code base from C to C++. I see them as two different programming languages, for the obvious reason that C programmers don't much like C++. But I like both of them, and I use both of them a lot. Your C code may be portable, but while you migrate it to C++ you should make sure you follow standard techniques to keep the C++ code portable as well. Read on to see where you'd get those standard techniques.
You have to be very careful porting C code to C++ code, and the next question I'd ask is: why bother, if the C code is portable and running well without any issues? I can't accept manageability as the reason; the Linux kernel, a big C code base, is managed very well.
Always see C and C++ as different programming languages, even though C++ supports C and its basic notion is to always maintain backward compatibility with that wonderful language. If you're not looking at these two languages as different tools, you fall into the land of the popular, ugly C/C++ language wars and make yourself dirty.
Use the following rules when choosing portability:
a) Does your code (C or C++) need to be compiled on different architectures possibly using native C/C++ compilers?
b) Study the C/C++ compilers on the different architectures you wish to run your program on, and plan for code porting. Invest good time in this.
c) As far as possible, try to provide a clean layer of separation between C code and C++ code. If your C code is portable, you just need to write C++ wrappers around that portable C core, again using portable C++ coding techniques (see the sketch after this list).
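A minimal sketch of rule (c), with made-up parser names (the C core stays portable; the C++ layer only wraps it):

    // The portable C core, seen through C linkage.
    extern "C" {
        struct parser;
        struct parser* parser_create(void);
        void           parser_destroy(struct parser* p);
        int            parser_feed(struct parser* p, const char* buf,
                                   unsigned long n);
    }

    // A thin RAII wrapper: object lifetime handled by the C++ layer only.
    class Parser {
    public:
        Parser() : p_(parser_create()) {}
        ~Parser() { parser_destroy(p_); }
        bool feed(const char* buf, unsigned long n) {
            return parser_feed(p_, buf, n) == 0;
        }
    private:
        Parser(const Parser&);            // non-copyable (pre-C++11 style)
        Parser& operator=(const Parser&);
        parser* p_;
    };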
Turn to some good C++ books on how to write portable C++ code. I personally recommend The C++ Programming Language by Bjarne Stroustrup himself, the Effective C++ series by Scott Meyers, and the popular DDJ articles at www.ddj.com.
PS: The Linux kernel example in my post is just to illustrate that portability means different things in software programming; it is not a criticism of the fact that the Linux kernel is written in C rather than C++.